code stringlengths 38 801k | repo_path stringlengths 6 263 |
|---|---|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 8 - Data Wrangling: Join, Combine, and Reshape
# ## 8.2 Combining and Merging Datasets
import pandas as pd
# - Merging dataframes using `df1.merge(df2)` or `pd.merge(df1, df2)` with parameters
# - Using different flavours of `how` when merging, including `left`, `right`, `inner` and `outer`
# - using parameters when merging to determine columns (and indices) to merge on, incluing `on`, `left_on`, `right_on`, `left_index=True` and `right_index=True`
# - modifying columns when column names overlap using `lsuffix` and `rsuffix`
# - Merging on indices using `df1.join(df2)`
# - Stacking `df`s below each other using `pd.concat([df1, df2])` (or horizontally using `axis=1` parameter)
# - Overlaying 2 `df`s together to fill in missing values using `df1.combine_first(df2)`
# Read from df and some data preparation. Note the year for each of the df
df = pd.read_csv('dataset-C-enrolment.csv')
cols_for_analysis = ['year', 'sex', 'course', 'graduates']
df_f = df[df.sex=='F']
df_f = df_f[cols_for_analysis]
df_f = df_f.head(8)
display(df_f)
df_mf = df[df.sex=='MF']
df_mf = df_mf[cols_for_analysis]
df_mf = df_mf.tail(8)
display(df_mf)
# Rename columns
df_f1 = df_f.copy()[['year', 'graduates']]
_ = df_f1.rename(columns={'graduates' : 'graduates_f'}, inplace=True)
df_mf1 = df_mf.copy()[['year', 'graduates']]
_ = df_mf1.rename(columns={'graduates' : 'graduates_mf'}, inplace=True)
display(df_f1)
display(df_mf1)
# Database-style merging either uses the `df1.merge(df2)` syntax or `pd.merge(df1, df2)` syntax. It is always good to specify the common columns to merge on, using the `on` parameter.
merged_df1 = df_f1.merge(df_mf1, on='year')
display(merged_df1)
merged_df2 = pd.merge(df_f1, df_mf1, on='year')
display(merged_df2)
# If the columns to merge on are different, specify them respectively using `left_on` and `right_on`.
# Using `how='left'` will keep all values of the joining column on the 1st `df`. Using `how='right'` will keep all keys on the 2nd `df`.
merged_df3 = pd.merge(df_f1, df_mf1, on='year', how='left')
display(merged_df3)
merged_df4 = pd.merge(df_f1, df_mf1, on='year', how='right')
display(merged_df4)
# Using `how='outer'` will keep all values of the joining column on both `df`s.
merged_df5 = pd.merge(df_f1, df_mf1, on='year', how='outer')
display(merged_df5)
# When the column names are common across both `df`s, then the suffix will change for each `df` after the merging step.
pd.merge(df_f, df_mf, on='year')
df_f2 = df_f.copy()
display(df_f2)
df_mf2 = df_mf.copy()
# Setting the index of a df
df_mf2 = df_mf2.set_index('year')
display(df_mf2)
# To merge using a column on one `df` and the index of another, use `left_on`, `right_on`, `left_index` and `right_index` respectively.
# Merge using column on left df and index on right df. Hence, left_on and right_index are used
merged_4 = df_f2.merge(df_mf2, left_on='year', right_index=True)
display(merged_4)
# Note that it is possible to merge on 2 or more columns.
# If the common column in both `df`s are the index columns, consider using `.join()`.
df_f2.index = df_f2['year']
display(df_f2)
display(df_mf2)
df_f2.join(df_mf2, lsuffix='_f', rsuffix='_mf')
# Using `pd.concat(df1, df2)` to stack both `df`s
pd.concat([df_f, df_mf])
# Data preparation: make a copy and set the index accordingly.
df_f3, df_mf3 = df_f.copy(), df_mf.copy()
df_f3.index=df_f3['year']
df_f3 = df_f3[['sex', 'graduates']]
df_f3.rename(columns={'graduates' : 'graduates_f'}, inplace=True)
df_mf3.index = df_mf3['year']
df_mf3 = df_mf3[['sex', 'graduates']]
df_mf3.rename(columns={'graduates' : 'graduates_mf'}, inplace=True)
display(df_f3)
display(df_mf3)
# Note that you can also perform a `concat()` operation horizontally. In this case, use `axis=1`. Rows with common columns will be stacked together horizontally.
pd.concat([df_f3, df_mf3], axis=1)
df_wines1, df_wines2 = pd.read_csv('dataset-D3-wines.csv'), pd.read_csv('dataset-D4-wines.csv')
display(df_wines1)
display(df_wines2)
# Another way of combining is "overlaying" one `df` onto another. Then, the second `df` will be used to fill the missing values in the first `df` if it is missing.
df_wines1.combine_first(df_wines2)
# (Note that there is another function called `combine()` and that requires using a function to determine priority of values)
# **References:**
#
# Python for Data Analysis, 2nd Edition, McKinney (2017)
| 08-2-Data-Wrangling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python playground
# language: python
# name: playground
# ---
# +
# %matplotlib inline
import numpy as np
import quantum as q
import matplotlib.pyplot as plt
from math import sqrt
from tqdm import tqdm, trange
plt.style.use('seaborn')
# -
runs = 10**6
d = 2
# sample states from haar measure, count the number of negativity increased events
# +
neg_init = []
neg_final = []
count = 0
for i in trange(runs):
psi = np.zeros(d**2, dtype = complex)
psi[0] = 1
psi = psi / np.linalg.norm(psi)
psi = q.randu(d**2)@psi
rho = np.outer(psi, psi.conj())
U = q.randu(d**2)
rho_post = (
U.conj().transpose()
@np.diag((U@rho@U.conj().transpose()).diagonal())
@U
)
neg_rho = q.negativity(rho, [d, d], [0, 1])
neg_rho_post = q.negativity(rho_post, [d, d], [0, 1])
neg_init.append(neg_rho)
neg_final.append(neg_rho_post)
if (neg_rho_post > neg_rho):
count += 1
print(f'p {count/runs}')
# -
# plot initial negativity against final negativity, along with proposed upper bound
plt.plot(neg_init, neg_final, '.', alpha = 0.1)
plt.plot(np.linspace(0, 0.5, runs), np.linspace(0.25, 0.5, runs), '-')
plt.xlabel('$N_i$', fontsize = 14)
plt.ylabel('$N_f$', fontsize = 14)
plt.xlim(0, 0.5)
plt.ylim(0, 0.5)
| plot_negativity_increase.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + tags=["parameters"]
"""
Update Parameters Here
"""
COLLECTION_NAME = "Quaks"
CONTRACT = "0x07bbdaf30e89ea3ecf6cadc80d6e7c4b0843c729"
CHAIN = "eth"
"""
Optional parameters
"""
KEEP_ALL_DATA = False # set to TRUE to keep the raw JSON on disk
MAX_RESULTS = 100 # max results per request
TIME_DELTA = 1 # time to wait between successful calls
TIME_DELTA_2 = 5 # time to wait after API throttling message
# +
"""
A Moralis API Key is required.
The free tier includes one and is enough to get minting data.
AVAILABLE CHAINS:
eth, ropsten, rinkeby, goerli, kovan,
polygon, mumbai, bsc, bsc testnet,
avalanche, avalanche testnet, fantom
"""
import os
import requests
import json
import time
import pandas as pd
from pandas import json_normalize
from honestnft_utils import config
from honestnft_utils import constants
def get_mintdata(
COLLECTION_NAME,
CONTRACT,
CHAIN,
):
RARITY_CSV = f"{config.RARITY_FOLDER}/{COLLECTION_NAME}_raritytools.csv"
print(f"Rarity data loaded from: {RARITY_CSV}")
RARITY_DB = pd.read_csv(RARITY_CSV)
headers = {"Content-type": "application/json", "x-api-key": config.MORALIS_API_KEY}
print(f"Getting minting data for {COLLECTION_NAME}")
more_results = True
page = 1
start_time = time.time()
all_data = list() # empty list to store data as it comes
cursor = ""
while more_results:
if cursor == "":
url = "https://deep-index.moralis.io/api/v2/nft/{}/transfers?chain={}&format=decimal&limit={}".format(
CONTRACT, CHAIN, MAX_RESULTS
)
else:
url = "https://deep-index.moralis.io/api/v2/nft/{}/transfers?chain={}&format=decimal&limit={}&cursor={}".format(
CONTRACT, CHAIN, MAX_RESULTS, cursor
)
response = requests.get(url, headers=headers)
response_data = response.json()
if response.status_code == 200:
cursor = response_data["cursor"]
PATH = (
f"{config.MINTING_FOLDER}/{COLLECTION_NAME}/{page * MAX_RESULTS}.json"
)
# add new data to existing list
all_data.extend(response_data["result"])
page += 1
# if results in this response is less than MAX_RESULTS then it's the last page
if len(response_data["result"]) < MAX_RESULTS:
more_results = False
else:
time.sleep(TIME_DELTA)
elif response.status_code in [429, 503, 520]:
print(
f"Got a {response.status_code} response from the server. Waiting {TIME_DELTA_2} seconds and retrying"
)
time.sleep(TIME_DELTA_2)
else:
print(f"status_code = {response.status_code}")
print("Received a unexpected error from Moralis API. Closing process.")
print(response.json())
more_results = False
cursor = response.json()["cursor"] if "cursor" in response.json() else None
more_results = cursor != ""
# Save full json data to one master file
if KEEP_ALL_DATA:
folder = f"{config.MINTING_FOLDER}/{COLLECTION_NAME}/"
if not os.path.exists(folder):
os.mkdir(folder)
PATH = f"{config.MINTING_FOLDER}/{COLLECTION_NAME}/{COLLECTION_NAME}.json"
with open(PATH, "w") as destination_file:
json.dump(all_data, destination_file)
df = json_normalize(all_data)
# remove non minting rows
df = df.loc[df["from_address"] == constants.MINT_ADDRESS]
# make sure token_id is an integer
df["token_id"] = df["token_id"].astype(int)
# add rarity rank to minting data
df = df.merge(RARITY_DB, left_on="token_id", right_on="TOKEN_ID")
# discard unwanted columns
df = df[
[
"transaction_hash",
"to_address",
"token_id",
"from_address",
"Rank",
"block_timestamp",
]
]
df.drop_duplicates(subset=["token_id"], inplace=True)
# get matching columns names to HonestNFT csv format
df.columns = ["txid", "to_account", "TOKEN_ID", "current_owner", "rank", "time"]
# clean 'time' field to make it compatible with the csv produced by 'find_minting_data.ipynb'
df["time"] = df["time"].str.replace(".000Z", "")
df.to_csv(f"{config.MINTING_FOLDER}/{COLLECTION_NAME}_minting.csv")
print("--- %s seconds ---" % (round(time.time() - start_time, 1)))
print("finished")
get_mintdata(COLLECTION_NAME, CONTRACT, CHAIN)
| fair_drop/find_minting_data_from_moralis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Conditional-like execution and masking
#
# This notebook shows how DALI arithmetic expressions can be used to achieve conditional-like application of augmentations and be used for some of the masking operations.
# ## Conditional results
#
# We will create a Pipeline that will use DALI arithmetic expressions to conditionally augment images. Since DALI does not support conditional or partial execution, we have to emulate this behavior by [multiplexing](https://en.wikipedia.org/wiki/Multiplexer) - i.e. all transforms are applied to all inputs, but only the result of one of them is propagated to the output and others are rejected based on some condition.
#
# Keep in mind that all possible inputs to our multiplexing operation will still be calculated by DALI.
#
# ### Imports
#
# Let's start with the necessary imports.
from nvidia.dali.pipeline import Pipeline
import nvidia.dali.ops as ops
import nvidia.dali.types as types
from nvidia.dali.types import Constant
# ### Operators used explicitly
# We don't need to explicitly list arithmetic operators that we want to use in the Pipeline constructor. They work as regular Python operators in the `define_graph` step.
#
# As for the rest of Operators, our Pipeline will use FileReader to provide us with input images. We also need an ImageDecoder to decode the loaded images.
#
# We will use `CoinFlip` as a source for the random conditions. We will cast the result to bool, so it will play nicely with the type promotion rules.
#
# As an example augmentation, we will apply the `BrightnessContrast` Operator. We choose quite extreme parameters, so it will clearly show in the output.
# ### The graph with custom augmentation
#
# Let's proceed to `define_graph`. We start with typical load & decode approach.
# Next we apply the augmentation. We keep handles to both tensors, unaugmented `imgs` and augmented `imgs_adjusted`.
#
# We also need the `condition` - output of `CoinFlip` casted to bool.
#
# ## The multiplexing operation
#
# Now we want to calculate output `out` that is an equivalent to:
#
# ```
# for idx in range(batch_size):
# if condition[idx]:
# out[idx] = imgs_adjusted[idx]
# else:
# out[idx] = imgs[idx]
#
# ```
#
# We can transform the condition to an arithmetic expression:
# ```
# out = condition * imgs_adjusted + (not condition) * imgs
# ```
# when the condition is true we multiply the `imgs_adjusted` by `True` value (thus keeping it), while when it is `False` the multiplication yields `0`. Multiplying some numerical type by boolean keeps the numerical type. To implement the `else` branch, we need to negate the `condition` and do a similar multiplication. Then we just need to add them together.
#
# Due to Python operator limitations, negating the boolean condition is implemented as a bitwise `xor` operation with boolean constant `True`.
#
# We return the output of the multiplexing operation, the original images, and `CoinFlip` values so we can easily visualize the results.
class MuxPipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id):
super(MuxPipeline, self).__init__(batch_size, num_threads, device_id, seed=42)
self.input = ops.FileReader(device="cpu", file_root="../../data/images", file_list="../../data/images/file_list.txt")
self.decode = ops.ImageDecoder(device="cpu", output_type=types.RGB)
self.bool = ops.Cast(dtype=types.DALIDataType.BOOL)
self.rng = ops.CoinFlip()
self.bricon = ops.BrightnessContrast(brightness=3, contrast=1.5)
def define_graph(self):
input_buf, _ = self.input()
imgs = self.decode(input_buf)
imgs_adjusted = self.bricon(imgs)
condition = self.bool(self.rng())
neg_condition = condition ^ True
out = condition * imgs_adjusted + neg_condition * imgs
return out, imgs, condition
# ### Multiplexing as a helper function
#
# To clean things up we can wrap the multiplexing operation in a helper function called `mux`.
#
# Note that the inputs to `mux` need to allow for the specified element-wise expression. In our case, the condition is a batch of Tensors representing scalars and the corresponding elements of the `True` and `False` cases have matching shapes.
# +
def mux(condition, true_case, false_case):
neg_condition = condition ^ True
return condition * true_case + neg_condition * false_case
class MuxPipeline2(Pipeline):
def __init__(self, batch_size, num_threads, device_id):
super(MuxPipeline2, self).__init__(batch_size, num_threads, device_id, seed=42)
self.input = ops.FileReader(device="cpu", file_root="../../data/images", file_list="../../data/images/file_list.txt")
self.decode = ops.ImageDecoder(device="cpu", output_type=types.RGB)
self.bool = ops.Cast(dtype=types.DALIDataType.BOOL)
self.rng = ops.CoinFlip()
self.bricon = ops.BrightnessContrast(brightness=3, contrast=1.5)
def define_graph(self):
input_buf, _ = self.input()
imgs = self.decode(input_buf)
imgs_adjusted = self.bricon(imgs)
condition = self.bool(self.rng())
out = mux(condition, imgs_adjusted, imgs)
return out, imgs, condition
# -
# ### Running the pipeline
#
# Let's create an instance of the Pipeline and build it. We will use `batch_size = 5` so we can observe that some of the output images are augmented and some are not.
pipe = MuxPipeline2(batch_size = 5, num_threads=1, device_id=0)
pipe.build()
# We will use a simple helper function to show the images. It takes the three outputs from out pipeline, puts the output of multiplexing in left columnt, the original images on the right and asigns proper captions.
# +
import matplotlib.pyplot as plt
import numpy as np
def display(augmented, reference, flip_value = None, cpu = True):
data_idx = 0
fig, axes = plt.subplots(len(augmented), 2, figsize=(15, 15))
for i in range(len(augmented)):
img = augmented.at(i) if cpu else augmented.as_cpu().at(i)
ref = reference.at(i) if cpu else reference.as_cpu().at(i)
if flip_value:
val = flip_value.at(i)[0] if cpu else flip_value.as_cpu().at(i)[0]
else:
val = True
axes[i, 0].imshow(np.squeeze(img))
axes[i, 1].imshow(np.squeeze(ref))
axes[i, 0].axis('off')
axes[i, 1].axis('off')
axes[i, 0].set_title("Image was augmented" if val else "Image was not augmented")
axes[i, 1].set_title("Original image")
# -
# Now, we will run and display the results. You can play this cell several times to see the result for different images.
(output, reference, flip_val) = pipe.run()
display(output, reference, flip_val)
# ## Generating masks with comparisons and bitwise operations
#
# Let's extend our pipeline to operate using some more complex logical conditions. We will use comparison operators to build masks representing regions where the image has low and high pixel intensities.
#
# We will use bitwise **OR** operation to build a mask representing union of this regions. As the values in mask are boolean, the bitwise `|`, `&` `^` operations can be used in similar fashion as their logical counterparts.
#
# As DALI arithmetic expressions are elementwise and specific channel values can vary a lot, we will calculate the masks on gray images, so we will get one value per pixel and duplicate the information to a 3-channel mask, so the shape of image and mask will match. For this we need two `ColorSpaceConversion` Operators, one handling RGB->Gray conversion and the second Gray->RGB.
#
# We will aply brightening and darkening to specified regions using similar approach as before with multiplexing.
#
# ## Comparison operators
#
# DALI allows to use all Python comparison operators directly. The Tensors that will be obtained from comparison contain boolean values.
#
# Creating 1-channel masks for low and high intensities amouts to writing `imgs_gray < 30` and `imgs_gray > 230`.
#
# Note that to convert the resulting boolean mask to 3-channel one, we need to cast it to `uint8` so the `ColorSpaceConversion` Operator will work. Unfortunately that might give some overhead and in practice may not be the most efficient way to calculate custom masks. If you need additional performance see the "Create a custom operator" to read about creating custom operators.
# +
def not_(mask):
return True ^ mask
class MasksPipeline(Pipeline):
def __init__(self, batch_size, num_threads, device_id):
super(MasksPipeline, self).__init__(batch_size, num_threads, device_id, seed=42)
self.input = ops.FileReader(device="cpu", file_root="../../data/images", file_list="../../data/images/file_list.txt")
self.decode = ops.ImageDecoder(device="cpu", output_type=types.RGB)
self.bool = ops.Cast(dtype=types.DALIDataType.BOOL)
self.uint8 = ops.Cast(dtype=types.DALIDataType.UINT8)
self.rng = ops.CoinFlip()
self.brighter = ops.BrightnessContrast(brightness=3)
self.darker = ops.BrightnessContrast(brightness=0.75)
self.gray = ops.ColorSpaceConversion(image_type=types.RGB, output_type=types.GRAY)
self.rgb = ops.ColorSpaceConversion(image_type=types.GRAY, output_type=types.RGB)
def expand_mask(self, mask):
return self.bool(self.rgb(self.uint8(mask)))
def define_graph(self):
input_buf, _ = self.input()
imgs = self.decode(input_buf)
imgs_gray = self.gray(imgs)
imgs_bright = self.brighter(imgs)
imgs_dark = self.darker(imgs)
mask_low = self.expand_mask(imgs_gray < 30)
mask_high = self.expand_mask(imgs_gray > 230)
mask_other = not_(mask_low | mask_high)
out = mask_low * imgs_bright + mask_high * imgs_dark + mask_other * imgs
return out, imgs, mask_other * Constant(255).uint8()
# -
mask_pipe = MasksPipeline(batch_size = 5, num_threads=1, device_id=0)
mask_pipe.build()
# We will adjust our display function so in addition to original and augmented images we can also see the masks that we obtained.
def display2(augmented, reference, mask, cpu = True):
data_idx = 0
fig, axes = plt.subplots(len(augmented), 3, figsize=(15, 15))
for i in range(len(augmented)):
img = augmented.at(i) if cpu else augmented.as_cpu().at(i)
ref = reference.at(i) if cpu else reference.as_cpu().at(i)
m = mask.at(i) if cpu else mask.as_cpu().at(i)
axes[i, 0].imshow(np.squeeze(img))
axes[i, 1].imshow(np.squeeze(ref))
axes[i, 2].imshow(np.squeeze(m))
axes[i, 0].axis('off')
axes[i, 1].axis('off')
axes[i, 2].axis('off')
axes[i, 0].set_title("Augmented image")
axes[i, 1].set_title("Reference decoded image")
axes[i, 2].set_title("Calculated mask")
(output, reference, mask) = mask_pipe.run()
display2(output, reference, mask)
| docs/examples/general/expressions/expr_conditional_and_masking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# from keras.layers import Input, Dense
# from keras.models import Model
# from keras import backend
import tensorflow as tf
import keras
import pandas as pd
import io
import requests
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn import metrics
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Layer
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.python.eager.context import context, EAGER_MODE, GRAPH_MODE
# from keras.models import Sequential
# from keras.layers import Dense, Activation
# from keras.callbacks import EarlyStopping, ModelCheckpoint
# -
loaded_x_train = pd.read_csv("./Dataset/x_train.csv")
loaded_y_train = pd.read_csv("./Dataset/y_train.csv")
loaded_x_test = pd.read_csv("./Dataset/x_test.csv")
loaded_y_test = pd.read_csv("./Dataset/y_test.csv")
print(loaded_x_train.shape)
print(loaded_x_test.shape)
print(loaded_y_train.shape)
print(loaded_y_test.shape)
loaded_x_train_numpy = np.array(loaded_x_train)
loaded_x_test_numpy = np.array(loaded_x_test)
loaded_y_train_numpy = np.array(loaded_y_train)
loaded_y_test_numpy = np.array(loaded_y_test)
from qiskit import QuantumRegister, QuantumCircuit, ClassicalRegister
import qiskit
from qiskit.aqua.operators import Z
from qiskit.aqua.operators import StateFn
quantumRegister = QuantumRegister(4)
quantumCircuit = QuantumCircuit(quantumRegister)
backend = qiskit.Aer.get_backend("qasm_simulator")
operatorZ = Z ^ Z ^ Z ^ Z ^ Z ^ Z ^ Z ^ Z
from qiskit import IBMQ
from qiskit.providers.ibmq import least_busy
provider = IBMQ.load_account()
device = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 8 and
not x.configuration().simulator and x.status().operational==True))
print(device)
# +
import time
def quantum_layer(initial_parameters, rotation_parameters):
# expecting parameters to be a numpy array
quantumRegister = QuantumRegister(8)
quantumCircuit = QuantumCircuit(quantumRegister)
backend = qiskit.Aer.get_backend("qasm_simulator")
index = 0
quantumCircuit.h(range(8))
for i in range(len(initial_parameters)):
quantumCircuit.ry(initial_parameters[i] * np.pi, i)
while (index <= 7):
parameter_1 = rotation_parameters[index]
qubit_one = index
index = index + 1
parameter_2 = rotation_parameters[index]
qubit_two = index
index = index + 1
quantumCircuit.cry(2 * parameter_1 * np.pi, qubit_one, qubit_two)
quantumCircuit.cry(2 * parameter_2 * np.pi, qubit_two, qubit_one)
psi = StateFn(quantumCircuit)
expectationZ = (~psi @ operatorZ @ psi).eval()
# expectationZ = psi.adjoint().compose(operatorZ).compose(psi).eval().real
print(expectationZ)
# expectations = [200* np.abs(np.real(expectationZ)), 400*np.abs(np.real(expectationZ)), 600*np.abs(np.real(expectationZ)),
# 800 * np.abs(np.real(expectationZ)), 1000 * np.abs(np.real(expectationZ)), 300 * np.abs(np.real(expectationZ)),
# 500 * np.abs(np.real(expectationZ)), 700 * np.abs(np.real(expectationZ)) ]
quantumCircuit.measure_all()
result = qiskit.execute(quantumCircuit, backend, shots=1000).result()
counts = result.get_counts(quantumCircuit)
del(quantumCircuit)
del(quantumRegister)
# print("Exp input {input}, w {w}, expectation {expectation}".format(input=np.mean(initial_parameters), w=np.mean(
# rotation_parameters), expectation=np.mean(expectation*10000)
# ))
return counts
def quantum_operation(initial_parameters, rotation_parameters):
final_output = []
for i in range(initial_parameters.shape[0]):
pred = quantum_layer(initial_parameters[i], rotation_parameters[i])
final_output.append(list(pred))
return final_output
# -
print(type(operatorZ))
from qiskit.quantum_info.analysis.average import average_data
counts = quantum_layer([0.2, 0.2, 0.1, 0.6, 0.7, 0.5, 0.1, 0.8], [0.2, 0.2, 0.1, 0.6, 0.7, 0.5, 0.1, 0.8])
average_data(counts, [Z, Z, Z, Z, Z, Z, Z, Z])
operatorZ
| QML_cybersec/quantum_ids.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Computing the nonadiabatic couplings in Kohn-Sham and excited states bases in extended tight-binding framework
#
# In this tutorial, we will start computing the nonadiabatic couplings (NACs) from the molecular orbital overlap files obtained in [step2](../../7_step2_cp2k/2_xTB). The NACs will be computed in Kohn-Sham states but only in single-partcile (SP) excited state basis, not many-body (MB) states, because we did not perform TD-DFT. Finally we will plot the excited states energies vs time and NAC map.
#
# ## Table of contents
# <a name="toc"></a>
# 1. [Importing needed libraries](#import)
# 2. [Overview of required files](#required_files)
# 3. [Computing the NACs](#comp_nacs)
# 3.1. [Kohn-Sham basis](#KS)\
# 3.2. [Excited state basis](#excited_states)
# 4. [Plotting the results](#plotting)\
# 4.1. [NAC distribution](#nac_dist)\
# 4.2. [Energy vs time](#ene_time)\
# 4.3. [NAC map](#nac_map)\
# 4.4. [Average partial density of states](#ave_pdos)
# - 4.4.1. [Plot pDOS for all atoms angular momentums](#ave_pdos_1)
# - 4.4.2. [Plot pDOS for atoms with no angular momentum component](#ave_pdos_2)
#
# ### A. Learning objectives
#
# * To be able to compute the NACs in Kohn-Sham and excited state basis
# * To be able to plot the NACs distribution
# * To be able to plot the computed excited states energies vs time
# * To be able to plot the NAC map
# * To be able to plot the average partial density of states
#
# ### B. Use cases
#
# * [Computing the NACs](#comp_nacs)
# * [Plotting the results](#plotting)
#
#
# ### C. Functions
#
# - `libra_py`
# - `data_stat`
# - `cmat_distrib`
# - `workflows`
# - `nbra`
# - [`step3`](#comp_nacs)
# - [`run_step3_ks_nacs_libint`](#KS)
# - [`run_step3_sd_nacs_libint`](#excited_states)
# - `units`
# - `au2ev`
#
# ## 1. Importing needed libraries <a name="import"></a>
# [Back to TOC](#toc)
#
# Since the data are stored in sparse format using `scipy.sparse` library, we need to load this library so that we can read and check the orthonormality of the data.
# Import `numpy`, `scipy.sparse`, `data_stat`, `data_io`, `units`, and `step3` modules. Also, `glob` will be needed to find specific types of files.
import os
import sys
import time
import glob
import numpy as np
import scipy.sparse as sp
import matplotlib.pyplot as plt
from liblibra_core import *
from libra_py.workflows.nbra import step3
from libra_py import units, data_stat, data_io
# ## 2. Overview of required files <a name="required_files"></a>
# [Back to TOC](#toc)
#
# * `../../7_step2_cp2k/1_xTB/2_hpc/res`
#
# The MO overlap files are needed and stored in this folder.
#
# * `../../7_step2_cp2k/1_xTB/2_hpc/all_logfiles`
#
# All of the logfiles obtained from the electronic structure calculations of CP2K. These files will be needed to find the Kohn-Sham HOMO index.
# ## 3. Computing the NACs <a name="comp_nacs"></a>
# [Back to TOC](#toc)
#
# ### 3.1. Kohn-Sham basis <a name="KS"></a>
#
# The `libra_py.workflow.nbra.step3.run_step3_ks_nacs_libint(params)` computes the NACs between pairs of Kohn-Sham states using the molecular orbital
# overlaps. The paramters for this function are as follows:
#
# `params['lowest_orbital']`: The lowest orbital considered in the computation of the MO overlaps. This value is exactly the same
# as in the `run_template.py` file in step2.
#
# `params['highest_orbital']`: The highest orbital considered in the computation of the MO overlaps. This value is exactly the same
# as in the `run_template.py` file in step2.
#
# `params['num_occ_states']`: The number of occupied orbitals to be considered from HOMO to lower occupied states. This value is defined by user.
#
# `params['num_unocc_states']`: The number of unoccupied orbitals to be considered from LUMO to higher unoccupied states. This value is defined by user.
#
# The two values above are used to create an active space which then will be used to select the elements from the MO overlap and energy matrices.
#
# `params['use_multiprocessing']`: A boolean flag to use the multiprocessing library of Python or not.
#
# `params['nprocs']`: The number of processors to be used for the calculations. Libra will use this only if the `params['use_multiprocessing']`
# is set to `True`.
#
# `params['time_step']`: The time-step used in the calculations in `fs`.
#
# `params['es_software']`: The name of the software package used to compute the electronic structure calculations. This will be used to generate the HOMO
# index of that system so it can build the active space.
#
# `params['path_to_npz_files']`: The full path to the MO overlap files.
#
# `params['logfile_directory']`: The full path to the folder where all the log files are stored.
#
# `params['path_to_save_ks_Hvibs']`: The full path to the folder in which the NACs between the Konh-Sham states are stored.
#
# `params['start_time']`: The start time-step.
#
# `params['finish_time']`: The finish time-step.
#
# `params['apply_phase_correction']`: A boolean flag for applying phase-correction algorithm.
#
# `params['apply_orthonormalization']`: A boolean flag for applying the orthonormalization algorithm.
#
# `params['do_state_reordering']`: If this value is set to `1` or `2`, the state-reordering will be applied to overlap matrices.
#
# `params['state_reordering_alpha']`: The state-reordering alpha value if the `params['do_state_reordering'] = 2`.
#
# After setting all the above paramters, the calculations are run using `step3.run_step3_ks_nacs_libint(params)`.
# +
params_ks = {
'lowest_orbital': 128-20, 'highest_orbital': 128+21, 'num_occ_states': 20, 'num_unocc_states': 20,
'use_multiprocessing': True, 'nprocs': 8, 'time_step': 1.0, 'es_software': 'cp2k',
'path_to_npz_files': os.getcwd()+'/../../7_step2_cp2k/2_xTB/2_hpc/res',
'logfile_directory': os.getcwd()+'/../../7_step2_cp2k/2_xTB/2_hpc/all_logfiles',
'path_to_save_ks_Hvibs': os.getcwd()+'/res-ks-xTB',
'start_time': 1500, 'finish_time': 1700,
'apply_phase_correction': True, 'apply_orthonormalization': True,
'do_state_reordering': 2, 'state_reordering_alpha':0
}
#### For KS states - Applying correction to KS overlaps and computing the NACs in KS space
step3.run_step3_ks_nacs_libint(params_ks)
# -
# ### 3.2. Excited state basis <a name="excited_states"></a>
#
# The NACs can also be computed between excited states. These include the single-particle and many-body bases which the latter is obtained from the
# TD-DFT calculations. First, we need to compute the overlap between excited state Slater-determinants (SDs) then they will be used to compute the NACs
# between them. For many-body states, the configuration interaction coefficietns will also be used. We will consider both single-particle
# and many-body for DFT calculations but only single-particle for xTB.
#
# To run the calculations `step3.run_step3_sd_nacs_libint(params)` function will be used. Some parameters are common with the ones used to run `step3.run_step3_ks_nacs_libint(params)`.
#
# There are different ways of defining the excited states SDs (the single-particle excited state basis). The first is through
# defining the `num_occ_states` and `num_unocc_states` in which Libra
# will start making the SDs from all of the occupied states (starting from `HOMO-num_occ_states+1`) to all of the unoccupied states (ends
# to `LUMO+num_unocc_states-1`). Also, if the unrestricted spin calculation flag is set to `True`, the SDs will be made for both alpha and beta spin channels.
#
# For example, if you want to build the electron-only excitation basis, you need to set `params['num_occ_states'] = 1` and set `params['num_unocc_states']`
# to a value less than the number of unoccupied orbitals that was considered in the computation of overlaps. This will generate all the electron-only
# excitation from HOMO to unoccupied states.
#
# If the TD-DFT calculations has been done, then Libra will go over all log files and
# generate all the SDs used for all the steps and therefore the definition of these SDs is automatic and Libra will replace the `num_occ_states` and
# `num_unocc_states` itself based on the SDs that were generated from the TD-DFT log files.
#
#
#
# Other parameters needed to run the `step3.run_step3_sd_nacs_libint(params)` function are as follows:
#
# `params['isUKS']`: A boolean flag for unrestricted spin calculations.
#
# `params['is_many_body']`: If set to `True`, the NACs will be computed between pairs of many-body (TD-DFT) states. Also, the NACs between single-particle
# SDs obtained from the TD-DFT results will be computed as well. Otherwise, only single-particle NACs will be computed only for the SDs obtained from
# `num_occ_states` and `num_unocc_states`. This will be used for xTB calculations in which no TD-DFT is performed.
#
# `params['number_of_states']`: The number of TD-DFT states to consider. This value should not exceed the number of requested TD-DFT states in the CP2K
# calculations.
#
# `params['tolerance']`: A lower bound for selection of the excitation with configuration interaction coefficients higher than this value.
#
# `params['verbosity']`: An integer value showing the printing level. The default is set to 0. Higher values will print more data on terminal.
#
# `params['sorting_type']`: After defining the SDs, Libra will sort them either based on `'energy'` or `'identity'`.
#
#
#
# +
#### For excited states - Computing the excited states SDs and their overlaps and NACs
params_mb_sd = {
'lowest_orbital': 128-20, 'highest_orbital': 128+21, 'num_occ_states': 20, 'num_unocc_states': 20,
'isUKS': 0, 'number_of_states': 0, 'tolerance': 0.01, 'verbosity': 0,
'use_multiprocessing': True, 'nprocs': 8,
'is_many_body': False, 'time_step': 1.0, 'es_software': 'cp2k',
'path_to_npz_files': os.getcwd()+'/../../7_step2_cp2k/2_xTB/2_hpc/res',
'logfile_directory': os.getcwd()+'/../../7_step2_cp2k/2_xTB/2_hpc/all_logfiles',
'path_to_save_sd_Hvibs': os.getcwd()+'/res-mixed-basis-xTB',
'outdir': os.getcwd()+'/res-mixed-basis',
'start_time': 1500, 'finish_time': 1700, 'sorting_type': 'identity',
'apply_phase_correction': True, 'apply_orthonormalization': True,
'do_state_reordering': 2, 'state_reordering_alpha':0
}
step3.run_step3_sd_nacs_libint(params_mb_sd)
# -
# ## 4. Plotting the results <a name='plotting'></a>
# [Back to TOC](#toc)
#
# ### 4.1. NAC distribution <a name='nac_dist'></a>
#
# One of the intuitive ways to visualize the NACs is to plot the distribution of the NACs. Here we plot them for SP and MB excited states.
# +
# %matplotlib notebook
for basis in ['sd']:
nac = []
nac_files = glob.glob(F'res-mixed-basis-xTB/Hvib_{basis}*im*')
for nac_file in nac_files:
hvib = sp.load_npz(nac_file)
hvib_dense = hvib.todense().real
for i in range(hvib.shape[0]):
for j in range(hvib.shape[0]):
if j != i:
nac_ij = np.abs(hvib_dense[i,j])* 1000.0 * units.au2ev
x_mb = MATRIX(1,1)
x_mb.set(0, 0, nac_ij )
nac.append( x_mb )
bin_supp, dens, cum = data_stat.cmat_distrib( nac, 0, 0, 0, 0, 50, 0.1)
plt.plot( bin_supp, dens, label='Mixed')
plt.xlabel('|NAC|, meV')
plt.ylabel('PD, 1/meV')
plt.title('NAC distribution, mixed-basis')
plt.legend()
plt.tight_layout()
# plt.savefig('nac_dist_1.jpg', dpi=600)
# -
# ### 4.2. Energy vs time <a name='ene_time'></a>
# Here, we plot the excited states energy vs time. Since the excited states were sorted by their `'identity'` it is easy to visualize the states energies crossings.
# %matplotlib notebook
energy_files = glob.glob('res-mixed-basis-xTB/Hvib_sd*re*')
energy_files = data_io.sort_hvib_file_names(energy_files)
#print('Sorted energy files are:', energy_files)
dt = 1.0 # fs
energies = []
for file in energy_files:
energies.append(np.diag(sp.load_npz(file).todense().real))
energies = np.array(energies)*units.au2ev
md_time = np.arange(0,energies.shape[0]*dt,dt)
#print(energies.shape)
for i in range(energies.shape[1]):
plt.plot(md_time, energies[:,i]-energies[:,0])
plt.title('Energy vs time')
plt.ylabel('Energy, eV')
plt.xlabel('Time, fs')
plt.tight_layout()
# ### 4.3. NAC map <a name='nac_map'></a>
# Another way of visualizing the NAC values is to plot the average NAC matrix using `plt.imshow`.
# %matplotlib notebook
nac_files = glob.glob('res-mixed-basis-xTB/Hvib_sd*im*')
for c, nac_file in enumerate(nac_files):
nac_mat = sp.load_npz(nac_file).todense().real
if c==0:
nac_ave = np.zeros(nac_mat.shape)
nac_ave += np.abs(nac_mat)
nac_ave *= 1000*units.au2ev/c
nstates = nac_ave.shape[0]
plt.imshow(np.flipud(nac_ave), cmap='hot', extent=(0,nstates,0,nstates))#, vmin=0, vmax=150)
plt.xlabel('State index')
plt.ylabel('State index')
plt.colorbar().ax.set_title('meV')
plt.title('Mixed-basis NACs')
# ### 4.4. Average partial density of states <a name='ave_pdos'></a>
# In this section, we will plot the average partial density of states (pDOS) over the MD trajectory. There are two ways to take the average of the pDOS:
#
# 1- Average all the pDOS files and then convolve the average pDOS for each element.
# 2- Convolve the pDOS files and then take the average for each element.
#
# We choose the first one due to two reasons. First, the computational cost is much lower and we only need one convolution. Second is that averaging over the grid points (using the method 2) is dependent on the number of grid points we use for convolution which again adds to the complexity of the procedure.
#
# Here, we will use normalized Gaussian function for weighting the pDOS values and summing them.
#
# $$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{(x-\mu)^2}{2\sigma^2})$$
#
# This function is defined in the `gaussian_function` below. To apply this to a vector of numbers and sum all the weighted Gaussians, we use the `gaussian_function_vector` which will be used for pDOS plots.
# +
def gaussian_function(a, mu, sigma, num_points, x_min, x_max):
pre_fact = (a/sigma)/(np.sqrt(2*np.pi))
x = np.linspace(x_min, x_max, num_points)
x_input = np.array((-1/2)/(np.square(sigma))*np.square(x-mu))
gaussian_fun = pre_fact*np.exp(x_input)
return x, gaussian_fun
def gaussian_function_vector(a_vec, mu_vec, sigma, num_points, x_min, x_max):
for i in range(len(a_vec)):
if i==0:
sum_vec = np.zeros(num_points)
energy_grid, conv_vec = gaussian_function(a_vec[i], mu_vec[i], sigma, num_points, x_min, x_max)
sum_vec += conv_vec
return energy_grid, sum_vec
# -
# #### 4.4.1. Plot pDOS for all atoms angular momentums <a name='ave_pdos_1'></a>
#
# In this part, we plot the pDOS for all of the angular momentum components of each atom. This is done by using the `orbital_cols`. In fact, the `orbital_cols` is related to `orbitals`. For example, for `s` orbital, we consider the 3rd index and for `p` orbital, we sum the columns from 4 to 6 (`range(4,7)`). Here we want to show how the code works and how the you can modify that based on your project. In the next section, we will show the pDOS only for atoms and sum all the components in each row of the pdos file. Other parameters are as follows:
#
# `atoms`: The atoms names which will be used in the labeling and plotting. The atoms order should be exactly the same as appear in the `.pdos` files. For example, the `*k1*.pdos` files contain the pDOS data for `N` atom and `*k2*.pdos` files contain the data for the `N` atom.
#
# `npoints`: The number of grid points for making the Gaussian functions. Note that, this value should be more than the number of states in the `.pdos` files.
#
# `sigma`: The standard deviation in eV.
#
# `shift`: This value shifts the minimum and maximum energy found in the `pdos_ave` and will extend the boundaries from both sides by `shift`eV.
#
# Finally, we will plot the total density of states. Note that the HOMO energy level is set to zero.
# +
# %matplotlib notebook
path_to_all_pdos = os.getcwd()+'/../../7_step2_cp2k/2_xTB/2_hpc/all_pdosfiles'
atoms = ['C', 'N']
orbitals_cols = [[3], range(4,7), range(7,12), range(12,19)]
orbitals = ['s','p','d','f']
npoints = 1000
sigma = 0.05 # eV
shift = 2.0 # eV
ave_pdos_convolved_all = []
for c1,i in enumerate([1,2]):
pdos_files = glob.glob(path_to_all_pdos+F'/*k{i}*.pdos')
for c2, pdos_file in enumerate(pdos_files):
pdos_mat = np.loadtxt(pdos_file)
if c2==0:
pdos_ave = np.zeros(pdos_mat.shape)
pdos_ave += pdos_mat
pdos_ave /= c2+1
pdos_ave[:,1] *= units.au2ev
e_min = np.min(pdos_ave[:,1])-shift
e_max = np.max(pdos_ave[:,1])+shift
homo_level = np.max(np.where(pdos_ave[:,2]==2.0))
homo_energy = pdos_ave[:,1][homo_level]
for c3, orbital_cols in enumerate(orbitals_cols):
try:
sum_pdos_ave = np.sum(pdos_ave[:,orbital_cols],axis=1)
ave_energy_grid, ave_pdos_convolved = gaussian_function_vector(sum_pdos_ave, pdos_ave[:,1], sigma,
npoints, e_min, e_max)
ave_pdos_convolved_all.append(ave_pdos_convolved)
pdos_label = atoms[c1]+F', {orbitals[c3]}'
plt.plot(ave_energy_grid-homo_energy, ave_pdos_convolved, label=pdos_label)
except:
pass
ave_pdos_convolved_total = np.sum(np.array(ave_pdos_convolved_all),axis=0)
plt.plot(ave_energy_grid-homo_energy, ave_pdos_convolved_total, color='black', label='Total')
plt.legend()
plt.xlim(-4,4)
plt.ylabel('pDOS, 1/eV')
plt.xlabel('Energy, eV')
plt.title('C$_3$N$_4$ unit cell, 300 K')
plt.tight_layout()
# -
# #### 4.4.2. Plot pDOS for atoms with no angular momentum component <a name='ave_pdos_2'></a>
# As you can see we have removed the `for` loop for the `orbital_cols` and in the `try` section we have set `sum_pdos_ave = np.sum(pdos_ave[:,3::],axis=1)` which will sum all the columns from 3rd index (`pdos_ave[:,3::]`).
# +
# %matplotlib notebook
path_to_all_pdos = os.getcwd()+'/../../7_step2_cp2k/2_xTB/2_hpc/all_pdosfiles'
atoms = ['C', 'N']
npoints = 1000
sigma = 0.05
shift = 2.0 # eV
ave_pdos_convolved_all = []
for c1,i in enumerate([1,2]):
pdos_files = glob.glob(path_to_all_pdos+F'/*k{i}*.pdos')
for c2, pdos_file in enumerate(pdos_files):
pdos_mat = np.loadtxt(pdos_file)
if c2==0:
pdos_ave = np.zeros(pdos_mat.shape)
pdos_ave += pdos_mat
pdos_ave /= c2+1
pdos_ave[:,1] *= units.au2ev
e_min = np.min(pdos_ave[:,1])-shift
e_max = np.max(pdos_ave[:,1])+shift
homo_level = np.max(np.where(pdos_ave[:,2]==2.0))
homo_energy = pdos_ave[:,1][homo_level]
try:
sum_pdos_ave = np.sum(pdos_ave[:,3::],axis=1)
ave_energy_grid, ave_pdos_convolved = gaussian_function_vector(sum_pdos_ave, pdos_ave[:,1], sigma,
npoints, e_min, e_max)
ave_pdos_convolved_all.append(ave_pdos_convolved)
pdos_label = atoms[c1]
plt.plot(ave_energy_grid-homo_energy, ave_pdos_convolved, label=pdos_label)
except:
pass
ave_pdos_convolved_total = np.sum(np.array(ave_pdos_convolved_all),axis=0)
plt.plot(ave_energy_grid-homo_energy, ave_pdos_convolved_total, color='black', label='Total')
plt.legend()
plt.xlim(-4,4)
plt.ylabel('pDOS, 1/eV')
plt.xlabel('Energy, eV')
plt.title('TiO$_2$, 300 K')
plt.tight_layout()
| 6_dynamics/2_nbra_workflows/8_step3_cp2k/2_xTB/tutorials.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Example: Joint inference of $p(G, \Theta | \mathcal{D})$ for Gaussian Bayes nets
# Setup for Google Colab. Selecting the **GPU** runtime available in Google colab will make inference significantly faster.
#
#
# %cd /content
# !git clone https://github.com/larslorch/dibs.git
# %cd dibs
# %pip install -e . --quiet
# DiBS translates the task of inferring the posterior over Bayesian networks into an inference problem over the continuous latent variable $Z$. This is achieved by modeling the directed acyclic graph $G$ of the Bayesian network using the generative model $p(G | Z)$. The prior $p(Z)$ enforces the acyclicity of $G$.
# Ultimately, this allows us to infer $p(G, \Theta | \mathcal{D})$ (and $p(G | \mathcal{D})$) using off-the-shelf inference methods such as Stein Variational gradient descent (SVGD) (Liu and Wang, 2016).
# +
import jax
import jax.random as random
key = random.PRNGKey(123)
print(f"JAX backend: {jax.default_backend()}")
# -
# ### Generate synthetic ground truth Bayesian network and BN model for inference
#
# `data` contains information about and observations sampled from a synthetic, ground truth causal model with `n_vars` variables. By default, the conditional distributions are linear Gaussian. The random graph model is set by `graph_prior_str`, where `er` denotes Erdos-Renyi and `sf` scale-free graphs.
#
# `model` defines prior $p(G, \Theta)$ and likelihood $p(x | G, \Theta)$ of the BN model for which DiBS will infer the posterior.
#
# **For posterior inference of nonlinear Gaussian networks parameterized by fully-connected neural networks, use the function `make_nonlinear_gaussian_model`.**
#
# +
from dibs.target import make_linear_gaussian_model, make_nonlinear_gaussian_model
from dibs.utils import visualize_ground_truth
key, subk = random.split(key)
data, model = make_linear_gaussian_model(key=subk, n_vars=20, graph_prior_str="sf")
visualize_ground_truth(data.g)
# -
# ### DiBS with SVGD
#
# Infer $p(G, \Theta | D)$ under the prior and conditional distributions defined by `model`.
# The below visualization shows the *matrix of edge probabilities* $G_\alpha(Z^{(k)})$ implied by each transported latent particle (i.e., sample) $Z^{(k)}$ during the iterations of SVGD with DiBS. Refer to the paper for further details.
#
# To explicitly perform posterior inference of $p(G | \mathcal{D})$ using a closed-form marginal likelihood $p(D | G)$, use the separate, analogous class `MarginalDiBS` as demonstrated in the example notebook `dibs_marginal.ipynb`
#
#
# +
from dibs.inference import JointDiBS
dibs = JointDiBS(x=data.x, inference_model=model)
key, subk = random.split(key)
gs, thetas = dibs.sample(key=subk, n_particles=20, steps=1000, callback_every=50, callback=dibs.visualize_callback())
# -
# ### Evaluate on held-out data
#
# Form the empirical (i.e., weighted by counts) and mixture distributions (i.e., weighted by unnormalized posterior probabilities, denoted DiBS+).
dibs_empirical = dibs.get_empirical(gs, thetas)
dibs_mixture = dibs.get_mixture(gs, thetas)
# Compute some evaluation metrics.
# +
from dibs.metrics import expected_shd, threshold_metrics, neg_ave_log_likelihood
for descr, dist in [('DiBS ', dibs_empirical), ('DiBS+', dibs_mixture)]:
eshd = expected_shd(dist=dist, g=data.g)
auroc = threshold_metrics(dist=dist, g=data.g)['roc_auc']
negll = neg_ave_log_likelihood(dist=dist, eltwise_log_likelihood=dibs.eltwise_log_likelihood, x=data.x_ho)
print(f'{descr} | E-SHD: {eshd:4.1f} AUROC: {auroc:5.2f} neg. LL {negll:5.2f}')
# -
| examples/dibs_joint_colab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="l9fQcdsF72BR"
# # ARDRegressor with RobustScaler & Polynomial Features
# + [markdown] id="L9YSBPuX72BW"
# This Code template is for regression analysis using ARDRegressor algorithm with RobustScaler and feature transformation technique PolynomialFeatures in a pipeline.
# + [markdown] id="3dVs1_ED72BX"
# ### Required Packages
# + id="YO4omyN472BY"
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler, PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import ARDRegression
warnings.filterwarnings('ignore')
# + [markdown] id="MLUOG-C172BZ"
# ### Initialization
#
# Filepath of CSV file
# + id="rarPNLIc72Ba"
#filepath
file_path= ""
# + [markdown] id="5XAlRZJF72Bb"
# List of features which are required for model training .
# + id="V2BRgXL872Bb"
#x_values
features=[]
# + [markdown] id="596QBwFW72Bc"
# Target feature for prediction.
# + id="4KjHJj5B72Bd"
#y_value
target=''
# + [markdown] id="PeP_2WCm72Be"
# ### Data Fetching
#
# Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
#
# We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.
# + id="GJwHT0fW72Be" outputId="21854dca-f5e9-4b4a-8a97-4dcec7e17eac"
df=pd.read_csv(file_path)
df.head()
# + [markdown] id="8t-mMjf872Bg"
# ### Feature Selections
#
# It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
#
# We will assign all the required input features to X and target/outcome to Y.
# + id="jhqoRu5K72Bg"
X=df[features]
Y=df[target]
# + [markdown] id="lI6NLJ4v72Bh"
# ### Data Preprocessing
#
# Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes.
#
# + id="8FFItviy72Bh"
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
# + [markdown] id="xxY0KScP72Bi"
# Calling preprocessing functions on the feature and target set.
#
# + id="KroArxLy72Bi" outputId="8b3fee8f-9cea-435f-f81d-559a8a6ff1a1"
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
# + [markdown] id="6vgICEdW72Bi"
# #### Correlation Map
#
# In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
# + id="XTESNa8v72Bj" outputId="e8fca979-9218-4021-8060-2e16b66e332c"
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
# + [markdown] id="fyF3yaJE72Bj"
# ### Data Splitting
#
# The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
# + id="Sv3tQ1Ki72Bk"
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
# + [markdown] id="hZdZUL0O72Bk"
# ## Data Rescaling
# In RobustScaler, data is scaled based on quantile ranges (i.e., IQR: Interquartile Range) rather than the median. Most machine learning estimators require a standardization of a dataset. A common method is to remove the mean and scale variance to the unit. Outliers, however, are sometimes harmful to the sample mean/variance. It is often better to use the median and the interquartile range for such cases.
#
#
# [RobustScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)
#
# + [markdown] id="2us_HZWR72Bl"
# ## Feature Transformation
#
# Generate polynomial and interaction features.
#
# Generate a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree.
#
# <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html">More about PolynomialFeatures module</a>
# + [markdown] id="zOHNuwgx72Bl"
# ### Model
#
# Bayesian ARD regression.
#
# Fit the weights of a regression model, using an ARD prior. The weights of the regression model are assumed to be in Gaussian distributions. Also estimate the parameters lambda (precisions of the distributions of the weights) and alpha (precision of the distribution of the noise). The estimation is done by an iterative procedures (Evidence Maximization)
#
# ### Tuning parameters
#
# ><b>n_iter:</b> int, default=300 -> Maximum number of iterations.
#
# ><b>tol:</b> float, default=1e-3 -> Stop the algorithm if w has converged.
#
# > <b>alpha_1:</b> float, default=1e-6 -> Hyper-parameter : shape parameter for the Gamma distribution prior over the alpha parameter.
#
# ><b>alpha_2:</b> float, default=1e-6 -> Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter.
#
# ><b>lambda_1:</b> float, default=1e-6 -> Hyper-parameter : shape parameter for the Gamma distribution prior over the lambda parameter.
#
# ><b>lambda_2:</b> float, default=1e-6 -> Hyper-parameter : inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter.
#
# ><b>compute_score:</b> bool, default=False -> If True, compute the objective function at each step of the model.
#
# ><b>threshold_lambda:</b> float, default=10 000 -> threshold for removing (pruning) weights with high precision from the computation.
#
# ><b>fit_intercept:</b> bool, default=True -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered).
#
# ><b>normalize:</b> bool, default=False -> This parameter is ignored when fit_intercept is set to False. If True, the regressors X will be normalized before regression by subtracting the mean and dividing by the l2-norm. If you wish to standardize, please use StandardScaler before calling fit on an estimator with normalize=False.
#
# ><b>copy_X:</b> bool, default=True -> If True, X will be copied; else, it may be overwritten.
# verbose: bool, default=False -> Verbose mode when fitting the model.
#
#
# + id="hl86NPui72Bm" colab={"base_uri": "https://localhost:8080/"} outputId="81e78c81-1e89-4921-8de2-2a40288d3e63"
model=make_pipeline(RobustScaler(),PolynomialFeatures(),ARDRegression())
model.fit(x_train,y_train)
# + [markdown] id="wymEKRla72Bn"
# #### Model Accuracy
#
# We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.
#
# score: The score function returns the coefficient of determination R2 of the prediction.
#
# + id="yEaoYMAC72Bn" outputId="b18519fb-5cf7-4ef4-d70c-8b107dddf8ef"
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
# + [markdown] id="Nuo6Sg1T72Bo"
# > **r2_score**: The **r2_score** function computes the percentage variablility explained by our model, either the fraction or the count of correct predictions.
#
# > **mae**: The **mean abosolute error** function calculates the amount of total error(absolute average distance between the real data and the predicted data) by our model.
#
# > **mse**: The **mean squared error** function squares the error(penalizes the model for large errors) by our model.
# + id="G85F51kj72Bo" outputId="14aef6f0-452f-4d67-9579-3aae9d17ffd3"
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
# + [markdown] id="kG580BVF72Bp"
# #### Prediction Plot
#
# First, we make use of a plot to plot the actual observations, with x_train on the x-axis and y_train on the y-axis.
# For the regression line, we will use x_train on the x-axis and then the predictions of the x_train observations on the y-axis.
# + id="Gw065Gx672Bp" outputId="1b28e6f9-af96-4e9b-8eba-78bce1d28563"
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
# + [markdown] id="l1Wu5fomgmu0"
# **creator: <NAME>, GitHub: [profile](https://github.com/viratchowdary21)**
#
| Regression/Linear Models/ARDRegressor_RobustScaler_PolynomialFeatures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FIFA World Cup - Benjamin
# The notebook is seperated into two parts: a descriptive one and the fair play analysis.
# +
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams.update({'font.size': 16})
#G=Goal, OG=Own Goal, Y=Yellow Card, R=Red Card, SY = Red Card by second yellow, P=Penalty, MP=Missed Penalty, I = Substitution In, O=Substitute Out, IH= In half time?
df_matches=pd.read_csv('data_raw/WorldCupMatches.csv', sep=',')
df_players=pd.read_csv('data_raw/WorldCupPlayers.csv', sep=',')
df_cups=pd.read_csv('data_raw/WorldCups.csv', sep=',')
df_events = pd.read_csv('data_prepared/event.csv', sep=',').replace(np.nan, '', regex=True)
# -
df_matches
df_players
df_events
# # Descriptive part
# In this section we provide some general details about the dataset. First question is: Who won most world cups? Who won most second prizes?
# +
plt.figure(figsize=(14,7))
plt.subplot(121)
df_cups.groupby(['Winner']).size().sort_values(ascending=False).plot.bar()
plt.title("World Cup Wins")
plt.xlabel("")
plt.subplot(122)
df_cups.groupby(['Runners-Up']).size().sort_values(ascending=False).plot.bar()
plt.title("2nd Prize")
plt.xlabel("")
plt.show()
# -
# Next question is: Who scored most goals in world championships?
# +
plt.figure(figsize=(9,6))
df_players.groupby(['Player Name']).size().sort_values(ascending=False).nlargest(n=10).plot.bar()
print(df_players.groupby(['Player Name']).size().sort_values(ascending=False).nlargest(n=10))
# +
# data consistency not sufficient for this calculation as player names are not unique
# note that this does not have an impact on the subsequent analysis
#df_timespan = df_joined_matches[['Player Name','Year']].groupby(['Player Name']).aggregate(['min','max']).Year
#df_timespan.columns = ['start','end']
#df_timespan.apply(lambda x: (x.end-x.start), axis=1).sort_values(ascending = False)
# -
# # Analysis of Fair Play
#
# We first need some preliminary work. How many matches were played at all? How many of them were won by one team?
# +
num_matches_total = len(df_events.groupby('MatchID').mean())
num_matches_decision = len(df_events.loc[(df_events['HomeTeamWins'] == True) | (df_events['AwayTeamWins'] == True)].groupby('MatchID').mean())
num_matches_tie = len(df_events.loc[(df_events['HomeTeamWins'] == False) & (df_events['AwayTeamWins'] == False)].groupby('MatchID').mean())
print("num_matches_total: %g"% num_matches_total)
print("num_matches_decision: %g"% num_matches_decision)
print("num_matches_tie: %g"% num_matches_tie)
print("proportion decision: %.2f"% (num_matches_decision/num_matches_total*100))
print("proportion no decision: %.2f"% (num_matches_tie/num_matches_total*100))
# -
# ## Yellow and Red Cards Statistics
#
# On average 2.68 yellow cards, 0.14 red cards and 0.06 red cards for second yellow are given during a match.
# +
f = {
'Year':'count' # we could do this with any attribute
}
df_cards = df_events.loc[(df_events["EventType"] == "Y") | (df_events["EventType"] == "R") | (df_events["EventType"] == "RSY")]
df_cards = df_cards.groupby(["EventType"]).agg(f)
df_cards.columns = ['Total']
df_cards.assign(AvgPerMatch = lambda x : x.Total/num_matches_total)
# -
# The match with most cards was Portugal vs. Greece in 2006's Round of 16 with 20 cards.
# +
f = {
'Attendance':'count' # again, we could do this with any attribute
}
df_cards = df_events.loc[(df_events["EventType"] == "Y") | (df_events["EventType"] == "R") | (df_events["EventType"] == "RSY")]
df_cards = df_cards.groupby(['MatchID','Stage','Year','Home Team Name', 'Away Team Name', 'Home Team Goals', 'Away Team Goals']).agg(f).reset_index()
df_cards.columns = ['Match ID', 'Stage','Year', 'Home Team Name', 'Away Team Name', 'Home Team Goals', 'Away Team Goals', 'Cards']
df_cards.sort_values(by=['Cards'], ascending=False)
# -
# During the match, 4 players were given a red card by second yellow.
# +
df_events.loc[(df_events["MatchID"] == 97410052.0) & (df_events["EventType"] != "") & (df_events["EventType"] != "I")][['Team Initials','Player Name','EventMinute','EventType']]
#[df_events.MatchID == 97410052][['EventOfHomeTeam','EventType','Player Name']]
# -
# ## Event minutes of red and yellow cards
#
# In this section we want to find out when most yellow and red cards are given. As expected, red cards tend to be given later.
# +
df_events[['EventMinute']] = df_events[['EventMinute']].apply(pd.to_numeric)
#df_events.loc[(df_events['EventType'] == "Y") & (int(df_events['EventMinute']) < 20)]
minutes_yellow = df_events[df_events.EventType == "Y"].EventMinute.values
minutes_red = df_events[df_events.EventType == "R"].EventMinute.values
minutes_red_2nd_yellow = df_events[df_events.EventType == "RSY"].EventMinute.values
plt.figure(figsize=(12,6))
ax = plt.subplot(131)
ax.boxplot(minutes_yellow)
plt.title("Yellow Cards")
ax = plt.subplot(132)
ax.boxplot(minutes_red)
plt.title("Red Cards")
ax = plt.subplot(133)
ax.boxplot(minutes_red_2nd_yellow)
plt.title("Red Cards (By 2nd Yellow)")
plt.show()
# -
# # Fairest team
#
# In this section we want to find out which team is given the fewest yellow cards per match on average. First we have to find out how many yellow cards a team was awarded. We have to create to dataframes as teams appear either as home or away team.
# +
df_yellow_cards = df_events[df_events.EventType == "Y"]
df_yellow_cards_home = df_yellow_cards[df_yellow_cards.EventOfHomeTeam == True][["Home Team Name", "EventType"]]
df_yellow_cards_home = df_yellow_cards_home.groupby("Home Team Name").count().reset_index()
df_yellow_cards_home.columns = ['Team', 'YellowCardsHome']
df_yellow_cards_away = df_yellow_cards[df_yellow_cards.EventOfHomeTeam == False][["Away Team Name", "EventType"]]
df_yellow_cards_away = df_yellow_cards_away.groupby("Away Team Name").count().reset_index()
df_yellow_cards_away.columns = ['Team', 'YellowCardsAway']
df_yellow_cards_count = pd.merge(df_yellow_cards_home, df_yellow_cards_away).fillna(0)
df_yellow_cards_count['YellowCardsTotal'] = df_yellow_cards_count.YellowCardsHome+df_yellow_cards_count.YellowCardsAway
df_yellow_cards_count
# -
# Now we need the amount of matches per team to finally compute the average amount of yellow cards per match per team.
# +
df_home_matches = df_matches[["Home Team Name"]]
df_home_matches["MatchesCount1"] = 1
df_home_matches = df_home_matches.groupby("Home Team Name").count().reset_index()
df_home_matches.columns = ['Team', 'MatchesHome']
df_away_matches = df_matches[["Away Team Name"]]
df_away_matches["MatchesCount2"] = 1
df_away_matches = df_away_matches.groupby("Away Team Name").count().reset_index()
df_away_matches.columns = ['Team', 'MatchesAway']
df_matches_count = pd.merge(df_home_matches, df_away_matches).fillna(0)
df_matches_count['MatchesTotal'] = df_matches_count.MatchesHome+df_matches_count.MatchesAway
df_yellow_cards_teams = pd.merge(df_yellow_cards_count, df_matches_count)
df_yellow_cards_teams['AvgYellowPerMatch'] = df_yellow_cards_teams.YellowCardsTotal/df_yellow_cards_teams.MatchesTotal
#just to get team as index
df_yellow_cards_teams = df_yellow_cards_teams.groupby("Team").mean()
df_yellow_cards_teams
# -
# We see that some teams with only very few matches appear in both lists. These could be statistical outliers.
# +
plt.figure(figsize=(12,6))
ax = plt.subplot(121)
df_yellow_cards_teams["AvgYellowPerMatch"].sort_values(ascending=False).nlargest(n=10).plot.bar()
plt.title("Teams with most yellow cards")
ax = plt.subplot(122)
df_yellow_cards_teams["AvgYellowPerMatch"].sort_values(ascending=True).nsmallest(n=10).plot.bar()
plt.title("Fairest Teams")
plt.show()
# -
# Based on the previous observation we restrict ourselves to teams with at least 30 matches which are in total 21 teams. Germany is on rank 3.
# +
df_yellow_cards_reg_teams = df_yellow_cards_teams[df_yellow_cards_teams.MatchesTotal > 30]
print(len(df_yellow_cards_reg_teams))
plt.figure(figsize=(12,6))
df_yellow_cards_reg_teams["AvgYellowPerMatch"].sort_values(ascending=False).plot.bar()
plt.title("Teams with most yellow cards")
plt.show()
# -
# ## Winners play fair!?
#
# Whether or not a game is a tie does not have an effect on the amount of red or yellow cards. However, if it is not a tie the winner are given less yellow and red cards.
# +
avg_yellow_of_winner = len(df_events.loc[(df_events['EventOfWinner'] == True) & (df_events['EventType'] == 'Y')])/num_matches_decision
avg_yellow_of_loser = len(df_events.loc[(df_events['EventOfLoser'] == True) & (df_events['EventType'] == 'Y')])/num_matches_decision
avg_red_of_winner = len(df_events.loc[(df_events['EventOfWinner'] == True) & ((df_events['EventType'] == 'R') | (df_events['EventType'] == 'RSY'))])/num_matches_decision
avg_red_of_loser = len(df_events.loc[(df_events['EventOfLoser'] == True) & ((df_events['EventType'] == 'R') | (df_events['EventType'] == 'RSY'))])/num_matches_decision
avg_yellow_decided_match = avg_yellow_of_winner+avg_yellow_of_loser
avg_red_decided_match = avg_red_of_winner+avg_red_of_loser
avg_yellow_tie_match = len(df_events.loc[(df_events['HomeTeamWins'] == False) & (df_events['AwayTeamWins'] == False) & (df_events['EventType'] == 'Y')])/num_matches_tie
avg_red_tie_match = len(df_events.loc[(df_events['HomeTeamWins'] == False) & (df_events['AwayTeamWins'] == False) & ((df_events['EventType'] == 'R') | (df_events['EventType'] == 'RSY'))])/num_matches_tie
print("avg_yellow_of_winner: %.2f"% (avg_yellow_of_winner))
print("avg_yellow_of_loser: %.2f"% (avg_yellow_of_loser))
print("avg_yellow_decided_match: %.2f"% (avg_yellow_decided_match))
print("avg_yellow_tie_match: %.2f"% (avg_yellow_tie_match))
print("avg_red_of_winner: %.2f"% (avg_red_of_winner))
print("avg_red_of_loser: %.2f"% (avg_red_of_loser))
print("avg_red_decided_match: %.2f"% (avg_red_decided_match))
print("avg_red_tie_match: %.2f"% (avg_red_tie_match))
plt.figure(figsize=(12,6))
ind = np.arange(2)
width = 0.35
dist = 0.2
ax = plt.subplot(121)
yellow_cards = (avg_yellow_of_winner, avg_yellow_of_loser)
red_cards = (avg_red_of_winner, avg_red_of_loser)
plt.xticks(ind, ('Winner', 'Loser'))
ax.bar(ind, yellow_cards, width, color='y')
ax.bar(ind + width, red_cards, width, color='r')
ax = plt.subplot(122)
yellow_cards = (avg_yellow_decided_match, avg_yellow_tie_match)
red_cards = (avg_red_decided_match, avg_red_tie_match)
plt.xticks(ind, ('Decided', 'Tie'))
ax.bar(ind, yellow_cards, width, color='y')
ax.bar(ind + width, red_cards, width, color='r')
plt.show()
# -
# # Predict Yellow Cards
# In this section we trained a model to predict the amount of yellow cards given in a match.
#
# ## Initial Features for Regression
# - Hour game starts, just added for fun. We do not expect a correlation.
# - The year the match took place
# - The stage (group phase, quarter-finals etc.) We chose to perform a one hot encoding.
# - The total amount of goals
# - The goal difference
# - The goal difference in the half time
# - The change of these differences in the second part of the match
# - Whether there was extra time
# - Whether penalty decided the match
# - The amount of substitutions
# - The amount of substitutions at half time
#
# ## Explanation of Event Types
# The codes for the event types are: G=Goal, OG=Own Goal, Y=Yellow Card, R=Red Card, SY = Red Card by second yellow, P=Penalty, MP=Missed Penalty, I = Substitution In, O=Substitute Out, IH= In half time?
#
# First, we need some data preparation. Specifically, we build the columns:
# - One Hot Encoding for the EventType column (needed for groupby later on)
# - One Hot Encoding for the StageRank column (for the actual regression)
# - total goals scored
# - goal difference half time
# - goal difference end
# - delta of the last two values
#
# Additionally, attendance must be a numeric data type
#
# +
df_events_ohe = pd.concat([df_events, pd.get_dummies(df_events['EventType'])], axis=1)
df_events_ohe = pd.concat([df_events_ohe, pd.get_dummies(df_events['StageRank'], prefix="Stage")], axis=1)
df_events_ohe = df_events_ohe.assign(GoalsTotal = lambda x : x['Home Team Goals']+x['Away Team Goals'])
df_events_ohe = df_events_ohe.assign(GoalDifference = lambda x : abs(x['Home Team Goals']-x['Away Team Goals']))
df_events_ohe = df_events_ohe.assign(GoalDifferenceHalfTime = lambda x : abs(x['Half-time Home Goals']-x['Half-time Away Goals']))
df_events_ohe = df_events_ohe.assign(DeltaGoals = lambda x : x['GoalDifference']-x['GoalDifferenceHalfTime'])
df_events_ohe[['Attendance']] = df_events_ohe[['Attendance']].apply(pd.to_numeric)
df_events_ohe
# -
# Perform a group by to get sum of yellow cards
# +
f = {'HourGameStart':['mean'],
#'Home Team Goals':['mean'], # not symmetric -> throw out
#'Away Team Goals':['mean'],
#'Half-time Home Goals':['mean'],
#'Half-time Away Goals':['mean'],
'Year':['mean'],
'Stage_1':['mean'],
'Stage_2':['mean'],
'Stage_3':['mean'],
'Stage_4':['mean'],
'Stage_5':['mean'],
'Stage_6':['mean'],
'GoalsTotal':['mean'],
'GoalDifference':['mean'],
'GoalDifferenceHalfTime':['mean'],
'DeltaGoals':['mean'],
'ExtraTime':['mean'],
'Penalty':['mean'],
'I':['sum'], #substitutions
'IH':['sum'], #substitutions half time
'Y':['sum'],
}
df_events_grp = df_events_ohe.groupby(['MatchID']).agg(f)
df_events_grp.columns = df_events_grp.columns.get_level_values(0)
df_events_grp
# -
df_events_grp.columns
# We simply use MinMaxScaler as preprocessing
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
df_events_grp[['HourGameStart','Year','GoalsTotal','GoalDifference','GoalDifferenceHalfTime','DeltaGoals']] = scaler.fit_transform(df_events_grp[['HourGameStart','Year','GoalsTotal','GoalDifference','GoalDifferenceHalfTime','DeltaGoals']])
# Linear regression is used to predict the amount of yellow cards for the test data. The MSE is fairly lower than the one of the baseline model. The coefficient of determination (R squared, amount of explained variance) is 0.5 indicating a moderate model performance. Keep in mind that R squared values depend on the amount of features used (strictly increasing on number of features).
#
# From Wiki (R squared is the amount of variance explained by the model)
# \begin{align}
# \mathit{R}^2 = \frac{\text{ESS}}{\text{TSS}}=
# \frac{\displaystyle\sum\nolimits \left(\hat{y}_i- \overline{y}\right)^2}{\displaystyle\sum\nolimits \left(y_i - \overline{y}\right)^2}
# \end{align}
# +
from sklearn.model_selection import train_test_split
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# train test split
# linear regression to predict y
X = df_events_grp.loc[:, 'HourGameStart':'IH']
#X = pd.concat([df_fouls.loc[:, 'StageRank':'IH'], df_fouls.loc[:, 'ALG':'ZAI']], axis=1)
Y = df_events_grp.loc[:, 'Y']
# transform to numpy array
X = X.as_matrix().astype(np.float)
Y = Y.as_matrix().astype(np.float)
# train/ test split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=40)
# fit regression model
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
# compare RMSE and R value with base model
y_pred = regr.predict(X_test)
# base model would be the average y value
# needed for comparison
y_base_pred = np.zeros((len(y_pred)))
y_base_pred[:,] = y_train.mean()
print("LINEAR REGRESSION: Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
print('LINEAR REGRESSION: Variance score: %.2f' % r2_score(y_test, y_pred))
print("BASE: Mean squared error: %.2f"
% mean_squared_error(y_test, y_base_pred))
# zero for sure, just added for completeness
print('BASE: Variance score: %.2f' % r2_score(y_test, y_base_pred))
# -
# To statistically analyze the model we also compute several statistics. The most important ones are
#
# - T values: statistic for t test checking whether coefficient is zero
# - p values: probability that coefficient is zero
# - F statistic: is the group of features significant?
#
# We can derive that only x2, x15, x16 are significant, i.e. Year, I (=Amount of substitutions) and IH (=Amount of half time substitutions) by observing the p values. Alternatively, this can be concluded by interpreting the confidence intervals spanning over zero for all other variables.
#
# All together F-statistic prob is low enough. Hence, all variables together can be considered significant.
# +
import statsmodels.api as sm
from scipy import stats
X2 = sm.add_constant(X_train)
est = sm.OLS(y_train, X2)
est2 = est.fit()
print(est2.summary())
# -
# To confirm a correlation between yellow cards and year, penalty, substitutions and half time substitutions we compute the Pearson correlation coefficient. The p-value corresponds to the probability to observe the left correlation coefficient randomly. All correlations are significant.
#
# The positive correlation with year suggests that nowadays more yellow cards are given. This might be due to stricter rules or less fair play.
#
# The correlation with substitutions is not obvious.
# +
from scipy.stats.stats import pearsonr
print(pearsonr(df_events_grp['Year'],Y)) # equivalent to: print(pearsonr(X[:,1],Y)) because not sensitive to scaling
print(pearsonr(df_events_grp['I'],Y)) # substitutions
print(pearsonr(df_events_grp['IH'],Y)) # half time substitutions
# -
# The correlation with year and substitutions can also be observed in a scatter plot. For penalty this does plot does not make sense as it is a binary decision.
# +
import numpy as np
import matplotlib.pyplot as plt
plt.figure()
plt.plot(df_events_grp['Year'],Y, "o")
plt.xlabel("Year (scaled)")
plt.ylabel("Yellow Cards")
plt.figure()
plt.plot(df_events_grp['I'],Y, "o")
plt.xlabel("Substitutions (scaled)")
plt.ylabel("Yellow Cards")
# -
# We try several ways to increase the performance
#
# 1. introduce regularization to tune our linear model (usually we need cross validation to tune the introduced hyperparameter values. However as we could not improve model performance we did not perform that step)
# 2. try a different ML model to increase accuracy
# 3. introduce team as one hot encoding feature to increase performance
#
# But first, why not remove the unnecessary features and make the model more robust?
# +
X2 = df_events_grp.loc[:, ['Year', 'Penalty', 'IH', 'I']]
Y2 = df_events_grp.loc[:, 'Y']
# transform to numpy array
X2 = X2.as_matrix().astype(np.float)
Y2 = Y2.as_matrix().astype(np.float)
# train/ test split
X_train2, X_test2, y_train2, y_test2 = train_test_split(X2, Y2, test_size=0.33, random_state=40)
# fit regression model
regr = linear_model.LinearRegression()
regr.fit(X_train2, y_train2)
# compare RMSE and R value with base model
y_pred2 = regr.predict(X_test2)
# base model would be the average y value
# needed for comparison
y_base_pred = np.zeros((len(y_pred)))
y_base_pred[:,] = y_train.mean()
print("LINEAR REGRESSION: Mean squared error: %.2f"
% mean_squared_error(y_test2, y_pred2))
print('LINEAR REGRESSION: Variance score: %.2f' % r2_score(y_test2, y_pred2))
print("BASE: Mean squared error: %.2f"
% mean_squared_error(y_test2, y_base_pred))
# zero for sure, just added for completeness
print('BASE: Variance score: %.2f' % r2_score(y_test2, y_base_pred))
# -
# ## Attempt 1: Regularization
# As mentioned above, regularization fails to improve the linear model. In this case we only tried ridge regression. One could also employ lasso regression etc. to regularize which have the effect of feature selection or lower variable scale respectively.
# +
from sklearn.linear_model import Ridge
clf = Ridge(alpha=2)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("REGULARIZATION: Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
print('REGULARIZATION: Variance score: %.2f' % r2_score(y_test, y_pred))
# -
# ## Attempt 2a: Try a regression tree
# Just another regression model. Leafes of the tree contain values for the specific subspace.
# +
from sklearn.tree import DecisionTreeRegressor
# fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=4)
regr_1.fit(X_train, y_train)
regr_2.fit(X_train, y_train)
# compare RMSE and R value with base model
y_pred = regr_1.predict(X_test)
y_pred_2 = regr_2.predict(X_test)
# base model would be the average y value
# needed for comparison
y_base_pred = np.zeros((len(y_pred)))
y_base_pred[:,] = y_train.mean()
print("DECISION TREE 1: Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
print('DECISION TREE 1: Variance score: %.2f' % r2_score(y_test, y_pred))
print("DECISION TREE 2: Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred_2))
print('DECISION TREE 2: Variance score: %.2f' % r2_score(y_test, y_pred_2))
# -
# ## Attempt 2b: Neural Network
#
# Sounds fancy but actually just a vanilla multilayer perceptron with only two layers.
# +
from sklearn.neural_network import MLPRegressor
nn = MLPRegressor(
hidden_layer_sizes=(1,), activation='relu', solver='adam', alpha=0.001, batch_size='auto',
learning_rate='constant', learning_rate_init=0.01, power_t=0.5, max_iter=1000, shuffle=True,
random_state=9, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True,
early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
nn.fit(X_train, y_train)
y_pred = nn.predict(X_test)
print("NEURAL NETWORK: Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
print('NEURAL NETWORK: Variance score: %.2f' % r2_score(y_test, y_pred))
# -
# ## Attempt 3: Add team as one hot encoding
# We hot encode the teams of the specific match and hope to increase the accuracy
df_teams_ohe = (pd.get_dummies(df_events['Home Team Initials'])+pd.get_dummies(df_events['Away Team Initials'])).fillna(value=0)
df_teams_ohe = pd.concat([df_teams_ohe,df_events['MatchID']],axis=1).groupby('MatchID').mean()
df_fouls = df_events_grp.join(df_teams_ohe)
df_fouls = df_fouls.reset_index()
df_fouls.drop(['MatchID'], 1,inplace=True)
df_fouls
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
df_fouls[['HourGameStart','Year','GoalsTotal','GoalDifference','GoalDifferenceHalfTime','DeltaGoals']] = scaler.fit_transform(df_fouls[['HourGameStart','Year','GoalsTotal','GoalDifference','GoalDifferenceHalfTime','DeltaGoals']])
# Unfortunately, adding the team did not improve the model performance. Probably due to shortage of data.
# +
from sklearn.model_selection import train_test_split
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# train test split
# linear regression to predict y
X = pd.concat([df_fouls.loc[:, 'HourGameStart':'IH'], df_fouls.loc[:, 'ALG':'ZAI']], axis=1)
Y = df_events_grp.loc[:, 'Y']
# transform to numpy array
X = X.as_matrix().astype(np.float)
Y = Y.as_matrix().astype(np.float)
# train/ test split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=40)
# base model would be the average y value
# needed for comparison
y_base_pred = np.zeros((len(y_pred)))
y_base_pred[:,] = y_train.mean()
# fit regression model
regr = linear_model.LinearRegression()
regr.fit(X_train, y_train)
# compare RMSE and R value with base model
y_pred = regr.predict(X_test)
print("LINEAR REGRESSION: Mean squared error: %.2f"
% mean_squared_error(y_test, y_pred))
print('LINEAR REGRESSION: Variance score: %.2f' % r2_score(y_test, y_pred))
print("BASE: Mean squared error: %.2f"
% mean_squared_error(y_test, y_base_pred))
# zero for sure, just added for completeness
print('BASE: Variance score: %.2f' % r2_score(y_test, y_base_pred))
# -
# # Predict Red Cards
# Can we also predict whether a red card was given? This is a classification task.
# +
f = {'HourGameStart':['mean'],
#'Home Team Goals':['mean'], # not symmetric -> throw out
#'Away Team Goals':['mean'],
#'Half-time Home Goals':['mean'],
#'Half-time Away Goals':['mean'],
'Year':['mean'],
'Stage_1':['mean'],
'Stage_2':['mean'],
'Stage_3':['mean'],
'Stage_4':['mean'],
'Stage_5':['mean'],
'Stage_6':['mean'],
'GoalsTotal':['mean'],
'GoalDifference':['mean'],
'GoalDifferenceHalfTime':['mean'],
'DeltaGoals':['mean'],
'ExtraTime':['mean'],
'Penalty':['mean'],
'I':['sum'], #substitutions
'IH':['sum'], #substitutions half time
'Y':['sum'],
'R':['sum'],
'RSY':['sum']
}
df_events_grp = df_events_ohe.groupby(['MatchID']).agg(f)
df_events_grp.columns = df_events_grp.columns.get_level_values(0)
# create column indicating whether red cards were given
df_events_grp['R_total'] = df_events_grp.R + df_events_grp.RSY
df_events_grp = df_events_grp.assign(R_flag = lambda x : x.R_total > 0)
df_events_grp = df_events_grp.drop(columns=['R', 'RSY','R_total'])
df_events_grp
# -
# The decision tree is unable to outperform the base model. With higher tree depths the train test gap increases. The trees heavily overfit.
# +
from sklearn import tree
from sklearn.metrics import accuracy_score
# train test split
# linear regression to predict y
X = df_events_grp.loc[:, 'HourGameStart':'Y']
Y = df_events_grp.loc[:, 'R_flag']
# transform to numpy array
X = X.as_matrix().astype(np.float)
Y = Y.as_matrix().astype(np.float)
# train/ test split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
def decision_tree_accuracy(depth):
# fit decision tree
clf = tree.DecisionTreeClassifier(max_depth=depth)
clf = clf.fit(X_train, y_train)
# compute accuracy
y_pred = clf.predict(X_test)
score = clf.score(X_test, y_test)
train_score = accuracy_score(clf.predict(X_train), y_train)
return score, train_score
# for comparison also compute accuracy for base model (always output zero)
y_base_pred = np.zeros((len(y_test))) # zeros as most matches are without red cards
base_score = accuracy_score(y_base_pred, y_test)
dec_tree_accuracy = np.array([(i, decision_tree_accuracy(i)[0], decision_tree_accuracy(i)[1]) for i in range(1,15)])
base_accuracy = np.array([base_score for i in range(1,15)])
plt.figure(figsize=(12,6))
plt.plot(dec_tree_accuracy[:,0], dec_tree_accuracy[:,1]*100, label="Decision Tree Test")
plt.plot(dec_tree_accuracy[:,0], dec_tree_accuracy[:,2]*100, label="Decision Tree Train")
plt.plot(dec_tree_accuracy[:,0], base_accuracy*100, label="Base")
plt.legend()
plt.title("Accuracies depending on tree depth")
plt.xlabel("Decision tree depth")
plt.ylabel("Accuracy (%)")
plt.show()
# -
# A little less overfitting, but still unable to achieve a significantly higher accuracy: random forests.
# +
from sklearn.ensemble import RandomForestClassifier
def random_forest_accuracy(depth):
# fit decision tree
clf = RandomForestClassifier(max_depth=depth, n_estimators=10)
clf = clf.fit(X_train, y_train)
# compute accuracy
y_pred = clf.predict(X_test)
score = clf.score(X_test, y_test)
train_score = accuracy_score(clf.predict(X_train), y_train)
return score, train_score
rf_tree_accuracy = np.array([(i, random_forest_accuracy(i)[0], random_forest_accuracy(i)[1]) for i in range(1,15)])
plt.figure(figsize=(12,6))
plt.plot(rf_tree_accuracy[:,0], rf_tree_accuracy[:,1]*100, label="Random Forest Test")
plt.plot(rf_tree_accuracy[:,0], rf_tree_accuracy[:,2]*100, label="Random Forest Train")
plt.plot(rf_tree_accuracy[:,0], base_accuracy*100, label="Base")
plt.legend()
plt.title("Accuracies depending on tree depth")
plt.xlabel("Decision tree depth")
plt.ylabel("Accuracy (%)")
plt.show()
# -
# k-NN does not perform significantly better either.
# +
from sklearn.neighbors import KNeighborsClassifier
def knn_accuracy(depth):
# fit decision tree
clf = KNeighborsClassifier(depth)
clf = clf.fit(X_train, y_train)
# compute accuracy
y_pred = clf.predict(X_test)
score = clf.score(X_test, y_test)
train_score = accuracy_score(clf.predict(X_train), y_train)
return score, train_score
knn_accuracy = np.array([(i, knn_accuracy(i)[0], knn_accuracy(i)[1]) for i in range(1,15)])
plt.figure(figsize=(12,6))
plt.plot(knn_accuracy[:,0], knn_accuracy[:,1]*100, label="k-NN Test")
plt.plot(knn_accuracy[:,0], knn_accuracy[:,2]*100, label="k-NN Train")
plt.plot(knn_accuracy[:,0], base_accuracy*100, label="Base")
plt.legend()
plt.title("Accuracies depending on k")
plt.xlabel("k")
plt.ylabel("Accuracy (%)")
plt.show()
# -
| FIFA_WM_Benjamin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + language="html"
# <style>
# .output_subarea.output_text.output_stream.output_stdout > pre {
# white-space: pre;
# }
# .p-Widget.jp-RenderedText.jp-OutputArea-output > pre {
# white-space: pre;
# }
# </style>
# -
# Global data variables
DATA_PATH = "/Users/luis/Documents/Work/Telefonica/Courses/DATA/"
from pyspark.sql import functions as F
#
#
# # Creación o modificación de columnas
#
# En Spark hay un único método para la creación o modificación de columnas y es `withColumn`. Este método es de nuevo una transformación y toma dos parámetros: el nombre de la columna a crear (o sobreescribir) y la operación que crea la nueva columna.
#
# Para una ejecución más óptima se recomienda utilizar únicamente las funciones de PySpark cuando se define la operación, pero como se detallará más adelante se pueden utilizar funciones propias.
movies_df = spark.read.csv(DATA_PATH + 'movies/movies.csv', sep=',', header=True, inferSchema=True)
ratings_df = spark.read.csv(DATA_PATH + 'movies/ratings.csv', sep=',', header=True, inferSchema=True)
ratings_movies_df = ratings_df.join(movies_df, on='movieId', how='inner')
ratings_movies_df = ratings_movies_df.cache()
ratings_movies_df.show(5)
#
#
# ## Funciones de Spark
#
#
# __valor fijo__
#
# El ejemplo más sencillo es crear una columna con un valor fijo, en este caso, columna `now` con valor '2019/01/21 14:08', y columna `rating2`con valor 4.0.
#
# Hint: `withColumn`
ratings_movies_df = ratings_movies_df.withColumn('now', F.lit('2019/01/21 14:08'))
ratings_movies_df.show(3)
ratings_movies_df = ratings_movies_df.withColumn('rating2', F.lit(4.0))
ratings_movies_df.show(3)
#
#
# __duplicar columna__
ratings_movies_df.withColumn('title2', F.col('title'))\
.select('title', 'title2')\
.show(10)
ratings_movies_df.select(F.col('title'),
F.col('title').alias('title2')).show()
#
#
# __operaciones aritmeticas__
ratings_movies_df.withColumn('rating_10', F.col('rating') * 2)\
.select('rating', 'rating_10')\
.show(10)
ratings_movies_df.withColumn('rating_avg', (F.col('rating') + F.col('rating2')) / 2)\
.select('rating', 'rating2', 'rating_avg')\
.show(10)
ratings_movies_df.selectExpr('rating',
'rating2',
'(rating + rating2)/2 as mean_rating').show()
#
#
# __if/else__
#
# Crea la columna `kind_rating`, que sea 'high' en caso de que rating sea mayor que 4, y 'low' en caso contrario.
ratings_movies_df.withColumn('kind_rating',
F.when(F.col('rating') >= 4, 'high').otherwise('low')).show(10)
quants = list(movie_rates.select(F.expr('percentile_approx(mean_rating, array(.25, .5, .75))')).first())[0]
quants
movie_rates.withColumn('quality', (F.when(F.col('mean_rating') < quants[0], 'bad')
.when(F.col('mean_rating') < quants[1], 'regular')
.when(F.col('mean_rating') < quants[2], 'good')
.otherwise('very good'))).show()
# +
movie_rates = (ratings_movies_df.groupBy('movieId')
.agg(F.round(F.avg(F.col('rating')), 2).alias('mean_rating'),
F.count('*').alias('total_rates'))
.filter(F.col('total_rates') > 100)
.orderBy(F.col('mean_rating').desc())
)
movie_rates.show(5)
# +
# 1.- Clasificar movies dependiendo mean_rating (q1, q2, q3)
# 2.- Obtener distribucion de generos por grupo (opcional)
# -
#
#
# Se pueden concatenar multiples sentencias _when_. Esta vez, sobreescribe la columna `kind_rating` para crear un nivel intermedio, donde si es mayor que dos y menor que 4, `kind_rating` sea 'med'.
ratings_movies_df.withColumn('kind_rating',
F.when(F.col('rating') >= 4, 'high')\
.when(F.col('rating') >= 2, 'med')\
.otherwise('low')).show(20)
#
#
# __operaciones con strings__
#
# Pon en mayúsculas todos los títulos de las películas
ratings_movies_df.withColumn('title upper', F.upper(F.col('title'))).show(3)
#
#
# Extrae los 10 primeros caracteres de la columna `title`
ratings_movies_df.withColumn('short_title', F.substring(F.col('title'), 0, 10))\
.select('title', 'short_title')\
.show(10, False)
#
#
# Separa los diferentes géneros de la columna `genres` para obtener una lista, usando el separador '|'
ratings_movies_df.withColumn('genres', F.split(F.col('genres'), '\|')).show(4)
# +
# %%time
(ratings_movies_df
.withColumn('genres', F.split(F.col('genres'), '\|'))
.filter(F.expr('array_contains(genres, "Horror")'))
).show()
# +
# %%time
ratings_movies_df.filter(F.col('genres').like('%Horror%')).show()
# -
#
#
# Crea una nueva columna `1st_genre` seleccionando el primer elemento de la lista del código anterior
ratings_movies_df.withColumn('1st_genre', F.split(F.col('genres'), '\|')[0])\
.select('genres', '1st_genre')\
.show(10, False)
#
#
# Reemplaza el caracter '|' por '-' en la columna `genres`
ratings_movies_df.withColumn('genres', F.regexp_replace(F.col('genres'), '\|', '-'))\
.select('title', 'genres')\
.show(10, truncate=False)
#
#
# _Con expresiones regulares_
#
# https://regexr.com/
ratings_movies_df.select(F.col('title')).show(5, False)
# +
ratings_movies_df = ratings_movies_df.withColumn('year',
F.regexp_extract(F.col('title'), '\((\d{4})\)', 1))
ratings_movies_df.show(5)
# -
#
#
# ## Casting
#
# Con el método `withColumn` también es posible convertir el tipo de una columna con la función `cast`. Es importante saber que en caso de no poder convertirse (por ejemplo una letra a número) no saltará error y el resultado será un valor nulo.
ratings_movies_df.printSchema()
#
#
# Cambia el formato de `year` a entero, y `movieId` a string.
ratings_movies_df = ratings_movies_df.withColumn('year', F.col('year').cast('int'))
ratings_movies_df.show(5)
ratings_movies_df = ratings_movies_df.withColumn('movieId', F.col('movieId').cast('string'))
ratings_movies_df.printSchema()
ratings_movies_df.withColumn('error', F.col('title').cast('int')).show(5)
#
#
# ## UDF (User Defined Functions)
#
# Cuando no es posible definir la operación con las funciones de spark se pueden crear funciones propias usando la UDFs. Primero se crea una función de Python normal y posteriormente se crea la UDFs. Es necesario indicar el tipo de la columna de salida en la UDF.
from pyspark.sql.types import StringType, IntegerType, DoubleType, DateType
#
#
# _Aumenta el rating en un 15% para cada película más antigua que 2000 (el máximo siempre es 5)._
def increase_rating(year, rating):
if year < 2000:
rating = min(rating * 1.15, 5.0)
return rating
increase_rating_udf = F.udf(increase_rating, DoubleType())
ratings_movies_df.withColumn('rating_inc',
increase_rating_udf(F.col('year'), F.col('rating')))\
.select('title', 'year', 'rating', 'rating_inc')\
.show(20)
# +
# Generar una nueva columna rating_len = longitud del titulo (usando udfs)
# 1.- Definir funcion
str_len = lambda x: len(x)
# 2.- Definir udf
str_len_udf = F.udf(str_len, IntegerType())
# 3.- Aplicar udf
(ratings_movies_df
.withColumn('rating_len', str_len_udf(F.col('title')))
.select(F.col('title'), F.col('rating'), F.col('rating_len'))
).show(10, False)
# -
#
#
# Extrae el año de la película sin usar expresiones regulares.
title = 'Trainspotting (1996)'
title.replace(')', '').replace('(', '')
year = title.replace(')', '').replace('(', '').split()[-1]
year = int(year)
year
def get_year(title):
year = title.replace(')', '').replace('(', '').split()[-1]
if year.isnumeric():
year = int(year)
else:
year = -1
return year
get_year_udf = F.udf(get_year, IntegerType())
ratings_movies_df.withColumn('year2', get_year_udf(F.col('title')))\
.select('title', 'year', 'year2').show(10, truncate=False)
# +
# 1.- Generar udf que obtenga el numero de vocales unicas dentro del titulo
# 2.- (Opcional) Basketball Diaries, The (1995) -> The Basketball Diaries 95
######
# Definir funcion
uniq_v = lambda x: len(set(x.lower()) & set('aeiou'))
# Definir UDF
uniq_v_udf = F.udf(uniq_v, IntegerType())
# Aplicar UDF
(ratings_movies_df.select(F.col('title'),
uniq_v_udf(F.col('title')).alias('unique_vowels'))).show(5, False)
# -
#
#
# # Datetimes
#
# Hay varias funciones de _pyspark_ que permiten trabajar con fechas: diferencia entre fechas, dia de la semana, año... Pero para ello primero es necesario transformar las columnas a tipo fecha. Se permite la conversion de dos formatos de fecha:
# * timestamp de unix: una columna de tipo entero con los segundos trascurridos entre la medianoche del 1 de Enero de 1990 hasta la fecha.
# * cadena: la fecha representada como una cadena siguiendo un formato específico que puede variar.
ratings_movies_df.select('title', 'timestamp', 'now').show(5)
#
#
# ## unix timestamp a datetime
ratings_movies_df = ratings_movies_df.withColumn('datetime', F.from_unixtime(F.col('timestamp')))
# +
# # %Y/%m/%d
ratings_movies_df.select('datetime', 'timestamp',
F.to_timestamp(F.col('now'),
format='yyyy/MM/dd HH:mm').alias('now')).show(10)
# -
#
#
# ## string a datetime
# +
ratings_movies_df = ratings_movies_df.withColumn('now_datetime',
F.from_unixtime(F.unix_timestamp(F.col('now'), 'yyyy/MM/dd HH:mm')))
ratings_movies_df.select('now', 'now_datetime').show(10)
# -
#
#
# ## funciones con datetimes
ratings_movies_df.select('now_datetime', 'datetime',
F.datediff(F.col('now_datetime'), F.col('datetime'))).show(10)
ratings_movies_df.select('datetime', F.date_add(F.col('datetime'), 10)).show(10)
ratings_movies_df.withColumn('datetime_plus_4_months', F.add_months(F.col('datetime'), 4))\
.select('datetime', 'datetime_plus_4_months').show(5)
ratings_movies_df.select('datetime', F.month(F.col('datetime')).alias('month')).show(10)
ratings_movies_df.select('datetime', F.last_day(F.col('datetime')).alias('last_day')).show(10)
ratings_movies_df.select('datetime', F.dayofmonth(F.col('datetime')).alias('day'),
F.dayofyear(F.col('datetime')).alias('year_day'),
F.date_format(F.col('datetime'), 'EEEE').alias('weekday')).show(10)
#
#
# Para filtrar por fechas se pueden comparar directamente con una cadena en el formato YYYY-MM-DD hh:mm:ss ya que será interpretada como una fecha.
ratings_movies_df.filter(F.col('datetime') >= "2015-09-30 20:00:00").select('datetime', 'title', 'rating').show(10)
ratings_movies_df.filter(F.col('datetime').between("2003-01-31", "2003-02-10"))\
.select('datetime', 'title', 'rating').show(5)
ratings_movies_df.filter(F.year(F.col('datetime')) >= 2012)\
.select('datetime', 'title', 'rating').show(5)
# + [markdown] colab_type="text" id="RQQqo7LCY1GE"
#
#
# # Ejercicio 1
#
# 1) Cree una función que acepte un DataFrame y un diccionario. La función debe usar el diccionario para renombrar un grupo de columnas y devolver el DataFrame ya modificado.
#
# Use el siguiente DataFrame y diccionario:
# + colab={} colab_type="code" id="OZcBXSoEY1GG"
pokemon_df = spark.read.csv(DATA_PATH + 'pokemon.csv', sep=',', header=True, inferSchema=True)
rename_dict = {'Sp. Atk': 'sp_atk',
'Sp. Def': 'sp_def'}
# + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="_gFkYDfbodna" outputId="6ecc769f-077d-432e-99e4-c3bac2d74a7c"
pokemon_df.show(3)
# + colab={} colab_type="code" id="bFegy2Nsogs7"
# Respuesta aqui
# + [markdown] colab_type="text" id="Xy7_B4HbY1GL"
#
#
# 2) Use la función definida en el punto anterior para cambiar los nombres del DF usando el diccionario dado.
#
# 3) Modifique la función de tal forma que también acepte una función en lugar de un diccionario. Use la función para renombrar las columnas.
#
# 4) Estandarice según las buenas prácticas los nombres de las columnas usando la función que acaba de definir.
#
# 5) Cree otra función que acepte un DataFrame y una lista con un subconjunto de columnas. El objetivo de esta función es determinar el número de filas duplicadas del DF.
#
# 6) Use la función creada para obtener el número de duplicados del DataFrame pokemon_df en todas las columnas excepto el nombre (`name`)
# + colab={"base_uri": "https://localhost:8080/", "height": 72} colab_type="code" id="f8VKWgPuY1GO" outputId="dd44e9f9-14d9-4a18-b819-7eafd0a86f85"
# Respuesta aqui
# + colab={} colab_type="code" id="QJlJcyBKqQf0"
# Respuesta aqui
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="eDV-hsD_r5Yu" outputId="6f03bb83-2962-482c-e28b-86f95b7a0184"
# Respuesta aqui
# + colab={} colab_type="code" id="VHOh4-1Btc6O"
# Respuesta aqui
# + colab={"base_uri": "https://localhost:8080/", "height": 139} colab_type="code" id="mMji9U1gtdEG" outputId="bd7ec08a-ca36-4f6f-cd03-a42017615803"
# Respuesta aqui
# -
#
#
# # Ejercicio 2
#
# Crea la misma lógica definida en el siguiente UDF, pero sin usar UDFs, es decir, usando exclusivamente funciones de SparkSQL.
# +
movies_df = spark.read.csv(DATA_PATH + 'movie-ratings/movies.csv', sep=',', header=True, inferSchema=True)
movies_df = movies_df.withColumn('genres', F.split(F.col('genres'), '\|'))
from pyspark.sql.types import StringType, IntegerType, DoubleType, BooleanType
def value_in_col(col, value):
return value in col
value_in_col_udf = F.udf(value_in_col, BooleanType())
# -
#
#
# *Pista*: Mira la función *explode*.
# Crimes in Vancouver
crime = spark.read.csv(DATA_PATH + 'crime_in_vancouver.csv',
header=True,
inferSchema=True,
sep=',')
crime.show(5)
# +
# 1.- Hacer una columna fecha.
# 2.- Hacer una columna weekend (True si la fecha es (viernes, sabado, domingo), False e.o.c.)
# 3.- Agrupar por weekend y neigh y obtener crimenes mas frecuentes.
# -
w_crime = (crime
.select(F.col('TYPE'),
F.col('NEIGHBOURHOOD'),
F.to_date(F.concat(F.col('YEAR'),
F.lit('-'),
F.col('MONTH'),
F.lit('-'),
F.col('DAY')
)).alias('DATE'))
.filter(F.col('NEIGHBOURHOOD').isNotNull())
.withColumn('WEEKEND',
F.when((F
.date_format(F.col('DATE'), 'E')
.isin(['Fri', 'Sat', 'Sun'])), True)
.otherwise(False))
.groupBy(F.col('TYPE'), F.col('NEIGHBOURHOOD'), F.col('WEEKEND'))
.agg(F.count('*').alias('total_crime'))
.orderBy(F.col('NEIGHBOURHOOD'), F.col('WEEKEND'), F.col('total_crime').desc())
)
w_crime.show()
from pyspark.sql import Window
wc = (Window()
.partitionBy(F.col('NEIGHBOURHOOD'), F.col('WEEKEND'))
.orderBy(F.col('total_crime').desc()))
(w_crime
.select('*', F.dense_rank().over(wc).alias('top_crime'))
.filter(F.col('top_crime') <= 3)
.orderBy(F.col('NEIGHBOURHOOD'), F.col('WEEKEND'), F.col('top_crime').desc())
).show(27, False)
| 2021Q1_DSF/CLASS NOTEBOOKS/04_dw_formatting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# +
def load_data(train, test, fname) :
fpath = "trained_models/{}/eval/{}/{}.txt".format(train, test, fname)
return np.loadtxt(fpath).astype(np.int)
train = "autoattack"
test = "pgd"
y_original = load_data(train, test, "Y_original")
y_original_pred = load_data(train, test, "Y_original_pred")
y_adv = load_data(train, test, "Y_adv")
y_adv_pred = load_data(train, test, "Y_adv_pred")
# -
# Let :
#
# $Y = \{y_1, y_2, y_3, ... , y_n \}$ -> the original label
#
# $P^{M_C}_{X_B} = \{p_1, p_2, p_3, ... , p_n \}$ -> the prediction of model $M_C$ on $X_B$
#
# $E^{M}_{X_B} = \{id, \quad id \in Y \land id \in P^{M}_{X_B} \land y_{id} \neq p_{id} \}$ -> the list of id the where prediction of the original model ${M}$ is wrong for $X_B$
#
# $E^{M_C}_{X_B} = \{id, \quad id \in Y \land id \in P^{M_C}_{X_B} \land y_{id} \neq p_{id} \}$ -> the list of id the where prediction of the robust model ${M_C}$ is wrong for $X_B$
#
# $Repair(List_1, List_2) = \{ id, \quad id \in List_1 \land id \not \in List_2 \}$
#
# $R^C_B = Repair(E^{M}_{X_B}, E^{M_C}_{X_B})$
#
# $R^B_B = Repair(E^{M}_{X_B}, E^{M_B}_{X_B})$
#
# $Match(List_2, List_2) = \{ id, \quad id \in List_1 \land id \in List_2 \} $
#
# $Length(List)$ -> calculate the length of the list
#
# $BSEM_{C-fix-B} = \frac{2 * Length(Match(R^C_B, R^B_B))}{Length(R^C_B) + Length(R^B_B)}$
#
# $BSEM_{B-fix-C} = \frac{2 * Length(Match(R^B_C, R^C_C))}{Length(R^B_C) + Length(R^C_C)}$
#
# $BSEM(B,C) = \frac{BSEM_{B-fix-C} + BSEM_{C-fix-B}}{2}$
#
# +
def get_robust_data(train, test):
y_adv = load_data(train, test, "Y_adv")
y_adv_pred = load_data(train, test, "Y_adv_pred")
return y_adv, y_adv_pred
train = "pgd"
test = "autoattack"
y_adv, y_adv_pred = get_robust_data(train, test)
print("Y_adv({},{}): {}".format(train, test, y_adv))
print("Y_adv_pred({},{}): {}".format(train, test, y_adv_pred))
# +
def error(l1, l2):
if len(l1) != len(l2) :
raise ValueError("The array length must be same")
# err = []
# for i in range(len(l1)) :
# if l1[i] != l2[i] :
# err.append(i)
# return np.array(err)
check = np.not_equal(l1, l2)
return np.argwhere(check == True).reshape(-1)
def repair(l1, l2) :
# return [x for x in l1 if x not in l2]
return l1[np.isin(l1, l2, invert=True)]
y1, y1_pred = get_robust_data("original", test)
y2, y2_pred = get_robust_data(train, test)
# print(error([0,1,2], [0,5,2]))
R = repair(error(y1, y1_pred), error(y2, y2_pred))
len(R)
# +
def match(l1, l2) :
# return [x for x in l1 if x in l2]
return l1[np.isin(l1, l2)]
len(match(R,R))
# +
def get_repair(train, test):
y1, y1_pred = get_robust_data("original", test)
y2, y2_pred = get_robust_data(train, test)
R = repair(error(y1, y1_pred), error(y2, y2_pred))
return R
def one_pov_relation(train, test) :
R_train_test = get_repair(train, test)
R_test_test = get_repair(test, test)
intersection = len(match(R_train_test, R_test_test))
union = len(R_train_test) + len(R_test_test) - intersection
return intersection / union
one_pov_relation(train, test)
# +
def BSEM(a1, a2) :
return (one_pov_relation(a1, a2) + one_pov_relation(a2, a1))/2
BSEM(train, test)
# -
BSEM("pixelattack", "autoattack")
BSEM("squareattack", "autoattack")
BSEM("pgd", "fgsm")
BSEM("cw", "fgsm")
attacks = ["autoattack", "autopgd", "bim", "cw", "fgsm", "pgd", "squareattack", "deepfool", "newtonfool", "pixelattack", "spatialtransformation"]
# +
metrics = {}
for a1 in attacks :
m = {}
for a2 in attacks :
m[a2] = one_pov_relation(a1, a2)
metrics[a1] = m
one_bsem = pd.DataFrame(data=metrics)
# +
# def plot_heatmap(data, cmap, path, annot=False) :
# sns.set_theme(style="white")
# # Draw the heatmap with the mask and correct aspect ratio
# if annot :
# f, ax = plt.subplots(figsize=(12, 6))
# f = sns.heatmap(data, cmap=cmap, vmax=1, center=0, annot=annot, fmt=".3f",
# linewidths=.5, cbar_kws={"shrink": .5})
# f.figure.savefig(path, bbox_inches='tight')
# else :
# # Set up the matplotlib figure
# f, ax = plt.subplots(figsize=(8, 5))
# f = sns.heatmap(data, cmap=cmap, vmax=1, center=0,
# square=True, linewidths=.5, cbar=False)
# f.figure.savefig(path, bbox_inches='tight')
def plot_heatmap(metrics, cmap, fpath, vmin, vmax, annot=True):
df = pd.DataFrame(data=metrics)
plt.figure(figsize=(12,9))
fig = sns.heatmap(df, cmap=cmap, vmin=vmin, vmax=vmax, annot=annot, fmt=".3f", linewidth=0.7)
# fig.set(xlabel='Train', ylabel='Test')
fig.figure.savefig(fpath, bbox_inches='tight')
plt.show()
# +
# Generate a custom diverging colormap
cmap = sns.diverging_palette(h_neg=240, h_pos=0,s=75, l=50, n=1, as_cmap=True)
path = "plot/rq2-one-bsem.png"
plot_heatmap(one_bsem, "binary", path, 0, 1)
# +
metrics = {}
for a1 in attacks :
m = {}
for a2 in attacks :
m[a2] = BSEM(a1, a2)
metrics[a1] = m
bsem = pd.DataFrame(data=metrics)
# +
def plot_half_heatmap(data, cmap, path) :
sns.set_theme(style="white")
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(data, dtype=bool))
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(12, 9))
# Draw the heatmap with the mask and correct aspect ratio
f = sns.heatmap(data, mask=mask, cmap=cmap, vmax=1, center=0,
square=True, linewidths=.5, cbar=False, annot=True)
f.figure.savefig(path, bbox_inches='tight')
# Generate a custom diverging colormap
cmap = sns.diverging_palette(h_neg=240, h_pos=0,s=75, l=50, n=1, as_cmap=True)
path = "plot/rq2-bsem.png"
plot_half_heatmap(bsem, cmap, path)
# -
import scipy.cluster.hierarchy as hcluster
linkage = hcluster.linkage(1 - bsem)
dendro = hcluster.dendrogram(linkage, labels=bsem.columns, orientation="right")
| AT_AWP/awp-evaluation-metric-fine-grain.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 47} id="dgb5to4YXhzU" outputId="043ebd0d-93cd-4c5b-f656-bd47f6c0c070"
# Optional: change Jupyter Notebook theme to GDD theme
from IPython.core.display import HTML
HTML(url='https://gdd.li/jupyter-theme')
# + [markdown] id="MSMvbJ90Xhzb"
# 
# # Time Series Analysis
#
# - [repo:](https://github.com/JTHaywardGDD/time-series)
# - [video](https://youtu.be/2b725bplNt8)
#
# ## Goal
#
# Pandas is the core data manipulation and analysis library for Python and it has some amazing utilities for dealing with series time data.
#
# The goal of this notebook is to familiarise ourselves with how Pandas can be used to work with Time Series data.
#
# We shall use a real Time Series dataset to demonstrate these functionalities and learn some fundamental techniques for Time Series analysis.
#
# ## Program
# 1. [Time Utilities in Pandas](#timeutil)
# 2. [Reading in Time Series Data](#read)
# 3. [Time-based Manipulations](#mani)
# 4. [Smoothing](#roll)
# 5. [Summary](#sum)
#
# + id="fQSZ0WxXXhzn"
import pandas as pd
# + [markdown] id="cNon2AQNXhzo"
# <a id='timeutil'></a>
#
# ## 1. Time Utilities in Pandas
#
# ### Timestamps
# 
#
# In pandas, specific times are represented as **Timestamps**. A Timestamp is the pandas equivalent of python’s Datetime and is interchangeable with it in most cases.
#
# Pandas can create datetime data from strings formated as `'yyyy-mm-ddThh:mm:ss:ms'` using `pd.Timestamp()`. The date units are years (‘Y’), months (‘M’), weeks (‘W’), and days (‘D’), while the time units are hours (‘h’) in 24 hour format, minutes (‘m’), seconds (‘s’), milliseconds (‘ms’). Note that time units are combined with date units using `'T'
# + colab={"base_uri": "https://localhost:8080/"} id="XIWqkU7nXhzp" outputId="9b20e502-4765-482e-a887-955446a8fbde"
date = pd.Timestamp('2022-03-26T09:00:00')
print(f'the date is {date.date()} and the time is {date.time()}')
date
# + [markdown] id="erYh6tTLXhzq"
# Pandas Timestamps support a wide range of [operations](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Timestamp.html).
#
# For example, we can access various attributes stored in the Timestamp.
# + colab={"base_uri": "https://localhost:8080/"} id="o7JWmbQKXhzq" outputId="b174c3a0-b6b3-4cfd-b7d7-ec4232fa0198"
pd.Timestamp('2022-02-20T18:34:56').daysinmonth
# + colab={"base_uri": "https://localhost:8080/"} id="XDWIwqzqXhzr" outputId="76db9959-7a8c-4411-e054-df7891ceafcc"
pd.Timestamp('2022-02-20T18:34:56').weekofyear
# + colab={"base_uri": "https://localhost:8080/"} id="8D92nWGKXhzu" outputId="6321115d-0200-408e-cb80-1ebd69d6c0a4"
pd.Timestamp('2022-02-20T18:34:56').quarter
# + [markdown] id="P85R3VIXXhzv"
# We can also perform time based operations and use time related methods.
# + id="s78h1WxNXhzw"
pd.Timestamp('2022-02-20T18:34:56') - pd.Timestamp('2020-02-18T18:24:32')
# + id="9au7IeM0Xhzw"
pd.Timestamp('2022-02-20T18:34:56').month_name()
# + id="URxQ-tK7Xhzx"
pd.Timestamp('2022-02-20T18:34:56').day_name()
# + [markdown] id="FuKihK0qXhzx"
# <a id='ex'></a>
# ### <mark>Exercise: Investigate the timestamp features and methods
#
# We've seen a few examples, but let's investigate further.
# - What day of the year is it today
# - Are we in a leap year?
# - How long is it until a Public Holiday (e.g. Christmas)?
#
# + colab={"base_uri": "https://localhost:8080/"} id="xSQBQ8_jXhzy" outputId="04fb67b1-ea82-4c7f-d68c-57e3138d2dfa"
current_date = pd.Timestamp('today')
current_date
# + [markdown] id="Ufqr8eyxXhzy"
# <a id='read'></a>
# 
#
#
# ## 2. Reading in Time Series Data
#
#
# Throughout this taster will use a dataset containing daily air quality index in Californian counties between 2007 and 2017 (based on a larger dataset from [Kaggle](https://www.kaggle.com/epa/carbon-monoxide)).
#
# Each datapoint indicates the average air quality index on a certain day: the higher the value - the more polluted.
# + colab={"base_uri": "https://localhost:8080/", "height": 206} id="AVxAP6M9Xhzz" outputId="d66b5c83-8e0d-4fbc-c934-4d40a6734d8c"
github = 'https://raw.githubusercontent.com/JTHaywardGDD/time-series/main/'
air_df = pd.read_csv(f'{github}data/air_quality.csv')
air_df.head()
# + [markdown] id="bS8N8BZnXhzz"
# Typically time data is contained in a separate column of standard strings; notice how our time data is not currently recognised as Timestamps.
# + id="_ujI3yhkXhzz"
air_df.info()
# + [markdown] id="7DGq7nAjXhzz"
# In order to make our time data machine readable, we can set `parse_dates` with the list of columns to be converted to Pandas Timestamps when reading the data with `pd.read_csv`. This automatically identifies the format of the dates, although specific formatting is also possible.
#
# For most time-series analysis functionality, we also benefit from setting the dates as the index in the Pandas DataFrame.
# + id="ulo6QZL7Xhz0"
air_df = pd.read_csv('data/air_quality.csv', index_col='date_local', parse_dates=True)
air_df.head()
# + id="UgCaeohoXhz0"
air_df.info()
# + [markdown] id="wANBDUEtXhz0"
# With the Timestamps as the index, we can directly filter on our DataFrame using the`loc` method and easily produce plots to visualise our data.
# + id="9tAOj2gxXhz0"
air_df.loc['2008-02-01':'2008-02-10']
# + id="55E3LlyeXhz1"
air_df.plot(figsize=(16,4));
# + [markdown] id="LM-FAbrlXhz1"
# <a id='mani'></a>
#
# ## 3. Time-based manipulations
#
# ### Easy Aggregations
#
# Another advantage of the datetime-index approach is that it provides us with some functionality for easy time-based aggregations. One such aggregation is `resample`.
#
# For example, we can easily calculate the _mean_ per _year_ by running:
# + id="meu83JQ6Xhz2"
air_df.resample('Y').mean()
# + [markdown] id="CZpy1-h-Xhz2"
# You can also run the same aggregation per month `M`, week `W`, day `D` or quarter `Q`. Custom aggregation periods are also possible, for example per 4 weeks `4W` or per 3 months `3M`. If the index is a timestamp that also includes times, then you can also aggregate per hour. See [here] for a more comprehensive list of offsets, that can be as specific as _'Business Month Begin'_.
#
# [here]: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects
#
# In the cell below we compute the mean AQI for consecutive 4 week periods.
# + id="sr_mn5uQXhz2"
air_df.resample('4W').mean().head()
# + [markdown] id="vjBktHiTXhz3"
# We can also use general `.agg()` methods here to apply multiple aggregators, including custom aggregations. For example, the spread per month:
# + id="DDgb_hK4Xhz3"
(
air_df
.resample('M')
.agg([
('mean','mean'),
('var','var'),
('spread', lambda month_df: month_df.max() - month_df.min())
])
#.droplevel(0, axis=1)
.head()
)
# + [markdown] id="PsCA0Eh6Xhz3"
# ### Time Based Features
#
# Any column in Pandas that is of dtype `datetime` has a module attached that can be used to perform vectorised datetime operations. This is very similar to the `.str` module attached to string columns. It is a good thing to explore since the alternative is non-vectorised and much slower.
#
# Below is an example of getting the quarter and adding the day name. Feel free to explore [other](https://pandas.pydata.org/pandas-docs/stable/reference/series.html#api-series-dt) properties and methods.
# + id="oF2mlznXXhz3"
(
air_df
.assign(quarter = lambda df: df.index.quarter,
weekday = lambda df: df.index.day_name())
.head()
)
# + [markdown] id="PTUG2Y6hXhz4"
# ### <mark>Exercise: Resampling
#
# Compute the average AQI per week and find the week that had the worst (or highest) AQI
#
#
#
# + id="dlm5ylueXhz4"
# # %load answers/resample.py
# + [markdown] id="X4P5IjnFXhz4"
# ### <mark>Exercise: Weekday with worst air quality</mark>
#
# Which day of the week has the worst (or highest) on average? Does AQI drop in the weekend?
#
# <font color='green'>Bonus </font>: Make a bar plot to better illustrate any weekly air quality patterns
# + id="gD4MibqEXhz5"
# # %load answers/time_based_features.py
# + [markdown] id="5OvPdJsVXhz5"
# ### <font color='green'>BONUS SECTION: </font> Shifting
#
# It can be often useful to shift some variables forward or backwards in time. This can for example help us create variables with lagged values or calculate differences in values between time steps. This can be done using Panda's `shift` method, which can shift values by a given number of periods (positive or negative).
#
# The example below uses this method to create a new variable for AQI during the previous day:
# + id="Jw7RFkZOXhz5"
(
air_df
.assign(aqi_yesterday = lambda df: df.aqi.shift(1))
.assign(change_in_aqi = lambda df: df.aqi - df.aqi_yesterday)
.dropna()
.head()
)
# + [markdown] id="LqQR9X2_Xhz5"
#
#
# <a id='roll'></a>
#
# ## 4. Smoothing
#
# Let us have a closer look at the air quality patterns during a single year.
#
# The simplest way to plot timestamp data dynamics in Pandas is using the `plot` method, which by default plots a linear plot over time:
# + id="4ruhYfqWXhz5"
air_df_2016 = air_df.loc['2016']
air_df_2016.plot(figsize=(18,6));
# + [markdown] id="Byd1-h3HXhz6"
# ### Rolling Average smoothing
#
# Various patterns can be seen on this daily line graph, but the overall trend can be hard to see due to noise plentiful short spikes.
#
# In order to see a *smoother* pattern over time, a __rolling average__ can be applied to a Time Series. It walks over the timestamps with a given window (7 days for example) and calculates averages for each.
#
# Pandas can perform this via the `rolling` method which can be called on both a DataFrame as well as a Series object. The window size for this method can be set using both a fixed number of data points as well as particular time intervals (days, weeks etc).
# + id="WEUqaSH6Xhz6"
(
air_df_2016
.assign(rolling_mean=lambda df: df['aqi'].rolling('20D').mean())
.plot(figsize=(16, 4))
);
# + [markdown] id="4JJH_wA0Xhz6"
# Note that the orange *rolling mean line* is lagging behind what actually happens. This is because, by default, each point of the rolling average represents information about this day and the preceding days - not just this particular moment! We can remove this effect using *centering*.
#
# To center the rolling mean, we can either manually shift it backwards or use the option `center=True` for the `rolling()` method. You can see below how both achieve the same result - the red and green (overlapping) lines do not lag anymore.
#
# __Importantly__, centering requires information from the future for each point. This makes centering bad practice if we want to further make predictions about the future — this information is then already contained in the present data points, which is referred to as *information leakage* in machine learning.
# + id="codk1zs_Xhz6"
(
air_df_2016
.assign(
rolling_mean=lambda df: df['aqi'].rolling('20D').mean(),
# With `center=True` window size cannot be a time frame!
rolling_mean_center=lambda df: df['aqi'].rolling(20, center=True).mean(),
manual_center=lambda df: df['rolling_mean'].shift(-9)
)
.plot(figsize=(16,4), title='rolling_mean_center and manual_center overlap')
);
# + [markdown] id="dYmq3F5_Xhz6"
# Rolling average smoothing is a simple way to isolate signal from noise in Time Series data and get an idea about general Time Series behavior.
#
# However, there are some notable drawbacks:
#
# - Highly dependent on window size:
# - using a small window sizes can lead to more noise than signal;
# - using a large window size can remove important signal information).
# - It always lags by the window size (unless centered).
# - It is not really informative about the future.
# - It can be significantly skewed by extreme datapoints in the past.
# + [markdown] id="q2FPKCi6Xhz7"
# ### <mark>Exercise: Weekday with worst air quality</mark>
#
# In a previous exercise we saw that the week commencing Monday 24th December 2007 had the worst average air quality.
#
# + id="l4saPEVzXhz7"
(
air_df
.resample('W-Mon')
.mean()
.nlargest(1, 'aqi')
)
# + [markdown] id="dUKjt0QcXhz7"
# However, this is not necessarilly the 7-day period that had the worst average air quality.
#
# For example, that may span a Friday to Thursday, as opposed to Monday to Sunday.
#
# Find the 7-day period that had the worst average air quality.
#
# + id="_P-C8Ph0Xhz8"
# # %load answers/rolling.py
# + [markdown] id="-teRWjSwXhz8"
# ### <font color='green'>BONUS SECTION: </font> Exponential Smoothing
#
# An alternative to calculating the rolling statistics is to smooth the timeseries exponentially with the following formula:
#
# $$\hat{y_t} = \alpha y_t + (1-\alpha) \hat{y}_{t-1}$$
#
# where $\hat{y_t}$ is the output of the exponential smoothing at time $t$, $y_t$ is the data point at time $t$, and $0<\alpha<1$ is the *the smoothing factor*.
#
#
# The idea is to recursively smooth the series by averaging the current average with the current value. If $\alpha$ is high then the smoothing will be low but the average can respond quicker to changes, and if it is low — the result will be much more smooth and flat.
# + id="NlbQ-PPYXhz8"
(
air_df_2016
.assign(
smoothed_01=lambda df: df['aqi'].ewm(alpha=0.1).mean(),
smoothed_001=lambda df: df['aqi'].ewm(alpha=0.01).mean()
)
.plot(figsize=(16, 4))
);
# + [markdown] id="AHab7UTXXhz9"
# Exponential Smoothing exhibits reduced lagging and more weight assigned to the current timestamps compared to Simple Rolling Averages, which also makes it more informative about the future. You can read more about EWM (exponentially weighted Moving average) in the pandas [docs].
#
# [docs]: https://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#exponentially-weighted-windows
#
# Other alternatives are [weighted smoothing](http://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#rolling-windows) and [expanding windows](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.expanding.html).
# + [markdown] id="drjoA040Xhz9"
# <a id='sum'></a>
# ## 5. Summary
#
# We have covered:
# - Timestamps and formatting in Pandas
# - How to properly read in Time Series data in Pandas, and why it is important to set the date as an index
# - Time based manipulations, such as aggregations with `resample`, time-based features based on dtype `datetime` and shifting
# - Smoothing with rolling averages, its disadvantages and some alternatives.
#
# We should now be able to answer analytics questions like:
# - Which year had the worst air quality?
# - Which five day period had the largest decrease in air quality (between the first and last day)?
# - Does air quality improve over the weekend?
#
| 01_time_series_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib as plt
import json
# TODO: voited more then were registered
# TODO: forgery for each city
# TODO: how many golos-data cover real, it should be % of all data
# TODO: Filters: all, all big cities, each city, not bit cities
# TODO: Benford law
# +
# this is a test
sns.set_theme(style="darkgrid")
# Load an example dataset with long-form data
fmri = sns.load_dataset("fmri")
# Plot the responses for different events and regions
sns.lineplot(x="timepoint", y="signal",
hue="region", style="event",
data=fmri)
# -
with open("data.json") as datafile:
data = json.load(datafile)
# All Stations
print(len(data["stations"].keys()))
# Stations without data
no_data_stations = {}
for code, station in data["stations"].items():
if "choices" not in station:
no_data_stations[code] = station
print(len(no_data_stations.keys()))
# Declare vars
candidates = [
'tihanovkaja',
'lukashenko',
'cherechen',
'dmitriyev',
'kanopatskaja',
'against',
'corrupted',
'ignore',
]
counts = {}
for candidate in candidates:
counts[candidate] = []
counts["index"] = []
# +
for candidate in candidates:
counts[candidate].append(data["total"]["choices"][candidate]["officialVotes"])
counts["index"].append("Official")
for candidate in candidates:
counts[candidate].append(data["total"]["choices"][candidate]["registered"])
counts["index"].append("Registered")
for candidate in candidates:
counts[candidate].append(data["total"]["choices"][candidate]["photoVoices"])
counts["index"].append("With Photo")
# -
# how to get the first digit of a number
print(int(str(423)[0]))
df = pd.DataFrame(counts)
df.set_index("index")
| work/check.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
Credit=pd.read_csv("Credit.csv")
# +
#SOME QUICK STATS
# -
Credit.head(10)
Credit.describe().transpose()
Credit['LIMIT_BAL'].describe()
Credit['SEX'].value_counts()
Credit['EDUCATION'].value_counts()
Credit['MARRIAGE'].value_counts()
Credit['AGE'].describe()
pay0=Credit['PAY_0'].value_counts()
pay2=Credit['PAY_2'].value_counts()
pay3=Credit['PAY_3'].value_counts()
pay4=Credit['PAY_4'].value_counts()
pay5=Credit['PAY_5'].value_counts()
pay6=Credit['PAY_6'].value_counts()
print(pay0,pay1,pay3,pay4,pay5,pay6)
Credit.apply(lambda x: sum(x.isnull()),axis=0)
# +
#VISUALIZATION ANALYSIS
# -
import matplotlib
# %matplotlib inline
import seaborn as sb
#Limit Bal distribution
Credit['LIMIT_BAL'].hist(bins=60, color='orange')
#prediction labels
Credit.groupby('default.payment.next.month').size().plot(kind='bar', color='yellow')
#default payment vs Limi_bal wrt gender
sb.barplot(x="default.payment.next.month",y="LIMIT_BAL", hue="SEX", data=Credit, palette="Blues")
#Education levels count vs default payments
sb.countplot(x="EDUCATION", hue="default.payment.next.month", data=Credit, palette="Set2")
#Marriage levels count
sb.countplot(x="MARRIAGE", data=Credit, palette="Set2")
#Marriage levels vs defaul payments
sb.countplot(x="MARRIAGE", hue="default.payment.next.month", data=Credit, palette="bright")
#aLL Pay distributions
sb.countplot(x="PAY_0", data=Credit,palette="Purples_r")
sb.countplot(x="PAY_2", data=Credit,palette=("Purples_r"))
sb.countplot(x="PAY_3", data=Credit,palette="Purples_r")
sb.countplot(x="PAY_4", data=Credit,palette="Purples_r")
sb.countplot(x="PAY_5", data=Credit,palette="Purples_r")
sb.countplot(x="PAY_6", data=Credit,palette="Purples_r")
# +
#Some Extra VISUALIZATIONS(NOT PART OF THE REPORT)
# -
#lets calucate correlation matrix
corr=Credit.corr()
corr.transpose()
mask=np.zeros_like(corr,dtype=np.bool)
mask[np.triu_indices_from(mask)]=True
with sb.axes_style("white"):
a=sb.heatmap(corr,mask=mask,vmax=0.7,square=True)
f, a1=matplotlib.pyplot.subplots(figsize=(15,9))
customap=sb.diverging_palette(220,10,as_cmap=True)
sb.heatmap(corr,mask=mask,cmap=customap,vmax=0.8,center=0,square=True,linewidth=0.4,cbar_kws={"shrink": 0.9})
| Xploratory_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Sentiment analysis avec Textblob-FR
# Documentation: https://textblob.readthedocs.io/en/dev/
# ## Imports
import sys
from textblob import Blobber
from textblob_fr import PatternTagger, PatternAnalyzer
# ## Fonction
# +
tb = Blobber(pos_tagger=PatternTagger(), analyzer=PatternAnalyzer())
def get_sentiment(input_text):
blob = tb(input_text)
polarity, subjectivity = blob.sentiment
polarity_perc = f"{100*abs(polarity):.0f}"
subjectivity_perc = f"{100*subjectivity:.0f}"
if polarity > 0:
polarity_str = f"{polarity_perc}% positive"
elif polarity < 0:
polarity_str = f"{polarity_perc}% negative"
else:
polarity_str = "neutral"
if subjectivity > 0:
subjectivity_str = f"{subjectivity}% subjective"
else:
subjectivity_str = "perfectly objective"
print(f"This text is {polarity_str} and {subjectivity_str}.")
# -
# ## Analyser le sentiment d'une phrase
# + tags=[]
get_sentiment("Sons disons que le législateur de 1814 n'a fait que suivre l'esprit du déeret de 1810.")
# + tags=[]
get_sentiment("Que si cependant on persistait à invoquer l'article 1 de l'arrêté de 1814.")
# -
get_sentiment("Toutes les dépenses relatives aux armées de l'Etat sont supportées par le trésor public.")
get_sentiment("Tout ce qu'il vous serait possible serait de faire les fondations qui se tasseraient l'hiver prochain.")
get_sentiment("Il est vrai qu'aucun membre de votre administration n'en fait en ce moment partie, comme le veut le susdit article, etc")
get_sentiment("Lettre de l'administration du mont-de-piété, contenant des renseignemens sur les constructions projetées à cet établissement")
get_sentiment("Le Conseil dans cette séance a amendé quelques articles, et voté l'ensemble à l'unanimité.")
get_sentiment("Voici le texte du cahier des charges avec la discussion sous chaque article.")
get_sentiment("Ceux-ci ne pourront effectuer aucun changement aux bâtimens concédés, aux décorations, au mobilier, à la peinture intérieure, ni aux ornemens en général, sans l'autorisation préalable du collège.")
get_sentiment("Les changemens, ainsi autorisés, ne donneront lieu à aucune répétition ou indemnité.")
| module3/s4_sentiment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
#
# ### [Link to Intro to NLP Slides](https://docs.google.com/presentation/d/1N1cj7IeSfkGjHcYQHEmbj13WAlga7K7O2jTRMr9BmKM/)
# # Run the cells below to get setup
# +
import sys, os
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
# !rm -r AI4All2020-Michigan-NLP
# !git clone https://github.com/alahnala/AI4All2020-Michigan-NLP.git
# !cp -r AI4All2020-Michigan-NLP/utils/ .
# !cp -r AI4All2020-Michigan-NLP/Data/ .
# !cp -r AI4All2020-Michigan-NLP/slides/ .
# !cp -r AI4All2020-Michigan-NLP/Experiment-Report-Templates/ .
# !echo "=== Files Copied ==="
# -
import pandas as pd
import nltk
nltk.download('punkt')
from nltk.stem.snowball import PorterStemmer
from utils.nlp_basics import *
from utils.syllable import *
print('Done')
# # Outline
#
# 1. Tokenization
# 2. Lemmatization
# 3. Stemming
# 4. Part-of-speech tagging
# 5. Stopwords
# 
# # Let's play with the string sequence `cake_wikipedia`
#
# ## 1. Simplest tokenizer: split on spaces
#
# Run the cell below. Here we split the sequence by spaces. How would you describe these tokens?
# +
# The first few sentences from the wikipedia page on Cake https://en.wikipedia.org/wiki/Cake
cake_wikipedia = 'Cake is a form of sweet food made from flour, sugar, and other ingredients, that is usually baked. In their oldest forms, cakes were modifications of bread, but cakes now cover a wide range of preparations that can be simple or elaborate, and that share features with other desserts such as pastries, meringues, custards, and pies.'
# calling .split() on a string will split the string on spaces
tokens = cake_wikipedia.split()
show_tokens(tokens)
# -
# ## 2. Split on spaces and separate punctuation from words.
#
# Run the cell below. How would you describe these tokens?
# +
# nltk is a library that is open for anyone to use.
# It stands for "natural language tool kit" and has many useful functions
from nltk.tokenize import word_tokenize
# We use nltk's function "word_tokenize"
tokens = word_tokenize(cake_wikipedia)
show_tokens(tokens)
# -
# ## 3. Split on syllables.
#
# Run the cell below. How would you describe these tokens?
# +
syllable_tokenize = SyllableTokenizer()
tokens = syllable_tokenize.tokenize(cake_wikipedia)
# Show table
show_tokens(tokens)
# -
# ## 4. Challenge: What are some tokenization considerations to make if you're working with tweets?
#
# Try making a tokenizer that keeps hashtags with the # and user handles with the @.
# +
tweet = '@RiikkaTheCat is a #CoolCat :D:)'
def tokenizer(string):
## Your code (use as many lines as you like)
tokens =
return tokens
tokens = tokenizer(tweet)
print(tokens)
# -
# ## Would tokenization in English look the same as other languages?
french = "C'est en effet tout à fait dans la ligne des positions que notre Parlement a toujours adoptées."
tokens = french.split()
show_tokens(tokens)
tokens = word_tokenize(french, language='french')
show_tokens(tokens)
# 
# # Lemmatization
# +
tokens = word_tokenize(cake_wikipedia)
import spacy
# Uses nlp pipeline from spacy to obtain linguistic features
nlp = spacy.load("en_core_web_sm", disable=['parser', 'ner'])
doc = nlp("".join(cake_wikipedia))
allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']
# Get lemmas
lemmas = [token.lemma_ for token in doc]
# Here we are making a list of original tokens and a list of stemmed tokens for only the tokens that changed after stemming
lemmas_diff = [lemma for token, lemma in zip(tokens, lemmas) if token.lower() != lemma]
og = [token for token, lemma in zip(tokens, lemmas) if token.lower() != lemma]
# Show table
show_lemmas(og, lemmas_diff)
# -
# 
# # Stemming
# +
# Define a module that will stem the text for us
stemmer = PorterStemmer()
# Use the stemmer on our text
stemmed = [stemmer.stem(token) for token in tokens]
# Here we are making a list of original tokens and a list of stemmed tokens for only the tokens that changed after stemming
og = [token for token, stem in zip(tokens, stemmed) if token.lower() != stem]
stemmed_diff = [stem for token, stem in zip(tokens, stemmed) if token.lower() != stem]
# Put stemmed data and text in a dataframe so we can output a table
data = {'Stems': stemmed_diff, 'Text':og}
df = pd.DataFrame(data, columns = ['Text', 'Stems'])
# Show table
df.T
# -
# 
# # Part-of-speech tagging
# +
# https://en.wikipedia.org/wiki/Cake
cake_wikipedia = 'Cake is a form of sweet food made from flour, sugar, and other ingredients, that is usually baked. In their oldest forms, cakes were modifications of bread, but cakes now cover a wide range of preparations that can be simple or elaborate, and that share features with other desserts such as pastries, meringues, custards, and pies.'
# Uses nlp pipeline from spacy to obtain linguistic features
doc = nlp("".join(cake_wikipedia))
data = {'Text':[token.text for token in doc], 'Lemma':[token.lemma_ for token in doc], 'Part-of-speech':[token.pos_ for token in doc], 'Dependency':[token.dep_ for token in doc], 'Shape':[token.shape_ for token in doc], 'Is Alpha':[token.is_alpha for token in doc], 'Stopword':[token.is_stop for token in doc]}
df = pd.DataFrame (data, columns = ['Text', 'Part-of-speech'])
df.T # show data (T means transpose, excluding the T is fine too)
# -
# 
# # Stopwords
# Run the cell below and observe which words are stopwords if they have **True** in the stopword row
df = pd.DataFrame (data, columns = ['Text', 'Stopword'])
df.T
# Run the cell below to observe just the stopwords in our text
stopwords = df.loc[df['Stopword'] == True]
stopwords.T
# # References
#
# 1. https://www.nltk.org/api/nltk.tokenize.html
# 2. https://www.nltk.org/_modules/nltk/tokenize/sonority_sequencing.html#SyllableTokenizer
# 3. https://spacy.io/api/lemmatizer
# 4. https://spacy.io/usage/linguistic-features
# 5. https://universaldependencies.org/docs/u/pos/
| 1-Intro-to-NLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Yln-wzDzE2XB"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
from sklearn.ensemble import RandomForestClassifier
from sklearn import datasets
from sklearn import metrics
from sklearn.model_selection import train_test_split
from ipywidgets import IntProgress
from IPython.display import display
# + colab={"base_uri": "https://localhost:8080/"} id="kUIfTBwC7a81" outputId="6b4e61cc-9bde-4a75-d94c-0644b06e2f3c"
# !gdown --id '1osikvA4sCHiIilxVK-EAI0CppPsEtqSl' --output BloodData.csv
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="d6PvPMcktCEk" outputId="c0a0bb1b-0748-4879-f5af-492ee9f6c3f5"
train = pd.read_csv("BloodData.csv")
train.head()
# + colab={"base_uri": "https://localhost:8080/"} id="nBHSTsyTedOr" outputId="127c220e-490d-4b90-9c4f-7e79183920f0"
train.info()
# + colab={"base_uri": "https://localhost:8080/"} id="qhO--XlvfcY_" outputId="bd368844-fdab-487a-cee6-78ffd30c7a69"
train["AGE"].value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="O3yrvQdufrhq" outputId="f06634b5-fbfe-47c0-8a07-96fc7dc35013"
#BMI補值 補平均數
print(train["BMI"].value_counts())
total_BMI = 0
count = 0
for i in range(len(train["BMI"])):
if train["BMI"][i] != 'missing':
total_BMI += float(train["BMI"][i])
count += 1
average_BMI = round(total_BMI/count,3)
print(total_BMI)
print(average_BMI)
for i in range(len(train["BMI"])):
if train["BMI"][i] == 'missing':
train["BMI"][i] = str(average_BMI)
print(train["BMI"].value_counts())
# + colab={"base_uri": "https://localhost:8080/"} id="HyvldR2eY_we" outputId="66016e88-03af-4911-ec9e-0d95411de032"
#HDRS補值
#健康的補0、Dep_M0、Dep_M3補各自的平均值
print(train["HDRS"].value_counts())
total_HDRS_M0 = 0
total_HDRS_M3 = 0
count_HDRS_M0 = 0
count_HDRS_M3 = 0
for i in range(len(train["HDRS"])):
if train["HDRS"][i] != 'missing':
if train['Clinical'][i]=='dep_M0':
total_HDRS_M0 += float(train["HDRS"][i])
count_HDRS_M0 += 1
if train["HDRS"][i] != 'missing':
total_HDRS_M3 += float(train["HDRS"][i])
count_HDRS_M3 += 1
average_HDRS_M0 = math.floor(total_HDRS_M0/count_HDRS_M0)
average_HDRS_M3 = math.floor(total_HDRS_M3/count_HDRS_M3)
print('The average HDRS before treated is ',average_HDRS_M0)
print('The average HDRS after treated is ',average_HDRS_M3)
for i in range(len(train["HDRS"])):
if train["HDRS"][i] == 'missing':
if train['Clinical'][i]=='dep_M0':
train["HDRS"][i] = str(average_HDRS_M0)
if train['Clinical'][i]=='dep_M3':
train["HDRS"][i] = str(average_HDRS_M3)
if train['Clinical'][i]=='control':
train["HDRS"][i] = '0'
print(train["HDRS"].value_counts())
# + colab={"base_uri": "https://localhost:8080/"} id="0OHuPkLKZ53v" outputId="4882e5c7-8823-46a2-edfa-33ead130cd67"
#smoker yes/no 統一改成小寫 3個missing sample補眾數'no'
print(train["smoker"].value_counts())
for i in range(len(train["smoker"])):
if train["smoker"][i] == 'Yes':
train["smoker"][i] = 'yes'
if train["smoker"][i] == 'No':
train["smoker"][i] = 'no'
if train["smoker"][i] == 'missing':
train["smoker"][i] = 'no'
print(train["smoker"].value_counts())
# + id="yHxqjIJVeSDA"
#剩下的空值補0
train["Proteobacteria"].fillna(0,inplace=True)
train["Identified reads:Eukaryota(%)"].fillna(0,inplace=True)
train['Kbp-virus'].fillna(0,inplace=True)
# + colab={"base_uri": "https://localhost:8080/"} id="LHQbUCTMFwAx" outputId="534fa1a6-fa23-4f35-c464-4cc57a42c3cc"
for i in range(len(train["Identified reads:Eukaryota(%)"])):
if train["Identified reads:Eukaryota(%)"][i] == '<0.01':
train["Identified reads:Eukaryota(%)"][i] = '0.005'
if train["Identified reads:virus(%)"][i] == '<0.01':
train["Identified reads:virus(%)"][i] = '0.005'
# + colab={"base_uri": "https://localhost:8080/"} id="ccjmJZnb5K8R" outputId="ab1a00b1-b618-4150-f3ad-eaeeb3331586"
train.info()
# + id="ef5oh2zU6cHw"
CAT_COL = ["Run", "Bases", "BioProject", "BioSample", "Library Name",
"Sample Name", "Patient", "sex", "smoker", "Source_material_ID",
"Clinical", "Terrabacteria group", "Proteobacteria"]
NUM_COL = ["AGE", "BMI", "Bytes", "HDRS", "Unidentified reads(%)", "Identified reads: cellular organisms(%)",
"Identified reads: cellular organisms-bacteria(%)", "Kbp-bacteria",
"Identified reads:virus(%)", "Kbp-virus", "Identified reads:Eukaryota(%)"]
cat_col = []
num_col = []
for col in train:
if col in CAT_COL:
cat_col.append(col)
elif col in NUM_COL:
num_col.append(col)
for col in cat_col:
train[col] = train[col].astype(str)
df_cat = train.loc[:,cat_col] # take all the categorical columns
df_cat = pd.get_dummies(df_cat) # one hot encoding
df_num = train.loc[:,num_col] # take all the numerical columns
df_final = pd.concat([df_cat, df_num], axis=1) # concat categorical/numerical data
# + colab={"base_uri": "https://localhost:8080/", "height": 319} id="4rsZbO2Dbb31" outputId="46c1c8ad-80ab-4fb1-ca43-820ebb488183"
df_final.head()
# + id="Gzsypnlzbjku"
not_select = ["Run", "Bases", "BioProject", "BioSample", "Library Name", "Sample Name", "Bytes", "Patient", "Source_material_ID"]
train_select = train.drop(not_select,axis=1)
cat_col = []
num_col = []
for col in train_select:
if col in CAT_COL:
cat_col.append(col)
elif col in NUM_COL:
num_col.append(col)
for col in cat_col:
if train_select[col].dtype != "O":
# print(col)
train_select[col] = train_select[col].astype(str)
df_cat_select = train_select.loc[:,cat_col] # take all the categorical columns
df_cat_select = pd.get_dummies(df_cat_select) # one hot encoding
df_num_select = train_select.loc[:,num_col] # take all the numerical columns
df_final_select = pd.concat([df_cat_select, df_num_select], axis=1) # concat categorical/numerical data
# + colab={"base_uri": "https://localhost:8080/", "height": 290} id="3izqM0EMbr2-" outputId="64d3c203-2102-4687-cc31-66708f24b0e0"
df_final_select.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 456} id="UmX_cKPdb_z9" outputId="5203a693-2042-4e27-cecd-4210883d95a1"
df_final_select_num = df_final_select.values
for i in range(len(df_final_select_num)):
for j in range(len(df_final_select_num[0])):
df_final_select_num[i][j] = float(df_final_select_num[i][j])
correlation_matrix = np.corrcoef(df_final_select_num)
print(correlation_matrix)
# + colab={"base_uri": "https://localhost:8080/"} id="9EzEF_HYeO6-" outputId="0a3960f8-ab59-44c8-ddad-3252862dae86"
print(len(df_final_select_num[1]))
# + colab={"base_uri": "https://localhost:8080/"} id="ClBkPVtMq9hp" outputId="8877af7f-fea8-41a0-c2d5-fdabcf74ce2a"
not_select = ["Run", "Bases", "BioProject", "BioSample", "Library Name", "Sample Name", "Bytes", "Clinical", "Patient", "Source_material_ID"]
train_select = train.drop(not_select,axis=1)
train_select.info()
# + id="Dv4LE3iTv2oJ"
cat_col = []
num_col = []
for col in train_select:
if col in CAT_COL:
cat_col.append(col)
elif col in NUM_COL:
num_col.append(col)
for col in cat_col:
if train_select[col].dtype != "O":
# print(col)
train_select[col] = train_select[col].astype(str)
df_cat_select = train_select.loc[:,cat_col] # take all the categorical columns
df_cat_select = pd.get_dummies(df_cat_select) # one hot encoding
df_num_select = train_select.loc[:,num_col] # take all the numerical columns
df_final_select = pd.concat([df_cat_select, df_num_select], axis=1) # concat categorical/numerical data
# + colab={"base_uri": "https://localhost:8080/", "height": 290} id="6uX8WzkkbOKO" outputId="f1331189-ce91-443a-eac9-788856209cf5"
df_final_select.head()
# + [markdown] id="WlXuzo6BHkIq"
# #Random Forest Classifier
# + colab={"base_uri": "https://localhost:8080/"} id="oT7bA9UMwVDo" outputId="ec541296-40ac-4922-a1a0-a9059c200a32"
#Use RandomForestClassifier to predict Clinical
x = df_final_select
y = train["Clinical"]
# y = np.array(y,dtype=int)
X_train,X_test,y_train,y_test = train_test_split(x,y,test_size=0.1,random_state=0)
#RandomForest
rfc = RandomForestClassifier()
#rfc=RandomForestClassifier(n_estimators=100,n_jobs = -1,random_state =50, min_samples_leaf = 10)
rfc.fit(X_train,y_train)
y_predict = rfc.predict(X_train)
score_rfc = rfc.score(X_test,y_test)
print("Random Forest Accuracy = ",score_rfc*100," %")
# + colab={"base_uri": "https://localhost:8080/"} id="PwMtRn5DGUTc" outputId="d02d45c4-0f5a-47c5-920a-9cc7e649f1bb"
from sklearn.model_selection import KFold
x = df_final_select
y = train["Clinical"]
kf = KFold(n_splits=5)
best_accuracy = 0
for train_index , test_index in kf.split(x):
X_train, X_test, y_train, y_test = x.iloc[train_index], x.iloc[test_index], y.iloc[train_index], y.iloc[test_index]
rfc = RandomForestClassifier()
rfc.fit(X_train,y_train)
y_predict = rfc.predict(X_train)
accuracy = rfc.score(X_test,y_test)
print("Random Forest Accuracy = ",accuracy*100,"%")
if accuracy > best_accuracy:
best_accuracy = accuracy
best_rfc = rfc
print("Best Accuracy = ",best_accuracy*100,"%")
# + colab={"base_uri": "https://localhost:8080/"} id="lVsvVd54GmW6" outputId="c151d5b4-7262-4769-da1e-e5d3ad0e5a31"
#random forest final accuracy
x_train_new = x
y_train_new = train["Clinical"]
y_train_new = y_train_new.reset_index(drop=True)
y_pred = best_rfc.predict(x_train_new)
count = 0
for i in range(y_pred.shape[0]):
if y_pred[i] != y_train_new[i]:
count += 1
print(y_pred[i],y_train_new[i])
rfc_accuracy = 1-count/y_pred.shape[0]
print("Random Forest Accuracy = ",rfc_accuracy*100,"%")
# + [markdown] id="9c86bBBnHotC"
# #SVM
# + colab={"base_uri": "https://localhost:8080/"} id="-Qzl52nywn9w" outputId="c88d30ac-1903-41a3-ee40-f914371f81ad"
from sklearn import svm
#Use SVM to predict Clinical
x = df_final_select
y = train["Clinical"]
# y = np.array(y,dtype=int)
X_train,X_test,y_train,y_test = train_test_split(x,y,test_size=0.1,random_state=0)
clf = svm.SVC()
clf.fit(X_train,y_train)
y_predict = clf.predict(X_train)
score_clf = clf.score(X_test,y_test)
print("SVM Accuracy = ",score_clf*100," %")
# + colab={"base_uri": "https://localhost:8080/"} id="d2IT2vHgHa8j" outputId="e84fee26-a2fa-463b-f1ef-e7d69ac02ddf"
from sklearn.model_selection import KFold
x = df_final_select
y = train["Clinical"]
kf = KFold(n_splits=5)
best_accuracy = 0
for train_index , test_index in kf.split(x):
X_train, X_test, y_train, y_test = x.iloc[train_index], x.iloc[test_index], y.iloc[train_index], y.iloc[test_index]
clf = svm.SVC()
clf.fit(X_train,y_train)
y_predict = clf.predict(X_train)
accuracy = clf.score(X_test,y_test)
print("SVM Accuracy = ",accuracy*100,"%")
if accuracy > best_accuracy:
best_accuracy = accuracy
best_clf = clf
print("Best Accuracy = ",best_accuracy*100,"%")
# + colab={"base_uri": "https://localhost:8080/"} id="FAjipyAMHbwi" outputId="b8a42cf8-b0c4-46bf-85a1-092f90184ca1"
#SVM final accuracy
x_train_new = x
y_train_new = train["Clinical"]
y_train_new = y_train_new.reset_index(drop=True)
y_pred = best_clf.predict(x_train_new)
count = 0
for i in range(y_pred.shape[0]):
print(y_pred[i],y_train_new[i])
if y_pred[i] != y_train_new[i]:
count += 1
clf_accuracy = 1-count/y_pred.shape[0]
print("SVM Accuracy = ",clf_accuracy*100,"%")
# + [markdown] id="lARu7KV5HrH0"
# #Neural network MLPClassifier
# + colab={"base_uri": "https://localhost:8080/"} id="8vWodBUQwvoX" outputId="2376afec-26fd-44a3-e766-33b1b93abf7b"
from sklearn.neural_network import MLPClassifier
#Use Neural Network MLPClassifier to predict Clinical
x = df_final_select
y = train["Clinical"]
# y = np.array(y,dtype=int)
X_train,X_test,y_train,y_test = train_test_split(x,y,test_size=0.1,random_state=0)
nnclf = MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(50, 30), random_state=1, max_iter=2000)
nnclf.fit(X_train,y_train)
y_predict = nnclf.predict(X_train)
score_nnclf = nnclf.score(X_test,y_test)
print("Neural Network Accuracy = ",score_nnclf*100," %")
# + colab={"base_uri": "https://localhost:8080/"} id="CoT7SNAQIww6" outputId="1ac184e6-d1e6-4556-f0fd-39f8662cd349"
from sklearn.model_selection import KFold
x = df_final_select
y = train["Clinical"]
kf = KFold(n_splits=5)
best_accuracy = 0
for train_index , test_index in kf.split(x):
X_train, X_test, y_train, y_test = x.iloc[train_index], x.iloc[test_index], y.iloc[train_index], y.iloc[test_index]
nnclf = MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(50, 30), random_state=1, max_iter=2000)
nnclf.fit(X_train,y_train)
y_predict = nnclf.predict(X_train)
accuracy = nnclf.score(X_test,y_test)
print("NN Accuracy = ",accuracy*100,"%")
if accuracy > best_accuracy:
best_accuracy = accuracy
best_nnclf = nnclf
print("NN Accuracy = ",best_accuracy*100,"%")
# + colab={"base_uri": "https://localhost:8080/"} id="ZbVGlnT_I7D5" outputId="7070521e-c1ae-4708-c7bb-c82e38b7fcd6"
#NN final accuracy
x_train_new = x
y_train_new = train["Clinical"]
y_train_new = y_train_new.reset_index(drop=True)
y_pred = best_nnclf.predict(x_train_new)
count = 0
for i in range(y_pred.shape[0]):
print(y_pred[i],y_train_new[i])
if y_pred[i] != y_train_new[i]:
count += 1
nnclf_accuracy = 1-count/y_pred.shape[0]
print("NN Accuracy = ",nnclf_accuracy*100,"%")
# + [markdown] id="dG7iRLaQHxpa"
# #Logistic Regression
# + colab={"base_uri": "https://localhost:8080/"} id="2a8ZSpJW06uP" outputId="cdf22691-b854-4515-8815-6b6728f76d32"
from sklearn.linear_model import LogisticRegression
#Use Logistic Regression to predict Clinical
x = df_final_select
y = train["Clinical"]
# y = np.array(y,dtype=int)
X_train,X_test,y_train,y_test = train_test_split(x,y,test_size=0.1,random_state=0)
logclf = LogisticRegression(random_state=0).fit(X_train,y_train)
logclf.predict(X_train)
logclf.predict_proba(X_train)
score_logclf = logclf.score(X_test,y_test)
print("Logistic Regression Accuracy = ",score_logclf*100," %")
# + colab={"base_uri": "https://localhost:8080/"} id="vSWmvYzoJQwz" outputId="61bfde90-b38c-4767-bf5b-d9fbaa5e51bb"
from sklearn.model_selection import KFold
x = df_final_select
y = train["Clinical"]
kf = KFold(n_splits=5)
best_accuracy = 0
for train_index , test_index in kf.split(x):
X_train, X_test, y_train, y_test = x.iloc[train_index], x.iloc[test_index], y.iloc[train_index], y.iloc[test_index]
logclf = LogisticRegression(random_state=0).fit(X_train,y_train)
y_predict = logclf.predict(X_train)
accuracy = logclf.score(X_test,y_test)
print("Logistic Regression Accuracy = ",accuracy*100,"%")
if accuracy > best_accuracy:
best_accuracy = accuracy
best_logclf = logclf
print("Logistic Regression Accuracy = ",best_accuracy*100,"%")
# + colab={"base_uri": "https://localhost:8080/"} id="aMrM4R1NJTIj" outputId="fb3ba77a-2a88-4fea-a167-cb44c3708f20"
#Logistic final accuracy
x_train_new = x
y_train_new = train["Clinical"]
y_train_new = y_train_new.reset_index(drop=True)
y_pred = best_logclf.predict(x_train_new)
count = 0
for i in range(y_pred.shape[0]):
print(y_pred[i],y_train_new[i])
if y_pred[i] != y_train_new[i]:
count += 1
logclf_accuracy = 1-count/y_pred.shape[0]
print("Logistic Regression Accuracy = ",logclf_accuracy*100,"%")
# + colab={"base_uri": "https://localhost:8080/"} id="Kbh71P2xFIhl" outputId="f7086911-9ad6-464e-9f35-59555298f195"
print(X_train.shape,y_train.shape)
print(X_test.shape,y_test.shape)
| BloodData_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/open-mmlab/mmocr/blob/main/demo/MMOCR_Tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="jU9T31gbQmvs"
# # MMOCR Tutorial
#
# Welcome to MMOCR! This is the official colab tutorial for using MMOCR. In this tutorial, you will learn how to
#
# - Perform testing with a pretrained text recognizer.
# - Perform testing with a pretrained Key Information Extraction (KIE) model.
# - Perform testing with a pretrained text detector
# - Train a text recognizer with a toy dataset.
#
# Let's start!
# + [markdown] id="Sfvz1sywQ9_4"
# ## Install MMOCR
# -
# When installing dependencies for mmocr, please ensure that all the dependency versions are compatible with each other. For instance, if CUDA 10.1 is installed, then the Pytorch version must be compatible with cu10.1. Please see [getting_started.md](docs/getting_started.md) for more details.
# %cd ..
# ### Check NVCC and GCC compiler version
# + colab={"base_uri": "https://localhost:8080/"} id="2DBpcKj2RDfu" outputId="2e99d7ce-3858-4c05-ab29-847f58e1a92e"
# !nvcc -V
# !gcc --version
# -
# ### Install Dependencies
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="DwDY3puNNmhe" outputId="06e48bab-3a07-4449-e5ef-9884fe61fe63" tags=["outputPrepend"]
# Install torch dependencies: (use cu101 since colab has CUDA 10.1)
# !pip install -U torch==1.5.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# Install mmcv-full thus we could use CUDA operators
# !pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.5.0/index.html
# Install mmdetection
# !pip install mmdet==2.11.0
# Install mmocr
# !git clone https://github.com/open-mmlab/mmocr.git
# %cd mmocr
# !pip install -r requirements.txt
# !pip install -v -e .
# install Pillow 7.0.0 back in order to avoid bug in colab
# !pip install Pillow==7.0.0
# -
# ### Check Installed Dependencies Versions
# + colab={"base_uri": "https://localhost:8080/"} id="JABQfPwQN52g" outputId="d4c337c7-5b72-498d-bfd0-2955a3678c71"
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMDetection installation
import mmdet
print(mmdet.__version__)
# Check mmcv installation
import mmcv
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print(mmcv.__version__)
print(get_compiling_cuda_version())
print(get_compiler_version())
# Check mmocr installation
import mmocr
print(mmocr.__version__)
# -
# ## Perform Testing with a Pretrained Text Recognizer
#
# We now demonstrate how to perform testing on a [demo text recognition image](demo/demo_text_recog.jpg) with a pretrained text recognizer. SAR text recognizer is used for this demo, whose checkpoint can be downloaded from the [official documentation](https://mmocr.readthedocs.io/en/latest/textrecog_models.html#show-attend-and-read-a-simple-and-strong-baseline-for-irregular-text-recognition). We visualize the predicted result in the end.
# !python demo/image_demo.py demo/demo_text_recog.jpg configs/textrecog/sar/sar_r31_parallel_decoder_academic.py https://download.openmmlab.com/mmocr/textrecog/sar/sar_r31_parallel_decoder_academic-dba3a4a3.pth outputs/demo_text_recog_pred.jpg
# Visualize the results
import matplotlib.pyplot as plt
predicted_img = mmcv.imread('./outputs/demo_text_recog_pred.jpg')
plt.imshow(mmcv.bgr2rgb(predicted_img))
plt.show()
# + [markdown] id="NgoH6qEcC9CL"
# ## Perform Testing with a Pretrained Text Detector
#
# Next, we perform testing with a pretrained PANet text detector and visualize the bounding box results for the demo text detection image provided in [demo_text_det.jpg](.github/demo/demo_text_det.jpg). The PANet checkpoint can be downloaded from the [official documentation](https://mmocr.readthedocs.io/en/latest/textdet_models.html#efficient-and-accurate-arbitrary-shaped-text-detection-with-pixel-aggregation-network).
# + colab={"base_uri": "https://localhost:8080/"} id="u0YyG9y0TzL4" outputId="7c2199ca-0542-414d-a8cd-ab5998739c70"
# !python demo/image_demo.py demo/demo_text_det.jpg configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py https://download.openmmlab.com/mmocr/textdet/panet/panet_r18_fpem_ffm_sbn_600e_icdar2015_20210219-42dbe46a.pth outputs/demo_text_det_pred.jpg
# + colab={"base_uri": "https://localhost:8080/", "height": 616} id="2-UHsqkZJFND" outputId="e347af9e-2f92-45d5-d9c7-eb82802819cf"
# Visualize the results
import matplotlib.pyplot as plt
predicted_img = mmcv.imread('./outputs/demo_text_det_pred.jpg')
plt.figure(figsize=(9, 16))
plt.imshow(mmcv.bgr2rgb(predicted_img))
plt.show()
# + [markdown] id="PTWMzvd3E_h8"
# ## Perform Testing with a Pretrained KIE Model
#
# We perform testing on the WildReceipt dataset for KIE model by first downloading the .tar file from [Datasets Preparation](https://mmocr.readthedocs.io/en/latest/datasets.html) in MMOCR documentation and then extract the dataset. We have chosen the Visual + Textual moduality test dataset, which we evaluate with Macro F1 metrics.
# + colab={"base_uri": "https://localhost:8080/"} id="3VEW3PQrFZ0g" outputId="885a4d2e-ca78-42ab-f4a2-dddd9a2d8321"
# First download the KIE dataset .tar file and extract it to ./data
# !mkdir data
# !wget https://download.openmmlab.com/mmocr/data/wildreceipt.tar
# !tar -xf wildreceipt.tar
# !mv wildreceipt ./data
# + colab={"base_uri": "https://localhost:8080/"} id="p0MHNwybo0iI" outputId="4766b69e-04ea-4739-a0d1-9366789c0d91"
# Test the dataset with macro f1 metrics
# !python tools/test.py configs/kie/sdmgr/sdmgr_unet16_60e_wildreceipt.py https://download.openmmlab.com/mmocr/kie/sdmgr/sdmgr_unet16_60e_wildreceipt_20210405-16a47642.pth --eval macro_f1
# + [markdown] id="nYon41X7RTOT"
# ## Perform Training on a Toy Dataset with MMOCR Recognizer
# We now demonstrate how to perform training with an MMOCR recognizer. Since training a full academic dataset is time consuming (usually takes about several hours), we will train on the toy dataset for the SAR text recognition model and visualize the predictions. Text detection and other downstream tasks such as KIE follow similar procedures.
#
# Training a dataset usually consists of the following steps:
# 1. Convert the dataset into a format supported by MMOCR (e.g. COCO for text detection). The annotation file can be in either .txt or .lmdb format, depending on the size of the dataset. This step is usually applicable to customized datasets, since the datasets and annotation files we provide are already in supported formats.
# 2. Modify the config for training.
# 3. Train the model.
#
# The toy dataset consisits of ten images as well as annotation files in both txt and lmdb format, which can be found in [ocr_toy_dataset](.github/tests/data/ocr_toy_dataset).
# -
# ### Visualize the Toy Dataset
#
# We first get a sense of what the toy dataset looks like by visualizing one of the images and labels.
# + colab={"base_uri": "https://localhost:8080/", "height": 121} id="hZfd2pnqN5-Q" outputId="9767e836-ccab-4a57-aa1a-1bbca56c430f"
import mmcv
import matplotlib.pyplot as plt
img = mmcv.imread('./tests/data/ocr_toy_dataset/imgs/1036169.jpg')
plt.imshow(mmcv.bgr2rgb(img))
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="F5M_FVVRN6Fw" outputId="e4ee7608-2bf9-4ae6-be00-691bdbef8c7c"
# Inspect the labels of the annootation file
# !cat tests/data/ocr_toy_dataset/label.txt
# + [markdown] id="i-GrV0xSkAc3"
# ### Modify the Configuration File
#
# In order to perform inference for SAR on colab, we need to modify the config file to accommodate some of the settings of colab such as the number of GPU available.
# + id="uFFH3yUgPEFj"
from mmcv import Config
cfg = Config.fromfile('./configs/textrecog/sar/sar_r31_parallel_decoder_toy_dataset.py')
# + colab={"base_uri": "https://localhost:8080/"} id="67OJ6oAvN6NA" outputId="91253215-1117-4b8d-912d-b5eb6c6e3c2c"
from mmdet.apis import set_random_seed
# Set up working dir to save files and logs.
cfg.work_dir = './demo/tutorial_exps'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU.
cfg.optimizer.lr = 0.001 / 8
cfg.lr_config.warmup = None
# Choose to log training results every 40 images to reduce the size of log file.
cfg.log_config.interval = 40
# Set seed thus the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
# + [markdown] id="TZj5vyqEmulE"
# ### Train the SAR Text Recognizer
# Finally, we train the SAR text recognizer on the toy dataset for five epochs.
# +
from mmocr.datasets import build_dataset
from mmocr.models import build_detector
from mmocr.apis import train_detector
import os.path as osp
# Build dataset
datasets = [build_dataset(cfg.data.train)]
# Build the detector
model = build_detector(
cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Add an attribute for visualization convenience
model.CLASSES = datasets[0].CLASSES
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_detector(model, datasets, cfg, distributed=False, validate=True)
# + [markdown] id="sklydRNXnfJk"
# ### Test and Visualize the Predictions
#
# For completeness, we also perform testing on the latest checkpoint and evaluate the results with hmean-iou metrics. The predictions are saved in the ./outputs file.
# +
from mmocr.apis.inference import model_inference
from mmdet.apis import init_detector
img = './tests/data/ocr_toy_dataset/imgs/1036169.jpg'
checkpoint = "./demo/tutorial_exps/epoch_5.pth"
out_file = 'outputs/1036169.jpg'
model = init_detector(cfg, checkpoint, device="cuda:0")
if model.cfg.data.test['type'] == 'ConcatDataset':
model.cfg.data.test.pipeline = model.cfg.data.test['datasets'][0].pipeline
result = model_inference(model, img)
print(f'result: {result}')
img = model.show_result(
img, result, out_file=out_file, show=False)
mmcv.imwrite(img, out_file)
# + colab={"base_uri": "https://localhost:8080/", "height": 192} id="k3s27QIGQCnT" outputId="4516b2b3-1ca2-4f01-ab8f-9d6b19eb99f1"
# Visualize the results
predicted_img = mmcv.imread('./outputs/1036169.jpg')
plt.figure(figsize=(4, 4))
plt.imshow(mmcv.bgr2rgb(predicted_img))
plt.show()
| demo/MMOCR_Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:PythonData]
# language: python
# name: conda-env-PythonData-py
# ---
# +
# Dependencies
import pandas as pd
import os
# Imports the method used for connecting to DBs
from sqlalchemy import create_engine , inspect, MetaData
# Imports the methods needed to abstract classes into tables
from sqlalchemy.ext.declarative import declarative_base
# Allow us to declare column types
from sqlalchemy import Column, Integer, String, Float, Date , Text
# PyMySQL
import pymysql
pymysql.install_as_MySQLdb()
#import sqlite3
import sqlite3
import datetime as dt
import sqlalchemy
# -
# Store filepath in a variable
file_one = os.path.join("Resources", "cleaned_hawaii_measurements.csv")
file_two = os.path.join("Resources", "hawaii_stations.csv")
# Read our Data file with the pandas library
# Not every CSV requires an encoding, but be aware this can come up
hawaii_measurements_df = pd.read_csv(file_one, encoding="ISO-8859-1")
hawaii_stations_df = pd.read_csv(file_two, encoding="ISO-8859-1")
# Show 5 rows and the header
hawaii_stations_df.head()
# Create Engine and Pass in MySQL Connection
engine = create_engine("sqlite:///hawaii.sqlite")
conn = engine.connect()
# +
# Create Dog and Cat Classes
# ----------------------------------
# Sets an object to utilize the default declarative base in SQL Alchemy
Base = declarative_base()
# Creates Classes which will serve as the anchor points for our Tables
class Station(Base):
__tablename__ = 'station'
id = Column(Integer, primary_key=True)
station = Column(String(255))
name = Column(String(255))
latitude = Column(Float)
longitude = Column(Float)
elevation = Column(Float)
class Measurement(Base):
__tablename__ = 'measurement'
id = Column(Integer, primary_key=True)
station = Column(String(255))
date = Column(String(255))
prcp = Column(Float)
tobs = Column(Integer)
# -
# Use `create_all` to create the customers table in the database
Base.metadata.create_all(engine)
| database_engineering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="dBQEt_LprgU5"
# # NLP Classification
#
#
# + id="eAIGEVimsmuU"
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
import pandas as pd
import numpy as np
# + [markdown] id="VKtvVadJwLre"
# # Load data
#
# We could load data from the GitHub repo or other data sources.
# When using Google Colab, we could also upload the data file manually.
# + colab={"base_uri": "https://localhost:8080/", "height": 238} id="gXXgVtQTwYHv" outputId="78e2c21e-006a-4c9b-d294-956cf48668fe"
# define the data URL
sample_data_url = 'https://raw.githubusercontent.com/OHNLP/covid19vaxae/main/sample.csv'
large_data_url = 'https://raw.githubusercontent.com/OHNLP/covid19vaxae/main/large.csv'
# load data by Python Pandas
df_sample = pd.read_csv(sample_data_url)
df_large = pd.read_csv(large_data_url)
print('* loaded %s sample' % len(df_sample))
print('* loaded %s large' % len(df_large))
# preprocessing the data
# for those NaN values, fill with forward method
# more details: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.fillna.html
df_sample.fillna(method='ffill', inplace=True)
df_large.fillna(method='ffill', inplace=True)
# show it looks
df_sample.head()
# + [markdown] id="BzuHsWRGw_a9"
# # Model 1: Very simple model
#
# Before we start something fancy and complex, let's try a very simple model.
# It only uses the age, sex, and the vaccine name to predict the adverse event.
# Although we could imagine how poor the performance is, let's give a try.
# + [markdown] id="n6cYm6W72v57"
# ## Prepare 3 features
# + colab={"base_uri": "https://localhost:8080/"} id="tR0TYc5RyQHL" outputId="6d103a22-9749-465e-d110-66c955993a41"
# using this dictionary to convert the vaccine name to a number
dict_vax2num = dict(zip(
df_sample.VAX_MANU.unique().tolist(),
np.arange(df_sample.VAX_MANU.nunique())
))
print('* dict_vax2num:', dict_vax2num)
# In this toy model, we use age, sex, the vaccine name as features
X = df_sample[['AGE_YRS', 'SEX', 'VAX_MANU']]
y = df_sample['SYMPTOM']
# convert the sex from text to number
X['SEX'] = X['SEX'].apply(lambda v: 1 if v == 'M' else 0)
# convert the vaccine name to number
X['VAX_MANU'] = X['VAX_MANU'].apply(lambda v: dict_vax2num[v])
# split the train/test sets, we use 20% of records for test
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2
)
print('* get train set', X_train.shape)
print(X_train.head(5))
print('* get test set', X_test.shape)
print(X_test.head(5))
# + [markdown] id="PedZTZlC2098"
# ## Train a classifier
# + colab={"base_uri": "https://localhost:8080/"} id="_Zv-sxUh29u8" outputId="6c3db1cf-3e2e-4003-8825-70a00ca29035"
# we use Random Forest Classifier
# since we only have 3 features for each records, 40 trees are enough
clf = RandomForestClassifier(n_estimators=40, random_state=0)
# train the model using our training set
model1 = clf.fit(X_train, y_train)
# use the trained model to predict the test set
# since we already know the labels for the test set
# it's a test in fact
y_pred = model1.predict(X_test)
# get the test results
result1 = classification_report(y_test, y_pred)
# OK, we know it won't be not good at all.
# and ... yes, it's not good :p
print(result1)
# + [markdown] id="6t6mzupY6oGT"
# # Model 2: Better model
#
# Now, let's try a better model by using the text information.
#
# There are many ways to extract the text features.
# In this demo model, we use the basic TF-IDF
# + [markdown] id="3Il5HTuC6wK8"
# ## Prepare symptom text features
# + colab={"base_uri": "https://localhost:8080/"} id="trAAs8FC6zO3" outputId="07f988fa-16c7-4ba5-e6ad-881c9a4f2211"
# this time, we only use the symptom_text to get features.
X = df_sample['SYMPTOM_TEXT']
# still use symptom as the label
y = df_sample['SYMPTOM']
# but the long text it self couldn't be used as feature
# we need to convert the text into a list of numbers (or vector)
# let's use a very popular tool called TF-IDF
# more details about this method could be found here:
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
# first, let's get a vectorizer
vcer = TfidfVectorizer(stop_words='english')
# then convert!
X = vcer.fit_transform(X)
# split the train/test sets, we use 20% of records for test
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2
)
# as we can see, now we have a very large feature vector
# which contains more than 2000 numbers to represent a report
print('* get train set', X_train.shape)
print('* get test set', X_test.shape)
# + [markdown] id="LQ19gd889Pl_"
# ## Train a classifier
# + colab={"base_uri": "https://localhost:8080/"} id="McO9DgfV9UTM" outputId="7e0f5836-bd4a-4b82-b449-8ffd7226a039"
# we use Random Forest Classifier
# now, this time since we have more features (2647 feature!),
# we could use more trees to improve the performance.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# train the model using our training set
model2 = clf.fit(X_train, y_train)
# use the trained model to predict the test set
# since we already know the labels for the test set
# it's a test in fact
y_pred = model2.predict(X_test)
# get the test results
result2 = classification_report(y_test, y_pred)
# yes! the performance is much better than previous one!
# the overall F1 is not bad
print(result2)
# + [markdown] id="5J8etLFD6JvA"
# # Model 3: Next model
#
# Now we have text features and other features, how about use all of them?
# + [markdown] id="m9DFu6gK-v4d"
# ## Prepare more features
# + colab={"base_uri": "https://localhost:8080/"} id="csTf-btt-9ln" outputId="c48b0e3f-c85d-4c60-f15c-9f668c3de38d"
# this time, we use both symptom_text and ages and sex for features.
X = df_sample[['SYMPTOM_TEXT', 'AGE_YRS', 'SEX']]
# still use symptom as the label
y = df_sample['SYMPTOM']
# but the long text it self couldn't be used as feature
# we need to convert the text into a list of numbers (or vector)
# let's use a very popular tool called TF-IDF
# more details about this method could be found here:
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
# first, let's get a vectorizer
vcer = TfidfVectorizer(stop_words='english')
# then convert!
X_sym = vcer.fit_transform(X['SYMPTOM_TEXT'])
print('* X_sym:', X_sym.shape)
# but an issue is that the X_sym is too sparse,
# we don't need too many zero features
# so, we could do a feature selection here
# there are a lot of feature selection methods, could be found here:
# https://scikit-learn.org/stable/modules/feature_selection.html
# we use a simple one, and select only 50 features
selector = SelectKBest(chi2, k=50)
X_sym = selector.fit_transform(X_sym, y)
# also convert the sex feature
X_sex = X['SEX'].apply(lambda v: 1 if v == 'M' else 0)
# since the symptom text feature is a sparse matrix,
# we need to convert it to numpy format
# and put age and sex feature in.
# then the final number of features are 52
X = np.concatenate((
X_sym.toarray(),
X['AGE_YRS'].values[:, None],
X_sex.values[:, None]
), axis=1)
# split the train/test sets, we use 20% of records for test
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2
)
print('* get train set', X_train.shape)
print('* get test set', X_test.shape)
# + [markdown] id="pMBDkNpoCVjn"
# ## Train a classifier
# + colab={"base_uri": "https://localhost:8080/"} id="4pocWOBtCSx8" outputId="581c2389-8a05-4ef5-92bf-155e7ff6cad5"
# we use Random Forest Classifier
# now, this time since we have less features (52 features),
# we could use more trees to improve the performance.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# train the model using our training set
model3 = clf.fit(X_train, y_train)
# use the trained model to predict the test set
# since we already know the labels for the test set
# it's a test in fact
y_pred = model3.predict(X_test)
# get the test results
result3 = classification_report(y_test, y_pred)
# yes! the performance is much better than previous one!
# the overall F1 is getting better, which is better than the second model!
# and depends on the training set, the result may vary each time.
print(result3)
# + [markdown] id="j4_zNaRzsoI4"
# # Evaluate on the large with model 3
#
# Now let's see how the performance is on the large dataset.
#
# The code is the same, just change `df_sample` to `df_large`
# + [markdown] id="OdLWhR-rKwW3"
# ## Prepare the features
#
# This time, we don't need to split the dataset into train and test.
# We will use all of them for test.
# + colab={"base_uri": "https://localhost:8080/"} id="cQ4PN_mVKefl" outputId="879c3335-cdcf-4f8d-912d-211741ab8aa1"
# this time, we use both symptom_text and ages and sex for features.
X = df_large[['SYMPTOM_TEXT', 'AGE_YRS', 'SEX']]
# use symptom as the label
y = df_large['SYMPTOM']
# then convert!
X_sym = vcer.transform(X['SYMPTOM_TEXT'])
print('* X_sym:', X_sym.shape)
# but an issue is that the X_sym is too sparse,
# we don't need too many zero features
# so, we could do a feature selection here
# there are a lot of feature selection methods, could be found here:
# https://scikit-learn.org/stable/modules/feature_selection.html
# we use a simple one, and select only 50 features
selector = SelectKBest(chi2, k=50)
X_sym = selector.fit_transform(X_sym, y)
# also convert the sex feature
X_sex = X['SEX'].apply(lambda v: 1 if v == 'M' else 0)
# since the symptom text feature is a sparse matrix,
# we need to convert it to numpy format
# and put age and sex feature in
X = np.concatenate((
X_sym.toarray(),
X['AGE_YRS'].values[:, None],
X_sex.values[:, None]
), axis=1)
# we don't need to split the dataset, just run the test
print('* get large test set', X.shape)
# + [markdown] id="yiHiD_euK8ct"
# ## Evaluate
# + colab={"base_uri": "https://localhost:8080/"} id="YtKY5DPUK-fZ" outputId="248dfc4a-172b-43ef-ce71-107d69116dfa"
# use the trained model3 to predict the test set
y_pred = model3.predict(X)
# get the test results
result_large = classification_report(y, y_pred)
# oops!
print(result_large)
# + [markdown] id="tlfjvm6hN0Uk"
# # Summary
#
# As shown in the three models, there are mainly two tasks:
#
# 1. Extract features from raw data. This reflects how our model abstracts the data.
# 2. Train a classifier based on features. This reflects how we inteprete the relationship between these data and the target (label).
#
# Even if we use the same classifier (but with different hyperparameters), the better the quality of the features, the better the overall performance of the model. Advances in these two tasks can improve the performance of our model.
| nlp_tasks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - TODO: Print graph of connections based on users
# - TODO: Might need to segment by category - lookup average ratings of items.
# - TODO: Use z-scores?
# - TODO: User-based modeling, etc...
# - TODO: To use Rstudio, make a script which combines businesses/matches/whatever into a single CSV
# - TODO: Sentiment analysis
# ## Setup
# +
# %load_ext autoreload
# %autoreload 2
import pandas as pd
import numpy as np
from collections import Counter
random = np.random.RandomState(0)
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
import scipy.stats
from run_trueskill_mp import convert_matches_format
# +
businesses = pd.read_pickle('dataset_processed/businesses.pkl')
matches = pd.read_pickle('dataset_processed/matches.pkl')
# Drop draws, for now
matches = matches[matches.win != 0]
matches = convert_matches_format(matches)
wins_counter = np.bincount(matches.b1)
losses_counter = np.bincount(matches.b2)
matches_counter = wins_counter + losses_counter
# Add matches, wins, losses
businesses['matches'] = matches_counter
businesses['wins'] = wins_counter
businesses['losses'] = losses_counter
businesses = businesses.rename(columns={'avg_rating': 'star_rating'})
# -
# ## Restaurant ranking analyses - message passing inference
# +
mp_samples = np.load('results/mp_dropdraws_20.npy')
ratings, variances = mp_samples[-1, 0, :], mp_samples[-1, 1, :]
n_b = len(ratings)
print("{} ratings ({:.3f}, {:.3f})".format(n_b, ratings.min(), ratings.max()))
businesses['ts_rating'] = ratings
businesses['ts_variance'] = variances
businesses['ranking'] = scipy.stats.rankdata(-ratings)
# -
# Plot convergence of a few samples
# +
n_sel = 3
sel_bs = random.choice(np.arange(mp_samples.shape[-1]), size=n_sel, replace=False)
color_key = ['#1f77b4', '#ff7f0e', '#2ca02c']
f, figs = plt.subplots(1, n_sel, figsize=(10, 5), sharey = True)
ranks = np.arange(3)
for i, (b_i, fig) in list(enumerate(zip(sel_bs, figs))):
b_samples = mp_samples[:, :, b_i].squeeze()
fig.plot(b_samples[:, 0], alpha=1, color=color_key[i], label=b_i)
fig.fill_between(np.arange(len(b_samples)),
b_samples[:, 0] - b_samples[:, 1],
b_samples[:, 0] + b_samples[:, 1],
color='grey', facecolor='grey', alpha=0.22)
fig.set_ylabel('$w$')
fig.set_xlabel('Iteration')
business_str = businesses.loc[b_i].business_id.decode('utf8')
business_ranking = businesses.loc[b_i].ranking
fig.set_title('{}\n({}/{})'.format(business_str, business_ranking, n_b))
# -
sns.jointplot(x='ts_rating', y='star_rating', data=businesses, alpha=0.01)
print("Max rating:")
| mp_analyses.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import mne
import numpy as np
# +
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.datasets import sample
from mne.decoding import cross_val_multiscore, LinearModel, GeneralizingEstimator, Scaler, \
Vectorizer
from sklearn.model_selection import StratifiedKFold, cross_val_score, StratifiedShuffleSplit, \
RepeatedStratifiedKFold
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
# -
# # Autocorrelation
# https://stackoverflow.com/questions/30143417/computing-the-correlation-coefficient-between-two-multi-dimensional-arrays
def generate_correlation_map(x, y):
"""Correlate each n with each m.
Parameters
----------
x : np.array
Shape N X T.
y : np.array
Shape M X T.
Returns
-------
np.array
N X M array in which each element is a correlation coefficient.
"""
mu_x = x.mean(1)
mu_y = y.mean(1)
n = x.shape[1]
if n != y.shape[1]:
raise ValueError('x and y must ' +
'have the same number of timepoints.')
s_x = x.std(1, ddof=n - 1)
s_y = y.std(1, ddof=n - 1)
cov = np.dot(x,
y.T) - n * np.dot(mu_x[:, np.newaxis],
mu_y[np.newaxis, :])
return cov / np.dot(s_x[:, np.newaxis], s_y[np.newaxis, :])
# +
def set_fonts():
from matplotlib.font_manager import FontProperties
font = FontProperties()
font.set_family('serif')
font.set_name('Calibri')
return font
def plot_autocorr_eachGrp(title, avgmap_e, avgmap_l, avgmap_d, vmin, vmax):
font=set_fonts()
fsize_t=30
fsize_x=26
# EARLY ==================================================================================
fig, axs = plt.subplots(3, 2, figsize=(15,15))
ax = axs[0][0]
im = ax.imshow(avgmap_e[0,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_ylabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Loc1', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
ax = axs[0][1]
im = ax.imshow(avgmap_e[1,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_title('Loc2', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
ax = axs[1][0]
im = ax.imshow(avgmap_e[2,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_xlabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_ylabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Loc3', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
ax = axs[1][1]
im = ax.imshow(avgmap_e[3,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_xlabel('Testing Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Loc4', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
ax.xaxis.set_ticks_position('bottom')
plt.colorbar(im, ax=ax)
avggrp_e = np.mean(avgmap_e, axis=0)
ax = axs[2][0]
im = ax.imshow(avggrp_e, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_xlabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_ylabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Average', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
fig.delaxes(axs[2][1])
plt.tight_layout()
plt.suptitle( 'earlyBlocks - ' + title, fontproperties=font, fontsize=fsize_t, fontweight='bold', y=1.05)
plt.tight_layout()
# LATER ==================================================================================
fig, axs = plt.subplots(3, 2, figsize=(15,15))
ax = axs[0][0]
im = ax.imshow(avgmap_l[0,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_ylabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Loc1', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
ax = axs[0][1]
im = ax.imshow(avgmap_l[1,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_title('Loc2', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
ax = axs[1][0]
im = ax.imshow(avgmap_l[2,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_xlabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_ylabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Loc3', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
ax = axs[1][1]
im = ax.imshow(avgmap_l[3,:,:], interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_xlabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Loc4', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
ax.xaxis.set_ticks_position('bottom')
plt.colorbar(im, ax=ax)
avggrp_l = np.mean(avgmap_l, axis=0)
ax = axs[2][0]
im = ax.imshow(avggrp_l, interpolation='lanczos', origin='lower', cmap='RdBu_r',
extent=subset.times[[0, -1, 0 , -1]], vmin=vmin, vmax=vmax)
ax.set_xlabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_ylabel('Time (s)', fontproperties=font, fontsize=fsize_x, fontweight='bold')
ax.set_title('Average', fontproperties=font, fontsize=fsize_t, fontweight='bold')
ax.axvline(0, color='k')
ax.axhline(0, color='k')
plt.colorbar(im, ax=ax)
ax.xaxis.set_ticks_position('bottom')
fig.delaxes(axs[2][1])
plt.tight_layout()
plt.suptitle( 'laterBlocks - ' + title, fontproperties=font, fontsize=fsize_t, fontweight='bold', y=1.05)
plt.tight_layout()
plt.show()
# +
# SAVE_EPOCH_ROOT = '../../data/preprocessed/epochs/aft_ICA_rej/'
SAVE_EPOCH_ROOT = '../../data/version5.2/preprocessed/epochs/aft_ICA_rej/'
filename_epoch = SAVE_EPOCH_ROOT + 'epochs_sec_applyBaseline_subj1-afterRejICA-epo.fif'
#Read Epochs
epochs_orig = mne.read_epochs(filename_epoch, proj=True, preload=True, verbose=None)
epochs = epochs_orig.copy()
# -
# ## Some preprocessing
# +
subset = epochs['pred']['non'].copy()
subset = subset.pick_types(eeg=True)
subset.crop(tmin=-0.4,tmax=0.5)
if subset['Block==6'].metadata.Ptrn_Type.values.shape[0]>0:
main_ptrn = subset['Block==6'].metadata.Ptrn_Type.values[0]
else:
main_ptrn = subset['Block==8'].metadata.Ptrn_Type.values[0]
# +
# print('main pattern', main_ptrn)
# print('--------------------------------')
# print('Trgt_Loc_main', subset.metadata.Trgt_Loc_main)
# # 1 3 4 2 3 1 2 4 1 3 4
# print('--------------------------------')
# # 1 3 4 2 3 1 2 4 1 3 4
# print('Trgt_Loc', subset.metadata.Trgt_Loc)
# print('--------------------------------')
# print('Trgt_Loc_prev', subset.metadata.Trgt_Loc_prev)
# # 2 1 3 4 2 3 1 2 4 1 3
# -
# ## Group by current main location
# +
# str_feat = 'Trgt_Loc_main'
# # only early blocks
# subsetE = subset['Block>2 & Block<7'].copy()
# dtset = subsetE.copy()
# iind=0
# dt0 = dtset['%s==%s' %(str_feat, iind+1)]._data.copy()
# dt0 = dt0[:84,:,:]
# iind=1
# dt1 = dtset['%s==%s' %(str_feat, iind+1)]._data.copy()
# dt1 = dt1[:84,:,:]
# print(dt0.shape)
# print(dt1.shape)
# # print(dt1-dt0)
# for iloc in range(2):
# print(iloc)
# dt = dtset['%s==%s' %(str_feat, iind+1)]._data.copy()
# dt = dt[:84,:,:]
# # print(dt)
# print(dt1-dt)
# +
from scipy.signal import savgol_filter
def group_data(dtset, str_feat):
inds = np.zeros((4,1))
for iind in range(4):
inds[iind] = dtset['%s==%s' %(str_feat, iind+1)]._data.shape[0]
ind1=int(min(inds))
ind2=dtset['%s==1' %(str_feat)]._data.shape[1]
ind3=dtset['%s==1' %(str_feat)]._data.shape[2]
print(ind1)
grped_dtset = np.zeros((4, ind1, ind2, ind3))
avg_grped_dtset = np.zeros((4, ind2, ind3))
smooth_grped_dtset = np.zeros((5, ind3))
print('smooth_grped_dtset', smooth_grped_dtset.shape)
dtset_o = dtset.copy()
for iloc in range(4):
dtset = dtset_o.copy()
print(iloc)
dt = dtset['%s==%s' %(str_feat, iloc+1)]._data.copy()
# normalize
dt = (dt - np.mean(dt)) / np.std(dt)
# select based on the mimium num of loc
dt = dt[:ind1,:,:].copy()
grped_dtset[iloc,:,:,:] = dt
avg1 = np.mean(grped_dtset.copy(), axis=1)
avg_grped_dtset = avg1
# TODO: update scipy, some parts will be deprecated
avg = np.mean(avg1, axis=1)
smooth_grped_dtset[:-1,:] = savgol_filter(avg, 7, 3)
print('smooth_grped_dtset', smooth_grped_dtset.shape)
print(avg_grped_dtset.shape)
tot_avg = np.mean(avg_grped_dtset, axis=0)
print(tot_avg.shape)
tot_avg = np.mean(tot_avg, axis=0)
smooth_grped_dtset[4,:] = savgol_filter(tot_avg, 7, 3)
print('grped_dtset', grped_dtset.shape)
print('avg_grped_dtset', avg_grped_dtset.shape)
print('smooth_grped_dtset', smooth_grped_dtset.shape)
return [grped_dtset, avg_grped_dtset, smooth_grped_dtset]
# +
# only early blocks
subsetE = subset['Block>2 & Block<7'].copy()
dtsetE = subsetE.copy()
str_feat = 'Trgt_Loc_main'
[gdtE, avggdtE, sgdtE] = group_data(dtsetE, str_feat)
# -
# print(sgdtE)
# +
# only later blocks
subsetL = subset['Block>6& Block<11'].copy()
dtsetL = subsetL.copy()
str_feat = 'Trgt_Loc_main'
[gdtL, avggdtL, sgdtL] = group_data(dtsetL, str_feat)
# +
avggdtD = avggdtL - avggdtE
print(avggdtD.shape)
# +
ind1=min( [Loc1_E._data.shape[0], Loc2_E._data.shape[0] , \
Loc3_E._data.shape[0], Loc4_E._data.shape[0]] )
# only early blocks
subsetE = subset['Block>2 & Block<7'].copy()
# Group data based on the current main loc
Loc1_E = subsetE['Trgt_Loc_main==1'].copy()
Loc2_E = subsetE['Trgt_Loc_main==2'].copy()
Loc3_E = subsetE['Trgt_Loc_main==3'].copy()
Loc4_E = subsetE['Trgt_Loc_main==4'].copy()
Loc1_E._data = Loc1_E._data[:ind1,:,:]
Loc2_E._data = Loc2_E._data[:ind1,:,:]
Loc3_E._data = Loc3_E._data[:ind1,:,:]
Loc4_E._data = Loc4_E._data[:ind1,:,:]
Loc1_E._data = (Loc1_E._data - np.mean(Loc1_E._data)) / np.std(Loc1_E._data)
Loc2_E._data = (Loc2_E._data - np.mean(Loc2_E._data)) / np.std(Loc2_E._data)
Loc3_E._data = (Loc3_E._data - np.mean(Loc3_E._data)) / np.std(Loc3_E._data)
Loc4_E._data = (Loc4_E._data - np.mean(Loc4_E._data)) / np.std(Loc4_E._data)
# +
# only later blocks
subsetL = subset['Block>6 & Block<11'].copy()
# subsetL = subset['Block>6'].copy()
# Group data based on the current main loc
Loc1_L = subsetL['Trgt_Loc_main==1'].copy()
Loc2_L = subsetL['Trgt_Loc_main==2'].copy()
Loc3_L = subsetL['Trgt_Loc_main==3'].copy()
Loc4_L = subsetL['Trgt_Loc_main==4'].copy()
Loc1_L._data = Loc1_L._data[:ind1,:,:]
Loc2_L._data = Loc2_L._data[:ind1,:,:]
Loc3_L._data = Loc3_L._data[:ind1,:,:]
Loc4_L._data = Loc4_L._data[:ind1,:,:]
Loc1_L._data = (Loc1_L._data - np.mean(Loc1_L._data)) / np.std(Loc1_L._data)
Loc2_L._data = (Loc2_L._data - np.mean(Loc2_L._data)) / np.std(Loc2_L._data)
Loc3_L._data = (Loc3_L._data - np.mean(Loc3_L._data)) / np.std(Loc3_L._data)
Loc4_L._data = (Loc4_L._data - np.mean(Loc4_L._data)) / np.std(Loc4_L._data)
# +
ind1=min( [Loc1_E._data.shape[0], Loc2_E._data.shape[0] , \
Loc3_E._data.shape[0], Loc4_E._data.shape[0]] )
ind2=Loc1_E._data.shape[1]
ind3=Loc1_E._data.shape[2]
avgp1_autcrr = np.zeros((4, ind1, ind2, ind3))
avgp1_autcrr[0,:,:,:]=Loc1_E._data[:ind1,:,:]
avgp1_autcrr[1,:,:,:]=Loc2_E._data[:ind1,:,:]
avgp1_autcrr[2,:,:,:]=Loc3_E._data[:ind1,:,:]
avgp1_autcrr[3,:,:,:]=Loc4_E._data[:ind1,:,:]
print(avgp1_autcrr.shape)
avgE = avgp1_autcrr.copy()
avgact_e = np.mean(avgp1_autcrr, axis=1)
ind1=min( [Loc1_L._data.shape[0], Loc2_L._data.shape[0] , \
Loc3_L._data.shape[0], Loc4_L._data.shape[0]] )
ind2=Loc1_L._data.shape[1]
ind3=Loc1_L._data.shape[2]
avgp1_autcrr = np.zeros((4, ind1, ind2, ind3))
avgp1_autcrr[0,:,:,:]=Loc1_L._data[:ind1,:,:]
avgp1_autcrr[1,:,:,:]=Loc2_L._data[:ind1,:,:]
avgp1_autcrr[2,:,:,:]=Loc3_L._data[:ind1,:,:]
avgp1_autcrr[3,:,:,:]=Loc4_L._data[:ind1,:,:]
print(avgp1_autcrr.shape)
avgL = avgp1_autcrr.copy()
avgact_l = np.mean(avgp1_autcrr, axis=1)
avgact_d = avgact_l - avgact_e
# -
print(avgact_e.shape)
print(avgact_l.shape)
print(avgact_d.shape)
print(avgE.shape, avgL.shape)
print((gdtE.shape, avggdtE.shape, sgdtE.shape))
# print(avgact_e - avggdtE)
print(gdtE - avgE)
def prep_group_data(dtset, str_feat):
# Group data based on the current main loc
Loc1 = dtset['%s==1' %(str_feat)].copy()
Loc2 = dtset['%s==2' %(str_feat)].copy()
Loc3 = dtset['%s==3' %(str_feat)].copy()
Loc4 = dtset['%s==4' %(str_feat)].copy()
inds = np.zeros((4,1))
for iind in range(4):
inds[iind] = dtset['%s==%s' %(str_feat, iind+1)]._data.shape[0]
ind1=int(min(inds))
ind2=dtset['%s==1' %(str_feat)]._data.shape[1]
ind3=dtset['%s==1' %(str_feat)]._data.shape[2]
print(ind1)
Loc1._data = Loc1._data[:ind1,:,:]
Loc2._data = Loc2._data[:ind1,:,:]
Loc3._data = Loc3._data[:ind1,:,:]
Loc4._data = Loc4._data[:ind1,:,:]
Loc1._data = (Loc1._data - np.mean(Loc1._data)) / np.std(Loc1._data)
Loc2._data = (Loc2._data - np.mean(Loc2._data)) / np.std(Loc2._data)
Loc3._data = (Loc3._data - np.mean(Loc3._data)) / np.std(Loc3._data)
Loc4._data = (Loc4._data - np.mean(Loc4._data)) / np.std(Loc4._data)
grped_dtset = np.zeros((4, ind1, ind2, ind3))
grped_dtset[0,:,:,:]=Loc1._data
grped_dtset[1,:,:,:]=Loc2._data
grped_dtset[2,:,:,:]=Loc3._data
grped_dtset[3,:,:,:]=Loc4._data
print(grped_dtset.shape)
avgact_dt = np.mean(grped_dtset, axis=1)
evk_data = np.mean(avgact_dt, axis=1)
smooth_evk = np.zeros((5, evk_data.shape[1]))
smooth_evk[0,:] = savgol_filter(evk_data[0,:],33, 3)
smooth_evk[1,:] = savgol_filter(evk_data[1,:],33, 3)
smooth_evk[2,:] = savgol_filter(evk_data[2,:],33, 3)
smooth_evk[3,:] = savgol_filter(evk_data[3,:],33, 3)
smooth_evk[4,:] = savgol_filter(np.mean(evk_data, 0),23, 3)
return grped_dtset, avgact_dt, smooth_evk
# +
# args.smth_lvl = 33
# args.mtdt_feat
# +
from scipy.signal import savgol_filter
str_feat = 'Trgt_Loc_main'
# only later blocks
dtset = subset['Block>6 & Block<11'].copy()
grped_dtsetL, avgact_dtL, smooth_evkL = prep_group_data(dtset, str_feat)
# +
fig, ax = plt.subplots(3,1,figsize=(15,14))
ax[0].plot(smooth_evkL[0,:])
ax[0].plot(smooth_evkL[1,:])
ax[0].plot(smooth_evkL[2,:])
ax[0].plot(smooth_evkL[3,:])
ax[0].plot(smooth_evkL[4,:], color='black', linewidth=4.0)
ax[0].legend(['loc1', 'loc2', 'loc3', 'loc4', 'Early_AVG'], loc='upper left')
# plt.show()
# -
# print(avgact_l - avggdtL)
# print(gdtL - avgL)
print(avgact_dtL - avgact_l)
# +
# avg_sgdtE = np.mean(sgdtE, axis= 1)
print(sgdtE.shape)
print(sgdtE)
# avg_sgdtL = np.mean(sgdtL, axis= 1)
fig, ax = plt.subplots(4,1,figsize=(15,14))
for iloc in range(4):
ax[iloc].plot(sgdtE[iloc,:], linewidth=4.)
ax[iloc].plot(sgdtL[iloc,:], linewidth=4.)
ax[iloc].legend(['loc%s Early' %(iloc+1), 'loc%s Later' %(iloc+1)])
# +
# print(subset.times)
# +
from scipy.signal import savgol_filter
fig, ax = plt.subplots(3,1,figsize=(15,14))
# window_length : int
# The length of the filter window (i.e. the number of coefficients). window_length must be a positive odd integer.
# polyorder : int
# The order of the polynomial used to fit the samples. polyorder must be less than window_length.
# smooth_level = 11 # 11 * 4 = 45 ms
# smooth_level = 15 # 15 * 4 = 60 ms
# smooth_level = 25 # 25 * 4 = 100 ms
smooth_level = 51 # 51 * 4 = 204 ms
ply_order = 3
lw1=2.5
lw2=4
# window_length, polyorder
evk_data = np.mean(avgact_e, axis=1)
smooth_evk1 = savgol_filter(evk_data[0,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk2 = savgol_filter(evk_data[1,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk3 = savgol_filter(evk_data[2,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk4 = savgol_filter(evk_data[3,:],window_length=smooth_level, polyorder=ply_order)
smooth_evkavg_e = savgol_filter(np.mean(evk_data, 0),window_length=smooth_level, polyorder=ply_order)
ax[0].plot(subset.times, smooth_evk1, linewidth=lw1)
ax[0].plot(subset.times, smooth_evk2, linewidth=lw1)
ax[0].plot(subset.times, smooth_evk3, linewidth=lw1)
ax[0].plot(subset.times, smooth_evk4, linewidth=lw1)
ax[0].plot(subset.times, smooth_evkavg_e, color='black', linewidth=lw2)
ax[0].legend(['loc1', 'loc2', 'loc3', 'loc4', 'Early_AVG'], loc='upper left')
evk_data = np.mean(avgact_l, axis=1)
smooth_evk1 = savgol_filter(evk_data[0,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk2 = savgol_filter(evk_data[1,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk3 = savgol_filter(evk_data[2,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk4 = savgol_filter(evk_data[3,:],window_length=smooth_level, polyorder=ply_order)
smooth_evkavg_l = savgol_filter(np.mean(evk_data, 0),window_length=smooth_level, polyorder=ply_order)
ax[1].plot(subset.times, smooth_evk1, linewidth=lw1)
ax[1].plot(subset.times, smooth_evk2, linewidth=lw1)
ax[1].plot(subset.times, smooth_evk3, linewidth=lw1)
ax[1].plot(subset.times, smooth_evk4, linewidth=lw1)
ax[1].plot(subset.times, smooth_evkavg_l, color='black', linewidth=lw2)
ax[1].legend(['loc1', 'loc2', 'loc3', 'loc4', 'Later_AVG'], loc='upper left')
# plt.xticks(subset.times*1000)
ax[2].plot(subset.times, smooth_evkavg_e, color='red', linewidth=lw2)
ax[2].plot(subset.times, smooth_evkavg_l, color='green', linewidth=lw2)
ax[2].legend(['Early_AVG', 'Later_AVG'], loc='upper left')
for ii in range(3):
ax[ii].axvline(x=0, color='gray', linewidth=5., linestyle='--')
plt.show()
# +
from scipy.signal import savgol_filter
fig, ax = plt.subplots(5,1,figsize=(13,12))
# window_length : int
# The length of the filter window (i.e. the number of coefficients). window_length must be a positive odd integer.
# polyorder : int
# The order of the polynomial used to fit the samples. polyorder must be less than window_length.
# smooth_level = 11 # 11 * 4 = 45 ms
# smooth_level = 15 # 15 * 4 = 60 ms
smooth_level = 25 # 25 * 4 = 100 ms
# smooth_level = 51 # 51 * 4 = 204 ms
ply_order = 3
lw1=2.5
lw2=4
# window_length, polyorder
evk_data = np.mean(avgact_e, axis=1)
smooth_evk1_e = savgol_filter(evk_data[0,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk2_e = savgol_filter(evk_data[1,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk3_e = savgol_filter(evk_data[2,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk4_e = savgol_filter(evk_data[3,:],window_length=smooth_level, polyorder=ply_order)
smooth_evkavg_e = savgol_filter(np.mean(evk_data, 0),window_length=smooth_level, polyorder=ply_order)
evk_data = np.mean(avgact_l, axis=1)
smooth_evk1_l = savgol_filter(evk_data[0,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk2_l = savgol_filter(evk_data[1,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk3_l = savgol_filter(evk_data[2,:],window_length=smooth_level, polyorder=ply_order)
smooth_evk4_l = savgol_filter(evk_data[3,:],window_length=smooth_level, polyorder=ply_order)
smooth_evkavg_l = savgol_filter(np.mean(evk_data, 0),window_length=smooth_level, polyorder=ply_order)
ax[0].plot(subset.times, smooth_evk1_e, linewidth=lw1)
ax[0].plot(subset.times, smooth_evk1_l, linewidth=lw1)
ax[0].legend(['loc1 - Left - Early', 'loc1 - Left - Later'], loc='upper left')
ax[1].plot(subset.times, smooth_evk2_e, linewidth=lw1)
ax[1].plot(subset.times, smooth_evk2_l, linewidth=lw1)
ax[1].legend(['loc2 - Top - Early', 'loc2 - Top - Later'], loc='upper left')
ax[2].plot(subset.times, smooth_evk3_e, linewidth=lw1)
ax[2].plot(subset.times, smooth_evk3_l, linewidth=lw1)
ax[2].legend(['loc3 - Right - Early', 'loc3 - Right - Later'], loc='upper left')
ax[3].plot(subset.times, smooth_evk4_e, linewidth=lw1)
ax[3].plot(subset.times, smooth_evk4_l, linewidth=lw1)
ax[3].legend(['loc4 - Bottom - Early', 'loc4 - Bottom - Later'], loc='upper left')
ax[4].plot(subset.times, smooth_evkavg_e, color='red', linewidth=lw2)
ax[4].plot(subset.times, smooth_evkavg_l, color='green', linewidth=lw2)
ax[4].legend(['Early_AVG', 'Later_AVG'], loc='upper left')
for ii in range(5):
ax[ii].axvline(x=0, color='gray', linewidth=5., linestyle='--')
plt.show()
# +
# from scipy.signal import savgol_filter
# fig, ax = plt.subplots(3,1,figsize=(15,14))
# # window_length : int
# # The length of the filter window (i.e. the number of coefficients). window_length must be a positive odd integer.
# # polyorder : int
# # The order of the polynomial used to fit the samples. polyorder must be less than window_length.
# window_length = 10
# polyorder = 5
# lw1=2.5
# lw2=4
# evk_data = np.mean(avgact_e, axis=1)
# smooth_evk1 = savgol_filter(evk_data[0,:],window_length, polyorder)
# smooth_evk2 = savgol_filter(evk_data[1,:],window_length, polyorder)
# smooth_evk3 = savgol_filter(evk_data[2,:],window_length, polyorder)
# smooth_evk4 = savgol_filter(evk_data[3,:],window_length, polyorder)
# smooth_evkavg_e = savgol_filter(np.mean(evk_data, 0),window_length, polyorder)
# ax[0].plot(subset.times, smooth_evk1, linewidth=lw1)
# ax[0].plot(subset.times, smooth_evk2, linewidth=lw1)
# ax[0].plot(subset.times, smooth_evk3, linewidth=lw1)
# ax[0].plot(subset.times, smooth_evk4, linewidth=lw1)
# ax[0].plot(subset.times, smooth_evkavg_e, color='black', linewidth=lw2)
# ax[0].legend(['loc1', 'loc2', 'loc3', 'loc4', 'Early_AVG'], loc='upper left')
# evk_data = np.mean(avgact_l, axis=1)
# smooth_evk1 = savgol_filter(evk_data[0,:],window_length, polyorder)
# smooth_evk2 = savgol_filter(evk_data[1,:],window_length, polyorder)
# smooth_evk3 = savgol_filter(evk_data[2,:],window_length, polyorder)
# smooth_evk4 = savgol_filter(evk_data[3,:],window_length, polyorder)
# smooth_evkavg_l = savgol_filter(np.mean(evk_data, 0),window_length, polyorder)
# ax[1].plot(subset.times, smooth_evk1, linewidth=lw1)
# ax[1].plot(subset.times, smooth_evk2, linewidth=lw1)
# ax[1].plot(subset.times, smooth_evk3, linewidth=lw1)
# ax[1].plot(subset.times, smooth_evk4, linewidth=lw1)
# ax[1].plot(subset.times, smooth_evkavg_l, color='black', linewidth=lw2)
# ax[1].legend(['loc1', 'loc2', 'loc3', 'loc4', 'Later_AVG'], loc='upper left')
# # plt.xticks(subset.times*1000)
# ax[2].plot(subset.times, smooth_evkavg_e, color='red', linewidth=lw2)
# ax[2].plot(subset.times, smooth_evkavg_l, color='green', linewidth=lw2)
# ax[2].legend(['Early_AVG', 'Later_AVG'], loc='upper left')
# for ii in range(3):
# ax[ii].axvline(x=0, color='gray', linewidth=5., linestyle='--')
# plt.show()
# +
from scipy.signal import savgol_filter
fig, ax = plt.subplots(4,1,figsize=(15,14))
evk_data_e = np.mean(avgact_e, axis=1)
evk_data_l = np.mean(avgact_l, axis=1)
smooth_evke = savgol_filter(evk_data_e[0,:],23, 3)
smooth_evkl = savgol_filter(evk_data_l[0,:],23, 3)
ax[0].plot(smooth_evke, linewidth=4.0)
ax[0].plot(smooth_evkl, linewidth=4.0)
ax[0].legend(['loc1 Early', 'loc1 Later'])
smooth_evke = savgol_filter(evk_data_e[1,:],23, 3)
smooth_evkl = savgol_filter(evk_data_l[1,:],23, 3)
ax[1].plot(smooth_evke, linewidth=4.0)
ax[1].plot(smooth_evkl, linewidth=4.0)
ax[1].legend(['loc2 Early', 'loc2 Later'])
smooth_evke = savgol_filter(evk_data_e[2,:],23, 3)
smooth_evkl = savgol_filter(evk_data_l[2,:],23, 3)
ax[2].plot(smooth_evke, linewidth=4.0)
ax[2].plot(smooth_evkl, linewidth=4.0)
ax[2].legend(['loc3 Early', 'loc3 Later'])
smooth_evke = savgol_filter(evk_data_e[3,:],23, 3)
smooth_evkl = savgol_filter(evk_data_l[3,:],23, 3)
ax[3].plot(smooth_evke, linewidth=4.0)
ax[3].plot(smooth_evkl, linewidth=4.0)
ax[3].legend(['loc4 Early', 'loc4 Later'])
plt.show()
# -
# ## Group by previous location
# +
subset = epochs['pred']['non'].copy()
subset = subset.pick_types(eeg=True)
subset.crop(tmin=-0.4,tmax=0.5)
if subset['Block==6'].metadata.Ptrn_Type.values.shape[0]>0:
main_ptrn = subset['Block==6'].metadata.Ptrn_Type.values[0]
else:
main_ptrn = subset['Block==8'].metadata.Ptrn_Type.values[0]
# +
# only later blocks
subset_E = subset['Block<11'].copy()
subset_E = subset['Block>6'].copy()
# Group data based on the previous trial
Grp1_E = subset_E['Trgt_Loc_prev==1'].copy()
Grp2_E = subset_E['Trgt_Loc_prev==2'].copy()
Grp3_E = subset_E['Trgt_Loc_prev==3'].copy()
Grp4_E = subset_E['Trgt_Loc_prev==4'].copy()
print(Grp1_E._data.shape)
print(Grp2_E._data.shape)
print(Grp3_E._data.shape)
print(Grp4_E._data.shape)
Grp1_E._data = (Grp1_E._data - np.mean(Grp1_E._data)) / np.std(Grp1_E._data)
Grp2_E._data = (Grp2_E._data - np.mean(Grp2_E._data)) / np.std(Grp2_E._data)
Grp3_E._data = (Grp3_E._data - np.mean(Grp3_E._data)) / np.std(Grp3_E._data)
Grp4_E._data = (Grp4_E._data - np.mean(Grp4_E._data)) / np.std(Grp4_E._data)
# +
# ind1=min( [Loc1_E._data.shape[0], Loc2_E._data.shape[0] , \
# Loc3_E._data.shape[0], Loc4_E._data.shape[0]] )
# ind2=Loc1_E._data.shape[1]
# ind3=Loc1_E._data.shape[2]
# avgp1_autcrr = np.zeros((4, ind1, ind2, ind3))
# avgp1_autcrr[0,:,:,:]=Loc1_E._data[:ind1,:,:]
# avgp1_autcrr[1,:,:,:]=Loc2_E._data[:ind1,:,:]
# avgp1_autcrr[2,:,:,:]=Loc3_E._data[:ind1,:,:]
# avgp1_autcrr[3,:,:,:]=Loc4_E._data[:ind1,:,:]
# avgact_e = np.mean(avgp1_autcrr, axis=1)
# +
dtset=subset_E.copy()
inds = np.zeros((4,1))
for iind in range(4):
inds[iind] = dtset['Trgt_Loc_prev==%s' %(iind+1)]._data.shape[0]
ind1=int(min(inds))
ind2=dtset['Trgt_Loc_prev==1']._data.shape[1]
ind3=dtset['Trgt_Loc_prev==1']._data.shape[2]
print(min_inds)
avgp1_autcrr = np.zeros((4, ind1, ind2, ind3))
for iloc in range(4):
print(iloc)
dt = dtset['Trgt_Loc_prev==%s' %(iind+1)]._data
print(dt.shape)
dt = dt[:ind1,:,:]
print(dt.shape)
dt = dt - np.mean(dt) / np.std(dt)
print(dt.shape)
avgp1_autcrr[iloc,:,:,:] = dt
print(avgp1_autcrr.shape)
# -
min_inds=int(min_inds)
print(dt[:min_inds,:,:].shape)
# +
# only later blocks
subsetL = subset['Block<11'].copy()
subsetL = subset['Block>6'].copy()
# Group data based on the previous trial
Grp1L = subsetL['Trgt_Loc_prev==1'].copy()
Grp2L = subsetL['Trgt_Loc_prev==2'].copy()
Grp3L = subsetL['Trgt_Loc_prev==3'].copy()
Grp4L = subsetL['Trgt_Loc_prev==4'].copy()
print(Grp1L._data.shape)
print(Grp2L._data.shape)
print(Grp3L._data.shape)
print(Grp4L._data.shape)
Grp1L._data = (Grp1L._data - np.mean(Grp1L._data)) / np.std(Grp1L._data)
Grp2L._data = (Grp2L._data - np.mean(Grp2L._data)) / np.std(Grp2L._data)
Grp3L._data = (Grp3L._data - np.mean(Grp3L._data)) / np.std(Grp3L._data)
Grp4L._data = (Grp4L._data - np.mean(Grp4L._data)) / np.std(Grp4L._data)
# -
# # calculate autocorrelation for each location
# +
map_r_l = np.zeros((4, Loc1_E._data.shape[2], Loc1_E._data.shape[2]))
data = Loc1_L._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_l[0,:,:] = generate_correlation_map(x1, x1)
data = Loc2_L._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_l[1,:,:] = generate_correlation_map(x1, x1)
data = Loc3_L._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_l[2,:,:] = generate_correlation_map(x1, x1)
data = Loc4_L._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_l[3,:,:] = generate_correlation_map(x1, x1)
map_r_e = np.zeros((4, Loc1_E._data.shape[2], Loc1_E._data.shape[2]))
data = Loc1_E._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_e[0,:,:] = generate_correlation_map(x1, x1)
data = Loc2_E._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_e[1,:,:] = generate_correlation_map(x1, x1)
data = Loc3_E._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_e[2,:,:] = generate_correlation_map(x1, x1)
data = Loc4_E._data
dt = np.mean(data, axis=1) #avg over channels
x1 = np.transpose(dt)
map_r_e[3,:,:] = generate_correlation_map(x1, x1)
map_r_d = map_r_l - map_r_e
# -
print(map_r_l.shape)
print(map_r_e.shape)
print(map_r_d.shape)
# plot ------------- #
vmin=-1
vmax=1
title='noneFilterNoBasline'
plot_autocorr_eachGrp(title, avgmap_e, avgmap_l, avgmap_d, vmin, vmax)
| jupyter/tinkering/tinkering_readPrepEpochsLoc_myData_messy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="uYVvm10rKNLZ" colab_type="text"
# # <NAME> Acute Myeloid / Lymphoblastic Leukemia AI Research Project
# + [markdown] id="g9vdvFtKeeUR" colab_type="text"
# ## Detecting Acute Lymphoblastic Leukemia With Keras & Tensorflow
# **Using the ACUTE LEUKEMIA CLASSIFICATION USING CONVOLUTION NEURAL NETWORK IN CLINICAL DECISION SUPPORT SYSTEM paper & ALL_IDB2**
#
# 
#
# In this notebook you will create and train a Convolutional Neural Network, (KAllCNN_IDB2), to detect Acute Lymphoblastic Leukemia (ALL) using Keras & Tensorflow on Google Colab. The architecture you will create is based on the network proposed in the [ACUTE LEUKEMIA CLASSIFICATION USING CONVOLUTION NEURAL NETWORK IN CLINICAL DECISION SUPPORT SYSTEM](https://airccj.org/CSCP/vol7/csit77505.pdf "ACUTE LEUKEMIA CLASSIFICATION USING CONVOLUTION NEURAL NETWORK IN CLINICAL DECISION SUPPORT SYSTEM") paper by Thanh.TTP, <NAME>, Jin-Hyeok Park, Kwang-Seok Moon, Suk-Hwan Lee, and Ki-<NAME>won.
#
# This notebook is written by [<NAME>](https://www.petermossamlallresearch.com/team/adam-milton-barker/profile "<NAME>") and based on a notebook written by [<NAME>](https://www.petermossamlallresearch.com/team/amita-kapoor/profile "<NAME>"), and [<NAME>](https://www.petermossamlallresearch.com/students/student/taru-jain/profile "<NAME>"), one of the students from the [AML/ALL AI Student Program](https://www.petermossamlallresearch.com/students/ "AML/ALL AI Student Program").
#
# + [markdown] id="VyMwJj8KwanG" colab_type="text"
# # ALL Image Database for Image Processing
#
# 
# _Fig 1. Samples of augmented data generated using ALL_IDB1 from the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset._
#
#
# The [Acute Lymphoblastic Leukemia Image Database for Image Processing](https://homes.di.unimi.it/scotti/all/) dataset created by [<NAME>, Associate Professor Dipartimento di Informatica, Università degli Studi di Milano](https://homes.di.unimi.it/scotti/) is used in this notebook, you will use the **ALL_IDB2** dataset.
# + [markdown] id="pyKxZsy_KJvh" colab_type="text"
# ## Gain Access To ALL-IDB
#
# You you need to be granted access to use the Acute Lymphoblastic Leukemia Image Database for Image Processing dataset. You can find the application form and information about getting access to the dataset on [this page](https://homes.di.unimi.it/scotti/all/#download) as well as information on how to contribute back to the project [here](https://homes.di.unimi.it/scotti/all/results.php).
# + [markdown] id="YySDuFYdZm3-" colab_type="text"
# # Clone AML & ALL Classifiers Repository
#
# First of all you should clone the [AML & ALL Classifiers](https://github.com/AMLResearchProject/AML-ALL-Classifiers/ "AML & ALL Classifiers") repo to your device. To do this you can navigate to the location you want to clone the repository to on your device using terminal (cd Your/Clone/Location), and then use the following command:
#
# ```
# $ git clone https://github.com/AMLResearchProject/AML-ALL-Classifiers.git
# ```
#
# Once you have used the command above you will see a directory called **AML-ALL-Classifiers** in the location you chose to clone the repo to. In terminal, navigate to the **AML-ALL-Classifiers/Python/_Keras/AllCNN/Paper_1/ALL_IDB2/Non_Augmented/** directory, this is your project root directory.
# + [markdown] id="sk-PAX1d3zrG" colab_type="text"
# # Google Drive / Colab
# + [markdown] id="IUNBpUB1bHky" colab_type="text"
# ## Upload Project Root To Google Drive
# Now you need to upload the project root to your Google Drive, placing the tif files from the ALL_IDB2 dataset in the **Model/Data/Training/** directory.
# + [markdown] id="0tIn2DdxRQ1f" colab_type="text"
# ## Mount Google Drive In Colab
#
# 
# _Fig 2. Example of Colab connected to Google Drive._
#
# The first step is to mount your Google Drive in Colab.
#
# **To do this execute the following code block and follow the steps provided:**
# + id="gunJpXRReMos" colab_type="code" outputId="bc47a68d-1b3b-4da1-da42-bb5449c1472a" colab={"base_uri": "https://localhost:8080/", "height": 122}
# %matplotlib inline
import sys
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
# + [markdown] id="gsJef2VNSOp2" colab_type="text"
# # Install & Import Requirements
# **Install and import requirements by executing the following code block:**
# + id="b7QNK6GSeS1t" colab_type="code" outputId="fa0d1a83-58e3-4ea5-9282-a0a2687a569c" colab={"base_uri": "https://localhost:8080/", "height": 284}
# !pip install keras_metrics
import os, cv2, keras, keras_metrics, matplotlib.image, random
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0'
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from keras import backend as K
from keras import layers
from keras.layers import Activation, Dense, Dropout, Conv2D
from keras.layers import Flatten, MaxPooling2D, ZeroPadding2D
from keras.models import load_model, Model, model_from_json, Sequential
from keras.optimizers import Adam
from keras.preprocessing import image
from keras.utils import np_utils
from numpy.random import seed
from pathlib import Path
from scipy import ndimage
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from tensorflow import set_random_seed
# %matplotlib inline
seed(3)
set_random_seed(3)
# + [markdown] id="0rM9PNsciUr_" colab_type="text"
# # Program Settings
# Update the settings below to match the locations of the dataset, augmented and test directories on your Google Drive.. The Classes directory path is added allowing the files in the Classes directory to be executed.
#
# **Once you have updated the settings below, execute the following code block:**
# + id="gIC21U66ifLn" colab_type="code" outputId="ae765945-cac3-40ab-a96d-db9e7b49d22f" colab={"base_uri": "https://localhost:8080/", "height": 156}
local_drive = "/content/gdrive/My Drive/"
project_root = "AMLResearchProject/AML-ALL-Classifiers/Python/_Keras/AllCNN/Paper_1/ALL_IDB2/Non_Augmented/"
project_root_full = local_drive + project_root
sys.path.append(project_root_full + 'Classes')
import Data as AllCnnData
import Helpers as AllCnnHelpers
core = AllCnnHelpers.Helpers("Classifier", project_root_full)
configs = core.confs
model_root_path = project_root_full + configs["model_root"]
model_path = model_root_path + "/" + configs["model_file"]
data_dir = model_root_path + configs["data_dir"]
training_dir = model_root_path + data_dir + "/" + configs["training_dir"]
validation_dir = model_root_path + data_dir + "/" + configs["validation_dir"]
batch_size = configs["batch_size"]
epochs = configs["epochs"]
val_steps = configs["val_steps"]
core.logger.info("Class Path: " + project_root + 'Classes')
core.logger.info("Data Path: " + data_dir)
core.logger.info("Model Path: " + model_path)
core.logger.info("Model Root Dir: " + model_root_path)
core.logger.info("Project Root: " + project_root)
core.logger.info("Program settings setup complete.")
# + [markdown] id="GcyAS5yGSZul" colab_type="text"
# # Prepare Your Data
# Now you need to prepare your training and validation data.
# + [markdown] id="YHGp5wXF1bHj" colab_type="text"
# ## Proposed Training / Validation Sets
# In the paper the authors use the **ALL_IDB1** dataset. The paper proposes the following training and validation sets proposed in the paper, where **Normal cell** refers to ALL negative examples and **Abnormal cell** refers to ALL positive examples.
#
# | | Training Set | Test Set |
# | --- | --- | --- |
# | Normal cell | 40 | 19 |
# | Abnormal cell | 40 | 9 |
# | **Total** | **80** | **28** |
# + [markdown] id="oCPaKVIOwXSA" colab_type="text"
# You can view the notebook using **ALL_IDB1** here. In this notebook however, you are going to use the **ALL_IDB2** dataset. On [Fabio Scotti's ALL-IDB website](https://homes.di.unimi.it/scotti/all), Fabio provides a [guideline for reporting your results when using ALL-IDB](https://homes.di.unimi.it/scotti/all/results.php). In this guideline a benchmark is proposed, this benchmark includes testing with both **ALL_IDB1** & **ALL_IDB2**:
#
# > "A system capable to identify the presence of blast cells in the input image can work with different structures of modules, for example, it can processes the following steps: (i) the identification of white cells in the image, (ii) the selection of Lymphocytes, (iii) the classification of tumor cell. Each single step typically contains segmentation/ classification algorithms. In order to measure and fairly compare the identification accuracy of different structures of modules, we propose a benchmark approach partitioned in three different tests, as follows:"
#
# * Cell test - the benchmark account for the classification of single cells is blast or not (the test is positive if the considered cell is blast cell or not);
# * Image level - the whole image is classified (the test is positive if the considered image contains at least one blast cell or not).
#
# In the paper the authors do not cover using **ALL_IDB2**. As ALL_IDB2 has an equal amount of images in each class (130 per class) you will use the entire ALL_IDB2 dataset with a test split of 20%.
#
# If you haven't already, navigate to the **AML-ALL-Classifiers/Python/_Keras/AllCNN/Paper_1/ALL_IDB2/Non_Augmented/Model/Data/Training/** directory and upload the **tif** files from **ALL_IDB2**.
# + [markdown] id="_8rp1G9y5vdF" colab_type="text"
# ## Sort Your Data
# **Ensure that you have completed all steps above and then execute the following code block to sort/split your data, recreating the dataset splits proposed in the paper:**
#
# (This may take some time)
# + id="tOQuYSwWecnU" colab_type="code" outputId="b6bad497-7b2c-4670-c3d2-780aae0ec494" colab={"base_uri": "https://localhost:8080/", "height": 51}
AllData = AllCnnData.Data(core.logger, configs)
data, labels = AllData.prepare_data(data_dir)
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.2, random_state=3)
# + id="t0WFNPMdemCS" colab_type="code" outputId="6f4bcb9b-6313-4dfe-ee95-4038f384c6e0" colab={"base_uri": "https://localhost:8080/", "height": 51}
print(X_train.shape)
print(X_test.shape)
# + id="5ntUYKrae9Uj" colab_type="code" outputId="d2488f40-bdc9-44b7-cf88-609748436566" colab={"base_uri": "https://localhost:8080/", "height": 51}
print(y_train.shape)
print(y_test.shape)
# + [markdown] id="ZlMAfNn620EF" colab_type="text"
# ## Shuffle Data
# Shuffle the new training data, remembering to use our seed of 3.
#
# **To shuffle the training date, execute the following code block:**
# + id="8_ZHnUa53bbI" colab_type="code" colab={}
data = np.asarray(X_train)
labels = np.asarray(y_train)
Data, Label = shuffle(data, labels, random_state = 3)
data_list = [Data, Label]
# + [markdown] id="KIup5MLn3hP3" colab_type="text"
# # View Dataset Sample
# **To view a sample of your dataset, execute the following code block:**
# + id="9Iu5RJdiH3_2" colab_type="code" outputId="d4bf5988-4401-4805-ea3f-19a1f2a6f403" colab={"base_uri": "https://localhost:8080/", "height": 464}
y = np.argmax(Label, axis=-1)
f, ax = plt.subplots(4, 5, figsize=(30, 7))
for i in range(0, 20):
ax[i//5, i%5].imshow(Data[i])
if y[i]==1:
ax[i//5, i%5].set_title("Non-ALL")
else:
ax[i//5, i%5].set_title("ALL")
# + [markdown] id="JaeNqg-pyfVT" colab_type="text"
# # Model Architecture
# <img src="https://www.PeterMossAmlAllResearch.com/media/images/repositories/paper_1_architecture.png" alt="Proposed Architecture" />
#
# _Fig 3. Proposed Architecture ([Source](https://airccj.org/CSCP/vol7/csit77505.pdf "Source"))_
# + [markdown] id="Twcvifxw6uGJ" colab_type="text"
# ## Proposed Architecture
#
# In the [ACUTE LEUKEMIA CLASSIFICATION USING CONVOLUTION NEURAL NETWORK IN CLINICAL DECISION SUPPORT SYSTEM](https://airccj.org/CSCP/vol7/csit77505.pdf "ACUTE LEUKEMIA CLASSIFICATION USING CONVOLUTION NEURAL NETWORK IN CLINICAL DECISION SUPPORT SYSTEM") paper the authors explain the layers they used to create their convolutional neural network.
#
# > "In this work, we proposed a network contains 4 layers. The first 3 layers for detecting features
# and the other two layers (Fully connected and Softmax) are for classifying the features. The input
# image has the size [50x50x3]. The receptive field (or the filter size) is 5x5. The stride is 1 then we
# move the filters one pixel at a time. The zero-padding is 2. It will allow us to control the spatial
# size of the output image (we will use it to exactly preserve the spatial size of the input volume so
# the input and output width and height are the same). During the experiment, we found that in our
# case, altering the size of original image during the convolution lead to decrease the accuracy
# about 40%. Thus the output image after convolution layer 1 has the same size with the input
# image."
#
# > "The convolution layer 2 has the same structure with the convolution layer 1. The filter size is 5x5,
# the stride is 1 and the zero-padding is 2. The number of feature maps (the channel or the depth) in
# our case is 30. If the number of feature maps is lower or higher than 30, the accuracy will
# decrease 50%. By experiment, we found the accuracy also decrease 50% if we remove
# Convolution layer 2.""
#
# > "The Max-Pooling layer 25x25 has Filter size is 2 and stride is 2. The fully connected layer has 2
# neural. Finally, we use the Softmax layer for the classification. "
#
# Like Amita & Taru's notebook, this notebook introduces droupout layers to avoid overfitting. In this case your network has two dropout layers both having different dropout rates. There is no mention of activations for the convolutional layers so **RELU** has been used.
#
# **To recreate the proposed architecture, execute the following code block:**
# + id="f546l--jIJrf" colab_type="code" outputId="583bcd7a-efd8-40c0-f5b4-3510d813a95a" colab={"base_uri": "https://localhost:8080/", "height": 275}
model = Sequential()
model.name="KAllCnn_IDB2"
model.add(ZeroPadding2D(padding=(2, 2), input_shape=X_train.shape[1:]))
model.add(Conv2D(30, (5, 5), strides=1, padding = "valid", input_shape = X_train.shape[1:], activation = 'relu'))
model.add(Dropout(0.4))
model.add(ZeroPadding2D(padding=(2, 2), input_shape=X_train.shape[1:]))
model.add(Conv2D(30, (5, 5), strides=1, padding = "valid", activation = 'relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2, padding = 'valid'))
model.add(Dropout(0.6))
model.add(Flatten())
model.add(Dense(2))
model.add(Activation("softmax"))
# + [markdown] id="cjZoldSyvKeG" colab_type="text"
# ## View Network Architecture (Summary)
# **To view a summary of your networ architecture, execute the following code block:**
# + id="KQ1PCsxIIUHZ" colab_type="code" outputId="4c9be2b9-fbce-4621-9231-c786b5c44bc1" colab={"base_uri": "https://localhost:8080/", "height": 493}
model.summary()
# + [markdown] id="w6z-rvbNwVYr" colab_type="text"
# # Compile & Fit Your Model
#
# In the following code block first the program complies the model, then fits the model. The **validation_data**, _(X_test, y_test)_, and the number of **validation_steps** are passed to the **Keras model.fit** function. This means that in addition to showing training loss, accuracy, precision and recall the program will show validation loss, accuracy, precision and recall in its output.
#
# **Assuming you have completed all above steps you can execute the following code block to beging training:**
#
# + id="uZzqkS_OJYKl" colab_type="code" outputId="7b94c17c-0df1-4b98-a159-703060e4665e" colab={"base_uri": "https://localhost:8080/", "height": 1000}
optimizer = keras.optimizers.rmsprop(lr = 0.0001, decay = 1e-6)
model.compile(loss = 'binary_crossentropy', optimizer = optimizer,
metrics = ['accuracy', keras_metrics.precision(), keras_metrics.recall()])
history = model.fit(X_train, y_train, validation_data = (X_test, y_test), validation_steps = val_steps,
steps_per_epoch = int(len(X_train)/batch_size), epochs = epochs)
history
# + [markdown] id="xVh6O0zyNlKC" colab_type="text"
# # Evaluate Your Model
# Now we will evaluate how well our model has done.
# + [markdown] id="TY64b46pfe1s" colab_type="text"
# ## View Metrics Names
# **Execute the following code block to view the names of the metrics used during training:**
# + id="Hi8TJxq_SmLb" colab_type="code" outputId="ed1a17bb-0019-4a7c-f09c-19ada1f66a1e" colab={"base_uri": "https://localhost:8080/", "height": 34}
model.metrics_names
# + [markdown] id="BmyFkSCnTN3u" colab_type="text"
# ## Evaluate Model & Print Metrics
# **Execute the following code block to evaluate your model and print the training metrics:**
# + id="wtplOWRhJqm0" colab_type="code" outputId="185fb4d2-9bd3-47a3-dbb3-4cb2a5f7eb08" colab={"base_uri": "https://localhost:8080/", "height": 85}
score = model.evaluate(X_test, y_test, verbose=0)
score
# + [markdown] id="rs7RjY25TuQF" colab_type="text"
# ## Generate AUC Score
# **Execute the following code block to generate your AUC score:**
# + id="qMeKHZ89bX-V" colab_type="code" outputId="00f72079-4cf4-4c5b-d75f-4f634d117fa0" colab={"base_uri": "https://localhost:8080/", "height": 34}
roc_auc_score(y_test, model.predict_proba(X_test))
# + [markdown] id="QWlYTjriVVHT" colab_type="text"
# # Results
#
# Below are the training results for 100 epochs.
#
# | Loss | Accuracy | Precision | Recall | AUC |
# |------|---|---|--|--|
# | 0.083 (~0.84) | 0.961 (~96%) | 0.961 (~0.96) | 0.961 (~0.96) | 0.997 (~1.0) |
# + [markdown] id="pPZyb9RDTzhl" colab_type="text"
# ## Visualise Metrics
# + [markdown] id="FLRWNFmXEcUm" colab_type="text"
# ### Training Loss & Accuracy
# + id="AqrPzxbmbmdR" colab_type="code" outputId="53ab740d-d384-49dc-d39c-6f51cdf4820c" colab={"base_uri": "https://localhost:8080/", "height": 499}
training_acc = history.history['acc']
training_loss = history.history['loss']
plt.figure(figsize = (8, 8))
plt.subplot(2, 1, 1)
plt.plot(training_acc, label = 'Training Accuracy')
plt.legend(loc = 'lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training Accuracy')
plt.subplot(2, 1, 2)
plt.plot(training_loss, label = 'Training Loss')
plt.legend(loc = 'upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,max(plt.ylim())])
plt.title('Training Loss')
plt.show()
# + [markdown] id="1fsmstZgUm1Q" colab_type="text"
# ### Validation Loss & Accuracy
# + id="nA29J5mjdcx6" colab_type="code" outputId="ddbab586-7762-42f7-d129-0a50d9b24224" colab={"base_uri": "https://localhost:8080/", "height": 499}
validation_acc = history.history['val_acc']
validation_loss = history.history['val_loss']
plt.figure(figsize = (8, 8))
plt.subplot(2, 1, 1)
plt.plot(validation_acc, label = 'Validation Accuracy')
plt.legend(loc = 'lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(validation_loss, label = 'Validation Loss')
plt.legend(loc = 'upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,max(plt.ylim())])
plt.title('Validation Loss')
plt.show()
# + [markdown] id="GzHuKBkVrFDW" colab_type="text"
# ## Predictions
# + id="2_0rEhYFrJIF" colab_type="code" outputId="492747de-f43c-4ee2-9b52-a800de608d45" colab={"base_uri": "https://localhost:8080/", "height": 901}
y_pred = model.predict(X_test)
y_pred
# + [markdown] id="RwEulbUTEkfz" colab_type="text"
# ### Confusion Matrix
# + id="uRPExZhDtA5I" colab_type="code" outputId="52d7127c-a845-4f6c-9c40-c18dab807248" colab={"base_uri": "https://localhost:8080/", "height": 51}
matrix = confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1))
matrix
# + id="D-Pa_ytfvQoq" colab_type="code" outputId="337549a2-2fe7-4d47-b967-310dbf94fb1d" colab={"base_uri": "https://localhost:8080/", "height": 278}
plt.imshow(matrix, cmap=plt.cm.Blues)
plt.xlabel("Predicted labels")
plt.ylabel("True labels")
plt.xticks([], [])
plt.yticks([], [])
plt.title('Confusion matrix ')
plt.colorbar()
plt.show()
# + [markdown] id="ABl7pPxwKWjC" colab_type="text"
# ## Results on ALL-IDB (Images)
#
# + id="vyjHpRrhOvb7" colab_type="code" outputId="82c30808-2734-47a6-8e8c-90b8834a5e7c" colab={"base_uri": "https://localhost:8080/", "height": 34}
TN = matrix[0][0]
FN = matrix[0][1]
TP = matrix[1][0]
FP = matrix[1][1]
(TP, FP, TN, FN)
# + id="lVEDjCb5RnqO" colab_type="code" outputId="175ed0e0-6bb3-4bf8-f22e-f673a351254e" colab={"base_uri": "https://localhost:8080/", "height": 34}
test_len = len(X_test)
TPP = (TP * 100) / test_len
FPP = (FP * 100) / test_len
FNP = (FN * 100) / test_len
TNP = (TN * 100) / test_len
(TPP, FPP, TNP, FNP)
# + id="iodAfPQeNFBu" colab_type="code" outputId="7ce9d239-9101-40e1-d81c-93d0160b61c8" colab={"base_uri": "https://localhost:8080/", "height": 34}
specificity = TN/(TN+FP)
specificity
# + id="58W0lm17nrc2" colab_type="code" outputId="c9f110fd-383f-4dab-d43b-d041c8ff3421" colab={"base_uri": "https://localhost:8080/", "height": 34}
specificity = (specificity * 100) / test_len
specificity
# + id="wsq1dDM-NJiQ" colab_type="code" outputId="732237e2-aab8-404d-fe88-7f7c7b4c530f" colab={"base_uri": "https://localhost:8080/", "height": 34}
misc = FP + FN
misc
# + colab_type="code" outputId="6b6b5fe6-929b-49e8-c173-6ad7c14649ab" id="HIBNGPJkjt0X" colab={"base_uri": "https://localhost:8080/", "height": 34}
misc = (misc * 100) / test_len
misc
# + [markdown] id="-DdbP3BknaA2" colab_type="text"
# ### Figures Of Merit
# + [markdown] id="4lXHj5vILLvE" colab_type="text"
#
# | Figures of merit | Value | Percentage |
# | ---------------- | ----- | ---------- |
# | True Positives | 1 | 1.92% |
# | False Positives | 25 | 48.08%
# | True Negatives | 25 | 48.08% |
# | False Negatives | 1 | 1.92% |
# | Misclassification | 26 | 50.00% |
# | Sensitivity / Recall | 0.96 | 96% |
# | Specificity | 0.5 | 50% |
#
# + [markdown] id="CBFCWOCposL7" colab_type="text"
# # Save Your Keras Model
#
# Now you will save your Keras model and weights so that they can be used again.
# + [markdown] id="IrgjTj8ApKuy" colab_type="text"
# ## Save Model As Json
# + id="tYrOPH1apCU4" colab_type="code" colab={}
with open(model_path, "w") as file:
file.write(model.to_json())
# + [markdown] id="H-UiwZMXpPpr" colab_type="text"
# ## Save Weights
# + id="4o3alUuspEK6" colab_type="code" colab={}
model.save_weights(model_root_path + "/weights.h5")
# + [markdown] id="w_b4v8LZ-9AT" colab_type="text"
# # Load Your Saved Keras Model
# + id="s7YTAT-O_GOU" colab_type="code" outputId="a6a3ba64-39a0-46e3-9b33-6f75b3ea6527" colab={"base_uri": "https://localhost:8080/", "height": 493}
with open(model_path, "r") as file:
jmodel = file.read()
K.set_learning_phase(0)
model = model_from_json(jmodel)
model.load_weights(model_root_path + "/weights.h5")
model.summary()
# + [markdown] id="5JA27GH_c4yR" colab_type="text"
# # Contributing
#
# The Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research project encourages and welcomes code contributions, bug fixes and enhancements from the Github.
#
# **Please read the [CONTRIBUTING](https://github.com/AMLResearchProject/AML-ALL-Classifiers/blob/master/CONTRIBUTING.md "CONTRIBUTING") document for a full guide to forking our repositories and submitting your pull requests. You will also find information about our code of conduct on this page.**
# + [markdown] id="Gx76M0YiyU8x" colab_type="text"
# ## Acute Myeloid & Lymphoblastic Leukemia Classifiers Contributors
#
# - [<NAME>](https://github.com/AdamMiltonBarker "<NAME>") - Bigfinite IoT Network Engineer & Intel Software Innovator, Barcelona, Spain
# - [<NAME>](https://github.com/salvatorera "<NAME>") - PhD Immunolgy / Bioinformaticia, Bologna, Italy
# - [Dr <NAME>](https://github.com/salvatorera "Dr <NAME>") - Delhi University, Delhi, India
#
# + [markdown] id="yZKmkJrPyHPr" colab_type="text"
# # Versioning
#
# We use SemVer for versioning. For the versions available, see [Releases](https://github.com/AMLResearchProject/AML-ALL-Classifiers/releases "Releases").
# + [markdown] id="ih7zGxLyyNdk" colab_type="text"
# # License
#
# This project is licensed under the **MIT License** - see the [LICENSE](https://github.com/AMLResearchProject/AML-ALL-Classifiers/blob/master/LICENSE "LICENSE") file for details.
# + [markdown] id="tbJhAHK7yQLb" colab_type="text"
# # Bugs/Issues
#
# We use the [repo issues](https://github.com/AMLResearchProject/AML-ALL-Classifiers/issues "repo issues") to track bugs and general requests related to using this project.
| Projects/Keras/AllCNN/Paper_1/ALL_IDB2/Non_Augmented/AllCNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.5.0-rc4
# language: julia
# name: julia-0.5
# ---
# ### Integral BL code for cylinder
# +
#ntime=cfl*dx/ttime
ntime=100
cfl= 0.2
ncell = 64
dx = pi/(real(ncell) + 1.)
ttime = 1.
time = 0.005
uinf = 1.
c = 1.
re = 1000
x = zeros(ncell+2)
ue = zeros(ncell+2)
uex = zeros(ncell+2)
uet = zeros(ncell+2)
#Set sources
for i = 1:ncell+2
x[i] = real(i-1)*dx
ue[i] = 2*sin(x[i])
uex[i] = 2*cos(x[i])
end
uet[1:ncell+2] = 0
E = zeros(ncell+2)
B = zeros(ncell+2)
del = zeros(ncell+2)
F = zeros(ncell+2)
dfde = zeros(ncell+2)
S = zeros(ncell+2)
unk = zeros(2,ncell+2)
#Set initial conditions
E[:] = 0.4142
for ic = 1:ncell+2
B[ic] = BfromE(E[ic])
F[ic] = FfromE(E[ic])
end
del = sqrt(B*time)
unk[1,:] = del
unk[2,:] = del.*(E + 1)
function FfromE(E::Float64)
F = 4.8274*E^4 - 5.9816*E^3 + 4.0274*E^2 + 0.23247*E + 0.15174
end
function BfromE(E::Float64)
if E < -0.0616
B = -225.86*E^3 - 3016.6*E^2 - 208.68*E - 17.915
elseif E > -0.0395
B = 131.9*E^3 - 167.32*E^2 + 76.642*E - 11.068
else
B = 0.5*(-225.86*E^3 - 3016.6*E^2 - 208.68*E - 17.915 + 131.9*E^3 - 167.32*E^2 + 76.642*E - 11.068)
end
end
function SfromE(E::Float64)
if E < -0.0582
S = 451.55*E^3 + 2010.*E^2 + 138.96*E + 11.296
elseif E > -0.042
S = -96.739*E^3 + 117.74*E^2 - 46.432*E + 6.8074
else
S = 0.5*(451.55*E^3 + 2010.*E^2 + 138.96*E + 11.296 - 96.739*E^3 + 117.74*E^2 - 46.432*E + 6.8074)
end
end
function dfdefromE(E::Float64)
dfde = 4*4.8274*E^3 - 3*5.9816*E^2 + 2*4.0274*E + 0.23247
end
function calcdt(cfl::Float64, lamb::Array{Float64,2})
dt = 10000
for ic = 1:size(lamb,2)
dti = cfl*(dx/(abs(lamb[1,ic]) + abs(lamb[2,ic])))
if dti < dt
dt=dti
end
end
return dt
end
function calcEigen(ue::Vector{Float64}, E::Vector{Float64}, F::Vector{Float64}, dfde::Vector{Float64})
ncell = length(E) - 2
lamb = zeros(2,ncell+2)
for ic = 1:ncell+2
aq = 1.
bq = -ue[ic]*(dfde[ic] - 1.)
cq = ue[ic]*ue[ic]*(E[ic]*dfde[ic] - F[ic])
lamb[1,ic] = (-bq + sqrt(bq*bq - 4*aq*cq))/(2*aq)
lamb[2,ic] = (-bq - sqrt(bq*bq - 4*aq*cq))/(2*aq)
#Always have lamb1 > lamb2
if lamb[2,ic] > lamb[1,ic]
temp = lamb[2,ic]
lamb[2,ic] = lamb[1,ic]
lamb[1,ic] = temp
end
end
return lamb
end
# +
function calc_flux(lamb::Array{Float64,2}, ue::Vector{Float64}, E::Vector{Float64}, del::Vector{Float64})
ncell = length(ue) - 2
flux = zeros(2,2,ncell+2)
Apos = zeros(2,2)
for ic = 1:ncell+2
if lamb[1,ic] >= 0. && lamb[2,ic] >= 0.
flux[1,1,ic] = ue[ic]*E[ic]*del[ic]
flux[1,2,ic] = ue[ic]*F[ic]*del[ic]
elseif lamb[1,ic] < 0. && lamb[2,ic] < 0.
flux[1,:,ic] = 0.
else
Apos[1,1] = (ue[ic]*lamb[1,ic]/(lamb[1,ic] - lamb[2,ic]))*(-1. - lamb[2,ic]/ue[ic])
Apos[1,2] = ue[ic]*lamb[1,ic]/(lamb[1,ic] - lamb[2,ic])
Apos[2,1] = -(ue[ic]*lamb[1,ic]/(lamb[1,ic] - lamb[2,ic]))*(1 + lamb[1,ic]/ue[ic])*(1 + lamb[2,ic]/ue[ic])
Apos[2,2] = (ue[ic]*lamb[1,ic]/(lamb[1,ic] - lamb[2,ic]))*(1 + lamb[1,ic]/ue[ic])
flux[1,1,ic] = Apos[1,1]*del[ic] + Apos[1,2]*(E[ic] + 1.)*del[ic]
flux[1,2,ic] = Apos[2,1]*del[ic] + Apos[2,2]*(E[ic] + 1.)*del[ic]
end
end
for ic = 1:ncell+2
if lamb[1,ic] >= 0. && lamb[2,ic] >= 0.
flux[2,:,ic] = 0.
elseif lamb[1,ic] < 0. && lamb[2,ic] < 0.
flux[2,1,ic] = ue[ic]*E[ic]*del[ic]
flux[2,2,ic] = ue[ic]*F[ic]*del[ic]
else
Aneg[1,1] = (ue[ic]*lamb[2,ic]/(lamb[1,ic] - lamb[2,ic]))*(1. + lamb[1,ic]/ue[ic])
Aneg[1,2] = -ue[ic]*lamb[2,ic]/(lamb[1,ic] - lamb[2,ic])
Aneg[2,1] = (ue[ic]*lamb[2,ic]/(lamb[1,ic] - lamb[2,ic]))*(1. + lamb[1,ic]/ue[ic])*(1. + lamb[2,ic]/ue[ic])
Aneg[2,2] = (ue[ic]*lamb[2,ic]/(lamb[1,ic] - lamb[2,ic]))*(-1. - lamb[2,ic]/ue[ic])
flux[2,1,ic] = Aneg[1,1]*del[ic] + Aneg[1,2]*(E[ic] + 1.)*del[ic]
flux[2,2,ic] = Aneg[2,1]*del[ic] + Aneg[2,2]*(E[ic] + 1.)*del[ic]
end
end
return flux
end
# -
lamb = zeros(2,ncell+2)
unkt = zeros(2,ncell+2)
unkh = zeros(2,ncell+2)
rhs = zeros(2,ncell+2)
flux = zeros(2,2,ncell+2)
crit = zeros(ncell+2)
Apos = zeros(2,2)
Aneg = zeros(2,2)
nstage = 2
# +
#Main loop over time steps
for i = 1:ntime
#i = 1
unk[:,1] = 2*unk[:,2] - unk[:,3]
unk[:,ncell+2] = 2*unk[:,ncell+1] - unk[:,ncell]
#Calculate derived quantities
for ic = 1:ncell+2
del[ic] = unk[1,ic]
E[ic] = (unk[2,ic]./del[ic]) - 1.
F[ic] = FfromE(E[ic])
B[ic] = BfromE(E[ic])
S[ic] = SfromE(E[ic])
dfde[ic] = dfdefromE(E[ic])
end
#Compute eigenvalues
lamb = calcEigen(ue, E, F, dfde)
#Compute timestep
dt = calcdt(cfl, lamb)
#Compute fluxes
flux = calc_flux(lamb, ue, E, del)
#compute rhs
for ic = 2:ncell+1
rhs[1,ic] = B[ic]/(2*del[ic]) - del[ic]*uet[ic]/ue[ic] - (E[ic] + 1.)*del[ic]*uex[ic]
rhs[2,ic] = S[ic]/del[ic] - 2*E[ic]*del[ic]*uet[ic]/ue[ic] - 2*F[ic]*del[ic]*uex[ic]
end
for ic = 2:ncell
unkh[1,ic] = unk[1,ic] - (dt/dx)*(flux[1,1,ic] - flux[1,1,ic-1] + flux[2,1,ic+1]
- flux[2,1,ic]) + dt*rhs[1,ic]
unkh[2,ic] = unk[2,ic] - (dt/dx)*(flux[1,2,ic] - flux[1,2,ic-1] + flux[2,2,ic+1]
- flux[2,2,ic]) + dt*rhs[2,ic]
end
ic = ncell+1
unkh[1,ic] = unk[1,ic] - (dt/dx)*(flux[1,1,ic] - flux[1,1,ic-1]) + dt*rhs[1,ic]
unkh[2,ic] = unk[2,ic] - (dt/dx)*(flux[1,2,ic] - flux[1,2,ic-1]) + dt*rhs[2,ic]
#Update ghost cells
unkh[:,1] = 2*unkh[:,2] - unkh[:,3]
unkh[:,ncell+2] = 2*unkh[:,ncell+1] - unkh[:,ncell]
#Calculate derived quantities
for ic = 1:ncell+2
del[ic] = unk[1,ic]
E[ic] = (unk[2,ic]./del[ic]) - 1.
F[ic] = FfromE(E[ic])
B[ic] = BfromE(E[ic])
S[ic] = SfromE(E[ic])
dfde[ic] = dfdefromE(E[ic])
end
#Compute eigenvalues
lamb = calcEigen(ue, E, F, dfde)
#Compute fluxes
fluxhalf = calc_flux(lamb, ue, E, del)
#compute rhs
for ic = 2:ncell+1
rhs[1,ic] = B[ic]/(2*del[ic]) - del[ic]*uet[ic]/ue[ic] - (E[ic] + 1.)*del[ic]*uex[ic]
rhs[2,ic] = S[ic]/del[ic] - 2*E[ic]*del[ic]*uet[ic]/ue[ic] - 2*F[ic]*del[ic]*uex[ic]
end
ic = 2
unk[1,ic] = 0.5*(unk[1,ic] + unkh[1,ic]) - (0.5*dt/dx)*(flux[1,1,ic] - flux[1,1,ic-1]
- flux[2,1,ic] + 2*flux[2,1,ic+1] - flux[2,1,ic+2] + fluxhalf[1,1,ic] - fluxhalf[1,1,ic-1]
+ fluxhalf[2,1,ic+1] - fluxhalf[2,1,ic]) + 0.5*dt*rhs[1,ic]
unk[2,ic] = 0.5*(unk[2,ic] + unkh[2,ic]) - (0.5*dt/dx)*(flux[1,2,ic] - flux[1,2,ic-1]
- flux[2,2,ic] + 2*flux[2,2,ic+1] - flux[2,2,ic+2] + fluxhalf[1,2,ic] - fluxhalf[1,2,ic-1]
+ fluxhalf[2,2,ic+1] - fluxhalf[2,2,ic]) + 0.5*dt*rhs[2,ic]
for ic = 3:ncell
unk[1,ic] = 0.5*(unk[1,ic] + unkh[1,ic]) - (0.5*dt/dx)*(flux[1,1,ic] - 2*flux[1,1,ic-1] +
flux[1,1,ic-2] - flux[2,1,ic] + 2*flux[2,1,ic+1] - flux[2,1,ic+2] + fluxhalf[1,1,ic]
-fluxhalf[1,1,ic-1] + fluxhalf[2,1,ic+1] - fluxhalf[2,1,ic]) + 0.5*dt*rhs[1,ic]
unk[2,ic] = 0.5*(unk[2,ic] + unkh[2,ic]) - (0.5*dt/dx)*(flux[1,2,ic] - 2*flux[1,2,ic-1]
+ flux[1,2,ic-2] - flux[2,2,ic] + 2*flux[2,2,ic+1] - flux[2,2,ic+2] + fluxhalf[1,2,ic]
- fluxhalf[1,2,ic-1] + fluxhalf[2,2,ic+1] - fluxhalf[2,2,ic]) + 0.5*dt*rhs[2,ic]
end
ic = ncell+1
unk[1,ic] = 0.5*(unk[1,ic] + unkh[1,ic]) - (0.5*dt/dx)*(flux[1,1,ic] - 2*flux[1,1,ic-1]
+ flux[1,1,ic-2] +fluxhalf[1,1,ic] - fluxhalf[1,1,ic-1]) + 0.5*dt*rhs[1,ic]
unk[2,ic] = 0.5*(unk[2,ic] + unkh[2,ic]) - (0.5*dt/dx)*(flux[1,2,ic] - 2*flux[1,2,ic-1]
+ flux[1,2,ic-2] +fluxhalf[1,2,ic] - fluxhalf[1,2,ic-1]) + 0.5*dt*rhs[2,ic]
time = time + dt
if time > ttime
quit()
end
for ic = 2:ncell+1
crit[ic] = abs((del[ic+1] - del[ic])/(del[ic] - del[ic-1]))
if abs(crit[ic]) > 10.
println(x[ic]," ", ue[ic]," ", crit[ic])
end
end
end
# -
time
using PyPlot
plot(x,del)
crit
| Notebooks/.ipynb_checkpoints/BL_cyl-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:anaconda3]
# language: python
# name: conda-env-anaconda3-py
# ---
# # CHATBOT TUTORIAL
#
# - https://pytorch.org/tutorials/beginner/chatbot_tutorial.html#
# - Handle loading and preprocessing of [Cornell Movie-Dialogs](https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) Corpus dataset
#
# %matplotlib inline
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import torch
from torch.jit import script, trace
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
import csv
import random
import re
import os
import unicodedata
import codecs
from io import open
import itertools
import math
USE_CUDA = torch.cuda.is_available()
device = torch.device("cuda" if USE_CUDA else "cpu")
# -
| books/chatbot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv3.6
# language: python
# name: venv3.6
# ---
import SimpleITK as sitk
import cv2
import sys
import os
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
fixed = sitk.ReadImage("BrainT1Slice.png", sitk.sitkFloat32)
moving = sitk.ReadImage("BrainT1SliceBorder20.png",sitk.sitkFloat32)
#sitk.Show(img1, title="cthead1")
def plot(img):
nda = sitk.GetArrayFromImage(img)
plt.imshow(nda)
plot(fixed)
plot(moving)
def command_iteration(method):
if (method.GetOptimizerIteration() == 0):
print("Estimated Scales: ", method.GetOptimizerScales())
print(f"{method.GetOptimizerIteration():3} = {method.GetMetricValue():7.5f} : {method.GetOptimizerPosition()}")
# +
pixelType = sitk.sitkFloat32
R = sitk.ImageRegistrationMethod()
R.SetMetricAsCorrelation()
R.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0,
minStep=1e-4,
numberOfIterations=500,
gradientMagnitudeTolerance=1e-8)
R.SetOptimizerScalesFromIndexShift()
tx = sitk.CenteredTransformInitializer(fixed, moving,
sitk.Similarity2DTransform())
R.SetInitialTransform(tx)
R.SetInterpolator(sitk.sitkLinear)
R.AddCommand(sitk.sitkIterationEvent, lambda: command_iteration(R))
outTx = R.Execute(fixed, moving)
print("-------")
print(outTx)
print(f"Optimizer stop condition: {R.GetOptimizerStopConditionDescription()}")
print(f" Iteration: {R.GetOptimizerIteration()}")
print(f" Metric value: {R.GetMetricValue()}")
# -
if ("SITK_NOSHOW" not in os.environ):
resampler = sitk.ResampleImageFilter()
resampler.SetReferenceImage(fixed)
resampler.SetInterpolator(sitk.sitkLinear)
resampler.SetDefaultPixelValue(1)
resampler.SetTransform(outTx)
out = resampler.Execute(moving)
simg1 = sitk.Cast(sitk.RescaleIntensity(fixed), sitk.sitkUInt8)
simg2 = sitk.Cast(sitk.RescaleIntensity(out), sitk.sitkUInt8)
cimg = sitk.Compose(simg1, simg2, simg1 // 2. + simg2 // 2.)
#sitk.Show(cimg, "ImageRegistration2 Composition")
plot(cimg)
# +
#sitk.WriteTransform(outTx, sys.argv[3])
| simpleitk_demo1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
#import csv from the dataset
import pandas as pd
df=pd.read_csv('c:/Users/Raghav.sharma/Desktop/dmLab/combinedfile2.csv')
df.head()
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=1)
df1=
corr=df.loc[:,["GENDER","RACE","ETHNIC","EDUC","EMPLOY","LIVARAG","PRIMINC","ARRESTS","STFIPS","REGION","DIVISION","SERVSETA","DAYWAIT","PSOURCE","NOPRIOR","SUB1","FRSTUSE1","FREQ1"]].corr()#["Survived"]
plt.figure(figsize=(10, 20))
sns.heatmap(corr, vmax=.8, linewidths=0.01,
square=True,annot=True,cmap='YlGnBu',linecolor="white")
plt.title('Correlation between features');
plt.show()
# +
#df.RACE.apply(df.value_counts).plot.pie(subplots = True)
#a.apply(pd.value_counts).plot.pie(subplots=True)
slices=df['RACE'].value_counts()
list(slices)
labels=[]
plt.pie(slices,labels=labels,colors=['white', '#0fff00','gold', 'yellowgreen', 'lightcoral', 'lightskyblue','red','yellow','blue'],startangle=90,shadow=True,autopct='%1.1f%%')
fig = plt.gcf()
fig.set_size_inches(6,6)
plt.legend(labels, loc="best")
plt.show()
# -
import matplotlib.pyplot as plt
# The slices will be ordered and plotted counter-clockwise.
labels = ['WHITE','BLACK OR AFRICAN AMERICAN','OTHER SINGLE RACE','AMERICAN INDIAN OTHER THAN ALASKA NATIVE','TWO OR MORE RACES','ASIAN','NATIVE HAWAIIAN OR OTHER PACIFIC ISLANDER','ALASKA NATIVE (ALEUT, ESKIMO, INDIAN)','ASIAN OR PACIFIC ISLANDER']
slices=df['RACE'].value_counts()
slices=slices.drop([-9])
colors = ['cyan', '#0fff00','gold', 'yellowgreen', 'lightcoral', 'lightskyblue','red','yellow','black']
patches, texts = plt.pie(slices, colors=colors, startangle=90)
plt.legend(patches, labels,loc = 'best')
# Set aspect ratio to be equal so that pie is drawn as a circle.
plt.axis('equal')
fig = plt.gcf()
fig.set_size_inches(5,5)
plt.tight_layout()
plt.title("RACE OF MENTAL PATIENTS OVER THE YEARS")
plt.show()
slices
import matplotlib.pyplot as plt
# The slices will be ordered and plotted counter-clockwise.
labels = ["NOT OF HISPANIC ORIGIN","PUERTO RICAN","MEXICAN","OTHER SPECIFIC HISPANIC","HISPANIC SPECIFIC ORIGIN NOT SPECIFIED","CUBAN"]
slices=df['ETHNIC'].value_counts()
slices=slices.drop([-9])
colors = ['cyan', '#0fff00', 'yellowgreen', 'lightcoral', 'lightskyblue','yellow']
patches, texts = plt.pie(slices, colors=colors, startangle=90)
plt.legend(patches, labels,loc = 'best')
# Set aspect ratio to be equal so that pie is drawn as a circle.
plt.axis('equal')
fig = plt.gcf()
fig.set_size_inches(5,5)
plt.tight_layout()
plt.title("ETHNICITY OF MENTAL PATIENTS OVER YEARS")
plt.show()
slices
df.groupby('AGE')['YEAR'].value_counts()
import matplotlib
matplotlib.style.use('ggplot')
# +
plt.figure()
df.groupby('AGE')['YEAR'].value_counts().unstack().plot()
plt.title('Number Of Patients by Age group')
ax = plt.gca() # grab the current axis
#ax.set_xticks([2,4,6]) # choose which x locations to have ticks
ax.set_xticklabels(["12-14","18-20","25-29","35-39","45-49","55 AND OVER"])
plt.show()
# set the labels to display at those ticks
# +
import matplotlib.pyplot as plt
ax = plt.gca() # grab the current axis
ax.set_xticks([1,2,3]) # choose which x locations to have ticks
ax.set_xticklabels([1,"key point",2]) # set the labels to display at those ticks
# -
| week2/Week2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Innarticles/Data-Science-Resources/blob/master/Copy_of_S%2BP_Week_4_Lesson_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="GNkzTFfynsmV" colab_type="code" outputId="f636dee8-8cde-4d19-ff5f-d712ef9a51c1" colab={"base_uri": "https://localhost:8080/", "height": 641}
# !pip install tensorflow==2.0.0b1
# + id="56XEQOGknrAk" colab_type="code" outputId="d52053ae-8ab5-4863-97c9-50073f912f46" colab={"base_uri": "https://localhost:8080/", "height": 34}
import tensorflow as tf
print(tf.__version__)
# + id="sLl52leVp5wU" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
# + id="tP7oqUdkk0gY" colab_type="code" outputId="549237ab-672c-49d5-b36e-bd196eae772f" colab={"base_uri": "https://localhost:8080/", "height": 208}
# !wget --no-check-certificate \
# https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv \
# -O /tmp/sunspots.csv
# + id="NcG9r1eClbTh" colab_type="code" outputId="f3545940-e807-4109-f8be-a95a3a16d4cf" colab={"base_uri": "https://localhost:8080/", "height": 392}
import csv
time_step = []
sunspots = []
with open('/tmp/sunspots.csv') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
next(reader)
for row in reader:
sunspots.append(float(row[2]))
time_step.append(int(row[0]))
series = np.array(sunspots)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_series(time, series)
# + id="VinZVwUa8WO7" colab_type="code" outputId="e95494c5-13c5-4b60-85f0-f9021720cff3" colab={"base_uri": "https://localhost:8080/", "height": 392}
series = np.array(sunspots)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_series(time, series)
# + id="L92YRw_IpCFG" colab_type="code" colab={}
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
# + id="lJwUUZscnG38" colab_type="code" colab={}
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
# + id="4XwGrf-A_wF0" colab_type="code" colab={}
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
# + id="AclfYY3Mn6Ph" colab_type="code" outputId="02a3801f-9033-459f-c1c6-f930e65fc01f" colab={"base_uri": "https://localhost:8080/", "height": 1000}
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 64
batch_size = 256
train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
print(train_set)
print(x_train.shape)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
# + id="vVcKmg7Q_7rD" colab_type="code" outputId="14773dd5-ab46-469a-dc02-96f4b7607b07" colab={"base_uri": "https://localhost:8080/", "height": 290}
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 60])
# + id="QsksvkcXAAgq" colab_type="code" outputId="be930f4c-30b1-4e16-92c0-4778fbca4c27" colab={"base_uri": "https://localhost:8080/", "height": 1000}
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=60, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set,epochs=500)
# + id="GaC6NNMRp0lb" colab_type="code" colab={}
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
# + id="PrktQX3hKYex" colab_type="code" outputId="3126e6a2-6f0a-4501-a5ea-360aab623062" colab={"base_uri": "https://localhost:8080/", "height": 392}
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
# + id="13XrorC5wQoE" colab_type="code" outputId="392b1fbb-cc0c-4c3f-b246-73de03c381dc" colab={"base_uri": "https://localhost:8080/", "height": 34}
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
# + id="MD2kyYUVt3O0" colab_type="code" outputId="4d4ce37c-ebd7-43cc-d0f3-761c37774bda" colab={"base_uri": "https://localhost:8080/", "height": 609}
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r')
plt.title('Training loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss"])
plt.figure()
zoomed_loss = loss[200:]
zoomed_epochs = range(200,500)
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(zoomed_epochs, zoomed_loss, 'r')
plt.title('Training loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss"])
plt.figure()
# + id="AOVzQXxCwkzP" colab_type="code" outputId="a68bf3ce-bb7d-4759-b51e-8eea04dc138d" colab={"base_uri": "https://localhost:8080/", "height": 1000}
print(rnn_forecast)
| Copy_of_S+P_Week_4_Lesson_5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>
#
# ## Download: http://github.com/dsacademybr
# Versão da Linguagem Python
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
# ### Range
# Imprimindo números pares entre 50 e 101
for i in range(50, 101, 2):
print(i)
for i in range(3, 6):
print (i)
for i in range(0, -20, -2):
print(i)
lista = ['Morango', 'Banana', 'Abacaxi', 'Uva']
lista_tamanho = len(lista)
for i in range(0, lista_tamanho):
print(lista[i])
# Tudo em Python é um objeto
type(range(0,3))
# # Fim
# ### Obrigado
#
# ### Visite o Blog da Data Science Academy - <a href="http://blog.dsacademy.com.br">Blog DSA</a>
| Data Science Academy/Python Fundamentos/Cap03/Notebooks/DSA-Python-Cap03-04-Range.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # PIT Summary nonlinear
# # Purpose
# There has been a lot done in the parameter identification techniques (PIT) in this project, this notebook is a summary.
# # Setup
# +
# # %load imports.py
# # %load imports.py
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# %config Completer.use_jedi = False ## (To fix autocomplete)
## External packages:
import pandas as pd
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
pd.set_option("display.max_columns", None)
import numpy as np
import os
import matplotlib.pyplot as plt
#if os.name == 'nt':
# plt.style.use('presentation.mplstyle') # Windows
import plotly.express as px
import plotly.graph_objects as go
import seaborn as sns
import sympy as sp
from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame,
Particle, Point)
from sympy.physics.vector.printing import vpprint, vlatex
from IPython.display import display, Math, Latex
from src.substitute_dynamic_symbols import run, lambdify
#import pyro
import sklearn
import pykalman
from statsmodels.sandbox.regression.predstd import wls_prediction_std
import statsmodels.api as sm
from scipy.integrate import solve_ivp
## Local packages:
from src.data import mdl
#import src.models.nonlinear_martin_vmm as vmm
#import src.nonlinear_martin_vmm_equations as eq
#import src.models.linear_vmm as vmm
#import src.nonlinear_martin_vmm_equations as eq
import src.nonlinear_abkowitz_vmm_equations as eq
import src.models.nonlinear_martin_vmm as nonlinear_martin_vmm
import src.models.vmm_VCT as vmm_VCT
import src.models.linear_vmm as linear_vmm
#import src.models.linear_vmm as model
from src.symbols import *
from src.parameters import *
import src.symbols as symbols
from src import prime_system
from src.models import regression
from src.visualization.plot import track_plot
from src.equation import Equation
# -
Math(vlatex(eq.X_eq))
Math(vlatex(eq.Y_eq))
Math(vlatex(eq.N_eq))
Math(vlatex(eq.X_eq.rhs-eq.X_eq.lhs))
Math(vlatex(eq.Y_eq.rhs-eq.Y_eq.lhs))
Math(vlatex(eq.N_eq.rhs-eq.N_eq.lhs))
# ## Load test
# +
#id=22773
#id=22616
id=22774
#id=22770
df, units, meta_data = mdl.load(id=id, dir_path='../data/processed/kalman')
df.index = df.index.total_seconds()
df = df.iloc[0:-100].copy()
df.index-=df.index[0]
df['t'] = df.index
df.sort_index(inplace=True)
df['-delta'] = -df['delta']
df['V'] = np.sqrt(df['u']**2 + df['v']**2)
df['thrust'] = df['Prop/PS/Thrust'] + df['Prop/SB/Thrust']
df['U'] = df['V']
df['beta'] = -np.arctan2(df['v'],df['u'])
# -
meta_data['rho']=1000
meta_data['mass'] = meta_data['Volume']*meta_data['rho']
from src.visualization.plot import track_plot
fig,ax=plt.subplots()
#fig.set_size_inches(10,10)
track_plot(df=df, lpp=meta_data.lpp, x_dataset='x0', y_dataset='y0', psi_dataset='psi', beam=meta_data.beam, ax=ax);
df.plot(y='u')
# # Ship parameters
# +
T_ = (meta_data.TA + meta_data.TF)/2
L_ = meta_data.lpp
m_ = meta_data.mass
rho_ = meta_data.rho
B_ = meta_data.beam
CB_ = m_/(T_*B_*L_*rho_)
I_z_ = m_*meta_data.KZZ**2
#I_z_=839.725
ship_parameters = {
'T' : T_,
'L' : L_,
'CB' :CB_,
'B' : B_,
'rho' : rho_,
#'x_G' : meta_data.lcg, # motions are expressed at CG
'x_G' : 0, # motions are expressed at CG
'm' : m_,
'I_z': I_z_,
'volume':meta_data.Volume,
}
ps = prime_system.PrimeSystem(**ship_parameters) # model
scale_factor = meta_data.scale_factor
ps_ship = prime_system.PrimeSystem(L=ship_parameters['L']*scale_factor, rho=meta_data['rho']) # ship
ship_parameters_prime = ps.prime(ship_parameters)
# -
I_z_+m_*meta_data.lcg**2 # Steiner rule...
I_z_
ship_parameters
ship_parameters_prime
# ## Prime system
interesting = ['x0','y0','psi','u','v','r','u1d','v1d','r1d','U','t','delta','thrust','beta']
df_prime = ps.prime(df[interesting], U=df['U'])
df_prime.set_index('t', inplace=True)
# +
fig,ax=plt.subplots()
#fig.set_size_inches(10,10)
track_plot(df=df_prime, lpp=ship_parameters_prime['L'], beam=ship_parameters_prime['B'],
x_dataset='x0', y_dataset='y0', psi_dataset='psi', ax=ax);
df_prime.plot(y='u')
# -
# # Brix parameters
# +
def calculate_prime(row, ship_parameters):
return run(function=row['brix_lambda'], inputs=ship_parameters)
mask = df_parameters['brix_lambda'].notnull()
df_parameters.loc[mask,'brix_prime'] = df_parameters.loc[mask].apply(calculate_prime, ship_parameters=ship_parameters, axis=1)
df_parameters.loc['Ydelta','brix_prime'] = 0.0004 # Just guessing
df_parameters.loc['Ndelta','brix_prime'] = -df_parameters.loc['Ydelta','brix_prime']/4 # Just guessing
df_parameters['brix_prime'].fillna(0, inplace=True)
#df_parameters['brix_SI'].fillna(0, inplace=True)
# -
# ## Simulate with Brix
fig,ax=plt.subplots()
df_prime.plot(y='delta', ax=ax)
df_cut_prime = df_prime.iloc[2000:12000]
df_cut_prime.plot(y='delta', ax=ax, style='--', label='cut')
df_parameters.loc['Xthrust','brix_prime']
result_brix = linear_vmm.simulator.simulate(df_cut_prime, parameters = df_parameters['brix_prime'], ship_parameters=ship_parameters_prime)
df_result_brix = result_brix.result
result_brix.plot_compare()
# ## Back to SI
fig,ax=plt.subplots()
ax.plot(df.index,df_prime.index)
U_ = ship_parameters['L']*df_prime.index/df.index
df_unprime = ps.unprime(df_prime, U=U_)
df_unprime.index = ps._unprime(df_prime.index,unit='time',U=U_)
# +
fig,ax=plt.subplots()
#fig.set_size_inches(10,10)
track_plot(df=df, lpp=meta_data.lpp, x_dataset='x0', y_dataset='y0', psi_dataset='psi', beam=meta_data.beam, ax=ax);
track_plot(df=df_unprime, lpp=meta_data.lpp, x_dataset='x0', y_dataset='y0', psi_dataset='psi', beam=meta_data.beam, ax=ax);
fig,ax=plt.subplots()
df.plot(y='u',ax=ax)
df_unprime.plot(y='u', style='--', ax=ax)
fig,ax=plt.subplots()
df.plot(y='v',ax=ax)
df_unprime.plot(y='v', style='--', ax=ax)
# -
# # VCT regression
# ## Load VCT data
df_VCT_all = pd.read_csv('../data/external/vct.csv', index_col=0)
df_VCT_all.head()
df_VCT = df_VCT_all.groupby(by=['model_name']).get_group('V2_5_MDL_modelScale')
df_VCT['test type'].unique()
# # Subtract the resistance
# +
df_resistance = df_VCT.groupby(by='test type').get_group('resistance')
X = df_resistance[['u','fx']].copy()
X['u**2'] = X['u']**2
y = X.pop('fx')
model_resistance = sm.OLS(y,X)
results_resistance = model_resistance.fit()
X_pred = pd.DataFrame()
X_pred['u'] = np.linspace(X['u'].min(), X['u'].max(), 20)
X_pred['u**2'] = X_pred['u']**2
X_pred['fx'] = results_resistance.predict(X_pred)
fig,ax=plt.subplots()
df_resistance.plot(x='u', y='fx', style='.', ax=ax)
X_pred.plot(x='u', y='fx', style='--', ax=ax);
# -
df_VCT_0_resistance = df_VCT.copy()
df_VCT_0_resistance['u**2'] = df_VCT_0_resistance['u']**2
#df_VCT_0_resistance['fx']-= results_resistance.predict(df_VCT_0_resistance[['u','u**2']])
df_VCT_0_resistance['thrust'] = results_resistance.predict(df_VCT_0_resistance[['u','u**2']])
# ## VCT to prime system
interesting = [
'u',
'v',
'r',
'delta',
'fx',
'fy',
'mz',
'thrust',
]
df_VCT_prime = ps_ship.prime(df_VCT_0_resistance[interesting], U=df_VCT_0_resistance['V'])
from statsmodels.sandbox.regression.predstd import wls_prediction_std
def show_pred_vct(X,y,results, label):
display(results.summary())
X_ = X.copy()
X_['y'] = y
X_.sort_values(by='y', inplace=True)
y_ = X_.pop('y')
y_pred = results.predict(X_)
prstd, iv_l, iv_u = wls_prediction_std(results, exog=X_, alpha=0.05)
#iv_l*=-1
#iv_u*=-1
fig,ax=plt.subplots()
#ax.plot(X_.index,y_, label='Numerical gradient from model test')
#ax.plot(X_.index,y_pred, '--', label='OLS')
ax.plot(y_,y_pred, '.')
ax.plot([y_.min(),y_.max()], [y_.min(),y_.max()], 'r-')
ax.set_ylabel(f'{label} (prediction)')
ax.set_xlabel(label)
ax.fill_between(y_, y1=iv_l, y2=iv_u, zorder=-10, color='grey', alpha=0.5, label=r'5% confidence')
ax.legend();
# ## N
vmm_VCT.simulator.N_qs_eq
label = sp.symbols('N_qs')
N_eq_ = vmm_VCT.simulator.N_qs_eq.subs(N_qs,label)
diff_eq_N = regression.DiffEqToMatrix(ode=N_eq_, label=label, base_features=[delta,u,v,r])
Math(vlatex(diff_eq_N.acceleration_equation))
# +
X = diff_eq_N.calculate_features(data=df_VCT_prime)
y = diff_eq_N.calculate_label(y=df_VCT_prime['mz'])
model_N = sm.OLS(y,X)
results_N = model_N.fit()
show_pred_vct(X=X,y=y,results=results_N, label=r'$N$')
# -
# ## Y
vmm_VCT.simulator.Y_qs_eq
label = sp.symbols('Y_qs')
Y_eq_ = vmm_VCT.simulator.Y_qs_eq.subs(Y_qs,label)
diff_eq_Y = regression.DiffEqToMatrix(ode=Y_eq_, label=label, base_features=[delta,u,v,r])
Math(vlatex(diff_eq_Y.acceleration_equation))
# +
X = diff_eq_Y.calculate_features(data=df_VCT_prime)
y = diff_eq_Y.calculate_label(y=df_VCT_prime['fy'])
model_Y = sm.OLS(y,X)
results_Y = model_Y.fit()
show_pred_vct(X=X,y=y,results=results_Y, label=r'$Y$')
# -
# ## X
vmm_VCT.simulator.X_qs_eq
label = sp.symbols('X_qs')
X_eq_ = vmm_VCT.simulator.X_qs_eq.subs(X_qs,label)
diff_eq_X = regression.DiffEqToMatrix(ode=X_eq_, label=label, base_features=[delta,u,v,r,thrust])
Math(vlatex(diff_eq_X.acceleration_equation))
# +
X = diff_eq_X.calculate_features(data=df_VCT_prime)
y = diff_eq_X.calculate_label(y=df_VCT_prime['fx'])
model_X = sm.OLS(y,X)
results_X = model_X.fit()
show_pred_vct(X=X,y=y,results=results_X, label=r'$X$')
# -
results_summary_X = regression.results_summary_to_dataframe(results_X)
results_summary_Y = regression.results_summary_to_dataframe(results_Y)
results_summary_N = regression.results_summary_to_dataframe(results_N)
# ## Add the regressed parameters
# Hydrodynamic derivatives that depend on acceleration cannot be obtained from the VCT regression. They are however essential if a time simulation should be conducted. These values have then been taken from Brix semi empirical formulas for the simulations below.
# +
df_parameters_all = df_parameters.copy()
for other in [results_summary_X, results_summary_Y, results_summary_N]:
df_parameters_all = df_parameters_all.combine_first(other)
df_parameters_all.rename(columns={'coeff':'regressed'}, inplace=True)
df_parameters_all.drop(columns=['brix_lambda'], inplace=True)
df_parameters_all['prime'] = df_parameters_all['regressed'].combine_first(df_parameters_all['brix_prime']) # prefer regressed
# +
fig,ax=plt.subplots()
fig.set_size_inches(15,5)
mask = ((df_parameters_all['brix_prime']!=0) |
(pd.notnull(df_parameters_all['regressed'])))
df_parameters_all_plot = df_parameters_all.loc[mask].copy()
df_parameters_all_plot.drop(index='Xthrust').plot.bar(y=['brix_prime','regressed'], ax=ax);
# -
# ## Simulate
# +
parameters = df_parameters_all['prime'].copy()
ship_parameters_vct = ship_parameters.copy()
ship_parameters_vct['x_G'] = meta_data.lcg
ship_parameters_vct_prime = ps.prime(ship_parameters_vct)
nonlinear_martin_vmm_result = nonlinear_martin_vmm.simulator.simulate(df_prime, parameters = parameters, ship_parameters=ship_parameters_vct_prime)
nonlinear_martin_vmm_result.plot_compare()
# -
# # Time series PIT
from statsmodels.sandbox.regression.predstd import wls_prediction_std
def show_pred(X,y,results, label):
display(results.summary())
X_ = X
y_ = y
y_pred = results.predict(X_)
prstd, iv_l, iv_u = wls_prediction_std(results, exog=X_, alpha=0.05)
#iv_l*=-1
#iv_u*=-1
fig,ax=plt.subplots()
ax.plot(X_.index,y_, label='Numerical gradient from model test')
ax.plot(X_.index,y_pred, '--', label='OLS')
ax.set_ylabel(label)
ax.fill_between(X_.index, y1=iv_l, y2=iv_u, zorder=-10, color='grey', alpha=0.5, label=r'5\% confidence')
ax.legend();
# ## N
# +
N_eq_ = N_eq.copy()
N_eq_ = N_eq_.subs([
(x_G,0), # Assuming or moving to CG=0
# #(I_z,1), # Removing inertia
# #(eq.p.Nrdot,0), # Removing added mass
# #(eq.p.Nvdot,0), # Removing added mass
# #(eq.p.Nudot,0), # Removing added mass
#
])
solution = sp.solve(N_eq_,r1d)[0]
inertia_ = (I_z-eq.p.Nrdot)
N_eq_ = sp.Eq(r1d*inertia_, solution*inertia_)
# -
Math(vlatex(N_eq_))
label_N = N_eq_.lhs
diff_eq_N = regression.DiffEqToMatrix(ode=N_eq_, label=label_N, base_features=[delta,u,v,r])
Math(vlatex(diff_eq_N.acceleration_equation))
Math(vlatex(diff_eq_N.acceleration_equation_x))
Math(vlatex(diff_eq_N.eq_y))
diff_eq_N.eq_beta
Math(vlatex(diff_eq_N.eq_X))
diff_eq_N.y_lambda
# +
X = diff_eq_N.calculate_features(data=df_prime)
y = run(function=diff_eq_N.y_lambda, inputs=df_prime, **ship_parameters_prime, **df_parameters_all['brix_prime'])
model_N = sm.OLS(y,X)
results_N = model_N.fit()
show_pred(X=X,y=y,results=results_N, label=r'$%s$' % vlatex(label_N))
# -
# ## Y
# +
Y_eq_ = Y_eq.copy()
Y_eq_ = Y_eq.subs([
(x_G,0), # Assuming or moving to CG=0
# #(I_z,1), # Removing inertia
# #(eq.p.Nrdot,0), # Removing added mass
# #(eq.p.Nvdot,0), # Removing added mass
# #(eq.p.Nudot,0), # Removing added mass
#
])
solution = sp.solve(Y_eq_,v1d)[0]
inertia_ = (eq.p.Yvdot-m)
Y_eq_ = sp.Eq(-(v1d*inertia_-u*m*r), -(solution*inertia_-u*m*r))
Math(vlatex(Y_eq_))
# -
label_Y = Y_eq_.lhs
diff_eq_Y = regression.DiffEqToMatrix(ode=Y_eq_, label=label_Y, base_features=[delta,u,v,r])
# +
X = diff_eq_Y.calculate_features(data=df_prime)
y = run(function=diff_eq_Y.y_lambda, inputs=df_prime, **ship_parameters_prime, **df_parameters_all['brix_prime'])
model_Y = sm.OLS(y,X)
results_Y = model_Y.fit()
show_pred(X=X,y=y,results=results_Y, label=r'$%s$' % vlatex(label_Y))
# -
# ## X
# +
X_eq_ = X_eq.copy()
X_eq_ = X_eq_.subs([
(x_G,0), # Assuming or moving to CG=0
# #(I_z,1), # Removing inertia
# #(eq.p.Nrdot,0), # Removing added mass
# #(eq.p.Nvdot,0), # Removing added mass
# #(eq.p.Nudot,0), # Removing added mass
#
])
solution = sp.solve(X_eq_,u1d)[0]
inertia_ = m-eq.p.Xudot
X_eq_ = sp.Eq((u1d*inertia_-m*r*v), (solution*inertia_-m*r*v))
Math(vlatex(X_eq_))
# -
label_X = X_eq_.lhs
diff_eq_X = regression.DiffEqToMatrix(ode=X_eq_, label=label_X, base_features=[delta,u,v,r,thrust])
# +
X = diff_eq_X.calculate_features(data=df_prime)
y = run(function=diff_eq_X.y_lambda, inputs=df_prime, **ship_parameters_prime, **df_parameters_all['brix_prime'])
model_X = sm.OLS(y,X)
results_X = model_X.fit()
show_pred(X=X,y=y,results=results_X, label=r'$%s$' % vlatex(label_X))
# -
results_summary_X = regression.results_summary_to_dataframe(results_X)
results_summary_Y = regression.results_summary_to_dataframe(results_Y)
results_summary_N = regression.results_summary_to_dataframe(results_N)
# ## Add regressed parameters
results = pd.concat([results_summary_X, results_summary_Y, results_summary_N],axis=0)
df_parameters_all['PIT'] = results['coeff']
df_parameters_all['PIT'] = df_parameters_all['PIT'].combine_first(df_parameters_all['brix_prime']) # prefer regressed
# +
fig,ax=plt.subplots()
fig.set_size_inches(15,5)
mask = ((df_parameters_all['brix_prime']!=0) |
(pd.notnull(df_parameters_all['regressed'])) |
(df_parameters_all['PIT']!=0)
)
df_parameters_all_plot = df_parameters_all.loc[mask]
df_parameters_all_plot.drop(index=['Xthrust']).plot.bar(y=['brix_prime','regressed','PIT'], ax=ax);
# -
fig,ax=plt.subplots()
fig.set_size_inches(15,5)
df_parameters_all_plot.loc[['Xthrust']].plot.bar(y=['brix_prime','regressed','PIT'], ax=ax);
# ## Simulate
# +
parameters = df_parameters_all['PIT'].copy()
#parameters['Xv']=0
#parameters['Xr']=0
#parameters['Xu']=0
#parameters['Xdelta']=0
#parameters['Nv']*=-1
solution, df_result_PIT = simulate(df_cut_prime, parameters = parameters, ship_parameters=ship_parameters_prime)
# +
fig,ax=plt.subplots()
track_plot(df=df_cut_prime, lpp=ship_parameters_prime['L'], beam=ship_parameters_prime['B'],ax=ax, label='model test')
track_plot(df=df_result_PIT, lpp=ship_parameters_prime['L'], beam=ship_parameters_prime['B'],ax=ax, label='simulation', color='green')
ax.legend()
for key in df_result_PIT:
fig,ax = plt.subplots()
df_cut_prime.plot(y=key, label='model test', ax=ax)
df_result_PIT.plot(y=key, label='simulation', ax=ax)
ax.set_ylabel(key)
# -
X_eq
# +
u1d,v1d,r1d = sp.symbols('u1d, v1d, r1d')
subs = [
(u1d,u1d),
(v1d,v1d),
(r1d,r1d),
]
eq_X_ = X_eq.subs(subs)
eq_Y_ = Y_eq.subs(subs)
eq_N_ = N_eq.subs(subs)
A,b = sp.linear_eq_to_matrix([eq_X_,eq_Y_,eq_N_],[u1d,v1d,r1d])
# -
A
Math(vlatex(b))
A.inv()
| notebooks/08_02_PIT_nonlinear_summary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''flask_env'': conda)'
# name: python3
# ---
# # `flask_sqlalchemy` Tutorial
# ## Part 1/2, Create the database
import pandas as pd
import datetime as dt
from database_app import db, WalliStat, Campaign
db.create_all()
# ## Create a Campaign
# The default `hourly` Campaign starts the next full hour using the `id=0`
t = dt.datetime.now()
next_hour = t.replace(second=0, microsecond=0, minute=0, hour=t.hour+1)
hourly = Campaign(id=0, title="hourly", start=next_hour, interval=dt.timedelta(seconds=3600))
hourly
db.session.add(hourly)
db.session.commit()
# ## Commit WalliStats from .csv-file
# use the default Campaign: `hourly`
hourly.id
fn = "ExampleData_2021-07-25.csv"
date_str = fn.split(".")[0].split("_")[1]
df = pd.read_csv(fn)
df.head(7)
for index, row in df.head(6).iterrows():
ws = WalliStat(datetime=pd.to_datetime(date_str + " " + row.time).to_pydatetime(),
Temp=row.Temp/10.,
Power=row.P,
campaign_id=hourly.id)
db.session.add(ws)
db.session.commit()
# +
#db.session.query(Campaign).filter(Campaign.id==campaign.id).update({"previous": now})
# -
db.session.query(WalliStat).filter(WalliStat.Temp>1).update({"Power": 22})
db.session.query(WalliStat).filter(WalliStat.Temp>1).all()
# ## Create a second Campaign
campaign1 = Campaign(title="High frequency polling for error checking.", start=dt.datetime.now(), end=dt.datetime.now()+dt.timedelta(days=1), interval=dt.timedelta(seconds=1))
db.session.add(campaign1)
db.session.commit()
Campaign.query.all()
vars(campaign1)
# ## Commit WalliStats to the new Campaign
# This time, use the `campaign` attribute. Note this hasn't even been defined within the ``database_app.py`` Models.
for index, row in df.tail(3).iterrows():
ws = WalliStat(datetime=pd.to_datetime(date_str + " " + row.time).to_pydatetime(),
Temp=row.Temp/10.,
Power=row.P,
campaign=campaign1)
db.session.add(ws)
db.session.commit()
WalliStat.query.all()
# ## Modify one value in the Database
db.session.query(WalliStat).filter(WalliStat.id == 4).update({"Power": 43})
db.session.commit()
| database/database_basics/write_db.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# # Segmenting and Clustering Neighborhoods in Toronto | Part-3
# 1. Start by creating a new Notebook for this assignment.
#
# 2. Use the Notebook to build the code to scrape the following Wikipedia page, https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M, in order to obtain the data that is in the table of postal codes and to transform the data into a pandas dataframe.
# For this assignment, you will be required to explore and cluster the neighborhoods in Toronto.
#
# 3. To create the above dataframe:
#
# .
# - The dataframe will consist of three columns: PostalCode, Borough, and Neighborhood.
# - Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned.
# - More than one neighborhood can exist in one postal code area. For example, in the table on the Wikipedia page, you will notice that M5A is listed twice and has two neighborhoods: Harbourfront and Regent Park. These two rows will be combined into one row with the neighborhoods separated with a comma as shown in row 11 in the above table.
# - If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough. So for the 9th cell in the table on the Wikipedia page, the value of the Borough and the Neighborhood columns will be Queen's Park.
# - Clean your Notebook and add Markdown cells to explain your work and any assumptions you are making.
# - In the last cell of your notebook, use the .shape method to print the number of rows of your dataframe.
# .
#
#
# 4. Submit a link to your Notebook on your Github repository. (10 marks)
#
# Note: There are different website scraping libraries and packages in Python. For scraping the above table, you can simply use pandas to read the table into a pandas dataframe.
#
# Another way, which would help to learn for more complicated cases of web scraping is using the BeautifulSoup package. Here is the package's main documentation page: http://beautiful-soup-4.readthedocs.io/en/latest/
#
# The package is so popular that there is a plethora of tutorials and examples on how to use it. Here is a very good Youtube video on how to use the BeautifulSoup package: https://www.youtube.com/watch?v=ng2o98k983k
#
# Use pandas, or the BeautifulSoup package, or any other way you are comfortable with to transform the data in the table on the Wikipedia page into the above pandas dataframe.
# #### Foursquare API to explore neighborhoods on selected cities in Toronto
# Installing Libraries
# !pip install geopy
# !pip install folium
print("Installed!")
# Importing Libraries
# +
import folium
import requests
import json
import matplotlib.cm as cm
import matplotlib.colors as colors
import pandas as pd
from pandas.io.json import json_normalize
from sklearn.cluster import KMeans
from geopy.geocoders import Nominatim
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
print("Imported!")
# -
df = pd.read_csv('toronto_part2.csv')
print(df.shape)
df.head()
address = 'Toronto, Ontario Canada'
geolocator = Nominatim(user_agent="test123")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto Canada are {}, {}.'.format(latitude, longitude))
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=11)
for lat, lng, borough, neighborhood in zip(df['Latitude'], df['Longitude'], df['Borough'], df['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=4,
popup=label,
color='blue',
fill=True,
fill_color='#87cefa',
fill_opacity=0.5,
parse_html=False).add_to(map_toronto)
map_toronto
toronto_data = df[df['Borough'].str.contains("Toronto")].reset_index(drop=True)
print(toronto_data.shape)
toronto_data.head()
# +
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=11)
for lat, lng, label in zip(toronto_data['Latitude'], toronto_data['Longitude'], toronto_data['Neighborhood']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker([lat, lng], radius=5, popup=label, color='blue', fill=True, fill_color='#3186cc', fill_opacity=0.7,parse_html=False).add_to(map_toronto)
map_toronto
# -
# Foursquare API
CLIENT_ID = 'DPBYY4JUY3DU20ALPSUV4ONY2K1GOJJKJ1NIHBB32XEMOVYY' # Put Your Client Id
CLIENT_SECRET = '<KEY>' # Put You Client Secret
VERSION = '20180604'
LIMIT = 30
print('Your credentails:')
print('CLIENT_ID: Hidden')
print('CLIENT_SECRET: Hidden')
# #### 1. Exploring Neighbourhood in Toronto
#
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID, CLIENT_SECRET, VERSION, lat, lng, radius, LIMIT)
results = requests.get(url).json()["response"]['groups'][0]['items']
venues_list.append([( name, lat, lng, v['venue']['name'], v['venue']['location']['lat'], v['venue']['location']['lng'], v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood', 'Neighborhood Latitude', 'Neighborhood Longitude', 'Venue', 'Venue Latitude', 'Venue Longitude', 'Venue Category']
return(nearby_venues)
df = toronto_data
toronto_venues = getNearbyVenues(names=df['Neighborhood'], latitudes=df['Latitude'],longitudes=df['Longitude'])
print(toronto_venues.shape)
toronto_venues.head()
toronto_venues.groupby('Neighborhood').count()
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
# #### 2. Analyze Each Borough Neighborhood
# +
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
# -
toronto_onehot.shape
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
toronto_grouped.shape
num_top_venues = 5
for neigh in toronto_grouped['Neighborhood']:
print("----"+neigh+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == neigh].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
# +
import numpy as np
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.shape
# -
# #### 3. Clustering Neighborhoods
from sklearn.cluster import KMeans
import sklearn.cluster.k_means_
km = KMeans(n_clusters=3, init='k-means++', max_iter=100, n_init=1,
verbose=True)
kclusters = 10
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
kmeans = KMeans(n_clusters=kclusters, random_state=1).fit(toronto_grouped_clustering)
print(kmeans.labels_[0:10])
print(len(kmeans.labels_))
df.head()
# +
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = df
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
toronto_merged.head() # check the last columns!
# -
# #### Finally, let's visualize the resulting clusters
# +
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
x = np.arange(kclusters)
ys = [i+x+(i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'],kmeans.labels_):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker([lat, lon], radius=5, popup=label, color=rainbow[cluster-1], fill=True, fill_color=rainbow[cluster-1], fill_opacity=0.7).add_to(map_clusters)
map_clusters
# -
print("Thank You! Hope You Liked. Still Learning... ")
| Segmenting and Clustering Neighborhoods in Toronto - Part 3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="Ok1vxsLqqw3w"
# # Estimating Treatment Effect Using Machine Learning
# + [markdown] colab_type="text" id="B16h5bb8eFmw"
# Welcome to the first assignment of **AI for Medical Treatment**!
#
# You will be using different methods to evaluate the results of a [randomized control trial](https://en.wikipedia.org/wiki/Randomized_controlled_trial) (RCT).
#
# **You will learn:**
# - How to analyze data from a randomized control trial using both:
# - traditional statistical methods
# - and the more recent machine learning techniques
# - Interpreting Multivariate Models
# - Quantifying treatment effect
# - Calculating baseline risk
# - Calculating predicted risk reduction
# - Evaluating Treatment Effect Models
# - Comparing predicted and empirical risk reductions
# - Computing C-statistic-for-benefit
# - Interpreting ML models for Treatment Effect Estimation
# - Implement T-learner
# -
# ### This assignment covers the folowing topics:
#
# - [1. Dataset](#1)
# - [1.1 Why RCT?](#1-1)
# - [1.2 Data Processing](#1-2)
# - [Exercise 1](#ex-01)
# - [Exercise 2](#ex-02)
# - [2. Modeling Treatment Effect](#2)
# - [2.1 Constant Treatment Effect](#2-1)
# - [Exercise 3](#ex-03)
# - [2.2 Absolute Risk Reduction](#2-2)
# - [Exercise 4](#ex-04)
# - [2.3 Model Limitations](#2-3)
# - [Exercise 5](#ex-05)
# - [Exercise 6](#ex-06)
# - [3. Evaluation Metric](#3)
# - [3.1 C-statistic-for-benefit](#3-1)
# - [Exercise 7](#ex-07)
# - [Exercise 8](#ex-08)
# - [4. Machine Learning Approaches](#4)
# - [4.1 T-Learner](#4-1)
# - [Exercise 9](#ex-09)
# - [Exercise 10](#ex-10)
# - [Exercise 11](#ex-11)
# + [markdown] colab_type="text" id="Tklnk8tneq2U"
# ## Packages
#
# We'll first import all the packages that we need for this assignment.
#
#
# - `pandas` is what we'll use to manipulate our data
# - `numpy` is a library for mathematical and scientific operations
# - `matplotlib` is a plotting library
# - `sklearn` contains a lot of efficient tools for machine learning and statistical modeling
# - `random` allows us to generate random numbers in python
# - `lifelines` is an open-source library that implements c-statistic
# - `itertools` will help us with hyperparameters searching
#
# ## Import Packages
#
# Run the next cell to import all the necessary packages, dependencies and custom util functions.
# + colab={} colab_type="code" id="Z5zOXfAIH-41"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import random
import lifelines
import itertools
plt.rcParams['figure.figsize'] = [10, 7]
# + [markdown] colab_type="text" id="pVEHJZ79mvQx"
# <a name="1"></a>
# ## 1 Dataset
# <a name="1-1"></a>
# ### 1.1 Why RCT?
#
# In this assignment, we'll be examining data from an RCT, measuring the effect of a particular drug combination on colon cancer. Specifically, we'll be looking the effect of [Levamisole](https://en.wikipedia.org/wiki/Levamisole) and [Fluorouracil](https://en.wikipedia.org/wiki/Fluorouracil) on patients who have had surgery to remove their colon cancer. After surgery, the curability of the patient depends on the remaining residual cancer. In this study, it was found that this particular drug combination had a clear beneficial effect, when compared with [Chemotherapy](https://en.wikipedia.org/wiki/Chemotherapy).
# <a name="1-2"></a>
# ### 1.2 Data Processing
# In this first section, we will load in the dataset and calculate basic statistics. Run the next cell to load the dataset. We also do some preprocessing to convert categorical features to one-hot representations.
# + colab={} colab_type="code" id="QOV_BJGyLtjR"
data = pd.read_csv("levamisole_data.csv", index_col=0)
# + [markdown] colab_type="text" id="RlqE8036sj3y"
# Let's look at our data to familiarize ourselves with the various fields.
# + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="RPS1stb7si4N" outputId="a64b50c6-5df2-467a-abee-0d73f82d7825"
print(f"Data Dimensions: {data.shape}")
data.head()
# + [markdown] colab_type="text" id="ctvm6IEhauEd"
# Below is a description of all the fields (one-hot means a different field for each level):
# - `sex (binary): 1 if Male, 0 otherwise`
# - `age (int): age of patient at start of the study`
# - `obstruct (binary): obstruction of colon by tumor`
# - `perfor (binary): perforation of colon`
# - `adhere (binary): adherence to nearby organs`
# - `nodes (int): number of lymphnodes with detectable cancer`
# - `node4 (binary): more than 4 positive lymph nodes`
# - `outcome (binary): 1 if died within 5 years`
# - `TRTMT (binary): treated with levamisole + fluoroucil`
# - `differ (one-hot): differentiation of tumor`
# - `extent (one-hot): extent of local spread`
# + [markdown] colab_type="text" id="WTfGBXTOsq06"
# In particular pay attention to the `TRTMT` and `outcome` columns. Our primary endpoint for our analysis will be the 5-year survival rate, which is captured in the `outcome` variable.
# + [markdown] colab_type="text" id="Mz2uT46QMQPc"
# <a name='ex-01'></a>
# ### Exercise 01
#
# Since this is an RCT, the treatment column is randomized. Let's warm up by finding what the treatment probability is.
#
# $$p_{treatment} = \frac{n_{treatment}}{n}$$
#
# - $n_{treatment}$ is the number of patients where `TRTMT = True`
# - $n$ is the total number of patients.
# + colab={"base_uri": "https://localhost:8080/", "height": 187} colab_type="code" id="WKpz5E_CLKQy" outputId="5fb60465-d681-4fc4-ae67-1dd0baa8158d"
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def proportion_treated(df):
"""
Compute proportion of trial participants who have been treated
Args:
df (dataframe): dataframe containing trial results. Column
'TRTMT' is 1 if patient was treated, 0 otherwise.
Returns:
result (float): proportion of patients who were treated
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
proportion = (len(df[df["TRTMT"] == True]))/len(df)
### END CODE HERE ###
return proportion
# -
# **Test Case**
print("dataframe:\n")
example_df = pd.DataFrame(data =[[0, 0],
[1, 1],
[1, 1],
[1, 1]], columns = ['outcome', 'TRTMT'])
print(example_df)
print("\n")
treated_proportion = proportion_treated(example_df)
print(f"Proportion of patient treated: computed {treated_proportion}, expected: 0.75")
# + [markdown] colab_type="text" id="BtHs90CWLinQ"
# Next let's run it on our trial data.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Oz9j9egVLh2k" outputId="3a2ce4a7-4747-4bce-efe1-f73bb8304910"
p = proportion_treated(data)
print(f"Proportion Treated: {p} ~ {int(p*100)}%")
# + [markdown] colab_type="text" id="DWvZ4Qvun8p1"
# <a name='ex-02'></a>
# ### Exercise 02
#
# Next, we can get a preliminary sense of the results by computing the empirical 5-year death probability for the treated arm versus the control arm.
#
# The probability of dying for patients who received the treatment is:
#
# $$p_{\text{treatment, death}} = \frac{n_{\text{treatment,death}}}{n_{\text{treatment}}}$$
#
# - $n_{\text{treatment,death}}$ is the number of patients who received the treatment and died.
# - $n_{\text{treatment}}$ is the number of patients who received treatment.
#
# The probability of dying for patients in the control group (who did not received treatment) is:
#
# $$p_{\text{control, death}} = \frac{n_{\text{control,death}}}{n_{\text{control}}}$$
# - $n_{\text{control,death}}$ is the number of patients in the control group (did not receive the treatment) who died.
# - $n_{\text{control}}$ is the number of patients in the control group (did not receive treatment).
#
# + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="etNHvX3AKleg" outputId="758c295e-9556-4314-e83e-c2062ee660ce"
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def event_rate(df):
'''
Compute empirical rate of death within 5 years
for treated and untreated groups.
Args:
df (dataframe): dataframe containing trial results.
'TRTMT' column is 1 if patient was treated, 0 otherwise.
'outcome' column is 1 if patient died within 5 years, 0 otherwise.
Returns:
treated_prob (float): empirical probability of death given treatment
untreated_prob (float): empirical probability of death given control
'''
treated_prob = 0.0
control_prob = 0.0
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
treated_prob = len(df[(df.TRTMT == True) & (df.outcome == True)])/len(df[df.TRTMT == True])
control_prob = len(df[(df.TRTMT == False) & (df.outcome == True)])/len(df[df.TRTMT == False])
### END CODE HERE ###
return treated_prob, control_prob
# -
# **Test Case**
print("TEST CASE\ndataframe:\n")
example_df = pd.DataFrame(data =[[0, 1],
[1, 1],
[1, 1],
[0, 1],
[1, 0],
[1, 0],
[1, 0],
[0, 0]], columns = ['outcome', 'TRTMT'])
#print("dataframe:\n")
print(example_df)
print("\n")
treated_prob, control_prob = event_rate(example_df)
print(f"Treated 5-year death rate, expected: 0.5, got: {treated_prob:.4f}")
print(f"Control 5-year death rate, expected: 0.75, got: {control_prob:.4f}")
# + [markdown] colab_type="text" id="ShpX6ABSV_Pd"
# Now let's try the function on the real data.
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="7rw2yKymV-WD" outputId="9daebe7b-d0d1-4654-d3d1-764312b598d2"
treated_prob, control_prob = event_rate(data)
print(f"Death rate for treated patients: {treated_prob:.4f} ~ {int(treated_prob*100)}%")
print(f"Death rate for untreated patients: {control_prob:.4f} ~ {int(control_prob*100)}%")
# + [markdown] colab_type="text" id="yoTzaBUorB-3"
# On average, it seemed like treatment had a positive effect.
#
# #### Sanity checks
# It's important to compute these basic summary statistics as a sanity check for more complex models later on. If they strongly disagree with these robust summaries and there isn't a good reason, then there might be a bug.
# + [markdown] colab_type="text" id="fywUHcbRnsQZ"
# ### Train test split
#
# We'll now try to quantify the impact more precisely using statistical models. Before we get started fitting models to analyze the data, let's split it using the `train_test_split` function from `sklearn`. While a hold-out test set isn't required for logistic regression, it will be useful for comparing its performance to the ML models later on.
# + colab={} colab_type="code" id="FUBvTfF0mQuH"
# As usual, split into dev and test set
from sklearn.model_selection import train_test_split
np.random.seed(18)
random.seed(1)
data = data.dropna(axis=0)
y = data.outcome
# notice we are dropping a column here. Now our total columns will be 1 less than before
X = data.drop('outcome', axis=1)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size = 0.25, random_state=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="6EeBLbfeFVnk" outputId="bd02e605-335a-4007-f1c0-46906dc0522c"
print(f"dev set shape: {X_dev.shape}")
print(f"test set shape: {X_test.shape}")
# + [markdown] colab_type="text" id="2c8mLTMQEZxD"
# <a name="2"></a>
# ## 2 Modeling Treatment Effect
# + [markdown] colab_type="text" id="QxHy4RGA0Goi"
# <a name="2-1"></a>
# ### 2.1 Constant Treatment Effect
#
# First, we will model the treatment effect using a standard logistic regression. If $x^{(i)}$ is the input vector, then this models the probability of death within 5 years as
# $$\sigma(\theta^T x^{(i)}) = \frac{1}{1 + exp(-\theta^T x^{(i)})},$$
#
# where $ \theta^T x^{(i)} = \sum_{j} \theta_j x^{(i)}_j$ is an inner product.
#
# -
# For example, if we have three features, $TRTMT$, $AGE$, and $SEX$, then our probability of death would be written as:
#
# $$\sigma(\theta^T x^{(i)}) = \frac{1}{1 + exp(-\theta_{TRTMT} x^{(i)}_{TRTMT} - \theta_{AGE}x_{AGE}^{(i)} - \theta_{SEX}x^{(i)}_{SEX})}.$$
#
# Another way to look at logistic regresion is as a linear model for the "logit" function, or "log odds":
#
# $$logit(p) = \log \left(\frac{p}{1-p} \right)= \theta^T x^{(i)}$$
#
# - "Odds" is defined as the probability of an event divided by the probability of not having the event: $\frac{p}{1-p}$.
#
# - "Log odds", or "logit" function, is the natural log of the odds: $log \left(\frac{p}{1-p} \right)$
# In this example, $x^{(i)}_{TRTMT}$ is the treatment variable. Therefore, $\theta_{TRTMT}$ tells you what the effect of treatment is. If $\theta_{TRTMT}$ is negative, then having treatment reduces the log-odds of death, which means death is less likely than if you did not have treatment.
#
# Note that this assumes a constant relative treatment effect, since the impact of treatment does not depend on any other covariates.
#
# Typically, a randomized control trial (RCT) will seek to establish a negative $\theta_{TRTMT}$ (because the treatment is intended to reduce risk of death), which corresponds to an odds ratio of less than 1.
#
# An odds ratio of less than one implies the probability of death is less than the probability of surviving.
#
# $$ \frac{p}{1-p} < 1 \rightarrow p < 1-p$$
#
# Run the next cell to fit your logistic regression model.
#
# You can use the entire dev set (and do not need to reserve a separate validation set) because there is no need for hyperparameter tuning using a validation set.
# + colab={} colab_type="code" id="U-2hcHYycgFJ"
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(penalty='l2',solver='lbfgs', max_iter=10000).fit(X_dev, y_dev)
# -
# ### Calculating the Odds ratio
#
# You are interested in finding the odds for treatment relative to the odds for the baseline.
#
# $$ OddsRatio = \frac{Odds_{treatment}}{Odds_{baseline}}$$
#
# where
# $$Odds_{treatment} = \frac{p_{treatment}}{1-p_{treatment}}$$
#
# and
#
# $$Odds_{baseline} = \frac{p_{baseline}}{1-p_{baseline}}$$
# If you look at the expression
#
# $$\log \left(\frac{p}{1-p} \right)= \theta^T x^{(i)} = \theta_{treatment} \times x_{treatment}^{(i)} + \theta_{age} \times x_{age}^{(i)} + \cdots$$
#
# Let's just let "$\theta \times x_{age}^{(i)} + \cdots$" stand for all the other thetas and feature variables except for the treatment $\theta_{treatment}^{(i)}$, and $x_{treatment}^{(i)}$ .
# #### Treatment
# To denote that the patient received treatment, we set $x_{treatment}^{(i)} = 1$. Which means the log odds for a treated patient are:
#
# $$ log( Odds_{treatment}) = \log \left(\frac{p_{treatment}}{1-p_{treatment}} \right) = \theta_{treatment} \times 1 + \theta_{age} \times x_{age}^{(i)} + \cdots$$
#
# To get odds from log odds, use exponentiation (raise to the power of e) to take the inverse of the natural log.
#
# $$Odds_{treatment} = e^{log( Odds_{treatment})} = \left(\frac{p_{treatment}}{1-p_{treatment}} \right) = e^{\theta_{treatment} \times 1 + \theta_{age} \times x_{age}^{(i)} + \cdots}$$
# #### Control (baseline)
#
# Similarly, when the patient has no treatment, this is denoted by $x_{treatment}^{(i)} = 0$. So the log odds for the untreated patient is:
#
# $$log(Odds_{baseline}) = \log \left(\frac{p_{baseline}}{1-p_{baseline}} \right) = \theta_{treatment} \times 0 + \theta_{age} \times x_{age}^{(i)} + \cdots$$
#
# $$ = 0 + \theta_{age} \times x_{age}^{(i)} + \cdots$$
#
# To get odds from log odds, use exponentiation (raise to the power of e) to take the inverse of the natural log.
#
# $$Odds_{baseline} = e^{log(Odds_{baseline})} = \left(\frac{p_{baseline}}{1-p_{baseline}} \right) = e^{0 + \theta_{age} \times x_{age}^{(i)} + \cdots}$$
#
# #### Odds Ratio
#
# The Odds ratio is:
#
# $$ OddsRatio = \frac{Odds_{treatment}}{Odds_{baseline}}$$
#
# Doing some substitution:
#
# $$ OddsRatio = \frac{e^{\theta_{treatment} \times 1 + \theta_{age} \times x_{age}^{(i)} + \cdots}}{e^{0 + \theta_{age} \times x_{age}^{(i)} + \cdots}}$$
#
# Notice that $e^{\theta_{age} \times x_{age}^{(i)} + \cdots}$ cancels on top and bottom, so that:
#
# $$ OddsRatio = \frac{e^{\theta_{treatment} \times 1}}{e^{0}}$$
#
# Since $e^{0} = 1$, This simplifies to:
#
# $$ OddsRatio = e^{\theta_{treatment}}$$
# + [markdown] colab_type="text" id="JVUl6hTRzA-w"
# <a name='ex-03'></a>
# ### Exercise 03: Extract the treatment effect
#
# Complete the `extract_treatment_effect` function to extract $\theta_{treatment}$ and then calculate the odds ratio of treatment from the logistic regression model.
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="vePgJgTWeclb" outputId="6517a03a-63b0-4780-d89e-979de53e86cd"
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def extract_treatment_effect(lr, data):
theta_TRTMT = 0.0
TRTMT_OR = 0.0
coeffs = {data.columns[i]:lr.coef_[0][i] for i in range(len(data.columns))}
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# get the treatment coefficient
theta_TRTMT = coeffs["TRTMT"]
# calculate the Odds ratio for treatment
TRTMT_OR = np.exp(theta_TRTMT)
### END CODE HERE ###
return theta_TRTMT, TRTMT_OR
# -
# #### Test
# Test extract_treatment_effect function
theta_TRTMT, trtmt_OR = extract_treatment_effect(lr, X_dev)
print(f"Theta_TRTMT: {theta_TRTMT:.4f}")
print(f"Treatment Odds Ratio: {trtmt_OR:.4f}")
# ### Expected Output
#
# ```CPP
# Theta_TRTMT: -0.2885
# Treatment Odds Ratio: 0.7494
# ```
# + [markdown] colab_type="text" id="clf289SQtTzV"
# Based on this model, it seems that the treatment has a beneficial effect.
# - The $\theta_{treatment} = -0.29$ is a negative value, meaning that it has the effect of reducing risk of death.
# - In the code above, the $OddsRatio$ is stored in the variable `TRTMT_OR`.
# - The $OddsRatio = 0.75$, which is less than 1.
#
#
# You can think of the $OddsRatio$ as a factor that is multiplied to the baseline odds $Odds_{baseline}$ in order to estimate the $Odds_{treatment}$. You can think about the Odds Ratio as a rate, converting between baseline odds and treatment odds.
#
# $$Odds_{treatment} = OddsRatio \times Odds_{baseline}$$
#
# In this case:
#
# $$Odds_{treatment} = 0.75 \times Odds_{baseline}$$
#
# So you can interpret this to mean that the treatment reduces the odds of death by $(1 - OddsRatio) = 1 - 0.75 = 0.25$, or about 25%.
#
# You will see how well this model fits the data in the next few sections.
# + [markdown] colab_type="text" id="kgv-HoPGsBP-"
# <a name="2-2"></a>
# ### 2.2 Absolute Risk Reduction
# + [markdown] colab_type="text" id="hVhcO3t2yj-4"
# <a name='ex-04'></a>
# ### Exercise 4: Calculate ARR
#
# A valuable quantity is the absolute risk reduction (ARR) of a treatment. If $p$ is the baseline probability of death, and $p_{treatment}$ is the probability of death if treated, then
# $$ARR = p_{baseline} - p_{treatment} $$
#
# In the case of logistic regression, here is how ARR can be computed:
# Recall that the Odds Ratio is defined as:
#
# $$OR = Odds_{treatment} / Odds_{baseline}$$
#
# where the "odds" is the probability of the event over the probability of not having the event, or $p/(1-p)$.
#
# $$Odds_{trtmt} = \frac{p_{treatment}}{1- p_{treatment}}$$
# and
# $$Odds_{baseline} = \frac{p_{baseline}}{1- p_{baseline}}$$
#
# In the function below, compute the predicted absolute risk reduction (ARR) given
# - the odds ratio for treatment "$OR$", and
# - the baseline risk of an individual $p_{baseline}$
#
# If you get stuck, try reviewing the level 1 hints by clicking on the cell "Hints Level 1". If you would like more help, please try viewing "Hints Level 2".
# -
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Hints Level 1</b></font>
# </summary>
# <p>
# <ul>
# <li> Using the given $p$, compute the baseline odds of death.</li>
# <li> Then, use the Odds Ratio to convert that to odds of death given treatment.</li>
# <li> Finally, convert those odds back into a probability</li>
# </ul>
# </p>
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Hints Level 2</b></font>
# </summary>
# <p>
# <ul>
# <li> Solve for p_treatment starting with this expression: Odds_treatment = p_treatment / (1 - p_treatment). You may want to do this on a piece of paper.</li>
# </ul>
# </p>
# + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" id="CCCmR2lQjDzs" outputId="177ff01a-d39a-4a69-ac3a-df0b71588019"
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def OR_to_ARR(p, OR):
"""
Compute ARR for treatment for individuals given
baseline risk and odds ratio of treatment.
Args:
p (float): baseline probability of risk (without treatment)
OR (float): odds ratio of treatment versus baseline
Returns:
ARR (float): absolute risk reduction for treatment
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# compute baseline odds from p
odds_baseline = p/(1-p)
# compute odds of treatment using odds ratio
odds_trtmt = OR * odds_baseline
# compute new probability of death from treatment odds
p_trtmt = odds_trtmt/(1 + odds_trtmt)
# compute ARR using treated probability and baseline probability
ARR = p - p_trtmt
### END CODE HERE ###
return ARR
# -
# **Test Case**
# +
print("TEST CASES")
test_p, test_OR = (0.75, 0.5)
print(f"baseline p: {test_p}, OR: {test_OR}")
print(f"Output: {OR_to_ARR(test_p, test_OR):.4f}, Expected: {0.15}\n")
test_p, test_OR = (0.04, 1.2)
print(f"baseline p: {test_p}, OR: {test_OR}")
print(f"Output: {OR_to_ARR(test_p, test_OR):.4f}, Expected: {-0.0076}")
# + [markdown] colab_type="text" id="LLxmh1h92FFe"
# #### Visualize the treatment effect as baseline risk varies
#
# The logistic regression model assumes that treatment has a constant effect in terms of odds ratio and is independent of other covariates.
#
# However, this does not mean that absolute risk reduction is necessarily constant for any baseline risk $\hat{p}$. To illustrate this, we can plot absolute risk reduction as a function of baseline predicted risk $\hat{p}$.
#
# Run the next cell to see the relationship between ARR and baseline risk for the logistic regression model.
# + colab={"base_uri": "https://localhost:8080/", "height": 458} colab_type="code" id="eQdG21ogqTWy" outputId="16531142-20c9-459e-8dde-f239c1e31203"
ps = np.arange(0.001, 0.999, 0.001)
diffs = [OR_to_ARR(p, trtmt_OR) for p in ps]
plt.plot(ps, diffs)
plt.title("Absolute Risk Reduction for Constant Treatment OR")
plt.xlabel('Baseline Risk')
plt.ylabel('Absolute Risk Reduction')
plt.show()
# + [markdown] colab_type="text" id="OI4QLB5l2OyZ"
# Note that when viewed on an absolute scale, the treatment effect is not constant, despite the fact that you used a model with no interactions between the features (we didn't multiply two features together).
#
# As shown in the plot, when the baseline risk is either very low (close to zero) or very high (close to one), the Absolute Risk Reduction from treatment is fairly low. When the baseline risk is closer to 0.5 the ARR of treatment is higher (closer to 0.10).
#
# It is always important to remember that baseline risk has a natural effect on absolute risk reduction.
# + [markdown] colab_type="text" id="9bGTgLRkQZPR"
# <a name="2-3"></a>
# ### 2.3 Model Limitations
#
# We can now plot how closely the empirical (actual) risk reduction matches the risk reduction that is predicted by the logistic regression model.
#
# This is complicated by the fact that for each patient, we only observe one outcome (treatment or no treatment).
# - We can't give a patient treatment, then go back in time and measure an alternative scenario where the same patient did not receive the treatment.
# - Therefore, we will group patients into groups based on their baseline risk as predicted by the model, and then plot their empirical ARR within groups that have similar baseline risks.
# - The empirical ARR is the death rate of the untreated patients in that group minus the death rate of the treated patients in that group.
#
# $$ARR_{empirical} = p_{baseline} - p_{treatment}$$
# + [markdown] colab_type="text" id="y7sx9hZ85jNQ"
# <a name='ex-05'></a>
# ### Exercise 5: Baseline Risk
# In the next cell, write a function to compute the baseline risk of each patient using the logistic regression model.
#
# The baseline risk is the model's predicted probability that the patient is predicted to die if they do not receive treatment.
#
# You will later use the baseline risk of each patient to organize patients into risk groups (that have similar baseline risks). This will allow you to calculate the ARR within each risk group.
#
# $$p_{baseline} = logisticRegression(Treatment = False, Age = age_{i}, Obstruct = obstruct_{i}, \cdots)$$
# -
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Hints</b></font>
# </summary>
# <p>
# <ul>
# <li> A patient receives treatment if their feature x_treatment is True, and does not receive treatment when their x_treatment is False.</li>
# <li>For a patient who actually did receive treatment, you can ask the model to predict their risk without receiving treatment by setting the patient's x_treatment to False.</li>
# <li>The logistic regression predict_proba() function returns a 2D array, one row for each patient, and one column for each possible outcome (each class). In this case, the two outcomes are either no death (0), or death (1). To find out which column contains the probability for death, check the order of the classes by using lr.classes_ </li>
# </ul>
# </p>
# + colab={"base_uri": "https://localhost:8080/", "height": 238} colab_type="code" id="BrIYA-Ciu3EK" outputId="4c6b2802-581c-4346-8e41-da7ee2967d7d"
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def base_risks(X, lr_model):
"""
Compute baseline risks for each individual in X.
Args:
X (dataframe): data from trial. 'TRTMT' column
is 1 if subject retrieved treatment, 0 otherwise
lr_model (model): logistic regression model
Returns:
risks (np.array): array of predicted baseline risk
for each subject in X
"""
# first make a copy of the dataframe so as not to overwrite the original
X = X.copy(deep=True)
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# Set the treatment variable to assume that the patient did not receive treatment
X["TRTMT"]=0
# Input the features into the model, and predict the probability of death.
risks = lr_model.predict_proba(X)[:,1]
# END CODE HERE
return risks
# -
# **Test Case**
# +
example_df = pd.DataFrame(columns = X_dev.columns)
example_df.loc[0, :] = X_dev.loc[X_dev.TRTMT == 1, :].iloc[0, :]
example_df.loc[1, :] = example_df.iloc[0, :]
example_df.loc[1, 'TRTMT'] = 0
print("TEST CASE")
print(example_df)
print(example_df.loc[:, ['TRTMT']])
print('\n')
print("Base risks for both rows should be the same")
print(f"Baseline Risks: {base_risks(example_df.copy(deep=True), lr)}")
# -
# #### Expected output
#
# ```CPP
# Base risks for both rows should be the same
# Baseline Risks: [0.43115868 0.43115868]
# ```
# + [markdown] colab_type="text" id="JQsYKmVc6prz"
# <a name='ex-06'></a>
# ### Exercise 6: ARR by quantile
#
# Since the effect of treatment varies depending on the baseline risk, it makes more sense to group patients who have similar baseline risks, and then look at the outcomes of those who receive treatment versus those who do not, to estimate the absolute risk reduction (ARR).
#
# You'll now implement the `lr_ARR_quantile` function to plot empirical average ARR for each quantile of base risk.
# -
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Hints</b></font>
# </summary>
# <p>
# <ul>
# <li>Use pandas.cut to define intervals of bins of equal size. For example, pd.cut(arr,5) uses the values in the list or array 'arr' and returns the intervals of 5 bins.</li>
# <li>Use pandas.DataFrame.groupby to group by a selected column of the dataframe. Then select the desired variable and apply an aggregator function. For example, df.groupby('col1')['col2'].sum() groups by column 1, and then calculates the sum of column 2 for each group. </li>
# </ul>
# </p>
#
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def lr_ARR_quantile(X, y, lr):
# first make a deep copy of the features dataframe to calculate the base risks
X = X.copy(deep=True)
# Make another deep copy of the features dataframe to store baseline risk, risk_group, and y
df = X.copy(deep=True)
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# Calculate the baseline risks (use the function that you just implemented)
baseline_risk = base_risks(df,lr)
# bin patients into 10 risk groups based on their baseline risks
risk_groups = pd.cut(baseline_risk,10)
# Store the baseline risk, risk_groups, and y into the new dataframe
df.loc[:, 'baseline_risk'] = baseline_risk
df.loc[:, 'risk_group'] = risk_groups
df.loc[:, 'y'] = y
# select the subset of patients who did not actually receive treatment
df_baseline = df[df["TRTMT"]==False]
# select the subset of patients who did actually receive treatment
df_treatment = df[df["TRTMT"]==True]
# For baseline patients, group them by risk group, select their outcome 'y', and take the mean
baseline_mean_by_risk_group = df_baseline.groupby('risk_group')['y'].mean()
# For treatment patients, group them by risk group, select their outcome 'y', and take the mean
treatment_mean_by_risk_group = df_treatment.groupby('risk_group')['y'].mean()
# Calculate the absolute risk reduction by risk group (baseline minus treatment)
arr_by_risk_group = baseline_mean_by_risk_group - treatment_mean_by_risk_group
# Set the index of the arr_by_risk_group dataframe to the average baseline risk of each risk group
# Use data for all patients to calculate the average baseline risk, grouped by risk group.
arr_by_risk_group.index = df.groupby('risk_group')['baseline_risk'].mean()
### END CODE HERE ###
# Set the name of the Series to 'ARR'
arr_by_risk_group.name = 'ARR'
return arr_by_risk_group
# +
# Test
abs_risks = lr_ARR_quantile(X_dev, y_dev, lr)
# print the Series
print(abs_risks)
# just showing this as a Dataframe for easier viewing
display(pd.DataFrame(abs_risks))
# -
# ##### Expected output
# ```CPP
# baseline_risk
# 0.231595 0.089744
# 0.314713 0.042857
# 0.386342 -0.014604
# 0.458883 0.122222
# 0.530568 0.142857
# 0.626937 -0.104072
# 0.693404 0.150000
# 0.777353 0.293706
# 0.836617 0.083333
# 0.918884 0.200000
# Name: ARR, dtype: float64
# ```
# Plot the ARR grouped by baseline risk
# + colab={"base_uri": "https://localhost:8080/", "height": 458} colab_type="code" id="xtmp3BxtNR39" outputId="266dcffc-0c16-4456-c789-106465666b41"
plt.scatter(abs_risks.index, abs_risks, label='empirical ARR')
plt.title("Empirical Absolute Risk Reduction vs. Baseline Risk")
plt.ylabel("Absolute Risk Reduction")
plt.xlabel("Baseline Risk Range")
ps = np.arange(abs_risks.index[0]-0.05, abs_risks.index[-1]+0.05, 0.01)
diffs = [OR_to_ARR(p, trtmt_OR) for p in ps]
plt.plot(ps, diffs, label='predicted ARR')
plt.legend(loc='upper right')
plt.show()
# + [markdown] colab_type="text" id="fz8Es6q98Kjw"
# In the plot, the empirical absolute risk reduction is shown as circles, whereas the predicted risk reduction from the logistic regression model is given by the solid line.
#
# If ARR depended only on baseline risk, then if we plotted actual (empirical) ARR grouped by baseline risk, then it would follow the model's predictions closely (the dots would be near the line in most cases).
#
# However, you can see that the empirical absolute risk reduction (shown as circles) does not match the predicted risk reduction from the logistic regression model (given by the solid line).
#
# This may indicate that ARR may depend on more than simply the baseline risk.
# + [markdown] colab_type="text" id="aAgIlK6Z8s2p"
# <a name="3"></a>
# ## 3 Evaluation Metric
# + [markdown] colab_type="text" id="oCASYrsI1EFI"
# <a name="3-1"></a>
# ### 3.1 C-statistic-for-benefit (C-for-benefit)
#
# You'll now use a measure to evaluate the discriminative power of your models for predicting ARR. Ideally, you could use something like the regular Concordance index (also called C-statistic) from Course 2. Proceeding by analogy, you'd like to estimate something like:
#
# $$P(A \text{ has higher predicted ARR than } B| A \text{ experienced a greater risk reduction than } B).$$
#
# -
# #### The ideal data cannot be observed
#
# The fundamental problem is that for each person, you can only observe either their treatment outcome or their baseline outcome.
# - The patient either receives the treatment, or does not receive the treatment. You can't go back in time to have the same patient undergo treatment and then not have treatment.
# - This means that you can't determine what their actual risk reduction was.
# #### Estimate the treated/untreated patient using a pair of patients
#
# What you will do instead is match people across treatment and control arms based on predicted ARR.
# - Now, in each pair, you'll observe both outcomes, so you'll have an estimate of the true treatment effect.
# - In the pair of patients (A,B),
# - Patient A receives the treatment
# - Patient B does not receive the treatment.
# - Think of the pair of patients as a substitute for the the ideal data that has the same exact patient in both the treatment and control group.
# #### The C-for-benefit
#
# $$P(\text{$P_1$ has a predicted ARR greater than $P_2$} | \text{$P_1$ experiences greater risk reduction than $P_2$}),$$
#
# - Pair 1 consists of two patients (A,B), where A receives treatment, B does not.
# - Pair 2 is another pair of two patients (A,B), where A receives treatment, B does not.
#
# The risk reduction for each pair is:
# - 1 if the treated person A survives and the untreated B person does not (treatment helps).
# - -1 if the treated person A dies and the untreated person B doesn't (treatment harms)
# - 0 otherwise (treatment has no effect, because both patients in the pair live, or both die).
# #### Details for calculating C-for-benefit
#
# The c-for-benefit gives you a way to evaluate the ability of models to discriminate between patient profiles which are likely to experience greater benefit from treatment.
# - If you are better able to predict how likely a treatment can improve a patient's outcome, you can help the doctor and patient make a more informed decision when deciding whether to undergo treatment, considering the possible side-effects and other risks associated with treatment.
#
# Please complete the implementation of the C-statistic-for-benefit below.
#
# The code to create the pairs is given to you.
# ```CPP
# obs_benefit_dict = {
# (0, 0): 0,
# (0, 1): -1,
# (1, 0): 1,
# (1, 1): 0,
# }
# ```
# Here is the interpretation of this dictionary for a pair of patients, (A,B), where A receives treatment and B does not:
# - When patient A does not die, and neither does patient B, `(0, 0)`, the observed benefit of treatment is 0.
# - When patient A does not die, but patient B does die, `(0, 1)`, the observed benefit is -1 (the treatment helped).
# - When patient A dies, but patient B does not die, `(1, 0)`, the observed benefit is 1 (the treatment was harmful)
# - When patient A dies and patient B dies, `(0, 0)`, the observed benefit of treatment is 0.
#
# Each patient in the pair is represented by a tuple `(ARR, y)`.
# - Index 0 contains the predicted ARR, which is the predicted benefit from treatment.
# - Index 1 contains the actual patient outcome: 0 for no death, 1 for death.
#
# So a pair of patients is represented as a tuple containing two tuples:
#
# For example, Pair_1 is `( (ARR_1_A, y_1_A),(ARR_1_B, y_1_B))`, and the data may look like:
# `( (0.60, 0),(0.40, 1))`.
# - This means that patient A (who received treatment) has a predicted benefit of 0.60 and does not die.
# - Patient B (who did not receive treatment) has a predicted benefit of 0.40 and dies.
# <a name='ex-07'></a>
# ### Exercise 7: Calculate c for benefit score
# In `c_for_benefit_score`, you will compute the C-for-benefit given the matched pairs.
#
# $$\text{c for benefit score} = \frac{concordant + 0.5 \times risk\_ties}{permissible}$$
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Click here for Hints!</b></font>
# </summary>
# <p>
# <ul>
# <li>A pair of patients in this case are two patients whose data are used to represent a single patient.</li>
# <li> A pair of pairs is similar to what you think of as just a "pair" in the course 2 concordance index. It's a pair of pairs of patients (four patients total).</li>
# <li>Each patient is represented by a tuple of two values. The first value is the predicted risk reduction, and the second is the patient's outcome.</li>
# <li>observed benefit: for each patient pair, the first patient is assumed to be the one who received treatment, and second in the pair is the one who did not receive treatment. Observed benefit is either 0 (no effect), -1 (treatment helped), 1 (treatment harmed)</li>
# <li>predicted benefit: for each patient pair, take the mean of the two predicted benefits. This is the first value in each patient's tuple.</li>
# <li>permissible pair of pairs: observed benefit is different between the two pairs of pairs of patients.</li>
# <li>concordant pair: the observed benefit and predicted benefit of pair 1 are both less than those for pair 2; or, the observed and predicted benefit of pair 1 are both greater than those for pair 2. Also, it should be a permissible pair of pairs.</li>
# <li>Risk tie: the predicted benefits of both pairs are equal, and it's also a permissible pair of pairs.</li>
# </ul>
# </p>
#
# + colab={"base_uri": "https://localhost:8080/", "height": 385} colab_type="code" id="XYYwXThLOZKi" outputId="6bbb3684-89d5-4674-9147-221a26a21621"
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def c_for_benefit_score(pairs):
"""
Compute c-statistic-for-benefit given list of
individuals matched across treatment and control arms.
Args:
pairs (list of tuples): each element of the list is a tuple of individuals,
the first from the control arm and the second from
the treatment arm. Each individual
p = (pred_outcome, actual_outcome) is a tuple of
their predicted outcome and actual outcome.
Result:
cstat (float): c-statistic-for-benefit computed from pairs.
"""
# mapping pair outcomes to benefit
obs_benefit_dict = {
(0, 0): 0,
(0, 1): -1,
(1, 0): 1,
(1, 1): 0,
}
### START CODE HERE (REPLACE INSTANCES OF 'None', 'False', and 'pass' with your code) ###
# compute observed benefit for each pair
obs_benefit = [obs_benefit_dict[(i[1],j[1])] for (i,j) in pairs]
# compute average predicted benefit for each pair
pred_benefit = [ np.mean([i[0],j[0]]) for (i,j) in pairs]
concordant_count, permissible_count, risk_tie_count = 0, 0, 0
# iterate over pairs of pairs
for i in range(len(pairs)):
for j in range(i + 1, len(pairs)):
# if the observed benefit is different, increment permissible count
if obs_benefit[i] != obs_benefit[j]:
# increment count of permissible pairs
permissible_count = permissible_count +1
# if concordant, increment count
if ((obs_benefit[i]<obs_benefit[j]) == (pred_benefit[i]<pred_benefit[j])): # change to check for concordance
concordant_count = concordant_count + 1
# if risk tie, increment count
if (pred_benefit[i]==pred_benefit[j]): #change to check for risk ties
risk_tie_count = risk_tie_count + 1
# compute c-statistic-for-benefit
cstat = (concordant_count + (0.5 * risk_tie_count)) / permissible_count
# END CODE HERE
return cstat
# -
# **Test Case**
print("TEST CASE")
tmp_pairs = [((0.64, 1), (0.54, 0)),
((0.44, 0),(0.40, 1)),
((0.56, 1), (0.74, 0)),
((0.22,0),(0.22,1)),
((0.22,1),(0.22,0))]
print(f"pairs: {tmp_pairs}")
tmp_cstat = c_for_benefit_score(tmp_pairs)
print(f"Output: {tmp_cstat:.4f}")
# ##### Expected Output
#
# ```CPP
# TEST CASE
# pairs: [((0.64, 1), (0.54, 0)), ((0.44, 0), (0.4, 1)), ((0.56, 1), (0.74, 0)), ((0.22, 0), (0.22, 1)), ((0.22, 1), (0.22, 0))]
# Output: 0.7500
# ```
# <a name='ex-08'></a>
# ### Exercise 8: Create patient pairs and calculate c-for-benefit
#
# You will implement the function `c_statistic`, which prepares the patient data and uses the c-for-benefit score function to calculate the c-for-benefit:
#
# - Take as input:
# - The predicted risk reduction `pred_rr` (ARR)
# - outcomes `y` (1 for death, 0 for no death)
# - treatments `w` (1 for treatment, 0 for no treatment)
# - Collect the predicted risk reduction, outcomes and treatments into tuples, one tuple for each patient.
# - Filter one list of tuples where patients did not receive treatment.
# - Filter another list of tuples where patients received treatment.
#
# - Make sure that there is one treated patient for each untreated patient.
# - If there are fewer treated patients, randomly sample a subset of untreated patients, one for each treated patient.
# - If there are fewer untreated patients, randomly sample a subset of treated patients, one for each untreated patient.
#
# - Sort treated patients by their predicted risk reduction, and similarly sort the untreated patients by predicted risk reduction.
# - This allows you to match the treated patient with the highest predicted risk reduction with the untreated patient with the highest predicted risk reduction. Similarly, the second highest treated patient is matched with the second highest untreated patient.
#
# - Create pairs of treated and untreated patients.
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Hints</b></font>
# </summary>
# <p>
# <ul>
# <li> Use zip(a,b,c) to create tuples from two or more lists of equal length, and use list(zip(a,b,c)) to store that as a list data type.</li>
# <li> Use filter(lambda x: x[0] == True, some_list) to filter a list (such as a list of tuples) so that the 0th item in each tuple is equal to True. Cast the result as a list using list(filter(lambda x: x[0] == True, some_list)) </li>
# <li>Use random.sample(some_list, sub_sample_length) to sample a subset from a list without replacement.</li>
# <li>Use sorted(some_list, key=lambda x: x[1]) to sort a list of tuples by their value in index 1.</li>
# </ul>
# </p>
#
# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def c_statistic(pred_rr, y, w, random_seed=0):
"""
Return concordance-for-benefit, the proportion of all matched pairs with
unequal observed benefit, in which the patient pair receiving greater
treatment benefit was predicted to do so.
Args:
pred_rr (array): array of predicted risk reductions
y (array): array of true outcomes
w (array): array of true treatments
Returns:
cstat (float): calculated c-stat-for-benefit
"""
assert len(pred_rr) == len(w) == len(y)
random.seed(random_seed)
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# Collect pred_rr, y, and w into tuples for each patient
tuples = tuple(zip(pred_rr, y, w))
# Collect untreated patient tuples, stored as a list
untreated = list(filter(lambda x:x[2]==0,tuples))
# Collect treated patient tuples, stored as a list
treated = list(filter(lambda x:x[2]==1,tuples))
# randomly subsample to ensure every person is matched
# if there are more untreated than treated patients,
# randomly choose a subset of untreated patients, one for each treated patient.
if len(treated) < len(untreated):
untreated = random.sample(untreated, len(treated))
# if there are more treated than untreated patients,
# randomly choose a subset of treated patients, one for each treated patient.
if len(untreated) < len(treated):
treated = random.sample(treated, len(untreated))
assert len(untreated) == len(treated)
# Sort the untreated patients by their predicted risk reduction
untreated = sorted(untreated, key=lambda x: x[0])
# Sort the treated patients by their predicted risk reduction
treated = sorted(treated, key=lambda x: x[0])
# match untreated and treated patients to create pairs together
pairs = tuple(zip(untreated,treated))
# calculate the c-for-benefit using these pairs (use the function that you implemented earlier)
cstat = c_for_benefit_score(pairs)
### END CODE HERE ###
return cstat
# +
# Test
tmp_pred_rr = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
tmp_y = [0,1,0,1,0,1,0,1,0]
tmp_w = [0,0,0,0,1,1,1,1,1]
tmp_cstat = c_statistic(tmp_pred_rr, tmp_y, tmp_w)
print(f"C-for-benefit calculated is {tmp_cstat}")
# -
# ##### Expected output
#
# ```CPP
# C-for-benefit calculated is 0.6
# ```
# + [markdown] colab_type="text" id="XH_yDTAq3D42"
# ### Predicted risk reduction
# In order to compute the c-statistic-for-benefit for any of your models, you need to compute predicted risk reduction from treatment (predicted risk reduction is the input `pred_rr` to the c-statistic function).
#
# - The easiest way to do this in general is to create a version of the data where the treatment variable is False and a version where it is True.
# - Then take the difference $\text{pred_RR} = p_{control} - p_{treatment}$
#
# We've implemented this for you.
# + colab={} colab_type="code" id="arBYI7rR4lqr"
def treatment_control(X):
"""Create treatment and control versions of data"""
X_treatment = X.copy(deep=True)
X_control = X.copy(deep=True)
X_treatment.loc[:, 'TRTMT'] = 1
X_control.loc[:, 'TRTMT'] = 0
return X_treatment, X_control
def risk_reduction(model, data_treatment, data_control):
"""Compute predicted risk reduction for each row in data"""
treatment_risk = model.predict_proba(data_treatment)[:, 1]
control_risk = model.predict_proba(data_control)[:, 1]
return control_risk - treatment_risk
# + [markdown] colab_type="text" id="E4g3JazHF1G9"
# Now let's compute the predicted risk reductions of the logistic regression model on the test set.
# -
X_test_treated, X_test_untreated = treatment_control(X_test)
rr_lr = risk_reduction(lr, X_test_treated, X_test_untreated)
# + [markdown] colab_type="text" id="uv0Yr96aGaeL"
# Before we evaluate the c-statistic-for-benefit, let's look at a histogram of predicted ARR.
# + colab={"base_uri": "https://localhost:8080/", "height": 444} colab_type="code" id="Oa0gA4rCGZtU" outputId="8f8b1896-8276-4101-f488-1453389c62bc"
plt.hist(rr_lr, bins='auto')
plt.title("Histogram of Predicted ARR using logistic regression")
plt.ylabel("count of patients")
plt.xlabel("ARR")
plt.show()
# + [markdown] colab_type="text" id="rTI2xcriG4vi"
# Note that although it predicts different absolute risk reduction, it never predicts that the treatment will adversely impact risk. This is because the odds ratio of treatment is less than 1, so the model always predicts a decrease in the baseline risk. Run the next cell to compute the c-statistic-for-benefit on the test data.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="HTgU2BLbGX1B" outputId="44bd6144-31ca-4a02-e4ce-8f11f139f46d"
tmp_cstat_test = c_statistic(rr_lr, y_test, X_test.TRTMT)
print(f"Logistic Regression evaluated by C-for-Benefit: {tmp_cstat_test:.4f}")
# -
# ##### Expected Output
# ```CPP
# Logistic Regression evaluated by C-for-Benefit: 0.5412
# ```
# + [markdown] colab_type="text" id="o6YQq4LLZdBj"
# Recall that a c statistic ranges from 0 to 1, and is closer to when the model being evaluated is doing a good job with its predictions.
#
# You can see that the model is not doing a great job of predicting risk reduction, given a c-for-benefit of around 0.54.
# -
# ### Regular c-index
# Let's compare this with the regular C-index which you've applied in previous assignments. Note that the regular c-statistic does not look at pairs of pairs of patients, and just compares one patient to another when evaluating the model's performance. So the regular c-index is evaluating the model's ability to predict overall patient risk, not necessarily measuring how well the model predicts benefit from treatment.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="JRtzA6qyJ5sn" outputId="4ada7ef3-b746-4ba1-c208-828cf6c8f674"
from lifelines.utils import concordance_index
tmp_regular_cindex = concordance_index(y_test, lr.predict_proba(X_test)[:, 1])
print(f"Logistic Regression evaluated by regular C-index: {tmp_regular_cindex:.4f}")
# -
# ##### Expected output
# ```CPP
# Logistic Regression evaluated by regular C-index: 0.7785
# ```
# + [markdown] colab_type="text" id="qRYEhMCOLDjs"
# You can see that even though the model accurately predicts overall risk (regular c-index), it does not necessarily do a great job predicting benefit from treatment (c-for-benefit).
# + [markdown] colab_type="text" id="Z_4ogidoLqGd"
# You can also visually assess the discriminative ability of the model by checking if the people it thinks benefit the most from treatment empirically (actually) experience a benefit.
#
# Since you don't have counterfactual results from individuals, you'll need to aggregate patient information in some way.
#
# You can group patients by deciles (10 groups) of risk.
# + colab={"base_uri": "https://localhost:8080/", "height": 458} colab_type="code" id="aP8ST7ycL-I6" outputId="6c02ef30-8683-45b3-f3f1-dea8b39c4f79"
def quantile_benefit(X, y, arr_hat):
df = X.copy(deep=True)
df.loc[:, 'y'] = y
df.loc[:, 'benefit'] = arr_hat
benefit_groups = pd.qcut(arr_hat, 10)
df.loc[:, 'benefit_groups'] = benefit_groups
empirical_benefit = df.loc[df.TRTMT == 0, :].groupby('benefit_groups').y.mean() - df.loc[df.TRTMT == 1].groupby('benefit_groups').y.mean()
avg_benefit = df.loc[df.TRTMT == 0, :].y.mean() - df.loc[df.TRTMT==1, :].y.mean()
return empirical_benefit, avg_benefit
def plot_empirical_risk_reduction(emp_benefit, av_benefit, model):
plt.scatter(range(len(emp_benefit)), emp_benefit)
plt.xticks(range(len(emp_benefit)), range(1, len(emp_benefit) + 1))
plt.title("Empirical Risk Reduction vs. Predicted ({})".format(model))
plt.ylabel("Empirical Risk Reduction")
plt.xlabel("Predicted Risk Reduction Quantile")
plt.plot(range(10), [av_benefit]*10, linestyle='--', label='average RR')
plt.legend(loc='lower right')
plt.show()
emp_benefit, avg_benefit = quantile_benefit(X_test, y_test, rr_lr)
plot_empirical_risk_reduction(emp_benefit, avg_benefit, "Logistic Regression")
# + [markdown] colab_type="text" id="YZM3WZ2fPvOn"
# If the model performed well, then you would see patients in the higher deciles of predicted risk reduction (on the right) also have higher empirical risk reduction (to the top).
#
# This model using logistic regression is far from perfect.
#
# Below, you'll see if you can do better using a more flexible machine learning approach.
# + [markdown] colab_type="text" id="JL8ET3lk9r02"
# <a name="4"></a>
# ## 4 Machine Learning Approaches
# + [markdown] colab_type="text" id="-oOkd5juz5To"
# <a name="4-1"></a>
# ### 4.1 T-Learner
#
# Now you will see how recent machine learning approaches compare to the more standard analysis. The approach we'll look at is called [T-learner](https://arxiv.org/pdf/1706.03461.pdf).
# - "T" stands for "two".
# - The T-learner learns two different models, one for treatment risk, and another model for control risk.
# - Then takes the difference of the two risk predictions to predict the risk reduction.
#
# -
# <a name='ex-09'></a>
# ### Exercise 9: Complete the TLearner class.
#
# - The constructor `__init__()` sets the treatment and control estimators based on the given inputs to the constructor.
# - The `predict` function takes the features and uses each estimator to predict the risk of death. Then it calculates the risk of death for the control estimator minus the risk of death from the treatment estimator, and returns this as the predicted risk reduction.
# UNQ_C9 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
class TLearner():
"""
T-Learner class.
Attributes:
treatment_estimator (object): fitted model for treatment outcome
control_estimator (object): fitted model for control outcome
"""
def __init__(self, treatment_estimator, control_estimator):
"""
Initializer for TLearner class.
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# set the treatment estimator
self.treatment_estimator = treatment_estimator
# set the control estimator
self.control_estimator = control_estimator
### END CODE HERE ###
def predict(self, X):
"""
Return predicted risk reduction for treatment for given data matrix.
Args:
X (dataframe): dataframe containing features for each subject
Returns:
preds (np.array): predicted risk reduction for each row of X
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# predict the risk of death using the control estimator
risk_control = self.control_estimator.predict_proba(X)[:,1]
# predict the risk of death using the treatment estimator
risk_treatment = self.treatment_estimator.predict_proba(X)[:,1]
# the predicted risk reduction is control risk minus the treatment risk
pred_risk_reduction = risk_control - risk_treatment
### END CODE HERE ###
return pred_risk_reduction
# ### Tune the model with grid search
#
# In order to tune your two models, you will use grid search to find the desired parameters.
# - You will use a validation set to evaluate the model on different parameters, in order to avoid overfitting to the training set.
#
# To test models on all combinations of hyperparameters, you can first list out all of the values in a list of lists.
# For example:
# ```CPP
# hyperparams = {
# 'n_estimators': [10, 20],
# 'max_depth': [2, 5],
# 'min_samples_leaf': [0.1, 0.2],
# 'random_state': [0]
# }
# ```
# You can generate a list like this:
# ```CPP
# [[10, 20],
# [2, 5],
# [0.1, 0.2]
# ]
# ```
#
# Next, you can get all combinations of the hyperparameter values:
# ```CPP
# [(10, 2, 0.1),
# (10, 2, 0.2),
# (10, 5, 0.1),
# (10, 5, 0.2),
# (20, 2, 0.1),
# (20, 2, 0.2),
# (20, 5, 0.1),
# (20, 5, 0.2)]
# ```
#
# To feed the hyperparameters into an random forest model, you can use a dictionary, so that you do not need to hard code the parameter names.
# For example, instead of
# ```CPP
# RandomForestClassifier(n_estimators= 20, max_depth=5, min_samples_leaf=0.2)
# ```
#
# You have more flexibility if you create a dictionary and pass it into the model.
# ```CPP
# args_d = {'n_estimators': 20, 'max_depth': 5, 'min_samples_leaf': 0.2}
# RandomForestClassifier(**args_d)
# ```
# This allows you to pass in a hyperparameter dictionary for any hyperpameters, not just `n_estimators`, `max_depth`, and `min_samples_leaf`.
#
# So you'll find a way to generate a list of dictionaries, like this:
# ```CPP
# [{'n_estimators': 10, 'max_depth': 2, 'min_samples_leaf': 0.1},
# {'n_estimators': 10, 'max_depth': 2, 'min_samples_leaf': 0.2},
# {'n_estimators': 10, 'max_depth': 5, 'min_samples_leaf': 0.1},
# {'n_estimators': 10, 'max_depth': 5, 'min_samples_leaf': 0.2},
# {'n_estimators': 20, 'max_depth': 2, 'min_samples_leaf': 0.1},
# {'n_estimators': 20, 'max_depth': 2, 'min_samples_leaf': 0.2},
# {'n_estimators': 20, 'max_depth': 5, 'min_samples_leaf': 0.1},
# {'n_estimators': 20, 'max_depth': 5, 'min_samples_leaf': 0.2}]
# ```
#
# Notice how the values in both the list of tuples and list of dictionaries are in the same order as the original hyperparams dictionary. For example, the first value in each is n_estimarors, then max_depth, and then min_samples_leaf:
# ```CPP
# # list of lists
# (10, 2, 0.1)
#
# # list of dictionaries
# {'n_estimators': 10, 'max_depth': 2, 'min_samples_leaf': 0.1}
# ```
#
#
#
# Then for each dictionary of hyperparams:
# - Train a model.
# - Use the regular concordance index to compare their performances.
# - Identify and return the best performing model.
# <a name='ex-10'></a>
# ### Exercise 10: hold out grid search
#
# Implement hold out grid search.
# ##### Note
# In this case, you are not going to apply k-fold cross validation. Since `sklearn.model_selection.GridSearchCV()` applies k-fold cross validation, you won't be using this to perform grid search, and you will implement your own grid search.
#
# Please see the hints if you get stuck.
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Hints</b></font>
# </summary>
# <p>
# <ul>
# <li>You can use the .items() or .values() method of a dictionary to get its key, value pairs or just values. Use a list() to store them inside a list.</li>
# <li>To get all combinations of the hyperparams, you can use itertools.product(*args_list), where args_list is a list object.</li>
# <li>To generate the list of dictionaries, loop through the list of tuples.</li>
# </ul>
# </p>
#
# UNQ_C10 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def holdout_grid_search(clf, X_train_hp, y_train_hp, X_val_hp, y_val_hp, hyperparam, verbose=False):
'''
Conduct hyperparameter grid search on hold out validation set. Use holdout validation.
Hyperparameters are input as a dictionary mapping each hyperparameter name to the
range of values they should iterate over. Use the cindex function as your evaluation
function.
Input:
clf: sklearn classifier
X_train_hp (dataframe): dataframe for training set input variables
y_train_hp (dataframe): dataframe for training set targets
X_val_hp (dataframe): dataframe for validation set input variables
y_val_hp (dataframe): dataframe for validation set targets
hyperparam (dict): hyperparameter dictionary mapping hyperparameter
names to range of values for grid search
Output:
best_estimator (sklearn classifier): fitted sklearn classifier with best performance on
validation set
'''
# Initialize best estimator
best_estimator = None
# initialize best hyperparam
best_hyperparam = {}
# initialize the c-index best score to zero
best_score = 0.0
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# Get the values of the hyperparam and store them as a list of lists
hyper_param_l = list(hyperparam.values())
# Generate a list of tuples with all possible combinations of the hyperparams
combination_l_of_t = list(itertools.product(*hyper_param_l))
# Initialize the list of dictionaries for all possible combinations of hyperparams
combination_l_of_d = []
# loop through each tuple in the list of tuples
for val_tuple in combination_l_of_t: # complete this line
param_d = {}
# Enumerate each key in the original hyperparams dictionary
for i, k in enumerate(hyperparam.keys()): # complete this line
# add a key value pair to param_d for each value in val_tuple
param_d[k] = val_tuple[i]
# append the param_dict to the list of dictionaries
combination_l_of_d.append(param_d)
# For each hyperparam dictionary in the list of dictionaries:
for param_d in combination_l_of_d: # complete this line
# Set the model to the given hyperparams
estimator = clf(**param_d)
# Train the model on the training features and labels
estimator.fit(X_train_hp,y_train_hp)
# Predict the risk of death using the validation features
preds = estimator.predict_proba(X_val_hp)[:,1]
# Evaluate the model's performance using the regular concordance index
estimator_score = concordance_index(y_val_hp,preds)
# if the model's c-index is better than the previous best:
if estimator_score > best_score: # complete this line
# save the new best score
best_score = estimator_score
# same the new best estimator
best_estimator = estimator
# save the new best hyperparams
best_hyperparam = param_d
### END CODE HERE ###
if verbose:
print("hyperparam:")
display(hyperparam)
print("hyper_param_l")
display(hyper_param_l)
print("combination_l_of_t")
display(combination_l_of_t)
print(f"combination_l_of_d")
display(combination_l_of_d)
print(f"best_hyperparam")
display(best_hyperparam)
print(f"best_score: {best_score:.4f}")
return best_estimator, best_hyperparam
# +
# Test
n = X_dev.shape[0]
tmp_X_train = X_dev.iloc[:int(n*0.8),:]
tmp_X_val = X_dev.iloc[int(n*0.8):,:]
tmp_y_train = y_dev[:int(n*0.8)]
tmp_y_val = y_dev[int(n*0.8):]
hyperparams = {
'n_estimators': [10, 20],
'max_depth': [2, 5],
'min_samples_leaf': [0.1, 0.2],
'random_state' : [0]
}
from sklearn.ensemble import RandomForestClassifier
control_model = holdout_grid_search(RandomForestClassifier,
tmp_X_train, tmp_y_train,
tmp_X_val, tmp_y_val, hyperparams, verbose=True)
# -
# T-Learner is a convenient framework because it does not restrict your choice of base learners.
# - You will use random forests as the base learners, but are able to choose another model as well.
# ##### Expected output
#
# ```CPP
# hyperparam:
# {'n_estimators': [10, 20],
# 'max_depth': [2, 5],
# 'min_samples_leaf': [0.1, 0.2],
# 'random_state': [0]}
# hyper_param_l
# [[10, 20], [2, 5], [0.1, 0.2], [0]]
# combination_l_of_t
# [(10, 2, 0.1, 0),
# (10, 2, 0.2, 0),
# (10, 5, 0.1, 0),
# (10, 5, 0.2, 0),
# (20, 2, 0.1, 0),
# (20, 2, 0.2, 0),
# (20, 5, 0.1, 0),
# (20, 5, 0.2, 0)]
# combination_l_of_d
# [{'n_estimators': 10,
# 'max_depth': 2,
# 'min_samples_leaf': 0.1,
# 'random_state': 0},
# {'n_estimators': 10,
# 'max_depth': 2,
# 'min_samples_leaf': 0.2,
# 'random_state': 0},
# {'n_estimators': 10,
# 'max_depth': 5,
# 'min_samples_leaf': 0.1,
# 'random_state': 0},
# {'n_estimators': 10,
# 'max_depth': 5,
# 'min_samples_leaf': 0.2,
# 'random_state': 0},
# {'n_estimators': 20,
# 'max_depth': 2,
# 'min_samples_leaf': 0.1,
# 'random_state': 0},
# {'n_estimators': 20,
# 'max_depth': 2,
# 'min_samples_leaf': 0.2,
# 'random_state': 0},
# {'n_estimators': 20,
# 'max_depth': 5,
# 'min_samples_leaf': 0.1,
# 'random_state': 0},
# {'n_estimators': 20,
# 'max_depth': 5,
# 'min_samples_leaf': 0.2,
# 'random_state': 0}]
# best_hyperparam
# {'n_estimators': 10,
# 'max_depth': 2,
# 'min_samples_leaf': 0.1,
# 'random_state': 0}
# best_score: 0.5928
# ```
# + [markdown] colab_type="text" id="O-BkhCwzIEYT"
# <a name='ex-11'></a>
# ### Exercise 11: Training and validation, treatment and control splits
#
# - Unlike logistic regression, the machine learning algorithms used for base learners will generally require hyperparameter tuning, which means that you need to split your dev set into a training and validation set.
# - You need to also split each of the training and validation sets into *treatment* and *control* groups to train the treatment and control base learners of the T-Learner.
#
# The function below takes in a dev dataset and splits it into training and validation sets for treatment and control models, respectively.
# Complete the implementation.
#
# #### Note
# - The input X_train and X_val have the 'TRTMT' column. Please remove the 'TRTMT' column from the treatment and control features that the function returns.
# -
# <details>
# <summary>
# <font size="3" color="darkgreen"><b>Hints</b></font>
# </summary>
# <p>
# <ul>
# <li> To drop a column, set the axis to 1 when calling pandas.DataFrame.drop(...). Axis=0 is used to drop a row by its index label)</li>
# </ul>
# </p>
# + colab={"base_uri": "https://localhost:8080/", "height": 249} colab_type="code" id="QdVLM4Zxjd4L" outputId="9e70dbc4-afbc-46e4-d566-8e19e261bbab"
# UNQ_C11 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def treatment_dataset_split(X_train, y_train, X_val, y_val):
"""
Separate treated and control individuals in training
and testing sets. Remember that returned
datasets should NOT contain the 'TRMT' column!
Args:
X_train (dataframe): dataframe for subject in training set
y_train (np.array): outcomes for each individual in X_train
X_val (dataframe): dataframe for subjects in validation set
y_val (np.array): outcomes for each individual in X_val
Returns:
X_treat_train (df): training set for treated subjects
y_treat_train (np.array): labels for X_treat_train
X_treat_val (df): validation set for treated subjects
y_treat_val (np.array): labels for X_treat_val
X_control_train (df): training set for control subjects
y_control_train (np.array): labels for X_control_train
X_control_val (np.array): validation set for control subjects
y_control_val (np.array): labels for X_control_val
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# From the training set, get features of patients who received treatment
X_treat_train = X_train[X_train["TRTMT"]==1]
# drop the 'TRTMT' column
X_treat_train = X_treat_train.drop('TRTMT',axis=1)
# From the training set, get the labels of patients who received treatment
y_treat_train = y_train[X_train["TRTMT"]==1]
# From the validation set, get the features of patients who received treatment
X_treat_val = X_val[X_val["TRTMT"]==1]
# Drop the 'TRTMT' column
X_treat_val = X_treat_val.drop('TRTMT',axis=1)
# From the validation set, get the labels of patients who received treatment
y_treat_val = y_val[X_val["TRTMT"]==1]
# --------------------------------------------------------------------------------------------
# From the training set, get the features of patients who did not received treatment
X_control_train = X_train[X_train["TRTMT"]==0]
# Drop the TRTMT column
X_control_train = X_control_train.drop('TRTMT',axis=1)
# From the training set, get the labels of patients who did not receive treatment
y_control_train = y_train[X_train["TRTMT"]==0]
# From the validation set, get the features of patients who did not receive treatment
X_control_val = X_val[X_val["TRTMT"]==0]
# drop the 'TRTMT' column
X_control_val = X_control_val.drop('TRTMT',axis=1)
# From the validation set, get teh labels of patients who did not receive treatment
y_control_val = y_val[X_val["TRTMT"]==0]
### END CODE HERE ###
return (X_treat_train, y_treat_train,
X_treat_val, y_treat_val,
X_control_train, y_control_train,
X_control_val, y_control_val)
# -
# **Test Case**
# +
# Tests
example_df = pd.DataFrame(columns = ['ID', 'TRTMT'])
example_df.ID = range(100)
example_df.TRTMT = np.random.binomial(n=1, p=0.5, size=100)
treated_ids = set(example_df[example_df.TRTMT==1].ID)
example_y = example_df.TRTMT.values
example_train, example_val, example_y_train, example_y_val = train_test_split(
example_df, example_y, test_size = 0.25, random_state=0
)
(x_treat_train, y_treat_train,
x_treat_val, y_treat_val,
x_control_train, y_control_train,
x_control_val, y_control_val) = treatment_dataset_split(example_train, example_y_train,
example_val, example_y_val)
print("Tests")
pass_flag = True
pass_flag = (len(x_treat_train) + len(x_treat_val) + len(x_control_train) +
len(x_control_val) == 100)
print(f"\nDidn't lose any subjects: {pass_flag}")
pass_flag = (("TRTMT" not in x_treat_train) and ("TRTMT" not in x_treat_val) and
("TRTMT" not in x_control_train) and ("TRTMT" not in x_control_val))
print(f"\nTRTMT not in any splits: {pass_flag}")
split_treated_ids = set(x_treat_train.ID).union(set(x_treat_val.ID))
pass_flag = (len(split_treated_ids.union(treated_ids)) == len(treated_ids))
print(f"\nTreated splits have all treated patients: {pass_flag}")
split_control_ids = set(x_control_train.ID).union(set(x_control_val.ID))
pass_flag = (len(split_control_ids.intersection(treated_ids)) == 0)
print(f"\nAll subjects in control split are untreated: {pass_flag}")
pass_flag = (len(set(x_treat_train.ID).intersection(x_treat_val.ID)) == 0)
print(f"\nNo overlap between treat_train and treat_val: {pass_flag}")
pass_flag = (len(set(x_control_train.ID).intersection(x_control_val.ID)) == 0)
print(f"\nNo overlap between control_train and control_val: {pass_flag}")
print(f"\n--> Expected: All statements should be True")
# -
# You will now train a T-learner model on the patient data, and evaluate its performance using the c-for-benefit.
#
# First, get the training and validation sets.
# +
# Import the random forest classifier to be used as the base learner
from sklearn.ensemble import RandomForestClassifier
# Split the dev data into train and validation sets
X_train, X_val, y_train, y_val = train_test_split(X_dev,
y_dev,
test_size = 0.25,
random_state = 0)
# -
# Split the training set into a treatment and control set.
# Similarly, split the validation set into a treatment and control set.
# get treatment and control arms of training and validation sets
(X_treat_train, y_treat_train,
X_treat_val, y_treat_val,
X_control_train, y_control_train,
X_control_val, y_control_val) = treatment_dataset_split(X_train, y_train,
X_val, y_val)
# Choose a set of hyperparameters to perform grid search and find the best model.
# - Please first use these given hyperparameters so that you can get the same c-for-benefit calculation at the end of this exercise.
# - Afterwards, we encourage you to come back and try other ranges for these hyperparameters.
#
# ```CPP
# # Given hyperparams to do grid search
# hyperparams = {
# 'n_estimators': [100, 200],
# 'max_depth': [2, 5, 10, 40, None],
# 'min_samples_leaf': [1, 0.1, 0.2],
# 'random_state': [0]
# }
# ```
# hyperparameter grid (we'll use the same one for both arms for convenience)
# Note that we set random_state to zero
# in order to make the output consistent each time it's run.
hyperparams = {
'n_estimators': [100, 200],
'max_depth': [2, 5, 10, 40, None],
'min_samples_leaf': [1, 0.1, 0.2],
'random_state': [0]
}
# Train the treatment base learner.
# - Perform grid search to find a random forest classifier and associated hyperparameters with the best c-index (the regular c-index).
# perform grid search with the treatment data to find the best model
treatment_model, best_hyperparam_treat = holdout_grid_search(RandomForestClassifier,
X_treat_train, y_treat_train,
X_treat_val, y_treat_val, hyperparams)
# Train the control base learner.
# perform grid search with the control data to find the best model
control_model, best_hyperparam_ctrl = holdout_grid_search(RandomForestClassifier,
X_control_train, y_control_train,
X_control_val, y_control_val, hyperparams)
# Combine the treatment and control base learners into the T-learner.
# Save the treatment and control models into an instance of the TLearner class
t_learner = TLearner(treatment_model, control_model)
# For the validation set, predict each patient's risk reduction.
# +
# Use the t-learner to predict the risk reduction for patients in the validation set
rr_t_val = t_learner.predict(X_val.drop(['TRTMT'], axis=1))
print(f"X_val num of patients {X_val.shape[0]}")
print(f"rr_t_val num of patient predictions {rr_t_val.shape[0]}")
# + [markdown] colab_type="text" id="xYX1rN1tIv4w"
# Now plot a histogram of your predicted risk reduction on the validation set.
# + colab={"base_uri": "https://localhost:8080/", "height": 444} colab_type="code" id="XISgvb6IiXnl" outputId="6850488a-51aa-4bad-a151-1bcf9a7573bc"
plt.hist(rr_t_val, bins='auto')
plt.title("Histogram of Predicted ARR, T-Learner, validation set")
plt.xlabel('predicted risk reduction')
plt.ylabel('count of patients')
plt.show()
# + [markdown] colab_type="text" id="V89cP4pxQhNo"
# Notice when viewing the histogram that predicted risk reduction can be negative.
# - This means that for some patients, the T-learner predicts that treatment will actually increase their risk (negative risk reduction).
# - The T-learner is more flexible compared to the logistic regression model, which only predicts non-negative risk reduction for all patients (view the earlier histogram of the 'predicted ARR' histogram for the logistic regression model, and you'll see that the possible values are all non-negative).
# + [markdown] colab_type="text" id="noMOc9kOI5cw"
# Now plot an empirical risk reduction plot for the validation set examples.
# + colab={"base_uri": "https://localhost:8080/", "height": 458} colab_type="code" id="S-0nbpSkJFmZ" outputId="13afaa75-71e8-4f7f-fa25-78da6cefe18a"
empirical_benefit, avg_benefit = quantile_benefit(X_val, y_val, rr_t_val)
plot_empirical_risk_reduction(empirical_benefit, avg_benefit, 'T Learner [val set]')
# + [markdown] colab_type="text" id="w8F2N-Zje8dB"
# Recall that the predicted risk reduction is along the horizontal axis and the vertical axis is the empirical (actual risk reduction).
#
# A good model would predict a lower risk reduction for patients with actual lower risk reduction. Similarly, a good model would predict a higher risk reduction for patients with actual higher risk reduction (imagine a diagonal line going from the bottom left to the top right of the plot).
#
# The T-learner seems to be doing a bit better (compared to the logistic regression model) at differentiating between the people who would benefit most treatment and the people who would benefit least from treatment.
# + [markdown] colab_type="text" id="CzcjvmxKJWlN"
# Compute the C-statistic-for-benefit on the validation set.
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="blwOcph5JVnV" outputId="4f359278-db85-4296-a717-87d6175465cc"
c_for_benefit_tlearner_val_set = c_statistic(rr_t_val, y_val, X_val.TRTMT)
print(f"C-for-benefit statistic of T-learner on val set: {c_for_benefit_tlearner_val_set:.4f}")
# -
# ##### Expected output
#
# ```CPP
# C-for-benefit statistic of T-learner on val set: 0.5043
# ```
# + [markdown] colab_type="text" id="yWo27MRmJoa0"
# Now or the test set, predict each patient's risk reduction
# -
# predict the risk reduction for each of the patients in the test set
rr_t_test = t_learner.predict(X_test.drop(['TRTMT'], axis=1))
# Plot the histogram of risk reduction for the test set.
# Plot a histogram of the predicted risk reduction
plt.hist(rr_t_test, bins='auto')
plt.title("Histogram of Predicted ARR for the T-learner on test set")
plt.xlabel("predicted risk reduction")
plt.ylabel("count of patients")
plt.show()
# Plot the predicted versus empircal risk reduction for the test set.
# Plot the predicted versus empirical risk reduction for the test set
empirical_benefit, avg_benefit = quantile_benefit(X_test, y_test, rr_t_test)
plot_empirical_risk_reduction(empirical_benefit, avg_benefit, 'T Learner (test set)')
# Evaluate the T-learner's performance using the test set.
# + colab={"base_uri": "https://localhost:8080/", "height": 970} colab_type="code" id="tGFuQSpLJnym" outputId="6cc2307e-7abf-40be-df49-8be92147e4c1"
# calculate the c-for-benefit of the t-learner on the test set
c_for_benefit_tlearner_test_set = c_statistic(rr_t_test, y_test, X_test.TRTMT)
print(f"C-for-benefit statistic on test set: {c_for_benefit_tlearner_test_set:.4f}")
# -
# ##### Expected output
#
# ```CPP
# C-for-benefit statistic on test set: 0.5250
# ```
# + [markdown] colab_type="text" id="ihGyqKsEfJa0"
# The c-for-benefit of the two models were evaluated on different test sets. However, we can compare their c-for-benefit scores to get a sense of how they perform:
# - logistic regression: 0.5412
# - T-learner: 0.5250
#
# The T-learner doesn't actually do better than the logistic regression in this case. You can try to tune the hyperparameters of the T-Learner to see if you can improve it.
#
# ### Note
# While the more flexible ML techniques may improve predictive power, the sample size is too small to be certain.
# - Models like the T-learner could still be helpful in identifying subgroups who will likely not be helped by treatment, or could even be harmed by treatment.
# - So doctors can study these patients in more detail to find out how to improve their outcomes.
| AI for Medicine/AI for Medical Treatment/Part 1 - Estimating Treatment Effect Using Machine Learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# An agent is trained to determine the weights in a portfolio in consecutive 10 days (an episode include 10 steps) using continuous control, specifically, DDPG.
#
# It **does not** work when episode varies (`batch_num` > 1) because financial time series are very unstationary.
from portfolio_env import PortfolioEnv
env = gym.make('Portfolio-v0',
features=['Close'],
stocks = ['GOOGL'],
batch_num = 1,
batch_size = 10,
window=10)
# +
from keras.models import Sequential, Model
from keras.layers import Dense, Activation, Flatten, Input, Concatenate, Conv2D, Reshape
from keras.optimizers import Adam
from keras import backend as K
K.set_image_dim_ordering('th')
from rl.agents import DDPGAgent
from rl.memory import SequentialMemory
from rl.random import OrnsteinUhlenbeckProcess, GaussianWhiteNoiseProcess
# First, we build two networks for actor and critic seperately
observation_input_raw = Input(shape=(1,)+env.observation_space.shape, name='observation_input')
observation_input = Reshape(env.observation_space.shape)(observation_input_raw)
x = Conv2D(32, (1, 3), activation='relu')(observation_input)
x = Conv2D(16, (1, int(x.shape[-1])))(x)
x = Conv2D(1, (1, 1))(x)
x = Flatten()(x)
action = Activation('softmax')(x)
actor = Model(inputs=observation_input_raw, outputs=action)
print(actor.summary())
nb_actions = env.action_space.shape[0]
action_input = Input(shape=(nb_actions,), name='action_input')
observation_input_raw = Input(shape=(1,)+env.observation_space.shape, name='observation_input')
observation_input = Reshape(env.observation_space.shape)(observation_input_raw)
x = Conv2D(32, (1, 3), activation='relu')(observation_input)
x = Conv2D(16, (1, int(x.shape[-1])))(x)
x = Concatenate(axis=1)([Reshape((1, -1, 1))(action_input), x]) # insert action here
x = Conv2D(1, (1, 1))(x)
x = Flatten()(x)
# the structure above is the same as actor except the inserted action
x = Dense(1)(x)
Q = Activation('linear')(x)
critic = Model(inputs=[action_input, observation_input_raw], outputs=Q)
print(critic.summary())
# Then, we configure and compile our agent. You can use every built-in Keras optimizer
memory = SequentialMemory(limit=100, window_length=1)
random_process = GaussianWhiteNoiseProcess(size=nb_actions, mu=0.5, sigma=.001)
agent = DDPGAgent(nb_actions=nb_actions, actor=actor, critic=critic, critic_action_input=action_input,
memory=memory, random_process=random_process, nb_steps_warmup_critic=100, nb_steps_warmup_actor=100, gamma=.99, target_model_update=1e-3, batch_size=env.batch_size)
agent.compile(Adam(lr=.0001, clipnorm=1.), metrics=['mae'])
agent.fit(env, nb_steps=1000, verbose=1)
# -
# Finally, evaluate our algorithm
agent.test(env, nb_episodes=1)
| ddpg portfolio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Analysis Examples
# ## 1.USA.gov Data from Bitly
from numpy.random import randn
import numpy as np
np.random.seed(123)
import os
import matplotlib.pyplot as plt
import pandas as pd
plt.rc('figure', figsize=(10, 6))
np.set_printoptions(precision=4)
pd.options.display.max_rows = 20
import json
with open('example.txt') as fs:
records = [json.loads(line) for line in fs]
# records[0].keys()
# records[0]['tz']
time_zones = [record['tz'] for record in records if 'tz' in record ]
time_zones[:3]
df = pd.DataFrame(records)
tz_counts = df['tz'].value_counts()
tz_counts
# indexer = tz_counts.argsort()
# df['tz'].iloc[indexer[-10:]]
clean_tz = df['tz'].fillna('Missing')
clean_tz[clean_tz=='']='Unkown'
clean_tz[clean_tz=='Missing']#
tz_counts = clean_tz.value_counts()
tz_counts
clean_tz
# +
fig, ax = plt.subplots()
# Example data
people =tuple(tz_counts.index[:10])
y_pos = np.arange(len(people))
performance = tz_counts.values[:10]
error = np.random.rand(len(people))
ax.barh(y_pos, performance, xerr=error, align='center')
ax.set_yticks(y_pos)
ax.set_yticklabels(people)
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('per timezone record count')
# ax.set_title('How fast do you want to go today?')
plt.show()
# -
| basic/chb14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
# from IPython.core.interactiveshell import InteractiveShell
# InteractiveShell.ast_node_interactivity='all'
# +
import numpy as np
import pandas as pd
from pathlib import Path
# Librosa Libraries
import librosa
import librosa.display
import IPython.display as ipd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import sys
sys.path.append('../easy_gold')
import utils
import datasets
# -
audio_dir = Path('../data/nocall_sample/')
name_list = []
duration_list = []
for path in audio_dir.glob('*'):
print(path)
filename = path.name
x, sr = librosa.load(path, sr=32000)
duration = len(x) / sr
# print(sr)
print(len(x), filename, duration)
name_list.append(filename)
duration_list.append(duration)
duration_list
df = pd.DataFrame()
df['filename'] = name_list
df['duration'] = duration_list
df['ebird_code'] = 'nocall'
df
df.to_csv('../data/nocall.csv', index=False)
| notebooks/check_nocall_sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="R12Yn6W1dt9t"
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
# If you're using Google Colab and not running locally, run this cell.
## Install dependencies
# !pip install wget
# !apt-get install sox libsndfile1 ffmpeg
# !pip install unidecode
# ## Install NeMo
BRANCH = 'v1.0.0'
# !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]
## Install TorchAudio
# !pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html
## Grab the config we'll use in this example
# !mkdir configs
# + [markdown] id="J6ycGIaZfSLE"
# # Introduction
#
# This Speech Command recognition tutorial is based on the MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). MatchboxNet is a modified form of the QuartzNet architecture from the paper "[QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions](https://arxiv.org/pdf/1910.10261.pdf)" with a modified decoder head to suit classification tasks.
#
# The notebook will follow the steps below:
#
# - Dataset preparation: Preparing Google Speech Commands dataset
#
# - Audio preprocessing (feature extraction): signal normalization, windowing, (log) spectrogram (or mel scale spectrogram, or MFCC)
#
# - Data augmentation using SpecAugment "[SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)" to increase the number of data samples.
#
# - Develop a small Neural classification model that can be trained efficiently.
#
# - Model training on the Google Speech Commands dataset in NeMo.
#
# - Evaluation of error cases of the model by audibly hearing the samples
# + id="I62_LJzc-p2b"
# Some utility imports
import os
from omegaconf import OmegaConf
# + id="K_M8wpkwd7d7"
# This is where the Google Speech Commands directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# Select the version of the dataset required as well (can be 1 or 2)
DATASET_VER = 1
data_dir = './google_dataset_v{0}/'.format(DATASET_VER)
if DATASET_VER == 1:
MODEL_CONFIG = "matchboxnet_3x1x64_v1.yaml"
else:
MODEL_CONFIG = "matchboxnet_3x1x64_v2.yaml"
if not os.path.exists(f"configs/{MODEL_CONFIG}"):
# !wget -P configs/ "https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/matchboxnet/{MODEL_CONFIG}"
# + [markdown] id="tvfwv9Hjf1Uv"
# # Data Preparation
#
# We will be using the open-source Google Speech Commands Dataset (we will use V1 of the dataset for the tutorial but require minor changes to support the V2 dataset). These scripts below will download the dataset and convert it to a format suitable for use with NeMo.
# + [markdown] id="6VL10OXTf8ts"
# ## Download the dataset
#
# The dataset must be prepared using the scripts provided under the `{NeMo root directory}/scripts` sub-directory.
#
# Run the following command below to download the data preparation script and execute it.
#
# **NOTE**: You should have at least 4GB of disk space available if you’ve used --data_version=1; and at least 6GB if you used --data_version=2. Also, it will take some time to download and process, so go grab a coffee.
#
# **NOTE**: You may additionally pass a `--rebalance` flag at the end of the `process_speech_commands_data.py` script to rebalance the class samples in the manifest.
# + id="oqKe6_uLfzKU"
if not os.path.exists("process_speech_commands_data.py"):
# !wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_speech_commands_data.py
# + [markdown] id="TTsxp0nZ1zqo"
# ### Preparing the manifest file
#
# The manifest file is a simple file that has the full path to the audio file, the duration of the audio file, and the label that is assigned to that audio file.
#
# This notebook is only a demonstration, and therefore we will use the `--skip_duration` flag to speed up construction of the manifest file.
#
# **NOTE: When replicating the results of the paper, do not use this flag and prepare the manifest file with correct durations.**
# + id="cWUtDpzKgop9"
# !mkdir {data_dir}
# !python process_speech_commands_data.py --data_root={data_dir} --data_version={DATASET_VER} --skip_duration --log
print("Dataset ready !")
# + [markdown] id="eVsPFxJtg30p"
# ## Prepare the path to manifest files
# + id="ytTFGVe0g9wk"
dataset_path = 'google_speech_recognition_v{0}'.format(DATASET_VER)
dataset_basedir = os.path.join(data_dir, dataset_path)
train_dataset = os.path.join(dataset_basedir, 'train_manifest.json')
val_dataset = os.path.join(dataset_basedir, 'validation_manifest.json')
test_dataset = os.path.join(dataset_basedir, 'validation_manifest.json')
# + [markdown] id="s0SZy9SEhOBf"
# ## Read a few rows of the manifest file
#
# Manifest files are the data structure used by NeMo to declare a few important details about the data :
#
# 1) `audio_filepath`: Refers to the path to the raw audio file <br>
# 2) `command`: The class label (or speech command) of this sample <br>
# 3) `duration`: The length of the audio file, in seconds.
# + id="HYBidCMIhKQV"
# !head -n 5 {train_dataset}
# + [markdown] id="r-pyUBedh8f4"
# # Training - Preparation
#
# We will be training a MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). The benefit of MatchboxNet over JASPER models is that they use 1D Time-Channel Separable Convolutions, which greatly reduce the number of parameters required to obtain good model accuracy.
#
# MatchboxNet models generally follow the model definition pattern QuartzNet-[BxRXC], where B is the number of blocks, R is the number of convolutional sub-blocks, and C is the number of channels in these blocks. Each sub-block contains a 1-D masked convolution, batch normalization, ReLU, and dropout.
#
# An image of QuartzNet, the base configuration of MatchboxNet models, is provided below.
#
# + [markdown] id="T0sV4riijHJF"
# <p align="center">
# <img src="https://developer.nvidia.com/blog/wp-content/uploads/2020/05/quartznet-model-architecture-1-625x742.png">
# </p>
# + id="ieAPOM9thTN2"
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
# + [markdown] id="ss9gLcDv30jI"
# ## Model Configuration
# The MatchboxNet Model is defined in a config file which declares multiple important sections.
#
# They are:
#
# 1) `model`: All arguments that will relate to the Model - preprocessors, encoder, decoder, optimizer and schedulers, datasets and any other related information
#
# 2) `trainer`: Any argument to be passed to PyTorch Lightning
# + id="yoVAs9h1lfci"
# This line will print the entire config of the MatchboxNet model
config_path = f"configs/{MODEL_CONFIG}"
config = OmegaConf.load(config_path)
config = OmegaConf.to_container(config, resolve=True)
config = OmegaConf.create(config)
print(OmegaConf.to_yaml(config))
# + id="m2lJPR0a3qww"
# Preserve some useful parameters
labels = config.model.labels
sample_rate = config.sample_rate
# + [markdown] id="8_pmjeed78rJ"
# ### Setting up the datasets within the config
#
# If you'll notice, there are a few config dictionaries called `train_ds`, `validation_ds` and `test_ds`. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.
#
#
# + id="DIe6Qfs18MiQ"
print(OmegaConf.to_yaml(config.model.train_ds))
# + [markdown] id="Fb01hl868Uc3"
# ### `???` inside configs
#
# You will often notice that some configs have `???` in place of paths. This is used as a placeholder so that the user can change the value at a later time.
#
# Let's add the paths to the manifests to the config above.
# + id="m181HXev8T97"
config.model.train_ds.manifest_filepath = train_dataset
config.model.validation_ds.manifest_filepath = val_dataset
config.model.test_ds.manifest_filepath = test_dataset
# + [markdown] id="pbXngoCM5IRG"
# ## Building the PyTorch Lightning Trainer
#
# NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem!
#
# Lets first instantiate a Trainer object!
# + id="bYtvdBlG5afU"
import torch
import pytorch_lightning as pl
# + id="jRN18CdH51nN"
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# + id="gHf6cHvm6H9b"
# Lets modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# Reduces maximum number of epochs to 5 for quick demonstration
config.trainer.max_epochs = 5
# Remove distributed training flags
config.trainer.accelerator = None
# + id="UB9nr7G56G3L"
trainer = pl.Trainer(**config.trainer)
# + [markdown] id="2wt603Vq6sqX"
# ## Setting up a NeMo Experiment
#
# NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it !
# + id="TfWJFg7p6Ezf"
from nemo.utils.exp_manager import exp_manager
# + id="SC-QPoW44-p2"
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# + id="Yqi6rkNR7Dph"
# The exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
# + [markdown] id="t0zz-vHH7Uuh"
# ## Building the MatchboxNet Model
#
# MatchboxNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the `EncDecClassificationModel` as follows.
# + id="FRMrKhyf5vhy"
asr_model = nemo_asr.models.EncDecClassificationModel(cfg=config.model, trainer=trainer)
# + [markdown] id="jA9UND-Q_oyw"
# # Training a MatchboxNet Model
#
# As MatchboxNet is inherently a PyTorch Lightning Model, it can easily be trained in a single line - `trainer.fit(model)` !
# + [markdown] id="3ngKcRFqBfIF"
# ### Monitoring training progress
#
# Before we begin training, let's first create a Tensorboard visualization to monitor progress
#
# + id="sT3371CbJ8Rz"
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# + id="Cyfec0PDBsXa"
# Load the TensorBoard notebook extension
if COLAB_ENV:
# %load_ext tensorboard
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# + id="4L5ymu-QBxmz"
if COLAB_ENV:
# %tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# + [markdown] id="ZApuELDIKQgC"
# ### Training for 5 epochs
# We see below that the model begins to get modest scores on the validation set after just 5 epochs of training
# + id="9xiUUJlH5KdD"
trainer.fit(asr_model)
# + [markdown] id="Dkds1jSvKgSc"
# ### Evaluation on the Test set
#
# Lets compute the final score on the test set via `trainer.test(model)`
# + id="mULTrhEJ_6wV"
trainer.test(asr_model, ckpt_path=None)
# + [markdown] id="XQntce8cLiUC"
# # Fast Training
#
# We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.
#
# For multi-GPU training, take a look at [the PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html)
#
# For mixed-precision training, take a look at [the PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/amp.html)
#
# ```python
# # Mixed precision:
# trainer = Trainer(amp_level='O1', precision=16)
#
# # Trainer with a distributed backend:
# trainer = Trainer(gpus=2, num_nodes=2, accelerator='ddp')
#
# # Of course, you can combine these flags as well.
# ```
# + [markdown] id="ifDHkunjM8y6"
# # Evaluation of incorrectly predicted samples
#
# Given that we have a trained model, which performs reasonably well, let's try to listen to the samples where the model is least confident in its predictions.
#
# For this, we need the support of the librosa library.
#
# **NOTE**: The following code depends on librosa. To install it, run the following code block first.
# + id="s3w3LhHcKuD2"
# !pip install librosa
# + [markdown] id="PcJrZ72sNCkM"
# ## Extract the predictions from the model
#
# We want to possess the actual logits of the model instead of just the final evaluation score, so we can define a function to perform the forward step for us without computing the final loss. Instead, we extract the logits per batch of samples provided.
# + [markdown] id="rvxdviYtOFjK"
# ## Accessing the data loaders
#
# We can utilize the `setup_test_data` method in order to instantiate a data loader for the dataset we want to analyze.
#
# For convenience, we can access these instantiated data loaders using the following accessors - `asr_model._train_dl`, `asr_model._validation_dl` and `asr_model._test_dl`.
# + id="CB0QZCAmM656"
asr_model.setup_test_data(config.model.test_ds)
test_dl = asr_model._test_dl
# + [markdown] id="rA7gXawcPoip"
# ## Partial Test Step
#
# Below we define a utility function to perform most of the test step. For reference, the test step is defined as follows:
#
# ```python
# def test_step(self, batch, batch_idx, dataloader_idx=0):
# audio_signal, audio_signal_len, labels, labels_len = batch
# logits = self.forward(input_signal=audio_signal, input_signal_length=audio_signal_len)
# loss_value = self.loss(logits=logits, labels=labels)
# correct_counts, total_counts = self._accuracy(logits=logits, labels=labels)
# return {'test_loss': loss_value, 'test_correct_counts': correct_counts, 'test_total_counts': total_counts}
# ```
# + id="sBsDOm5ROpQI"
@torch.no_grad()
def extract_logits(model, dataloader):
logits_buffer = []
label_buffer = []
# Follow the above definition of the test_step
for batch in dataloader:
audio_signal, audio_signal_len, labels, labels_len = batch
logits = model(input_signal=audio_signal, input_signal_length=audio_signal_len)
logits_buffer.append(logits)
label_buffer.append(labels)
print(".", end='')
print()
print("Finished extracting logits !")
logits = torch.cat(logits_buffer, 0)
labels = torch.cat(label_buffer, 0)
return logits, labels
# + id="mZSdprUlOuoV"
cpu_model = asr_model.cpu()
cpu_model.eval()
logits, labels = extract_logits(cpu_model, test_dl)
print("Logits:", logits.shape, "Labels :", labels.shape)
# + id="9Wd0ukgNXRBz"
# Compute accuracy - `_accuracy` is a PyTorch Lightning Metric !
acc = cpu_model._accuracy(logits=logits, labels=labels)
print("Accuracy : ", float(acc[0]*100))
# + [markdown] id="NwN9OSqCauSH"
# ## Filtering out incorrect samples
# Let us now filter out the incorrectly labeled samples from the total set of samples in the test set
# + id="N1YJvsmcZ0uE"
import librosa
import json
import IPython.display as ipd
# + id="jZAT9yGAayvR"
# First let's create a utility class to remap the integer class labels to actual string label
class ReverseMapLabel:
def __init__(self, data_loader):
self.label2id = dict(data_loader.dataset.label2id)
self.id2label = dict(data_loader.dataset.id2label)
def __call__(self, pred_idx, label_idx):
return self.id2label[pred_idx], self.id2label[label_idx]
# + id="X3GSXvYHa4KJ"
# Next, let's get the indices of all the incorrectly labeled samples
sample_idx = 0
incorrect_preds = []
rev_map = ReverseMapLabel(test_dl)
# Remember, evaluated_tensor = (loss, logits, labels)
probs = torch.softmax(logits, dim=-1)
probas, preds = torch.max(probs, dim=-1)
total_count = cpu_model._accuracy.total_counts_k[0]
incorrect_ids = (preds != labels).nonzero()
for idx in incorrect_ids:
proba = float(probas[idx][0])
pred = int(preds[idx][0])
label = int(labels[idx][0])
idx = int(idx[0]) + sample_idx
incorrect_preds.append((idx, *rev_map(pred, label), proba))
print(f"Num test samples : {total_count.item()}")
print(f"Num errors : {len(incorrect_preds)}")
# First lets sort by confidence of prediction
incorrect_preds = sorted(incorrect_preds, key=lambda x: x[-1], reverse=False)
# + [markdown] id="0JgGo71gcDtD"
# ## Examine a subset of incorrect samples
# Let's print out the (test id, predicted label, ground truth label, confidence) tuple of first 20 incorrectly labeled samples
# + id="x37wNJsNbcw0"
for incorrect_sample in incorrect_preds[:20]:
print(str(incorrect_sample))
# + [markdown] id="tDnwYsDKcLv9"
# ## Define a threshold below which we designate a model's prediction as "low confidence"
# + id="dpvzeh4PcGJs"
# Filter out how many such samples exist
low_confidence_threshold = 0.25
count_low_confidence = len(list(filter(lambda x: x[-1] <= low_confidence_threshold, incorrect_preds)))
print(f"Number of low confidence predictions : {count_low_confidence}")
# + [markdown] id="ERXyXvCAcSKR"
# ## Let's hear the samples which the model has least confidence in !
# + id="kxjNVjX8cPNP"
# First let's create a helper function to parse the manifest files
def parse_manifest(manifest):
data = []
for line in manifest:
line = json.loads(line)
data.append(line)
return data
# + id="IWxqw5k-cUVd"
# Next, let's create a helper function to actually listen to certain samples
def listen_to_file(sample_id, pred=None, label=None, proba=None):
# Load the audio waveform using librosa
filepath = test_samples[sample_id]['audio_filepath']
audio, sample_rate = librosa.load(filepath)
if pred is not None and label is not None and proba is not None:
print(f"Sample : {sample_id} Prediction : {pred} Label : {label} Confidence = {proba: 0.4f}")
else:
print(f"Sample : {sample_id}")
return ipd.Audio(audio, rate=sample_rate)
# + id="HPj1tFNIcXaU"
# Now let's load the test manifest into memory
test_samples = []
with open(test_dataset, 'r') as test_f:
test_samples = test_f.readlines()
test_samples = parse_manifest(test_samples)
# + id="Nt7b_uiScZcC"
# Finally, let's listen to all the audio samples where the model made a mistake
# Note: This list of incorrect samples may be quite large, so you may choose to subsample `incorrect_preds`
count = min(count_low_confidence, 20) # replace this line with just `count_low_confidence` to listen to all samples with low confidence
for sample_id, pred, label, proba in incorrect_preds[:count]:
ipd.display(listen_to_file(sample_id, pred=pred, label=label, proba=proba))
# + [markdown] id="gxLGGDvHW2kV"
# # Fine-tuning on a new dataset
#
# We currently trained our dataset on all 30/35 classes of the Google Speech Commands dataset (v1/v2).
#
# We will now show an example of fine-tuning a trained model on a subset of the classes, as a demonstration of fine-tuning.
#
# + [markdown] id="mZAPGTzeXnuQ"
# ## Preparing the data-subsets
#
# Let's select 2 of the classes, `yes` and `no` and prepare our manifests with this dataset.
# + id="G1RI4GBNfjUW"
import json
# + id="L3cFvN5vcbjb"
def extract_subset_from_manifest(name: str, manifest_path: str, labels: list):
manifest_dir = os.path.split(manifest_path)[0]
labels = set(labels)
manifest_values = []
print(f"Parsing manifest: {manifest_path}")
with open(manifest_path, 'r') as f:
for line in f:
val = json.loads(line)
if val['command'] in labels:
manifest_values.append(val)
print(f"Number of files extracted from dataset: {len(manifest_values)}")
outpath = os.path.join(manifest_dir, name)
with open(outpath, 'w') as f:
for val in manifest_values:
json.dump(val, f)
f.write("\n")
f.flush()
print("Manifest subset written to path :", outpath)
print()
return outpath
# + id="fXQ0N1evfqZ8"
labels = ["yes", "no"]
train_subdataset = extract_subset_from_manifest("train_subset.json", train_dataset, labels)
val_subdataset = extract_subset_from_manifest("val_subset.json", val_dataset, labels)
test_subdataset = extract_subset_from_manifest("test_subset.json", test_dataset, labels)
# + [markdown] id="IO5pVNyKimiE"
# ## Saving/Restoring a checkpoint
#
# There are multiple ways to save and load models in NeMo. Since all NeMo models are inherently Lightning Modules, we can use the standard way that PyTorch Lightning saves and restores models.
#
# NeMo also provides a more advanced model save/restore format, which encapsulates all the parts of the model that are required to restore that model for immediate use.
#
# In this example, we will explore both ways of saving and restoring models, but we will focus on the PyTorch Lightning method.
# + [markdown] id="lMKvrT88jZwC"
# ### Saving and Restoring via PyTorch Lightning Checkpoints
#
# When using NeMo for training, it is advisable to utilize the `exp_manager` framework. It is tasked with handling checkpointing and logging (Tensorboard as well as WandB optionally!), as well as dealing with multi-node and multi-GPU logging.
#
# Since we utilized the `exp_manager` framework above, we have access to the directory where the checkpoints exist.
#
# `exp_manager` with the default settings will save multiple checkpoints for us -
#
# 1) A few checkpoints from certain steps of training. They will have `--val_loss=` tags
#
# 2) A checkpoint at the last epoch of training denotes by `-last`.
#
# 3) If the model finishes training, it will also have a `--end` checkpoint.
# + id="TcHTw5ErmQRi"
import glob
# + id="5h8zMJHngUrV"
print(exp_dir)
# + id="F9K_Ct_hl8oU"
# Let's list all the checkpoints we have
checkpoint_dir = os.path.join(exp_dir, 'checkpoints')
checkpoint_paths = list(glob.glob(os.path.join(checkpoint_dir, "*.ckpt")))
checkpoint_paths
# + id="67fbB61umfb4"
# We want the checkpoint saved after the final step of training
final_checkpoint = list(filter(lambda x: "-last.ckpt" in x, checkpoint_paths))[0]
print(final_checkpoint)
# + [markdown] id="ZADUzv02nknZ"
# ### Restoring from a PyTorch Lightning checkpoint
#
# To restore a model using the `LightningModule.load_from_checkpoint()` class method.
# + id="ywd9Qj4Xm3VC"
restored_model = nemo_asr.models.EncDecClassificationModel.load_from_checkpoint(final_checkpoint)
# + [markdown] id="0f4GQa8vB1BB"
# ## Prepare the model for fine-tuning
#
# Remember, the original model was trained for a 30/35 way classification task. Now we require only a subset of these models, so we need to modify the decoder head to support fewer classes.
#
# We can do this easily with the convenient function `EncDecClassificationModel.change_labels(new_label_list)`.
#
# By performing this step, we discard the old decoder head, but still, preserve the encoder!
# + id="iMCMds7pB16U"
restored_model.change_labels(labels)
# + [markdown] id="rrspQ2QFtbCK"
# ### Prepare the data loaders
#
# The restored model, upon restoration, will not attempt to set up any data loaders.
#
# This is so that we can manually set up any datasets we want - train and val to finetune the model, test in order to just evaluate, or all three to do both!
#
# The entire config that we used before can still be accessed via `ModelPT.cfg`, so we will use it in order to set up our data loaders. This also gives us the opportunity to set any additional parameters we wish to setup!
# + id="9JxhiZN5ulUl"
import copy
# + id="qzHfTOkPowJo"
train_subdataset_cfg = copy.deepcopy(restored_model.cfg.train_ds)
val_subdataset_cfg = copy.deepcopy(restored_model.cfg.validation_ds)
test_subdataset_cfg = copy.deepcopy(restored_model.cfg.test_ds)
# + id="it9-vFX6vHUl"
# Set the paths to the subset of the dataset
train_subdataset_cfg.manifest_filepath = train_subdataset
val_subdataset_cfg.manifest_filepath = val_subdataset
test_subdataset_cfg.manifest_filepath = test_subdataset
# + id="1qzWY8QDvgfc"
# Setup the data loader for the restored model
restored_model.setup_training_data(train_subdataset_cfg)
restored_model.setup_multiple_validation_data(val_subdataset_cfg)
restored_model.setup_multiple_test_data(test_subdataset_cfg)
# + id="y8GZ5a5rC0gY"
# Check data loaders are correct
print("Train dataset labels :", restored_model._train_dl.dataset.labels)
print("Val dataset labels :", restored_model._validation_dl.dataset.labels)
print("Test dataset labels :", restored_model._test_dl.dataset.labels)
# + [markdown] id="76yDcWZ9zl2G"
# ## Setting up a new Trainer and Experiment Manager
#
# A restored model has a utility method to attach the Trainer object to it, which is necessary in order to correctly set up the optimizer and scheduler!
#
# **Note**: The restored model does not contain the trainer config with it. It is necessary to create a new Trainer object suitable for the environment where the model is being trained. The template can be replicated from any of the training scripts.
#
# Here, since we already had the previous config object that prepared the trainer, we could have used it, but for demonstration, we will set up the trainer config manually.
# + id="swTe3WvBzkBJ"
# Setup the new trainer object
# Let's modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
trainer_config = OmegaConf.create(dict(
gpus=cuda,
max_epochs=5,
max_steps=None, # computed at runtime if not set
num_nodes=1,
accumulate_grad_batches=1,
checkpoint_callback=False, # Provided by exp_manager
logger=False, # Provided by exp_manager
log_every_n_steps=1, # Interval of logging.
val_check_interval=1.0, # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
))
print(trainer_config.pretty())
# + id="Nd_ej4bI3TIy"
trainer_finetune = pl.Trainer(**trainer_config)
# + [markdown] id="WtGu5q5T32XA"
# ### Setting the trainer to the restored model
#
# All NeMo models provide a convenience method `set_trainer()` in order to setup the trainer after restoration
# + id="BTozhedA3zpM"
restored_model.set_trainer(trainer_finetune)
# + id="XojTpEiI3TQa"
exp_dir_finetune = exp_manager(trainer_finetune, config.get("exp_manager", None))
# + id="x_LSbmCQ3TUf"
exp_dir_finetune = str(exp_dir_finetune)
exp_dir_finetune
# + [markdown] id="QT_mWWnSxPLv"
# ## Setup optimizer + scheduler
#
# For a fine-tuning experiment, let's set up the optimizer and scheduler!
#
# We will use a much lower learning rate than before, and also swap out the scheduler from PolyHoldDecay to CosineDecay.
# + id="TugHsePsxA5Q"
optim_sched_cfg = copy.deepcopy(restored_model.cfg.optim)
# Struct mode prevents us from popping off elements from the config, so let's disable it
OmegaConf.set_struct(optim_sched_cfg, False)
# + id="pZSo0sWPxwiG"
# Lets change the maximum learning rate to previous minimum learning rate
optim_sched_cfg.lr = 0.001
# Lets change the scheduler
optim_sched_cfg.sched.name = "CosineAnnealing"
# "power" isnt applicable to CosineAnnealing so let's remove it
optim_sched_cfg.sched.pop('power')
# "hold_ratio" isnt applicable to CosineAnnealing, so let's remove it
optim_sched_cfg.sched.pop('hold_ratio')
# Set "min_lr" to lower value
optim_sched_cfg.sched.min_lr = 1e-4
print(optim_sched_cfg.pretty())
# + id="FqqyFF3Ey5If"
# Now lets update the optimizer settings
restored_model.setup_optimization(optim_sched_cfg)
# + id="mdivgIPUzgP_"
# We can also just directly replace the config inplace if we choose to
restored_model.cfg.optim = optim_sched_cfg
# + [markdown] id="3-lRyz2_Eyrl"
# ## Fine-tune training step
#
# We fine-tune on the subset classification problem. Note, the model was originally trained on these classes (the subset defined here has already been trained on above).
#
# When fine-tuning on a truly new dataset, we will not see such a dramatic improvement in performance. However, it should still converge a little faster than if it was trained from scratch.
# + [markdown] id="nq-iHIgx6OId"
# ### Monitor training progress via Tensorboard
#
# + id="PIacDWcD5vCR"
if COLAB_ENV:
# %tensorboard --logdir {exp_dir_finetune}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
# + [markdown] id="r5_z1eW76fip"
# ### Fine-tuning for 5 epochs
# + id="WH8rN6dA6V9S"
trainer_finetune.fit(restored_model)
# + [markdown] id="lgV0s8auJpxV"
# ### Evaluation on the Test set
#
# Let's compute the final score on the test set via `trainer.test(model)`
# + id="szpLp6XTDPaK"
trainer_finetune.test(restored_model, ckpt_path=None)
# + [markdown] id="uNBAaf1FKcAZ"
# ## Advanced Usage: Exporting a model in its entirety
#
# While most models can be easily serialized via the Experiment Manager as a PyTorch Lightning checkpoint, there are certain models where this is insufficient.
#
# Consider the case where a Model contains artifacts such as tokenizers or other intermediate file objects that cannot be so easily serialized into a checkpoint.
#
# For such cases, NeMo offers two utility functions that enable serialization of a Model + artifacts - `save_to` and `restore_from`.
#
# Further documentation regarding these methods can be obtained from the documentation pages on NeMo.
# + id="Dov9g2j8Lyjs"
import tarfile
# + id="WNixPPFNJyNc"
# Save a model as a tarfile
restored_model.save_to(os.path.join(exp_dir_finetune, "model.nemo"))
# + id="B2RHYNjjLrcW"
# The above object is just a tarfile which can store additional artifacts.
with tarfile.open(os.path.join(exp_dir_finetune, 'model.nemo')) as blob:
for item in blob:
print(item)
# + id="fRo04x3TLxdu"
# Restore a model from a tarfile
restored_model_2 = nemo_asr.models.EncDecClassificationModel.restore_from(os.path.join(exp_dir_finetune, "model.nemo"))
# + [markdown] id="LyIegk2CPNsI"
# ## Conclusion
# Once the model has been restored, either via a PyTorch Lightning checkpoint or via the `restore_from` methods, one can finetune by following the above general steps.
| src/lab2/nemo/tutorials/asr/03_Speech_Commands.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.1 64-bit (''pyUdemy'': conda)'
# name: python38164bitpyudemyconda8c705f49a8e643418ce4b1ca64c8ab63
# ---
# +
# Float, Double
# Floats in a computer: rational numbers
# Floats in python: fixed size (64bits = 8Byte)
# sign: 1bit
# exponent: 11bits
# significant bits: 52bits
my_value = 42e-06 # 42 * 10^-1
print('{:.32f}'.format(my_value))
# +
my_value2 = 42.1
print('{:.32f}'.format(my_value2), type(my_value2))
my_value3 = float("42.1")
print('{:.32f}'.format(my_value3), type(my_value3))
# +
# my_value4 = float("13/3")
# +
my_fraction = 1 / 100
print('{:.32f}'.format(my_fraction), type(my_fraction))
my_addition = 1 / 10 + 1 / 10 + 1 / 10
print('{:.32f}'.format(my_addition), type(my_addition))
print('{:.32f}'.format(0.1), type(0.1))
print('{:.32f}'.format(0.3), type(0.3))
print(my_addition == 0.3)
print('{:.32f}'.format(0.5), type(0.5))
| Chapter3_BasicFeatures/Numbers/floats.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
# ## Anchors & boundaries
# #### `^`: start of string or start of line depending on multiline mode. (But when [^inside brackets], it means "not")
re.search('^abc', 'abcdefg').group()
re.search('^abc.*', 'abc (line start)').group()
# #### `$`: end of string or end of line depending on multiline mode.
re.search('abc$', 'endsinabc').group()
re.search('.*? the end$', 'this is the end').group()
# ## Characters
# #### `\d` matches digit (0-9) & unicode digit
re.search('file\d*', 'file12345').group()
re.search('file_\d\d', 'file_9੩').group()
# #### `\D` matches a character that is not a digit
re.search('\D{4}', 'A_B+C').group()
# #### `\w` matches ASCII letter, digit, underscore, unicode letter & ideogram
# \w doesn't match asterisk/star
re.search('\w*', 'A-b_1').group()
re.search('\w-\w\w\w', 'A-b_1').group()
re.search('\w-\w\w\w', '字-ま_۳').group()
# #### `\W` matches a character that is not a word character
re.search('\W\W\W\W\W', '*-+=)').group()
# #### `\s` matches space, tab, newline, carriage return, vertical tab & any unicode separator
re.search('a\sb\sc', 'a b\nc').group()
# #### `\S` matches a character that is not a whitespace character
re.search('\S\S\S\S', 'Yoyo').group()
# #### `.` matches any character except line break
re.search('a.c', 'abc').group()
re.search('.*', 'whatever, man.').group()
re.search('.*', 'what happen then to\n new line').group()
re.search('.*', 'what happen then to\t new tab').group()
# #### `\` escapes a special character
# list of special character: __. * + ? $ ^ \\ [ { ( ) } ]__
re.search('a\.c', 'a.c').group()
re.search('\[\{\(\)\}\]', '[{()}]').group()
# ## Character classes
# #### `[...]` matches one of the characters in the brackets
re.search('[AEIOU]', 'One uppercasE vowel').group()
re.findall('[AEIOU]', 'One uppercasE vowel')
re.findall('T[ao]p', 'Tap or Top')
re.search('[a-e]', 'abcdefgh12345').group()
# matches from a to e, number 1 & 2
re.findall('[a-e12]', 'abcdefgh12345')
re.search('[\x41-\x45]{3}', 'ABE').group()
re.findall('[\x41-\x45]{3}', 'ABE')
# #### `[^...]` matches one of the characters NOT in the brackets
# matches characters that are not a to e, 1 & 2
re.findall('[^a-e12]', 'abcdefgh12345')
# ## Inline Modifiers
# #### `(?i)`: case-insensitive mode
re.findall('(?i)Monday', 'monDAY')
re.search('(?i)Monday', 'monDAY').group()
# #### `(?s)`: DOTALL mode. The dot matches new line characters (\r\n)
re.findall('(?s)From A.*to Z', 'From A\r\n to Z')
re.findall('(?m)1\r\n^2$\r\n^3$', '1\n2\n3')
# ## Quantifiers
# #### `?`: once or none
re.search('plurals?', 'plural').group()
# makes quantifiers "lazy"
re.search('\d+?', '12345').group()
# makes quantifiers "lazy"
re.search('A*?', 'AAA').group()
# makes quantifiers "lazy"
re.search('\w{2,4}?', 'abcd').group()
# #### `*`: 0 or more, "greedy"
re.search('A*B*C*', 'AAACC').group()
re.search('A*', 'AAA').group()
# #### `+`: 1 or more, "greedy"
re.search('Version\s\w-\w+', 'Version A-b1_1').group()
re.search('\d+', '12345').group()
# #### `{2,4}`: 2 to 4 times
re.search('\d{2,4}', '123456').group()
# #### `{3,}`: 3 or more
re.search('\w{3,}', 'regex_tutorial').group()
# ## Logic
# #### `|`: Alternation OR operand
re.search('22|33', '33').group()
# #### `(...)`: capturing group
re.search('A(nt|pple)', 'Apple (captures "pple")').group()
# #### `(?: ...)`: non-capturing group
re.search('A(?:nt|pple)', 'Apple').group()
| regex/regex.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="6sg_bO9ehYkq"
# # Notebook to download the MRMS data
# + [markdown] id="FirkM1ciOUfU"
# ## Google Drive setup
# + [markdown] id="QToPH4vViNlq"
# After hyperlinking MRMS folder to google drive, connect personal drive to colab VM
# + colab={"base_uri": "https://localhost:8080/"} id="WPrDzmPUhWZy" executionInfo={"status": "ok", "timestamp": 1611273985655, "user_tz": 0, "elapsed": 1880, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11141792345932528185"}} outputId="ed56a44a-4e07-445d-f550-3382595887c3"
# Set up google drive
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
# + [markdown] id="7tIe725rixuM"
# Path to the data is /content/gdrive/NWP-Downscale/MRMS/year/month/day/ yearmonthdayhour.zip
# + [markdown] id="cHE3sJImMmWA"
# ## Helper functions
# + id="VYGgvTO1nCa2" executionInfo={"status": "ok", "timestamp": 1611274894744, "user_tz": 0, "elapsed": 956, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11141792345932528185"}}
def time_to_string(time):
'''
Convert time to string with leading zeros
'''
time = str(time)
if len(time)==1:
return '0'+ time
return time
# + id="yKW_3IWrn5lw" executionInfo={"status": "ok", "timestamp": 1611274895158, "user_tz": 0, "elapsed": 1122, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11141792345932528185"}}
def get_path(time):
'''
Get the path for a given datetime
Parameters:
----------
time: pd.datetime object
Returns:
--------
total_path: string
path to zip file
filename: string
filename of zip file
'''
hour = time_to_string(time.hour)
day = time_to_string(time.day)
month = time_to_string(time.month)
year = time_to_string(time.year)
path = year+'/'+month+'/'+day+'/'
filename = year+month+day+hour
MRMS_PATH = "/content/gdrive/MyDrive/NWP-Downscale/MRMS/"
total_path = MRMS_PATH+path+filename+".zip"
return total_path, filename
# + id="ZCcUxwPoNk1Z"
def get_radar_files(dates, local_path):
"""
Unzip the daily files and extract CONUS
6hr precip accumulations (radar only)
Parameters:
-----------
dates: pd.date_range
dates to unzip
local_path: String
local folder to put files in
"""
for d in dates:
total_path, filename = get_path(d)
# Unzip the file locally
print("Unzipping {}".format(total_path))
# !unzip $total_path
fi = str(filename)+"/CONUS/RadarOnly_QPE_06H"
print("Moving to local data")
# !mv $fi/* /content/gdrive/MyDrive/NWP-Downscale/local_data
f = str(filename)
# !rm -r $f
# + [markdown] id="oCOY-wZjMj2u"
# ## Download data
# + [markdown] id="7md53FMQNg_f"
# Make the date range
# + id="korIXi3oiwbd" executionInfo={"status": "ok", "timestamp": 1611274897004, "user_tz": 0, "elapsed": 922, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11141792345932528185"}}
import pandas as pd
# NOTE: date_range uses the American date ordering (month/day/year)
dates = pd.date_range(start='11/04/2019', end='12/01/2019', freq='6H')
# + [markdown] id="NmWRDVg8Ob3q"
# Make a directory for the local data
# + id="_B958RvXOo59"
local_path = "/content/gdrive/MyDrive/NWP-Downscale/local_data"
# + colab={"base_uri": "https://localhost:8080/"} id="CJB-2bkrsr0v" executionInfo={"status": "ok", "timestamp": 1611273580034, "user_tz": 0, "elapsed": 855, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11141792345932528185"}} outputId="2372bf12-ff9c-419a-8cfd-513418808525"
# !mkdir $local_path
# + [markdown] id="trRvn9DKOkmC"
# Move files to local_path
# + id="IcCuvgZcmr9k"
get_radar_files(dates, local_path)
# + [markdown] id="_jYnv835PaFu"
# Finally, open terminal on colab VM instance and scp to Azure VM
| notebooks/download_mrms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %matplotlib inline
# +
# %autoreload 2
from IPython import display
from utils import Logger
import torch
from torch import nn, optim
from torch.autograd.variable import Variable
from torchvision import transforms, datasets
# -
DATA_FOLDER = './torch_data/VGAN/MNIST'
# ## Load Data
def mnist_data():
compose = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((.5, .5, .5), (.5, .5, .5))
])
out_dir = '{}/dataset'.format(DATA_FOLDER)
return datasets.MNIST(root=out_dir, train=True, transform=compose, download=True)
# Load data
data = mnist_data()
# Create loader with data, so that we can iterate over it
data_loader = torch.utils.data.DataLoader(data, batch_size=100, shuffle=True)
# Num batches
num_batches = len(data_loader)
# ## Networks
# +
class DiscriminatorNet(torch.nn.Module):
"""
A three hidden-layer discriminative neural network
"""
def __init__(self):
super(DiscriminatorNet, self).__init__()
n_features = 784
n_out = 1
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 1024),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden1 = nn.Sequential(
nn.Linear(1024, 512),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 256),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.out = nn.Sequential(
torch.nn.Linear(256, n_out),
torch.nn.Sigmoid()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
def images_to_vectors(images):
return images.view(images.size(0), 784)
def vectors_to_images(vectors):
return vectors.view(vectors.size(0), 1, 28, 28)
# +
class GeneratorNet(torch.nn.Module):
"""
A three hidden-layer generative neural network
"""
def __init__(self):
super(GeneratorNet, self).__init__()
n_features = 100
n_out = 784
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 256),
nn.LeakyReLU(0.2)
)
self.hidden1 = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.2)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 1024),
nn.LeakyReLU(0.2)
)
self.out = nn.Sequential(
nn.Linear(1024, n_out),
nn.Tanh()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
# Noise
def noise(size):
n = Variable(torch.randn(size, 100))
if torch.cuda.is_available(): return n.cuda()
return n
# -
discriminator = DiscriminatorNet()
generator = GeneratorNet()
if torch.cuda.is_available():
discriminator.cuda()
generator.cuda()
# ## Optimization
# +
# Optimizers
d_optimizer = optim.Adam(discriminator.parameters(), lr=0.0002)
g_optimizer = optim.Adam(generator.parameters(), lr=0.0002)
# Loss function
loss = nn.BCELoss()
# Number of steps to apply to the discriminator
d_steps = 1 # In Goodfellow et. al 2014 this variable is assigned to 1
# Number of epochs
num_epochs = 200
# -
# ## Training
# +
def real_data_target(size):
'''
Tensor containing ones, with shape = size
'''
data = Variable(torch.ones(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
def fake_data_target(size):
'''
Tensor containing zeros, with shape = size
'''
data = Variable(torch.zeros(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
# +
def train_discriminator(optimizer, real_data, fake_data):
# Reset gradients
optimizer.zero_grad()
# 1.1 Train on Real Data
prediction_real = discriminator(real_data)
# Calculate error and backpropagate
error_real = loss(prediction_real, real_data_target(real_data.size(0)))
error_real.backward()
# 1.2 Train on Fake Data
prediction_fake = discriminator(fake_data)
# Calculate error and backpropagate
error_fake = loss(prediction_fake, fake_data_target(real_data.size(0)))
error_fake.backward()
# 1.3 Update weights with gradients
optimizer.step()
# Return error
return error_real + error_fake, prediction_real, prediction_fake
def train_generator(optimizer, fake_data):
# 2. Train Generator
# Reset gradients
optimizer.zero_grad()
# Sample noise and generate fake data
prediction = discriminator(fake_data)
# Calculate error and backpropagate
error = loss(prediction, real_data_target(prediction.size(0)))
error.backward()
# Update weights with gradients
optimizer.step()
# Return error
return error
# -
# ### Generate Samples for Testing
num_test_samples = 16
test_noise = noise(num_test_samples)
# ### Start training
# +
logger = Logger(model_name='VGAN', data_name='MNIST')
for epoch in range(num_epochs):
for n_batch, (real_batch,_) in enumerate(data_loader):
# 1. Train Discriminator
real_data = Variable(images_to_vectors(real_batch))
if torch.cuda.is_available(): real_data = real_data.cuda()
# Generate fake data
fake_data = generator(noise(real_data.size(0))).detach()
# Train D
d_error, d_pred_real, d_pred_fake = train_discriminator(d_optimizer,
real_data, fake_data)
# 2. Train Generator
# Generate fake data
fake_data = generator(noise(real_batch.size(0)))
# Train G
g_error = train_generator(g_optimizer, fake_data)
# Log error
logger.log(d_error, g_error, epoch, n_batch, num_batches)
# Display Progress
if (n_batch) % 100 == 0:
display.clear_output(True)
# Display Images
test_images = vectors_to_images(generator(test_noise)).data.cpu()
logger.log_images(test_images, num_test_samples, epoch, n_batch, num_batches);
# Display status Logs
logger.display_status(
epoch, num_epochs, n_batch, num_batches,
d_error, g_error, d_pred_real, d_pred_fake
)
# Model Checkpoints
logger.save_models(generator, discriminator, epoch)
# -
| 1. Vanilla GAN PyTorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from tqdm.auto import tqdm
from glob import glob
import time, gc
import cv2
import pyarrow.parquet as pq
import pyarrow as pa
from tensorflow import keras
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.models import clone_model
from keras.layers import Dense,Conv2D,Flatten,MaxPool2D,Dropout,BatchNormalization, Input
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import os
import json
import pickle
# +
parent_directory = os.path.dirname(os.getcwd())
def get_dummies(df):
cols = []
for col in df:
cols.append(pd.get_dummies(df[col].astype(str)))
return pd.concat(cols, axis=1)
# -
# IMG_SIZE=64
global IMG_X_SIZE
IMG_X_SIZE = 87
global IMG_Y_SIZE
IMG_Y_SIZE = 106
global N_CHANNELS
N_CHANNELS=1
# Preparing the preprocessed data for fitting in the model
# this is for GCP or local
proc_img_0 = pq.read_table(parent_directory+"/data/preprocessed/preprop_0.parquet").to_pandas()
proc_img_1 = pq.read_table(parent_directory+"/data/preprocessed/preprop_1.parquet").to_pandas()
proc_img_2 = pq.read_table(parent_directory+"/data/preprocessed/preprop_2.parquet").to_pandas()
proc_img_3 = pq.read_table(parent_directory+"/data/preprocessed/preprop_3.parquet").to_pandas()
train_images = pd.concat([proc_img_0, proc_img_1, proc_img_2, proc_img_3])
train_images.drop(columns=['image_id'],inplace=True)
del proc_img_0
del proc_img_1
del proc_img_2
del proc_img_3
# CNN takes images in shape `(batch_size, h, w, channels)`, so reshape the images
train_images = train_images.values.reshape(-1, IMG_X_SIZE, IMG_Y_SIZE, N_CHANNELS)
train_labels = pd.read_csv(parent_directory+"/data/train.csv")
Y_train_root = pd.get_dummies(train_labels['grapheme_root']).values
Y_train_vowel = pd.get_dummies(train_labels['vowel_diacritic']).values
Y_train_consonant = pd.get_dummies(train_labels['consonant_diacritic']).values
del train_labels
# print(f'Training images: {train_images.shape}')
# print(f'Training labels root: {Y_train_root.shape}')
# print(f'Training labels vowel: {Y_train_vowel.shape}')
# print(f'Training labels consonants: {Y_train_consonant.shape}')
# below this should take around 5 minutes
x_train, x_test, y_train_root, y_test_root, y_train_vowel, y_test_vowel, y_train_consonant, y_test_consonant \
= train_test_split(train_images, Y_train_root, Y_train_vowel, Y_train_consonant, test_size=0.3, random_state=666)
del train_images
x_val, x_test, y_val_root, y_test_root, y_val_vowel, y_test_vowel, y_val_consonant, y_test_consonant \
= train_test_split(x_test, y_test_root, y_test_vowel, y_test_consonant, test_size=0.33, random_state=666)
# print(f'x_train size: {x_train.shape}')
# print(f'x_val size: {x_val.shape}')
# print(f'x_test size: {x_test.shape}')
class MultiOutputDataGenerator(keras.preprocessing.image.ImageDataGenerator):
def flow(self,
x,
y=None,
batch_size=32,
shuffle=True,
sample_weight=None,
seed=None,
save_to_dir=None,
save_prefix='',
save_format='png',
subset=None):
targets = None
target_lengths = {}
ordered_outputs = []
for output, target in y.items():
if targets is None:
targets = target
else:
targets = np.concatenate((targets, target), axis=1)
target_lengths[output] = target.shape[1]
ordered_outputs.append(output)
for flowx, flowy in super().flow(x, targets, batch_size=batch_size,
shuffle=shuffle):
target_dict = {}
i = 0
for output in ordered_outputs:
target_length = target_lengths[output]
target_dict[output] = flowy[:, i: i + target_length]
i += target_length
yield flowx, target_dict
# Preparing the data generator (should take two minutes)
# Data augmentation for creating more training data
datagen = MultiOutputDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=8, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.15, # Randomly zoom image
width_shift_range=0.15, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.15, # randomly shift images vertically (fraction of total height)
horizontal_flip=False, # randomly flip images
vertical_flip=False) # randomly flip images
# This will just calculate parameters required to augment the given data. This won't perform any augmentations
datagen.fit(x_train)
# +
"""
Not going to use exponential anymore after realizing it sucks
"""
# need to edit these when we run the actual model and not doing hyperparameter tuning
# initial_learning_rate = 0.01
# decay_steps = 5 # this would be more like 10 or 20, since we'll be running more epochs
# decay_rate = 0.1
# learning_rate_exp_root = tf.keras.optimizers.schedules.ExponentialDecay(
# initial_learning_rate = initial_learning_rate, decay_steps = decay_steps, decay_rate=decay_rate, name="lr_expD_root")
# learning_rate_exp_vowel = tf.keras.optimizers.schedules.ExponentialDecay(
# initial_learning_rate = initial_learning_rate, decay_steps = decay_steps, decay_rate=decay_rate, name="lr_expD_vowel")
# learning_rate_exp_consonant = tf.keras.optimizers.schedules.ExponentialDecay(
# initial_learning_rate = initial_learning_rate, decay_steps = decay_steps, decay_rate=decay_rate, name="lr_expD_consonant")
# LR_scheduler_exp_root = tf.keras.callbacks.LearningRateScheduler(learning_rate_exp_root)
# LR_scheduler_exp_vowel = tf.keras.callbacks.LearningRateScheduler(learning_rate_exp_vowel)
# LR_scheduler_exp_consonant = tf.keras.callbacks.LearningRateScheduler(learning_rate_exp_consonant)
# def exponential_decay_fn(epoch):
# return 0.5 * 0.1 **(epoch / 3) # 1st var is initial lr, 2nd is decay_rate, 3rd is decay_steps, i think
# lr_exp_root = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# lr_exp_vowel = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
# lr_exp_consonant = keras.callbacks.LearningRateScheduler(exponential_decay_fn)
learning_rate_reduction_root = ReduceLROnPlateau(monitor='dense_3_accuracy',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001)
learning_rate_reduction_vowel = ReduceLROnPlateau(monitor='dense_4_accuracy',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001)
learning_rate_reduction_consonant = ReduceLROnPlateau(monitor='dense_5_accuracy',
patience=3,
verbose=1,
factor=0.5,
min_lr=0.00001)
# -
def build_model(activation, dropout_prob):
inputs = Input(shape = (IMG_X_SIZE, IMG_Y_SIZE, N_CHANNELS))
# first convolutional layer
model = Conv2D(filters=32, kernel_size=(3, 3), padding='SAME', activation=activation, input_shape=(IMG_X_SIZE, IMG_Y_SIZE, N_CHANNELS))(inputs)
model = Conv2D(filters=32, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=32, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=32, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = BatchNormalization(momentum=0.15)(model)
model = MaxPool2D(pool_size=(2, 2))(model)
model = Conv2D(filters=32, kernel_size=(5, 5), padding='SAME', activation=activation)(model)
model = Dropout(rate=dropout_prob)(model)
# 2nd CL
model = Conv2D(filters=64, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=64, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=64, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=64, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = BatchNormalization(momentum=0.15)(model)
model = MaxPool2D(pool_size=(2, 2))(model)
model = Conv2D(filters=64, kernel_size=(5, 5), padding='SAME', activation=activation)(model)
model = BatchNormalization(momentum=0.15)(model)
model = Dropout(rate=dropout_prob)(model)
# 3rd CL
model = Conv2D(filters=128, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=128, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=128, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=128, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = BatchNormalization(momentum=0.15)(model)
model = MaxPool2D(pool_size=(2, 2))(model)
model = Conv2D(filters=128, kernel_size=(5, 5), padding='SAME', activation=activation)(model)
model = BatchNormalization(momentum=0.15)(model)
model = Dropout(rate=dropout_prob)(model)
# 4th CL
model = Conv2D(filters=256, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=256, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=256, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = Conv2D(filters=256, kernel_size=(3, 3), padding='SAME', activation=activation)(model)
model = BatchNormalization(momentum=0.15)(model)
model = MaxPool2D(pool_size=(2, 2))(model)
model = Conv2D(filters=256, kernel_size=(5, 5), padding='SAME', activation=activation)(model)
model = BatchNormalization(momentum=0.15)(model)
model = Dropout(rate=dropout_prob)(model)
# dense layer
model = Flatten()(model)
model = Dense(1024, activation=activation)(model)
model = Dropout(rate=dropout_prob)(model)
dense = Dense(512, activation=activation)(model)
# softmax layer
head_root = Dense(168, activation = 'softmax', name = "dense_root")(dense)
head_vowel = Dense(11, activation = 'softmax', name = "dense_vowel")(dense)
head_consonant = Dense(7, activation = 'softmax', name = "dense_consonant")(dense)
# output
model = Model(inputs=inputs, outputs=[head_root, head_vowel, head_consonant])
return model
# +
activations = ["tanh", "relu"]
dropout_probs = [0.2, 0.4]
optimizers = ['adam', 'nadam']
# lr_schedulers = ['exp', 'power']
batch_sizes = [256,128]
epochs = 10
# -
# TUNE THE MODEL
if not os.path.exists(parent_directory+"/models"):
os.makedirs(parent_directory+"/models")
histories = {}
counter = 0
for activation in activations:
for dropout_prob in dropout_probs:
for optimizer in optimizers:
for batch_size in batch_sizes:
# # MAKE SURE YOU EDIT THIS OUT LATER BUT THIS IS JUST TO SKIP MODEL 0 CUZ WE ALREADY TRIED IT
if not (counter==8 or counter==12):
counter += 1
continue
print("==========================================================================================")
print("Training model_"+str(counter) +":")
print("\t Activation: " + activation)
print("\t Dropout Probability: " + str(dropout_prob))
print("\t Optimizer: " + optimizer)
print("\t Batch Size: " + str(batch_size))
model = build_model(activation, dropout_prob)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
callbacks=[learning_rate_reduction_root, learning_rate_reduction_vowel, learning_rate_reduction_consonant]
history = model.fit_generator(
datagen.flow(
x_train, {'dense_root': y_train_root, 'dense_vowel': y_train_vowel, 'dense_consonant': y_train_consonant},
batch_size=batch_size),
epochs = epochs, validation_data = (x_val, [y_val_root, y_val_vowel, y_val_consonant]),
steps_per_epoch=x_train.shape[0] // batch_size,
callbacks=callbacks
)
# need to change values of history to float64s or floats, float32 is not json serializable
for key in history.history.keys():
history.history[key] = [np.float64(val) for val in history.history[key]]
# add history to histories
histories["model_" + str(counter)] = (activation, dropout_prob, optimizer, batch_size, history.history)
# save histories as json file
with open(parent_directory+"/models/model_" + str(counter)+".json", "w") as fp:
json.dump(history.history, fp, sort_keys = True, indent = 4)
counter += 1
del model
del history
with open(parent_directory+"/models/histories.json", "w") as fp:
json.dump(histories, fp, sort_keys = True, indent = 4)
del x_train
del x_test
del y_train_root
del y_test_root
del y_train_vowel
del y_test_vowel
del y_train_consonant
del y_test_consonant
gc.collect()
def weighted_avg(vals):
return (vals['val_dense_root_accuracy'][-1]*2 + vals['val_dense_vowel_accuracy'][-1] + vals['val_dense_consonant_accuracy'][-1])/4
# examining histories
with open(parent_directory+"/models/histories0-5.json", "r") as fp:
history = json.load(fp)
with open(parent_directory+"/models/histories8&12.json", "r") as fp:
history.update(json.load(fp))
for key in history.keys():
# need to edit a mistake that listed "exp" learning rate, instead of the batch size
if int(key[-1])%2 == 0:
history[key][3] = 256
else:
history[key][3] = 128
history[key].append(history[key][4]['val_dense_root_accuracy'][-1])
history[key][4] = weighted_avg(history[key][4])
# Half way through we’re noticing that Adam isn’t performing well with these combinations. We also see that a lower dropout rate seems to give a better score on the validation set (not overfitting). Finally, we also noticed earlier that the exponential learning rate scheduler was performing much worse than the power learning rate scheduler, so we went ahead and only used that. This is why we stopped after model_5.
#
# We also see that batch size of 256 is performing better than batch size of 128.
#
# As a matter of time constraint we’ll remove nadam and batch size = 128 and continue with the other options. Each model took around 40 minutes to run, at 10 epochs each. This lead to these results:
# these are the tuning results after 10 epochs
cols = ['activation', 'dropout_prob', 'optimizer', 'batch_size','weighted_avg_val_acc', 'val_root_acc']
tuning_results = pd.DataFrame.from_dict(history, orient='index', columns = cols)
baseline_model = pd.DataFrame([['relu', 0.33, 'adam', 256, (0.9191*2 + 0.9759 +0.9753)/4, 0.9191]], columns = cols)
baseline_model.rename(index={0:'model_baseline'}, inplace=True)
tuning_results.append(baseline_model)
# Final results from this tuning is that we realized ReLU is a better activation function than tanh, it seems the lower dropout probability leads to a slight increase in the root accuracy, the adam optimizer looks a lot better than nadam, and a batch size of 256 is better than 128.
#
# Therefore, what we can explore next is a bigger range of dropout probabilities, and we can try to add more convolutional layers, we can try to add mc dropout, and run our final model for more epochs.
| src/hp_tuning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# **...the next morning**
# + slideshow={"slide_type": "subslide"}
# Startup databrokers and elastic search
import matplotlib.pyplot as plt
# %matplotlib qt5
from pprint import pprint
from rapidz.graph import _clean_text, readable_graph
from xpdan.vend.callbacks.core import Retrieve
from xpdan.vend.callbacks.zmq import Publisher
from xpdconf.conf import glbl_dict
from databroker_elasticsearch import load_elasticindex
from databroker_elasticsearch.brokersearch import BrokerSearch
from databroker import Broker
import yaml
import esconverters
dbs = {}
for yaml_file in ['raw', 'an']:
with open(f'{yaml_file}.yml', 'r') as f:
dbs[yaml_file] = Broker.from_config(yaml.load(f))
# + [markdown] slideshow={"slide_type": "slide"}
# FYI, the objects we have connected are
# 1. databoker databases that contain the metadata about the scans
# 2. elastic-search indexes that have indexed the databrokers and will return just the metadata if queried
# 3. Broker-search objects that will return the run-start header objects when queried (this is what is needed to run the analysis)
# + slideshow={"slide_type": "subslide"}
an_db = dbs['an']
raw_db = dbs['raw']
raw_es = load_elasticindex('es-raw.yaml')
an_es = load_elasticindex('es-an.yaml')
raw_db_es = BrokerSearch(raw_db, raw_es)
an_db_es = BrokerSearch(an_db, an_es)
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Pavol wakes up and wonders how CJ did last night, but CJ is now sleeping soundly in his bed
# 1. Pavol wants to use elastic search to search the database of collected data and see how CJ did last night
# 1. He searches for ``tooth`` in any of the metadata fields
# + slideshow={"slide_type": "subslide"}
# query raw es for tooth
[d['_source']['sample_name'] for d in raw_es.qsearch('tooth')['hits']['hits']] # search all fields
# + [markdown] slideshow={"slide_type": "slide"}
# 1. He finds three datasets, so he knows that CJ had a successful night
# 1. He checks all the datasets ran to completion
# 1. He also has other ways to search for the dinosaur tooth
# + slideshow={"slide_type": "subslide"}
for hdr in raw_db_es('tooth'):
print(hdr.stop)
# + slideshow={"slide_type": "skip"}
raw_es.qsearch('dinosaur') # search all fields
# + slideshow={"slide_type": "skip"}
raw_es.qsearch('sample_name:dinosaur') # search specific field
# + slideshow={"slide_type": "skip"}
raw_es.qsearch('dino*') # glob-like search
# + slideshow={"slide_type": "skip"}
raw_es.qsearch('dinosaurus~2') # fuzzy search max edit distance of 2
# + slideshow={"slide_type": "skip"}
raw_hdr = next(iter(raw_db_es('dinosaurus~2')))
uid = raw_hdr.start['uid']
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Pavol also wants to know if CJ was able to do any analysis on the data during the night
# 1. Pavol searches the databroker that contains analyzed data
# + slideshow={"slide_type": "subslide"}
# queries with an_es
an_es.qsearch('img_sinogram', size=0)
# + slideshow={"slide_type": "skip"}
an_es.qsearch('img_sinogram')
# + slideshow={"slide_type": "skip"}
an_es.qsearch(f'puid:{uid[:6]}*') # word in puid
# + slideshow={"slide_type": "skip"}
an_es.qsearch('analysis_stage:img_sinogram')
# + slideshow={"slide_type": "skip"}
an_es.qsearch('usednodes.ndfunc:*sort_sinogram')
# + slideshow={"slide_type": "skip"}
an_es.qsearch('gridrec')
# + slideshow={"slide_type": "skip"}
an_es.qsearch('usednodes.ndkwargs.algorithm:gridrec')
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Pavol wants to know if there was a tomographic reconstruction already done?
# + slideshow={"slide_type": "subslide"}
# query an_es/databroker for tomo recon
hdrs = an_db_es('analysis_stage:*tomo*')
tomo_analysis_hdr = next(iter(hdrs))
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Now Pavol wants to replay the same analysis from the database as a sanity check to see if he gets the same answer.
# 1. He wants to see exactly what analysis CJ did during the night, so he plots the analysis graph that he found in the database from the analysis done last night.
# + slideshow={"slide_type": "subslide"}
# load and show the graph
from shed.replay import replay
# load the replay
graph, parents, data, vs = replay(raw_db, tomo_analysis_hdr)
# make the graph more accessible to humans by renaming things
# these names *should* match the names in the graph plot
for k, v in graph.nodes.items():
v.update(label=_clean_text(str(v['stream'])).strip())
graph = readable_graph(graph)
# plot the graph
graph.nodes['data img FromEventStream']['stream'].visualize()
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Each unique analysis has its own unique id.
# 2. Each unique graph has its own unique id.
# + slideshow={"slide_type": "subslide"}
hdrs = list(an_db_es('usednodes.ndkwargs.algorithm:gridrec'))
for hdr in hdrs:
print('analysis id:', hdr.start['uid'])
for hdr in hdrs:
print('graph id:', hdr.start['graph_hash'])
# -
# setup a publisher to send over to data viz and capture
p = Publisher(glbl_dict['inbound_proxy_address'], prefix=b'tomo')
z = graph.nodes['img_tomo ToEventStream']['stream'].LastCache().DBFriendly()
z.starsink(p)
# + [markdown] slideshow={"slide_type": "slide"}
# 1. As a sanity check, Pavol replays the analysis from last night with no changes
# + slideshow={"slide_type": "subslide"}
# replay analysis with no changes
r = Retrieve(dbs['raw'].reg.handler_reg)
for v in vs:
d = data[v['uid']]
dd = r(*d)
parents[v["node"]].update(dd)
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Pavol now changes the recostruction algorithm to ``algebraic``. It is the node called ``recon_wrapper`` and he wants the keyword argument ``algorithm`` to be set to ``'art'`` which selects the reconstruction algorithm we want to use.
# 1. He then reruns the analysis through the new pipeline, which has just changed by one node.
# + slideshow={"slide_type": "subslide"}
# change to Algebraic Reconstruction technique
print(graph.nodes['starmap; recon_wrapper']['stream'].kwargs)
graph.nodes['starmap; recon_wrapper']['stream'].kwargs['algorithm'] = 'art'
print(graph.nodes['starmap; recon_wrapper']['stream'].kwargs)
# replay with changes
r = Retrieve(dbs['raw'].reg.handler_reg)
for v in vs:
d = data[v['uid']]
dd = r(*d)
parents[v["node"]].update(dd)
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Just because he can, Pavol compares the ID of the previous graph and the new one. They are different because the graphs are different.
# + slideshow={"slide_type": "subslide"}
# These hashes are different because the algorithms are different
dbs = {}
for yaml_file in ['raw', 'an']:
with open(f'{yaml_file}.yml', 'r') as f:
dbs[yaml_file] = Broker.from_config(yaml.load(f))
from databroker_elasticsearch.converters import register_converter
an_db = dbs['an']
raw_db = dbs['raw']
raw_es = load_elasticindex('es-raw.yaml')
an_es = load_elasticindex('es-an.yaml')
raw_db_es = BrokerSearch(raw_db, raw_es)
an_db_es = BrokerSearch(an_db, an_es)
print(an_db[-1].start['graph_hash'])
print(an_db[-2].start['graph_hash'])
# + [markdown] slideshow={"slide_type": "slide"}
# 1. Pavol searches elastic search for the art reconstruction data
# 1. Not surprisingly, Pavol wants to compare the previous analysis to the new one.
# 1. To do this, he retrieves the last event from each stream and plots them
# + slideshow={"slide_type": "subslide"}
# an_es for new data (via new recon algo)
dbs = {}
for yaml_file in ['raw', 'an']:
with open(f'{yaml_file}.yml', 'r') as f:
dbs[yaml_file] = Broker.from_config(yaml.load(f))
from databroker_elasticsearch.converters import register_converter
an_db = dbs['an']
raw_db = dbs['raw']
raw_es = load_elasticindex('es-raw.yaml')
an_es = load_elasticindex('es-an.yaml')
raw_db_es = BrokerSearch(raw_db, raw_es)
an_db_es = BrokerSearch(an_db, an_es)
vqan = lambda q: pprint((q, an_es.qsearch(q)))
an_es.qsearch('art')
# + slideshow={"slide_type": "subslide"}
hdr1 = next(iter(an_db_es('usednodes.ndkwargs.algorithm:art')))
hdr2 = next(iter(an_db_es('usednodes.ndkwargs.algorithm:gridrec')))
art = next(hdr1.data('img_tomo', stream_name='final_primary'))
grid = next(hdr2.data('img_tomo', stream_name='final_primary'))
# Compare results
fig, axs = plt.subplots(1, 3, tight_layout=True)
for img, ax in zip([art, grid], axs):
ax.imshow(img)
axs[-1].imshow(art - grid)
plt.savefig('reconstructions.png', transparent=True, bbox_inches='tight')
# -
| demo/data_an.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbpresent={"id": "20676e59-d513-4851-93b5-2071480cc200"} slideshow={"slide_type": "slide"}
# <div style="text-align: center;">
# <FONT size="8">
# <BR><BR><b>
# Stochastic Processes: <BR><BR>Data Analysis and Computer Simulation
# </b>
# </FONT>
# <BR><BR><BR>
#
# <FONT size="7">
# <b>
# Python programming for beginners
# </b>
# </FONT>
# <BR><BR><BR>
#
# <FONT size="7">
# <b>
# -Using Python, iPython, and Jupyter notebook-
# </b>
# </FONT>
# <BR>
# </div>
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 1
# - In this course, we will use the Jupyter notebook as our programming environment.
# - It is freely available for Windows, Mac, and Linux through the Anaconda Python Distribution.
# - In this plot, I will explain how to install and use the Jupyter notebook in a step-by-step manner to create some common visualizations that we will use throughout this course.
# + [markdown] slideshow={"slide_type": "slide"}
# # Install anaconda
# -
# ## Instractions
# - Download the Python 3.$*$ Anaconda package appropriate for your platform (Windows/Mac/Linux) from the official website (https://www.continuum.io/downloads).
# - Install anaconda by executing the installer program (see details at https://docs.continuum.io/anaconda/install).
# - You can update to the latest version of Anaconda by executing the following commands from the command line (optional).
# ```
# conda update conda
# conda update anaconda
# ```
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 2
# - Let us now install Anaconda.
# - First, visit the download page of the official Anaconda website and download the latest version of the Python-3 Anaconda package appropriate for your platform, Windows, Mac, or Linux.
# - Here, we will be working with the 64-bit Anaconda 4.3.0 distribution for Mac OSX with Python 3.6, but other combinations should also work.
# - Second, run the installer program (".exe" executable file on Windows, ".dmg" disk image file on Mac, or ".sh" shell script on Linux) and follow the instructions shown on the screen.
# - The installer may ask some questions during the procedure.
# - If you are not sure how to answer, accepting the default responses should be fine.
# - You can update to the latest Anaconda version by executing the commands shown here from the command prompt, but this is only optional.
# + [markdown] slideshow={"slide_type": "slide"}
# # Launch jupyter notebook
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 3
# - To launch the Jupyter notebook, first, open the "Terminal" application on Mac or Linux, or the "Command Prompt" on Windows to use the command line.
# - It is probably convenient if you create a new folder or directory to store the notebooks for this course.
# - Now, change into your chosen directory using the command shown here.
# - Then, you can launch the Jupiter notebook by typing "`jupyter notebook`" in the command line.
# - Some information will be displayed on your screen, which you can ignore; then the Jupyter notebook will be opened in your web-browser with a local URL (by default, http://localhost:8888).
# - Here we use the Safari web-browser on Mac, but you should observe the same results under other operating systems or browsers.
# + [markdown] slideshow={"slide_type": "-"}
# ## Demonstration
# ```
# # # mkdir work
# # # cd work
# jupyter notebook
# [I 18:10:21.427 NotebookApp] Serving notebooks from local directory: /Users/ryoichi/work
# [I 18:10:21.427 NotebookApp] 0 active kernels
# ...
# ```
# 
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 4
# - Next, we will start a new Python kernel.
# - Click on the "New" icon, and select "Python 3" which is circled in red in the following figure.
# - The Jupyter notebook works with many different programming languages (or kernels), not just Python, but we will not be using this capability for this course.
# - If you can see more than two "Python" options in the "New" menu, please be sure to choose Python version 3.
# - Here we just choose "Python 3".
# + [markdown] nbpresent={"id": "e9110199-25a9-4051-a111-13ea03e49634"} slideshow={"slide_type": "slide"}
# ## Demo continued...
# 
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 5
# - Then a new notebook will open, with an empty box or "Cell", in which you can type and run python commands interactively.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Demo continued...
# 
# + [markdown] slideshow={"slide_type": "slide"}
# # Check Python version
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 6
# - To be sure that you are running a proper Python3 version, type the following commands in the cell and run it by performing one of the following operations.
# - The system will print the version number of the Python interpreter you are currently using.
# - If it is found to be 2.$*$.$*$, please uninstall the present Anaconda and re-install another Anaconda with a proper version of 3.$*$.$*$.
# + [markdown] slideshow={"slide_type": "-"}
# ## Demonstration
#
# - Type the following commands, and perform one of the followings or click the icon circled in red in the figure.
# 1. press "Control-Return"
# 1. choose "Cell" menu -> "Insert Cell below".
# + slideshow={"slide_type": "-"}
import sys
sys.version
# + [markdown] slideshow={"slide_type": "-"}
# 
# + [markdown] nbpresent={"id": "a07d72ba-5817-4081-a6ae-2849a10713dd"} slideshow={"slide_type": "slide"}
# # Use jupyter notebook to run Python in interactive mode: Code mode
# + [markdown] slideshow={"slide_type": "-"}
# ## Creat a new sell
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 7
# - Here, let us use cells in code mode to run Python in interactive mode.
# - First, perform one of the following operations to create a new cell.
# + [markdown] slideshow={"slide_type": "-"}
# ### Instructions
#
# - Perform one of the following operations to create a new cell.
# 1. press "Shift-Return"
# 1. choose "Insert" -> "Insert Cell below" from the menubar.
# 1. click "+" icon circled in blue in the figure.
#
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ## The simplest calculation
# -
# ### A code example
# + slideshow={"slide_type": "-"}
1+1
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 8
# - Type "1+1" in the new cell and run it.
# - Then you will find the answer "2" as an output.
# - The cell is editable by clicking on it.
# + [markdown] nbpresent={"id": "19032d51-6a19-4cea-9814-6de89c49db0c"} slideshow={"slide_type": "slide"}
# ## Mathematical functions
# ### A code example
# + nbpresent={"id": "385c074c-f603-438d-a251-f92b1fc8360f"} slideshow={"slide_type": "-"}
import numpy as np
thrad=0.5
theta=thrad*np.pi
sinth=np.sin(theta)
costh=np.cos(theta)
print('theta =',thrad,'* pi')
print('sin(theta) =',sinth)
print('cos(theta) =',costh)
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 9
# - Now, type the following code-example in a new cell and run it.
# - The 1st line is to import the "numpy" library with a shorter name "np".
# - This library is necessary to use mathematical functions such as "`sin`" and "`cos`" in the notebook.
# - Then you will find the values in the output cell.
# - More detailed information is available at the numpy website (http://www.numpy.org/).
# + [markdown] nbpresent={"id": "1fd13e5c-1e30-4dc0-be04-12f96ec974ff"} slideshow={"slide_type": "slide"}
# # Use jupyter notebook to write documents: Markdown mode
# -
# ## Change cell mode
#
# ### Instructions
#
# - Select the cell and change cell type to Markdown mode by one of the following operations.
# 1. press "ESC" to enter command mode and then press "m"
# 1. choose "Cell" -> "Cell Type" -> "Markdown" from the menu
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 10
# - You can also use Jupyter notebooks to write documents in Markdown mode.
# - To write a formatted text, select the cell and change cell type to Markdown mode by one of the following operations.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Write text
#
# ### A code example
#
# - Type (or copy and paste) the following code example in the selected cell and run it.
#
# ```
# # Title level 1
# ## Title level 2
# ### Title level 3
#
# - Item 1
# - Item 2
#
#
# 1. Enumerate 1
# 2. Enumerate 2
# ```
#
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 11
# - Then type the following code-example in the selected cell and run it.
# - This is the output.
# - Detailed information on markdown is available at various websites, for example, at the website called "Mastering Markdown".(https://guides.github.com/features/mastering-markdown/).
# + [markdown] nbpresent={"id": "9f86a7e4-ae1b-4781-b0c7-95d3e15b3bbc"} slideshow={"slide_type": "slide"}
# # Title level 1
# ## Title level 2
# ### Title level 3
#
# - Item 1
# - Item 2
#
#
# 1. Enumerate 1
# 1. Enumerate 2
# + [markdown] nbpresent={"id": "99f8abe3-f5aa-4762-8362-169367cb9d55"} slideshow={"slide_type": "slide"}
# ## Mathematical Typesetting
#
# ### A code example
# - Type (or copy and paste) the following code example in the selected cell and run it.
#
# ```
# $$
# \frac{d\mathbf{R}(t)}{dt}=\mathbf{V}(t) \tag{1}
# $$
# $$
# m\frac{d\mathbf{V}(t)}{dt}=-\zeta\mathbf{V}(t)-k\mathbf{R}(t) \tag{2}
# $$
# ```
#
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 12
#
# - You can write equations using LaTeX commands in Markdown mode.
# - Type the following code-example in the selected cell in Markdown mode and run it.
# - The results are shown here.
# - Detailed information on LaTeX is also available online, for example at "The LaTeX project" website (https://www.latex-project.org/).
# + [markdown] nbpresent={"id": "0e97808f-846c-422b-95f9-ba2f1a89339d"} slideshow={"slide_type": "fragment"}
# $$
# \frac{d\mathbf{R}(t)}{dt}=\mathbf{V}(t) \tag{1}
# $$
# $$
# m\frac{d\mathbf{V}(t)}{dt}=-\zeta\mathbf{V}(t)-k\mathbf{R}(t) \tag{2}
# $$
# + [markdown] nbpresent={"id": "6fd649ff-33ab-4c60-bb3a-91ea02ecb9fc"} slideshow={"slide_type": "slide"}
# # Save jupyter notebook
# -
# ## Save to file
#
# 1. Select "File" menu -> "Save and Checkpoint".
# 1. click the "save" icon circled in green in the figure shown below.
#
# ## Change file name
#
# - Select "File" menu -> "Rename" -> Enter a new notebook name -> "OK"
#
# 
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 13
# - Notebooks are periodically saved, but you can force save your changes by selecting "Save and Checkpoint" from the "File" menu or clicking on the "save" icon circled in green in the figure.
# - You can also change the file name using the instructions below.
# + [markdown] slideshow={"slide_type": "slide"}
# # Terminate jupyter notebook
# -
# ## Server
#
# 1. Press "Control-C" ("Control" and "c" keys together) in command line.
# 1. Select "File" menu -> "Close and Halt".
#
# ## Browser
#
# - If the jupyter notebook server is not terminated, you can resume the notebook by re-opening the same local URL (by default, http://localhost:8888).
# + [markdown] slideshow={"slide_type": "notes"}
# #### Note 14
#
# - To terminate the Jupyter notebook, make the command line window active, and press "Control-C" until the command prompt is recovered, or select "File" menu -> "Close and Halt".
# - You can also terminate the web-browser if necessary.
# - If you accidently close the web-browser, without killing the Jupiter notebook from the command line or file menu, you can recover the ipython session by re-opening the local URL (by default, http://localhost:8888) in your web browser.
| edx-stochastic-data-analysis/downloaded_files/01/.ipynb_checkpoints/Stochastic_Processes_week01_1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # NetCDF file
# First you have to import the meteomatics module and the datetime module
# +
import datetime as dt
import meteomatics.api as api
from __future__ import print_function
# -
# Input here your username and password from your meteomatics profile
###Credentials:
username = 'python-community'
password = '<PASSWORD>'
# Input here the limiting coordinates of the extract you want to look at. You can also change the resolution.
lat_N = 50
lon_W = -16
lat_S = 20
lon_E = 10
res_lat = 2
res_lon = 2
# Input here the directory and the name. The directory must already exist.
filename_nc = 'netcdf_file.nc'
# Input here the startdate, enddate and the timeinterval as datetime objects.
startdate_nc = dt.datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
enddate_nc = startdate_nc + dt.timedelta(days=3)
interval_nc = dt.timedelta(hours=12)
# Choose the parameter you want to get. You can only chose one parameter at a time. Check here which parameters are available: https://www.meteomatics.com/en/api/available-parameters/
parameter_nc = 't_2m:C'
# In the following, the request will start. If there is an error in the request as for example a wrong parameter or a date that doesn't exist, you get a message.
print("netCDF file:")
try:
api.query_netcdf(filename_nc, startdate_nc, enddate_nc, interval_nc, parameter_nc, lat_N, lon_W, lat_S,
lon_E, res_lat, res_lon, username, password)
print("filename = {}".format(filename_nc))
except Exception as e:
print("Failed, the exception is {}".format(e))
# You will get the data as a NetCDF file. This is a common file format to share climatological data. You need to have a special program to be able to visualize it, as shown here.
#
# 
| examples/notebooks/08_Net_CDF_file.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ## Improving Predictions with `scikit-learn`
#
# In this chapter we will try the same regression as chapter 8, but this time without departure delay; a harder problem.
# +
import sys, os, re
sys.path.append("lib")
import utils
import numpy as np
import sklearn
import iso8601
import datetime
print("Imports loaded...")
# -
# Load and check the size of our training data. May take a minute.
print("Original JSON file size: {:,} Bytes".format(os.path.getsize("../data/simple_flight_delay_features.jsonl")))
training_data = utils.read_json_lines_file('../data/simple_flight_delay_features.jsonl')
print("Training items: {:,}".format(len(training_data))) # 5,714,008
print("Data loaded...")
# Inspect a record before we alter them
print("Size of training data in RAM: {:,} Bytes".format(sys.getsizeof(training_data))) # 50MB
print(training_data[0])
# We need to sample our data to fit into RAM
training_data = np.random.choice(training_data, 1000000) # 'Sample down to 1MM examples'
print("Sampled items: {:,} Bytes".format(len(training_data)))
print("Data sampled...")
# Separate our results from the rest of the data, vectorize and size up
results = [record['ArrDelay'] for record in training_data]
results_vector = np.array(results)
print("Results vectorized size: {:,}".format(sys.getsizeof(results_vector))) # 45,712,160 bytes
print("Results vectorized...")
# Remove the two delay fields and the flight date from our training data
for item in training_data:
item.pop('ArrDelay', None)
item.pop('FlightDate', None)
item.pop('DepDelay', None)
print("ArrDelay, DepDelay and FlightDate removed from training data...")
# Must convert datetime strings to unix times
for item in training_data:
if isinstance(item['CRSArrTime'], str):
dt = iso8601.parse_date(item['CRSArrTime'])
unix_time = int(dt.timestamp())
item['CRSArrTime'] = unix_time
if isinstance(item['CRSDepTime'], str):
dt = iso8601.parse_date(item['CRSDepTime'])
unix_time = int(dt.timestamp())
item['CRSDepTime'] = unix_time
print("CRSArr/DepTime converted to unix time...")
# +
# Use DictVectorizer to convert feature dicts to vectors
from sklearn.feature_extraction import DictVectorizer
print("Sampled dimensions: [{:,}]".format(len(training_data)))
vectorizer = DictVectorizer()
training_vectors = vectorizer.fit_transform(training_data)
print("Size of DictVectorized vectors: {:,} Bytes".format(training_vectors.data.nbytes))
print("Training data vectorized...")
# -
from sklearn.model_selection import train_test_split
# Redo test/train split
X_train, X_test, y_train, y_test = train_test_split(
training_vectors,
results_vector,
test_size=0.1,
random_state=17
)
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)
print("Test train split performed again...")
# +
from sklearn.ensemble import GradientBoostingRegressor
regressor = GradientBoostingRegressor()
print("Gradient boosting regressor instantiated...!")
# -
# Refit regression on new training data
regressor.fit(X_train, y_train)
print("Regressor fitted again...")
# Predict using the test data again
predicted = regressor.predict(X_test.toarray())
print("Predictions made for X_test again...")
# +
from sklearn.metrics import median_absolute_error, r2_score
# Get the median absolute error again
medae = median_absolute_error(y_test, predicted)
print("Median absolute error: {:.3g}".format(medae))
# Get the r2 score gain
r2 = r2_score(y_test, predicted)
print("r2 score: {:.3g}".format(r2))
# +
# Plot outputs, compare actual vs predicted values
import matplotlib.pyplot as plt
plt.scatter(
y_test,
predicted,
color='blue',
linewidth=1
)
plt.xticks(())
plt.yticks(())
plt.show()
| ch09/Improving flight delay predictions with sklearn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import numpy as np
import shutil
from tqdm import tqdm_notebook as tqdm
# + active=""
# Fix order of channels in profiles
# -
working_dir = '/data/datasets/organoid_phenotyping/datasets/'
paths = [os.path.join(working_dir, f) for f in os.listdir(working_dir)]
datasets = [p for p in paths if os.path.isdir(p)]
datasets.sort()
datasets = datasets[:-5] # remove new arlottas
datasets
def reorganize(profiles):
new = np.zeros_like(profiles)
tbr1 = profiles[:, 0]
sox2 = profiles[:, 1]
dn = profiles[:, 2]
new[:, 0] = sox2
new[:, 1] = tbr1
new[:, 2] = dn
return new
for dataset in tqdm(datasets[1:], total=len(datasets[1:])):
profiles_old = np.load(os.path.join(dataset, 'cyto_profiles.npy'))
profiles_new = reorganize(profiles_old)
np.save(os.path.join(dataset, 'cyto_profiles.npy'), profiles_new)
for dataset in tqdm(datasets, total=len(datasets)):
profiles_sample_old = np.load(os.path.join(dataset, 'cyto_profiles_sample.npy'))
profiles_sample_new = reorganize(profiles_sample_old)
np.save(os.path.join(dataset, 'cyto_profiles_sample.npy'), profiles_sample_new)
| notebooks/reorganize_profiles.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### <span style="font-family: Arial; font-weight:bold;font-size:1.9em;color:#B03060"> <NAME> : Cardio Good Fitness Project
#
# <font color=darkblue>
#
#
#
# ### <span style="font-family: Arial; font-weight:bold;font-size:1.9em;color:#0e92ea"> Description:
#
# Objective
#
# Explore the dataset to identify differences between the customers of each product. You can also explore relationships between the different attributes of the customers. You can approach it from any other line of questioning that you feel could be relevant for the business. The idea is to get you comfortable working in Python.
#
# You are expected to do the following :
#
# Come up with a customer profile (characteristics of a customer) of the different products<br>
# Perform univariate and multivariate analyses<br>
# Generate a set of insights and recommendations that will help the company in targeting new customers.<br>
#
#
#
# ### <span style="font-family: Arial; font-weight:bold;font-size:1.9em;color:#0e92ea"> Data Dictionary:
#
# <font color=darkblue>
# <br>
# The data is about customers of the treadmill product(s) of a retail store called Cardio Good Fitness.
#
# It contains the following variables- <br>
# <font color=darkblue>
# 1. Product - The model no. of the treadmill<br>
# 2. Age - Age of the customer in no of years<br>
# 3. Gender - Gender of the customer<br>
# 4. Education - Education of the customer in no. of years<br>
# 5. Marital Status - Marital status of the customer<br>
# 6. Usage - Avg. # times the customer wants to use the treadmill every week<br>
# 7. Fitness - Self rated fitness score of the customer (5 - very fit, 1 - very unfit)<br>
# 8. Income - Income of the customer<br>
# 9. Miles- Miles that a customer expects to run<br>
# ### Import required libraries for the project.
import pandas as pd # Import Pandas Libaries to load data into dataframes
import seaborn as sns # Import Seaborn libraries to use viz.
import matplotlib.pyplot as plt # Import Pyplot from Matplotlib to plot the Visualization
sns.set(color_codes=True)
import warnings # Import warnings to ignore any
warnings.filterwarnings("ignore")
# %matplotlib inline
# # 2. Loading and exploring the data
# ### import required dataset in to Pandas dataframes
cdata = pd.read_csv("CardioGoodFitness.csv") # Load data set in to pandas data frames
# ### Verify first 10 rows from the imported dataset
cdata.head(10) # Verify data by reviewing first 10 records in data frame
# ### Verify last 5 rows from the imported dataset
cdata.tail(5) # Verify data by reviewing first 10 records in data frame
# ### Verify shape of the dataset
cdata.shape # Check how many rows and columns exists in the data frame.
# #### Observation : Dataset contains 180 rows and 9 columns
# ### Verify data types of columns in the dataset
cdata.info() # check the data types of the columns in the data frame
# #### Observation: Total 9 variables and below is classification of variables
# <b>Categorical variables:</b>
# 1. Product
# 2. Gender
# 3. Martial Status
#
# <b>Numerical Varibales:</b>
# 1. Age
# 2. Usage
# 3. Education
# 4. Fitness
# 5. Income
# 6. Miles
#
# ### Verify null values in the dataset
cdata.isnull().sum() # Check if any columns in the data frame has null values
# #### Observation : There is no null values in the provided data set
# ### Charcterstics of the Customer Statistical summary of the numeric varibales
cdata.describe() #Describe the charcterstics of the data
# #### Observation :
# 1. Minimum age of the customer is 18 and max age of the customer is 50 years.
# 2. Minimum income is of the customer is 29562 and Maximum is 104581
# ### Plotting univariate distributions
plt.figure(figsize=(15,7)) # To resize the plot
sns.histplot(cdata.Income); #plots a frequency polygon superimposed on a histogram using the seaborn package.
plt.show()
# #### Observation : Based on the above histogram, income ranges between 50,000 and 60,000 is the most customer base
plt.figure(figsize=(15,7)) # To resize the plot
sns.histplot(cdata.Income, kde=True) # plots a frequency polygon superimposed on a histogram using the seaborn package.
# seaborn automatically creates class intervals. The number of bins can also be manually set.
plt.show()
# #### Observation : Based on the above histogram , kernel density estimate (KDE) is having the postive skew
plt.figure(figsize=(15,7)) # To resize the plot
sns.boxplot(x='Age', data=cdata) #The box shows the quartiles of the dataset while the whiskers
#extend to show the rest of the distribution, except for points that are determined to be “outliers”
#using a method that is a function of the inter-quartile range
plt.show()
# #### Observation : Based on the above box plot below stastical distriutions are observed for Age of the customers
# ##### Mean: 28 years
# #### 25% : 24 years
# #### Median: 26 years
# #### 75%: 33 years
# #### Outliers: 50 years
plt.figure(figsize=(15,7)) # To resize the plot
sns.violinplot(cdata.Usage, color='orange') # plots a violin plt using the seaborn package.Color can be changed as desired
plt.show()
# #### Observation : Violin plot shows the distrubition of the usage of the fitness equipment weekly by the customers. It is evident that average usage is aroun 3 to 4 hrs weekly
# ### Plotting Multivariate distributions
# ### Visualizing pairwise relationships in a dataset
#
# Trying to visualize the reltionship of variables in the cardio dataset
sns.pairplot(cdata, diag_kind='kde')
#The relationship between x and y can be shown for different subsets of the data using the hue, size,
#and style parameters. These parameters control what visual semantics are used to identify the different subsets
plt.show()
# #### Observation: Created a matrix of axes and to analyze the relationship of each pair of column in the cardio good fitness project dataset loaded in to the dataframe
# ### Variables usage and fitness looks intresting would like to analyze more to observer the pattern
plt.figure(figsize=(15,7)) # To resize the plot
sns.scatterplot(cdata['Usage'], cdata['Fitness']) # Plots the scatter plot using two variables
plt.show()
# #### Observation : More the usage of the fitnes equipment, more the Fitness to the customer
# ### Looking for relation between numerical relationship between the variables
correlation = cdata.corr() # displays the correlation between every possible pair of attributes as a dataframe
correlation
plt.figure(figsize=(15,7)) # To resize the plot
# plot the correlation coefficients as a heatmap
sns.heatmap(correlation, annot=True, vmin=-1, vmax=1, fmt='.2f', cmap='Spectral')
plt.show()
# #### Observation:
# 1. It is observed that is high correlation between Fitness and usage of the equipment
# 2. Fitness and number of miles that customer is running has high correlation
# 3. Age and Fitness has low correlation
# 4. Age and usage has the low correlation
# ### Would like to understand the relationship between two variables Fitness and Miles
plt.figure(figsize=(15,7)) # To resize the plot
sns.lmplot(y='Fitness', x='Miles', data=cdata) #lmplot It is intended as a convenient interface
#to fit regression models across conditional subsets of a dataset.
plt.show()
# #### Observation: More the miles and more the fitness
# ### Would like to understand the relationship between two variables Fitness and Miles by differentiate with Age factor
sns.lmplot(y='Fitness', x='Miles', data=cdata, hue='Age')
#lmplot It is intended as a convenient interface
#to fit regression models across conditional subsets of a dataset.
plt.show()
# #### Observations:
# 1. Customer between 29 and 35 ages group are more fit and ran more miles
# 2. More the miles and more the fitness.
# ### Exploring the categorical features - state and year¶
#
cdata.columns # Check the columns in the data frame
print(cdata.Product.unique()) # check the unique values of the variable Product in data frame
print(cdata.Gender.unique()) # check the unique values of the variable Gender in data frame
print(cdata.MaritalStatus.unique()) # check the unique values of the variable MartialStatus in data frame
# ### Find the relationship beyween product and usage
# Show the bar chart by grouping Product and usage
cdata.groupby(by=['Product'])['Usage'].sum().reset_index().sort_values(['Usage']).plot(x='Product',
y='Usage',
kind='bar',
figsize=(15,5))
plt.show()
# #### Observation: Product TM195 is being used more
plt.figure(figsize=(15, 7)) # To resize the plot
# Show the point plot to see visualize the product and usage popularity
sns.pointplot(x='Product', y='Usage', data=cdata, estimator=sum, ci=None)
plt.xticks(rotation=90) # To rotate the x axis labls
plt.show()
# #### Observation: Product TM195 is being used more
# ### Find the product is being purchased by which income level
plt.figure(figsize=(15,7)) # To resize the plot
#Swarmplot gives a better representation of the distribution of values,
#but it does not scale well to large numbers of observations
sns.swarmplot(cdata['Product'], cdata['Income'])
plt.show()
# #### Observation:
# 1. TM195 is popular product and being purchased by Income levels between 30k to 70k
# 2. TM498 product is being purchased by Income levels between 30k and 70K
# 3. TM798 product is being purchased by Income levels between 45k and 100k
# ### Understand the product usage by different products by genders
plt.figure(figsize=(15,5)) # To resize the plot
# Visualizing data set using the barplot by gender and usage by grouping with product
sns.barplot(data=cdata,x='Gender',y='Usage',hue='Product')
plt.show()
# #### Different products are popular in different genders
# 1.TM195 is being used more by Male when Compared to Female <br>
# 2.TM498 is being used more by Female when Compared to Male <br>
# 3.TM798 is being used more by Female when Compare to Male <br>
# ### Multivarient analysis on Gender and Income by product to observe if there is an outliers
plt.figure(figsize=(15,7)) # To resize the plot
# Visualizing data set using the barplot by gender and usage by grouping with product
#The box shows the quartiles of the dataset while the whiskers
#extend to show the rest of the distribution, except for points that are determined to be “outliers”
#using a method that is a function of the inter-quartile range
sns.boxplot(x='Gender', y='Income', hue='Product', data=cdata)
plt.show()
# #### Observation: Females using the product TM498 has outliers in the income levels
# ### Plot the trend of number of Miles by Age Group
#PLoting the trend graph by age and aggeration of miles to viz, the trend
cdata[['Miles','Age']].groupby(['Age']).sum().plot(figsize=(15,5)) # To resize the plot
plt.show()
# #### Observation: Above trend depict that Age group 25 years is very active and they logged the highest number of miles
# ### Analyze the data set to find the product popularity by age and martial status
plt.figure(figsize=(15,7))# To resize the plot
#This function always treats one of the variables as categorical and draws data at ordinal positions (0, 1, … n)
#on the relevant axis, even when the data has a numeric or date type.
cplot=sns.catplot(x='MaritalStatus', y='Age',
data=cdata,
col='Product', kind='bar',
height=5,col_wrap = 5, palette='Blues_d', capsize=.2);
cplot.set_xticklabels(rotation=90) # To rotate the x axis lables
plt.show();
# #### Observation: It is observed that all the products are famous in different martial status and age group with little bit of error
# ### Analyze data set for number of miles by gender and product
plt.figure(figsize=(15,7))# To resize the plot
#Plot the Viz to show the number of miles put together by gender on each product.
sns.violinplot(x='Gender', y='Miles', data=cdata, hue='Product')
plt.show()
# #### Observation: It is observed that Male customer has put in more miles using the product TM798
# ### Trying to findout the fitness levels of customers with high eduction and income
plt.figure(figsize=(15,7)) # To resize the plot
#Plot the Viz to see how eduction and income levels impacting the fitness levels of the customer
sns.jointplot(x='Education', y='Fitness', hue='Income', data=cdata)
plt.show()
# #### Observation: Higher the eduction and higer the income. Customer has high income is having high fitness goals
# ### Find the highest sales of the product among customers
plt.figure(figsize=(15,7)) # To resize the plot
# Plot which popular is more popular among customers
sns.countplot(x='Product', data=cdata)
plt.show()
# #### Observation: Product TM195 has highest sales in the provided data set
# ### <span style="font-family: Arial; font-weight:bold;font-size:1.9em;color:#0e92ea"> Conculsion
#
# 1. Most Popular product in terms of sales in TM195
# 2. Most Popular product in terms of usage is TM798
# 3. Highest income customer are most active
# 4. Age group between 25 and 28 put in more miles on the Cradio Fitness Products
# 5. TM798 is popular with customers earning high assuming this product is expansive
# 6. TM195 is popular among the low income customer assuming this product is low
# 7. It is recommended to capture market for ages above 28 as the usage of product is slowly reducing
# 8. Variables are directly proportional
# 1. Miles
# 2. Fitness
# 3. Usage
| CardioProject/CardioProject.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
# # CSP
# **Sources:**
#
# - https://www.cs.ubc.ca/~mack/CS322/lectures/3-CSP2.pdf
#
# **Definition:** A _constraint satisfaction problem (CSP)_ consists of:
#
# * a set of variables $\mathscr V$.
# * a domain $\textrm{dom}(V)$ for each variable $V \in \mathscr V$.
# * a set of constraints $C$.
#
# An example of a CSP model is:
#
# * $\mathscr V = \{V_1, V_2\}$
# * $\textrm{dom}(V_1) = \{1,2,3\}$
# * $\textrm{dom}(V_2) = \{1,2\}$
# * $C = \{C_1,C_2,C_3\}$
# * $C_1: V_2 \neq 2$
# * $C_2: V_1 + V_2 < 5$
# * $C_3: V_1 > V_2$
#
# **Definition**: A _model_ of a CSP is an assignment of values to all of its variables that _satisfies_ all of its constraints.
#
# **Generate and Test (GT) algorithm**: Systematically check all possible worlds. All possible worlds is the cross product of all the domains:
#
# $$ \textrm{dom}(V_1) \times \textrm{dom}(V_2) \times \ldots \times \textrm{dom}(V_n) $$
#
# Generate and test:
#
# 1. Generate possible worlds one at a time.
# 2. Test constraints for each one.
#
# For $k$ variables, each with domain size $d$, and there are $c$ constraints, the complexity is $O(ck^d)$:
#
# * There are $d^k$ possible worlds.
# * For each one need to check c constraints.
# # Implementation
# **CSP solver implementation:**
# +
import copy
def head(xs): return xs[0] if len(xs) > 0 else None
def replace_all(text, dic):
for k, v in dic.items():
text = text.replace(k, str(v))
return text
def subset(A,B): return all([a in B for a in A])
class Variable():
def __init__(self, name, D):
self.name = name
self.domain = D
self.value = D[0]
def __eq__(self, v): return v.name == self.name
def __repr__(self): return '{}={}'.format(self.name, self.value, self.domain)
class Constraint():
def __init__(self, expr, *variables):
self.variables = variables
self.expression = expr
def evalstr(self): return
def eval(self, V):
return eval(replace_all(self.expression, {v.name:v.value for v in V}))
def __repr__(self): return self.expression
class CSP():
def __init__(self, variables, constraints):
self.variables = variables
self.constraints = constraints
self.stop_on_first = False
def solve(self):
self.solution = []
self.gt_solve([], self.variables)
return self.solution[::-1]
# solving with a Generate and Test (GT) algorithm
#
# example:
# for a \in dom A
# for b \in dom B
# for c \in dom C
# if {A=a, B=b, C=c} return solution
def gt_solve(self, S, V):
print('(Call) S contains {}, V contains {}'.format(S,V))
if len(V) == 0:
# -- eval all
#return all([c.eval(S) for c in self.constraints])
# -- verbose output
print(' (Base) Checking constraints...')
for c in self.constraints:
r = c.eval(S)
if r: print(' (Satisfied) {}'.format(c))
else:
print(' (Failed) {}'.format(c))
return False
return True
v = V.pop()
S.append(v)
for d in v.domain:
if self.solution and self.stop_on_first: return
v.value = d
if self.gt_solve(copy.deepcopy(S), copy.deepcopy(V)):
self.solution = S
print(' (Solution) {}'.format(S[::-1]))
return
return False
# -
# ## Example 1
# Testing the `gt_solve` method with the example model shown at the beginning of this document:
v1 = Variable('v1', [1,2,3])
v2 = Variable('v2', [1,2])
V = [v1,v2]
c1 = Constraint('v1 != v2', v1, v2)
c2 = Constraint('v1 + v2 <= 5', v1, v2)
c3 = Constraint('v1 > v2', v1, v2)
c4 = Constraint('v1 >= 3', v1)
C = [c1,c2,c3,c4]
csp = CSP(V, C)
csp.stop_on_first = True
csp.solve()
# ## Example 2
#
# Solving the Australian map coloring problem.
# %%time
colors = {'red': 0, 'blue': 1, 'green': 2}
icolors = {v:k for k,v in colors.items()}
V = [Variable(x, list(colors.values())) for x in 'SA,WA,NT,Q,NSW,V'.split(',')]
V
c1 = Constraint('SA!=WA')
c2 = Constraint('SA!=NT')
c3 = Constraint('SA!=Q')
c4 = Constraint('SA!=NSW')
c5 = Constraint('SA!=V')
c6 = Constraint('WA!=NT')
c7 = Constraint('NT!=Q')
c8 = Constraint('Q!=NSW')
c9 = Constraint('NSW!=V')
C = [c1,c2,c3,c4,c5,c6,c7,c8,c9]
csp = CSP(V,C)
csp.stop_on_first = True
csp.solve()
[{v.name:icolors[v.value]} for v in csp.solution]
# # Results
#
# This is a concise implementation that solves CSP models with the `gt_solve` algorithm.
# +
import copy
class Variable():
def __init__(self, name, D):
self.name = name;
self.domain = D;
self.value = D[0]
def __eq__(self, v): return v.name == self.name
def __repr__(self): return '{}={}'.format(self.name, self.value, self.domain)
class Constraint():
def __init__(self, expr): self.expression = expr
def __repr__(self): return self.expression
def eval(self, V): return eval(self.replace_all(self.expression, {v.name:v.value for v in V}))
def replace_all(self, text, dic):
for k, v in dic.items():
text = text.replace(k, str(v))
return text
class CSP():
def __init__(self, variables, constraints):
self.variables = variables
self.constraints = constraints
self.stop_on_first = False
def solve(self):
self.solution = []
self.gt_solve([], self.variables)
return self.solution[::-1]
def gt_solve(self, S, V):
if len(V) == 0: return all([c.eval(S) for c in self.constraints])
v = V.pop()
S.append(v)
for d in v.domain:
if self.solution and self.stop_on_first: return
v.value = d
if self.gt_solve(copy.deepcopy(S), copy.deepcopy(V)):
self.solution.append(S)
return False
# -
# Applying this to the map coloring problem:
# %%time
colors = {'red': 0, 'blue': 1, 'green': 2}
V = [Variable(x, list(colors.values())) for x in 'SA,WA,NT,Q,NSW,V'.split(',')]
c1 = Constraint('SA!=WA')
c2 = Constraint('SA!=NT')
c3 = Constraint('SA!=Q')
c4 = Constraint('SA!=NSW')
c5 = Constraint('SA!=V')
c6 = Constraint('WA!=NT')
c7 = Constraint('NT!=Q')
c8 = Constraint('Q!=NSW')
c9 = Constraint('NSW!=V')
C = [c1,c2,c3,c4,c5,c6,c7,c8,c9]
csp = CSP(V,C)
csp.solve()
print('There are {} solutions:'.format(len(csp.solution)))
lkup = {v:k for k,v in colors.items()}
print([[{v.name:lkup[v.value]} for v in s] for s in csp.solution])
| Notebooks/CSP (Constraint Satisfaction Problem).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: zipline
# language: python
# name: zipline
# ---
# Pipeline API是用于对资产数据进行横截面分析的强大工具。它使我们能够定义一组关于多个数据输入的计算,并一次分析大量股票。Pipeline API的一些常见用途包括:
# + 根据过滤规则选择资产
# + 根据评分函数对资产进行排名
# + 计算投资组合分配
# 首先,我们导入Pipeline类并创建一个返回空`pipeline`的函数。将`pipeline`定义放入一个函数中,可以帮助我们随着`pipeline`复杂度的增长保持组织的有序性。在Research和IDE之间传输数据管道时,这特别有用。
# +
# Pipeline class
from zipline.pipeline import Pipeline
def make_pipeline():
# Create and return an empty Pipeline
return Pipeline()
# -
# 要将输出添加到`pipeline`中,我们需要包含对数据集的引用,并指定要对这些数据执行的计算。 例如,我们将添加对`USEquityPricing`数据集中的`close`列的引用。 然后,我们可以将输出定义为该列中的最新值,如下所示:
# +
# Import Pipeline class and USEquityPricing dataset
from zipline.pipeline import Pipeline
from zipline.pipeline.data import CNEquityPricing
def make_pipeline():
# Get latest closing price
close_price = CNEquityPricing.close.latest
# Return Pipeline containing latest closing price
return Pipeline(columns={
'close_price': close_price,
})
# -
from zipline.pipeline.fundamentals import Fundamentals
# Pipeline API还提供了一些内置的计算,其中一些计算是在数据的尾随窗口上计算的。例如,以下代码导入数据列rating并将输出定义为rating的3天移动平均数:
# +
# Import Pipeline class and datasets
from zipline.pipeline import Pipeline
from zipline.pipeline.data import CNEquityPricing
# Import built-in moving average calculation
from zipline.pipeline.factors import SimpleMovingAverage
def make_pipeline():
# Get latest closing price
close_price = CNEquityPricing.close.latest
# Calculate 3 day average of bull_minus_bear scores
rating_score = SimpleMovingAverage(
inputs=[Fundamentals.rating.投资评级],
window_length=3,
)
# Return Pipeline containing close_price
# and sentiment_score
return Pipeline(columns={
'close_price': close_price,
'rating_score': rating_score,
})
# -
# ## 总体选择
# 制定策略的一个重要部分是定义投资组合中进行交易的一组资产。我们通常将这组资产称为我们的交易总体。
#
# 交易范围应尽可能大,同时也要排除不适合投资组合的资产。例如,我们想排除流动性不强或难以交易的股票。`QTradableStocksUS`总体提供这个功能。可以使用`pipeline`构造函数的`screen`参数将`QTradableStocksUS`设置为交易范围:
# +
# Import Pipeline class and datasets
from zipline.pipeline import Pipeline
from zipline.pipeline.data import CNEquityPricing
# Import built-in moving average calculation
from zipline.pipeline.factors import SimpleMovingAverage
# Import built-in trading universe
from zipline.pipeline.builtin import QTradableStocksCN
def make_pipeline():
# Create a reference to our trading universe
base_universe = QTradableStocksCN()
# Get latest closing price
close_price = CNEquityPricing.close.latest
# Calculate 3 day average of bull_minus_bear scores
rating_score = SimpleMovingAverage(
inputs=[Fundamentals.rating.投资评级],
window_length=3,
)
# Return Pipeline containing close_price and
# sentiment_score that has our trading universe as screen
return Pipeline(
columns={
'close_price': close_price,
'rating_score': rating_score,
},
screen=base_universe
)
# -
# 现在`pipeline`定义已经完成,可以使用`run_pipeline`在特定的时间段内执行它。输出将是按日期和资产索引的`pandas.DataFrame`,其中的列对应于我们添加到管道定义中的输出:
# +
# Import run_pipeline method
from zipline.research import run_pipeline
# Execute pipeline created by make_pipeline
# between start_date and end_date
pipeline_output = run_pipeline(make_pipeline(), '2017-01-01', '2018-04-30')
# Display last 10 rows
pipeline_output.tail(10)
# -
# 在下一课中,我们将使用正式确定的策略,以选择要交易的资产。 然后,我们将使用因子分析工具来评估我们策略对历史数据的预测能力。
| quantopian/learn/getting-started/lesson3_Pipeline_API.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Python para executável em programas mais complexos
#
# ### Objetivo:
#
# Muitas vezes nossos códigos puxam informações de outros arquivos ou, no caso de webscraping, usam outros arquivos como o chromedriver.exe para funcionar.
#
# Nesses casos, precisamos não só tomar alguns cuidados, mas também adaptar o nosso código para funcionar.
#
# ### O que usaremos:
#
# - auto-py-to-exe para transformar o arquivo python em executável
# - pathlib ou os para adaptar todos os "caminhos dos arquivos"
# - Alternativamente, podemos usar o tkinter para permitir a gente escolher manualmente o arquivo, independente do computador que vamos rodar o programa
#
# Vamos ver como isso funciona na prática
# +
#PEGAR LINKS DO YOUTUBE
#importar bibliotecas
import time, urllib
from IPython.display import display
from selenium import webdriver
import pandas as pd
import numpy as np
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import StaleElementReferenceException
from tkinter import *
import tkinter.filedialog
from tkinter import messagebox
# -
# # Lendo arquivo CSV e encontrando no Computador
# +
#ler csv
root= Tk()
arquivo = tkinter.filedialog.askopenfilename(title = "Selecione o Arquivo csv com Canais e Keywords")
root.destroy()
buscas_df = pd.read_csv(arquivo, encoding = 'ISO-8859-1', sep=';')
display(buscas_df.head())
# -
# # Abrindo youtube e acessando o canal
# +
buscas_canais = buscas_df['canais'].unique()
# ler videos de todas as buscas
driver = webdriver.Chrome()
hrefs = []
delay = 5
# pegando os itens dos canais
for canal in buscas_canais:
if canal is np.nan:
break
hrefs.append(canal)
driver.get(canal)
myElem = WebDriverWait(driver, delay).until(EC.presence_of_element_located((By.CLASS_NAME, 'tp-yt-paper-tab')))
time.sleep(2)
tab = driver.find_elements(By.CLASS_NAME, 'tp-yt-paper-tab')[1].click()
time.sleep(2)
altura = 0
nova_altura = 1
while nova_altura > altura:
altura = driver.execute_script("return document.documentElement.scrollHeight")
driver.execute_script("window.scrollTo(0, " + str(altura) + ");")
time.sleep(3)
nova_altura = driver.execute_script("return document.documentElement.scrollHeight")
videos = driver.find_elements(By.ID, 'thumbnail')
try:
for video in videos:
meu_link = video.get_attribute('href')
if meu_link:
if not 'googleadservices' in meu_link:
hrefs.append(meu_link)
except StaleElementReferenceException:
time.sleep(2)
videos = driver.find_elements(By.ID, 'thumbnail')
for video in videos:
meu_link = video.get_attribute('href')
if meu_link:
if not 'googleadservices' in meu_link:
hrefs.append(meu_link)
print('Pegamos {} vídeos do Canal {}'.format(len(videos), canal))
driver.quit()
# -
# # Gerando arquivo CVS
#salvando o resultado em um csv
hrefs_df = pd.DataFrame(hrefs)
hrefs_df.to_csv(r'Canais Prontos.csv', sep=',', encoding='utf-8')
root= Tk()
messagebox.showinfo("Programa Finalizado com Sucesso", "Seu arquivo csv foi gerado com sucesso na pasta do Programa")
root.destroy()
| API e JSON/.ipynb_checkpoints/System .exe-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: machine-learning-challenges
# language: python
# name: machine-learning-challenges
# ---
# # Computer Vision: Plants Classification
# This dataset is based on the **Plant Seedlings Dataset**, which contains images of approximately 960 unique plants belonging to 12 species at several growth stages, with a resolution of about 10 pixels per mm of annotated RGB images.
#
# The dataset includes the following species:
#
#
# |English |Latin |EPPO|
# |:-----------|:-------------------|:---|
# |Maize |Zea mays L. |ZEAMX|
# |Common wheat|Triticum aestivum L.|TRZAX|
# |Sugar beet|Beta vulgaris var. altissima|BEAVA|
# |Scentless Mayweed|Matricaria perforata Mérat|MATIN|
# |Common Chickweed|Stellaria media|STEME|
# |Shepherd’s Purse|Capsella bursa-pastoris|CAPBP|
# |Cleavers|Galium aparine L.|GALAP|
# |Charlock|Sinapis arvensis L.|SINAR|
# |Fat Hen|Chenopodium album L.|CHEAL|
# |Small-flowered Cranesbill|Geranium pusillum|GERSS|
# |Black-grass|Alopecurus myosuroides|ALOMY|
# |Loose Silky-bent|Apera spica-venti|APESV|
#
# Your mission, should you choose to accept it... consist on:
# - create a model that classifies the full range of categories as accuretely as possible.
# - save the model for further analysis.
#
# If you're caught of killed during the mission, the dataen team will disavow any knowledge of your actions. This notebook will not self-destruct (disappointing right?). Good luck!
#
# +
# %matplotlib inline
import os
import sys
from time import time
import pickle
import pathlib
import itertools
from tqdm import tqdm_notebook as tqdm
import numpy as np
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
np.random.seed(42)
# -
# ## 1. Data Preparation
# ### 1.1 Load data
# +
PLANT_CLASSES = ['Black-grass', 'Charlock', 'Cleavers', 'Common Chickweed', 'Common wheat',
'Fat Hen', 'Loose Silky-bent', 'Maize', 'Scentless Mayweed',
'Shepherds Purse', 'Small-flowered Cranesbill', 'Sugar beet']
CLASSES_DICT_NAMES = {name: k for k, name in zip(range(len(PLANT_CLASSES)), PLANT_CLASSES)}
CLASSES_DICT_NUM = {k: name for k, name in zip(range(len(PLANT_CLASSES)), PLANT_CLASSES)}
DF_PART1 = "./data/plants_part1.gz"
DF_PART2 = "./data/plants_part2.gz"
DF_PART3 = "./data/plants_part3.gz"
RESHAPE_SIZE = (65, 65, 3)
RANDOM_STATE = 42
CLASSES_DICT_NAMES
# -
df_p1 = pd.read_csv(DF_PART1)
df_p2 = pd.read_csv(DF_PART2)
df_p3 = pd.read_csv(DF_PART3)
df = pd.concat([df_p1, df_p2, df_p3], axis=0)
df.shape
df.columns[:10]
df.dropna(axis = 0, inplace=True)
# We can ignore the column 'label'. The column class is the entry we must use for our classification.
#
# The rest of the columns belong to the image and we must reshape those values into 65x65x3 to obtain the images.
df_labels = df[['class']]
df.drop(labels=['label', 'class'], axis=1, inplace=True)
df_images = df.values.reshape(-1, *RESHAPE_SIZE)
# # Analysis
# Classes are quite heavily skewed so may well be a good idea to either:
#
# - subsample now and randomly choose roughly 200 from each class (lose a lot of data from already limited amount)
# - subsample after applying ImageGenerator (preferable as it will keep data varied)
df_label_count= df_labels.groupby('class').agg({'class': ['count']})['class'].reset_index()
plt.figure(figsize=(12,6))
sns.barplot(x='class', y='count', data=df_label_count)
plt.xticks(rotation=90)
plt.figure(figsize=(20,20))
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(df_images[i])
plt.xlabel(df_labels['class'].tolist()[i])
plt.show()
# +
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(13, activation='softmax')
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
# -
df_labels = df_labels['class'].map(CLASSES_DICT_NAMES)
df_labels.fillna(12, inplace=True)
df_labels = df_labels.apply(int)
x_train, x_test, y_train, y_test = train_test_split(df_images, df_labels, test_size=0.2, random_state=42)
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(featurewise_center=True,
featurewise_std_normalization=True,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True)
# +
image_generator.fit(x_train)
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metric=['accuracy'])
model.fit_generator(image_generator.flow(x_train, y_train, batch_size=32),
steps_per_epoch=len(x_train) / 32,
epochs=10)
| computer_vision/jh_01_CV_Plants_base.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dependencies
# Dependencies
import pandas as pd
import re
import requests
import math
import plotly
import plotly.plotly as py
import plotly.graph_objs as go
from pprint import pprint
from config import api_key, plotly_key
from yelpapi import YelpAPI
# # Ideas
# <ul><li>Average rating vs. category
# <li>Average rating vs. price rating
# <li>Number of restaurant categories per city
# <li>Review Count vs. city
# # Useful Links
# <ul><li><a href="https://alcidanalytics.com/p/geographic-heatmap-in-python">Heat maps</a>
# # API Call
# +
# API call to Yelp API
yelp_api = YelpAPI(api_key)
# Input string for location search
input_string = input("Search query: ")
api_call = yelp_api.search_query(location=input_string, limit=50)
# API Call to Plotly
plotly.tools.set_credentials_file(username='nguyenkevint94', api_key=plotly_key)
# Delete hashtag to view the contents of api_call
# pprint(api_call)
# +
# Lists
business_names_list = []
categories_list = []
street_address_list = []
city_list = []
country_list = []
lat_list = []
lon_list = []
ratings_list = []
review_count_list = []
price_ratings = []
# Looping through each business in the call
for businesses in api_call["businesses"]:
try:
# Name
name = businesses["name"]
# print(f"Successfully found business name: {name}")
# Category
category = businesses["categories"][0]["alias"]
# print(f"Successfully found category: {category}")
# Street Address
street_address = businesses["location"]["address1"]
# print(f"Successfully found street address: {street_address}")
# City
city = businesses["location"]["city"]
# print(f"Successfully found city: {city}")
# Country
country = businesses["location"]["country"]
# print(f"Successfully found country: {country}")
# Latitude
lat = businesses["coordinates"]["latitude"]
# print(f"Successfully found latitude: {lat}")
#Longitude
lon = businesses["coordinates"]["longitude"]
# print(f"Successfully found longitude: {lon}")
# Price rating
# NOTE: Some places do not have a price rating (ie. $, $$, $$$)
price = businesses["price"]
# print(f"Successfully found price rating: {price}")
# Ratings
rating = businesses["rating"]
# print(f"Successfully found rating: {rating}")
# Review count
review_count = businesses["review_count"]
# print(f"Successfully found review counts: {review_count}")
# Appends
# Tried putting appends after each section, adding them towards the end made it work
# with no error since it passes those without a price rating
business_names_list.append(name)
categories_list.append(category)
street_address_list.append(street_address)
city_list.append(city)
country_list.append(country)
lat_list.append(lat)
lon_list.append(lon)
price_ratings.append(price)
ratings_list.append(rating)
review_count_list.append(review_count)
# print("- - - - - - - - - - - - - - - - - - - - - -")
except Exception:
pass
# -
# # Yelp DataFrame
# +
# Dictionary for DataFrame
business_details_dict = ({"Name": business_names_list,
"Category": categories_list,
"Street": street_address_list,
"City": city_list,
"Country": country_list,
"Latitude": lat_list,
"Longitude": lon_list,
"Rating": ratings_list,
"Review Count": review_count_list,
"$": price_ratings})
# Dictionary to DataFrame
yelp_df = pd.DataFrame(business_details_dict)
yelp_df.head()
# -
# # Sorting by Review Count
sorted_df_reviews = yelp_df.sort_values(by=["Review Count"], ascending=False)
sorted_df_reviews
# # Category Shares
# +
# Counting up number in each category
biz_categories = yelp_df.groupby("Category").count()
# Resetting the index to Category
biz_categories.reset_index("Category", inplace=True)
categories = biz_categories["Category"]
# Labels for each category to be used in Plotly
labels = categories
# Values for each category to be used in Plotly
category_count = biz_categories["Name"]
values = category_count
values
# Setting up arguments for Plotly pie chart
trace = go.Pie(labels=labels, values=values)
py.iplot([trace], filename= "Pie_Chart_Categories")
# -
# # Price Comparisons
# +
# Grouping price ratings ($)
price_groups = yelp_df.groupby("$").count()
price_groups.reset_index("$", inplace=True)
labels = price_groups["$"]
values = price_groups["Name"]
trace = go.Pie(labels=labels, values=values)
py.iplot([trace], filename= "Pie_Chart_Price_Categories")
# -
# # Average Review Count vs. Category
# +
# Grouping categories
average_reviews = yelp_df.groupby("Category").mean()
average_reviews.reset_index("Category")
# Gathering top 5 categories with the highest number of reviews
top_five = average_reviews.nlargest(10,"Review Count")
top_five.reset_index("Category", inplace=True)
# Category names
categories = top_five["Category"]
# Category review counts
review_count = top_five["Review Count"]
# Setting up the bar chart
trace = go.Bar(
x=categories,
y=review_count,
text=categories,
marker=dict(
color="rgb(158, 202, 225)",
line=dict(
color="rgb(8, 48,107)",
width=1.5,
)
),
opacity=0.6
)
data=[trace]
layout=go.Layout(
title="Top 10 Categories by Average Review Count",
)
# Bar chart
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename="top-10-categories-by-average-review-count")
# -
| .ipynb_checkpoints/Cuisine Analysis Workbook-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + cell_id="96995714-4524-4825-b917-e206290a6615" tags=[]
print("good morning")
# + cell_id="1576ed19-f5c7-406d-a07e-a8bb5573ffaa" tags=[]
help(float)
# + cell_id="b89dbf33-0100-44cf-8025-8316413f9ca1" tags=[]
float(10)
# + cell_id="580369c7-c988-4192-b7bc-81cfe6043ed5" tags=[]
format(float("0.1"), '0.25f')
# + cell_id="95bbf75d-fbe6-47e2-85d7-0113e7a59d2e" tags=[]
float("22.7")
# + cell_id="7fb901b8-b0c4-4ed8-9e3c-f43de4616804" tags=[]
float("22/7")
# + cell_id="9bbca888-24d5-4baf-b116-122a58dfa96c" tags=[]
from fractions import Fraction
a = Fraction('22/7')
float(a)
# + cell_id="e85bd5cc-ebe9-47da-b0f9-2ff26ce8cf1f" tags=[]
print(0.1) # Python is lying now!
# + cell_id="4bab410c-1bcd-40bf-8cfb-20ab3f81fdd6" tags=[]
format(0.125, '0.50f') # can be exactly represented
# + cell_id="0e933d09-d57e-4287-b7dd-d6d0d2df2be0" tags=[]
a = 0.1 + 0.1 + 0.1
# + cell_id="08c0361a-88f5-485f-84d3-480bf3df6273" tags=[]
b = 0.3
a == b
# + cell_id="d7627535-af28-4d62-8461-8ecdc702206d" tags=[]
format(a, '0.25f')
# + cell_id="4834c6a3-789c-484a-9fc9-3f662af1b556" tags=[]
format(b, '0.25f')
# + cell_id="eff980ab-2437-4f68-94a5-f2177561b919" tags=[]
x = 0.1
format(x, '0.25f')
# + cell_id="d676a93c-1a3d-4a8c-97fe-8bfe3739225f" tags=[]
print(x)
# + cell_id="b1a2768a-d61f-4873-9dd5-20c6b161705a" tags=[]
x = 0.125 + 0.125 + 0.125
# + cell_id="c7c43c6f-e490-41dd-943a-ff56c33ece01" tags=[]
y = 0.375
# + cell_id="d99a5940-25c4-46d2-97c2-2d0a4fae7b3b" tags=[]
x == y
# + cell_id="d5a04713-5dc7-412a-9a73-b60062926c20" tags=[]
x = 0.1 + 0.1 + 0.1
y = 0.3
x == y
# + cell_id="8546e88c-41f4-4700-9440-ab368cc6419a" tags=[]
# how to we compare floats?
round(x, 9) == round(y, 9) # this is absolute tolerance method
# + cell_id="2732956f-add9-4d6e-b00d-e1f4bdd4d3b5" tags=[]
x = 10000.001
y = 10000.002
a = 0.001
b = 0.002
# + cell_id="e11ad15e-986e-49c8-8f7d-3a3689e42778" tags=[]
round(x, 2) == round(y, 2)
# + cell_id="2db6ad1f-8184-49de-b938-0a8d8a838dca" tags=[]
round(a, 2) == round(b, 2)
# + cell_id="384a6408-5441-43f5-a7de-26eebea9f495" tags=[]
from math import isclose
# + cell_id="a52450d3-3d8d-44ee-9bc1-5a902ac1599c" tags=[]
help(isclose)
# + cell_id="f3289973-f96c-4b24-b6ad-9f074ba27789" tags=[]
x = 0.1 + 0.1 + 0.1
y = 0.3
isclose(x, y)
# + cell_id="38aa8353-b5d0-46f8-96d8-994ef7a949a0" tags=[]
x = 10000000000.01
y = 10000000000.02
isclose(x, y, rel_tol=0.01)
# + cell_id="1ce54a21-ee83-464a-9b3e-b7ebc1b347ad" tags=[]
x = 0.01
y = 0.02
isclose(x, y, rel_tol=0.01)
# + cell_id="955fe99b-6318-4489-acf9-dfcce8570447" tags=[]
x = 0.00000000000001
y = 0.00000000000002
isclose(x, y, rel_tol=0.01)
# + cell_id="c51cdf52-2264-4425-ace5-7f76a79f9503" tags=[]
isclose(x, y, rel_tol=0.01, abs_tol=0.01)
# + cell_id="14afa3c5-61b9-4c12-a653-86ea3159d71d" tags=[]
from math import trunc
# + cell_id="6e276004-714b-487d-96b7-9274fa8ad842" tags=[]
trunc(10.3), trunc(10.5), trunc(-10.9999)
# + cell_id="5b3983bc-c808-4f81-8d77-5ea052616979" tags=[]
from math import floor
floor(10.3), floor(10.5), floor(10.9)
# + cell_id="00706ad4-4715-423f-b259-b3022b55d490" tags=[]
trunc(-10.3), trunc(-10.5), trunc(-10.9999)
# + cell_id="2ff250ac-f033-487c-a1f4-347b025cfdff" tags=[]
floor(-10.3), floor(-10.5), floor(-10.9)
# + cell_id="3e31964e-dd90-4a1e-a59f-e9138fb90b65" tags=[]
from math import ceil
ceil(10.3), ceil(10.5), ceil(10.9)
# + cell_id="b418a245-b875-4d43-979a-f7ef3e812bc8" tags=[]
ceil(-10.3), ceil(-10.5), ceil(-10.9)
# + cell_id="17572cd2-f590-4053-8c07-1d811e50cf37" tags=[]
help(round)
# + cell_id="4d3fc28e-ef76-418a-9ad6-941635b93877" tags=[]
a = round(1.9)
a, type(a)
# + cell_id="5f2831dd-c1e9-4d7b-b12d-07c664e50bef" tags=[]
a = round(1.9, 0)
a, type(a)
# + cell_id="e32b3796-7110-4450-9248-cc076c4382fb" tags=[]
round(1.88888, 3), round(1.88888, 2), round(1.88888, 1), round(1.88888, 0)
# + cell_id="ef166499-493e-4109-92bb-7a50a164f1d4" tags=[]
round(888.88, 1), round(888.88, 0), round(888.88, -1), \
round(888.88, -2), round(888.88, -3), round(888.88, -4),
# + cell_id="ca88cee7-b53b-45d1-b811-12b1b167cbb3" tags=[]
round(5001, -4)
# + cell_id="04970a07-d322-4f1c-ad1c-216da6e5850f" tags=[]
round(1.25, 1)
# + cell_id="61606dc5-9260-4d66-b6f2-05ed0fbeaa34" tags=[]
format(1.25, '0.25f')
# + cell_id="cd3d7603-5605-4a12-afe5-48422eba0c3c" tags=[]
format(1.55, '0.25f')
# + cell_id="834e4894-639d-4df1-8add-32faa0e512b9" tags=[]
round(1.55, 1)
# + cell_id="f1059561-bbec-48b7-90b2-75d1d86afc87" tags=[]
round(1.35, 1)
# + cell_id="2c60be89-779e-45c8-8912-3bab98a736ad" tags=[]
round(1.25, 0), round(1.35, 0)
# + cell_id="68a9071d-1d74-49f9-9c8c-1b30c18e431c" tags=[]
round(-1.25, 1), round(-1.35, 1)
# + [markdown] cell_id="59eb9ecd-df27-4b8e-a0ff-95272b8722fb" tags=[]
# ## Decimals
# + cell_id="e4a7c04a-f4c1-40ed-888b-ee0a9df14b6c" tags=[]
import decimal
from decimal import Decimal
# + cell_id="a037dece-3608-4b50-af2d-7ca0f892e728" tags=[]
decimal.getcontext()
# + cell_id="96445266-7c7e-485b-8b5c-db0e7a308974" tags=[]
decimal.getcontext().prec
# + cell_id="df7e346f-018e-4d89-a833-2d6e1b77cf2e" tags=[]
decimal.getcontext().rounding
# + cell_id="fe8e7f90-edcf-433b-a435-944e21402aa5" tags=[]
decimal.getcontext().prec = 6
# + cell_id="3bf041cb-6eb4-46ee-a2a7-879538415eae" tags=[]
type(decimal.getcontext())
# + cell_id="f609bdc0-878c-4dd9-84af-665da1c9c8b7" tags=[]
g_ctx = decimal.getcontext()
# + cell_id="c73c21f4-39a2-4f09-8993-83ebc5fbd94b" tags=[]
g_ctx
# + cell_id="8d7b5a58-127e-438b-8b3b-0c504616e903" tags=[]
g_ctx.rounding = 'ROUND_HALF_UP' # we might do spelling mistake here.. better would be
g_ctx.rounding = decimal.ROUND_HALF_UP
# + cell_id="dc7f33b3-03a9-438c-a2c6-614d25bdb5f4" tags=[]
g_ctx
# + cell_id="5ac5d8cd-78cc-4676-8af2-faaa0dfe934a" tags=[]
g_ctx.prec = 28
g_ctx.rounding = decimal.ROUND_HALF_EVEN
# + cell_id="55639098-ba6d-455f-bc22-ed868b3d0244" tags=[]
decimal.localcontext()
# + cell_id="c9e495af-b436-499f-9cb7-248df4993099" tags=[]
type(decimal.localcontext())
# + cell_id="b6ede51e-2110-46f0-894a-18f6b0a6d7b1" tags=[]
type(decimal.getcontext())
# + cell_id="4f8ba1a9-482a-4631-917d-101983b5ab69" tags=[]
with decimal.localcontext() as ctx:
print(type(ctx))
print(ctx)
# + cell_id="e5e04ddb-f1e8-4c87-b434-f7c2ebdc2b75" tags=[]
with decimal.localcontext() as ctx:
ctx.prec = 6
ctx.rounding = decimal.ROUND_HALF_UP
print(ctx)
# later
print(decimal.getcontext())
# later
print(id(ctx) == id(decimal.getcontext()))
# + cell_id="3cea3437-5565-4208-8483-6b1245e516e4" tags=[]
x = Decimal('1.25')
y = Decimal('1.35')
# + cell_id="24d399c6-2435-4a7b-b502-7da12f96b2c7" tags=[]
with decimal.localcontext() as ctx:
ctx.prec = 6
ctx.rounding = decimal.ROUND_HALF_UP
print(round(x, 1))
print(round(y, 1))
print(round(x, 1))
print(round(y, 1)) # round_half_even
# + cell_id="f6f07123-30a0-4cdc-ae22-dab13b17443a" tags=[]
decimal.getcontext()
# + cell_id="84eed561-0d8f-4713-af77-1ad3bdcc5687" tags=[]
import decimal
from decimal import Decimal
# + cell_id="28cf8475-46a3-41df-b41d-2af9a4289000" tags=[]
help(Decimal)
# + cell_id="800c54aa-3b60-4387-b96d-430734cb24c4" tags=[]
Decimal(10)
# + cell_id="c627bb1f-0363-4c20-bbea-6de74b24697f" tags=[]
Decimal(-10)
# + cell_id="17fe46db-6481-4be3-a5f6-5ab21421a4fc" tags=[]
Decimal('10.1')
# + cell_id="9b8d1667-bcd2-4c81-9807-2d7d547c056c" tags=[]
Decimal(10.1)
# + cell_id="b17234aa-eb9e-4f1f-b609-58ba5ffdaa72" tags=[]
Decimal('-0.34')
# + cell_id="c84d6387-0349-4685-9dbf-c9c8eb84dbc5" tags=[]
# using tuples
t = (1, (3, 1, 4, 1, 5), -4) # 1 represents the sign of the number
Decimal(t)
# + cell_id="9c9c2ac6-1d29-4498-ba88-0e5a06a7db57" tags=[]
Decimal(0, (3, 1, 4, 1, 5), -4)
# + cell_id="1287cf70-df31-4315-92dc-a9e7e22245ef" tags=[]
Decimal((0, (3, 1, 4, 1, 5), -4))
# + cell_id="948abcf5-2fc2-416a-b850-998274a28e28" tags=[]
Decimal((1, (3, 1, 4, 1, 5), -4))
# + cell_id="40df20e8-1d21-4811-af4b-ec8b35be41ac" tags=[]
Decimal(0.1) == Decimal('0.1')
# + cell_id="0096bf84-4d6e-4b9b-8b78-9243040d209b" tags=[]
decimal.getcontext()
# + cell_id="91b56ffa-fb5d-43b9-a5e3-6101d3f83d46" tags=[]
decimal.getcontext().prec = 6 #doesn't effect contructor
# + cell_id="730e2191-561b-4439-80d3-42839b1e9c7d" tags=[]
a = Decimal('0.123123123123') # it will only effect arithmatic operation
# + cell_id="a1019eb1-a637-4ef7-9717-7b0e913c63c0" tags=[]
decimal.getcontext().prec = 2
# + cell_id="269c3e01-eb40-4421-a697-202d35e7d49e" tags=[]
a = Decimal('0.123123123123')
b = Decimal('0.123123123123')
# + cell_id="2a3646c5-2ab7-450d-ae06-6f3ccb7bb6ed" tags=[]
a, b
# + cell_id="9fab25cd-d169-40b4-9877-4a9399518231" tags=[]
0.123123123123 + 0.123123123123
# + cell_id="4db6b13a-cab5-4a0c-96c0-35824cfed404" tags=[]
a + b
# + cell_id="e99dbad6-f200-4d9f-9e49-3083b554c835" tags=[]
decimal.getcontext().prec = 6
print(a + b)
with decimal.localcontext() as ctx:
ctx.prec = 2 # doesn't effect precision of the contructor
c = a + b
print('c within local context: {0}'.format(c))
print('c within global context: {0}'.format(c))
# + cell_id="e99df17e-746f-42a1-ab1c-494b03d61fa5" tags=[]
# n = d * (n // d) + n % d ALWAYS SATISFIED
# + cell_id="481ce63d-1232-4ec9-be07-0dc809f2face" tags=[]
x = 10
y = 3
print(x//y, x%y)
print(divmod(x, y))
print(x == y * (x//y) + (x%y))
# + cell_id="3b3208d0-34e1-48bc-9bf2-bb2aa8719f68" tags=[]
x = Decimal(10)
y = Decimal(3)
print(x//y, x%y)
print(divmod(x, y))
print(x == y * (x//y) + (x%y))
# + cell_id="0505365f-723a-4c06-b085-dfaae909de3d" tags=[]
x = Decimal(-10)
y = Decimal(3)
print(x//y, x%y)
print(divmod(x, y))
print(x == y * (x//y) + (x%y))
# + cell_id="c73bc4bc-132a-4dab-bf5d-3bd20aa6df7f" tags=[]
x = -10
y = 3
print(x//y, x%y)
print(divmod(x, y))
print(x == y * (x//y) + (x%y))
# + cell_id="5b3c9a46-010d-44a2-954e-0a48c8b09feb" tags=[]
x = Decimal(10)
y = Decimal(-3)
print(x//y, x%y)
print(divmod(x, y))
print(x == y * (x//y) + (x%y))
# + [markdown] cell_id="838e6c64-9c0e-42c9-ba90-be82688a1485" tags=[]
# ## Other Math Function
# + cell_id="7d5a61fb-0017-4e73-b01b-8c38f4faf6a9" tags=[]
help(Decimal)
# + cell_id="472a0c94-b1cf-48d1-8aef-40ebf813debf" tags=[]
a = Decimal('1.5')
a
# + cell_id="8cd0899b-0610-451f-8ee8-6cafe06e355b" tags=[]
print(a.ln())
print(a.exp())
print(a.sqrt())
# + cell_id="27621957-e6c0-4d4d-a1d5-8045f6589bad" tags=[]
import math
# + cell_id="80408008-ba62-4b3b-899d-c50d3f14bb9b" tags=[]
math.sqrt(a) # not the same
# + cell_id="57bb15db-6966-474d-9032-e136ce0eb8a9" tags=[]
decimal.getcontext().prec = 28
x = 2
x_dec = Decimal(2)
# + cell_id="be66314d-af0c-4757-bf0d-9927e33efd63" tags=[]
root_float = math.sqrt(x)
root_mixed = math.sqrt(x_dec)
root_dec = x_dec.sqrt()
# + cell_id="5a0f2f7d-0f44-4caf-8440-c6544f7f633a" tags=[]
print(format(root_float, '1.27f'))
print(format(root_mixed, '1.27f'))
print(root_dec)
# + cell_id="0e94939a-98fe-4592-b19a-dfa1a8ed597c" tags=[]
print(format(root_float*root_float, '1.27f'))
print(format(root_mixed*root_mixed, '1.27f'))
print(root_dec* root_dec) # much closer
# + cell_id="9528f7d7-7fce-42dc-98ef-f31e5f308266" tags=[]
import sys
a = 3.1415
b = Decimal('3.1415')
# + cell_id="2977bcf7-3cf9-41b0-b2bf-0b003f950544" tags=[]
sys.getsizeof(a)
# + cell_id="140ae91a-0860-434b-8be2-cc31edd36c52" tags=[]
sys.getsizeof(b)
# + cell_id="3d856d42-22fe-48df-8357-8ce7be850298" tags=[]
import time
# how long does it takes to CREATE a float/decimal
def run_float(n = 1):
for i in range(n):
a = 3.1415
def run_decimal(n = 1):
for i in range(n):
a = Decimal('3.1415')
# + cell_id="e916b17b-a4c3-47f8-96e2-01e67bed6620" tags=[]
n = 10000000
start = time.perf_counter()
run_float(n)
end = time.perf_counter()
print ('float: ', end - start)
start = time.perf_counter()
run_decimal(n)
end = time.perf_counter()
print ('decimal: ', end - start)
# + cell_id="4ee5ac06-8982-459a-9551-d1543566c922" tags=[]
def run_float(n = 1):
a = 3.1415
for i in range(n):
a + a
def run_decimal(n = 1):
a = Decimal('3.1415')
for i in range(n):
a + a
start = time.perf_counter()
run_float(n)
end = time.perf_counter()
print ('float: ', end - start)
start = time.perf_counter()
run_decimal(n)
end = time.perf_counter()
print ('decimal: ', end - start)
# + cell_id="c90eda55-2754-43d8-b3c3-03ca5c61991f" tags=[]
import math
n = 5000000
def run_float(n = 1):
a = 3.1415
for i in range(n):
math.sqrt(a)
def run_decimal(n = 1):
a = Decimal('3.1415')
for i in range(n):
a.sqrt()
start = time.perf_counter()
run_float(n)
end = time.perf_counter()
print ('float: ', end - start)
start = time.perf_counter()
run_decimal(n)
end = time.perf_counter()
print ('decimal: ', end - start)
# + [markdown] cell_id="2e9fef80-0d3e-42f9-8df8-9e5fdb173b97" tags=[]
# ## Use decimal when you have to have extra precision
# + cell_id="027161c7-21b5-423b-a3b3-3ceb62383888" tags=[]
help(complex)
# + cell_id="fc772bfa-92d5-4731-bc2d-34361253d3d6" tags=[]
a = complex(1, 2)
b = 1 + 2j
a == b
# + cell_id="24acf38d-02e5-4e7c-a501-3bbe9c3a9a35" tags=[]
a is b # not always
# + cell_id="df5ae705-57c5-461f-9a8f-052c06a7c681" tags=[]
a.real, type(a.real)
# + cell_id="2e3d2d97-ade5-4793-973b-2f6bd8a61de2" tags=[]
a.imag, type(a.imag)
# + cell_id="4bec1756-72ab-44eb-841b-faddaf6f4e90" tags=[]
a.conjugate()
# + cell_id="49764e2f-6c69-4306-ac82-192b22693bec" tags=[]
a = 1 + 2j
b = 10 + 8j
a + b, a - b, a / b, a **2
# + cell_id="741d4b4d-9afa-4b94-8a26-aa0fc82e11aa" tags=[]
a // b # not defined
# + cell_id="a825c0cf-3aa6-4536-8b6b-55b1238ad280" tags=[]
a % b
# + cell_id="9344413c-8102-4166-a153-4a8f837df883" tags=[]
a = 0.1j
# + cell_id="c237c2b0-38fe-4a12-9cae-dceb53c84fe4" tags=[]
format(a.imag, '0.25f')
# + cell_id="066f1ded-f9b9-408e-8053-7bf24fea33bf" tags=[]
a + a + a == 0.3j
# + cell_id="0b403edb-fd0c-4f76-8a57-5af690fd90e4" tags=[]
import cmath
# + cell_id="ffa27871-43d4-4c5b-ad6d-db706db4ba1b" tags=[]
type(cmath.pi)
# + cell_id="3f90e21d-8dc3-4931-902a-46096f2034f8" tags=[]
a = 1 + 2j
math.sqrt(a)
# + cell_id="86fd3ff1-c38e-4355-8cf5-ad3d018c50d3" tags=[]
cmath.sqrt(a)
# + cell_id="278a8e7b-3264-469d-b32d-20166393242c" tags=[]
cmath.phase(a)
# + cell_id="123a240f-64e5-4cf3-88e2-38398de1001c" tags=[]
a = 1 + 1j # 45 degree and length should be sqrt 2
# + cell_id="5adace82-4433-4244-b87c-75689f100883" tags=[]
cmath.phase(a)
# + cell_id="c12284d6-1c61-42ec-a4b6-19a45b4295eb" tags=[]
cmath.pi/4
# + cell_id="033ba328-c64c-410c-bdbe-6c43e3c86c7b" tags=[]
abs(a)
# + cell_id="032c7358-6b25-4df7-b695-a0d6fa3922a8" tags=[]
cmath.rect(math.sqrt(2), math.pi/4)
# + [markdown] cell_id="e74559e2-3ca3-4456-8221-85a8b87c47eb" tags=[]
# 
# + cell_id="819db1fd-0e35-43c4-88c3-979df92d794d" tags=[]
RHS = cmath.exp(complex(0, math.pi)) + 1
RHS
# + cell_id="5ed002fd-14ab-4b46-891d-7532e519a5dd" tags=[]
cmath.isclose(RHS, 0)
# + cell_id="477c432a-a244-4721-99a3-dbc09f7b5375" tags=[]
help(cmath.isclose) # isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)
# + cell_id="703477a1-62f9-47d5-8b96-47b71f1764a6" tags=[]
cmath.isclose(RHS, 0, abs_tol=0.00001)
# + cell_id="0876f45c-3968-4b98-8b24-c07aa3af0a4e" tags=[]
bool(1), bool(0)
# + cell_id="ea0d4004-283e-49d6-9518-b79cc7176541" tags=[]
bool(-1)
# + cell_id="1d472da7-22c0-4f3b-b8d1-1b5a21346a47" tags=[]
bool('')
# + cell_id="1836f672-3c39-4028-8a20-3db84636efbe" tags=[]
a = []
bool(a)
# + cell_id="1bd33718-548c-47ce-88ce-9c0a43f99efc" tags=[]
a.__len__()
# + cell_id="f06fa2e7-5082-4795-b727-e2399f557b6a" tags=[]
bool(0.0), bool(0+ 0j)
# + cell_id="1b7bafc1-645f-4bae-8e0b-003ea1719505" tags=[]
from decimal import Decimal
from fractions import Fraction
# + cell_id="3cbe9a05-13e4-4396-bd70-2416bd6028b5" tags=[]
bool(Fraction(0, 1)), bool(Decimal('0.0'))
# + cell_id="8b1bbcf9-d3de-4117-9538-b07eafdef525" tags=[]
bool(10.5), bool(1j), bool(Fraction(1, 2)), bool(Decimal('10.5'))
# + cell_id="a81fa8a4-fd4d-4d85-a9f9-4fe2d054f3e9" tags=[]
format(0.1-0.1, '0.20f')
# + cell_id="ffa026e4-4dfe-4c4d-a11e-601a624d52f4" tags=[]
bool(0.1 - 0.1)
# + cell_id="df61ae9c-d32a-4b06-bc86-a5189ad22c4e" tags=[]
a = []
b = ''
c = ()
bool(a), bool(b), bool(c)
# + cell_id="ab89d459-82ca-42d7-9310-80a99590f3bf" tags=[]
a = {}
b = set()
bool(a), bool(b)
# + cell_id="bb777ad8-07da-48fa-8a77-26a4418723fe" tags=[]
bool(None)
# + cell_id="6474ddfb-6bd5-401a-b9d3-5fa7e3436b81" tags=[]
a = [1, 2, 3] # we want to do something with a only if it exists and is non empty
# + cell_id="a8e6719c-d96a-4746-a482-5b13a4851db8" tags=[]
# normally we do
if a is not None and len(a) > 0:
print(a[0])
else:
print('Nothing to be done')
# + cell_id="f41fcecf-75db-4649-9e6a-29eb15401b11" tags=[]
# alternatively
# a = [1, 2, 3]
# a = None
a = []
if bool(a):
print(a[0])
else:
print('Nothing to be done')
# + cell_id="b94a1964-e5fb-45ea-8315-161db97c6f52" tags=[]
# Danger
a = None
# normally we do
if len(a) and a is not None> 0: #not short circuiting
print(a[0])
else:
print('Nothing to be done')
# + cell_id="00b8a221-fbe2-406a-ae1d-f93b508985a2" tags=[]
True or True and False
# + cell_id="c028811e-23b6-477b-857a-79c6c0b4b613" tags=[]
True or (True and False)
# + cell_id="01f755ea-b071-4920-be52-4e4bf34bf635" tags=[]
(True or True) and False # always use brackets
# + cell_id="2188caf4-3dd2-4119-9818-11451887e1f9" tags=[]
a = 10
b = 2
if a/b > 2:
print('a is at least twice b')
# + cell_id="e16c2277-04a5-4796-9822-737989d2b571" tags=[]
# what if b = 0?
a = 10
b = 0
if a/b > 2:
print('a is at least twice b')
# + cell_id="e98864aa-78ef-4591-a20b-df8c8691fc72" tags=[]
# alternatively
a = 10
b = 0
if b > 0:
if a/b > 2:
print('a is at least twice b')
# + cell_id="5a98f21b-8579-4b36-adf8-9ca59791ebef" tags=[]
# alternatively
a = 10
b = 0
if b > 0 and a/b > 2:
print('a is at least twice b')
# + cell_id="e76bcb3b-ff8a-46bd-b3e8-12f9298e7557" tags=[]
# problem
a = 10
b = None
if b > 0 and a/b > 2:
print('a is at least twice b')
# + cell_id="3d5e7cf8-a56b-4a99-a9c7-545765c4c5d0" tags=[]
# pythonish and best
a = 10
# b = 0
b = None
if b and a/b > 2:
print('a is at least twice b')
# + cell_id="5987844c-a005-4221-bcab-e3fa35245654" tags=[]
import string
help(string)
# + cell_id="d4130c5b-7e48-481b-81cb-0fa8d0c81b98" tags=[]
a = 'c'
a in string.ascii_uppercase
# + cell_id="6f7ba623-4221-4199-9222-4447b183dec6" tags=[]
string.ascii_letters, string.digits, string.ascii_letters
# + cell_id="4f1bdbf0-82bb-493f-a953-9331125ada32" tags=[]
# something that people do a lot
name = 'Bob'
if name[0] in string.digits:
print('Name cannot start with a digit')
# + cell_id="c5c6d754-5794-4fe8-8b0d-375c4de5fb74" tags=[]
name = '1'
if len(name)> 0 and name[0] in string.digits:
print('Name cannot start with a digit')
# + cell_id="51722325-cffb-4208-89d2-246c528fa40f" tags=[]
name = ''
if len(name) and name[0] in string.digits:
print('Name cannot start with a digit')
# + cell_id="428362a6-2da5-44dc-8de1-55d025a79f06" tags=[]
name = ''
if bool(name) and name[0] in string.digits:
print('Name cannot start with a digit')
# + cell_id="1d896372-b256-4eac-a629-8b72abedd855" tags=[]
'a' or [1, 2]
# + cell_id="a2ce1ec1-895e-48fc-8fc7-08cc2a9a3b16" tags=[]
'' or [1, 2]
# + cell_id="020e45ec-381b-486a-86e4-d2580f4988a4" tags=[]
'unfortunate ai crash in Kerala' or 1/0
# + cell_id="2921202c-a319-47e8-821c-fe10a4f27659" tags=[]
0 or 1/0
# + cell_id="92031193-d2df-42e5-9c17-70981e1d786c" tags=[]
s1 = None # can be coming from database
s2 = ''
s3 = 'abc'
# + cell_id="dbe44873-5686-48a0-a33a-7fa574327768" tags=[]
s1 = s1 or 'n/a'
s2 = s2 or 'n/a'
s3 = s3 or 'n/a'
s1, s2, s3
# + cell_id="12ed6af6-aaed-498d-9609-e56f8f8ee9a2" tags=[]
[] or [0]
# + cell_id="a401500a-64e2-433b-9698-21ff83151927" tags=[]
None or [0]
# + cell_id="5f90ce63-aba8-4082-852b-785d8b20807a" tags=[]
print(None and 100)
# + cell_id="f05c8e6e-ae79-41a7-b796-b1e08c298b98" tags=[]
None and 100
# + cell_id="5016a80f-6880-433c-a74f-1cb3b0963586" tags=[]
[] and [0]
# + cell_id="eda81b49-ee2c-49b3-8762-6135c403a589" tags=[]
1 and []
# + cell_id="82524693-d69e-47e1-a90b-adf9e110cc5e" tags=[]
[] and 1/0
# + cell_id="c7738f89-a999-4bce-a2c2-6bc454667e5a" tags=[]
0 and 1/0
# + cell_id="bd3dc501-a0ad-485c-8aa7-0cde5f820deb" tags=[]
0 or 1/0
# + cell_id="97c0d229-4152-4bd3-8273-258109d89577" tags=[]
a = 2
b = 0
a/b
# + cell_id="fbb94cc3-455d-415a-9273-c1270da665ff" tags=[]
#a/b in general , but return 0 when b is zero
a = 2
b = 0
if b == 0:
print(0)
else:
print(a/b)
# + cell_id="f20067ef-0f4c-4e5c-adbf-1b5119ffab87" tags=[]
a = 2
b = 0
print(b and a/b)
# + cell_id="a29d661b-8abb-48c8-9f4e-54c9e599aec9" tags=[]
s1 = None # can be coming from database
s2 = ''
s3 = 'abc'
# + cell_id="370423b7-7d1f-4975-b9aa-c7c2c13eb582" tags=[]
s1[0], s2[0], s3[0]
# + cell_id="e67b795b-a324-47c4-8f33-af78e3dbcf4a" tags=[]
s1 and s1[0], s2 and s2[0], s3 and s3[0]
# + cell_id="07140cad-1659-4a4a-b268-012d54684b56" tags=[]
s1 and s1[0] or '', s2 and s2[0], s3 and s3[0]
# + cell_id="6910c823-0531-43b2-aa88-ed78de87944d" tags=[]
not bool('abc')
# + cell_id="d1829e99-d93e-4f0f-a7e9-fad4587851d1" tags=[]
not ''
# + cell_id="2340b4e1-ed5c-4435-9d61-ba555eb81689" tags=[]
[1, 2] is [1, 2]
# + cell_id="5648a539-783d-4bcb-b953-23d8e3e0878d" tags=[]
'a' in 'this is a test'
# + cell_id="35770943-e00c-4ea3-8d04-d59b4286b4e4" tags=[]
3 in [1, 2, 3]
# + cell_id="b0ed6e6d-6963-47a7-a2c0-e8dc19de0e89" tags=[]
'key1' in {'key1': 1}
# + cell_id="899caaf1-f0a5-4294-9868-b12fa6f4300e" tags=[]
1 in {'key1': 1} # only check keys
# + cell_id="684dec80-fe00-4e75-9962-98d5e2f5d607" tags=[]
# all numberic types except complex numbers support comparson
# + cell_id="60ab46eb-ee86-49ca-a18a-657fc35136e7" tags=[]
1 + 1j < 3 + 3j
# + cell_id="0e2b3f9c-1aeb-4d3b-86e4-8764941f4d50" tags=[]
from decimal import Decimal
from fractions import Fraction
# + cell_id="9752f56b-f189-471b-a5eb-cfd5753acd05" tags=[]
4 < Decimal('10.5')
# + cell_id="dbd4f9b4-53bc-4438-9782-2dc8cee01686" tags=[]
Fraction(2, 3) < Decimal('0.5')
# + cell_id="480c2b95-a8ba-47d4-a0dd-661036cc3c36" tags=[]
4 == 4 + 0j
# + cell_id="249ab17c-8332-4b8c-99ff-20b5ecbb9dd7" tags=[]
True == Fraction(2, 2)
# + cell_id="74116b67-01ac-4e23-8004-2ccb453f7a00" tags=[]
True < Fraction(3, 2)
# + cell_id="56bc9e8e-6ba8-4115-9058-68dc0654d426" tags=[]
1 < 2 and 2 < 3
# + cell_id="215c9df3-bf90-4835-8162-ac1020d02d40" tags=[]
3 < 2 < 1/0
# + cell_id="7a8451e6-2b89-40a4-bcb7-8797d2ca537b" tags=[]
3 < 4 < 1/0 # 3 is not less than 4
# + cell_id="ac905a9b-8fee-484a-be79-c3b061aee38d" tags=[]
1 < 2 > -5
# + cell_id="24037d20-e26c-4ded-bf4e-23b038ae4871" tags=[]
1 < 2 > -5 == Decimal('-5.0')
# + cell_id="6c05a347-a6d7-44a6-8626-e0738315ce83" tags=[]
'A' < 'a' < 'z' > 'Z'
# + cell_id="0e27446c-06a8-45c7-8ab2-3d3a4c1ce0dc" tags=[]
string.ascii_letters
# + cell_id="d2262189-02bc-492b-818f-35926970e1c2" tags=[]
g = 1
g or f
# + cell_id="43adbb8c-c545-497a-8442-923e67a381d9" tags=[]
| Session04_Numeric Types II/notebooks/Session04_ClassRoom.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Captmoonshot/DS-Unit-4-Spring-4-Deep-Learning/blob/master/LS_DS_444_AGI_and_The_Future.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_IizNKWLomoA" colab_type="text"
# # Lambda School Data Science - Artificial General Intelligence and The Future
#
# 
# + [markdown] id="moj-4gr89pum" colab_type="text"
# # Lecture
# + [markdown] id="0EZdBzC6pvV9" colab_type="text"
# ## Defining Intelligence
# + [markdown] id="t9Y6I1aO9uCz" colab_type="text"
# A straightforward definition of Artificial Intelligence would simply be "intelligence, created from technology rather than biology." But that simply raises the question - what is *intelligence*?
#
# In the early history of computers, this seemed like an easier question. Intelligence meant solving tricky problems - things that took time and mental effort for a human to figure out.
#
# Defined that way, computers have made a litany of intelligent achievements over the years:
# - Arithmetic
# - Logic
# - Chess
# - Go
# - StarCraft
# - Mathematical proofs
# - Understanding natural language
# - Generating natural language
# - Understanding images
# - Generating images
# - Making medical diagnoses
# - Fitting and *optimizing* ML models
#
# And many more - every time you fit a simple regression, you're facilitating an act of artificial intelligence. You're writing code that will (hopefully) understand and generalize based on data, giving a "human-like" ability to intuit and predict something.
# + [markdown] id="vXdC1uCC91ID" colab_type="text"
# ## "General" Intelligence - a moving target
#
# But, somehow, that isn't what most people *really* mean when they talk about AI.
#
# 
#
# Somewhere that word "general" snuck in, and now we're concerned about "Artificial General Intelligence." So, what is that?
#
# 
#
# The inspiration is likely characters such as the above, but that's not a definition. Intuitively the claim is "computers that can be thrown in a variety of environments and learn without guidance", but another good definition (based on how people use the term) may simply be "whatever we haven't figured out how to get computers to do yet."
#
# Repeatedly, claims are made about tasks that will require a "true AI" to achieve. Then, when those tasks are completed, the bar is moved, and "true AI" is somehow always a bit further off.
# + [markdown] id="IJV_2Ozk5LLV" colab_type="text"
# ## AI - Hype versus Value
#
# Hot off the presses! [Google launches an end-to-end AI platform](https://techcrunch.com/2019/04/10/google-expands-its-ai-services/)!
#
# ...
#
# What does that mean? Well, it might mean a lot, but it's a little unclear what. Some selected [Hacker News](https://news.ycombinator.com/item?id=19626275) comments:
#
# > This platform focuses not on the this-AI-is-magic-and-can-solve-everything like many AI SaaS startups announced on Hacker News, but focuses on how to actually integrate this AI into production workflows, which is something I wish was discussed more often in AI. -- minimaxir
#
# > Looks like Google is taking over Cloud (from AWS) for AI by building an ecosystem and building tools for non Data scientists - consumer level product. Surely IBM can do similar thing with their recent Redhat acquisition, but will they ? -- amrrs
#
# > I work in building and deploying production ML/AI models but I'm having a lot of trouble cutting through the marketing jargon in this article and on Google's website as well. Can someone explain what this does in engineering terms? How does this differ from something like AWS Sagemaker? -- chibg10
#
# > This will make a bunch of startup's life really hard. I think it makes it harder to justify investing in your own ML pipeline or even building your own models for many use cases.
#
# One thing it definitely means - AI is a hot keyword, and people making hiring and other corporate decisions will be on the look out for it, even if they're not sure what it is.
#
# So - yes, you *do* know AI. AI is a real thing, and you are capable of using "artificial" technology to bring about real *intelligence* and insight.
#
# Do you know how to make an intelligent anthropomorphic android? No - and nobody else does yet, either. And that's OK. There's still lots of cool advances and things to learn and build.
# + [markdown] id="MpSkJFuIkJeU" colab_type="text"
# ## Automation, for good and ill
#
# It is worth spending a moment considering the double-edged sword that is automation. This story did not begin with artificial intelligence, or even statistics or mathematics - it began when the first tool inventor figured out how to make something clever like a lever or a wheel, and use it to reduce the amount of labor needed to achieve some task.
#
# In the modern day we talk about automation, but in practice most technology is best considered as a *productivity multiplier* - all businesses still need at least *some* humans around, if nothing else to make policy decisions and collect profit. But the productivity of each individual person can be greatly enhanced through the use of technology.
#
# Consider farming - formerly a signification source of employment (and also small family owned farms), technology has tranformed it into a large scale industry where a handful of people produce as much as many more did before. This progression has happened in many areas - fortunately, it is usually accompanied by job growth and opportunity as new markets and services are created by technology as well.
#
# So, is it different now? Maybe - "history will say" is the only safe stance. But we are automating work at an accelerating rate, and it's unclear where all this growth is going and where the opportunities will be. There's a pretty good bet that it'll involve computers and data - and that's probably a large part of why you're here!
#
# The purpose of this section is not to convince you of anything - it is just to make you think. As a Data Scientist, you will have an outsized impact on society, and it is your responsibility to consider that impact and what you want to do with it.
#
# **Important caveat** - think and engage with society, *but* strive to not be strident or unduly certain when you do so. Broadcasting political beliefs, especially while on the job market, usually closes more doors than it opens. So, consider perspectives, and encourage dialogue - don't just (re)broadcast outrage at the latest injustice.
# + [markdown] id="_vXHDbNnzGZz" colab_type="text"
# ## AutoML - taking our own jobs
#
# Us Data Scientists are not immune to automation. Behold, yet another voyage with the RMS Titanic 🚢:
# + [markdown] id="YN4O5Ikxy2g8" colab_type="text"
# 
# + [markdown] id="iy6RIQn9zKhp" colab_type="text"
# ### Using AutoML on some data you've probably seen before
#
# Let's start with [automl-gs](https://github.com/minimaxir/automl-gs), a very new library that just works directly from csv.
# + id="GkJUFfsgnqr_" colab_type="code" outputId="3ef0ee0e-c795-419a-f645-a5bfa383c288" colab={"base_uri": "https://localhost:8080/", "height": 496}
# !pip install automl_gs
# + id="QsGIh584kH3A" colab_type="code" outputId="5e90b281-6e33-408f-cb7f-51b8dc74f0e2" colab={"base_uri": "https://localhost:8080/", "height": 289}
# !wget https://github.com/ryanleeallred/datasets/raw/master/car_regression.csv
# + id="mR0ba-7ikJCd" colab_type="code" outputId="8a1f3b0d-01da-4549-e803-7b3e31a3e908" colab={"base_uri": "https://localhost:8080/", "height": 187}
# !head car_regression.csv
# + id="JwEChFqKkLdW" colab_type="code" outputId="7b4fc4eb-0ec9-4d05-c0d6-fbffdadd95b3" colab={"base_uri": "https://localhost:8080/", "height": 1476}
from automl_gs import automl_grid_search
automl_grid_search('car_regression.csv', 'price')
# + [markdown] id="NSTNAnT9dS6J" colab_type="text"
# Uh oh, what happened? There is an [open issue](https://github.com/minimaxir/automl-gs/issues/14) which suggests running via the command line `automl_gs` tool rather than the Python module to get better error messages for debugging.
# + id="svF0TnE0keaj" colab_type="code" outputId="33dbd241-31a1-4b09-f5c7-adf7bbc7b6f7" colab={"base_uri": "https://localhost:8080/", "height": 1146}
# !automl_gs car_regression.csv price
# + [markdown] id="oRqYSY4yde1L" colab_type="text"
# So, the real issue is in some intermediary step - let's see if we can get rid of engType.
# + id="N0sHkyabk9Pj" colab_type="code" outputId="26a91983-c1b8-4003-cfd4-6978516828fa" colab={"base_uri": "https://localhost:8080/", "height": 1505}
automl_grid_search('car_regression.csv', 'price', col_types={
'engType': 'ignore'
})
# + [markdown] id="W9y2xI5ShN4n" colab_type="text"
# It gets further, but is perhaps a bit too bleeding edge for us. Let's try [TPOT](https://github.com/EpistasisLab/tpot).
# + id="xltSPWmMhQod" colab_type="code" outputId="78d84fd7-79cc-487f-d72f-d27f1bb812dd" colab={"base_uri": "https://localhost:8080/", "height": 598}
# !pip install tpot
# + id="GhI7BzgmhWMy" colab_type="code" outputId="7ca1495b-045e-43bf-fc8d-c9ee7e7a4389" colab={"base_uri": "https://localhost:8080/", "height": 204}
import pandas as pd
from tpot import TPOTRegressor
df = pd.read_csv('car_regression.csv')
df.head()
# + id="XED4cuyimEsP" colab_type="code" outputId="91cb31be-32cc-4cea-c8b0-95fc995192e1" colab={"base_uri": "https://localhost:8080/", "height": 297}
df.describe()
# + id="SRdeEEbomGQ6" colab_type="code" colab={}
from sklearn.model_selection import train_test_split
X = df.drop('price', axis=1).values
X_train, X_test, y_train, y_test = train_test_split(
X, df['price'].values, train_size=0.75, test_size=0.25)
# + id="p5dAYY5VmuEc" colab_type="code" outputId="2aef9c3a-985f-4553-8511-ef088da6e4f7" colab={"base_uri": "https://localhost:8080/", "height": 207}
# %%time
tpot = TPOTRegressor(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
# + id="ma_BH3rpqLFA" colab_type="code" outputId="63602cc4-48a2-461a-d922-4ca2ac354f67" colab={"base_uri": "https://localhost:8080/", "height": 52}
tpot.predict(X_test)
# + id="nTSYqr_dqdNb" colab_type="code" outputId="e04e2b8d-f6f2-4c1d-a452-63b36fb7a578" colab={"base_uri": "https://localhost:8080/", "height": 34}
y_test
# + [markdown] id="aVijM-bCd6Xh" colab_type="text"
# It works - but it looks like we're not quite out of a job yet.
# + [markdown] id="hYXU2HBrswcX" colab_type="text"
# ## So, is AutoML an "AGI"?
# + [markdown] id="ws30-q7PtEWE" colab_type="text"
# **No** - it's a grid search in parameter space, with some clever type inference heuristics and a slick interface.
#
# But, it *is* artificial, it *does* give intelligent results, and (like most technology) it *multiplies* productivity. It's not going to "take our jobs" - but it does mean that, in some situations, one data scientist will be able to do what formerly took several to achieve.
# + [markdown] id="glOqJQkA0bxG" colab_type="text"
# ## Is Artificial General Intelligence dangerous?
# + [markdown] id="BZrQq9D3ik6h" colab_type="text"
# 
#
# There's been much philosophizing, thought experimenting, and even some genuine advocacy and policy considerations about the impact of a "true" AGI on human society. Most of these analyses essentially consider the AGI as an unfathomable deity, thinking and moving in ways well beyond human comprehension.
#
# Consider the [paperclip maximizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer):
#
# > Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans. — <NAME>
#
# This is an example of *instrumental convergence* - the idea that, if an AGI were to pursue an unbounded goal (a natural instruction like "Maximize the health of all humans") it may push it in extremely unexpected ways (put all humans in vats of goo, to both preserve them and prevent them from disabling it, since its existence is also of value to help humans).
#
# Is this a *realistic* concern? Well, maybe eventually - but pretty obviously not an immediate one. There are many more prominent challenges involving tech and society - privacy, economic growth, equality, education - and even *if* AGI existed it's not clear how they would have the means to enact such fantastic plans. Killer robot armies make for good TV, but at some step there's likely a human with an off switch.
# + [markdown] id="ayWGQhHu1yRu" colab_type="text"
# ## Where is AI going, and where does it leave us?
# + [markdown] id="zrBqwYoziUSm" colab_type="text"
# 
#
# On the one hand, we live in a remarkable time. The explosion of technology from WWII to present has brought about countless innovations, greatly increased median life expectancy and GDP, and shows no sign of slowing down.
#
# On the other hand, the more things change the more they stay the same. Humans are still Homo sapiens, with the same brains we've had for many millenia. [Dunbar's number](https://en.wikipedia.org/wiki/Dunbar's_number) stymies our attempts to be globally considerate and aware, and at the end of the day it seems like the vast majority of our behavior is as it ever has been - just with shinier toys.
#
# So, what will happen? Will technology usher in a utopia, where automation finally relieves us all of burdensome tasks and we are free to explore science, art, and leisure? Or are we doomed to a dystopia, where increased production is also increasingly centralized and the vast majority of humanity becomes a permanent underclass in a postmodern cyberpunk world?
#
# Probably neither - both are extreme points along a continuum of possibility. But wherever we do end up, it is all but certain that AI (that is, technology generating insights and signal) will be a key part of it.
# + [markdown] id="ZpbzOQKU7Yv2" colab_type="text"
# ## And what about A*G*I?
# + [markdown] id="kB8YZxHc7gGm" colab_type="text"
# > "I think, therefore I am." -- <NAME>
#
# > "I am a strange loop." -- <NAME>
#
# Artificial General Intelligence is, as discussed, a moving target. Perhaps what we're looking for isn't intelligence, but consciousness - and specifically, consciousness *we* recognize and empathize with. Much like all parents, us humans want to foster something new in our image, and see it succeed in a way we appreciate.
#
# It's not clear if technology will ever *really* get there. The structure and approach to artificial intelligence is inherently, well, artificial - some things like neural networks are "inspired" by biology, but still very different (far fewer connections, but far faster with more data). Perhaps computers really already *are* intelligent, just not in a way we recognize.
#
# And if we ever do succeed at making our virtual progeny, we may find it bittersweet - not because they will inevitably destroy us (though they probably will outlast us), but simply because it will then lead us to wonder what is so special about us in the first place. If we can create an AGI from metal and sand, then are we not just mechanisms of a different sort?
# + [markdown] id="0lfZdD_cp1t5" colab_type="text"
# # Assignment
#
# Use either [automl-gs](https://github.com/minimaxir/automl-gs) or [TPOT](https://github.com/EpistasisLab/tpot) to solve at least two of your prior assignments, projects, or other past work (any time you fit a classification or regression model). Report the results, and compare/contrast with the results you found when you worked on it using your "human" ML approach.
#
# Note - these tools promise a lot, but the reality is that you may have to debug a bit and figure out getting your data in a format that it recognizes. Welcome to the cutting edge - at least there's still plenty of work to do!
# + id="Ltj1je1fp5rO" colab_type="code" outputId="ea761c83-8fb5-4e46-9b41-f0e1187ea67c" colab={"base_uri": "https://localhost:8080/", "height": 173}
# TODO - ✨
"""
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')
"""
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier
df = pd.read_csv('https://raw.githubusercontent.com/Captmoonshot/adult_data/master/adult_data.csv', usecols=['age', 'workclass_code', 'education_code',
'gender_code', 'occupation_code', 'income_code'])
df = df.rename({'income_code': 'target'}, axis='columns')
y = df['target'].values
X = df.drop('target', axis=1).values
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=0.25)
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
# + id="9FP54bLyCqvW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 598} outputId="12581112-7142-4bf9-a818-7b492b773e44"
# !pip install tpot
# + id="kwv5y6vpAewq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 241} outputId="eddef13d-b0ef-4981-ed2a-0fbcc9faaee6"
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier
df = pd.read_csv('https://raw.githubusercontent.com/Captmoonshot/kaggle_titanic/master/train.csv', usecols=['Survived', 'Pclass', 'Age', 'SibSp',
'Parch', 'Fare'])
df = df.rename({'Survived': 'target'}, axis=1)
y = df['target'].values
X = df.drop('target', axis=1).values
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=0.25)
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
# + id="Q6yWFlqF6-4Q" colab_type="code" colab={}
# + [markdown] id="zE4a4O7Bp5x1" colab_type="text"
# # Resources and Stretch Goals
# + [markdown] id="uT3UV3gap9H6" colab_type="text"
# Stretch goals
# - Apply AutoML to more data, including data you've not analyzed or data you're considering for project work
# - Try to work with the GPU/TPU options, and see if you can accelerate your AutoML
# - Check out other competing AutoML systems (see resources or search and share - many are cloud hosted which is why we went with this)
# - Write a blog post summarizing your experience learning Data Science at Lambda School!
#
# Resources
# - [What to expect from AutoML software](https://epistasislab.github.io/tpot/using/#what-to-expect-from-automl-software)
# - [TPOT examples](https://epistasislab.github.io/tpot/examples/)
# - [Google Cloud AutoML](https://cloud.google.com/automl/) - the Google offering in the AutoML space (also has vision, video, NLP, and translation)
# - [Microsoft AutoML](https://www.microsoft.com/en-us/research/project/automl/)
# - [AutoML.org](https://www.automl.org)
# - [Ludwig](https://uber.github.io/ludwig/) - a toolbox for deep learning that doesn't require coding, from Uber
# - [USENIX Security '18-Q: Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?](https://youtu.be/ajGX7odA87k) - a humorous but informative presentation by <NAME>, focused on security but with a consideration of data and machine learning
| LS_DS_444_AGI_and_The_Future.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Requirements
# This notebook saves the game state into an appropriate format (dataframes) to run the analysis.
# <br><br>To run this notebook, you need to record your Pommerman games in a JSON file. The JSON should have all the information that is saved in the game state. Pommerman has in-built methods that allows you to do this.
# <br>I saved the JSON as game1.json
# <br><br>Since I ran the analysis on 50 games of each mode, my loop runs from 1 to 50.
# <br>I ran 50 games for the following modes: FFA_-1, FFA_1, FFA_2, Team_-1, Team_1 and Team_2
# <br><br>Two main modes being FFA and Team
# <br>-1, 1 and 2 are the partial observability settings
# ### Import Libraries
import os
import json
import pandas as pd
# # Functions that help in saving data
# ## Board format
# The main board, bomb blast strength, bomb life, bomb moving direction and flame life are all data points in the game state that are saved in the board (grid) format
def save_board(obs):
board = {}
bomb_blast_strength = {}
bomb_life = {}
bomb_moving_direction = {}
flame_life = {}
for j in range(11):
for k in range(11):
board[(j,k)] = obs['board'][j][k]
return board
# ## General data from game state
def save_game_state(obs, ii):
game = {}
game['alive'] = obs['aliveAgents']
game['step_count'] = ii
return game
# ## Details specific to agent
def save_agent_details(obs):
agent = {}
for c, p in enumerate([10, 11, 12, 13]):
agent['p' + str(p) +'_position_x'] = obs['playerSpecificData'][c]['AgentYPosition']
agent['p' + str(p) +'_position_y'] = obs['playerSpecificData'][c]['AgentXPosition']
agent['p' + str(p) +'_blast_strength'] = obs['playerSpecificData'][c]['playerBombBlastStrength']
agent['p' + str(p) +'_can_kick'] = obs['playerSpecificData'][c]['canKick']
agent['p' + str(p) +'_ammo'] = obs['playerSpecificData'][c]['ammo']
agent['p' + str(p) +'_action'] = obs['actions'][c]
return agent
# ### Folder name where the JSONs are stored
folder = "mcts/"
# # Main Loop
# For the different modes and partial observability settings
for pp in ['ffa_-1/', 'ffa_1/', 'ffa_2/', 'team_-1/', 'team_1/', 'team_2/']:
# Appending folder path with game mode setting
path = folder+pp
print(path)
# Create new folder for analysis
os.mkdir(path+'analysis')
# Loop for 50 games
for gg in range(1, 51):
print(gg)
# Load JSON
f = open(path+'game'+str(gg)+'.json')
data = json.load(f)
board = {}
game = {}
agent = {}
g = 'Game'+str(gg-1)
# Run saver functions for each game
for i in data[g].keys():
ii = int(i[4:])
board[ii] = save_board(data[g][i])
game[ii] = save_game_state(data[g][i], ii)
agent[ii] = save_agent_details(data[g][i])
game_df = pd.DataFrame(game).transpose()
agents_df = pd.DataFrame(agent).transpose()
board_df = pd.DataFrame(board).transpose()
# Save the dataframes
os.mkdir(path+'analysis/game'+str(gg))
game_df.to_csv(path+'analysis/game'+str(gg)+'/game.csv')
agents_df.to_csv(path+'analysis/game'+str(gg)+'/agents.csv')
board_df.to_csv(path+'analysis/game'+str(gg)+'/board.csv')
| Data Analysis/JSON to DF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## In this notebook, we demostrate how to use the pre-trained Beta-3 IRT model to perform adaptive testing on a new model, and estimate its ability.
import sys; sys.path.insert(0, '..')
import sys; sys.path.insert(0, '../atml')
import os
import numpy
import pandas
import matplotlib.pyplot
import joblib
from atml.cat import Standard_CAT
from atml.measure import BS
from atml.visualisation import get_logistic_curve
import sklearn.datasets
from sklearn.ensemble import GradientBoostingClassifier
# %matplotlib inline
# ## Load the IRT model and specify the dictionary and load function of the involved datasets
beta3_mdl = joblib.load('./beta3_mdl.gz')
data_dict = {0: 'iris',
1: 'digits',
2: 'wine'}
def get_data(ref):
if ref == 'iris':
x, y = sklearn.datasets.load_iris(return_X_y=True)
elif ref == 'digits':
x, y = sklearn.datasets.load_digits(return_X_y=True)
elif ref == 'wine':
x, y = sklearn.datasets.load_wine(return_X_y=True)
return x, y
# ## Initialise the adaptive testing process with Standard_CAT(), specity the model to be tested (sklearn's GBC), and selected testing measure (Brier score).
cat_mdl = Standard_CAT(irt_mdl=beta3_mdl)
candidate_mdl = GradientBoostingClassifier()
measure = BS()
# ## Perfrom the adaptive testing with KL item information. The function will return four sequences:
# (1) the index of the selected dataset for each testing step
#
# (2) the name of the selected dataset for each testing step
#
# (3) the performance measurements for each testing step
#
# (4) the ability estimation for each testing step
selected_dataset_index, selected_dataset, measurements, ability_seq = cat_mdl.testing(mdl=candidate_mdl,
measure=measure,
item_info='fisher',
data_dict=data_dict,
get_data=get_data)
| notebooks/beta3_testing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Project: Make Sense of Census
# ### Problem Statement
# Hello!
#
# You have been hired by 'CACT' (Census Analysis and Collection Team) to help with your numpy programming skills. Your major work for today involves census record management and data analysis.
#
# #### About the Dataset
# The snapshot of the data, you will be working on:
# 
# The dataset has details of 100 people with the following 8 features
#
# | Features | Description |
# |:--------:|:-----------:|
# | age | Age of the person |
# | education-num | No. of years of education they had |
# | race | Person's race <br> KEY ==> 0 : Amer-Indian-Eskimo<br> 1 : Asian-Pac-Islander<br> 2 : Black<br> 3 : Other<br> 4 : White |
# | sex | Person's gender <br> KEY==> 0 : Female <br> 1 : Male |
# | capital-gain | Income from investment sources, apart from wages/salary |
# | capital loss | Losses from investment sources, apart from wages/salary |
# | hours-per-week | No. of hours per week the person works |
# | income | Annual Income of the person<br> KEY ==> 0 : Less than or equal to 50K<br> 1 : More than 50K |
#
# #### Why solve this project
# After completing this project, you will have a better grip on working with numpy. In this project, you will apply the following concepts:
# * Array Appending
# * Array Slicing
# * Array Filtering
# * Array Aggregation
# ### Instructions : Step 1:
# In this first task, we will load the data to a numpy array and add a new record to it.
# * The path to the data set has been stored in the variable named `path`
# * Load the dataset and store it in a variable called data using `np.genfromtxt()`
# ---
# ```
# Example of genfromtxt function
# ```
# ---
# ```sh
# data_file='file.csv' # path for the file
# data=np.genfromtxt(data_file, delimiter=",", skip_header=1)
# print("\nData: \n\n", data)
# print("\nType of data: \n\n", type(data))
# ```
# **Output:**
# ```sh
# Data:
#
# [[39. 13. 4. ... 0. 40. 0.]
# [50. 13. 4. ... 0. 13. 0.]
# [38. 9. 4. ... 0. 40. 0.]
# ...
# [48. 13. 4. ... 0. 58. 1.]
# [40. 10. 4. ... 0. 40. 0.]
# [39. 13. 4. ... 0. 50. 1.]]
#
# Type of data:
# <class 'numpy.ndarray'>
# ```
# ---
# **Note:**
# The parameter `delimiter = ","` is set because the file that we are opening has extension `CSV` (Comma Separated Values)
#
# The parameter `skip_header = 1` is set because the first row of the data (which is called header) contains string values but in our numpy array we need only integers (Remember numpy array can only store data of a single data type)
#
# ---
# Append `new_record` (given) to `data` using `np.concatenate()` and store the new array in a variable called `census`
#
# ---
# The shape of `data` should be (1000, 8) and that of `census` should be (1001,8).
#
# ---
# +
# Importing header files
import numpy as np
import warnings
warnings.filterwarnings('ignore')
#New record
new_record = [[50, 9, 4, 1, 0, 0, 40, 0]]
new_record = np.asarray(new_record)
#Reading file
data = np.genfromtxt(PATH, delimiter = ",", skip_header = 1)
#Code starts here
census = np.concatenate( (new_record, data) )
print(census.shape)
# -
# ### Step 2:
# We often associate the potential of a country based on the age distribution of the people residing there. We too want to do a simple analysis of the age distribution
#
# **Instructions :**
#
# * Create a new array called `age` by taking only age column (age is the column with index 0) of `census` array.
#
# * Find the max age and store it in a variable called `max_age`.
#
# * Find the min age and store it in a variable called `min_age`.
#
# * Find the mean of the age and store it in a variable called `age_mean`.
#
# * Find the standard deviation of the age and store it in a variable called `age_std`.
#
# * **Ponder whether based on the above statistics, would you classify the country as `young` or `old`?**
# ---
# `max_age` should be 90.
#
# `min_age` should be 17.
#
# `age_mean`, rounded off to two places, should be 38.06.
#
# `age_std`, rounded off to two places, should be13.34.
#
# ---
age = census[:,0]
max_age = age.max()
min_age = age.min()
age_mean = np.mean(age)
age_std = np.std(age)
print( max_age, min_age, age_mean, age_std )
# ### Step 3:
# The constitution of the country tries it's best to ensure that people of all races are able to live harmoniously. Let's check the country's race distribution to identify the minorities so that the government can help them.
#
# * Create four different arrays by subsetting `census` array by Race column (Race is the column with index 2) and save them in `race_0`,`race_1`, `race_2`, `race_3` and `race_4` respectively (Meaning: Store the array where `race` column has value 0 in `race_0`, so on and so forth)
# * Store the length of the above created arrays in `len_0`, `len_1`,`len_2`, `len_3` and `len_4` respectively
# * Find out which is the race with the minimum no. of citizens
# * Store the number associated with the minority race in a variable called `minority_race` (For eg: if `len(race_5)` is the minimum, store 5 in `minority_race` because that is the index of the race having the least no. of citizens )
# ---
# `minority_race` should be 3.
#
# ---
# +
race_0 = census[census[:,2] == 0]
race_1 = census[census[:,2] == 1]
race_2 = census[census[:,2] == 2]
race_3 = census[census[:,2] == 3]
race_4 = census[census[:,2] == 4]
len_0 = len(race_0)
len_1 = len(race_1)
len_2 = len(race_2)
len_3 = len(race_3)
len_4 = len(race_4)
minority_race = 0
if len_0 == min(len_0,len_1,len_2,len_3,len_4): minority_race = 0
elif len_1 == min(len_0,len_1,len_2,len_3,len_4): minority_race = 1
elif len_2 == min(len_0,len_1,len_2,len_3,len_4): minority_race = 2
elif len_3 == min(len_0,len_1,len_2,len_3,len_4): minority_race = 3
elif len_4 == min(len_0,len_1,len_2,len_3,len_4): minority_race = 4
print(minority_race)
# -
# ### Step 4:
# As per the new govt. policy, all citizens above age 60 should not be made to work more than 25 hours per week. Let us look at the data and see if that policy is followed.
#
# * Create a new subset array called `senior_citizens` by filtering `census` according to age>60 (age is the column with index 0)
# * Add all the working hours (working hours is the column with index 6) of `senior_citizens` and store it in a variable called `working_hours_sum`
# * Find the length of `senior_citizens` and store it in a variable called `senior_citizens_len`
# * Finally find the average working hours of the senior citizens by dividing `working_hours_sum` by `senior_citizens_len` and store it in a variable called `avg_working hours`.
# * Print `avg_working_hours` and see if the govt. policy is followed.
# ---
# `working_hours_sum` should be 1917.
#
# `avg_working_hours`, rounded off to two places, should be 31.43.
#
# ---
senior_citizens = census[age > 60]
working_hours_sum = np.sum( senior_citizens[:,6] )
senior_citizens_len = len(senior_citizens)
avg_working_hours = working_hours_sum/senior_citizens_len
print(working_hours_sum, avg_working_hours)
# ### Step 5:
# Our parents have repeatedly told us that we need to study well in order to get a good(read: higher-paying) job. Let's see whether the higher educated people have better pay in general.
#
# * Create two new subset arrays called `high` and `low` by filtering `census` according to education-num>10 and education-num<=10 (education-num is the column with index 1) respectively.
# * Find the mean of income column (income is the column with index 7) of `high` array and store it in `avg_pay_high`. Do the same for `low` array and store it's mean in `avg_pay_low`.
# ---
# **Note:** - Since income is a binary variable, mean() here represents the percentage of ppl having annual income higher than 50K. - You could have used `mean()` function to solve the task 'Young Country? Old Country?' as well.
#
# ---
#
# * Compare `avg_pay_high` and `avg_pay_low` and see whether there is truth in better education leads to better pay
# ---
# **Test Cases:** The correct value in `avg_pay_high` should be 0.43 when rounded upto 2 decimal places. The correct value in `avg_pay_low` should be 0.14 when rounded upto 2 decimal places.
#
# ---
high = census[ census[:,1] > 10 ]
low = census[ census[:,1] <= 10 ]
avg_pay_high = np.mean(high[:,7])
avg_pay_low = np.mean(low[:,7])
print(avg_pay_high, avg_pay_low)
| Make-Sense-of-Census/Make_Sense_of_Census.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sos
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SoS
# language: sos
# name: sos
# ---
# + [markdown] kernel="SoS"
# # Using SoS workflow system in Jupyter and from command line
# + [markdown] kernel="SoS"
# * **Difficulty level**: easy
# * **Time need to lean**: 15 minutes or less
# * **Key points**:
# * SoS steps can be developed and executed in SoS Notebook
# * SoS workflows can be embedded in Jupyter notebook
# * Magics `%run` executes workflows defined in the current cell
# * Magic `%sosrun` executes workflows defined in the entire notebook
# * Magic `%runfile` executes workflows defined in specified file
# * Command `sos` executes workflows from the command line
# + [markdown] kernel="SoS"
# ## Running SoS
# + [markdown] kernel="SoS"
# The [Running SoS section](https://vatlab.github.io/sos-docs/running.html#content) of the [SoS Homepage](https://vatlab.github.io/sos-docs/) contains all the instructions on how to install and run SoS. Briefly, you have the following options to use SoS
#
# * Try SoS using our live server [http://vatlab.github.io/sos/live](http://vatlab.github.io/sos/live).
# * Start a Jupyter notebook server from our docker image [mdabioinfo/sos-notebook](https://hub.docker.com/r/mdabioinfo/sos-notebook/).
# * Install `sos` and `sos-notebook` locally if you have a local Python (3.6 or higher) installation and a working Jupyter server with kernels of interest.
# * Check with your system administrator if you have access to an institutional JupyterHub server with SoS installed.
#
# For the purpose of this tutorial, it is good enough to use our live server [http://vatlab.github.io/sos/live](http://vatlab.github.io/sos/live). After you see the following interface, select New -> SoS to create a SoS notebook. You can also go to `examples` and open existing SoS notebooks.
# + [markdown] kernel="SoS"
# ## Using the SoS kernel
# + [markdown] kernel="SoS"
# This tutorial is written in a SoS Notebook, which consists of multiple **markdown cells** and **code cells**. With the SoS kernel, each code cell can have its own kernel. SoS Notebook allows you to use multiple kernels in a single notebook and exchange variables among live kernels. This allow you to develop scripts and analyze data in different languages.
#
# For example, the following three code cells perform a multi-language data analysis where the first cell defines a few variables, the second cell runs a bash script to convert an excel file to csv format, and the last cell uses R to read the csv file and generate a plot. Three different kernels, SoS, [bash_kernel](https://github.com/takluyver/bash_kernel), and [IRkernel](https://github.com/IRkernel/IRkernel) are used, and a `%expand` magic is used to pass filenames from the SoS kernel to other kernels.
# + kernel="SoS"
excel_file = 'data/DEG.xlsx'
csv_file = 'DEG.csv'
figure_file = 'output.pdf'
# + kernel="Bash"
# %expand
xlsx2csv {excel_file} > {csv_file}
# + kernel="R"
# %expand
data <- read.csv('{csv_file}')
pdf('{figure_file}')
plot(data$log2FoldChange, data$stat)
dev.off()
# + [markdown] kernel="R"
# <div class="bs-callout bs-callout-primary" role="alert">
# <h4>SoS is extended from Python 3.6+</h4>
# <p>The SoS workflow system extends the syntax of Python 3.6+ so <b>SoS codecells accept any Python code</b></p>
# </div>
# + [markdown] kernel="SoS"
# ## <a id="magic-run"></a> Use magic `%run` to execute the current cell as a SoS workflow
# + [markdown] kernel="R"
# Scripts in different languages can be converted to steps in SoS workflows by adding section headers in the format of
#
# ```
# [header_name]
# ```
# or
# ```
# [header_name: options]
# ```
#
# + [markdown] kernel="SoS"
# <div class="bs-callout bs-callout-primary" role="alert">
# <h4>%run</h4>
# <p> The <code>%run</code> magic execute the content of the cell as a complete SoS workflow using an external process.</p>
# </div>
# + [markdown] kernel="SoS"
# The SoS magic `%run` can be used to execute workflows defined in the current cell. For example, the following cell executes a simple `hello_world` workflow with a single `print` statement.
# + kernel="SoS"
# %run
print('This is our first hello world workflow')
# + [markdown] kernel="SoS"
# SoS starts an external `sos` process, execute the workflow and displays the output in the notebook. **The workflow is executed independently and does not share any variables in the SoS kernel**. For example, if you define a variable in the SoS kernel
# + kernel="SoS"
my_name = 'sos_in_notebook.ipynb'
# + [markdown] kernel="SoS"
# You can use the variable in another SoS cell:
# + kernel="SoS"
print(f'This notebook is named {my_name}')
# + [markdown] kernel="SoS"
# But the variable is not available in the following cell, which is executed by magic `%run` as an independent workflow unless you [define a parameter and pass the variable to it from command line](parameters.html).
# + kernel="SoS"
# %run
print(f'This notebook is named {my_name}')
# + [markdown] kernel="SoS"
# A notebook cell can contain complete workflows with multiple steps. For example, the following cell defines a SoS workflows that resembles the analysis that was performed using three different kernels. The workflow consists of a `global` section that defines variables for all steps, and two sections `plot_10` and `plot_20` that constitute two steps of workflow `plot`. The first step executes a shell script and the second step executes a `R` script. The syntax of this workflow will be discussed in details in [the next tutorial](doc/user_guide/scripts_in_sos.html).
# + kernel="SoS"
# %run
[global]
excel_file = 'data/DEG.xlsx'
csv_file = 'DEG.csv'
figure_file = 'output.pdf'
[plot_10]
run: expand=True
xlsx2csv {excel_file} > {csv_file}
[plot_20]
R: expand=True
data <- read.csv('{csv_file}')
pdf('{figure_file}')
plot(data$log2FoldChange, data$stat)
dev.off()
# + [markdown] kernel="SoS"
# ### Controlling verbosity (option `-v`)
# + [markdown] kernel="SoS"
# Magics `%run` (actually the underlying `sos` command) accepts a set of optional arguments, the easiest of which is option `-v` that controls the verbosity of output.
# + [markdown] kernel="SoS"
# <div class="bs-callout bs-callout-info" role="alert">
# <h4>The verbosity (<code>-v</code>) argument of magics <code>%run</code>, <code>%sosrun</code> and command <code>sos run</code></h4>
# <p>The verbosity argument <code>-v</code> accepts values </p>
# <ul>
# <li><code>-v 0</code>: Display no system messages except errors</li>
# <li><code>-v 1</code>: Display errors and warnings, and a text-based progress bar</li>
# <li><code>-v 2 (default)</code>: Display errors, warnings, and informational messages</li>
# <li><code>-v 3</code>: Display additional debug messages</li>
# <li><code>-v 4</code>: Display very verbose trace messages for development purposes</li>
# </ul>
# </div>
# + [markdown] kernel="SoS"
# As you have seen, the default verbosity level `-v2` displays messages to report the status of execution:
# + kernel="SoS"
# %run
print('This is our first hello world workflow')
# + [markdown] kernel="SoS"
# You can suppress these messages using option `-v1` even `-v0`
# + kernel="SoS"
# %run -v0
print('This is our first hello world workflow')
# + [markdown] kernel="SoS"
# ### Running long workflows in background *
# + [markdown] kernel="SoS"
# <div class="bs-callout bs-callout-info" role="alert">
# <h4>Execute workflow in non-blocking mode</h4>
# <p>You can execute a workflow using magics <code>%run</code>, <code>%sosrun</code>, and <code>%runfile</code> in the background by adding a <code>&</code> at the end of the magic. The workflow will be executed in a queue while you can continue to work the notebook.</p>
# </div>
# + [markdown] kernel="SoS"
# SoS Notebook usually starts a workflow and waits until the workflow is completed. If the workflow takes a long time to execute, you can send workflows to a queue in which workflows will be executed one by one while you continue to work on the notebook. A status table will be displayed for each queued workflows and log messages and results will continue to send back to SoS Notebook.
# + kernel="SoS"
# %run -v0 &
import time
for i in range(5):
print(i)
time.sleep(2)
# + [markdown] kernel="SoS"
# ## <a id="magic-sosrun"></a> Execute embedded workflows using magic `%sosrun`
# + [markdown] kernel="R"
# A SoS notebook can have multiple workflow sections defined in multiple code cells. These sections constitute the content of the **embedded SoS script** of the notebook.
# + [markdown] kernel="SoS"
# For example, the following cell defines a workflow `hello-world`. However, if you execute the cell directly via `Ctrl-Enter` or `Shift-Enter`, it does not produce any output.
# + kernel="SoS"
[hello-world]
print('hello world')
# + [markdown] kernel="R"
# <div class="bs-callout bs-callout-warning" role="alert">
# <h4>Workflow cells cannot be executed directly</h4>
# <p>A workflow cell, namely an SoS cell with a header, cannot be executed directly. Running the cells will produce no output.</p>
# </div>
# + [markdown] kernel="SoS"
# Here is the second step of the workflow defined in another workflow cell:
# + kernel="SoS"
[hello-world_2]
print('hello world again')
# + [markdown] kernel="SoS"
# <div class="bs-callout bs-callout-primary" role="alert">
# <h4>Embedded SoS script</h4>
# <p>An embed SoS script consists of SoS sections in all SoS cells of a notebook.</p>
# </div>
# + [markdown] kernel="SoS"
# The easiest way to view the embedded script of a SoS notebook is to use the `%preview --workflow` magic as follows (The option `-n` lists the script in the notebook instead of the console panel). As you can see, the embedded script consists of steps from the entire notebook.
# + kernel="SoS"
# %preview -n --workflow
# + [markdown] kernel="SoS"
# <div class="bs-callout bs-callout-primary" role="alert">
# <h4>%sosrun</h4>
# <p> The <code>%sosrun</code> magic execute workflows defined in the embedded SoS script of a notebook.</p>
# </div>
# + [markdown] kernel="SoS"
# The `%sosrun` magic can be used to execute any of the workflows defined in the notebook. For example, the following magic execute the workflow `plot` defined in the above section. Because multiple workflows are defined in this notebook (`hello_world`, and `plot`), a workflow name is required for this magic.
# + kernel="SoS"
# %sosrun hello-world
# + kernel="SoS"
# %sosrun plot
# + [markdown] kernel="SoS"
# <div class="bs-callout bs-callout-warning" role="alert">
# <h4>Warning</h4>
# <p>Workflow cells can only be executed by SoS magics <code>%run</code> and <code>%sosrun</code>. SoS will not produce any output if you execute a workflow cell directly.</p>
# </div>
# + [markdown] kernel="SoS"
# ## <a id="magic-runfile"></a> Execute external script with magic `%runfile`
# + [markdown] kernel="SoS"
# <div class="bs-callout bs-callout-primary" role="alert">
# <h4>%runfile filename</h4>
# <p> The <code>%runfile</code> execute a SoS script from specified file with specified option. Both SoS scripts (usually with extension <code>.sos</code>) and SoS notebooks (with extension <code>.ipynb</code>) are supported.</p>
# </div>
# + [markdown] kernel="SoS"
# The third magic to execute SoS workflows in SoS Notebook is to use the `%runfile` magic, which execute workflows from a specified external file. For example, instead of using magic `%sosrun`, you can execute the current notebook with magic
# + kernel="SoS"
# %runfile sos_in_notebook.ipynb plot
# + [markdown] kernel="SoS"
# ## Execute embedded workflows with command `sos`
# + [markdown] kernel="SoS"
# The `%sosrun` magic calls an external command `sos` to execute workflows defined in the notebook. Although for the sake of convenience we will use magic `%run` to execute workflows throughout this documentation, please remember that **you can execute the notebook using command `sos` from command line**.
#
# 
#
# + [markdown] kernel="SoS"
# Alternatively, you can also write the workflow in a text file (usually with extension `.sos`) and execute it with command `sos run`:
# + [markdown] kernel="SoS"
# 
#
# + [markdown] kernel="SoS"
# ## Further reading
#
# * [Inclusion of scripts](Inclusion_of_scripts.html)
# * [How to define and execute basic forward-type workflows](doc/user_guide/forward_workflow.html)
# * [Command line interface](doc/user_guide/cli.html)
| src/user_guide/sos_in_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy import stats
import math
# ## Plots for statistics: OLS, Lasso, Ridge, OLS_Lasso, OLS_Ridge, Lasso_Ridge
# +
# Generating 'fake' data
def gen_data(nobs, num_cov, m):
x_1 = np.random.normal(scale=1., size=(nobs))
x_2 = np.random.normal(scale=1., size=(nobs, num_cov))
e = np.random.normal(loc=0.0, scale=1.0, size=nobs)
y = (x_1 * m) + e
return y, x_1, x_2
# Setup test
def setup_test_params(y, x_1, x_2, a, model):
X = np.column_stack((x_1, x_2))
if model == 1:
ols = sm.OLS(y, X).fit()
return ols
elif model == 2:
lasso = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
return lasso
elif model == 3:
ridge = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
return ridge
elif model == 4:
ols = sm.OLS(y, X).fit()
lasso = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
return ols, lasso
elif model == 5:
ols = sm.OLS(y, X).fit()
ridge = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
return ols, ridge
elif model == 6:
lasso = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
ridge = sm.OLS(y, X).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
return lasso, ridge
def standardize(array):
"""divide by variance, multiple by sqrt(n)"""
return np.sqrt(len(array))*array.mean()/array.std()
# MSE
def setup_test_mse(n, k, a, m, model):
y1, x_11, x_21 = gen_data(nobs=n, num_cov=k, m=m)
X1 = np.column_stack((x_11, x_21))
y2, x_12, x_22 = gen_data(nobs=n, num_cov=k, m=m)
X2 = np.column_stack((x_12, x_22))
statistic = None
if model == 1:
ols = sm.OLS(y1, X1).fit()
statistic = (y1-ols.predict(X1))**2
elif model == 2:
lasso = sm.OLS(y1, X1).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
statistic = (y1-lasso.predict(X1))**2
elif model == 3:
ridge = sm.OLS(y1, X1).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
statistic = (y1-ridge.predict(X1))**2
elif model == 4:
ols = sm.OLS(y1, X1).fit()
ols_mse = (y1-ols.predict(X1))**2
lasso = sm.OLS(y2, X2).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
lasso_mse = (y2-lasso.predict(X2))**2
statistic = ols_mse - lasso_mse
elif model == 5:
ols = sm.OLS(y1, X1).fit()
ols_mse = (y1-ols.predict(X1))**2
ridge = sm.OLS(y2, X2).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
ridge_mse = (y2-ridge.predict(X2))**2
statistic = ols_mse - ridge_mse
elif model == 6:
lasso = sm.OLS(y1, X1).fit_regularized(method='elastic_net', alpha=a, L1_wt=1.0)
lasso_mse = (y1-lasso.predict(X1))**2
ridge = sm.OLS(y2, X2).fit_regularized(method='elastic_net', alpha=a, L1_wt=0.0)
ridge_mse = (y2-ridge.predict(X2))**2
statistic = lasso_mse - ridge_mse
return standardize(statistic)
# Calculate MSEs
def mse(lst, n, i, model):
lst_cols = ['statistic_' + str(i)]
df = pd.DataFrame(lst, columns=lst_cols)
print("Mean:", np.mean(df)[0], "Median:", np.median(df), "Mode:", stats.mode(df)[0], "Variance:", np.var(df)[0])
return plt.hist(df['statistic_'+str(i)], label='mse_'+str(i),alpha=0.5)
print(setup_test_mse(1000, 1, .1, 1, 1))
# -
# ### Varying values
# +
# Vary number of observations
def vary_obs(model):
k = 10
m = 1
a = 0.1
n = [100,250,500,1000]
for i in n:
lst = []
for j in range(1000):
results = setup_test_mse(i, k, a, m, model)
lst.append(results)
output = mse(lst, i, i, model)
plt.legend()
plt.show()
# Vary alpha levels
def vary_alpha(model):
k = 10
m = 10
a = [0,0.1,0.5,1]
n = 1000
for i in a:
lst = []
for j in range(1000):
results = setup_test_mse(n, k, i, m, model)
lst.append(results)
output = mse(lst, n, i, model)
plt.legend()
plt.show()
# Vary number of x variables
def vary_xvars(model):
k = [1,10,25,50]
m = 1
a = 0.1
n = 1000
for i in k:
lst = []
for j in range(1000):
results = setup_test_mse(n, i, a, m, model)
lst.append(results)
output = mse(lst, n, i, model)
plt.legend()
plt.show()
# Vary the model with a multiplicative factor
def vary_multiply(model):
k = 10
m = [0.1,0.5,1,2]
a = 0.1
n = 1000
for i in m:
lst = []
for j in range(1000):
results = setup_test_mse(n, k, a, i, model)
lst.append(results)
output = mse(lst, n, i, model)
plt.legend()
plt.show()
def params_scatter(model):
single_models = [1,2,3]
k = [1,10,25,50]
m = 1
a = 0.1
n = 1000
if model in single_models:
for i in k:
y, x_1, x_2 = gen_data(nobs=n, num_cov=i, m=m)
x = setup_test_params(y, x_1, x_2, a, model)
plt.scatter(range(len(x.params)), x.params, label=i)
plt.legend()
plt.show()
else:
for i in k:
y, x_1, x_2 = gen_data(nobs=n, num_cov=i, m=m)
x = setup_test_params(y, x_1, x_2, a, model)
for j in list(setup_test_params(y, x_1, x_2, a, model)):
plt.scatter(range(len(j.params)), j.params)
plt.legend(['model1','model2'])
plt.show()
# -
# Model = 4 is OlS - Lasso
print('Vary Observations')
vary_obs(4)
print('Vary Alpha Levels')
vary_alpha(4)
print('Vary Multiplicative Factors')
vary_multiply(4)
print('Vary X Variables')
vary_xvars(4)
# Model = 5 is OlS - Ridge
print('Vary Observations')
vary_obs(5)
print('Vary Alpha Levels')
vary_alpha(5)
print('Vary Multiplicative Factors')
vary_multiply(5)
print('Vary X Variables')
vary_xvars(5)
# Model = 6 is Lasso - Ridge
print('Vary Observations')
vary_obs(6)
print('Vary Alpha Levels')
vary_alpha(6)
print('Vary Multiplicative Factors')
vary_multiply(6)
print('Vary X Variables')
vary_xvars(6)
| lasso/lasso_sep_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import re
import time
def enter_word(wordleBot, word):
wordleBot.send_keys(word)
wordleBot.send_keys(Keys.ENTER)
time.sleep(1)
host = wordleBot.find_element_by_tag_name("game-app")
firstHost = wordleBot.find_element_by_tag_name("game-app")
game = browser.execute_script("return arguments[0].shadowRoot.getElementById('game')",host)
keyboard = game.find_element_by_tag_name("game-keyboard")
keys = browser.execute_script("return arguments[0].shadowRoot.getElementById('keyboard')",keyboard)
time.sleep(2)
keydata = browser.execute_script("return arguments[0].innerHTML;", keys)
correctRegex = re.compile('...............correct',re.VERBOSE)
matches = ['','','','','']
n = 0
print(correctRegex.findall(keydata))
for groups in correctRegex.findall(keydata):
matches[n] = groups[0]
n = n + 1
presentRegex = re.compile('...............present',re.VERBOSE)
nearmatches = ['','','','','']
n = 0
for groups in presentRegex.findall(keydata):
nearmatches[n] = groups[0]
n = n + 1
print(nearmatches)
finalKey = ''
for char in word:
if char in matches:
finalKey = finalKey + 'G'
if char in nearmatches:
finalKey = finalKey + 'Y'
else:
finalKey = finalKey + '?'
return finalKey
browser = webdriver.Firefox(executable_path = r'C:\Users\zbot6\WebDriver\geckodriver.exe')
browser.get('http://www.powerlanguage.co.uk/wordle/')
time.sleep(1)
Elem = browser.find_element_by_tag_name('html')
Elem.click()
time.sleep(1)
enter_word(Elem, "rally")
# +
| WordleWebScraper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import numpy as np
import matplotlib.pyplot as plt
import json
plt.style.reload_library()
#plt.style.use('singlecolumn')
def gsm_fidelity(data):
'''return ground state manifold fidelity'''
if round(data['J']/data['B'], 2) > 1:
return np.sum(data['eigoccs'][:2])
else:
return data['eigoccs'][0]
# -
# # Loading and checking data
# ## Cooling
# +
data_dir = "../data/TFIM/logsweep/continuous/DM/cooling/"
files = sorted(os.listdir(data_dir))
cooling_data = []
for file in files:
if not file.endswith('.json'): continue
cooling_data.append(json.load(open(data_dir+file, 'r')))
# + [markdown] heading_collapsed=true
# ### density matrix norm check
# + hidden=true
print(' L, K, J/B, Log10(Trace of the DM - 1)')
print(
*sorted((d['L'],
d['K'],
round(d['J']/d['B'],2),
round(np.log10(np.abs(np.sum(d['eigoccs'])-1)), 0)
) for d in cooling_data ),
sep='\n'
)
# + [markdown] hidden=true
# **Note:**
# the density matrix simulator accumulates numerical errors, producing a non-normalized final density matrix.
# We cannot get rid of the numerical error, but to get consistent results we nomalize the results (energy, fidelities) during dat analysis.
# -
# ## Reheating
# +
data_dir = "../data/TFIM/logsweep/continuous/DM/reheating/"
files = sorted(os.listdir(data_dir))
reheating_data = []
for file in files:
if not file.endswith('.json'): continue
reheating_data.append(json.load(open(data_dir+file, 'r')))
# + [markdown] heading_collapsed=true
# ### density matrix norm check
# + hidden=true
print(' L, K, J/B, Log10(Trace of the DM - 1)')
print(
*sorted((d['L'],
d['K'],
round(d['J']/d['B'],2),
round(np.log10(np.abs(np.sum(d['eigoccs'])-1)), 0)
) for d in reheating_data ),
sep='\n'
)
# -
# ## Available data summary
# +
print(' K , L, J/B ')
avail_cooling = [(d['K'], d['L'], round(d['J']/d['B'],1)) for d in cooling_data]
avail_reheating = [(d['K'], d['L'], round(d['J']/d['B'],1)) for d in reheating_data]
# avail_iterative = [(d['K'], d['L'], round(d['J']/d['B'],1)) for d in iterative_data]
from itertools import product
for K, L, JvB in np.unique(avail_cooling
+ avail_reheating
# + avail_iterative
, axis=0):
K = int(K)
L = int(L)
if L!=7: continue
print((K, L, JvB),
'C' if (K, L, JvB) in avail_cooling else ' ',
'R' if (K, L, JvB) in avail_reheating else ' ',
# 'It' if (K, L, JvB) in avail_iterative else ' '
)
# + [markdown] heading_collapsed=true
# # Varying energy gradation number K
# + hidden=true
L = 7
# + [markdown] heading_collapsed=true hidden=true
# ## cooling
# + [markdown] hidden=true
# ### energy vs K
# + code_folding=[] hidden=true
# L = 7
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], d['energy'], np.sum(d['eigoccs']))
for d in cooling_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, E_l, norms_l = zip(*sorted(data_iterator))
plt.plot(K_l, np.array(E_l)/np.array(norms_l), 'o-', label=f'$J/B={JvB}$')
plt.legend()
# + [markdown] hidden=true
# ### GS infidelity vs K
# + code_folding=[] hidden=true
# L = 7
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in cooling_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(K_l, infidelity_l, 'o-', label=f'$J/B={JvB}$')
plt.legend()
plt.xlabel('K')
plt.ylabel('GS manifold infidelity')
plt.xscale('log')
plt.yscale('log')
# + [markdown] heading_collapsed=true hidden=true
# ## reheating
#
# + [markdown] hidden=true
# ### energy vs K
# + code_folding=[] hidden=true
# L = 7
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], d['energy'], np.sum(d['eigoccs']))
for d in reheating_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, E_l, norms_l = zip(*sorted(data_iterator))
plt.plot(K_l, np.array(E_l)/np.array(norms_l), 'x:', label=f'$J/B={JvB}$')
plt.legend()
# + [markdown] hidden=true
# ### GS infidelity vs K
# + code_folding=[] hidden=true
# L = 7
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in reheating_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(K_l, infidelity_l, 'x:', label=f'$J/B={JvB}$')
plt.legend()
plt.xscale('log')
plt.yscale('log')
# + [markdown] hidden=true
# ## combined
# + [markdown] hidden=true
# ### energy vs K
# + hidden=true
# L = 7
plt.title(f'TFIM chain $L={L}$. Standard LogSweep density matrix sim')
# cooling
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], d['energy'], np.sum(d['eigoccs']))
for d in cooling_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, E_l, norms_l = zip(*sorted(data_iterator))
plt.plot(K_l, np.array(E_l)/np.array(norms_l), 'o-', label=f'cooling $J/B={JvB}$')
plt.gca().set_prop_cycle(None)
# reheating
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], d['energy'], np.sum(d['eigoccs']))
for d in reheating_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, E_l, norms_l = zip(*sorted(data_iterator))
plt.plot(K_l, np.array(E_l)/np.array(norms_l), 'x:', label=f'reheating $J/B={JvB}$')
plt.gca().set_prop_cycle(None)
plt.legend(bbox_to_anchor=(1, .5), loc='center left')
# + [markdown] hidden=true
# ### GS infidelity vs K
# + code_folding=[] hidden=true
L = 7
plt.title(f'TFIM chain $L={L}$. Standard LogSweep density matrix sim')
# cooling
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in cooling_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(K_l, infidelity_l, '+:', label=f'cooling $J/B={JvB}$')
plt.gca().set_prop_cycle(None)
# reheating
for JvB in [.2, 1, 5]:
data_iterator = ((d['K'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in reheating_data
if d['L'] == L and np.isclose(d['J']/d['B'], JvB))
K_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(K_l, infidelity_l, 'x:', label=f'reheating $J/B={JvB}$')
plt.legend(bbox_to_anchor=(1, .5), loc='center left')
plt.yscale('log')
plt.xscale('log')
plt.xlabel('K')
plt.ylabel('ground space infidelity')
# -
# # scaling with system size L
# ## check available data at fixed K
# +
K = 10
print(f'available data for K = {K}:')
print(' K , L, J/B ')
avail_cooling = [(d['K'], d['L'], round(d['J']/d['B'],1)) for d in cooling_data if d['K']==K]
avail_reheating = [(d['K'], d['L'], round(d['J']/d['B'],1)) for d in reheating_data if d['K']==K]
# avail_iterative = [(d['K'], d['L'], round(d['J']/d['B'],1)) for d in iterative_data if d['K']==K]
from itertools import product
for K, L, JvB in np.unique(avail_cooling
+ avail_reheating
# + avail_iterative
, axis=0):
K = int(K)
L = int(L)
print((K, L, JvB),
'C' if (K, L, JvB) in avail_cooling else ' ',
'R' if (K, L, JvB) in avail_reheating else ' ',
# 'It' if (K, L, JvB) in avail_iterative else ' '
)
# -
# ### energy vs L
# + code_folding=[]
K=10
fig = plt.figure(figsize=(7, 4))
fig.suptitle(f'TFIM chain of length $L$.\n'
f'Standard LogSweep(K={K}) continuous DM sim', y=1.1)
JvBlist = [.2, 1, 5]
# cooling
for JvB in JvBlist:
data_iterator = ((d['L'], d['energy'], np.sum(d['eigoccs']))
for d in cooling_data
if d['K'] == K and np.isclose(d['J']/d['B'], JvB))
L_l, energy_l, norms_l = zip(*sorted(data_iterator))
energy_l /= np.array(norms_l)
plt.plot(L_l, energy_l, 'o-', label=f'cooling $J/B={JvB}$')
# reheating
plt.gca().set_prop_cycle(None)
for JvB in JvBlist:
data_iterator = ((d['L'], d['energy'], np.sum(d['eigoccs']))
for d in reheating_data
if d['K'] == K and np.isclose(d['J']/d['B'], JvB))
L_l, energy_l, norms_l = zip(*sorted(data_iterator))
energy_l /= np.array(norms_l)
plt.plot(L_l, energy_l, 'x:', label=f'reheating $J/B={JvB}$')
# # iterative
# plt.gca().set_prop_cycle(None)
# for JvB in [.2, 1, 5]:
# data_iterator = ((d['L'], d['energy'], np.sum(d['eigoccs']))
# for d in iterative_data
# if d['K'] == K and np.isclose(d['J']/d['B'], JvB))
# L_l, energy_l, norms_l = zip(*sorted(data_iterator))
# energy_l /= np.array(norms_l)
# plt.plot(L_l, energy_l, '+--', label=f'reheating $J/B={JvB}$')
handles, labels = plt.gca().get_legend_handles_labels()
plt.legend(
[plt.Line2D([],[],ls='',marker='')]
+ handles[:3]
+ [plt.Line2D([],[],ls='',marker='')]
+ handles[3:],
['cooling'] + [f'$J/B = {J}$' for J in JvBlist]
+ ['reheating'] + [f'$J/B = {J}$' for J in JvBlist],
loc='best', ncol=2)
plt.ylim(-1, -0.9)
plt.xlabel('L')
plt.ylabel('energy')
# -
# ### GS infidelity vs L
# + code_folding=[]
plt.title(f'TFIM chain. Standard LogSweep(K={K}) density matrix sim')
# cooling
for JvB in [.2, 1, 5]:
data_iterator = ((d['L'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in cooling_data
if d['K'] == K and np.isclose(d['J']/d['B'], JvB))
L_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(L_l, infidelity_l, 'o-', label=f'cooling $J/B={JvB}$')
plt.gca().set_prop_cycle(None)
# reheating
for JvB in [.2, 1, 5]:
data_iterator = ((d['L'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in reheating_data
if d['K'] == K and np.isclose(d['J']/d['B'], JvB))
L_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(L_l, infidelity_l, 'x:', label=f'reheating $J/B={JvB}$')
plt.gca().set_prop_cycle(None)
# for JvB in [.2, 1, 5]:
# data_iterator = ((d['L'], gsm_fidelity(d), np.sum(d['eigoccs']))
# for d in iterative_data
# if d['K'] == K and np.isclose(d['J']/d['B'], JvB))
# L_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
# fidelty_l /= np.array(norms_l)
# infidelity_l = 1 - np.array(fidelty_l)
# plt.plot(L_l, infidelity_l, '+--', label=f'iterative $J/B={JvB}$')
plt.legend(bbox_to_anchor=(1, .5), loc='center left')
plt.yscale('log')
plt.xscale('log')
plt.xlabel('L')
plt.ylabel('ground space infidelity')
# -
# ## test: changing K with L
# + code_folding=[]
plt.title(f'TFIM chain. Standard LogSweep(K=L) density matrix sim')
# cooling
for JvB in [.2, 1, 5]:
data_iterator = ((d['L'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in cooling_data
if d['K'] == d['L'] and np.isclose(d['J']/d['B'], JvB))
L_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(L_l, infidelity_l, 'o-', label=f'cooling $J/B={JvB}$')
plt.gca().set_prop_cycle(None)
# reheating
for JvB in [.2, 1, 5]:
data_iterator = ((d['L'], gsm_fidelity(d), np.sum(d['eigoccs']))
for d in reheating_data
if d['K'] == d['L'] and np.isclose(d['J']/d['B'], JvB))
L_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
fidelty_l /= np.array(norms_l)
infidelity_l = 1 - np.array(fidelty_l)
plt.plot(L_l, infidelity_l, 'x:', label=f'reheating $J/B={JvB}$')
plt.gca().set_prop_cycle(None)
# for JvB in [.2, 1, 5]:
# data_iterator = ((d['L'], gsm_fidelity(d), np.sum(d['eigoccs']))
# for d in iterative_data
# if d['K'] == K and np.isclose(d['J']/d['B'], JvB))
# L_l, fidelty_l, norms_l = zip(*sorted(data_iterator))
# fidelty_l /= np.array(norms_l)
# infidelity_l = 1 - np.array(fidelty_l)
# plt.plot(L_l, infidelity_l, '+--', label=f'iterative $J/B={JvB}$')
#plt.legend(bbox_to_anchor=(1, .5), loc='center left')
plt.yscale('log')
plt.xscale('log')
plt.xlabel('L')
plt.ylabel('ground space infidelity')
# -
# # Eigenstate occupation plots
# +
L = 7
JvBlist = [0.2, 1, 5]
Klist = [2, 40]
from qdclib import TFIMChain
fig, sbpl = plt.subplots(len(Klist), len(JvBlist),
sharex = True, sharey = True,
gridspec_kw={'hspace': 0, 'wspace': 0},
figsize = (7, 4.6))
for i, JvB in enumerate(JvBlist):
system = TFIMChain(L, JvB, 1)
system.normalize()
for j, K in enumerate(Klist):
for d in cooling_data:
if d['L'] == L and np.isclose(JvB, d['J']/d['B']) and d['K'] == K:
break
sbpl[j, i].plot(system.eigvals, d['eigoccs'], '.', label='cooling')
for d in reheating_data:
if d['L'] == L and np.isclose(JvB, d['J']/d['B']) and d['K'] == K:
break
sbpl[j, i].plot(system.eigvals, d['eigoccs'], '_', label='reheating')
sbpl[j, i].set_yscale('log')
plt.tight_layout()
plt.suptitle('eingenstate occupations under\n'
'continuous coupled evolution QDC-LogSweep',
va='bottom', y=1.05
)
sbpl[1, 1].set_xlabel('eigenstate energy')
sbpl[0, 0].set_ylabel('occupation')
sbpl[1, 0].set_ylabel('occupation')
sbpl[0,2].legend(loc='lower left')
fig.text(0.99, 0.77, '$K = 2$', va='center',
rotation = -90, fontsize = plt.rcParams['axes.labelsize'])
fig.text(0.99, 0.37, '$K = 40$', va='center',
rotation = -90, fontsize = plt.rcParams['axes.labelsize'])
for i, JvB in enumerate(JvBlist):
fig.text(i*0.28+0.27, 1, f'$J/B = {JvB}$',
va = 'baseline', ha = 'center',
fontsize = plt.rcParams['axes.labelsize'])
# -
# ### without reheating, single row
# +
L = 10
JvBlist = [0.2, 1, 5]
Klist = [2, 5, 40]
from qdclib import TFIMChain
fig, sbpl = plt.subplots(1, len(JvBlist),
sharex = True, sharey = True,
gridspec_kw={'hspace': 0, 'wspace': 0},
figsize = (7, 3))
for i, JvB in enumerate(JvBlist):
system = TFIMChain(L, JvB, 1)
system.normalize()
for K, c in zip(Klist, ['#E24A33', '#348ABD', '#988ED5']):
for d in cooling_data:
if d['L'] == L and np.isclose(JvB, d['J']/d['B']) and d['K'] == K:
break
sbpl[i].plot(system.eigvals, d['eigoccs'], '.',
color=c, label='NA')
sbpl[i].set_yscale('log')
sbpl[i].set_ylim(bottom= 1E-17, top=1)
sbpl[i].text(0, 2, f'$J/B = {JvB}$',
va = 'bottom', ha = 'center',
fontsize = plt.rcParams['axes.labelsize'])
plt.tight_layout()
sbpl[1].set_xlabel('eigenstate energy')
sbpl[0].set_ylabel('occupation')
sbpl[2].legend([f'$K = {K}$' for K in Klist], loc='lower left', handlelength=0.5)
#plt.savefig('../figures/eigenoccs.pdf', bbox_inches='tight')
| data-analysis/TFIM-chain-logsweep-continuous-DM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import csv
import numpy as np
import matplotlib.pyplot as plt
import cv2
from PIL import Image
# -
class VideoRecord(object):
def __init__(self, video_path, label):
self.path = video_path
self.video = cv2.VideoCapture(self.path)
self.num_frames = self._get_num_frames()
self.label = label
def _get_num_frames(self):
count = 0
success, frame = self.video.read()
while(success):
success, frame = self.video.read()
count += 1
self.video.set(2, 0)
return count
def get_frames(self, indices):
"""
Argument:
indices : Sorted list of frames indices
Returns:
images : Dictionary in format {frame_id: PIL Image}
"""
images = dict()
self.video.set(cv2.CAP_PROP_POS_FRAMES, min(indices))
for count in range(min(indices), max(indices)+1):
success, frame = self.video.read()
if success is False:
print('\nCould not load frame {} from video {}\n'.format(count, self.path))
return None
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
if count in indices:
images[count] = Image.fromarray(frame)
return images
def parse_list(list_file):
video_list = []
with open(list_file) as f:
reader = csv.DictReader(f)
for i, row in enumerate(reader):
# if i%10 == 0:
vid = row['id']
actions = row['actions']
if actions == '':
actions = []
else:
actions = [a.split(' ') for a in actions.split(';')]
actions = [{'class': x, 'start': float(
y), 'end': float(z)} for x, y, z in actions]
video_list.append([actions, vid])
return video_list
# +
train_file = '/media/v-pakova/New Volume/Datasets/Charades/Annotations/Charades_v1_train.csv'
root_path = '/media/v-pakova/New Volume/Datasets/Charades/Charades_v1_480'
num_classes = 157
FPS = 24
video_list = parse_list(train_file)
# +
targets = np.zeros((len(video_list), num_classes))
actual_targets = np.zeros((len(video_list), num_classes))
for i, (label, video_path) in enumerate(video_list):
if i%100 == 0:
print(i)
record = VideoRecord(os.path.join(root_path, video_path+'.mp4'), label)
for l in label:
targets[i, int(l['class'][1:])] = 1
frame_start = int(l['start'] * FPS)
if frame_start < record.num_frames:
actual_targets[i, int(l['class'][1:])] = 1
np.stack(actual_targets)
per_label_sum = np.sum(targets, axis=0)
actual_per_label_sum = np.sum(actual_targets, axis=0)
weights = max(per_label_sum) / per_label_sum
# +
x = np.arange(num_classes)
plt.figure(figsize=(20,10))
plt.bar(x, per_label_sum)
plt.xlim(-1, num_classes)
plt.axhline(max(per_label_sum), linestyle='--', color='r')
plt.axhline(min(per_label_sum), linestyle='--', color='b')
l, r = plt.xlim()
plt.text(r+.5, max(per_label_sum), int(max(per_label_sum)), va='center', ha="left")
plt.text(r+.5, min(per_label_sum), int(min(per_label_sum)), va='center', ha="left")
plt.title('Quantity of training examples per label', fontsize=20)
plt.tick_params(labelsize=14)
plt.show()
# +
x = np.arange(num_classes)
plt.figure(figsize=(20,10))
plt.bar(x, actual_per_label_sum, color='g')
plt.xlim(-1, num_classes)
plt.axhline(max(actual_per_label_sum), linestyle='--', color='r')
plt.axhline(min(actual_per_label_sum), linestyle='--', color='b')
l, r = plt.xlim()
plt.text(r+.5, max(actual_per_label_sum), int(max(actual_per_label_sum)), va='center', ha="left")
plt.text(r+.5, min(actual_per_label_sum), int(min(actual_per_label_sum)), va='center', ha="left")
plt.title('Actual quantity of training examples per label', fontsize=20)
plt.tick_params(labelsize=14)
plt.show()
# +
x = np.arange(num_classes)
plt.figure(figsize=(20,10))
# plt.bar(x, per_label_sum)
plt.xlim(-1, num_classes)
width = 0.2 # the width of the bars
rects1 = plt.bar(x - width/2, per_label_sum, width, label='Total')
rects2 = plt.bar(x + width/2, actual_per_label_sum, width, label='Actual')
plt.legend()
plt.axhline(max(per_label_sum), linestyle='--', color='r')
plt.axhline(min(per_label_sum), linestyle='--', color='b')
l, r = plt.xlim()
plt.text(r+.5, max(per_label_sum), int(max(per_label_sum)), va='center', ha="left")
plt.text(r+.5, min(per_label_sum), int(min(per_label_sum)), va='center', ha="left")
plt.title('Quantity of training examples per label', fontsize=20)
plt.tick_params(labelsize=14)
plt.show()
# +
x = np.arange(num_classes)
diff = per_label_sum - actual_per_label_sum
plt.figure(figsize=(20,10))
plt.bar(x, diff)
plt.xlim(-1, num_classes)
plt.axhline(max(diff), linestyle='--', color='r')
plt.axhline(min(diff), linestyle='--', color='b')
l, r = plt.xlim()
plt.text(r+.5, max(diff), int(max(diff)), va='center', ha="left")
plt.text(r+.5, min(diff), int(min(diff)), va='center', ha="left")
plt.title('Quantity of training examples per label that CAN NOT be readed', fontsize=20)
plt.tick_params(labelsize=14)
plt.show()
# -
print('Annotated labels: {} | Readable labels: {} ({:.2f}%) | Diff: {}'.format(
sum(per_label_sum), sum(actual_per_label_sum), sum(actual_per_label_sum)*100/sum(per_label_sum), sum(diff), ))
# +
len_labels = []
for i, (label, video_path) in enumerate(video_list):
len_labels.append(len(label))
len_labels = sorted(len_labels)
# +
x = np.arange(len(video_list))
plt.figure(figsize=(20,10))
plt.bar(x, len_labels)
plt.xlim(0, len(video_list)+1)
plt.axhline(max(len_labels), linestyle='--', color='r')
plt.axhline(min(len_labels), linestyle='--', color='b')
l, r = plt.xlim()
plt.text(r+.5, max(len_labels), int(max(len_labels)), va='center', ha="left")
plt.text(r+.5, min(len_labels), int(min(len_labels)), va='center', ha="left")
plt.title('Quantity of labels per video', fontsize=20)
plt.tick_params(labelsize=14)
plt.show()
# +
classes_frames = np.zeros((num_classes))
total_frames = 0
with open(train_file) as f:
reader = csv.DictReader(f)
for row in reader:
actions = row['actions']
total_frames += int(float(row['length']) * FPS)
if actions:
actions = [a.split(' ') for a in actions.split(';')]
actions = [{'class': x, 'start': float(
y), 'end': float(z)} for x, y, z in actions]
for l in actions:
frame_start = int(l['start'] * FPS)
frame_end = int(l['end'] * FPS)
classes_frames[int(l['class'][1:])] += frame_end - frame_start
# -
np.mean(per_video_ratio)
total_frames
classes_frames
pos_weigth = [(total_frames-p)/p for p in classes_frames]
pos_weigth
weights
# Getting length of videos
length = []
vid = []
for i, (label, video_path) in enumerate(video_list):
if i%100 == 0:
print(i)
record = VideoRecord(os.path.join(root_path, video_path+'.mp4'), label)
length.append(record.num_frames)
vid.append(record.path)
for i, l in enumerate(length):
if l < 64:
print(i, l)
vid[3432]
| notebooks/charades_per_label_sum_short_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv2
# language: python
# name: venv2
# ---
import pandas as pd
# ## Dataframe exportado do google forms
# ### v1
df1 = pd.read_csv('./Data/informacoes_de_contato.csv')
# ### v2
df1 = pd.read_csv('./Data/informacoes_de_contatov2.csv')
# ### v4
df1 = pd.read_csv('./Data/datasets/informacoes_de_contatov4.csv')
df1 = df1.rename(columns = {'Qual a classificação?': 'y'})
len(df1)
df1.info()
df1.head()
df1.y.value_counts()
# ### Extração TERRITORIO.docx
df_territorios = pd.read_pickle('./Data/datasets/classificador_territorios.pkl')
df_territorios.info()
df_territorios['y'] = "Território"
df_territorios.info()
df1 = df1.rename(columns= {"Palavras-chave utilizada para busca": "palavra_chave",
"Positiva ou Negativa?": "tipo",
"Fonte": "fonte",
"Autor": "autor",
"Teor textual": "texto"})
df1.info()
df1 = df1[['palavra_chave', 'tipo', 'fonte', 'texto', 'y']]
df_final = pd.concat([df1, df_territorios])
df_final.info()
len(df_final)
df_final.head()
df_final.to_parquet('./Data/training_data/df_v4.parquet.gzip', compression='gzip')
| MountingData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tp] *
# language: python
# name: conda-env-tp-py
# ---
import tweepy
import time
import os
import sys
import json
import argparse
from datetime import datetime
# +
FOLLOWING_DIR = 'following'
USER_DIR = 'twitter-users'
MAX_FRIENDS = 200
FRIENDS_OF_FRIENDS_LIMIT = 200
# Create the directories we need
if not os.path.exists(FOLLOWING_DIR):
os.makedirs(FOLLOWING_DIR)
if not os.path.exists(USER_DIR):
os.makedirs(USER_DIR)
enc = lambda x: x.encode('ascii', errors='ignore')
# The consumer keys can be found on your application's Details
# page located at https://dev.twitter.com/apps (under "OAuth settings")
# The access tokens can be found on your applications's Details
# page located at https://dev.twitter.com/apps (located
# under "Your access token")
CONSUMER_KEY = ''
CONSUMER_SECRET = ''
ACCESS_TOKEN = ''
ACCESS_TOKEN_SECRET = ''
# == OAuth Authentication ==
#
# This mode of authentication is the new preferred way
# of authenticating with Twitter.
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
# Create the directories w
def get_follower_ids(centre, max_depth=1, current_depth=0, taboo_list=[]):
if current_depth == max_depth:
print 'out of depth'
return taboo_list
if centre in taboo_list:
# we've been here before
print 'Already been here.'
return taboo_list
else:
taboo_list.append(centre)
try:
userfname = os.path.join(USER_DIR, str(centre) + '.json')
if not os.path.exists(userfname):
print 'Retrieving user details for twitter id %s' % str(centre)
while True:
try:
user = api.get_user(centre)
d = {'name': user.name,
'screen_name': user.screen_name,
'profile_image_url' : user.profile_image_url,
'created_at' : str(user.created_at),
'id': user.id,
'friends_count': user.friends_count,
'followers_count': user.followers_count,
'followers_ids': user.followers_ids()}
with open(userfname, 'w') as outf:
outf.write(json.dumps(d, indent=1))
user = d
break
except tweepy.TweepError, error:
print type(error)
if str(error) == 'Not authorized.':
print 'Can''t access user data - not authorized.'
return taboo_list
if str(error) == 'User has been suspended.':
print 'User suspended.'
return taboo_list
errorObj = error[0][0]
print errorObj
if errorObj['message'] == 'Rate limit exceeded':
print 'Rate limited. Sleeping for 15 minutes.'
time.sleep(15 * 60 + 15)
continue
return taboo_list
else:
user = json.loads(file(userfname).read())
screen_name = enc(user['screen_name'])
fname = os.path.join(FOLLOWING_DIR, screen_name + '.csv')
friendids = []
if not os.path.exists(fname):
print 'No cached data for screen name "%s"' % screen_name
with open(fname, 'w') as outf:
params = (enc(user['name']), screen_name)
print 'Retrieving friends for user "%s" (%s)' % params
# page over friends
c = tweepy.Cursor(api.friends, id=user['id']).items()
friend_count = 0
while True:
try:
friend = c.next()
friendids.append(friend.id)
params = (friend.id, enc(friend.screen_name), enc(friend.name))
outf.write('%s\t%s\t%s\n' % params)
friend_count += 1
if friend_count >= MAX_FRIENDS:
print 'Reached max no. of friends for "%s".' % friend.screen_name
break
except tweepy.TweepError:
# hit rate limit, sleep for 15 minutes
print 'Rate limited. Sleeping for 15 minutes.'
time.sleep(15 * 60 + 15)
continue
except StopIteration:
break
else:
friendids = [int(line.strip().split('\t')[0]) for line in file(fname)]
print 'Found %d friends for %s' % (len(friendids), screen_name)
# get friends of friends
cd = current_depth
if cd+1 < max_depth:
for fid in friendids[:FRIENDS_OF_FRIENDS_LIMIT]:
taboo_list = get_follower_ids(fid, max_depth=max_depth,
current_depth=cd+1, taboo_list=taboo_list)
if cd+1 < max_depth and len(friendids) > FRIENDS_OF_FRIENDS_LIMIT:
print 'Not all friends retrieved for %s.' % screen_name
except Exception, error:
print 'Error retrieving followers for user id: ', centre
print error
if os.path.exists(fname):
os.remove(fname)
print 'Removed file "%s".' % fname
sys.exit(1)
return taboo_list
# -
if __name__ == '__main__':
ap = argparse.ArgumentParser()
ap.add_argument("-s", "--screen-name", required=True, help="Screen name of twitter user")
ap.add_argument("-d", "--depth", required=True, type=int, help="How far to follow user network")
args = vars(ap.parse_args())
twitter_screenname = args['screen_name']
depth = int(args['depth'])
if depth < 1 or depth > 5:
print 'Depth value %d is not valid. Valid range is 1-5.' % depth
sys.exit('Invalid depth argument.')
print 'Max Depth: %d' % depth
matches = api.lookup_users(screen_names=[twitter_screenname])
if len(matches) == 1:
print get_follower_ids(matches[0].id, max_depth=depth)
else:
print 'Sorry, could not find twitter user with screen name: %s' % twitter_screenname
| notebooks/experiment_stash/LDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# This notebook was prepared by [<NAME>](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/data-science-ipython-notebooks).
# # Functions
# * Functions as Objects
# * Lambda Functions
# * Closures
# * \*args, \*\*kwargs
# * Currying
# * Generators
# * Generator Expressions
# * itertools
# ## Functions as Objects
# Python treats functions as objects which can simplify data cleaning. The following contains a transform utility class with two functions to clean strings:
# +
# %%file transform_util.py
import re
class TransformUtil:
@classmethod
def remove_punctuation(cls, value):
"""Removes !, #, and ?.
"""
return re.sub('[!#?]', '', value)
@classmethod
def clean_strings(cls, strings, ops):
"""General purpose method to clean strings.
Pass in a sequence of strings and the operations to perform.
"""
result = []
for value in strings:
for function in ops:
value = function(value)
result.append(value)
return result
# -
# Below are nose tests that exercises the utility functions:
# +
# %%file tests/test_transform_util.py
from nose.tools import assert_equal
from ..transform_util import TransformUtil
class TestTransformUtil():
states = [' Alabama ', 'Georgia!', 'Georgia', 'georgia', \
'FlOrIda', 'south carolina##', 'West virginia?']
expected_output = ['Alabama',
'Georgia',
'Georgia',
'Georgia',
'Florida',
'South Carolina',
'West Virginia']
def test_remove_punctuation(self):
assert_equal(TransformUtil.remove_punctuation('!#?'), '')
def test_map_remove_punctuation(self):
# Map applies a function to a collection
output = map(TransformUtil.remove_punctuation, self.states)
assert_equal('!#?' not in output, True)
def test_clean_strings(self):
clean_ops = [str.strip, TransformUtil.remove_punctuation, str.title]
output = TransformUtil.clean_strings(self.states, clean_ops)
assert_equal(output, self.expected_output)
# -
# Execute the nose tests in verbose mode:
# !nosetests /tests/test_transform_util.py -v
# ## Lambda Functions
# Lambda functions are anonymous functions and are convenient for data analysis, as data transformation functions take functions as arguments.
# Sort a sequence of strings by the number of letters:
strings = ['foo', 'bar,', 'baz', 'f', 'fo', 'b', 'ba']
strings.sort(key=lambda x: len(list(x)))
strings
# ## Closures
# Closures are dynamically-genearated functions returned by another function. The returned function has access to the variables in the local namespace where it was created.
#
# Closures are often used to implement decorators. Decorators are useful to transparently wrap something with additional functionality:
#
# ```python
# def my_decorator(fun):
# def myfun(*params, **kwparams):
# do_something()
# fun(*params, **kwparams)
# return myfun
# ```
# Each time the following closure() is called, it generates the same output:
# +
def make_closure(x):
def closure():
print('Secret value is: %s' % x)
return closure
closure = make_closure(7)
closure()
# -
# Keep track of arguments passed:
# +
def make_watcher():
dict_seen = {}
def watcher(x):
if x in dict_seen:
return True
else:
dict_seen[x] = True
return False
return watcher
watcher = make_watcher()
seq = [1, 1, 2, 3, 5, 8, 13, 2, 5, 13]
[watcher(x) for x in seq]
# -
# ## \*args, \*\*kwargs
# \*args and \*\*kwargs are useful when you don't know how many arguments might be passed to your function or when you want to handle named arguments that you have not defined in advance.
# Print arguments and call the input function on *args:
# +
def foo(func, arg, *args, **kwargs):
print('arg: %s', arg)
print('args: %s', args)
print('kwargs: %s', kwargs)
print('func result: %s', func(args))
foo(sum, "foo", 1, 2, 3, 4, 5)
# -
# ## Currying
# Currying means to derive new functions from existing ones by partial argument appilcation. Currying is used in pandas to create specialized functions for transforming time series data.
#
# The argument y in add_numbers is curried:
# +
def add_numbers(x, y):
return x + y
add_seven = lambda y: add_numbers(7, y)
add_seven(3)
# -
# The built-in functools can simplify currying with partial:
from functools import partial
add_five = partial(add_numbers, 5)
add_five(2)
# ## Generators
# A generator is a simple way to construct a new iterable object. Generators return a sequence lazily. When you call the generator, no code is immediately executed until you request elements from the generator.
#
# Find all the unique ways to make change for $1:
# +
def squares(n=5):
for x in xrange(1, n + 1):
yield x ** 2
# No code is executed
gen = squares()
# Generator returns values lazily
for x in squares():
print x
# -
# ## Generator Expressions
#
# A generator expression is analogous to a comprehension. A list comprehension is enclosed by [], a generator expression is enclosed by ():
gen = (x ** 2 for x in xrange(1, 6))
for x in gen:
print x
# ## itertools
#
# The library itertools has a collection of generators useful for data analysis.
#
# Function groupby takes a sequence and a key function, grouping consecutive elements in the sequence by the input function's return value (the key). groupby returns the function's return value (the key) and a generator.
import itertools
first_letter = lambda x: x[0]
strings = ['foo', 'bar', 'baz']
for letter, gen_names in itertools.groupby(strings, first_letter):
print letter, list(gen_names)
# itertools contains many other useful functions:
#
# | Function | Description|
# | ------------- |-------------|
# | imap | Generator version of map |
# | ifilter | Generator version of filter |
# | combinations | Generates a sequence of all possible k-tuples of elements in the iterable, ignoring order |
# | permutations | Generates a sequence of all possible k-tuples of elements in the iterable, respecting order |
# | groupby | Generates (key, sub-iterator) for each unique key |
| python-data/functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/maiormarso/DS-Unit-2-Linear-Models/blob/master/module2-regression-2/LS_DS9_212_assignment_regression_classification_2_(1).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="NKpx39s-Wx0h" colab_type="text"
# Lambda School Data Science
#
# *Unit 2, Sprint 1, Module 2*
#
# ---
# + id="Y_M89xx7-Ytc" colab_type="code" colab={}
# + [markdown] colab_type="text" id="7IXUfiQ2UKj6"
# # Regression 2
#
# ## Assignment
#
# You'll continue to **predict how much it costs to rent an apartment in NYC,** using the dataset from renthop.com.
#
# - [ ] Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.
# - [ ] Engineer at least two new features. (See below for explanation & ideas.)
# - [ ] Fit a linear regression model with at least two features.
# - [ ] Get the model's coefficients and intercept.
# - [ ] Get regression metrics RMSE, MAE, and $R^2$, for both the train and test data.
# - [ ] What's the best test MAE you can get? Share your score and features used with your cohort on Slack!
# - [ ] As always, commit your notebook to your fork of the GitHub repo.
#
#
# #### [Feature Engineering](https://en.wikipedia.org/wiki/Feature_engineering)
#
# > "Some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used." — <NAME>, ["A Few Useful Things to Know about Machine Learning"](https://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf)
#
# > "Coming up with features is difficult, time-consuming, requires expert knowledge. 'Applied machine learning' is basically feature engineering." — <NAME>, [Machine Learning and AI via Brain simulations](https://forum.stanford.edu/events/2011/2011slides/plenary/2011plenaryNg.pdf)
#
# > Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work.
#
# #### Feature Ideas
# - Does the apartment have a description?
# - How long is the description?
# - How many total perks does each apartment have?
# - Are cats _or_ dogs allowed?
# - Are cats _and_ dogs allowed?
# - Total number of rooms (beds + baths)
# - Ratio of beds to baths
# - What's the neighborhood, based on address or latitude & longitude?
#
# ## Stretch Goals
# - [ ] If you want more math, skim [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 3.1, Simple Linear Regression, & Chapter 3.2, Multiple Linear Regression
# - [ ] If you want more introduction, watch [<NAME>, Statistics 101: Simple Linear Regression](https://www.youtube.com/watch?v=ZkjP5RJLQF4)
# (20 minutes, over 1 million views)
# - [ ] Add your own stretch goal(s) !
# + colab_type="code" id="o9eSnDYhUGD7" colab={}
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# + colab_type="code" id="cvrw-T3bZOuW" colab={}
import numpy as np
import pandas as pd
from datetime import datetime
# Read New York City apartment rental listing data
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# + id="2QA9wJlV5rHl" colab_type="code" outputId="02bd987c-d85a-4c45-ad0d-aab91267d4ad" colab={"base_uri": "https://localhost:8080/", "height": 185}
df.head(1)
# + id="qkbTHNUyLc6v" colab_type="code" colab={}
df['bednbaths'] = df['bedrooms'] + df['bathrooms']
#df['easy_outlook'] = df['balcony'] + ['wheelchair_access']
# + id="X30A_gbcPPxf" colab_type="code" colab={}
df['wheelchairview'] = df['balcony'] + df['wheelchair_access']
# + id="oULOU3ymGcf8" colab_type="code" colab={}
dt=df
# + [markdown] id="JjZ8NWGJVQ1r" colab_type="text"
# Do train/test split. Use data from April & May 2016 to train. Use data from June 2016 to test.
# + id="zJfVkhEhJiVl" colab_type="code" colab={}
# + id="IZCGonvoTd3_" colab_type="code" colab={}
dt['year'] = pd.DatetimeIndex(df['created']).year
# + id="mWKIaWNkUFhv" colab_type="code" colab={}
dt['month'] = pd.DatetimeIndex(df['created']).month
# + id="NO91U2Bmsca1" colab_type="code" colab={}
train=dt
test=dt
# + id="1rAW1AF6sqaz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 980} outputId="f35b559c-5604-477f-9a68-e1f466fea522"
array = ['6']
test=test.loc[(dt['year'] == 2016) & dt['month'].isin(array)]
test.head(500)
# + id="OFE5Ak4dMh2b" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 790} outputId="3aff979e-1758-4b2a-d43d-3fbb237e17f6"
array = ['4', '5']
train=train.loc[(dt['year'] == 2016) & dt['month'].isin(array)]
train.head(500)
# + id="5Zf_vYx2N1Mb" colab_type="code" outputId="361346a0-5448-4880-90c0-c63af70ce546" colab={"base_uri": "https://localhost:8080/", "height": 54}
guess = train['price'].mean()
errors = guess - train['price']
mean_absolute_error = errors.abs().mean()
print(f'If we just guessed every Tribeca condo sold for ${guess:,.0f},')
print(f'we would be off by ${mean_absolute_error:,.0f} on average.')
# + id="oSvOBKo215eK" colab_type="code" colab={}
# 1. Import the appropriate estimator class fro Scikit-learn
from sklearn.linear_model import LinearRegression
# + id="Yf9T61NRZnbh" colab_type="code" colab={}
#2 Instantiate this class
model = LinearRegression()
# + id="I43OcbR55jhm" colab_type="code" colab={}
#3. Arrange X features matrix & y target vector
features = (['bedrooms','bathrooms'])
target = 'price'
X_train = train[features]
y_train = train[target]
# + id="rCqp8ELUv5TF" colab_type="code" outputId="9d9ca66b-8f47-4ce7-dafe-1153b38a05a1" colab={"base_uri": "https://localhost:8080/", "height": 35}
X_train.shape
# + id="3-A7dO-8ttqs" colab_type="code" colab={}
#3. Arrange X features matrix & y target vector
features = (['bedrooms','bathrooms'])
target = 'price'
X_test = test[features]
y_test = test[target]
# + id="x14T4fjQwHTk" colab_type="code" outputId="947bebcc-0d18-4d90-ccf3-752aaf52659f" colab={"base_uri": "https://localhost:8080/", "height": 35}
X_train.shape,X_test.shape
# + id="P7Ue_TwUaSo1" colab_type="code" outputId="6b3a16c1-b770-423a-b7aa-3715ba168350" colab={"base_uri": "https://localhost:8080/", "height": 617}
#trendline='ols' draws and Ordinary Least SQuares regression line
import plotly.express as px
px.scatter(dt, x='bedrooms',y='price', trendline='olds')
# + id="Nx_tpE4NbKDq" colab_type="code" outputId="2c9fd159-0a15-4173-c9e0-34fd61749969" colab={"base_uri": "https://localhost:8080/", "height": 35}
#4. Fit the model
model.fit(X_train, y_train)
# + id="i8-gpNvlC45J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="11448bbe-8a8f-4817-fc34-adc62325035d"
model.fit(X_test, y_test)
# + id="ehsA24lNepFp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="fe9bb640-392d-450e-8303-313728fc5ea2"
#5. Apply the model to new
bedrooms = 1
bathrooms = 2
X_train = [['bedrooms'],['bathrooms']]
y_pred = model.predict(X_test)
y_pred
# + id="fvE1jGjyIyUc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="1ddcc8db-32ad-4aa5-d439-5d0e3ad8282d"
model.coef_
# + id="QSpDdd0gI3r4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="8e1dd2a4-98b1-40b0-c7c1-f2a2c620595c"
model.intercept_
# + id="mxsQdeJ2GZOg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 90} outputId="9401e6a4-8ae6-4a03-e4e2-718c103b20fd"
#Equations for a line
m = model.coef_[0]
b = model.intercept_
print('y = mx + b')
print(f'y = {m: 0f}*x + {b: 0f}')
print(f'y = {m: 0f}*x + {b: 0f}')
print(f'price = {m:0f}*bedrooms + {b:.0f}')
# + id="neTrpzGTzD4F" colab_type="code" colab={}
def predict(bedrooms,bathrooms):
y_pred= model.predict([[bedrooms,bathrooms]])
estimate= y_pred[0]
coefficient = model.coef_[1]
result = f'${estimate:,.0f} estimated price for {bedrooms:,.0f} bedroom apparment.'
explanation = f'in this linear regression each additional bedroom adds ${coefficient:,.0f}.'
return result + explanation
# + id="TkalofwjDybK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="71e01d37-0951-4f27-e413-9288f9bf6779"
predict(1,2)
| module2-regression-2/LS_DS9_212_assignment_regression_classification_2_(1).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Matplotlib图鉴——基础饼图
#
# ## 公众号:可视化图鉴
import matplotlib
print(matplotlib.__version__) #查看Matplotlib版本
import pandas as pd
print(pd.__version__) #查看pandas版本
import numpy as np
print(np.__version__) #查看numpy版本
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['STHeiti'] #设置中文
# 注意,代码在以下环境全部通过测试:
# - Python 3.7.1
# - Matplotlib == 3.3.2
# - pandas == 1.2.0
# - numpy == 1.19.2
#
# 因版本不同,可能会有部分语法差异,如有报错,请先检查拼写及版本是否一致!
# ### 饼图(甜甜圈图)- 增加白色块
# +
#-*- coding: utf-8 -*-
import matplotlib.pyplot as plt
plt.figure(figsize=(8,9),dpi = 100)
sizes = [150,250,300,60]
labels = ['A','B','C','D']
colors = ['#8A977B','#F4D000','#FF7F00','#FF4040']
patches,l_text,p_text = plt.pie(sizes,labels = labels,
colors=colors,
autopct = '%3.2f%%',
startangle = 90,
pctdistance = 0.8
)
centre_circle = plt.Circle((0,0),0.50,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.legend(patches, labels,
loc="center left",
bbox_to_anchor=(1, 0.2, 1, 1),
fontsize=20)
for t in l_text:
t.set_size(30)
for t in p_text:
t.set_size(17)
plt.title("饼图(甜甜圈图)- 增加白色块",fontsize = 20)
plt.axis('equal')
plt.show()
# -
# ### 饼图(甜甜圈图)修改颜色和改变标签位置
# +
#fig, ax = plt.subplots(figsize=(6, 3), subplot_kw=dict(aspect="equal"))
plt.figure(figsize=(8,9),dpi = 100)
sizes = [150,250,300,60]
labels = ['A','B','C','D']
colors = ['#FE4365','#FC9D9A','#F9CDAD','#C8C8A9']
wedges, texts = plt.pie(sizes,colors = colors, wedgeprops=dict(width=0.5), startangle=-40)
bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72)
kw = dict(arrowprops=dict(arrowstyle="-"),
bbox=bbox_props, zorder=0, va="center")
for i, p in enumerate(wedges):
ang = (p.theta2 - p.theta1)/2. + p.theta1
y = np.sin(np.deg2rad(ang))
x = np.cos(np.deg2rad(ang))
horizontalalignment = {-1: "right", 1: "left"}[int(np.sign(x))]
connectionstyle = "angle,angleA=0,angleB={}".format(ang)
kw["arrowprops"].update({"connectionstyle": connectionstyle})
plt.annotate(labels[i], xy=(x, y), xytext=(1.2*np.sign(x),y),
horizontalalignment=horizontalalignment,fontsize = 20,**kw)
plt.legend(wedges, labels,
loc="center left",
bbox_to_anchor=(1, 0.2, 1, 1),
fontsize=20)
plt.title("饼图(甜甜圈图)- 修改颜色和改变标签位置",fontsize = 20)
plt.show()
# -
# ### 嵌套饼图
# +
import matplotlib.pyplot as plt
import numpy as np
plt.subplots(figsize=(8,9),dpi = 100)
size = 0.3 #内外圆心的比例
vals = np.array([[60., 32.], [37., 40.], [29., 10.]])
labels1 = ['A','B','C']
labels2 = ['AA','AB','BA','BB','CA','CB']
cmap = plt.get_cmap("tab20b")
outer_colors = cmap(np.arange(3)*4)
inner_colors = cmap([1, 2, 5, 6, 9, 10])
wedges1, text1 = plt.pie(vals.sum(axis=1), radius=1, colors=outer_colors,
wedgeprops=dict(width=size, edgecolor='w'))
wedges2, text2 = plt.pie(vals.flatten(), radius=1-size, colors=inner_colors,
wedgeprops=dict(width=size, edgecolor='w'))
plt.legend(wedges2, labels2,
loc="center left",
bbox_to_anchor=(1, 0.2, 1, 1),
fontsize=20)
plt.title("嵌套饼图",fontsize = 20)
plt.savefig("C_06.png")
plt.show()
# -
| C-饼图/基础饼图MA_C_02/MA_C_02.ipynb |
// -*- coding: utf-8 -*-
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .java
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Java
// language: java
// name: java
// ---
// # Aerospike Java Client – Reading and Updating Maps
// *Last updated: June 22, 2021*
//
// This notebook demonstrates Java Aerospike CRUD operations (Create, Read, Update, Delete) for maps of data, focusing on server-side **read** and **update** operations.
//
// Aerospike stores records by association with a **key**. Maps contain key:value pairs. This notebook makes use of the word **mapkey** to distinguish from a record **key**.
//
// This [Jupyter Notebook](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html) requires the Aerospike Database running locally with Java kernel and Aerospike Java Client. To create a Docker container that satisfies the requirements and holds a copy of these notebooks, visit the [Aerospike Notebooks Repo](https://github.com/aerospike-examples/interactive-notebooks).
// + [markdown] heading_collapsed=true
// # Notebook Setup
//
// Run these first to initialize Jupyter, download the Java Client, and make sure the Aerospike Database is running.
// + [markdown] hidden=true
// ## Import Jupyter Java Integration
//
// Make it easier to work with Java in Jupyter.
// + hidden=true
import io.github.spencerpark.ijava.IJava;
import io.github.spencerpark.jupyter.kernel.magic.common.Shell;
IJava.getKernelInstance().getMagics().registerMagics(Shell.class);
// + [markdown] hidden=true
// ## Start Aerospike
//
// Ensure Aerospike Database is running locally.
// + hidden=true
// %sh asd
// + [markdown] hidden=true
// ## Download the Aerospike Java Client
//
// Ask Maven to download and install the project object model (POM) of the Aerospike Java Client.
// + hidden=true
// %%loadFromPOM
<dependencies>
<dependency>
<groupId>com.aerospike</groupId>
<artifactId>aerospike-client</artifactId>
<version>5.0.0</version>
</dependency>
</dependencies>
// + [markdown] hidden=true
// ## Start the Aerospike Java Client and Connect
//
// Create an instance of the Aerospike Java Client, and connect to the demo cluster.
//
// The default cluster location for the Docker container is *localhost* port *3000*. If your cluster is not running on your local machine, modify *localhost* and *3000* to the values for your Aerospike cluster.
// + hidden=true
import com.aerospike.client.AerospikeClient;
AerospikeClient client = new AerospikeClient("localhost", 3000);
System.out.println("Initialized the client and connected to the cluster.");
// -
// # CREATING Maps in Aerospike
// ## Create and Print Map Data
//
// Create a string map representing fish metadata. Create an integer map containing timestamped fish observation locations.
// +
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
HashMap <String, String> mapFish = new HashMap <String, String>();
mapFish.put("name", "Annette");
mapFish.put("fruit", "Pineapple");
mapFish.put("color", "Aquamarine");
mapFish.put("tree", "Redwood");
System.out.println("Fish Map: " + mapFish);
HashMap <Integer, HashMap> mapObs = new HashMap <Integer, HashMap>();
HashMap <String, Integer> mapCoords0 = new HashMap <String, Integer>();
mapCoords0.put("lat", -85);
mapCoords0.put("long", -130);
HashMap <String, Integer> mapCoords1 = new HashMap <String, Integer>();
mapCoords1.put("lat", -25);
mapCoords1.put("long", -50);
HashMap <String, Integer> mapCoords2 = new HashMap <String, Integer>();
mapCoords2.put("lat", 35);
mapCoords2.put("long", 30);
mapObs.put(13456, mapCoords1);
mapObs.put(14567, mapCoords2);
mapObs.put(12345, mapCoords0);
System.out.println("Observations Map:" + mapObs);
// -
// ## Insert the Maps into Aerospike
//
// Insert one record in Aerospike with **Key** "koi", and **Bin Names** *mapfishbin* and *mapobsbin*.
//
// By default, Aerospike data is unsorted, however Aerospike preserves order by index when inserting data. Java HashMaps are sorted by mapkey by default.
// ### Create a Key Object
//
// A **Key** uniquely identifies a specific record in your Aerospike server or cluster. Each key must have a **Namespace** and optionally a **Set** name.
// * In Aerospike, a **Namespace** is like a relational database's tablespace.
// * A **Set** is like a relational database table.
// * A **Record** is like a row in a relational database table.
//
// The namespace *test* is configured on your Aerospike server or cluster.
//
// For additional information on the Aerospike Data Model, go [here](https://www.aerospike.com/docs/architecture/data-model.html).
// +
import com.aerospike.client.Key;
String mapSet = "mapset1";
String mapNamespace = "test";
String theKey = "koi";
Key key = new Key(mapNamespace, mapSet, theKey);
System.out.println("Key created." );
// -
// ### Create a Bin Object for Each Map
//
// A **Bin** is a data field in an Aerospike record.
// +
import com.aerospike.client.Bin;
String mapFishBinName = "mapfishbin";
String mapObsBinName = "mapobsbin";
Bin bin1 = new Bin(mapFishBinName, mapFish);
Bin bin2 = new Bin(mapObsBinName, mapObs);
System.out.println( "Created " + bin1 + " and " + bin2 + ".");
// -
// ### Create a Policy Object for Record Insertion
//
// A **Policy** tells Aerospike the intent of a database operation.
//
// For more information on policies, go [here](https://www.aerospike.com/docs/guide/policies.html).
// +
import com.aerospike.client.policy.ClientPolicy;
ClientPolicy clientPolicy = new ClientPolicy();
System.out.println("Created a client policy.");
// -
// ### Put the Map Data into Aerospike
client.put(clientPolicy.writePolicyDefault, key, bin1, bin2);
System.out.println("Key: " + theKey + "\n" + mapFishBinName + ": " + mapFish + "\n" +
mapObsBinName + ": " + mapObs );
// # READING Maps and Map Elements From the Server
//
// Now that the maps are in Aerospike, the client can return full or partial maps from **bin** contents. No data is modified by these ops.
// ## Get the Record
//
// A record can be retrieved using the **key**, **namespace**, and **set** name.
//
// In the output:
// * **gen** is the generation number, the number of record writes.
// * **exp** is the expiration counter for the record.
//
// For more information on [both generation number and expiration](https://www.aerospike.com/docs/guide/FAQ.html), see the [Aerospike FAQ](https://www.aerospike.com/docs/guide/FAQ.html).
// +
import com.aerospike.client.Record;
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
System.out.println(record);
// -
// ## Get String Elements by Mapkey, Rank, and Value
//
// Aerospike provides **MapOperations** to read string mapkeys and values from the database.
//
// The mapFishBin is a map containing string mapkey/value pairs associated with the fish, "Koi".
//
// For more information on map operations, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapOperation.html).
// ### Get String by Mapkey
//
// Aerospike API can be used to look up a value by mapkey. The client returns the specified value as the contents of the bin.
//
// For the list of return type options, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapReturnType.html).
// +
import com.aerospike.client.Operation;
import com.aerospike.client.Value;
import com.aerospike.client.cdt.MapOperation;
import com.aerospike.client.cdt.MapReturnType;
String mapKeyToFind = "color";
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record colorString = client.operate(null, key,
MapOperation.getByKey(mapFishBinName, Value.get(mapKeyToFind), MapReturnType.VALUE)
);
System.out.println("The string map: " + record.getValue(mapFishBinName));
System.out.println("The " + mapKeyToFind + " in the string map is: " + colorString.getValue(mapFishBinName));
// -
// ### Get Highest Rank String
//
// Aerospike's API contains operations to look up a map element by rank.
//
// For information on list ranking, go [here](https://en.wikipedia.org/wiki/List_ranking).
// +
Integer highestRank = -1;
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record highestRankString = client.operate(null, key,
MapOperation.getByRank(mapFishBinName, highestRank, MapReturnType.VALUE)
);
System.out.println("The string map: " + record.getValue(mapFishBinName));
System.out.println("The highest rank string is: " + highestRankString.getValue(mapFishBinName));
// -
// ### Get Mapkey By String Value
//
// Aerospike provides operations to look up an element by value and return the mapkey.
// +
String valueToFind = "Pineapple";
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record foundMapKey = client.operate(null, key,
MapOperation.getByValue(mapFishBinName, Value.get(valueToFind), MapReturnType.KEY)
);
System.out.println("The string map: " + record.getValue(mapFishBinName));
System.out.println("The mapkey associated with " + valueToFind + " is: " + foundMapKey.getValue(mapFishBinName));
// -
// ## Get Map Size and Integer Elements by Index and Key Range
//
// Aerospike operations can read integers associated with fish observations.
//
// The mapobsbin is a list of Latitude/Longitude pairs stored by the time of fish observation in seconds from the start of the experiment. The number of seconds, latitude, and longitude are all integers.
// ### Get the Number of Observations in the Map
//
// Aerospike API's size operation returns a count of the mapkeys in a map.
// +
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record sizeString = client.operate(null, key,
MapOperation.size(mapObsBinName)
);
System.out.println("The Observation Map: " + record.getValue(mapObsBinName));
System.out.println("The number of Observations in the Map: "
+ sizeString.getValue(mapObsBinName));
// -
// ### Get The First Observation from the Map
//
// Aerospike API operations can look up a value by index. In Aerospike, the index operation can get one or more map elements by key order. Aerospike allows indexing forward from the beginning of the map using zero-based numbering. Negative numbers index backwards from the end of a map.
//
//
// In this example, the first element by index represents the first time the fish was observed. Because the key 12345 is before 13456 and 14567, the first element by index is 12345.
//
//
// For examples of indexes, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapOperation.html).
// +
Integer firstIdx = 0;
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record firstObservation = client.operate(null, key,
MapOperation.getByIndex(mapObsBinName, firstIdx, MapReturnType.KEY_VALUE)
);
System.out.println("The Observation Map: " + record.getValue(mapObsBinName));
System.out.println("The First Observation: " + firstObservation.getValue(mapObsBinName));
// -
// ### Get All Locations Observed Between 13,000 and 15,000 seconds.
//
// Aerospike delivers values by mapkey range. Get the latitude and longitude pairs for all observations between 13,000 and 15,000 seconds.
// +
Integer lowerBound = 13000;
Integer upperBound = 15000;
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record rangeObservations = client.operate(null, key,
MapOperation.getByKeyRange(mapObsBinName, Value.get(lowerBound), Value.get(upperBound),
MapReturnType.KEY_VALUE)
);
System.out.println("The Observation Map: " + record.getValue(mapObsBinName));
System.out.println("The Observations between 13000 and 15000 seconds: "
+ rangeObservations.getValue(mapObsBinName));
// -
// # UPDATING Maps on the Aerospike Server
//
// Aerospike's **MapOperations** can also modify data in the Aerospike Database.
// ## Update the Fish Bin in Aerospike
//
// The Fish Bin contains metadata about the fish.
// ### Create a MapPolicy Java Object for the Fish Bin
//
// When modifying maps, Aerospike requires a **MapPolicy** that governs write protection and order. The default MapPolicy works for Fish Bin.
//
//
// For more information on mappolicy, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapPolicy.html).
// +
import com.aerospike.client.cdt.MapPolicy;
MapPolicy mapFishBinPolicy = new MapPolicy();
System.out.println("Created default MapPolicy for " + mapFishBinName + ".")
// -
// ### Change the Tree to Larch
// When new data is put into a map, Aerospike returns the size of the resulting map.
// +
String treeMapkeyName = "tree";
String newTree = "Larch";
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record sizeOfMapWithNewTree = client.operate(null, key,
MapOperation.put(mapFishBinPolicy, mapFishBinName, Value.get(treeMapkeyName),
Value.get(newTree))
);
Record mapWithNewTree = client.get(null, key);
System.out.println("Before: " + record.getValue(mapFishBinName));
System.out.println("The size after the operation: "
+ sizeOfMapWithNewTree.getValue(mapFishBinName));
System.out.println(" After: " + mapWithNewTree.getValue(mapFishBinName));
// -
// ### Remove the Fruit
//
// When removing a mapkey:value pair, Aerospike client returns the removed data.
// +
String fruitMapkeyName = "fruit";
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record valOfRemovedFruit = client.operate(null, key,
MapOperation.removeByKey(mapFishBinName, Value.get(fruitMapkeyName),
MapReturnType.KEY_VALUE)
);
Record mapWithoutFruit = client.get(null, key);
System.out.println("Before: " + record.getValue(mapFishBinName));
System.out.println("The removed mapkey/value pair: "
+ valOfRemovedFruit.getValue(mapFishBinName));
System.out.println("After removing the " + fruitMapkeyName + ": "
+ mapWithoutFruit.getValue(mapFishBinName));
// -
// ### Add Bait
//
// To be sure that other scientists can catch the fish, add the fish's preferrred bait to the record.
// +
String mapkeyForBait = "bait";
String valueForBait = "<NAME>";
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record sizeOfRecordWithBait = client.operate(null, key,
MapOperation.put(mapFishBinPolicy, mapFishBinName, Value.get(mapkeyForBait),
Value.get(valueForBait))
);
Record recordWithBait = client.get(null, key);
System.out.println("Before: " + record.getValue(mapFishBinName));
System.out.println("After adding Bait: " + recordWithBait.getValue(mapFishBinName));
// -
// ### Put an Observation Counter in the Map
//
// The experiment continued past the original end date. The new work requires keeping track of the total number of observations.
// +
String mapkeyObsCount = "Count";
Integer numObservations = 3;
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record sizeOfRecordWithObsCounter =
client.operate(null, key, MapOperation.put(mapFishBinPolicy, mapFishBinName,
Value.get(mapkeyObsCount),
Value.get(numObservations))
);
Record recordWithObsCount = client.get(null, key);
System.out.println("Before: " + record.getValue(mapFishBinName));
System.out.println("After Adding the Counter: " + recordWithObsCount.getValue(mapFishBinName));
// -
// ## Update the Observation Map
// Aerospike client can update map elements, such as integers and sub-maps.
//
// The experiment continued past the original end date. The new work requires the regular addition of new observations and keeping track of the total number of observations.
// ### Create a MapPolicy Object for the Observations Bin
//
// In this example, the Observations Map should be maintained as mapkey-sorted in Aerospike, but are put unordered into the database by default. When storing any map on SSD hardware, Key Ordered Maps hold a significant performance advantage over Unordered Maps, at a cost of 4 bytes of storage for metadata.
//
// The MapPolicy contains two types of configurations, **MapOrder** and **MapWriteFlags**. The maporder determines the sort order of the map. The mapwriteflags determine write behaviors, such as if the operation should fail when a mapkey/value already exists.
//
// For more information on maporder, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapOrder.html).
//
// For more information on mapwriteflags, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapWriteFlags.html).
// +
import com.aerospike.client.cdt.MapOrder;
import com.aerospike.client.cdt.MapWriteFlags;
Record recordObsUnordered = client.get(null, key);
MapPolicy mapObsBinPolicy = new MapPolicy(MapOrder.KEY_ORDERED, MapWriteFlags.DEFAULT);
Record changeOrder =
client.operate(null, key, MapOperation.setMapPolicy(mapObsBinPolicy, mapObsBinName));
Record recordObsOrdered = client.get(null, key);
System.out.println("Before Sorting: " + recordObsUnordered.getValue(mapObsBinName));
System.out.println("Applied mapkey-ordered MapPolicy for " + mapObsBinName + ".");
System.out.println("After Sorting: " + recordObsOrdered.getValue(mapObsBinName));
// -
// ### Add a new Observation
// +
int newObsTimestamp = 15678;
int newObsLat = 80;
int newObsLong = 110;
HashMap <Integer, HashMap> mapNewObs = new HashMap <Integer, HashMap>();
HashMap <String, Integer> mapNewCoords = new HashMap <String, Integer>();
mapNewCoords.put("lat", newObsLat);
mapNewCoords.put("long", newObsLong);
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record sizeOfNewObs = client.operate(null, key,
MapOperation.put(mapObsBinPolicy, mapObsBinName, Value.get(newObsTimestamp), Value.get(mapNewCoords))
);
Record recordWithNewObs = client.get(null, key);
System.out.println("Before: " + record.getValue(mapObsBinName));
System.out.println("The Size After Adding the Observation: " + sizeOfNewObs.getValue(mapObsBinName));
System.out.println("After Adding the Observation: " + recordWithNewObs.getValue(mapObsBinName));
// -
// ### Remove the Oldest Observation by Index
//
// This study only maintains the three most recent observations.
// +
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record oldObs = client.operate(null, key,
MapOperation.removeByIndex(mapObsBinName, firstIdx, MapReturnType.KEY_VALUE)
);
Record updatedRecord = client.get(null, key);
System.out.println("Before: " + record.getValue(mapObsBinName));
System.out.println("The Removed Observation: " + oldObs.getValue(mapObsBinName));
System.out.println("After Observation Removal: " + updatedRecord.getValue(mapObsBinName));
// -
// ### Increment the Observation Counter
//
// When incrementing a map value, Aerospike returns the new value.
// +
int incNum = 1;
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record obsCount = client.operate(null, key,
MapOperation.increment(mapFishBinPolicy, mapFishBinName, Value.get(mapkeyObsCount), Value.get(incNum))
);
Record updatedRecord = client.get(null, key);
System.out.println("Before: " + record.getValue(mapFishBinName));
System.out.println("The New Count: " + obsCount.getValue(mapFishBinName));
System.out.println("After Increment: " + updatedRecord.getValue(mapFishBinName));
// + [markdown] heading_collapsed=true
// # Notebook Cleanup
// + [markdown] hidden=true
// ## Truncate the Set
// Truncate the set from the Aerospike Database.
// + hidden=true
import com.aerospike.client.policy.InfoPolicy;
InfoPolicy infoPolicy = new InfoPolicy();
client.truncate(infoPolicy, mapNamespace, mapSet, null);
System.out.println("Set Truncated.");
// + [markdown] hidden=true
// ## Close the Client connections to Aerospike
// + hidden=true
client.close();
System.out.println("Server connection(s) closed.");
// + [markdown] heading_collapsed=true
// # Code Summary
// + [markdown] hidden=true
// ## Overview
// Here is a collection of all of the non-Jupyter code from this tutorial.
// 1. Import Java Libraries.
// 2. Import Aerospike Client Libraries.
// 3. Start the Aerospike Client.
// 4. Create Test Data.
// 5. Put Record into Aerospike.
// 6. Get Data from Aerospike.
// 1. Get the Record.
// 2. Get String by MapKey and Highest Rank.
// 3. Get MapKey by String.
// 3. Get the Number of Observations and 1st Observation By Index.
// 4. Get Observations by MapKey Range.
// 7. Update the Record in Aerospike
// 1. Change the Tree to a Larch
// 2. Remove the Fruit and add Bait.
// 3. Sort the Observation Map.
// 4. Add an Observation Counter.
// 5. Add a New Observation.
// 6. Remove the Oldest Operation.
// 7. Increment the Observation Counter.
// 8. Truncate the Set.
// 9. Close the Client Connections.
// + hidden=true
// Import Java Libraries.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
// Import Aerospike Client Libraries.
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Bin;
import com.aerospike.client.policy.ClientPolicy;
import com.aerospike.client.Record;
import com.aerospike.client.Operation;
import com.aerospike.client.Value;
import com.aerospike.client.cdt.MapOperation;
import com.aerospike.client.cdt.MapReturnType;
import com.aerospike.client.cdt.MapPolicy;
import com.aerospike.client.cdt.MapOrder;
import com.aerospike.client.cdt.MapWriteFlags;
import com.aerospike.client.policy.InfoPolicy;
// Start the Aerospike Client.
AerospikeClient client = new AerospikeClient("localhost", 3000);
System.out.println("Initialized the client and connected to the cluster.");
// Create Test Data.
HashMap <String, String> mapFish = new HashMap <String, String>();
mapFish.put("name", "Annette");
mapFish.put("fruit", "Pineapple");
mapFish.put("color", "Aquamarine");
mapFish.put("tree", "Redwood");
System.out.println("Created Fish Map: " + mapFish);
HashMap <Integer, HashMap> mapObs = new HashMap <Integer, HashMap>();
HashMap <String, Integer> mapCoords0 = new HashMap <String, Integer>();
mapCoords0.put("lat", -85);
mapCoords0.put("long", -130);
HashMap <String, Integer> mapCoords1 = new HashMap <String, Integer>();
mapCoords1.put("lat", -25);
mapCoords1.put("long", -50);
HashMap <String, Integer> mapCoords2 = new HashMap <String, Integer>();
mapCoords2.put("lat", 35);
mapCoords2.put("long", 30);
mapObs.put(13456, mapCoords1);
mapObs.put(14567, mapCoords2);
mapObs.put(12345, mapCoords0);
System.out.println("Created Observations Map: " + mapObs);
// Put Record into Aerospike.
String mapSet = "mapset1";
String mapNamespace = "test";
String theKey = "koi";
String mapFishBin = "mapfishbin";
String mapObsBin = "mapobsbin";
ClientPolicy clientPolicy = new ClientPolicy();
InfoPolicy infoPolicy = new InfoPolicy();
Key key = new Key(mapNamespace, mapSet, theKey);
Bin bin1 = new Bin(mapFishBin, mapFish);
Bin bin2 = new Bin(mapObsBin, mapObs);
client.put(clientPolicy.writePolicyDefault, key, bin1, bin2);
System.out.println("Inserted Key: " + theKey + "\n " + mapFishBin + ": " + mapFish + "\n " +
mapObsBin + ": " + mapObs );
System.out.println();
// Get Data from Aerospike.
// 1. Get the Record.
// 2. Get String by MapKey and Highest Rank.
// 3. Get MapKey by String.
// 3. Get the Number of Observations and 1st Observation By Index.
// 4. Get Observations by MapKey Range.
String mapKeyToFind = "color";
Integer highestRank = -1;
String valueToFind = "Pineapple";
Integer firstIdx = 0;
Integer lowerBound = 13000;
Integer upperBound = 15000;
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record results = client.operate(null, key,
MapOperation.getByKey(mapFishBin, Value.get(mapKeyToFind), MapReturnType.VALUE),
MapOperation.getByRank(mapFishBin, highestRank, MapReturnType.VALUE),
MapOperation.getByValue(mapFishBin, Value.get(valueToFind), MapReturnType.KEY),
MapOperation.size(mapObsBin),
MapOperation.getByIndex(mapObsBin, firstIdx, MapReturnType.KEY_VALUE),
MapOperation.getByKeyRange(mapObsBin, Value.get(lowerBound), Value.get(upperBound), MapReturnType.KEY_VALUE)
);
List<?> resultsFish = results.getList(mapFishBin);
List<?> resultsObs = results.getList(mapObsBin);
System.out.println("Read the Full Record From Aerospike:" + record);
System.out.println("The " + mapKeyToFind + " in the string map is: " + resultsFish.get(0));
System.out.println("The highest rank string is: " + resultsFish.get(1));
System.out.println("The mapkey associated with " + valueToFind + " is: " + resultsFish.get(2));
System.out.println("The number of Observations in the Map: " + resultsObs.get(0));
System.out.println("The First Observation: " + resultsObs.get(1));
System.out.println("The Observations between 13000 and 15000 seconds: " + resultsObs.get(2));
System.out.println();
// 7. Update the Record in Aerospike
// 1. Change the Tree to a Larch
// 2. Remove the Fruit and add Bait.
// 3. Add an Observation Counter.
// 4. Sort the Observation Map.
// 5. Add a New Observation.
// 6. Remove the Oldest Operation.
// 7. Increment the Observation Counter.
MapPolicy mapFishBinPolicy = new MapPolicy();
MapPolicy mapObsBinPolicy = new MapPolicy(MapOrder.KEY_ORDERED, MapWriteFlags.DEFAULT);
String treeMapkeyName = "tree";
String newTree = "Larch";
String fruitMapkeyName = "fruit";
String mapkeyForBait = "bait";
String valueForBait = "Mosquito Larva";
String mapkeyObsCount = "Count";
Integer numObservations = 3;
int newObsTimestamp = 15678;
int newObsLat = 80;
int newObsLong = 110;
int incNum = 1;
HashMap <Integer, HashMap> mapNewObs = new HashMap <Integer, HashMap>();
HashMap <String, Integer> mapNewCoords = new HashMap <String, Integer>();
mapNewCoords.put("lat", newObsLat);
mapNewCoords.put("long", newObsLong);
Key key = new Key(mapNamespace, mapSet, theKey);
Record record = client.get(null, key);
Record updatingRecord = client.operate(null, key,
MapOperation.put(mapFishBinPolicy, mapFishBin, Value.get(treeMapkeyName),
Value.get(newTree)),
MapOperation.removeByKey(mapFishBin, Value.get(fruitMapkeyName),
MapReturnType.KEY_VALUE),
MapOperation.put(mapFishBinPolicy, mapFishBin, Value.get(mapkeyForBait),
Value.get(valueForBait)),
MapOperation.put(mapFishBinPolicy, mapFishBin, Value.get(mapkeyObsCount),
Value.get(numObservations)),
MapOperation.setMapPolicy(mapObsBinPolicy, mapObsBin),
MapOperation.put(mapObsBinPolicy, mapObsBin, Value.get(newObsTimestamp),
Value.get(mapNewCoords)),
MapOperation.removeByIndex(mapObsBin, firstIdx, MapReturnType.KEY_VALUE),
MapOperation.increment(mapFishBinPolicy, mapFishBin, Value.get(mapkeyObsCount),
Value.get(incNum))
);
Record finalRecord = client.get(null, key);
List<?> updateFish = updatingRecord.getList(mapFishBin);
List<?> updateObs = updatingRecord.getList(mapObsBin);
System.out.println("Changed " + treeMapkeyName + " to " + newTree + "; there are now " + updateFish.get(0) + " map items in " + mapFishBin);
System.out.println("Removed item " + updateFish.get(1));
System.out.println("Added item [" + mapkeyForBait + "=" + valueForBait + "]; there are now " + updateFish.get(2) + " map items in " + mapFishBin);
System.out.println("Added Observation Counter; there are now " + updateFish.get(3) + " map items in " + mapFishBin);
System.out.println("Sorted " + mapObsBin);
System.out.println("Added New Observation {" + newObsTimestamp + "=" + mapNewCoords + "}, there are now " + updateObs.get(1) + " map items in " + mapObsBin);
System.out.println("Removed Oldest Observation: " + updateObs.get(2));
System.out.println("Incremented Observation Counter to reflect " + updateFish.get(4) + "th observation");
System.out.println();
System.out.println("After Record Edits: " + finalRecord);
// Truncate the Set.
client.truncate(infoPolicy, mapNamespace, mapSet, null);
System.out.println("Set Truncated.");
// Close the Client Connections.
client.close();
// -
// # Takeaway – Aerospike Does Maps
//
// Aerospike and its Java Client are up to the task of working with your map data. Its API provides rich operations to read and update list data using index, mapkey, value, and rank. Not modeled in this tutorial, Aerospike map operation also supports nested lists and maps, by assigning **CTX** or contexts to operations.
//
// For more information on contexts, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/CTX.html). For examples of contexts, go [here](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapOperation.html).
// # What's Next?
// ## Next Steps
//
// Have questions? Don't hesitate to reach out if you have additional questions about working with lists at https://discuss.aerospike.com/.
//
// Want to check out other Java notebooks?
// 1. [Hello, World](hello_world.ipynb)
// 2. [Reading and Updating Lists](java-working_with_lists.ipynb)
// 3. [Modeling Using Lists](java-modeling__using_lists.ipynb)
// 4. [Aerospike Query and UDF](query_udf.ipynb)
//
// Are you running this from Binder? [Download the Aerospike Notebook Repo](https://github.com/aerospike-examples/interactive-notebooks) and work with Aerospike Database and Jupyter locally using a Docker container.
// ## Additional Resources
//
// * Want to get started with Java? [Download](https://www.aerospike.com/download/client/) or [install](https://github.com/aerospike/aerospike-client-java) the Aerospike Java Client.
// * What other ways can we work with Maps? Take a look at [Aerospike's Map Operations](https://www.aerospike.com/apidocs/java/com/aerospike/client/cdt/MapOperation.html).
// * What are Namespaces, Sets, and Bins? Check out the [Aerospike Data Model](https://www.aerospike.com/docs/architecture/data-model.html).
// * How robust is the Aerospike Database? Browses the [Aerospike Database Architecture](https://www.aerospike.com/docs/architecture/index.html).
| notebooks/java/java-working_with_maps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "slide"}
# # Timer
#
# The code in this notebook helps with measuring time.
# + [markdown] button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "subslide"}
# **Prerequisites**
#
# * This notebook needs some understanding on advanced concepts in Python, notably
# * classes
# * the Python `with` statement
# * measuring time
# + [markdown] button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "slide"}
# ## Measuring Time
#
# The class `Timer` allows to measure the elapsed time during some code execution. A typical usage looks as follows:
#
# ```Python
# from Timer import Timer
#
# with Timer() as t:
# function_that_is_supposed_to_be_timed()
#
# print(t.elapsed_time())
# ```
#
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "skip"}
import fuzzingbook_utils
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "skip"}
import time
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "subslide"}
def clock():
try:
return time.perf_counter() # Python 3
except:
return time.clock() # Python 2
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "subslide"}
class Timer(object):
# Begin of `with` block
def __enter__(self):
self.start_time = clock()
self.end_time = None
return self
# End of `with` block
def __exit__(self, exc_type, exc_value, tb):
self.end_time = clock()
def elapsed_time(self):
"""Return elapsed time in seconds"""
if self.end_time is None:
# still running
return clock() - self.start_time
else:
return self.end_time - self.start_time
# + [markdown] button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "subslide"}
# Here's an example:
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "fragment"}
def some_long_running_function():
i = 1000000
while i > 0:
i -= 1
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "fragment"}
print("Stopping total time:")
with Timer() as t:
some_long_running_function()
print(t.elapsed_time())
# + button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "subslide"}
print("Stopping time in between:")
with Timer() as t:
for i in range(10):
print(t.elapsed_time())
# + [markdown] button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "subslide"}
# That's it, folks – enjoy!
| docs/beta/notebooks/Timer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.2 64-bit
# language: python
# name: python3
# ---
# +
import plotly.graph_objects as go
import statsmodels.api as sm
import pandas as pd
import numpy as np
import datetime
# data
np.random.seed(123)
numdays=20
X = (np.random.randint(low=-20, high=20, size=numdays).cumsum()+100).tolist()
Y = (np.random.randint(low=-20, high=20, size=numdays).cumsum()+100).tolist()
df = pd.DataFrame({'X': X, 'Y':Y})
# regression
df['bestfit'] = sm.OLS(df['Y'],sm.add_constant(df['X'])).fit().fittedvalues
# plotly figure setup
fig=go.Figure()
fig.add_trace(go.Scatter(name='X vs Y', x=df['X'], y=df['Y'].values, mode='markers'))
fig.add_trace(go.Scatter(name='line of best fit', x=X, y=df['bestfit'], mode='lines'))
# plotly figure layout
fig.update_layout(xaxis_title = 'X', yaxis_title = 'Y')
# retrieve x-values from one of the series
xVals = fig.data[0]['x']
errors = {} # container for prediction errors
# organize data for errors in a dict
for d in fig.data:
errors[d['mode']]=d['y']
shapes = [] # container for shapes
# make a line shape for each error == distance between each marker and line points
for i, x in enumerate(xVals):
shapes.append(go.layout.Shape(type="line",
x0=x,
y0=errors['markers'][i],
x1=x,
y1=errors['lines'][i],
line=dict(
#color=np.random.choice(colors,1)[0],
color = 'black',
width=1),
opacity=0.5,
layer="above")
)
# include shapes in layout
fig.update_layout(shapes=shapes)
fig.show()
# +
nu = [1, 2, 3]
nu.remove(2)
nu
| apagar.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf23
# language: python
# name: tf23
# ---
# # Train Neural Networks
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
import PIL
import pathlib
from sklearn.utils import class_weight
import pickle
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.applications.vgg19 import VGG19
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.applications.inception_resnet_v2 import InceptionResNetV2
from tensorflow.keras.applications.densenet import DenseNet121, DenseNet169, DenseNet201
from tensorflow.keras.optimizers import SGD, Adam, RMSprop
from tensorflow.keras.layers import Flatten, Dense, Dropout, GlobalAveragePooling2D, GlobalMaxPooling2D
print(tf.__version__)
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
# # %load_ext tensorboard
# -
# ## Data
# ### Load data
# **data is structured as:**
#
# ../data/
# dataset/
# train/
# Aloe_Vera/
# Aloe_Vera_1.jpeg
# Aloe_Vera_2.jpeg
# ...
# ...
# Umbrella_Tree/
# Umbrella_Tree_1.jpeg
# Umbrella_Tree_2.jpeg
# ...
# test/
# Aloe_Vera/
# Aloe_Vera_1.jpeg
# Aloe_Vera_2.jpeg
# ...
# ...
# Umbrella_Tree/
# Umbrella_Tree_1.jpeg
# Umbrella_Tree_2.jpeg
# ...
# val/
# Aloe_Vera/
# Aloe_Vera_1.jpeg
# Aloe_Vera_2.jpeg
# ...
# ...
# Umbrella_Tree/
# Umbrella_Tree_1.jpeg
# Umbrella_Tree_2.jpeg
# ...
# House_Plants.csv
#
# **Define dataset and parameters**
# +
data_path = '../data/gsp15_ttv/'
class_names = ['Aloe_Vera', 'Asparagus_Fern', 'Baby_Rubber_Plant', 'Boston_Fern', 'Easter_Lily',
'Fiddle_Leaf_Fig', 'Jade_Plant', 'Monstera','Parlor_Palm', 'Peace_Lily', 'Pothos',
'Rubber_Plant', 'Snake_Plant', 'Spider_Plant', 'Umbrella_Tree']
img_width, img_height = 224, 224
batch_size = 64
# -
# ### Load data
# +
train_data_dir = f'{data_path}/train'
validation_data_dir = f'{data_path}/test'
no_classes = len(class_names)
# import training with augmentation at each epoch
print('Training:')
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.1,
zoom_range=0.2,
rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
horizontal_flip=True)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
classes=class_names,
class_mode='categorical',
seed = 2020,
shuffle = True)
# import validation
print('\nValidation:')
val_datagen = ImageDataGenerator(rescale=1. / 255)
validation_generator = val_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
classes=class_names,
class_mode='categorical',
seed = 2020,
shuffle = True)
# -
# ## Plot data
# +
plt.figure(figsize=(10, 10))
images, labels = next(train_generator)
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i])
plt.title(class_names[np.argmax(labels[i])])
plt.axis("off")
# -
# ## Define class weights for imbalanced data
# 
# Using class_weights changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like optimizers. SGD, may fail. The optimizer used here, optimizers. Adam, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
def create_weights_dict(train_generator):
'''Calculates the number of samples per class and returns a dictionary for passing to .fit()'''
n_train = len(train_generator.filenames)
number_of_generator_calls = np.ceil(n_train / (1.0 * batch_size))
# 1.0 above is to skip integer division
label_list = []
for i in range(0,int(number_of_generator_calls)):
label_list.extend(np.array(train_generator[i][1]))
label_list = list(map(lambda x: np.argmax(x), label_list))
class_weight_arr = class_weight.compute_class_weight(class_weight='balanced',
classes = np.unique(label_list),
y = label_list)
class_weight_dict = dict(zip(np.arange(len(label_list)), class_weight_arr))
return class_weight_dict
class_weight_dict = create_weights_dict(train_generator)
class_weight_dict
# +
def write_weights(outfile, class_weight_dict):
try:
with open(outfile, 'wb') as fp:
pickle.dump(class_weight_dict, fp, protocol=pickle.HIGHEST_PROTOCOL)
except:
print("Unable to write to file")
def load_weights(weights_file):
try:
with open(weights_file, 'rb') as fp:
weights = pickle.load(fp)
return weights
except:
print("Unable to load weights file")
# -
write_weights('../data/gsp15_class_weight_dict.p', class_weight_dict)
class_weight_dict = load_weights('../data/gsp15_class_weight_dict.p')
class_weight_dict
# ## Build model
# +
def get_prelim():
model = Sequential([
layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical",
input_shape=(img_height, img_width, 3)),
layers.experimental.preprocessing.RandomRotation(0.2),
layers.experimental.preprocessing.RandomZoom(0.3),
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(no_classes),
layers.Activation('softmax')
])
return model
def get_double_conv():
model = keras.models.Sequential([
layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical",
input_shape=(img_height, img_width, 3)),
layers.experimental.preprocessing.RandomRotation(0.2),
layers.experimental.preprocessing.RandomZoom(0.3),
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(64, 7, activation='relu', padding='same'),
layers.MaxPooling2D(2),
layers.Conv2D(128, 3, activation='relu', padding='same'),
layers.Conv2D(128, 3, activation='relu', padding='same'),
layers.MaxPooling2D(2),
layers.Conv2D(256, 3, activation='relu', padding='same'),
layers.Conv2D(256, 3, activation='relu', padding='same'),
layers.MaxPooling2D(2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(no_classes, activation='softmax')
])
return model
def get_VGG16tl():
resize_layer = layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3))
# create the base model from the pre-trained model
pretrained_model = VGG16(input_shape=(img_height, img_width, 3),
include_top=False,
weights='imagenet')
# freeze the convolutional base
pretrained_model.trainable = False
model = tf.keras.Sequential([resize_layer,
pretrained_model,
Flatten(),
Dense(256, activation='relu'),
Dropout(0.5),
Dense(256, activation='relu'),
Dense(no_classes, activation='softmax')])
return model
def get_InceptionV3tl():
resize_layer = layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3))
# create the base model from the pre-trained model
pretrained_model = InceptionV3(input_shape=(img_height, img_width, 3),
include_top=False,
weights='imagenet')
# freeze the convolutional base
pretrained_model.trainable = False
model = tf.keras.Sequential([resize_layer,
pretrained_model,
Flatten(),
Dense(256, activation='relu'),
Dropout(0.5),
Dense(256, activation='relu'),
Dense(no_classes, activation='softmax')])
return model
def get_InceptionV3tl_1024():
resize_layer = layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3))
# create the base model from the pre-trained model
pretrained_model = InceptionV3(input_shape=(img_height, img_width, 3),
include_top=False,
weights='imagenet')
# freeze the convolutional base
pretrained_model.trainable = False
model = tf.keras.Sequential([resize_layer,
pretrained_model,
Flatten(),
Dense(1024, activation='relu'),
Dropout(0.2),
Dense(no_classes, activation='softmax')])
return model
def get_ResNet50tl():
resize_layer = layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3))
# create the base model from the pre-trained model
pretrained_model = ResNet50(input_shape=(img_height, img_width, 3),
include_top=False,
weights='imagenet')
# freeze the convolutional base
pretrained_model.trainable = False
model = tf.keras.Sequential([resize_layer,
pretrained_model,
Flatten(),
Dense(1024, activation='relu'),
Dropout(0.5),
Dense(no_classes, activation='softmax')])
return model
def get_InceptionResNetV2tl():
resize_layer = layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3))
# create the base model from the pre-trained model VGG16
pretrained_model = InceptionResNetV2(input_shape=(img_height, img_width, 3),
include_top=False,
weights='imagenet')
# freeze the convolutional base
pretrained_model.trainable = False
model = tf.keras.Sequential([resize_layer,
pretrained_model,
Flatten(),
# Dense(256, activation='relu'),
# Dropout(0.5),
# Dense(256, activation='relu'),
Dense(1024, activation='relu'),
Dropout(0.5),
Dense(no_classes, activation='softmax')])
return model
def generate_model(model_name):
if model_name == 'prelim':
model = get_prelim()
data_augmentation
elif model_name == 'double_conv':
model = get_double_conv()
elif model_name == 'VGG16':
model = get_VGG16tl()
elif model_name == 'ResNet50':
model = get_ResNet50tl()
elif model_name == 'InceptionV3':
model = get_InceptionV3tl()
elif model_name == 'InceptionV3_1024':
model = get_InceptionV3tl_1024()
elif model_name == 'InceptionResNetV2':
model = get_InceptionResNetV2tl()
else:
print('please select a valid model')
return model
# +
model = generate_model('VGG16')
model.compile(loss = 'categorical_crossentropy',
optimizer = 'adam',
metrics = ['accuracy', 'top_k_categorical_accuracy'])
model.summary()
# -
# ## Train model
# +
initial_epochs=20
history = model.fit(train_generator,
steps_per_epoch=len(train_generator.filenames)//batch_size,
epochs=initial_epochs,
validation_data=validation_generator,
class_weight=class_weight_dict)
# +
# unfreeze the layers
model.trainable = True
model.compile(loss = 'categorical_crossentropy',
optimizer = keras.optimizers.Adam(1e-5),
metrics = ['accuracy', 'top_k_categorical_accuracy'])
model.summary()
# +
fine_tune_epochs = 100
total_epochs = initial_epochs + fine_tune_epochs
history_fine = model.fit(train_generator,
steps_per_epoch=len(train_generator.filenames)//batch_size,
epochs=total_epochs,
validation_data=validation_generator,
class_weight=class_weight_dict)
# -
# ## Save model/metrics and plot
model_name = 'VGG16_20_100e_GSP1.0'
model.save_weights(f'../models/{model_name}_weights.h5')
model.save(f'../models/{model_name}_model.h5')
# +
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
t5acc = history.history['top_k_categorical_accuracy']
t5val_acc = history.history['val_top_k_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
t5acc += history_fine.history['top_k_categorical_accuracy']
t5val_acc += history_fine.history['val_top_k_categorical_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(10,8), sharex=True)
x_plot = np.arange(1, total_epochs+1)
ax[0].plot(x_plot, acc[:total_epochs], '+-', label='training')
ax[0].plot(x_plot, val_acc[:total_epochs], '+-', label='validation')
ax[0].plot(x_plot, t5acc[:total_epochs], '+-', label='top 5 training')
ax[0].plot(x_plot, t5val_acc[:total_epochs], '+-', label='top 5 validation')
ax[0].legend()
ax[0].set_ylabel('accuracy')
# ax[0].set_ylim(0.5, 1)
ax[0].grid(ls='--', c='C7')
ax[0].set_title('accuracy')
ax[0].axvline(initial_epochs, c='C7', ls='--')
ax[1].plot(x_plot, loss[:total_epochs], '+-', label='training')
ax[1].plot(x_plot, val_loss[:total_epochs], '+-', label='validation')
ax[1].legend()
ax[1].set_ylabel('cross entropy')
# ax[1].set_ylim(0, 1)
ax[1].grid(ls='--', c='C7')
ax[1].set_title('loss')
ax[1].set_xlabel('epoch')
ax[1].axvline(initial_epochs, c='C7', ls='--')
plt.show()
plt.savefig(f'../models/{model_name}_graph.svg')
plt.savefig(f'../models/{model_name}_graph.png', dpi=400)
# +
graph_vals = pd.DataFrame({'acc':acc[:total_epochs],
'val_acc':val_acc[:total_epochs],
'loss':loss[:total_epochs],
'val_loss':val_loss[:total_epochs],
't5':t5acc[:total_epochs],
'val_t5':t5val_acc[:total_epochs]})
graph_vals.to_csv(f'../models/{model_name}_metrics.csv', index=False)
# -
val_predictions = model.predict(val_ds, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")r4t
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
plt.savefig('../models/VGG16_70e_1.0.svg')
model.save_weights('../models/VGG16_20_100e_1.0.h5')
model.save('../models/VGG16_20_100e_1.0.h5')
def plot_confusion_matrix(cm, class_names):
"""
Returns a matplotlib figure containing the plotted confusion matrix.
Args:
cm (array, shape = [n, n]): a confusion matrix of integer classes
class_names (array, shape = [n]): String names of the integer classes
"""
figure = plt.figure(figsize=(8, 8))
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title("Confusion matrix")
plt.colorbar()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names, rotation=45)
plt.yticks(tick_marks, class_names)
# Normalize the confusion matrix.
cm = np.around(cm.astype('float') / cm.sum(axis=1)[:, np.newaxis], decimals=2)
# Use white text if squares are dark; otherwise black.
threshold = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
color = "white" if cm[i, j] > threshold else "black"
plt.text(j, i, cm[i, j], horizontalalignment="center", color=color)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
return figure
# +
# Use the model to predict the values from the validation dataset.
test_pred_raw = model.predict(val_ds)
test_pred = np.argmax(test_pred_raw, axis=1)
# Calculate the confusion matrix.
cm = sklearn.metrics.confusion_matrix(test_labels, test_pred)
# Log the confusion matrix as an image summary.
figure = plot_confusion_matrix(cm, class_names=class_names)
# -
# +
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(10,8), sharex=True)
x_vals = np.arange(1, epochs+1)
ax[0].plot(x_vals, acc, '+-', label='training')
ax[0].plot(x_vals, val_acc, '+-', label='validation')
ax[0].legend()
ax[0].set_ylabel('accuracy')
ax[0].set_ylim(0, 1)
ax[0].grid(ls='--', c='C7')
ax[0].set_title('accuracy')
ax[1].plot(x_vals, loss, '+-', label='training')
ax[1].plot(x_vals, val_loss, '+-', label='validation')
ax[1].legend()
ax[1].set_ylabel('cross entropy')
ax[1].set_ylim(0, 3)
ax[1].grid(ls='--', c='C7')
ax[1].set_title('loss')
ax[1].set_xlabel('epoch')
plt.show()
# -
model.save_weights('../models/.h5')
model.save('../models/.h5')
# # Evaluation
# +
import glob
pred_path = '../data/pred_16c_only1/'
pred_ds = tf.keras.preprocessing.image_dataset_from_directory(
pred_path,
# labels = [0]*len(glob.glob(f'{pred_path}*')),
image_size=(img_height, img_width),
batch_size=batch_size
)
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
# -
predictions = model.predict(pred_ds)
print(predictions)
# Generate arg maxes for predictions
classes = np.argmax(predictions, axis = 1)
print(classes[0])
print(class_names[classes[0]])
temp = tf.keras.models.load_model('../models/convmod_1.0.h5')
temp.summary()
dot_img_file = '../models/convmod_1.0.png'
tf.keras.utils.plot_model(model, to_file=dot_img_file, show_shapes=True)
| notebooks/06_train_network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Diet lecture
# [](https://github.com/ampl/amplcolab/blob/master/ampl-lecture/diet_case_study.ipynb) [](https://colab.research.google.com/github/ampl/amplcolab/blob/master/ampl-lecture/diet_case_study.ipynb) [](https://kaggle.com/kernels/welcome?src=https://github.com/ampl/amplcolab/blob/master/ampl-lecture/diet_case_study.ipynb) [](https://console.paperspace.com/github/ampl/amplcolab/blob/master/ampl-lecture/diet_case_study.ipynb) [](https://studiolab.sagemaker.aws/import/github/ampl/amplcolab/blob/master/ampl-lecture/diet_case_study.ipynb)
#
# Description: Diet case study
#
# Tags: ampl-only, ampl-lecture
#
# Notebook author: N/A
#
# Model author: N/A
#
# Install dependencies
# !pip install -q amplpy ampltools
# Google Colab & Kaggle interagration
MODULES=['ampl', 'coin']
from ampltools import cloud_platform_name, ampl_notebook
from amplpy import AMPL, register_magics
if cloud_platform_name() is None:
ampl = AMPL() # Use local installation of AMPL
else:
ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it
register_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval()
# This notebook provides the implementation of the production problem described in the book *AMPL: A Modeling Language for Mathematical Programming*
# by <NAME>, <NAME>, and <NAME>.
#
# ## Diet problem
# As an intuitive example of a cost-minimizing model, we use the well-known "diet problem", which finds a mix of foods that satisfies requirements on the amounts of various vitamins. We will construct a small, explicit linear program, and then show how a general model can be formulated for all linear programs of that kind.
#
# After formulating the diet model, we will discuss a few changes that might make it more realistic. The full power of this model, however, derives from its applicability to many situations that have nothing to do with diets. A general model derived from the diet formulation can be also applied to blending, economics, and scheduling.
#
# ### Solving a diet problem instance
#
# Consider the problem of choosing prepared foods to meet certain nutritional requirements. Suppose that precooked dinners of the following kinds are available for the following prices per package:
#
# | | Name | Price $ |
# | ---- | ----------------- | ------- |
# | BEEF | beef | 3.19 |
# | CHK | chicken | 2.59 |
# | FISH | fish | 2.29 |
# | HAM | ham | 2.89 |
# | MCH | macaroni & cheese | 1.89 |
# | MTL | meat loaf | 1.99 |
# | SPG | spaghetti | 1.99 |
# | TUR | turkey | 2.49 |
#
# These dinners provide the following percentages, per package, of the minimum daily requirements for vitamins A, C, B1 and B2:
#
#
# | | A | C | B1 | B2 |
# | ---- | --- | --- | --- | --- |
# | BEEF | 60% | 20% | 10% | 15% |
# | CHK | 8 | 0 | 20 | 20 |
# | FISH | 8 | 10 | 15 | 10 |
# | HAM | 40 | 40 | 35 | 10 |
# | MCH | 15 | 35 | 15 | 15 |
# | MTL | 70 | 30 | 15 | 15 |
# | SPG | 25 | 50 | 25 | 15 |
# | TUR | 60 | 20 | 15 | 10 |
#
# The problem is to find the cheapest combination of packages that will meet a week's requirements - that is, at least 700% of the daily requirement for each nutrient.
#
# Let us write $X_{beef}$ for the number of packages of beef dinner to be purchased, $X_{chk}$ for the number of packages of chicken dinner, and so forth. Then the total cost of the diet will be:
# ```ampl
# total cost =
# 3.19 Xbeef + 2.59 Xchk + 2.29 Xfish + 2.89 Xham +
# 1.89 Xmch + 1.99 Xmtl + 1.99 Xspg + 2.49 Xtur
# ```
#
# The total percentage of the vitamin A requirement is given by a similar formula, except that X BEEF , X CHK , and so forth are multiplied by the percentage per package instead of the cost per package:
# ```ampl
# total percentage of vitamin A daily requirement met =
# 60 Xbeef + 8 Xchk + 8 Xfish + 40 Xham +
# 15 Xmch + 70 Xmtl + 25 Xspg + 60 Xtur
# ```
#
# This amount needs to be greater than or equal to 700 percent. There is a similar formula for each of the other vitamins, and each of these also needs to be $\geq$ 700.
#
# Putting these all together, we have the following linear program:
#
# ```ampl
# Minimize
# 3.19 Xbeef + 2.59 Xchk + 2.29 Xfish + 2.89 Xham +
# 1.89 Xmch + 1.99 Xmtl + 1.99 Xspg + 2.49 Xtur
#
# Subject to
# 60 Xbeef + 8 Xchk + 8 Xfish + 40 Xham +
# 15 Xmch + 70 Xmtl + 25 Xspg + 60 Xtur >= 700
#
# 20 Xbeef + 0 Xchk + 10 Xfish + 40 Xham +
# 35 Xmch + 30 Xmtl + 50 Xspg + 20 Xtur >= 700
#
# 10 Xbeef + 20 Xchk + 15 Xfish + 35 Xham +
# 15 Xmch + 15 Xmtl + 25 Xspg + 15 Xtur >= 700
#
# 15 Xbeef + 20 Xchk + 10 Xfish + 10 Xham +
# 15 Xmch + 15 Xmtl + 15 Xspg + 10 Xtur >= 700
#
# Xbeef >= 0, Xchk >= 0, Xfish >= 0, Xham >= 0,
# Xmch >= 0, Xmtl >= 0, Xspg >= 0, Xtur >= 0
# ```
#
# At the end we have added the common-sense requirement that no fewer than zero packages of a food can be purchased. And now, we can transcribe the model to an AMPL statement of the explicit diet LP:
# +
# %%writefile diet0.mod
# Variables
var Xbeef >= 0; var Xchk >= 0; var Xfish >= 0;
var Xham >= 0; var Xmch >= 0; var Xmtl >= 0;
var Xspg >= 0; var Xtur >= 0;
# Objective function (minimizing)
minimize cost:
3.19*Xbeef + 2.59*Xchk + 2.29*Xfish + 2.89*Xham +
1.89*Xmch + 1.99*Xmtl + 1.99*Xspg + 2.49*Xtur;
# Constraints
subject to A:
60*Xbeef + 8*Xchk + 8*Xfish + 40*Xham +
15*Xmch + 70*Xmtl + 25*Xspg + 60*Xtur >= 700;
subject to C:
20*Xbeef + 0*Xchk + 10*Xfish + 40*Xham +
35*Xmch + 30*Xmtl + 50*Xspg + 20*Xtur >= 700;
subject to B1:
10*Xbeef + 20*Xchk + 15*Xfish + 35*Xham +
15*Xmch + 15*Xmtl + 25*Xspg + 15*Xtur >= 700;
subject to B2:
15*Xbeef + 20*Xchk + 10*Xfish + 10*Xham +
15*Xmch + 15*Xmtl + 15*Xspg + 10*Xtur >= 700;
# -
# A few AMPL commands then suffice to read the file, send the LP to a solver, and retrieve the results (using CBC solver from coin). AMPL commands within a Jupyter Notebook should be executed in a cell with the `%%ampl_eval` header.
# %%ampl_eval
model diet0.mod;
option solver cbc;
solve;
display Xbeef,Xchk,Xfish,Xham,Xmch,Xmtl,Xspg,Xtur;
# The optimal solution is found quickly, but it is hardly what we might have hoped for. The cost is minimized by a monotonous diet of 46 and 2/3 packages of macaroni and cheese! You can check that this neatly provides $15\% \times 46\frac{2}{3} = 700\%$ of the requirement for vitamins A, B1 and B2, and a lot more vitamin C than necessary; the cost is only $\$1.89 × 46\frac{2}{3} = \$88.20.$
#
# You might guess that a better solution would be generated by requiring the amount of each vitamin to equal 700% exactly. Such a requirement can easily be imposed by changing each >= to = in the AMPL constraints. If you go ahead and solve the changed LP, you will find that the diet does indeed become more varied: approximately 19.5 packages of chicken, 16.3 of macaroni and cheese, and 4.3 of meat loaf. But since equalities are more restrictive than inequalities, the cost goes up to $89.99.
#
# ## An AMPL model for the diet problem
#
# Clearly we will have to consider more extensive modifications to our linear program in order to produce a diet that is even remotely acceptable. We will probably want to change the sets of food and nutrients, as well as the nature of the constraints and bounds. As in the production example of the previous chapter, this will be much easier to do if we rely on a general model that can be coupled with a variety of specific data files.
#
# This model deals with two things: nutrients and foods. Thus we begin an AMPL model by declaring sets of each:
# ```ampl
# set NUTR;
# set FOOD;
# ```
# Next we need to specify the numbers required by the model. Certainly a positive cost should be given for each food:
# ```ampl
# param cost {FOOD} > 0;
# ```
# We also specify that for each food there are lower and upper limits on the number of packages in the diet:
# ```ampl
# param f_min {FOOD} >= 0;
# param f_max {j in FOOD} >= f_min[j];
# ```
# Notice that we need a dummy index `j` to run over `FOOD` in the declaration of `f_max`, in order to say that the maximum for each food must be greater than or equal to the corresponding minimum.
#
# To make this model somewhat more general than our examples so far, we also specify similar lower and upper limits on the amount of each nutrient in the diet:
#
# ```ampl
# param n_min {NUTR} >= 0;
# param n_max {i in NUTR} >= n_min[i];
# ```
# Finally, for each combination of a nutrient and a food, we need a number that represents the amount of the nutrient in one package of the food. Such a "product" of two sets is written by listing them both:
# ```ampl
# param amt {NUTR,FOOD} >= 0;
# ```
# References to this parameter require two indices. For example, `amt[i,j]` is the amount of nutrient `i` in a package of food `j`.
#
# The decision variables for this model are the numbers of packages to buy of the different foods:
# ```ampl
# var Buy {j in FOOD} >= f_min[j], <= f_max[j];
# ```
# The number of packages of some food `j` to be bought will be called `Buy[j]`; in any acceptable solution it will have to lie between `f_min[j]` and `f_max[j]`.
#
# The total cost of buying a food `j` is the cost per package, `cost[j]`, times the number of packages, `Buy[j]`. The objective to be minimized is the sum of this product over all foods `j`:
# ```ampl
# minimize Total_Cost: sum {j in FOOD} cost[j] * Buy[j];
# ```
# Similarly, the amount of a nutrient `i` supplied by a food `j` is the nutrient per package, `amt[i,j]`, times the number of packages `Buy[j]`. The total amount of nutrient `i` supplied is the sum of this product over all foods `j`:
# ```ampl
# sum {j in FOOD} amt[i,j] * Buy[j];
# ```
# To complete the model, we need only specify that each such sum must lie between the appropriate bounds. Our constraint declaration begins
# ```ampl
# subject to Diet {i in NUTR}:
# ```
# to say that a constraint named `Diet[i]` must be imposed for each member `i` of `NUTR`. The rest of the declaration gives the algebraic statement of the constraint for nutrient `i`: the variables must satisfy
# ```ampl
# n_min[i] <= sum {j in FOOD} amt[i,j] * Buy[j] <= n_max[i]
# ```
# A "double inequality" like this is interpreted in the obvious way: the value of the sum in the middle must lie between `n_min[i]` and `n_max[i]`. We can write all together into a model file `diet.mod`:
# +
# %%writefile diet.mod
set NUTR;
set FOOD;
param cost {FOOD} > 0;
param f_min {FOOD} >= 0;
param f_max {j in FOOD} >= f_min[j];
param n_min {NUTR} >= 0;
param n_max {i in NUTR} >= n_min[i];
param amt {NUTR,FOOD} >= 0;
var Buy {j in FOOD} >= f_min[j], <= f_max[j];
minimize Total_Cost: sum {j in FOOD} cost[j] * Buy[j];
subject to Diet {i in NUTR}:
n_min[i] <= sum {j in FOOD} amt[i,j] * Buy[j] <= n_max[i];
# -
# ## Data file
#
# By specifying appropriate data, we can solve any of the linear programs that correspond to the above model. Let's begin by using the data from the previous example.
#
# The values of `f_min` and `n_min` are as given originally, while `f_max` and `n_max` are set, for the time being, to large values that won't affect the optimal solution. In the table for `amt`, the notation (tr) indicates that we have "transposed" the table so the columns correspond to the first index (nutrients), and the rows to the second (foods). Alternatively, we could have changed the model to say
# ```ampl
# param amt {FOOD,NUTR}
# ```
# in which case we would have had to write amt[j,i] in the constraint.
#
# The data file `diet.dat` is created with:
# +
# %%writefile diet.dat
data;
set NUTR := A B1 B2 C ;
set FOOD := BEEF CHK FISH HAM MCH MTL SPG TUR ;
param: cost f_min f_max :=
BEEF 3.19 0 100
CHK 2.59 0 100
FISH 2.29 0 100
HAM 2.89 0 100
MCH 1.89 0 100
MTL 1.99 0 100
SPG 1.99 0 100
TUR 2.49 0 100 ;
param: n_min n_max :=
A 700 10000
C 700 10000
B1 700 10000
B2 700 10000 ;
param amt (tr):
A C B1 B2 :=
BEEF 60 20 10 15
CHK 8 0 20 20
FISH 8 10 15 10
HAM 40 40 35 10
MCH 15 35 15 15
MTL 70 30 15 15
SPG 25 50 25 15
TUR 60 20 15 10 ;
# -
# Suppose that model and data are stored in the files `diet.mod` and `diet.dat`, respectively. Then AMPL is used as follows to read these files and to solve the resulting linear program:
# %%ampl_eval
reset; # to clean the previous model
model diet.mod;
data diet.dat;
option solver cbc;
solve;
display Buy;
# Naturally, the result is the same as before.
#
# Now suppose that we want to make the following enhancements. To promote variety, the weekly diet must contain between 2 and 10 packages of each food. The amount of sodium and calories in each package is also given; total sodium must not exceed 40,000 mg, and total calories must be between 16,000 and 24,000. All of these changes can be made through a few modifications to the data. Putting this new data in file `diet2.dat`, we can run AMPL again:
# %%writefile diet2.dat
set NUTR := A B1 B2 C NA CAL ;
set FOOD := BEEF CHK FISH HAM MCH MTL SPG TUR ;
param: cost f_min f_max :=
BEEF 3.19 2 10
CHK 2.59 2 10
FISH 2.29 2 10
HAM 2.89 2 10
MCH 1.89 2 10
MTL 1.99 2 10
SPG 1.99 2 10
TUR 2.49 2 10 ;
param: n_min n_max :=
A 700 20000
C 700 20000
B1 700 20000
B2 700 20000
NA 0 40000
CAL 16000 24000 ;
param amt (tr):
A C B1 B2 NA CAL :=
BEEF 60 20 10 15 938 295
CHK 8 0 20 20 2180 770
FISH 8 10 15 10 945 440
HAM 40 40 35 10 278 430
MCH 15 35 15 15 1182 315
MTL 70 30 15 15 896 400
SPG 25 50 25 15 1329 370
TUR 60 20 15 10 1397 450 ;
# %%ampl_eval
reset data; # to reset the data not the model
data diet2.dat;
solve;
display Buy;
# The message infeasible problem tells us that we have constrained the diet too tightly; there is no way that all of the restrictions can be satisfied.
#
# AMPL lets us examine a variety of values produced by a solver as it attempts to find a solution. We can use marginal (or dual) values to investigate the sensitivity of an optimum solution to changes in the constraints. Here there is no optimum, but the solver does return the last solution that it found while attempting to satisfy the constraints. We can look for the source of the infeasibility by displaying some values associated with this solution:
# %%ampl_eval
display Diet.lb, Diet.body, Diet.ub;
# For each nutrient, `Diet.body` is the sum of the terms `amt[i,j] * Buy[j]` in the constraint `Diet[i]`. The `Diet.lb` and `Diet.ub` values are the "lower bounds" and "upper bounds" on the sum in `Diet[i] - in this case, just the values `n_min[i]` and `n_max[i]`. We can see that the diet returned by the solver does not supply enough vitamin B2, while the amount of sodium (NA) has reached its upper bound.
#
# At this point, there are two obvious choices: we could require less B2 or we could allow more sodium. If we try the latter, and relax the sodium limit to 50,000 mg, a feasible solution becomes possible (let statement permits modifications of data):
# %%ampl_eval
let n_max["NA"] := 50000;
solve;
display Buy;
# This is at least a start toward a palatable diet, although we have to spend \$118.06, compared to \$88.20 for the original, less restricted case. Clearly it would be easy, now that the model is set up, to try many other possibilities. Section 11.3 of the AMPL book describes ways to quickly change the data and re-solve.
#
# One still disappointing aspect of the solution is the need to buy 5.36061 packages of beef, and 9.30605 of spaghetti. How can we find the best possible solution in terms of whole packages? You might think that we could simply round the optimal values to whole numbers - or integers, as they're often called in the context of optimization - but it is not so easy to do so in a feasible way. Using AMPL to modify the reported solution , we can observe that rounding up to 6 packages of beef and 10 of spaghetti, for example, will violate the sodium limit:
# +
# %%ampl_eval
let Buy["BEEF"] := 6;
let Buy["SPG"] := 10;
display Diet.lb, Diet.body, Diet.ub;
# -
# You can similarly check that rounding the solution down to 5 of beef and 9 of spaghetti will provide insufficient vitamin B2. Rounding one up and the other down doesn't work either. With enough experimenting you can find a nearby all-integer solution that does satisfy the constraints, but still you will have no guarantee that it is the least-cost allinteger solution.
#
# AMPL does provide for putting the integrality restriction directly into the declaration of the variables:
# ```ampl
# var Buy {j in FOOD} integer >= f_min[j], <= f_max[j];
# ```
# This will only help, however, if you use a solver that can deal with problems whose variables must be integers. CBC solver can handle these so called integer programs. If we add integer to the declaration of variable `Buy` as above, save the resulting model in the file `dieti.mod`, and add the higher sodium limit to `diet2a.dat`, then we can re-solve as follows:
# +
# %%writefile dieti.mod
set NUTR;
set FOOD;
param cost {FOOD} > 0;
param f_min {FOOD} >= 0;
param f_max {j in FOOD} >= f_min[j];
param n_min {NUTR} >= 0;
param n_max {i in NUTR} >= n_min[i];
param amt {NUTR,FOOD} >= 0;
var Buy {j in FOOD} integer >= f_min[j], <= f_max[j];
minimize Total_Cost: sum {j in FOOD} cost[j] * Buy[j];
subject to Diet {i in NUTR}:
n_min[i] <= sum {j in FOOD} amt[i,j] * Buy[j] <= n_max[i];
# -
# %%writefile diet2a.dat
set NUTR := A B1 B2 C NA CAL ;
set FOOD := BEEF CHK FISH HAM MCH MTL SPG TUR ;
param: cost f_min f_max :=
BEEF 3.19 2 10
CHK 2.59 2 10
FISH 2.29 2 10
HAM 2.89 2 10
MCH 1.89 2 10
MTL 1.99 2 10
SPG 1.99 2 10
TUR 2.49 2 10 ;
param: n_min n_max :=
A 700 20000
C 700 20000
B1 700 20000
B2 700 20000
NA 0 50000
CAL 16000 24000 ;
param amt (tr):
A C B1 B2 NA CAL :=
BEEF 60 20 10 15 938 295
CHK 8 0 20 20 2180 770
FISH 8 10 15 10 945 440
HAM 40 40 35 10 278 430
MCH 15 35 15 15 1182 315
MTL 70 30 15 15 896 400
SPG 25 50 25 15 1329 370
TUR 60 20 15 10 1397 450 ;
# %%ampl_eval
model dieti.mod;
data diet2a.dat;
option solver cbc;
solve;
display Buy;
# Since integrality is an added constraint, it is no surprise that the best integer solution costs about \$1.24 more than the best "continuous" one. But the difference between the diets is unexpected; the amounts of 3 foods change, each by two or more packages. In general, integrality and other "discrete" restrictions make solutions for a model much harder to find.
#
# ## Generalizations to blending, economics and scheduling
#
# Your personal experience probably suggests that diet models are not widely used by people to choose their dinners. These models would be much better suited to situations in which packaging and personal preferences don't play such a prominent role — for example , the blending of animal feed or perhaps food for college dining halls.
#
# The diet model is a convenient, intuitive example of a linear programming formulation that appears in many contexts. Suppose that we rewrite the model in a more general way:
#
# ```ampl
# set INPUT; # inputs
# set OUTPUT; # outputs
# param cost {INPUT} > 0;
# param in_min {INPUT} >= 0;
# param in_max {j in INPUT} >= in_min[j];
# param out_min {OUTPUT} >= 0;
# param out_max {i in OUTPUT} >= out_min[i];
# param io {OUTPUT,INPUT} >= 0;
# var X {j in INPUT} >= in_min[j], <= in_max[j];
# minimize Total_Cost: sum {j in INPUT} cost[j] * X[j];
# subject to Outputs {i in OUTPUT}:
# out_min[i] <= sum {j in INPUT} io[i,j] * X[j] <= out_max[i];
# ```
#
# The objects that were called *foods* and *nutrients* in the diet model are now referred to more generically as *inputs* and *outputs*. For each input `j`, we must decide to use a quantity `X[j]` that lies between `in_min[j]` and `in_max[j]`; as a result we incur a cost equal to `cost[j]·X[j]`, and we create `io[i,j]·X[j]` units of each output `i`. Our goal is to find the least-cost combination of inputs that yields, for each output `i`, an amount between `out_min[i]` and `out_max[i]`.
#
# In one common class of applications for this model, the inputs are raw materials to be mixed together. The outputs are qualities of the resulting **blend**. The raw materials could be the components of an animal feed, but they could equally well be the crude oil derivatives that are blended to make gasoline, or the different kinds of coal that are mixed as input to a coke oven. The qualities can be amounts of something (sodium or calories for animal feed), or more complex measures (vapor pressure or octane rating for gasoline), or even physical properties such as weight and volume.
#
# In another well-known application, the inputs are production activities of some sector of an **economy**, and the outputs are various products. The `in_min` and `in_max` parameters are limits on the levels of the activities, while `out_min` and `out_max` are regulated by demands. Thus the goal is to find levels of the activities that meet demand at the lowest cost. This interpretation is related to the concept of an economic equilibrium.
#
# In still another, quite different application, the inputs are **work schedules**, and the outputs correspond to hours worked on certain days of a month. For a particular work schedule `j`, `io[i,j]` is the number of hours that a person following schedule `j` will work on day `i` (zero if none), `cost[j]` is the monthly salary for a person following schedule `j`, and `X[j]` is the number of workers assigned that schedule. Under this interpretation, the objective becomes the total cost of the monthly payroll, while the constraints say that for each day `i`, the total number of workers assigned to work that day must lie between the limits `out_min[i]` and `out_max[i]`. The same approach can be used in a variety of other scheduling contexts, where the hours, days or months are replaced by other periods of time.
#
# Although linear programming can be very useful in applications like these, we need to keep in mind the assumptions that underlie the LP model. We have already mentioned the "continuity" assumption whereby `X[j]` is allowed to take on any value between `in_min[j]` and `in_max[j]`. This may be a lot more reasonable for blending than for scheduling.
#
# As another example, in writing the objective as
# ```ampl
# sum {j in INPUT} cost[j] * X[j]
# ```
# we are assuming "linearity of costs", that is, that the cost of an input is proportional to the amount of the input used, and that the total cost is the sum of the inputs' individual costs.
#
# In writing the constraints as
# ```ampl
# out_min[i] <= sum {j in INPUT} io[i,j] * X[j] <= out_max[i]
# ```
# we are also assuming that the yield of an output `i` from a particular input is proportional to the amount of the input used, and that the total yield of an output `i` is the sum of the yields from the individual inputs. This "linearity of yield" assumption poses no problem when the inputs are schedules, and the outputs are hours worked. But in the blending example, linearity is a physical assumption about the nature of the raw materials and the qualities, which may or may not hold. In early applications to refineries, for example, it was recognized that the addition of lead as an input had a nonlinear effect on the quality known as octane rating in the resulting blend.
#
# AMPL makes it easy to express discrete or nonlinear models, but any departure from continuity or linearity is likely to make an optimal solution much harder to obtain. At the least, it takes a more powerful solver to optimize the resulting mathematical programs.
#
# ## Bibliography
#
# * <NAME>, "The Diet Problem." Interfaces 20, 4 (1990) pp. 43-47. An entertaining account of the origins of the diet problem.
#
# * <NAME>, <NAME>, and <NAME>, "AMPL: A Modeling Language for Mathematical Programming (2nd edition)." Cengage Learning (2002).
#
#
#
#
#
# * <NAME> and <NAME>, "Stigler's Diet Problem Revisited." Operations Research 49, 1 (2001) pp. 1-13. A review of the diet problem's origins and its influence over the years on linear programming and on nutritionists.
#
# * Said <NAME> and <NAME>, "Matching Supplies to Save Lives: Linear Programming the Production of Heart Valves." Interfaces 11, 6 (1981) pp. 48-56. A less appetizing equivalent of the diet problem, involving the choice of pig heart suppliers.
| ampl-lecture/diet_case_study.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import json
import numpy as np
import os
from pycocotools.coco import COCO
def COCO_val2017_imglist_gen(input_path,output_path):
root_data_dir=input_path
kp_names = ['nose', 'l_eye', 'r_eye', 'l_ear', 'r_ear', 'l_shoulder',
'r_shoulder', 'l_elbow', 'r_elbow', 'l_wrist', 'r_wrist',
'l_hip', 'r_hip', 'l_knee', 'r_knee', 'l_ankle', 'r_ankle']
max_num_joints = 17
color = np.random.randint(0, 256, (max_num_joints, 3))
mpi = []
test_mpi = []
index=1
tot_lenth=len(os.listdir(root_data_dir+"val2017/"))
for mpi, stage in zip([mpi, test_mpi], ['train', 'val']):
_val_gt_path=os.path.join(root_data_dir, 'annotations', 'person_keypoints_val2017.json')
coco = COCO(_val_gt_path)
for aid in coco.anns.keys():
ann = coco.anns[aid]
rect = np.array([0, 0, 1, 1], np.int32)
if ann['iscrowd']:
continue
joints = ann['keypoints']
if np.sum(joints[2::3]) == 0 or ann['num_keypoints'] == 0 :
continue
imgname = root_data_dir+stage + '2017/' + str(ann['image_id']).zfill(12) + '.jpg'
mpi.append(imgname)
print("%d / %d"%(index,tot_lenth),imgname)
index+=1
for imgi in mpi:
fw = open(output_path + "/COCO_2017_val_imglist.txt", "a")
fw.write(str(imgi) + "\n")
fw.flush()
if __name__ == '__main__':
input_path="/home/data/COCO/MSCOCO/"
output_path="test_COCO/"
COCO_val2017_imglist_gen(input_path,output_path)
# -
| generate_img_list_on_COCO2017val.ipynb |
# ---
# title: "Adding Dropout"
# author: "<NAME>"
# date: 2017-12-20T11:53:49-07:00
# description: "How to add dropout to a neural networking for deep learning in Python.."
# type: technical_note
# draft: false
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a alt="Dropout" href="https://machinelearningflashcards.com">
# <img src="/images/machine_learning_flashcards/Dropout_print.png" class="flashcard center-block">
# </a>
# ## Preliminaries
# +
# Load libraries
import numpy as np
from keras.datasets import imdb
from keras.preprocessing.text import Tokenizer
from keras import models
from keras import layers
# Set random seed
np.random.seed(0)
# -
# ## Load IMDB Movie Review Data
# +
# Set the number of features we want
number_of_features = 1000
# Load data and target vector from movie review data
(train_data, train_target), (test_data, test_target) = imdb.load_data(num_words=number_of_features)
# Convert movie review data to a one-hot encoded feature matrix
tokenizer = Tokenizer(num_words=number_of_features)
train_features = tokenizer.sequences_to_matrix(train_data, mode='binary')
test_features = tokenizer.sequences_to_matrix(test_data, mode='binary')
# -
# ## Construct Neural Network Architecture With Dropout Layer
#
# In Keras, we can implement dropout by added `Dropout` layers into our network architecture. Each `Dropout` layer will drop a user-defined hyperparameter of units in the previous layer every batch. Remember in Keras the input layer is assumed to be the first layer and not added using the `add`. Therefore, if we want to add dropout to the input layer, the layer we add in our is a dropout layer. This layer contains both the proportion of the input layer's units to drop `0.2` and `input_shape` defining the shape of the observation data. Next, after we add a dropout layer with `0.5` after each of the hidden layers.
# +
# Start neural network
network = models.Sequential()
# Add a dropout layer for input layer
network.add(layers.Dropout(0.2, input_shape=(number_of_features,)))
# Add fully connected layer with a ReLU activation function
network.add(layers.Dense(units=16, activation='relu'))
# Add a dropout layer for previous hidden layer
network.add(layers.Dropout(0.5))
# Add fully connected layer with a ReLU activation function
network.add(layers.Dense(units=16, activation='relu'))
# Add a dropout layer for previous hidden layer
network.add(layers.Dropout(0.5))
# Add fully connected layer with a sigmoid activation function
network.add(layers.Dense(units=1, activation='sigmoid'))
# -
# ## Compile Neural Network
# Compile neural network
network.compile(loss='binary_crossentropy', # Cross-entropy
optimizer='rmsprop', # Root Mean Square Propagation
metrics=['accuracy']) # Accuracy performance metric
# ## Train Neural Network
# Train neural network
history = network.fit(train_features, # Features
train_target, # Target vector
epochs=3, # Number of epochs
verbose=0, # No output
batch_size=100, # Number of observations per batch
validation_data=(test_features, test_target)) # Data for evaluation
| docs/deep_learning/keras/adding_dropout.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import argparse
from solver import Solver
from data_loader import get_loader
from torch.backends import cudnn
def str2bool(v):
return v.lower() in ('true')
def main(config):
# For fast training.
cudnn.benchmark = True
# Create directories if not exist.
if not os.path.exists(config.log_dir):
os.makedirs(config.log_dir)
if not os.path.exists(config.model_save_dir):
os.makedirs(config.model_save_dir)
if not os.path.exists(config.sample_dir):
os.makedirs(config.sample_dir)
if not os.path.exists(config.result_dir):
os.makedirs(config.result_dir)
# Data loader.
celeba_loader = None
rafd_loader = None
if config.dataset in ['CelebA', 'Both']:
celeba_loader = get_loader(config.celeba_image_dir, config.attr_path, config.selected_attrs,
config.celeba_crop_size, config.image_size, config.batch_size,
'CelebA', config.mode, config.num_workers)
if config.dataset in ['RaFD', 'Both']:
rafd_loader = get_loader(config.rafd_image_dir, None, None,
config.rafd_crop_size, config.image_size, config.batch_size,
'RaFD', config.mode, config.num_workers)
# Solver for training and testing StarGAN.
solver = Solver(celeba_loader, rafd_loader, config)
if config.mode == 'train':
if config.dataset in ['CelebA', 'RaFD']:
solver.train()
elif config.dataset in ['Both']:
solver.train_multi()
elif config.mode == 'test':
if config.dataset in ['CelebA', 'RaFD']:
solver.test()
elif config.dataset in ['Both']:
solver.test_multi()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Model configuration.
parser.add_argument('--c_dim', type=int, default=5, help='dimension of domain labels (1st dataset)')
parser.add_argument('--c2_dim', type=int, default=8, help='dimension of domain labels (2nd dataset)')
parser.add_argument('--celeba_crop_size', type=int, default=178, help='crop size for the CelebA dataset')
parser.add_argument('--rafd_crop_size', type=int, default=256, help='crop size for the RaFD dataset')
parser.add_argument('--image_size', type=int, default=128, help='image resolution')
parser.add_argument('--g_conv_dim', type=int, default=64, help='number of conv filters in the first layer of G')
parser.add_argument('--d_conv_dim', type=int, default=64, help='number of conv filters in the first layer of D')
parser.add_argument('--g_repeat_num', type=int, default=6, help='number of residual blocks in G')
parser.add_argument('--d_repeat_num', type=int, default=6, help='number of strided conv layers in D')
parser.add_argument('--lambda_cls', type=float, default=1, help='weight for domain classification loss')
parser.add_argument('--lambda_rec', type=float, default=10, help='weight for reconstruction loss')
parser.add_argument('--lambda_gp', type=float, default=10, help='weight for gradient penalty')
# Training configuration.
parser.add_argument('--dataset', type=str, default='CelebA', choices=['CelebA', 'RaFD', 'Both'])
parser.add_argument('--batch_size', type=int, default=16, help='mini-batch size')
parser.add_argument('--num_iters', type=int, default=200000, help='number of total iterations for training D')
parser.add_argument('--num_iters_decay', type=int, default=100000, help='number of iterations for decaying lr')
parser.add_argument('--g_lr', type=float, default=0.0001, help='learning rate for G')
parser.add_argument('--d_lr', type=float, default=0.0001, help='learning rate for D')
parser.add_argument('--n_critic', type=int, default=5, help='number of D updates per each G update')
parser.add_argument('--beta1', type=float, default=0.5, help='beta1 for Adam optimizer')
parser.add_argument('--beta2', type=float, default=0.999, help='beta2 for Adam optimizer')
parser.add_argument('--resume_iters', type=int, default=None, help='resume training from this step')
parser.add_argument('--selected_attrs', '--list', nargs='+', help='selected attributes for the CelebA dataset',
default=['Black_Hair', 'Blond_Hair', 'Brown_Hair', 'Male', 'Young'])
# Test configuration.
parser.add_argument('--test_iters', type=int, default=200000, help='test model from this step')
# Miscellaneous.
parser.add_argument('--num_workers', type=int, default=1)
parser.add_argument('--mode', type=str, default='train', choices=['train', 'test'])
parser.add_argument('--use_tensorboard', type=str2bool, default=True)
# Directories.
parser.add_argument('--celeba_image_dir', type=str, default='data/celeba/images')
parser.add_argument('--attr_path', type=str, default='data/celeba/list_attr_celeba.txt')
parser.add_argument('--rafd_image_dir', type=str, default='data/RaFD/train')
parser.add_argument('--log_dir', type=str, default='stargan/logs')
parser.add_argument('--model_save_dir', type=str, default='stargan/models')
parser.add_argument('--sample_dir', type=str, default='stargan/samples')
parser.add_argument('--result_dir', type=str, default='stargan/results')
# Step size.
parser.add_argument('--log_step', type=int, default=10)
parser.add_argument('--sample_step', type=int, default=1000)
parser.add_argument('--model_save_step', type=int, default=10000)
parser.add_argument('--lr_update_step', type=int, default=1000)
config = parser.parse_args()
print(config)
main(config)
# -
| main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Episode 7 - Basic MNIST with tf.contrib.learn
#
# This code is tested against TensorFlow 1.11.0-rc2. You can find a docker image here: https://hub.docker.com/r/tensorflow/tensorflow/
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import tensorflow as tf
learn = tf.contrib.learn
tf.logging.set_verbosity(tf.logging.ERROR)
# ## Import the dataset
mnist = learn.datasets.load_dataset('mnist')
data = mnist.train.images
labels = np.asarray(mnist.train.labels, dtype=np.int32)
test_data = mnist.test.images
test_labels = np.asarray(mnist.test.labels, dtype=np.int32)
test_labels
# There are 55k examples in train, and 10k in eval. You may wish to limit the size to experiment faster.
max_examples = 10000
data = data[:max_examples]
labels = labels[:max_examples]
# ## Display some digits
def display(i):
img = test_data[i]
plt.title('Example %d. Label: %d' % (i, test_labels[i]))
plt.imshow(img.reshape((28,28)), cmap=plt.cm.gray_r)
display(0)
display(1)
# These digits are clearly drawn. Here's one that's not.
display(8)
# Now let's take a look at how many features we have.
print len(data[0])
# ## Fit a Linear Classifier
#
# Our goal here is to get about 90% accuracy with this simple classifier. For more details on how these work, see https://www.tensorflow.org/versions/r0.10/tutorials/mnist/beginners/index.html#mnist-for-ml-beginners
feature_columns = learn.infer_real_valued_columns_from_input(data)
classifier = learn.LinearClassifier(feature_columns=feature_columns, n_classes=10)
classifier.fit(data, labels, batch_size=100, steps=1000)
# ## Evaluate accuracy
classifier.evaluate(test_data, test_labels)
print classifier.evaluate(test_data, test_labels)["accuracy"]
test_labels
# ## Classify a few examples
#
# We can make predictions on individual images using the predict method
# here's one it gets right
predictions = list(classifier.predict(x=test_data, as_iterable=True))
print ("Predicted %d, Label: %d" % (predictions[0], test_labels[0]))
display(0)
# and one it gets wrong
print ("Predicted %d, Label: %d" % (predictions[8], test_labels[8]))
display(8)
# ## Visualize learned weights
#
#
# Let's see if we can reproduce the pictures of the weights in the TensorFlow Basic MNSIT <a href="https://www.tensorflow.org/tutorials/mnist/beginners/index.html#mnist-for-ml-beginners">tutorial</a>.
wt_names = classifier.get_variable_names()
weights = classifier.get_variable_value(wt_names[1]) #'linear//weight'
#weights = classifier.weights_
f, axes = plt.subplots(2, 5, figsize=(10,4))
axes = axes.reshape(-1)
for i in range(len(axes)):
a = axes[i]
a.imshow(weights.T[i].reshape(28, 28), cmap=plt.cm.seismic)
a.set_title(i)
a.set_xticks(()) # ticks be gone
a.set_yticks(())
plt.show()
# # Next steps
#
# * TensorFlow Docker images: https://hub.docker.com/r/tensorflow/tensorflow/
# * TF.Learn Quickstart: https://www.tensorflow.org/versions/r0.9/tutorials/tflearn/index.html
# * MNIST tutorial: https://www.tensorflow.org/tutorials/mnist/beginners/index.html
# * Visualizating MNIST: http://colah.github.io/posts/2014-10-Visualizing-MNIST/
# * Additional notebooks: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/docker/notebooks
# * More about linear classifiers: https://www.tensorflow.org/versions/r0.10/tutorials/linear/overview.html#large-scale-linear-models-with-tensorflow
# * Much more about linear classifiers: http://cs231n.github.io/linear-classify/
# * Additional TF.Learn samples: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/skflow
| ep7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# > computers have to take more time processing strings than numbers, which have a precise number of bits. In pandas, there is a special datatype called `category` that encodes our categorical data numerically, and--because of this numerical encoding--it can speed up our code.
# This is actually converted these strings into a numeric representation of the categories. To see this numeric representation, we can use the `get_dummies` function in pandas. This is also called a **"binary indicator"** representation
# * to see how many numeric coumns a dataframe has we can use `df.info()` or `df.dtypes.value_counts()`
#
# > pandas `object` type is the most innefficient, consider that
# * to convert several columns to `category` dtype to make it more efficient:
# ```
# # Define the lambda function and list of coulms to transform (LABELS)
# categorize_label = lambda x: x.astype('category')
# # Convert df[LABELS] to a categorical type
# df[LABELS] = df[LABELS].apply(categorize_label, axis=0)
# # Print the converted dtypes
# print(df[LABELS].dtypes)
# ```
# * to visualize number of labels per column
#
# ```
# # Import matplotlib.pyplot
# import matplotlib.pyplot as plt
#
# # Calculate number of unique values for each label: num_unique_labels
# num_unique_labels = df[LABELS].apply(pd.Series.nunique)
#
# # Plot number of unique values for each label
# num_unique_labels.plot(kind='bar')
#
# # Label the axes
# plt.xlabel('Labels')
# plt.ylabel('Number of unique values')
#
# # Display the plot
# plt.show()
# ```
| notes_on_machine_learning/notes_on_machine_learning2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project: Investigate a Dataset (Medical Appointment No Shows)
#
# ## Table of Contents
# <ul>
# <li><a href="#intro">Introduction</a></li>
# <li><a href="#wrangling">Data Wrangling</a></li>
# <li><a href="#eda">Exploratory Data Analysis</a></li>
# <li><a href="#conclusions">Conclusions</a></li>
# </ul>
# <a id='intro'></a>
# ## Introduction
# <b>Selected dataset:</b> No-show appointments
#
# <b>Dataset Description:</b> This dataset collects information from 100k medical appointments in Brazil and is focused on the question of whether or not patients show up for their appointment. A number of characteristics about the patient are included in each row.
#
# - ‘ScheduledDay’ tells us on what day the patient set up their appointment.
# - ‘Neighborhood’ indicates the location of the hospital.
# - ‘Scholarship’ indicates whether or not the patient is enrolled in Brasilian welfare program Bolsa Família.
# +
# import the used libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
# -
# <a id='wrangling'></a>
# ## Data Wrangling
#
#
# ### General Properties
# +
# Load the data and view first few lines of dataset
df = pd.read_csv('noshowappointments-kagglev2-may-2016.csv')
df.head()
# -
df.shape
# - There are 110527 rows and 14 columns
df.info()
# +
# Check if there is any missing values
df.isnull().sum()
# -
# Check for duplicate rows
df.duplicated().sum()
df.describe()
# ### observation
#
# - some columns need to be renamed such as (Hipertension & Handcap & No-show )
# - some columns need to be droped because I willnot use it in analysis.
# - from summary statistics , there is a mistake in one of the patient's age . it shows -1 years.
#
# ### Data Cleaning
# ##### Define
# - rename columns
# #### Code
df.rename(columns = {'Hipertension': 'Hypertension',
'Handcap': 'Handicap','No-show':'No_show'}, inplace = True)
# #### Test
df.columns
# #### Define
# - delete unused columns
# #### Code
#drop PatientId, AppointmentID, ScheduledDay, Appointment day columns
df.drop(['PatientId','AppointmentID','ScheduledDay','AppointmentDay'], axis=1,inplace=True)
# #### Test
df.head()
# #### Define
# - remove outliers data from age columns like -1 and age > 95
# #### Code
df = df[(df.Age >= 0) & (df.Age < 96)]
# #### Test
print(sorted(df.Age.unique()))
# #### Define
# change the values of No_show columns to help in analysis
#
# 0 = Showed up to appointment
#
# 1 = did not show up to appointment
#Update values in No-show column
df['No_show'].replace({'No':0,'Yes':1},inplace=True)
df.sample(5)
# - save this data set with a new name "clean_data.csv"
df.to_csv('clean_data.csv',index=False)
#read new dataset
df_clean=pd.read_csv('clean_data.csv')
df_clean.head()
# <a id='eda'></a>
# ## Exploratory Data Analysis
# Histogram of all dataset
df_clean.hist(figsize=(15,10));
# based on the histogram charts, we can estimate a few things
# - Most of the patients are below 60 years.
# - Most of the people are not enrolled in Brasilian welfare program.
# - most of the people don't suffer from chronic diseases.
# - Most of the people didn't receive SMS.
# ### Research Question 1: (what is the overall appointment show-up vs. no show-up rate?)
data = df_clean["No_show"].value_counts()
# Draw pie-chart
plt.figure(figsize=(8,8))
data.plot.pie(autopct='%.0f%%',fontsize=(15), labels=['Show','NoShow'], startangle=90, explode=[0.1, 0])
plt.title('Rate of show and no show appointments',fontsize=(20))
plt.show();
# Looking at the pie chart above, the overall show-up rate is 80%.
#
# ### Research Question 2 (Which gender is display more no-show appointments?)
# create a function to customize ploting
def custom_plot(data,x,y,title,color1,color2,a,b):
plt.figure(figsize=(8,8))
data.plot(kind='bar',color=[color1,color2])
plt.xlabel(x,fontsize=(15))
plt.ylabel(y,fontsize=(15))
plt.title(title,fontsize=(25))
plt.xticks(np.arange(2), (a, b), rotation=10)
plt.show();
gender_noshow = df_clean.groupby('Gender').sum()['No_show']
gender_noshow
custom_plot(gender_noshow,
"Gender","Number of No Show",
'No_Show by Gender',
'#FB6090','#58508d',
'Female', 'Male')
plt.figure(figsize=(8,8))
Text_No_Show = sns.countplot(x=df_clean.Gender, hue = df_clean.No_show)
Text_No_Show.set_title('Text Message Received and Appointment Attendance')
plt.xticks(np.arange(2), ('Female', 'Male'), rotation=10);
# - The result from this chart is that the number of females who missed their appointment is nearly double the number of men.
plt.figure(figsize=(8,8))
gender_noshow.plot.pie(autopct='%.0f%%',fontsize=(15), labels=['Female','Male'], startangle=90, explode=[0.1, 0])
plt.title('Rate of show and no show appointments',fontsize=(20))
plt.show();
# ### Research Question 3 (What ages of patients who miss their appointments?)
showed = df_clean.No_show == 0
notshowed = df_clean.No_show == 1
# +
#Plotting age with the show/no show appointments.
df_clean.Age[showed].hist(alpha=0.5, bins=20,label='showed',color = '#EBBBB0', figsize=(8,8))
df_clean.Age[notshowed].hist(alpha= 0.5, bins=20,label='not showed', color = '#4CD7D0');
plt.xlabel("Age",fontsize=(15))
plt.ylabel("number of patients",fontsize=(15))
plt.title('Age with the show/no show appointments',fontsize=(20))
plt.legend()
plt.show();
# -
# - The majority of patients under the age of five are committed to their doctor's appointment, and patients over 50 are still committed to their doctor's appointment.
# - When patient's age increases, the number of no-shows decreases. This may be because older people are more likely to be retired, giving them more time and ability to attend appointments than younger people.
# ### Research Question 4 (What is impact of sending SMS on rate of no-show? )
# Grouping by SMS_received for patients who missed thier appointemnts
sms=df_clean.groupby('SMS_received').sum()['No_show']
custom_plot(sms,"SMS notification",
"Number of No_show of appointments",
'Relation between sending SMS and No_show appointments' ,
'#1768AC','teal',
'not received', 'received')
plt.figure(figsize=(8,8))
Text_No_Show = sns.countplot(x=df_clean.SMS_received, hue = df_clean.No_show)
Text_No_Show.set_title('Text Message Received and Appointment Attendance')
plt.xticks(np.arange(2), ('not received', 'received'), rotation=10);
data1 = df_clean.groupby("Scholarship")["No_show"].mean()
data1
# - Surprisingly, sending an SMS alert does not encourage people to show up for their appointment
# #### Research Question 5 (What is the correlation between no-show rate and scholarship? )
custom_plot(data1,"Scholarship",
"No-show rate",
"No-show rate and social scholarship correlation",
'green','orange',
'Not join', 'joined')
# The percent of patients who No-show are higher with whom had a scholarship
# <a id='conclusions'></a>
# ## Conclusions
#
# - After cleaning the dataset and looking at some of its properties, I couldn't find a characteristic that specifically influences whether or not affects the patient to shows up for the appointment. Just the age varies slightly.
#
# #### Research Question 1 (what is the overall appointment show-up vs. no show-up rate?)
# - the overall show-up rate is 80%.
# - Although the percentage of no-show appointments is less than the percentage of showed up appointments, it is still a high rate of dissatisfaction for healthcare providers.
#
# #### Research Question 2 (Which gender is display more no-show appointments?)
# - Females patients booked aappointments more than men, this shows that women care about their health more than men do.
# - The number of females who missed their appointment is nearly double the number of men.
#
#
# #### Research Question 3 (What ages of patients who miss their appointments?)
# - The majority of patients under the age of five are committed to their doctor's appointment, and patients over 50 are still committed to their doctor's appointment.
# - When patient's age increases, the number of no-shows decreases. This may be because older people are more likely to be retired, giving them more time and ability to attend appointments than younger people.
#
# #### Research Question 4 (What is impact of sending SMS on rate of no-show? )
# - Surprisingly, sending an SMS alert does not encourage people to show up for their appointment.
# - SMS messages is not a powerfull option to decrease the number of no-show appointments.
#
# #### Research Question 5 (What is the correlation between no-show rate and scholarship? )
# - The percent of patients who No-show are higher with whom had a scholarship.
#
#
# ### Limitation
# There are some limitations in the No-show appointments Dataset as:
# - The features are still insufficient to determine the true cause of no-show appointments.
# - More information, such as profession, marital status, address, and any serious medical conditions, is needed.
# - The data was collected from various locations, and it is unclear if the healthcare centres where the data was collected use the same capabilities in terms of service, technology, and so on.
# - As we can see in the "Age" column, there were some negative ages that we had to exclude before proceeding with our study, possibly because they were calculated incorrectly because negative ages are virtually impossible.
# #### References:
# - Medical Appointment No Show Kaggle discussion. https://www.kaggle.com/joniarroba/noshowappointments/discussion
# - Analytics Vidhya - 12 Useful Pandas Techniques in Python for Data Manipulation. https://www.analyticsvidhya.com/blog/2016/01/12-pandas-techniques-python-data-manipulation/
# - Udacity Nanodegree https://classroom.udacity.com/
| investigate-a-dataset-No-show.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
import networkx as nx
from networkx.readwrite import json_graph
# Minimimum spanning viz preperations
g = nx.read_gexf("js_viz/graphs/cutlines_gc_from_gephi.gephi")
tree = nx.minimum_spanning_tree(g)
data = json_graph.node_link_data(tree)
with open("js_viz/graphs/spanning_tree.json", "w") as f:
json.dump(data, f)
data["nodes"][0]
g = nx.read_gexf("js_viz/graphs/mod0_from_gephi.gexf")
tree = nx.minimum_spanning_tree(g)
data = json_graph.node_link_data(tree)
with open("js_viz/graphs/mod0spanning_tree.json", "w") as f:
json.dump(data, f)
# Remove Lope
g.remove_node('440')
g.node['440']
tree = nx.minimum_spanning_tree(g)
data = json_graph.node_link_data(tree)
with open("js_viz/graphs/mod0.json", "w") as f:
json.dump(data, f)
def group_by_top_place(g):
places = {}
current = 1
for n, attrs in g.nodes(data=True):
tp = attrs["top_place"]
if tp not in places:
places[tp] = int(current)
group = int(current)
current += 1
else:
group = places[tp]
g.node[n]["group"] = group
return g
g = nx.read_gexf("js_viz/graphs/no_auth_from_gephi.gexf")
g = group_by_top_place(g)
tree = nx.minimum_spanning_tree(g)
data = json_graph.node_link_data(tree)
with open("js_viz/graphs/no_auth_patron_tree.json", "w") as f:
json.dump(data, f)
g.nodes(data=True)
| graphviz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Sqlite é uma bibliotec que implementa um banco de dados Sql Embutido
import sqlite3
# +
#Criando um banco de dados e estabelecendo uma conexão com ele:
conn = sqlite3.connect('Primeiro_banco_de_dados.db')
print(conn)
# -
#cursos: é atraves dessa variavel que criamos tabelas, informamos registro e fazemos as queries de operaçõe no banco
cursor = conn.cursor()
# +
# importante: nos bancos de dados relacionais salvamos os dados em tabela. Essas tabelas possuem atributos(colunas)
# e os registros(linhas). No caso de DB's relacionais nós definimos as tabelas e os tipos de dados previamente.
# Lembrem que os bancos não relacionais é diferente, nos não relacionais não precisamos fazer isso.
# Create table: Comando usado para criar tabelas
# CREATE TABLE nome_tabela(coluna1 tipo, coluna2 tipo, etc);
#geralmente usamos letras maiusculas para escrever comandos em linguagem SQL;
# -
#criando nossa primeira tabela. o metodo execute recebe uma query sql e a executa;
cursor.execute("CREATE TABLE cadastro_cliente (user_id integer, nome text, idade integer, cidade text, email text)")
# +
# usamos essa query para ver quais as tabelas estão no banco
cursor.execute("SELECT name from sqlite_master where type = 'table'")
#o comando fetchall é responsavel por apresentar o resultado da query
cursor.fetchall()
# -
Select: usamos esse comando para selecionar dados na tabela
# +
#vamos ver o que temos na tabela
# * == ALL: seleciona todos os campos (atributos e colunas da tabela)
cursor.execute("SELECT * FROM cadastro_cliente")
cursor.fetchall()
# +
INSERT INTO: Usado para inserir registros em uma tabela
os dados do tipo text precisam estar com aspas como as strings que aprendemos aqui em python e tem que ser aspas
simples, porque quando estamos usando aspas duplas para a query, se não o python faz confusão.
# +
#Precisamo dizer em qual tabela e tambem escrever no formato de dados previsto quando criamos a tabela:
cursor.execute("INSERT INTO cadastro_cliente VALUES(123, 'Maria', 40, 'São Paulo', '<EMAIL>')")
conn.commit()
#commit é um comando usado no git voce deu um comando ali em cima para alterar a tabela e o commit é pra de fato
#guardarmos a informação nno banco
# -
#verificando se a nossa tabela foi de fato alterada:
cursor.execute("SELECT * FROM cadastro_cliente")
cursor.fetchall()
# +
#selecionando dois campos nome e idade:
cursor.execute('SELECT nome, idade FROM cadastro_cliente')
cursor.fetchall()
# +
#Vamos novamente agora vamos inserir 2 regsitros
cursor.execute("INSERT INTO cadastro_cliente VALUES(456, 'Pedro', 22, '<NAME>', '<EMAIL>')")
cursor.execute("INSERT INTO cadastro_cliente VALUES(789, 'João', 35, 'São Paulo', '<EMAIL>')")
cursor.execute("INSERT INTO cadastro_cliente VALUES(901, 'Paulo', 56, '<NAME>', '<EMAIL>')")
conn.commit()
# +
#vizualindo todos os nosso registros
cursor.execute("SELECT * FROM cadastro_cliente")
cursor.fetchall()
# -
select distinct: esse comando é usado para retornar apenas valores distintos
SELECT DISTINCT coluna1, coluna2, ... FROM nome_tabela;
cursor.execute("SELECT cidade FROM cadastro_cliente")
cursor.fetchall()
cursor.execute("SELECT DISTINCT cidade FROM cadastro_cliente")
cursor.fetchall()
SELECT WHERE: Seleciona dados de acordo com alguma condição. Usada para filtrar registros
select coluna1, coluna2, ... FROM nome_tabela WHERE condicao_verdadeira
cursor.execute("SELECT * FROM cadastro_cliente WHERE nome = 'Maria'")
cursor.fetchall()
select order by: seleciona dados ordenando em ordem crescente ou decrescente os dados de acordo com os valores de
uma coluna especifica
SELECT coluna1, coluna2 ... from nome_tabela ORDER BY coluna1, coluna2... DESC/ASC
cursor.execute("SELECT nome, idade FROM cadastro_cliente ORDER BY idade DESC")
cursor.fetchall()
cursor.execute("SELECT nome, idade FROM cadastro_cliente ORDER BY idade DESC, user_id ASC")
cursor.fetchall()
Selcet group by: seleciona dados agrupados por uma(+ de uma) coluna(s) especifica(s)
select coluna1, coluna2... from nome_tabela GROUP BY coluna1...
cursor.execute("SELECT COUNT(user_id), cidade FROM cadastro_cliente GROUP BY cidade")
cursor.fetchall()
# +
#podemos fazer consultas mais complexas como por exemplo:
cursor.execute("SELECT COUNT(user_id), cidade FROM cadastro_cliente GROUP BY cidade ORDER BY COUNT(user_id) DESC")
cursor.fetchall()
# -
#criando outra tabela
cursor.execute("CREATE TABLE compras_cliente(user_id integer, qtd_produtos integer, \
valor_compra decimal, local_compra text)")
cursor.execute("INSERT INTO compras_cliente VALUES(123, 3, 150.70, 'Loja1')")
cursor.execute("INSERT INTO compras_cliente VALUES(456, 1, 20.35, 'Loja2')")
cursor.execute("INSERT INTO compras_cliente VALUES(789, 6, 437, 'Loja3')")
conn.commit()
cursor.execute("SELECT * FROM compras_cliente")
cursor.fetchall()
SELECT COUNT/AVG/SUM:
COUNT: Retorna o numero de valores de uma determinada coluna
AVG: Retorna a média dos valores de uma determinada coluna
Sum: Retorma a soma dos valores de uma determinada coluna
SELECT COUNT/AVG/SUM(nome_cliente) FROM nome_tabela
#quantos usuarios fizeram compras
cursor.execute("SELECT COUNT(user_id) FROM compras_cliente")
cursor.fetchall()
#quantos usuarios fizeram compras na loja 1
cursor.execute("SELECT COUNT(user_id) FROM compras_cliente WHERE local_compra == 'Loja1'")
cursor.fetchall()
#qual foi o valor médio de gasto de cada usuario
cursor.execute("SELECT AVG(valor_compra) FROM compras_cliente")
cursor.fetchall()
# +
#qual foi o total gasto pelos usuarios
cursor.execute("SELECT SUM(valor_compra) FROM compras_cliente")
cursor.fetchall()
# +
#podemos fazer apenas o primeiro join por exemplo, pra ficar mais claro para voçes o que ele faz:
# -
cursor.execute("SELECT cc.user_id, cc.nome, cc.idade, ccli.valor_compra FROM cadastro_cliente as cc LEFT JOIN \
compras_cliente as ccli ON cc.user_id == ccli.user_id")
cursor.fetchall()
# +
#respomdendo nossa pergunta ali de cima: precisamos primerio pegar todas as cidades, então vamos usar a tabela
#cadastro_cliente e cruzar ela com a tabela de compras para termos o gasto dos clientes também
#A chave/coluna que relaciona essas tabelas é a coluna user_id:
cursor.execute("SELECT AVG('valor_compra') as valor_medio, cidade FROM cadastro_cliente\
LEFT JOIN compras_cliente\
ON cadastro_cliente.user_id == compras_cliente.user_id\
GROUP BY cidade\
ORDER BY valor_medio DESC")
cursor.fetchall()
# -
#vejam agora a diferença com o inner join:
cursor.execute("SELECT cc.user_id, cc.nome , cc.cidade,ccli.valor_compra FROM cadastro_cliente as cc INNER JOIN \
compras_cliente as ccli ON cc.user_id == ccli.user_id")
cursor.fetchall()
| SqLite3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
dat = pd.read_csv('../data/raw/Aids2.csv')
dat.head()
dat.drop('Unnamed: 0', axis=1, inplace=True)
# From: https://vincentarelbundock.github.io/Rdatasets/doc/MASS/Aids2.html
#
# ## Australian AIDS Survival Data
# ### Description
# Data on patients diagnosed with AIDS in Australia before 1 July 1991.
#
# ### Usage
# Aids2
#
# ### Format
# This data frame contains 2843 rows and the following columns:
#
# ##### state
# Grouped state of origin: "NSW "includes ACT and "other" is WA, SA, NT and TAS.
#
# ##### sex
# Sex of patient.
#
# ##### diag
# (Julian) date of diagnosis.
#
# ##### death
# (Julian) date of death or end of observation.
#
# ##### status
# "A" (alive) or "D" (dead) at end of observation.
#
# ##### T.categ
# Reported transmission category.
#
# ##### age
# Age (years) at diagnosis.
#
# ### Note
# This data set has been slightly jittered as a condition of its release, to ensure patient confidentiality.
#
# ### Source
# Dr <NAME> and the Australian National Centre in HIV Epidemiology and Clinical Research.
#
# ### References
# <NAME>. and <NAME>. (2002) Modern Applied Statistics with S. Fourth edition. Springer.
dat.state.unique()
dat.status.unique()
dat['T.categ'].unique()
X = dat.copy()[['state', 'sex', 'T.categ', 'age']]
y = pd.DataFrame({
'tte': dat.death - dat.diag,
'event': [1 if val == 'D' else 0 for val in dat.status]
})
X
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.model_selection import train_test_split
# Categorical boolean mask
categorical_feature_mask = X.dtypes==object
cat_names = X.columns[categorical_feature_mask].tolist()
num_names = X.columns[~categorical_feature_mask].tolist()
print('Categoricals: ', cat_names, '\nNumerics: ', num_names)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ohe = OneHotEncoder(sparse=False).fit(X_train[cat_names])
scaler = StandardScaler().fit(X_train[num_names])
X_train = np.concatenate(
(ohe.transform(X_train[cat_names]), scaler.transform(X_train[num_names])),
axis=1
)
X_test = np.concatenate(
(ohe.transform(X_test[cat_names]), scaler.transform(X_test[num_names])),
axis=1
)
| notebooks/explore_aids2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
def GetMin(a):
m = 100
for x in np.linspace(-1, 2, num=3001):
y = (a ** (2 * x)) - 4 * (a ** x) - 1
if (y < m):
m = y
return m
def Option_A():
A = []
M = []
a = 1/2
A.append(a)
M.append(GetMin(a))
a = 2 ** (1/2) - 1
for i in range(0, 10):
a += 1
A.append(a)
M.append(GetMin(a))
plt.figure(figsize=(15,3))
plt.plot(A, M)
plt.ylim(-8, 0)
plt.title("Option A")
plt.show()
def Option_B():
A = []
M = []
for i in range(1, 101):
a = i * 0.01
A.append(a)
M.append(GetMin(a))
a = 2 ** (1/2) - 1
for i in range(0, 10):
a += 1
A.append(a)
M.append(GetMin(a))
plt.figure(figsize=(15,3))
plt.plot(A, M)
plt.ylim(-8, 0)
plt.title("Option B")
plt.show()
def Option_C():
A = []
M = []
for i in range(1, 100):
a = i * 0.005
A.append(a)
M.append(GetMin(a))
a = 2 ** (1/2) - 1
for i in range(0, 10):
a += 1
A.append(a)
M.append(GetMin(a))
plt.figure(figsize=(15,3))
plt.plot(A, M)
plt.ylim(-8, 0)
plt.title("Option C")
plt.show()
Option_A()
Option_B()
Option_C()
| miscellaneous/notebook/Min_calculation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (reco_gpu)
# language: python
# name: reco_gpu
# ---
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # xDeepFM : the eXtreme Deep Factorization Machine
# This notebook will give you a quick example of how to train an [xDeepFM model](https://arxiv.org/abs/1803.05170).
# xDeepFM \[1\] is a deep learning-based model aims at capturing both lower- and higher-order feature interactions for precise recommender systems. Thus it can learn feature interactions more effectively and manual feature engineering effort can be substantially reduced. To summarize, xDeepFM has the following key properties:
# * It contains a component, named CIN, that learns feature interactions in an explicit fashion and in vector-wise level;
# * It contains a traditional DNN component that learns feature interactions in an implicit fashion and in bit-wise level.
# * The implementation makes this model quite configurable. We can enable different subsets of components by setting hyperparameters like `use_Linear_part`, `use_FM_part`, `use_CIN_part`, and `use_DNN_part`. For example, by enabling only the `use_Linear_part` and `use_FM_part`, we can get a classical FM model.
#
# In this notebook, we test xDeepFM on two datasets: 1) a small synthetic dataset and 2) [Criteo dataset](http://labs.criteo.com/category/dataset)
# ## 0. Global Settings and Imports
# +
import sys
sys.path.append("../../")
import os
import scrapbook as sb
from tempfile import TemporaryDirectory
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # only show error messages
from reco_utils.common.constants import SEED
from reco_utils.recommender.deeprec.deeprec_utils import (
download_deeprec_resources, prepare_hparams
)
from reco_utils.recommender.deeprec.models.xDeepFM import XDeepFMModel
from reco_utils.recommender.deeprec.io.iterator import FFMTextIterator
print("System version: {}".format(sys.version))
print("Tensorflow version: {}".format(tf.__version__))
# -
# #### Parameters
# + tags=["parameters"]
EPOCHS_FOR_SYNTHETIC_RUN = 15
EPOCHS_FOR_CRITEO_RUN = 10
BATCH_SIZE_SYNTHETIC = 128
BATCH_SIZE_CRITEO = 4096
RANDOM_SEED = SEED # Set to None for non-deterministic result
# -
# xDeepFM uses the FFM format as data input: `<label> <field_id>:<feature_id>:<feature_value>`
# Each line represents an instance, `<label>` is a binary value with 1 meaning positive instance and 0 meaning negative instance.
# Features are divided into fields. For example, user's gender is a field, it contains three possible values, i.e. male, female and unknown. Occupation can be another field, which contains many more possible values than the gender field. Both field index and feature index are starting from 1. <br>
#
# ## 1. Synthetic data
# Now let's start with a small synthetic dataset. In this dataset, there are 10 fields, 1000 fefatures, and label is generated according to the result of a set of preset pair-wise feature interactions.
# +
tmpdir = TemporaryDirectory()
data_path = tmpdir.name
yaml_file = os.path.join(data_path, r'xDeepFM.yaml')
train_file = os.path.join(data_path, r'synthetic_part_0')
valid_file = os.path.join(data_path, r'synthetic_part_1')
test_file = os.path.join(data_path, r'synthetic_part_2')
output_file = os.path.join(data_path, r'output.txt')
if not os.path.exists(yaml_file):
download_deeprec_resources(r'https://recodatasets.z20.web.core.windows.net/deeprec/', data_path, 'xdeepfmresources.zip')
# -
# #### 1.1 Prepare hyper-parameters
# prepare_hparams() will create a full set of hyper-parameters for model training, such as learning rate, feature number, and dropout ratio. We can put those parameters in a yaml file, or pass parameters as the function's parameters (which will overwrite yaml settings).
hparams = prepare_hparams(yaml_file,
FEATURE_COUNT=1000,
FIELD_COUNT=10,
cross_l2=0.0001,
embed_l2=0.0001,
learning_rate=0.001,
epochs=EPOCHS_FOR_SYNTHETIC_RUN,
batch_size=BATCH_SIZE_SYNTHETIC)
print(hparams)
# #### 1.2 Create data loader
# Designate a data iterator for the model. xDeepFM uses FFMTextIterator.
input_creator = FFMTextIterator
# #### 1.3 Create model
# When both hyper-parameters and data iterator are ready, we can create a model:
# +
model = XDeepFMModel(hparams, input_creator, seed=RANDOM_SEED)
## sometimes we don't want to train a model from scratch
## then we can load a pre-trained model like this:
#model.load_model(r'your_model_path')
# -
# Now let's see what is the model's performance at this point (without starting training):
print(model.run_eval(test_file))
# AUC=0.5 is a state of random guess. We can see that before training, the model behaves like random guessing.
#
# #### 1.4 Train model
# Next we want to train the model on a training set, and check the performance on a validation dataset. Training the model is as simple as a function call:
model.fit(train_file, valid_file)
# #### 1.5 Evaluate model
#
# Again, let's see what is the model's performance now (after training):
res_syn = model.run_eval(test_file)
print(res_syn)
sb.glue("res_syn", res_syn)
# If we want to get the full prediction scores rather than evaluation metrics, we can do this:
model.predict(test_file, output_file)
# ## 2. Criteo data
#
# Now we have successfully launched an experiment on a synthetic dataset. Next let's try something on a real world dataset, which is a small sample from [Criteo dataset](http://labs.criteo.com/category/dataset). Criteo dataset is a well known industry benchmarking dataset for developing CTR prediction models and it's frequently adopted as evaluation dataset by research papers. The original dataset is too large for a lightweight demo, so we sample a small portion from it as a demo dataset.
# +
print('demo with Criteo dataset')
hparams = prepare_hparams(yaml_file,
FEATURE_COUNT=2300000,
FIELD_COUNT=39,
cross_l2=0.01,
embed_l2=0.01,
layer_l2=0.01,
learning_rate=0.002,
batch_size=BATCH_SIZE_CRITEO,
epochs=EPOCHS_FOR_CRITEO_RUN,
cross_layer_sizes=[20, 10],
init_value=0.1,
layer_sizes=[20,20],
use_Linear_part=True,
use_CIN_part=True,
use_DNN_part=True)
# -
train_file = os.path.join(data_path, r'cretio_tiny_train')
valid_file = os.path.join(data_path, r'cretio_tiny_valid')
test_file = os.path.join(data_path, r'cretio_tiny_test')
model = XDeepFMModel(hparams, FFMTextIterator, seed=RANDOM_SEED)
# check the predictive performance before the model is trained
print(model.run_eval(test_file))
model.fit(train_file, valid_file)
# check the predictive performance after the model is trained
res_real = model.run_eval(test_file)
print(res_real)
sb.glue("res_real", res_real)
# Cleanup
tmpdir.cleanup()
# ## Reference
# \[1\] <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2018). xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining, KDD 2018, London, UK, August 19-23, 2018.<br>
| examples/00_quick_start/xdeepfm_criteo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # **Image Histogram Processing**
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img0 = mpimg.imread('data/messi5.jpg')
img1 = mpimg.imread('data/basketball1.png')
imgplot0 = plt.imshow(img0)
plt.colorbar(imgplot0)
plt.show()
infra_red = plt.imshow(img0[...,0],cmap='hot')
plt.colorbar(infra_red)
plt.show()
grayimg = plt.imshow(img1,cmap='bone')
plt.colorbar(grayimg)
plt.show()
spectral = plt.imshow(img1,cmap='nipy_spectral')
plt.colorbar(spectral)
plt.show()
# +
fig, axes = plt.subplots(2,2)
fig.set_figwidth(10)
fig.set_figheight(10)
ax1 = axes[0,0]
ax2 = axes[0,1]
ax3 = axes[1,0]
ax4 = axes[1,1]
imgplot0 = ax1.imshow(img0)
fig.colorbar(imgplot0,ax=ax1)
ax1.set_xticks([]),ax1.set_yticks([])
ax2.hist(img0.ravel(), bins=256, range=(0,255))
imgplot1 = ax3.imshow(img1,cmap='gray')
fig.colorbar(imgplot1,ax=ax3)
ax3.set_xticks([]),ax3.set_yticks([])
ax4.hist(img1.ravel(), bins=256, range=(0.0,1.0),fc='k')
plt.suptitle('Histograms')
plt.show()
# -
# ## 1. Histogram Equalization or Linearization
# ## 2. Histogram Matching (Specification)
# ## 3. Local Histogram Processing
# ## 4. Histogram Statistics for Image Enhancement
| notebooks/3-histograms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from numpy.random import seed
import numpy as np
class AdalineSGD(object):
def __init__(self, eta=0.01, n_iter=10, shuffle=True, random_state=None):
self.eta = eta
self.n_iter = n_iter
self.w_initialized = False
self.shuffle = shuffle
if random_state:
seed(random_state)
def fit(self, X, y):
self._initialize_weights(X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
if self.shuffle:
X, y = self._shuffle(X, y)
cost = []
for xi, target in zip(X, y):
cost.append(self._update_weights(xi, target))
avg_cost = sum(cost)/len(y)
self.cost_.append(avg_cost)
return self
def partial_fit(self, X, y):
if not self.w_initialized:
self._initialize_weights(X.shape[1])
if y.ravel().shape[0] > 1:
for xi, target in zip(X, y):
self._update_weights(xi, target)
else:
self._update_weights(X, y)
return self
def _shuffle(self, X, y):
r = np.random.permutation(len(y))
return X[r], y[r]
def _initialize_weights(self, m):
self.w_ = np.zeros(1 + m)
self.w_initiaized = True
def _update_weights(self, xi, target):
"""Apply Adaline learning rule to update the weights"""
output = self.net_input(xi)
error = (target - output)
self.w_[1:] += self.eta * xi.dot(error)
self.w_[0] += self.eta * error
cost = 0.5 * error**2
return cost
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:] + self.w_[0])
def activation(self, X):
"""Compute linear activation"""
return self.net_input(X)
def predict(self, X):
"""return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
# +
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/iris/iris.data', header=None)
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, [0, 2]].values
# +
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
def plot_decision_regions(X, y, classifier, resolution=0.02):
# marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the desicion surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution), np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8, c=cmap(idx), marker=markers[idx], label=cl)
# +
X_std = np.copy(X)
X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
ada = AdalineSGD(n_iter=15, eta=0.01, random_state=1)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
# +
plt.title('Adaline - Stochastic Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Averag Cost')
plt.show()
| ch02/AdalineSGD.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: yara
# language: python
# name: yara
# ---
import numpy as np
a = np.array([[1, 0], [0, 1]])
a
b = np.array([[1, 2], [3, 1]])
b
np.matmul(a, b)
a*b
| numpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from ALLCools.dmr.rms_test import permute_root_mean_square_test
from concurrent.futures import ProcessPoolExecutor, as_completed
# init numba
permute_root_mean_square_test(np.array([[0, 1], [0, 1]]))
# +
coverages = np.arange(5, 201, 40)[::-1]
n_permutes = np.arange(1000, 15000, 2000)[::-1]
n_samples = np.arange(2, 101, 30)[::-1]
n_test = 50
def run_tests(n_sample, coverage, n_permute):
ps = []
# repeat to get robust stats of p_values
for i in range(n_test):
true_fracs = np.random.uniform(low=0.8, high=0.9, size=n_sample)
cov = np.random.uniform(low=coverage / 3,
high=coverage * 3,
size=true_fracs.size).astype(int)
mc = np.random.binomial(cov, p=true_fracs)
unc = cov - mc
table = np.array([mc, unc]).T
# the methylpy RMS test
p = permute_root_mean_square_test(table, n_permute=n_permute)
ps.append(p)
return np.array(ps)
with ProcessPoolExecutor(48) as exe:
futures = {}
results = {}
for n_sample in n_samples:
for coverage in coverages:
for n_permute in n_permutes:
f = exe.submit(run_tests,
n_sample=n_sample,
coverage=coverage,
n_permute=n_permute)
futures[f] = (n_sample, coverage, n_permute)
for f in as_completed(futures):
key = futures[f]
print(key)
results[key] = f.result()
# -
data = pd.DataFrame(results).T.stack().reset_index()
data.columns = ['n_sample', 'coverage', 'n_permute', 'sample', 'pvalue']
data.head()
pass_cutoff = data.groupby(
['n_sample', 'coverage',
'n_permute'])['pvalue'].apply(lambda i: (i < 0.003).sum()).reset_index()
pass_cutoff.head()
# +
# Initialize a grid of plots with an Axes for each walk
ncols = coverages.size
nrows = n_samples.size
fig, axes = plt.subplots(figsize=(ncols*1.5, nrows*1.5),
ncols=ncols,
nrows=nrows,
dpi=100,
sharex=True,
sharey=True,
constrained_layout=True)
for col, coverage in enumerate(coverages[::-1]):
for row, n_sample in enumerate(n_samples[::-1]):
ax = axes[row, col]
sns.lineplot(data=pass_cutoff[(pass_cutoff['n_sample'] == n_sample)
& (pass_cutoff['coverage'] == coverage)],
x='n_permute',
y='pvalue',
ax=ax)
if ax.get_subplotspec().is_last_row():
ax.set(xlabel=f"cov {coverage}")
ax.ticklabel_format(axis='x', scilimits=(0, 0))
if ax.get_subplotspec().is_first_col():
ax.set(ylabel=f"n_sample {n_sample}")
ax.set(ylim=(0, n_test))
# -
| docs/allcools/cluster_level/RegionDS/simulate_rms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#
# Copyright (C) 2019-2021 vdaas.org vald team <<EMAIL>>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# -
# # Vald Similarity Search using chiVe Dataset
# ---
# ※***このnotebookは, 既に[Get Started](https://vald.vdaas.org/docs/tutorial/get-started/)を完了し, Valdの環境構築が完了した方を対象としています.
# まだValdの環境構築がお済みでない方は, 先に[Get Started](https://vald.vdaas.org/docs/tutorial/get-started/)を行うことを推奨します.
# また, データセットとしてchiVeを利用する場合, Vald Agentのdimensionを300に, distance_typeをcosineにすることを推奨するため, [sample-values.yaml](https://github.com/vdaas/vald-demo/blob/main/chive/sample-values.yaml)を用いる or 値を修正したvalues.yamlを用いてValdの構築を行ってください.***
# - *dimension: 300, distance_type: consieの修正例 ([path/to/helm/values.yaml](https://github.com/vdaas/vald/blob/master/example/helm/values.yaml#L45-L49))*:
# ```yaml
# agent:
# ngt:
# dimension: 300
# distance_type: cos
# ```
# ---
# このnotebookの目的は, vald-python-clientを通じて, Valdの基礎的な動作であるInsert/Search/Update/Removeを体験し, 近似近傍探索を用いた検索の一例を体験することです.
# 今回, 検索を行うためのデータセットとして, 日本語単語ベクトルのデータセットである[chiVe](https://github.com/WorksApplications/chiVe)を利用しています.
#
# notebookの概要は以下の通りです:
# - Preprocess
# - Install packages
# - Import dependencies
# - Prepare the vector data with chiVe
# - Similarity Search with vald-client-python
# - Create gRPC channel
# - Insert/Search/Update/Remove
# - Advanced
# - Word Analogies
#
# それでは, Valdを利用した近似近傍探索による検索を体験してみましょう!!
# ---
# ## Preprocess
# Valdを利用するにあたって必要なパッケージやベクトルデータを準備します.
# ### Install packages
# ※*動作環境に応じてパッケージのインストールを行ってください.*
# !pip install grpcio pymagnitude vald-client-python
# ### Import dependencies
# notebookを実行するに当たり, 必要なパッケージをインポートします.
# +
import grpc
import io
import os
import pandas as pd
from pymagnitude import Magnitude
from tqdm.notebook import tqdm
from vald.v1.payload import payload_pb2
from vald.v1.vald import (insert_pb2_grpc,
object_pb2_grpc,
remove_pb2_grpc,
search_pb2_grpc,
update_pb2_grpc)
# -
# ### Prepare the vector data with [chiVe](https://github.com/WorksApplications/chiVe)
# このnotebookでは, 日本語単語ベクトルとして[chiVe](https://github.com/WorksApplications/chiVe)を用いるため, 予め必要なデータをダウンロードしておくことを推奨します.
# ```
# curl "https://sudachi.s3-ap-northeast-1.amazonaws.com/chive/chive-1.2-mc90.magnitude" -o "chive-1.2-mc90.magnitude"
# ```
# データを読み込み, サンプルとなるクエリを用いてベクトルを表示します.
# NOTE: "___" -> "/path/to/chive-1.2-mc90.magnitude"
vectors = Magnitude("___")
"テスト" in vectors, vectors.query("テスト")
# ---
# ## Similarity Search with vald-client-python
# Valdの基礎的な動作であるInsert/Search/Update/Removeを実行すると共に, 前項で準備したベクトルデータを用いて近似近傍探索による検索を行います.
# ### Create gRPC channel
# gRPCによる通信を行うため, Valdが動作している各環境に応じて, 必要なエンドポイントを記述し, channelを作成します.
# NOTE: "___" -> "{host}:{port}"
channel = grpc.insecure_channel("___")
# ### Insert
# 初めに, Valdにデータを入れるため, Insertを行います.
# Insertを行うため, 先程作成したchannelを用いてInsert用のstubを作成します.
# create stub
istub = insert_pb2_grpc.InsertStub(channel)
# 次に, Insert命令を用いてValdにデータ(id="test")を1件Insertし, 正常に動作が完了するか確認します.
# +
ivec = payload_pb2.Object.Vector(id="test", vector=vectors.query("テスト"))
icfg = payload_pb2.Insert.Config(skip_strict_exist_check=True)
ireq = payload_pb2.Insert.Request(vector=ivec, config=icfg)
istub.Insert(ireq)
# -
# 1件のデータ(id="test")のInsertの動作確認が完了次第, 100,000件のデータをValdにInsertします.
# ここで, Insertの時間短縮のため, 複数のデータを用いてInsertを行うMultiInsertを使用しています.
# +
# Insert 100*1000 vector
count = 100
length = 1000
for c in tqdm(range(count)):
ireqs = []
for key, vec in vectors[c*length:(c+1)*length]:
ivec = payload_pb2.Object.Vector(id=key, vector=vec)
icfg = payload_pb2.Insert.Config(skip_strict_exist_check=True)
ireq = payload_pb2.Insert.Request(vector=ivec, config=icfg)
ireqs.append(ireq)
imreq = payload_pb2.Insert.MultiRequest(requests=ireqs)
istub.MultiInsert(imreq)
# -
# ### Search
# 次に, 先程Insertしたデータを使用してSearchを行います.
# Insert時と同様, Search用のstubを作成します.
# create stub
sstub = search_pb2_grpc.SearchStub(channel)
# SearchのRequestを作成し, stubを用いて, __"テスト"__に類似したテキストを検索します.
#
# ※*検索結果が0件または極端に少ない場合, Valdの自動Indexingにより検索結果が返却されていない可能性があるため, Indexing完了のため数秒待機し, 再度Searchを行ってください.*
# +
svec = vectors.query("テスト")
scfg = payload_pb2.Search.Config(num=10, radius=-1.0, epsilon=0.01, timeout=3000000000)
sreq = payload_pb2.Search.Request(vector=svec, config=scfg)
response = sstub.Search(sreq)
pd.DataFrame(
[(result.id, result.distance) for result in response.results],
columns=["id", "distance"])
# -
# また, Valdは既に入力済みのデータに対して, idに紐づくベクトルを用いて検索を行うSearch By IDにも対応しています.
# 以下で, 既にInsert済みのid="test"に紐づくベクトルを使用し, 近似近傍探索による検索を行います.
# +
# Search By ID
sireq = payload_pb2.Search.IDRequest(id="test",config=scfg)
response = sstub.SearchByID(sireq)
pd.DataFrame(
[(result.id, result.distance) for result in response.results],
columns=["id", "distance"])
# -
# ### Update
# ここでは, idに紐づくInsert済みのデータを更新するUpdateを行います.
# Updateを行うため, 同様にstubを作成します.
# create stub
ustub = update_pb2_grpc.UpdateStub(channel)
# id="test"に紐づくデータを__"テスト"__から__"test"__のベクトルに更新します.
# +
uvec = payload_pb2.Object.Vector(id="test", vector=vectors.query("test"))
ucfg = payload_pb2.Update.Config(skip_strict_exist_check=True)
ureq = payload_pb2.Update.Request(vector=uvec, config=ucfg)
ustub.Update(ureq)
# -
# Updateの確認のため, id="test"に紐づくデータに対して検索を行い, __"テスト"__の結果と異なることを確認します.
#
# ※*Insert済みのデータによっては, 同様の結果となる場合もあります.*
# ※*Searchの際と同様に, Indexingなどのタイミングによっては値が変更されていない可能性があるため, 時間をおいて再度検索を行ってください.*
# +
# Search By ID
sireq = payload_pb2.Search.IDRequest(id="test", config=scfg)
response = sstub.SearchByID(sireq)
pd.DataFrame(
[(result.id, result.distance) for result in response.results],
columns=["id", "distance"])
# -
# ### Remove
# 最後に, 入力されたデータを削除するRemoveを行います.
# Remove用のstubを作成します.
# create stub
rstub = remove_pb2_grpc.RemoveStub(channel)
# id="test"に紐づくデータを削除します.
# +
rid = payload_pb2.Object.ID(id="test")
rcfg = payload_pb2.Remove.Config(skip_strict_exist_check=True)
rreq = payload_pb2.Remove.Request(id=rid, config=rcfg)
rstub.Remove(rreq)
# -
# データが削除されたかどうかを確認するため, Existを用いてデータが存在するかどうかをチェックします(データが存在しない場合Errorを返します).
# +
# Exists
ostub = object_pb2_grpc.ObjectStub(channel)
oid = payload_pb2.Object.ID(id="test")
try:
ostub.Exists(oid)
except grpc._channel._InactiveRpcError as _:
print("vector is not found")
# -
# 以上が, Valdの基礎的な動作であるInsert/Search/Update/Removeを用いた検索の例です.
# ---
# ## Advanced
# 近似近傍探索を用いたテキスト検索の実験的な例として, 以下の内容を行います.
#
# - Word Analogies
# ### Word Analogies
# 単語のベクトル表現を用いて, 加法/減法を行い, 意味的に類似した単語を検索します.
# 例として, ”王"-"男"+"女"="女王"となるベクトル表現が得られ, "女王"に意味的に類似した単語が結果に含まれることを期待するテキスト検索を以下を示します.
# +
svec = vectors.query("王") - vectors.query("男") + vectors.query("女")
scfg = payload_pb2.Search.Config(num=10, radius=-1.0, epsilon=0.01, timeout=3000000000)
sreq = payload_pb2.Search.Request(vector=svec, config=scfg)
response = sstub.Search(sreq)
pd.DataFrame(
[(result.id, result.distance) for result in response.results],
columns=["id", "distance"])
# -
# また, 上記とは異なる例も示します(ref: [fastText tutorial#word-analogies](https://fasttext.cc/docs/en/unsupervised-tutorial.html#word-analogies)).
# +
svec = vectors.query("psx") - vectors.query("sony") + vectors.query("nintendo")
scfg = payload_pb2.Search.Config(num=10, radius=-1.0, epsilon=0.01, timeout=3000000000)
sreq = payload_pb2.Search.Request(vector=svec, config=scfg)
response = sstub.Search(sreq)
pd.DataFrame(
[(result.id, result.distance) for result in response.results],
columns=["id", "distance"])
# -
# ---
# 以上で__"Vald Similarity Search using chiVe Dataset"__ notebookは終了です.
# Valdに興味を持っていただきありがとうございました.
#
# 更に詳しく知りたい方は, Githubやofficial web siteをご活用ください:
# - https://github.com/vdaas/vald
# - https://vald.vdaas.org/
| chive/tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../"))
print(os.listdir("../input"))
print(os.listdir("../input/train_simplified"))
# Any results you write to the current directory are saved as output.
# +
import sys
print(sys.path)
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
import torch
print(torch.__version__)
# -
# # Training Data Splitting
do_make_split = True
# + _uuid="2050ec8afafd75135ee48f1da6e4fbf05fee8242"
CSV_DATA_DIR = "../input"
NPY_DATA_DIR = '../data'
# CLASS_NAME = ['apple','bee', 'cat', 'fish', 'frog', 'leaf']
CLASS_NAME=\
['The_Eiffel_Tower', 'The_Great_Wall_of_China', 'The_Mona_Lisa', 'airplane', 'alarm_clock', 'ambulance', 'angel',
'animal_migration', 'ant', 'anvil', 'apple', 'arm', 'asparagus', 'axe', 'backpack', 'banana', 'bandage', 'barn',
'baseball', 'baseball_bat', 'basket', 'basketball', 'bat', 'bathtub', 'beach', 'bear', 'beard', 'bed', 'bee',
'belt', 'bench', 'bicycle', 'binoculars', 'bird', 'birthday_cake', 'blackberry', 'blueberry', 'book',
'boomerang', 'bottlecap', 'bowtie', 'bracelet', 'brain', 'bread', 'bridge', 'broccoli', 'broom',
'bucket', 'bulldozer', 'bus', 'bush', 'butterfly', 'cactus', 'cake', 'calculator', 'calendar', 'camel',
'camera', 'camouflage', 'campfire', 'candle', 'cannon', 'canoe', 'car', 'carrot', 'castle', 'cat', 'ceiling_fan',
'cell_phone', 'cello', 'chair', 'chandelier', 'church', 'circle', 'clarinet', 'clock', 'cloud', 'coffee_cup',
'compass', 'computer', 'cookie', 'cooler', 'couch', 'cow', 'crab', 'crayon', 'crocodile', 'crown', 'cruise_ship',
'cup', 'diamond', 'dishwasher', 'diving_board', 'dog', 'dolphin', 'donut', 'door', 'dragon', 'dresser',
'drill', 'drums', 'duck', 'dumbbell', 'ear', 'elbow', 'elephant', 'envelope', 'eraser', 'eye', 'eyeglasses',
'face', 'fan', 'feather', 'fence', 'finger', 'fire_hydrant', 'fireplace', 'firetruck', 'fish', 'flamingo',
'flashlight', 'flip_flops', 'floor_lamp', 'flower', 'flying_saucer', 'foot', 'fork', 'frog', 'frying_pan',
'garden', 'garden_hose', 'giraffe', 'goatee', 'golf_club', 'grapes', 'grass', 'guitar', 'hamburger',
'hammer', 'hand', 'harp', 'hat', 'headphones', 'hedgehog', 'helicopter', 'helmet', 'hexagon', 'hockey_puck',
'hockey_stick', 'horse', 'hospital', 'hot_air_balloon', 'hot_dog', 'hot_tub', 'hourglass', 'house', 'house_plant',
'hurricane', 'ice_cream', 'jacket', 'jail', 'kangaroo', 'key', 'keyboard', 'knee', 'ladder', 'lantern', 'laptop',
'leaf', 'leg', 'light_bulb', 'lighthouse', 'lightning', 'line', 'lion', 'lipstick', 'lobster', 'lollipop', 'mailbox',
'map', 'marker', 'matches', 'megaphone', 'mermaid', 'microphone', 'microwave', 'monkey', 'moon', 'mosquito',
'motorbike', 'mountain', 'mouse', 'moustache', 'mouth', 'mug', 'mushroom', 'nail', 'necklace', 'nose', 'ocean',
'octagon', 'octopus', 'onion', 'oven', 'owl', 'paint_can', 'paintbrush', 'palm_tree', 'panda', 'pants',
'paper_clip', 'parachute', 'parrot', 'passport', 'peanut', 'pear', 'peas', 'pencil', 'penguin', 'piano',
'pickup_truck', 'picture_frame', 'pig', 'pillow', 'pineapple', 'pizza', 'pliers', 'police_car', 'pond',
'pool', 'popsicle', 'postcard', 'potato', 'power_outlet', 'purse', 'rabbit', 'raccoon', 'radio', 'rain',
'rainbow', 'rake', 'remote_control', 'rhinoceros', 'river', 'roller_coaster', 'rollerskates', 'sailboat',
'sandwich', 'saw', 'saxophone', 'school_bus', 'scissors', 'scorpion', 'screwdriver', 'sea_turtle', 'see_saw',
'shark', 'sheep', 'shoe', 'shorts', 'shovel', 'sink', 'skateboard', 'skull', 'skyscraper', 'sleeping_bag',
'smiley_face', 'snail', 'snake', 'snorkel', 'snowflake', 'snowman', 'soccer_ball', 'sock', 'speedboat',
'spider', 'spoon', 'spreadsheet', 'square', 'squiggle', 'squirrel', 'stairs', 'star', 'steak', 'stereo',
'stethoscope', 'stitches', 'stop_sign', 'stove', 'strawberry', 'streetlight', 'string_bean', 'submarine',
'suitcase', 'sun', 'swan', 'sweater', 'swing_set', 'sword', 't-shirt', 'table', 'teapot', 'teddy-bear',
'telephone', 'television', 'tennis_racquet', 'tent', 'tiger', 'toaster', 'toe', 'toilet', 'tooth',
'toothbrush', 'toothpaste', 'tornado', 'tractor', 'traffic_light', 'train', 'tree', 'triangle',
'trombone', 'truck', 'trumpet', 'umbrella', 'underwear', 'van', 'vase', 'violin', 'washing_machine',
'watermelon', 'waterslide', 'whale', 'wheel', 'windmill', 'wine_bottle', 'wine_glass', 'wristwatch',
'yoga', 'zebra', 'zigzag']
# + _uuid="3bd8eef775144545b21db315fd05398cdee31cf1"
def make_split():
#class_name = ['apple','bee', 'cat', 'fish', 'frog', 'leaf']
class_name = CLASS_NAME
data_dir = '../data'
all_dir = data_dir + '/split/train_simplified'
train_dir = data_dir + '/split/train_0'
valid_dir = data_dir + '/split/valid_0'
os.makedirs(all_dir, exist_ok=True)
os.makedirs(train_dir, exist_ok=True)
os.makedirs(valid_dir, exist_ok=True)
for name in class_name:
name = name.replace('_', ' ')
print(name)
df = pd.read_csv(CSV_DATA_DIR + '/train_simplified/%s.csv'%name)
print(len(df))
key_id = df['key_id'].values.astype(np.int64)
np.random.shuffle(key_id)
N = len(key_id)
N_valid = 80
N_train = N - N_valid
np.save( all_dir+'/%s.npy'%name, key_id)
np.save( train_dir+'/%s.npy'%name, key_id[:N_train])
np.save( valid_dir+'/%s.npy'%name, key_id[N_train:])
# + _uuid="e73a732cad5b3671590fe367c1c5246a431165b6"
if do_make_split:
make_split()
# + _uuid="4fe447630e79a0467f1b75b041e6b6ae4150ec86"
print(os.listdir("../data"))
# print(os.listdir("../data/split/valid_0"))
print(os.listdir("../data/split/train_0"))
# print(os.listdir("../data/split/train_simplified"))
# -
# # Building Model
# + _uuid="4c0ba5cc1747b0fcbffffc8950033e13ade351b0"
# download pretrained resnet34 model
import urllib.request
pretrained_dir = '../pretrained/'
os.makedirs(pretrained_dir, exist_ok=True)
url = 'https://download.pytorch.org/models/resnet34-333f7ec4.pth'
filename = '../pretrained/resnet34-333f7ec4.pth'
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(filename, 'wb') as out_file:
data = response.read() # a `bytes` object
out_file.write(data)
print(os.listdir(pretrained_dir))
# + _uuid="b4ee3f15ec1282a9aca02990d198bb5f7fa86834"
# import in common.py
import os
from datetime import datetime
PROJECT_PATH = os.path.dirname('./')
IDENTIFIER = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
#numerical libs
import math
import numpy as np
import random
import PIL
import cv2
import matplotlib
matplotlib.use('TkAgg')
#matplotlib.use('WXAgg')
#matplotlib.use('Qt4Agg')
#matplotlib.use('Qt5Agg') #Qt4Agg
print(matplotlib.get_backend())
#print(matplotlib.__version__)
# torch libs
import torch
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import *
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.nn.parallel.data_parallel import data_parallel
# std libs
import collections
import copy
import numbers
import inspect
import shutil
from timeit import default_timer as timer
import itertools
from collections import OrderedDict
import csv
import pandas as pd
import pickle
import glob
import sys
from distutils.dir_util import copy_tree
import time
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from skimage.transform import resize as skimage_resize
# constant #
PI = np.pi
INF = np.inf
EPS = 1e-12
# common.py
class Struct(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
#---------------------------------------------------------------------------------
# print('@%s: ' % os.path.basename(__file__))
if 1:
SEED = 35202 #123 #int(time.time()) #
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
print ('\tset random seed')
print ('\t\tSEED=%d'%SEED)
if 1:
torch.backends.cudnn.benchmark = True ##uses the inbuilt cudnn auto-tuner to find the fastest convolution algorithms. -
torch.backends.cudnn.enabled = True
print ('\tset cuda environment')
print ('\t\ttorch.__version__ =', torch.__version__)
print ('\t\ttorch.version.cuda =', torch.version.cuda)
print ('\t\ttorch.backends.cudnn.version() =', torch.backends.cudnn.version())
try:
print ('\t\tos[\'CUDA_VISIBLE_DEVICES\'] =',os.environ['CUDA_VISIBLE_DEVICES'])
NUM_CUDA_DEVICES = len(os.environ['CUDA_VISIBLE_DEVICES'].split(','))
except Exception:
print ('\t\tos[\'CUDA_VISIBLE_DEVICES\'] =','None')
NUM_CUDA_DEVICES = 1
print ('\t\ttorch.cuda.device_count() =', torch.cuda.device_count())
#print ('\t\ttorch.cuda.current_device() =', torch.cuda.current_device())
print('')
# + _uuid="69be483eda39e79fb8a57bfb41313ad330a1bc5e"
# data.py
CLASS_NAME=\
['The_Eiffel_Tower', 'The_Great_Wall_of_China', 'The_Mona_Lisa', 'airplane', 'alarm_clock', 'ambulance', 'angel',
'animal_migration', 'ant', 'anvil', 'apple', 'arm', 'asparagus', 'axe', 'backpack', 'banana', 'bandage', 'barn',
'baseball', 'baseball_bat', 'basket', 'basketball', 'bat', 'bathtub', 'beach', 'bear', 'beard', 'bed', 'bee',
'belt', 'bench', 'bicycle', 'binoculars', 'bird', 'birthday_cake', 'blackberry', 'blueberry', 'book',
'boomerang', 'bottlecap', 'bowtie', 'bracelet', 'brain', 'bread', 'bridge', 'broccoli', 'broom',
'bucket', 'bulldozer', 'bus', 'bush', 'butterfly', 'cactus', 'cake', 'calculator', 'calendar', 'camel',
'camera', 'camouflage', 'campfire', 'candle', 'cannon', 'canoe', 'car', 'carrot', 'castle', 'cat', 'ceiling_fan',
'cell_phone', 'cello', 'chair', 'chandelier', 'church', 'circle', 'clarinet', 'clock', 'cloud', 'coffee_cup',
'compass', 'computer', 'cookie', 'cooler', 'couch', 'cow', 'crab', 'crayon', 'crocodile', 'crown', 'cruise_ship',
'cup', 'diamond', 'dishwasher', 'diving_board', 'dog', 'dolphin', 'donut', 'door', 'dragon', 'dresser',
'drill', 'drums', 'duck', 'dumbbell', 'ear', 'elbow', 'elephant', 'envelope', 'eraser', 'eye', 'eyeglasses',
'face', 'fan', 'feather', 'fence', 'finger', 'fire_hydrant', 'fireplace', 'firetruck', 'fish', 'flamingo',
'flashlight', 'flip_flops', 'floor_lamp', 'flower', 'flying_saucer', 'foot', 'fork', 'frog', 'frying_pan',
'garden', 'garden_hose', 'giraffe', 'goatee', 'golf_club', 'grapes', 'grass', 'guitar', 'hamburger',
'hammer', 'hand', 'harp', 'hat', 'headphones', 'hedgehog', 'helicopter', 'helmet', 'hexagon', 'hockey_puck',
'hockey_stick', 'horse', 'hospital', 'hot_air_balloon', 'hot_dog', 'hot_tub', 'hourglass', 'house', 'house_plant',
'hurricane', 'ice_cream', 'jacket', 'jail', 'kangaroo', 'key', 'keyboard', 'knee', 'ladder', 'lantern', 'laptop',
'leaf', 'leg', 'light_bulb', 'lighthouse', 'lightning', 'line', 'lion', 'lipstick', 'lobster', 'lollipop', 'mailbox',
'map', 'marker', 'matches', 'megaphone', 'mermaid', 'microphone', 'microwave', 'monkey', 'moon', 'mosquito',
'motorbike', 'mountain', 'mouse', 'moustache', 'mouth', 'mug', 'mushroom', 'nail', 'necklace', 'nose', 'ocean',
'octagon', 'octopus', 'onion', 'oven', 'owl', 'paint_can', 'paintbrush', 'palm_tree', 'panda', 'pants',
'paper_clip', 'parachute', 'parrot', 'passport', 'peanut', 'pear', 'peas', 'pencil', 'penguin', 'piano',
'pickup_truck', 'picture_frame', 'pig', 'pillow', 'pineapple', 'pizza', 'pliers', 'police_car', 'pond',
'pool', 'popsicle', 'postcard', 'potato', 'power_outlet', 'purse', 'rabbit', 'raccoon', 'radio', 'rain',
'rainbow', 'rake', 'remote_control', 'rhinoceros', 'river', 'roller_coaster', 'rollerskates', 'sailboat',
'sandwich', 'saw', 'saxophone', 'school_bus', 'scissors', 'scorpion', 'screwdriver', 'sea_turtle', 'see_saw',
'shark', 'sheep', 'shoe', 'shorts', 'shovel', 'sink', 'skateboard', 'skull', 'skyscraper', 'sleeping_bag',
'smiley_face', 'snail', 'snake', 'snorkel', 'snowflake', 'snowman', 'soccer_ball', 'sock', 'speedboat',
'spider', 'spoon', 'spreadsheet', 'square', 'squiggle', 'squirrel', 'stairs', 'star', 'steak', 'stereo',
'stethoscope', 'stitches', 'stop_sign', 'stove', 'strawberry', 'streetlight', 'string_bean', 'submarine',
'suitcase', 'sun', 'swan', 'sweater', 'swing_set', 'sword', 't-shirt', 'table', 'teapot', 'teddy-bear',
'telephone', 'television', 'tennis_racquet', 'tent', 'tiger', 'toaster', 'toe', 'toilet', 'tooth',
'toothbrush', 'toothpaste', 'tornado', 'tractor', 'traffic_light', 'train', 'tree', 'triangle',
'trombone', 'truck', 'trumpet', 'umbrella', 'underwear', 'van', 'vase', 'violin', 'washing_machine',
'watermelon', 'waterslide', 'whale', 'wheel', 'windmill', 'wine_bottle', 'wine_glass', 'wristwatch',
'yoga', 'zebra', 'zigzag']
#small dataset for debug
# CLASS_NAME = ['apple','bee', 'cat', 'fish', 'frog', 'leaf']
NUM_CLASS = len(CLASS_NAME)
TRAIN_DF = []
TEST_DF = []
def null_augment(drawing,label,index):
cache = Struct(drawing = drawing.copy(), label = label, index=index)
image = drawing_to_temporal_image(drawing, 64, 64)
return image, label, cache
def null_collate(batch):
batch_size = len(batch)
cache = []
input = []
truth = []
for b in range(batch_size):
input.append(batch[b][0])
truth.append(batch[b][1])
cache.append(batch[b][2])
input = np.array(input).transpose(0,3,1,2)
input = torch.from_numpy(input).float()
if truth[0] is not None:
truth = np.array(truth)
truth = torch.from_numpy(truth).long()
return input, truth, cache
#----------------------------------------
# def drawing_to_temporal_image(drawing, H, W):
# point=[]
# time =[]
# for t,(x,y) in enumerate(drawing):
# point.append(np.array((x,y),np.float32).T)
# time.append(np.full(len(x),t))
# point = np.concatenate(point).astype(np.float32)
# time = np.concatenate(time ).astype(np.int32)
# #--------
# image = np.full((H,W,3),0,np.uint8)
# x_max = point[:,0].max()
# x_min = point[:,0].min()
# y_max = point[:,1].max()
# y_min = point[:,1].min()
# w = x_max-x_min
# h = y_max-y_min
# #print(w,h)
# s = max(w,h)
# norm_point = (point-[x_min,y_min])/s
# norm_point = (norm_point-[w/s*0.5,h/s*0.5])*max(W,H)*0.85
# norm_point = np.floor(norm_point + [W/2,H/2]).astype(np.int32)
# #--------
# T = time.max()+1
# for t in range(T):
# p = norm_point[time==t]
# x,y = p.T
# image[y,x]=255
# N = len(p)
# for i in range(N-1):
# x0,y0 = p[i]
# x1,y1 = p[i+1]
# cv2.line(image,(x0,y0),(x1,y1),(255,255,255),1,cv2.LINE_AA)
# return image
def drawing_to_temporal_image(drawing, H, W):
point=[]
time =[]
# print("drawing: ", drawing)
for t,(x,y) in enumerate(drawing):
# print("x: ", x)
# print("y: ", y)
# print("np.full(len(x),t): ", np.full(len(x),t))
point.append(np.array((x,y),np.float32).T)
time.append(np.full(len(x),t))
point = np.concatenate(point).astype(np.float32)
time = np.concatenate(time ).astype(np.int32)
#--------
image = np.full((H,W,3),0,np.uint8)
x_max = point[:,0].max()
x_min = point[:,0].min()
y_max = point[:,1].max()
y_min = point[:,1].min()
w = x_max-x_min
h = y_max-y_min
#print(w,h)
s = max(w,h)
norm_point = (point-[x_min,y_min])/s
norm_point = (norm_point-[w/s*0.5,h/s*0.5])*max(W,H)*0.85
norm_point = np.floor(norm_point + [W/2,H/2]).astype(np.int32)
#--------
# print("time: ", time)
# print("time.max() : ", time.max())
# print("norm_point : ", norm_point)
# print("image : ", image)
T = time.max()+1
# print("T : ", T)
pixel_value_interval = 255/T
for t in range(T):
pixel2_value = int(pixel_value_interval*(t+1))
pixel3_value = 255 - pixel2_value
p = norm_point[time==t]
x,y = p.T
# print("x 2: ", x)
# print("y 2: ", y)
# print("image[y,x] 1: ", image[y,x])
# image[y,x]=255
image[y,x]=[255,pixel2_value,pixel3_value]
# print("image[y,x] 2: ", image[y,x])
N = len(p)
for i in range(N-1):
x0,y0 = p[i]
x1,y1 = p[i+1]
cv2.line(image,(x0,y0),(x1,y1),(255,pixel2_value,pixel3_value),1,cv2.LINE_AA)
# cv2.line(image,(x0,y0),(x1,y1),(255,255,255),1,cv2.LINE_AA)
return image
# def drawing_to_image(drawing, H, W):
# point=[]
# time =[]
# for t,(x,y) in enumerate(drawing):
# point.append(np.array((x,y),np.float32).T)
# time.append(np.full(len(x),t))
# point = np.concatenate(point).astype(np.float32)
# time = np.concatenate(time ).astype(np.int32)
# #--------
# image = np.full((H,W,3),0,np.uint8)
# x_max = point[:,0].max()
# x_min = point[:,0].min()
# y_max = point[:,1].max()
# y_min = point[:,1].min()
# w = x_max-x_min
# h = y_max-y_min
# s = max(w,h)
# norm_point = (point-[x_min,y_min])/s
# norm_point = (norm_point-[w/s*0.5,h/s*0.5])*max(W,H)*0.85
# norm_point = np.floor(norm_point + [W/2,H/2]).astype(np.int32)
# #--------
# T = time.max()+1
# for t in range(T):
# p = norm_point[time==t]
# x,y = p.T
# image[y,x]=255
# N = len(p)
# for i in range(N-1):
# x0,y0 = p[i]
# x1,y1 = p[i+1]
# cv2.line(image,(x0,y0),(x1,y1),(255,255,255),1,cv2.LINE_AA)
# return image
class DoodleDataset(Dataset):
def __init__(self, mode, split='<NIL>', augment = null_augment, complexity = 'simplified'):
super(DoodleDataset, self).__init__()
assert complexity in ['simplified', 'raw']
start = timer()
self.split = split
self.augment = augment
self.mode = mode
self.complexity = complexity
self.df = []
self.id = []
if mode=='train':
global TRAIN_DF
# countrycode, drawing, key_id, recognized, timestamp, word
if TRAIN_DF == []:
for l,name in enumerate(CLASS_NAME):
print('\r\t load df : %3d/%3d %24s %s'%(l,NUM_CLASS,name,time_to_str((timer() - start),'sec')),end='',flush=True)
name = name.replace('_', ' ')
# df = pd.read_csv(DATA_DIR + '/csv/train_%s/%s.csv'%(complexity,name))
df = pd.read_csv(CSV_DATA_DIR + '/train_%s/%s.csv'%(complexity,name))
TRAIN_DF.append(df)
print('')
self.df = TRAIN_DF
for l,name in enumerate(CLASS_NAME):
print('\r\t load split: %3d/%3d %24s %s'%(l,NUM_CLASS,name,time_to_str((timer() - start),'sec')),end='',flush=True)
name = name.replace('_', ' ')
df = TRAIN_DF[l]
#key_id = np.loadtxt(DATA_DIR + '/split/%s/%s'%(split,name), np.int64)
key_id = np.load(NPY_DATA_DIR + '/split/%s/%s.npy'%(split,name))
label = np.full(len(key_id),l,np.int64)
drawing_id = df.loc[df['key_id'].isin(key_id)].index.values
self.id.append(
np.vstack([label, drawing_id, key_id]).T
)
self.id = np.concatenate(self.id)
print('')
print('self.id: ', self.id)
print('self.id[0]: ', self.id[0])
# self.id[0]: [ 0 0 6050317688897536]
label, drawing_id, key_id = self.id[0]
drawing = self.df[label]['drawing'][drawing_id]
print('drawing1: ', drawing)
drawing = eval(drawing)
print('drawing2: ', drawing)
example_input = null_augment(drawing, label, 0)
print('example_input: ', example_input)
print('example_input[0].shape: ', example_input[0].shape)
if mode=='test':
global TEST_DF
# key_id, countrycode, drawing
if TEST_DF == []:
# TEST_DF = pd.read_csv(DATA_DIR + '/csv/test_%s.csv'%(complexity))
TEST_DF = pd.read_csv(CSV_DATA_DIR + '/test_%s.csv'%(complexity))
self.id = np.arange(0,len(TEST_DF))
self.df = TEST_DF
if mode=='valid':
global TEST_DF
# key_id, countrycode, drawing
if TEST_DF == []:
# TEST_DF = pd.read_csv(DATA_DIR + '/csv/test_%s.csv'%(complexity))
TEST_DF = pd.read_csv(CSV_DATA_DIR + '/valid_%s.csv'%(complexity))
self.id = np.arange(0,len(TEST_DF))
self.df = TEST_DF
print('')
def __str__(self):
N = len(self.id)
string = ''\
+ '\tsplit = %s\n'%self.split \
+ '\tmode = %s\n'%self.mode \
+ '\tcomplexity = %s\n'%self.complexity \
+ '\tlen(self.id) = %d\n'%N \
+ '\n'
return string
def __getitem__(self, index):
if self.mode=='train':
label, drawing_id, key_id = self.id[index]
drawing = self.df[label]['drawing'][drawing_id]
drawing = eval(drawing)
if self.mode=='test':
label=None
drawing = self.df['drawing'][index]
drawing = eval(drawing)
return self.augment(drawing, label, index)
def __len__(self):
return len(self.id)
# check #################################################################
def run_check_train_data():
dataset = DoodleDataset('train', 'train_0')
print(dataset)
#--
num = len(dataset)
for m in range(num):
#i = m
i = np.random.choice(num)
image, label, cache = dataset[i]
print('%8d %8d : %3d %s'%(i,cache.index,label,CLASS_NAME[label]))
overlay=255-image
image_show('overlay',overlay, resize=2)
cv2.waitKey(0)
def run_check_test_data():
dataset = DoodleDataset('test')
print(dataset)
#--
num = len(dataset)
for m in range(num):
i = m
#i = np.random.choice(num)
image, label, cache = dataset[i]
print('%8d %8d : '%(i,cache.index))
overlay=255-image
image_show('overlay',overlay, resize=2)
cv2.waitKey(0)
# + _uuid="b290eb5c7e86512b127bec4638e1ab84e1e9f0a0"
# resnet.py
# BatchNorm2d = SynchronizedBatchNorm2d
BatchNorm2d = nn.BatchNorm2d
def conv3x3(in_planes, out_planes, stride=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=1, bias=False)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(BasicBlock, self).__init__()
self.conv1 = conv3x3(inplanes, planes, stride)
self.bn1 = BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes, planes)
self.bn2 = BatchNorm2d(planes)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, inplanes, planes, stride=1, downsample=None):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
self.bn1 = BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
self.bn3 = BatchNorm2d(planes * self.expansion)
self.relu = nn.ReLU(inplace=True)
self.downsample = downsample
self.stride = stride
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
if self.downsample is not None:
residual = self.downsample(x)
out += residual
out = self.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, layers, num_classes=1000):
self.inplanes = 64
super(ResNet, self).__init__()
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
self.bn1 = BatchNorm2d(64)
self.relu = nn.ReLU(inplace=True)
self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.layer1 = self._make_layer(block, 64, layers[0])
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
self.avgpool = nn.AvgPool2d(7, stride=1)
self.fc = nn.Linear(512 * block.expansion, num_classes)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def _make_layer(self, block, planes, blocks, stride=1):
downsample = None
if stride != 1 or self.inplanes != planes * block.expansion:
downsample = nn.Sequential(
nn.Conv2d(self.inplanes, planes * block.expansion,
kernel_size=1, stride=stride, bias=False),
BatchNorm2d(planes * block.expansion),
)
layers = []
layers.append(block(self.inplanes, planes, stride, downsample))
self.inplanes = planes * block.expansion
for i in range(1, blocks):
layers.append(block(self.inplanes, planes))
return nn.Sequential(*layers)
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = self.relu(x)
x = self.maxpool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avgpool(x)
x = x.view(x.size(0), -1)
x = self.fc(x)
return x
# + _uuid="896d44db6e26b05600e8ee53542387779b75fa5d"
# model32_resnet34.py
def softmax_cross_entropy_criterion(logit, truth, is_average=True):
loss = F.cross_entropy(logit, truth, reduce=is_average)
return loss
def metric(logit, truth, is_average=True):
with torch.no_grad():
prob = F.softmax(logit, 1)
value, top = prob.topk(3, dim=1, largest=True, sorted=True)
correct = top.eq(truth.view(-1, 1).expand_as(top))
if is_average==True:
# top-3 accuracy
correct = correct.float().sum(0, keepdim=False)
correct = correct/len(truth)
top = [correct[0], correct[0]+correct[1], correct[0]+correct[1]+correct[2]]
precision = correct[0]/1 + correct[1]/2 + correct[2]/3
return precision, top
else:
return correct
###########################################################################################3
class Net(nn.Module):
def load_pretrain(self, pretrain_file):
#raise NotImplementedError
#self.resnet.load_state_dict(torch.load(pretrain_file, map_location=lambda storage, loc: storage))
pretrain_state_dict = torch.load(pretrain_file)
# print("pretrain_state_dict.keys(): ", pretrain_state_dict.keys())
state_dict = self.state_dict()
# print("state_dict.keys(): ", state_dict.keys())
keys = list(state_dict.keys())
for key in keys:
if any(s in key for s in []):
continue
if "num_batches_tracked" in key:
continue
# if key.startswith('conv1.0'):
# state_dict[key] = pretrain_state_dict[key.replace('conv1.0','conv1')]
# if key.startswith('conv1.1'):
# state_dict[key] = pretrain_state_dict[key.replace('conv1.1','bn1')]
# if 'resnet.conv1.' in key:
# state_dict[key] = pretrain_state_dict[key.replace('resnet.conv1.','conv1.')]
# if 'resnet.bn1.' in key:
# state_dict[key] = pretrain_state_dict[key.replace('resnet.bn1.','bn1.')]
if 'encoder1.0.' in key:
state_dict[key] = pretrain_state_dict[key.replace('encoder1.0.','conv1.')]
# print(key)
if 'encoder1.1.' in key:
state_dict[key] = pretrain_state_dict[key.replace('encoder1.1.','bn1.')]
# print(key)
if any(s in key for s in []):
continue
if 'resnet.layer0.' in key:
state_dict[key] = pretrain_state_dict[key.replace('resnet.layer0.','layer0.')]
# print(key)
if 'resnet.layer1.' in key:
# print('key1: ',key)
# print('pretrain_state_dict: ',pretrain_state_dict)
state_dict[key] = pretrain_state_dict[key.replace('resnet.layer1.','layer1.')]
# print('key2: ',key)
if 'resnet.layer2.' in key:
state_dict[key] = pretrain_state_dict[key.replace('resnet.layer2.','layer2.')]
# print(key)
if 'resnet.layer3.' in key:
state_dict[key] = pretrain_state_dict[key.replace('resnet.layer3.','layer3.')]
# print(key)
if 'resnet.layer4.' in key:
state_dict[key] = pretrain_state_dict[key.replace('resnet.layer4.','layer4.')]
# print(key)
self.load_state_dict(state_dict)
print('')
def __init__(self, num_class=340):
super(Net,self).__init__()
self.resnet = ResNet(BasicBlock, [3, 4, 6, 3],num_classes=1)
# self.conv1 = nn.Sequential(
# self.resnet.conv1,
# self.resnet.bn1,
# self.resnet.relu,
# #self.resnet.maxpool,
# )
self.encoder1 = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=7, stride=1, padding=3, bias=False),
BatchNorm2d(64),
nn.ReLU(inplace=True),
)
self.encoder2 = nn.Sequential(
nn.MaxPool2d(kernel_size=2, stride=2),
self.resnet.layer1,
)
self.encoder3 = self.resnet.layer2
self.encoder4 = self.resnet.layer3
self.encoder5 = self.resnet.layer4
self.logit = nn.Linear(512, num_class)
def forward(self, x):
batch_size,C,H,W = x.shape
mean=[0.485, 0.456, 0.406] #rgb
std =[0.229, 0.224, 0.225]
x = torch.cat([
(x[:,[0]]-mean[0])/std[0],
(x[:,[1]]-mean[1])/std[1],
(x[:,[2]]-mean[2])/std[2],
],1)
x = self.encoder1(x) #; print('e1',x.size())
x = self.encoder2(x) #; print('e2',x.size())
x = self.encoder3(x) #; print('e3',x.size())
x = self.encoder4(x) #; print('e4',x.size())
x = self.encoder5(x) #; print('e5',x.size())
x = F.adaptive_avg_pool2d(x, output_size=1).view(batch_size,-1)
x = F.dropout(x, p=0.50, training=self.training)
logit = self.logit(x)
return logit
def set_mode(self, mode, is_freeze_bn=False ):
self.mode = mode
if mode in ['eval', 'valid', 'test']:
self.eval()
elif mode in ['train']:
self.train()
if is_freeze_bn==True: ##freeze
for m in self.modules():
if isinstance(m, BatchNorm2d):
m.eval()
m.weight.requires_grad = False
m.bias.requires_grad = False
# + _uuid="2e2f4b517477173b2caeb6f568db01c1c45acc5a"
# rate.py
class NullScheduler():
def __init__(self, lr=0.01 ):
super(NullScheduler, self).__init__()
self.lr = lr
self.cycle = 0
def __call__(self, time):
return self.lr
def __str__(self):
string = 'NullScheduler\n' \
+ 'lr=%0.5f '%(self.lr)
return string
# https://github.com/pytorch/examples/blob/master/imagenet/main.py ###############
def adjust_learning_rate(optimizer, lr):
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def get_learning_rate(optimizer):
lr=[]
for param_group in optimizer.param_groups:
lr +=[ param_group['lr'] ]
assert(len(lr)==1) #we support only one param_group
lr = lr[0]
return lr
# file.py
#https://stackoverflow.com/questions/1855095/how-to-create-a-zip-archive-of-a-directory
def backup_project_as_zip(project_dir, zip_file):
assert(os.path.isdir(project_dir))
assert(os.path.isdir(os.path.dirname(zip_file)))
shutil.make_archive(zip_file.replace('.zip',''), 'zip', project_dir)
pass
def time_to_str(t, mode='min'):
if mode=='min':
t = int(t)/60
hr = t//60
min = t%60
return '%2d hr %02d min'%(hr,min)
elif mode=='sec':
t = int(t)
min = t//60
sec = t%60
return '%2d min %02d sec'%(min,sec)
else:
raise NotImplementedError
# http://stackoverflow.com/questions/34950201/pycharm-print-end-r-statement-not-working
class Logger(object):
def __init__(self):
self.terminal = sys.stdout #stdout
self.file = None
def open(self, file, mode=None):
if mode is None: mode ='w'
self.file = open(file, mode)
def write(self, message, is_terminal=1, is_file=1 ):
if '\r' in message: is_file=0
if is_terminal == 1:
self.terminal.write(message)
self.terminal.flush()
#time.sleep(1)
if is_file == 1:
self.file.write(message)
self.file.flush()
def flush(self):
# this flush method is needed for python 3 compatibility.
# this handles the flush command by doing nothing.
# you might want to specify some extra behavior here.
pass
# + _uuid="a554dfa49ece118b939b020b5590f3ddd6a2b62f"
# train.py
def valid_augment(drawing, label, index):
cache = Struct(drawing = drawing.copy(), label = label, index=index)
image = drawing_to_temporal_image(drawing, 64, 64)
# image = drawing_to_temporal_image(drawing, 32, 32)
return image, label, cache
def train_augment(drawing, label, index):
cache = Struct(drawing = drawing.copy(), label = label, index=index)
## <todo> augmentation ....
image = drawing_to_temporal_image(drawing, 64, 64)
# image = drawing_to_temporal_image(drawing, 32, 32)
return image, label, cache
### training ##############################################################
def do_valid( net, valid_loader, criterion ):
valid_num = 0
probs = []
truths = []
losses = []
corrects = []
for input, truth, cache in valid_loader:
input = input.cuda()
truth = truth.cuda()
with torch.no_grad():
logit = net(input)
prob = F.softmax(logit,1)
loss = criterion(logit, truth, False)
correct = metric(logit, truth, False)
valid_num += len(input)
probs.append(prob.data.cpu().numpy())
losses.append(loss.data.cpu().numpy())
corrects.append(correct.data.cpu().numpy())
truths.append(truth.data.cpu().numpy())
assert(valid_num == len(valid_loader.sampler))
#------------------------------------------------------
prob = np.concatenate(probs)
correct = np.concatenate(corrects)
truth = np.concatenate(truths).astype(np.int32).reshape(-1,1)
loss = np.concatenate(losses)
#---
#top = np.argsort(-predict,1)[:,:3]
loss = loss.mean()
correct = correct.mean(0)
top = [correct[0], correct[0]+correct[1], correct[0]+correct[1]+correct[2]]
precision = correct[0]/1 + correct[1]/2 + correct[2]/3
#----
valid_loss = np.array([
loss, top[0], top[2], precision
])
return valid_loss
def run_train():
fold = 0
out_dir = \
'../output'
os.makedirs(out_dir, exist_ok=True)
initial_checkpoint = \
None
pretrain_file = \
'../pretrained/resnet34-333f7ec4.pth'
schduler = NullScheduler(lr=0.01)
iter_save_interval = 2000
criterion = softmax_cross_entropy_criterion
## setup -----------------------------------------------------------------------------
os.makedirs(out_dir +'/checkpoint', exist_ok=True)
os.makedirs(out_dir +'/train', exist_ok=True)
os.makedirs(out_dir +'/backup', exist_ok=True)
backup_project_as_zip(PROJECT_PATH, out_dir +'/backup/code.train.%s.zip'%IDENTIFIER)
log = Logger()
log.open(out_dir+'/log.train.txt',mode='a')
log.write('\n--- [START %s] %s\n\n' % (IDENTIFIER, '-' * 64))
log.write('\tSEED = %u\n' % SEED)
log.write('\tPROJECT_PATH = %s\n' % PROJECT_PATH)
# log.write('\t__file__ = %s\n' % __file__)
log.write('\tout_dir = %s\n' % out_dir)
log.write('\n')
log.write('\t<additional comments>\n')
log.write('\t ... xxx baseline ... \n')
log.write('\n')
## dataset ----------------------------------------
log.write('** dataset setting **\n')
batch_size = 128 #16 #32
train_dataset = DoodleDataset('train', 'train_0', train_augment)
train_loader = DataLoader(
train_dataset,
#sampler = FixLengthRandomSamplerWithProbability(train_dataset, probability),
#sampler = FixLengthRandomSampler(train_dataset),
#sampler = ConstantSampler(train_dataset,[31]*batch_size*100),
sampler = RandomSampler(train_dataset),
batch_size = batch_size,
drop_last = True,
num_workers = 2,
pin_memory = True,
collate_fn = null_collate)
valid_dataset = DoodleDataset('train', 'valid_0', valid_augment)
valid_loader = DataLoader(
valid_dataset,
#sampler = SequentialSampler(valid_dataset),
sampler = RandomSampler(valid_dataset),
batch_size = batch_size,
drop_last = False,
num_workers = 2,
pin_memory = True,
collate_fn = null_collate)
assert(len(train_dataset)>=batch_size)
log.write('batch_size = %d\n'%(batch_size))
log.write('train_dataset : \n%s\n'%(train_dataset))
log.write('valid_dataset : \n%s\n'%(valid_dataset))
log.write('\n')
## net ----------------------------------------
log.write('** net setting **\n')
net = Net().cuda()
if initial_checkpoint is not None:
log.write('\tinitial_checkpoint = %s\n' % initial_checkpoint)
net.load_state_dict(torch.load(initial_checkpoint, map_location=lambda storage, loc: storage))
if pretrain_file is not None:
log.write('\tpretrain_file = %s\n' % pretrain_file)
net.load_pretrain(pretrain_file)
#net = load_pretrain(net, pretrain_file)
log.write('%s\n'%(type(net)))
log.write('criterion=%s\n'%criterion)
log.write('\n')
## optimiser ----------------------------------
if 0: ##freeze
for p in net.resnet.parameters(): p.requires_grad = False
for p in net.encoder1.parameters(): p.requires_grad = False
for p in net.encoder2.parameters(): p.requires_grad = False
for p in net.encoder3.parameters(): p.requires_grad = False
for p in net.encoder4.parameters(): p.requires_grad = False
pass
#net.set_mode('train',is_freeze_bn=True)
#-----------------------------------------------
optimizer = optim.SGD(filter(lambda p: p.requires_grad, net.parameters()),
lr=schduler(0), momentum=0.9, weight_decay=0.0001)
# num_iters = 1 *1000
num_iters = 300 *1000
iter_smooth = 20
iter_log = 50
iter_valid = 100
iter_save = [0, num_iters-1]\
+ list(range(0, num_iters, iter_save_interval))#1*1000
start_iter = 0
start_epoch= 0
rate = 0
if initial_checkpoint is not None:
initial_optimizer = initial_checkpoint.replace('_model.pth','_optimizer.pth')
checkpoint = torch.load(initial_optimizer)
start_iter = checkpoint['iter' ]
start_epoch = checkpoint['epoch']
#rate = get_learning_rate(optimizer) #load all except learning rate
#optimizer.load_state_dict(checkpoint['optimizer'])
#adjust_learning_rate(optimizer, rate)
pass
log.write('schduler\n %s\n'%(schduler))
log.write('\n')
## start training here! ##############################################
log.write('** start training here! **\n')
log.write(' |------------ VALID -------------|-------- TRAIN/BATCH ----------| \n')
log.write('rate iter epoch | loss acc-1 acc-3 lb | loss acc-1 acc-3 lb | time \n')
log.write('----------------------------------------------------------------------------------------------------\n')
train_loss = np.zeros(6,np.float32)
valid_loss = np.zeros(6,np.float32)
batch_loss = np.zeros(6,np.float32)
iter = 0
i = 0
last_max_lb = -1
start = timer()
while iter<num_iters:
sum_train_loss = np.zeros(6,np.float32)
sum = 0
optimizer.zero_grad()
for input, truth, cache in train_loader:
len_train_dataset = len(train_dataset)
batch_size = len(cache)
iter = i + start_iter
epoch = (iter-start_iter)*batch_size/len_train_dataset + start_epoch
num_samples = epoch*len_train_dataset
if (iter % iter_valid==0) and (iter!=0):
net.set_mode('valid')
valid_loss = do_valid(net, valid_loader, criterion)
net.set_mode('train')
##--------
# lb = valid_loss[7]
# loss = valid_loss[0] + valid_loss[4]
# last_max_lb = max(last_max_lb,lb)
# if last_max_lb-lb<0.005:
# iter_save += [iter,]
# if loss-last_min_loss<0.005:
# iter_save += [iter,]
asterisk = '*' if iter in iter_save else ' '
##--------
print('\r',end='',flush=True)
log.write('%0.4f %5.1f %6.1f | %0.3f %0.3f %0.3f (%0.3f)%s | %0.3f %0.3f %0.3f (%0.3f) | %s' % (\
rate, iter/1000, epoch,
valid_loss[0], valid_loss[1], valid_loss[2], valid_loss[3],asterisk,
train_loss[0], train_loss[1], train_loss[2], train_loss[3],
time_to_str((timer() - start),'min'))
)
log.write('\n')
time.sleep(0.01)
#if 0:
if iter in iter_save:
torch.save(net.state_dict(),out_dir +'/checkpoint/%08d_model.pth'%(iter))
torch.save({
#'optimizer': optimizer.state_dict(),
'iter' : iter,
'epoch' : epoch,
}, out_dir +'/checkpoint/%08d_optimizer.pth'%(iter))
pass
# learning rate schduler -------------
lr = schduler(iter)
if lr<0 : break
adjust_learning_rate(optimizer, lr)
rate = get_learning_rate(optimizer)
# one iteration update -------------
#net.set_mode('train',is_freeze_bn=True)
net.set_mode('train')
input = input.cuda()
truth = truth.cuda()
logit = data_parallel(net, input)
# print("logit.size(): ", logit.size())
loss = criterion(logit, truth)
precision, top = metric(logit, truth)
loss.backward()
optimizer.step()
optimizer.zero_grad()
#torch.nn.utils.clip_grad_norm(net.parameters(), 1)
# print statistics ------------
batch_loss[:4] = np.array(( loss.item(), top[0].item(), top[2].item(), precision.item(),))
sum_train_loss += batch_loss
sum += 1
if iter%iter_smooth == 0:
train_loss = sum_train_loss/sum
sum_train_loss = np.zeros(6,np.float32)
sum = 0
print('\r',end='',flush=True)
print('%0.4f %5.1f %6.1f | %0.3f %0.3f %0.3f (%0.3f)%s | %0.3f %0.3f %0.3f (%0.3f) | %s' % (\
rate, iter/1000, epoch,
valid_loss[0], valid_loss[1], valid_loss[2], valid_loss[3],' ',
batch_loss[0], batch_loss[1], batch_loss[2], batch_loss[3],
time_to_str((timer() - start),'min'))
, end='',flush=True)
i=i+1
pass #-- end of one data loader --
pass #-- end of all iterations --
if 1: #save last
torch.save(net.state_dict(),out_dir +'/checkpoint/%d_model.pth'%(i))
torch.save({
'optimizer': optimizer.state_dict(),
'iter' : i,
'epoch' : epoch,
}, out_dir +'/checkpoint/%d_optimizer.pth'%(i))
log.write('\n')
# + _uuid="53751b7abedc929abc24dc121f721d25e23ab53b"
train_dataset1 = DoodleDataset('train', 'train_0', train_augment)
# + _uuid="944c65e5e6f9d14d3e4f94d56ab1a4e9afd94858"
run_train()
# + [markdown] _uuid="096253507580fc200b792b291da51142fb22d46b"
# # Make validation & test prediction
# + _uuid="6cf3d06b4010faecf31a7706eeccfdb660a83e37"
# local_submit.py
def test_augment(drawing,label,index, augment):
cache = Struct(data = drawing.copy(), label = label, index=index)
#<todo> ... different test-time augment ...
image = drawing_to_temporal_image(drawing, 64, 64)
# image = drawing_to_temporal_image(drawing, 32, 32)
return image, label, cache
##############################################################################################
#generate prediction npy_file
def make_npy_file_from_model(checkpoint, mode, split, augment, out_test_dir, npy_file):
## setup -----------------
# os.makedirs(out_test_dir +'/backup', exist_ok=True)
# backup_project_as_zip(PROJECT_PATH, out_dir +'/backup/code.test.%s.zip'%IDENTIFIER)
log = Logger()
log.open(out_test_dir +'/log.submit.txt',mode='a')
log.write('\n--- [START %s] %s\n\n' % (IDENTIFIER, '-' * 64))
log.write('\tSEED = %u\n' % SEED)
log.write('\tPROJECT_PATH = %s\n' % PROJECT_PATH)
log.write('\tout_test_dir = %s\n' % out_test_dir)
log.write('\n')
## dataset ----------------------------------------
log.write('** dataset setting **\n')
batch_size = 512 #256 #512
test_dataset = DoodleDataset(mode, split,
lambda drawing, label, index : test_augment(drawing, label, index, augment),)
test_loader = DataLoader(
test_dataset,
sampler = SequentialSampler(test_dataset),
batch_size = batch_size,
drop_last = False,
num_workers = 2,
pin_memory = True,
collate_fn = null_collate)
assert(len(test_dataset)>=batch_size)
log.write('test_dataset : \n%s\n'%(test_dataset))
log.write('\n')
## net ----------------------------------------
log.write('** net setting **\n')
net = Net().cuda()
log.write('%s\n\n'%(type(net)))
log.write('\n')
if 1:
log.write('\tcheckpoint = %s\n' % checkpoint)
net.load_state_dict(torch.load(checkpoint, map_location=lambda storage, loc: storage))
####### start here ##########################
criterion = softmax_cross_entropy_criterion
test_num = 0
probs = []
truths = []
losses = []
corrects = []
net.set_mode('test')
for input, truth, cache in test_loader:
print('\r\t',test_num, end='', flush=True)
test_num += len(truth)
with torch.no_grad():
input = input.cuda()
logit = data_parallel(net,input)
prob = F.softmax(logit,1)
probs.append(prob.data.cpu().numpy())
if mode=='train': # debug only
truth = truth.cuda()
loss = criterion(logit, truth, False)
correct = metric(logit, truth, False)
losses.append(loss.data.cpu().numpy())
corrects.append(correct.data.cpu().numpy())
truths.append(truth.data.cpu().numpy())
assert(test_num == len(test_loader.sampler))
print('\r\t',test_num, end='\n', flush=True)
prob = np.concatenate(probs)
if mode=='train': # debug only
correct = np.concatenate(corrects)
truth = np.concatenate(truths).astype(np.int32).reshape(-1,1)
loss = np.concatenate(losses)
loss = loss.mean()
correct = correct.mean(0)
top = [correct[0], correct[0]+correct[1], correct[0]+correct[1]+correct[2]]
precision = correct[0]/1 + correct[1]/2 + correct[2]/3
print('top ', top)
print('precision', precision)
print('')
#-------------------------------------------
np.save(npy_file, np_float32_to_uint8(prob))
print(prob.shape)
log.write('\n')
def prob_to_csv(prob, key_id, csv_file):
top = np.argsort(-prob,1)[:,:3]
word = []
for (t0,t1,t2) in top:
word.append(
CLASS_NAME[t0] + ' ' + \
CLASS_NAME[t1] + ' ' + \
CLASS_NAME[t2]
)
df = pd.DataFrame({ 'key_id' : key_id , 'word' : word}).astype(str)
df.to_csv(csv_file, index=False, columns=['key_id', 'word'], compression='gzip')
def npy_file_to_sbmit_csv(mode, split, npy_file, csv_file):
print('NUM_CLASS', NUM_CLASS)
complexity='simplified'
if mode=='train':
raise NotImplementedError
if mode=='test':
assert(NUM_CLASS==340)
global TEST_DF
if TEST_DF == []:
TEST_DF = pd.read_csv(DATA_DIR + '/csv/test_%s.csv'%(complexity))
key_id = TEST_DF['key_id'].values
prob = np_uint8_to_float32(np.load(npy_file))
print(prob.shape)
prob_to_csv(prob, key_id, csv_file)
#################################################################################################3
def run_test_fold():
mode = 'test' #'train'
configures =[
Struct(
split = '<NIL>', #'valid_0', #
out_test_dir = '../submission',
checkpoint = '../output/checkpoint/317900_model.pth',
),
]
for configure in configures:
split = configure.split
out_test_dir = configure.out_test_dir
checkpoint = configure.checkpoint
augment = 'null'
npy_file = out_test_dir + '/%s-%s.prob.uint8.npy'%(mode,augment)
csv_file = out_test_dir + '/%s-%s.submit.csv.gz'%(mode,augment)
make_npy_file_from_model(checkpoint, mode, split, augment, out_test_dir, npy_file)
npy_file_to_sbmit_csv(mode, split, npy_file, csv_file)
# + _uuid="01cae71871d20cba6f7a28371ccdfbdd965ceb73"
run_test_fold()
# + _uuid="3faa3e44fdca9df2c03c82fee1ad01f10aff1b36"
| projects/project05/Notebook/model_alvin.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="UBpbr4JZKYTz" colab_type="text"
# # MNIST Digit Classification Using Recurrent Neural Networks
# + id="CxeZAiQkLMNR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 704} outputId="77ad5785-4e41-40d0-896e-ed9e6873b4ae"
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import argparse
######################
# Optimization Flags #
######################
learning_rate = 0.001 # initial learning rate
seed = 111
##################
# Training Flags #
##################
batch_size = 128 # Batch size for training
num_epoch = 10 # Number of training iterations
###############
# Model Flags #
###############
hidden_size = 128 # Number of neurons for RNN hodden layer
# Reset the graph set the random numbers to be the same using "seed"
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# Divide 28x28 images to rows of data to feed to RNN as sequantial information
step_size = 28
input_size = 28
output_size = 10
# Input tensors
X = tf.placeholder(tf.float32, [None, step_size, input_size])
y = tf.placeholder(tf.int32, [None])
# Rnn
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=hidden_size)
output, state = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
# Forward pass and loss calcualtion
logits = tf.layers.dense(state, output_size)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(cross_entropy)
# optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)
# Prediction
prediction = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(prediction, tf.float32))
# input data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/")
# Process MNIST
X_test = mnist.test.images # X_test shape: [num_test, 28*28]
X_test = X_test.reshape([-1, step_size, input_size])
y_test = mnist.test.labels
# initialize the variables
init = tf.global_variables_initializer()
# Empty list for tracking
loss_train_list = []
acc_train_list = []
# train the model
with tf.Session() as sess:
sess.run(init)
n_batches = mnist.train.num_examples // batch_size
for epoch in range(num_epoch):
for batch in range(n_batches):
X_train, y_train = mnist.train.next_batch(batch_size)
X_train = X_train.reshape([-1, step_size, input_size])
sess.run(optimizer, feed_dict={X: X_train, y: y_train})
loss_train, acc_train = sess.run(
[loss, accuracy], feed_dict={X: X_train, y: y_train})
loss_train_list.append(loss_train)
acc_train_list.append(acc_train)
print('Epoch: {}, Train Loss: {:.3f}, Train Acc: {:.3f}'.format(
epoch + 1, loss_train, acc_train))
loss_test, acc_test = sess.run(
[loss, accuracy], feed_dict={X: X_test, y: y_test})
print('Test Loss: {:.3f}, Test Acc: {:.3f}'.format(loss_test, acc_test))
# + id="nkPppIILLN5Z" colab_type="code" colab={}
| codes/3-neural_networks/recurrent-neural-networks/code/rnn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import os
from pyfunctions.sentiment_functions import sentiment_score_df
from pyfunctions.sentiment_functions import export_sentiment_laden_text
# -
def export_sentiment_laden_text(df, col_name, export_path, export_fname):
scored = sentiment_score_df(nouns_modifiers, col_name)
scored['combined_score'] = scored['afinn'] + scored['textblob'] + scored['vader']
scored = scored[scored['combined_score'] != 0.000000]
x
if not os.path.exists(export_path):
os.mkdir(export_path)
scored.to_csv(export_path + export_fname, index = False)
nouns_modifiers = pd.read_csv('/scratch/group/pract-txt-mine/stuff_from_last_couple_months/collocates_noun_modifiers_property_keywords_07192021.csv')
export_sentiment_laden_text(nouns_modifiers, 'grammatical_collocates', '/users/sbuongiorno/', 'collocates.csv')
| collocates/nouns-modifiers/sentiment_laden_nouns_modifiers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Import Libraries
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
import sklearn.model_selection
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_absolute_error,mean_squared_error,r2_score
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
import statsmodels
import statsmodels.api as sm
import statsmodels.stats.api as sms
from statsmodels.tools.eval_measures import rmse
from statsmodels.stats.outliers_influence import variance_inflation_factor
import warnings
warnings.filterwarnings("ignore")
# -
df_train = pd.read_csv('Train.csv')
df_train.head()
df_test = pd.read_csv('Test.csv')
df_test.head()
# # Data Observation
df_train.info()
# ## Check Null Values :
df_train.isnull().sum()
df_train.isnull().sum()/len(df_train)*100
# ## Null Values Imputation
df_train['Outlet_Size'] = df_train['Outlet_Size'].fillna(df_train['Outlet_Size'].mode()[0])
df_test['Outlet_Size'] = df_test['Outlet_Size'].fillna(df_test['Outlet_Size'].mode()[0])
df_train['Item_Weight'] = df_train['Item_Weight'].fillna(df_train['Item_Weight'].median())
df_test['Item_Weight'] = df_test['Item_Weight'].fillna(df_test['Item_Weight'].median())
df_train.isnull().sum()
# ## Data Description
df_train.describe().T
df_train.describe(include='object').T
plt.figure(figsize=(13,6))
sns.heatmap(df_train.corr(),cmap='coolwarm',annot=True,vmax=1,vmin=-1,linewidths=4)
# # Data Cleaning
df_train['Item_Fat_Content'].replace(['low fat','LF','reg'],['Low Fat','Low Fat','Regular'],inplace = True)
df_test['Item_Fat_Content'].replace(['low fat','LF','reg'],['Low Fat','Low Fat','Regular'],inplace = True)
df_train['Item_Fat_Content'].value_counts()
df_test['Years_Established'] = df_test['Outlet_Establishment_Year'].apply(lambda x: 2021 - x)
df_test = df_test.drop(columns=['Outlet_Establishment_Year'])
df_train['Years_Established'] = df_train['Outlet_Establishment_Year'].apply(lambda x: 2021 - x)
df_train = df_train.drop(columns=['Outlet_Establishment_Year'])
df_train.head()
# ### Drop unnecessary Features
df_train.drop(['Item_Identifier'],axis=1,inplace=True)
df_test.drop(['Item_Identifier'],axis=1,inplace=True)
df_train.drop(['Outlet_Identifier'],axis=1,inplace=True)
df_test.drop(['Outlet_Identifier'],axis=1,inplace=True)
# # Scaling And Encoding
# ## --> For Training :
# +
lr = LabelEncoder()
df_train['Item_Fat_Content'] = lr.fit_transform(pd.DataFrame(df_train['Item_Fat_Content']))
df_train['Item_Type'] = lr.fit_transform(pd.DataFrame(df_train['Item_Type']))
df_train['Outlet_Size'] = lr.fit_transform(pd.DataFrame(df_train['Outlet_Size']))
df_train['Outlet_Location_Type'] = lr.fit_transform(pd.DataFrame(df_train['Outlet_Location_Type']))
# -
df_train = pd.get_dummies(df_train,drop_first=True)
# ## --> For Training :
# +
df_test['Item_Fat_Content'] = lr.fit_transform(pd.DataFrame(df_test['Item_Fat_Content']))
df_test['Item_Type'] = lr.fit_transform(pd.DataFrame(df_test['Item_Type']))
df_test['Outlet_Size'] = lr.fit_transform(pd.DataFrame(df_test['Outlet_Size']))
df_test['Outlet_Location_Type'] = lr.fit_transform(pd.DataFrame(df_test['Outlet_Location_Type']))
# -
df_test = pd.get_dummies(df_test,drop_first=True)
# #### Dividing In to X and y :
X = df_train.drop(('Item_Outlet_Sales'),axis=1)
y = df_train.Item_Outlet_Sales
# # Models :-
# ## Function For models:
from sklearn.model_selection import cross_val_score
#from sklearn.metrics import neg_mean_squared_error
import statistics
from statistics import mean
def train(model,X,y):
#train_the_model
model.fit(X,y)
#predict the model
pred = model.predict(X)
#peform Cross_validation
cv_score = cross_val_score(model,X,y,scoring='neg_mean_squared_error',cv=10)
#sklearn.metrics.SCORERS.keys() to see Scoring methods
m_cv_score = np.abs(np.mean(cv_score))
print("----------Model Report----------")
print('MSE: ',mean_squared_error(y,pred))
print('cv_Score: ',cv_score)
#Mean of Cv_Score to get Avg Value
print('Mean Cv_Score: ',m_cv_score)
# ## Linear Regression :
# +
from sklearn.linear_model import LinearRegression,Ridge,Lasso
model = LinearRegression(normalize=True)
train(model,X,y)
coef = pd.Series(model.coef_,X.columns).sort_values(ascending=False)
coef.plot(kind='bar',title='model Coefficients')
# save the model to disk
filename = 'LinearRegression.sav'
pickle.dump(model, open(filename, 'wb'))
# + [markdown] tags=[]
# ## Lasso
# +
from sklearn.linear_model import LinearRegression,Ridge,Lasso
model = Lasso(normalize=True)
train(model,X,y)
coef = pd.Series(model.coef_,X.columns).sort_values(ascending=False)
coef.plot(kind='bar',title='model Coefficients')
# save the model to disk
filename = 'Lasso.sav'
pickle.dump(model, open(filename, 'wb'))
# -
# ## Ridge
from sklearn.linear_model import LinearRegression,Ridge,Lasso
model = Ridge(normalize=True)
train(model,X,y)
coef = pd.Series(model.coef_,X.columns).sort_values(ascending=False)
coef.plot(kind='bar',title='model Coefficients')
# save the model to disk
filename = 'Ridge.sav'
pickle.dump(model, open(filename, 'wb'))
# ##### Checking Vif For Variables...
# +
def vif_score(X):
vif_score = pd.DataFrame()
vif_score['Ind_Features'] = X.columns
vif_score['vif_scores']=[variance_inflation_factor(X.values,i) for i in range (X.shape[1])]
return vif_score
vif_score(X)
# + active=""
# Intepration:
# Since the Variance of all the Features are in a range below of 15.Hence we can conclude3 that all the Features are important .
# -
# ## Feature Selection:
from sklearn.linear_model import LinearRegression
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
# +
y=y
x=df_dummy
lr=LinearRegression()
sfs= SFS(lr,k_features='best',forward=True,cv=10)
sfs.fit(x,y)
print('The features selected are : ',sfs.k_feature_names_)
print('The R2 value for the model with 5 features is :',sfs.k_score_)
# -
# ## Random Forest :
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor()
train(model,X,y)
coef = pd.Series(model.feature_importances_,X.columns).sort_values(ascending=False)
coef.plot(kind='bar',title='Feature_Importance')
# save the model to disk
filename = 'RandomForestRegressor.sav'
pickle.dump(model, open(filename, 'wb'))
# ## Elastic Net Regressor:
# +
from sklearn.linear_model import ElasticNet
model = ElasticNet(alpha=1.0, l1_ratio=0.5)
# save the model to disk
filename = 'ElasticNet.sav'
pickle.dump(model, open(filename, 'wb'))
train(model,X,y)
coef = pd.Series(model.coef_,X.columns).sort_values(ascending=False)
coef.plot(kind='bar',title='Feature_Importance')
# -
df_test.head()
x_train, x_test, y_train, y_test = train_test_split(df_dummy, y, test_size=0.2, random_state=0)
# +
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error,accuracy_score
from sklearn.metrics import r2_score
#model
regressor_rf = RandomForestRegressor(n_estimators=200,max_depth=5, min_samples_leaf=100,n_jobs=4,random_state=101)
#fit
regressor_rf.fit(x_train, y_train)
#predict
y_pred = regressor_rf.predict(x_test)
#score variables
RFR_MAE = round(mean_absolute_error(y_test, y_pred),2)
RFR_MSE = round(mean_absolute_error(y_test, y_pred),2)
RFR_R_2 = round(mean_absolute_error(y_test, y_pred),4)
print(f" Mean Absolute Error: {RFR_MAE}\n")
print(f" Mean Squared Error: {RFR_MSE}\n")
print(f" R^2 Score: {RFR_R_2}\n")
# -
# # Final Model For Pickle
model_rf = RandomForestRegressor()
RF_model_full = model_rf.fit(X , y)
import pickle
pickle.dump(RF_model_full,open('model.pkl','wb'))
# # Conclusion:
# + active=""
# As Random Forest Regressor has the lowest MSE Rate ,thus we select Random Forest Regressor as Final model for prediction.
| BigMartSales.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab
# %load_ext watermark
# %watermark
# ### 기본적인 구조: 하나의 게이트로 된 회로
# +
def forward_multiply_gate(x, y):
return x * y
x = -2
y = 3
forward_multiply_gate(x, y) # -6 이 리턴됨
# -
# #### 무작위 지역 탐색
# x, y 를 랜덤하게 조금씩 변경하면서 가장 좋은 출력을 내는 값을 추적합니다
tweak_amount = 0.01
best_out = -np.inf
best_x = x
best_y = y
for k in range(100):
x_try = x + tweak_amount * (np.random.random() * 2 - 1) # x 를 조금 변경
y_try = y + tweak_amount * (np.random.random() * 2 - 1) # y 를 조금 변경
out = forward_multiply_gate(x_try, y_try)
# 현재까지 최고 값 보다 좋은 경우 이를 새로운 최고 값으로 저장합니다
if out > best_out:
best_out = out
best_x = x_try
best_y = y_try
best_x, best_y, best_out
# #### 계산 기울기
# +
out = forward_multiply_gate(x, y)
h = 0.00001
# x 에 대한 변화율을 계산
xph = x + h
out2 = forward_multiply_gate(xph, y)
x_derivative = (out2 - out) / h
# y 에 대한 변화율을 계산
yph = y + h
out3 = forward_multiply_gate(x, yph)
y_derivative = (out3 - out) / h
print(x_derivative, y_derivative)
step_size = 0.01
out = forward_multiply_gate(x, y)
x_new = x + step_size * x_derivative
y_new = y + step_size * y_derivative
out_new = forward_multiply_gate(x_new, y_new)
print(out_new)
# -
# #### 공식 기울기
# +
x_gradient = y # 수학 공식에 의해
y_gradient = x
x_new2 = x + step_size * x_gradient
y_new2 = y + step_size * y_gradient
forward_multiply_gate(x_new2, y_new2)
# -
# ### 중첩된 구조
# +
# 덧셈 게이트
def forward_add_gate(a, b):
return a + b
# 전체 회로
def forward_circuit(x, y, z):
q = forward_add_gate(x, y)
f = forward_multiply_gate(q, z)
return f
# -
x = -2
y = 5
z = -4
forward_circuit(x, y, z)
# #### 역전파
# +
q = forward_add_gate(x, y)
f = forward_multiply_gate(q, z)
# 입력에 대한 곱셈 게이트의 기울기
derivative_f_wrt_z = q
derivative_f_wrt_q = z
print(derivative_f_wrt_z, derivative_f_wrt_q)
# 입력에 대한 덧셈 게이트의 기울기
derivative_q_wrt_x = 1.0
derivative_q_wrt_y = 1.0
# 체인 룰
derivative_f_wrt_x = derivative_q_wrt_x * derivative_f_wrt_q
derivative_f_wrt_y = derivative_q_wrt_y * derivative_f_wrt_q
print(derivative_f_wrt_x, derivative_f_wrt_y)
# +
# 포스에 맞춰 입력을 조정합니다.
x_new3 = x + step_size * derivative_f_wrt_x
y_new3 = y + step_size * derivative_f_wrt_y
z_new3 = z + step_size * derivative_f_wrt_z
# 회로가 더 높은 값을 출력합니다.
q = forward_add_gate(x_new3, y_new3)
f = forward_multiply_gate(q, z_new3)
print(q, f)
# -
# #### 계산 기울기로 체크
x_derivative = (forward_circuit(x + h, y, z) - forward_circuit(x, y, z)) / h
y_derivative = (forward_circuit(x, y + h, z) - forward_circuit(x, y, z)) / h
z_derivative = (forward_circuit(x, y, z + h) - forward_circuit(x, y, z)) / h
x_derivative, y_derivative, z_derivative
# #### 단일 뉴런
# Unit은 회로 그림의 선에 대응합니다
class Unit(object):
def __init__(self, value, grad):
# 정방향에서 계산되는 값
self.value = value
# 역방향일 때 계산되는 이 유닛에 대한 회로 출력의 변화율
self.grad = grad
# +
class MultiplyGate(object):
def forward(self, u0, u1):
self.u0 = u0
self.u1 = u1
self.utop = Unit(self.u0.value * self.u1.value, 0.0)
return self.utop
def backward(self):
# 출력 유닛의 기울기를 받아 곱셉 게이트의 자체 기울기와 곱하여(체인 룰) 입력 유닛의 기울기로 저장합니다.
self.u0.grad += self.u1.value * self.utop.grad
self.u1.grad += self.u0.value * self.utop.grad
class AddGate(object):
def forward(self, u0, u1):
self.u0 = u0
self.u1 = u1
self.utop = Unit(self.u0.value + self.u1.value, 0.0)
return self.utop
def backward(self):
# 입력에 대한 덧셈 게이트의 기울기는 1 입니다
self.u0.grad += 1 * self.utop.grad
self.u1.grad += 1 * self.utop.grad
class SigmoidGate(object):
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def forward(self, u0):
self.u0 = u0
self.utop = Unit(self.sigmoid(self.u0.value), 0.0)
return self.utop
def backward(self):
s = self.sigmoid(self.u0.value)
self.u0.grad += (s * (1 - s)) * self.utop.grad
# +
# 입력 유닛 생성
a = Unit(1.0, 0.0)
b = Unit(2.0, 0.0)
c = Unit(-3.0, 0.0)
x = Unit(-1.0, 0.0)
y = Unit(3.0, 0.0)
# 게이트 생성
mulg0 = MultiplyGate()
mulg1 = MultiplyGate()
addg0 = AddGate()
addg1 = AddGate()
sg0 = SigmoidGate()
# 정방향 계산
def forward_neuron():
ax = mulg0.forward(a, x)
by = mulg1.forward(b, y)
axpby = addg0.forward(ax, by)
axpbypc = addg1.forward(axpby, c)
s = sg0.forward(axpbypc)
return s
s = forward_neuron()
s.value
# -
s.grad = 1.0
sg0.backward() # axpbypc 에 기울기 저장
addg1.backward() # axpby 와 c 에 기울기 저장
addg0.backward() # ax 와 by 에 기울기 저장
mulg1.backward() # b 와 y 에 기울기 저장
mulg0.backward() # a 와 x 에 기울기 저장
# +
a.value += step_size * a.grad
b.value += step_size * b.grad
c.value += step_size * c.grad
x.value += step_size * x.grad
y.value += step_size * y.grad
forward_neuron()
print(a.grad, b.grad, c.grad, x.grad, y.grad)
s.value
# -
# #### 기울기 확인
# +
def forward_circuit_fast(a,b,c,x,y):
return 1/(1 + np.exp( - (a*x + b*y + c)))
a = 1
b = 2
c = -3
x = -1
y = 3;
a_grad = (forward_circuit_fast(a+h,b,c,x,y) - forward_circuit_fast(a,b,c,x,y))/h;
b_grad = (forward_circuit_fast(a,b+h,c,x,y) - forward_circuit_fast(a,b,c,x,y))/h;
c_grad = (forward_circuit_fast(a,b,c+h,x,y) - forward_circuit_fast(a,b,c,x,y))/h;
x_grad = (forward_circuit_fast(a,b,c,x+h,y) - forward_circuit_fast(a,b,c,x,y))/h;
y_grad = (forward_circuit_fast(a,b,c,x,y+h) - forward_circuit_fast(a,b,c,x,y))/h;
print(a_grad, b_grad, c_grad, x_grad, y_grad)
# -
| hackers-guide-to-neural-networks/real-valued-circuit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
my_list = [0,1,2,3,4]
my_list
np.array(my_list)
np.arange(0,10)
np.arange(0,10,2)
np.zeros((5,5))
np.ones((2,4))
# # Operations
np.random.seed(101)
arr = np.random.randint(0,100,10)
arr
arr2 = np.random.randint(0,100,10)
arr2
arr.max()
arr.min()
arr.mean()
arr.argmax()
arr.argmin()
arr.reshape(2,5)
# # Indexing
mat = np.arange(0,100).reshape(10,10)
mat
mat[0,5]
mat[:,1]
mat[1,:]
mat[0:3,0:3]
#
| 00.Numpy and Image Basics/00. NumPy Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (system-wide)
# language: python
# name: python3
# ---
# # Introduction to NetworkX
# 
# This work by <NAME> is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
# ## A package for graph objects
#
# Graphs (networks) are common objects that occurs
# in many areas, including computer algorithms,
# social networks, logistic arrangement, and so on.
#
# NetworkX is a Python package that deals with graph objects.
# In this packages, many algorithms have been implemented.
# [NetworkX official tutorial](https://networkx.github.io/documentation/stable/tutorial.html)
# Let's import the package.
# Recall that you may install the package by
# `pip install networkx --user`
import networkx as nx
### for drawing the graphs
### also import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# There are many built-in graphs.
# See [graph generators](https://networkx.github.io/documentation/stable/reference/generators.html).
# +
p10 = nx.path_graph(10)
c10 = nx.cycle_graph(10)
grid = nx.grid_2d_graph(3,4)
### use nx.draw to draw the graph
### use matplotlib to control the axes
fig = plt.figure(figsize=(9,3))
axs = fig.subplots(1,3)
graphs = [p10, c10, grid]
titles = ['path', 'cycle', 'grid']
for i in range(3):
ax = axs[i]
ax.set_title(titles[i])
ax.set_axis_off()
nx.draw(graphs[i],
ax=axs[i])
# -
# ### Nodes, Edges, and Adjacency
g = nx.path_graph(10)
g.nodes ### g.nodes() is also okay
g.edges ### g.edges() is also okay
g.adj ### g.adj() is not okay
# `g.adj` is like a dictionary, while
# `g.adjacency()` is a generator similar to `g.adj.items()`.
for i in g.adjacency():
print(i)
# ### Build a graph by `add_node` and `add_edge`
# Create an empty graph and then
# add nodes and edges one by one.
g = nx.Graph() ### g is an empty graph
print(g.nodes)
print(g.edges)
# Add a node by `g.add_node(v)`.
# Add many nodes by `g.add_nodes_from([v1, ..., vk])`.
g.add_node(1)
g.add_nodes_from([2,3,4])
g.nodes
# Add an edge by `g.add_edge(u,v)`.
# Add many edges by `g.add_edges_from([(u1,v1), ..., (uk,vk)])`.
g.add_edge(1,2)
g.add_edges_from([(2,3), (3,4)])
g.edges
fig = plt.figure(figsize=(3,3))
nx.draw(g, with_labels=True)
# When `g` is a graph,
# use `g.remove_node` or `g.remove_nodes_from` to remove nodes, and
# use `g.remove_edge` or `g.remove_edges_from` to remove edges.
g.remove_edge(3,4)
g.remove_node(1)
fig = plt.figure(figsize=(3,3))
nx.draw(g, with_labels=True)
# ### Drawing a graph
# `nx.draw` allow us to use matplotlib to draw graphs.
#
# There are many keywords to adjust the settings,
# and the usage can be found in the docstrings of
# `nx.draw` and `nx.draw_networkx`.
#
# [`nx.draw` docstring](https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.drawing.nx_pylab.draw.html#networkx.drawing.nx_pylab.draw)
# [`nx.draw_networkx` docstring](https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.drawing.nx_pylab.draw_networkx.html#networkx.drawing.nx_pylab.draw_networkx)
# +
g = nx.path_graph(5)
fig = plt.figure(figsize=(3,3))
nx.draw(g) ### no node labels by default
# -
# Use `with_labels=True` to add the labels.
# Use `labels=dict` to specify labels.
# +
g = nx.path_graph(5)
fig = plt.figure(figsize=(3,3))
nx.draw(g, with_labels=True) ### label the nodes by node names
# +
g = nx.path_graph(5)
fig = plt.figure(figsize=(3,3))
names = {0: 'N', 1: 'S', 2: 'Y', 3: 'S', 4: 'U'}
nx.draw(g, with_labels=True, labels=names) ### label the nodes by specified names
# -
# Use `node_size` to specify the size of each nodes.
# It can be a single value (default=300)
# or an array of the same length as `g.nodes`.
# +
g = nx.path_graph(5)
fig = plt.figure(figsize=(3,3))
names = {0: 'N', 1: 'S', 2: 'Y', 3: 'S', 4: 'U'}
nx.draw(g,
with_labels=True,
labels=names,
node_size=100)
# +
g = nx.path_graph(5)
fig = plt.figure(figsize=(3,3))
names = {0: 'N', 1: 'S', 2: 'Y', 3: 'S', 4: 'U'}
nx.draw(g,
with_labels=True,
labels=names,
node_size=50*np.arange(5)+100)
# -
# Use `node_color` to specify the size of each nodes.
# It can be a single value
# or an array of the same length as `g.nodes`
# (need to specify `cmap` in the latter case).
# +
g = nx.path_graph(5)
fig = plt.figure(figsize=(3,3))
names = {0: 'N', 1: 'S', 2: 'Y', 3: 'S', 4: 'U'}
nx.draw(g,
with_labels=True,
labels=names,
node_size=50*np.arange(5)+100,
node_color='lightgreen')
# +
g = nx.path_graph(5)
fig = plt.figure(figsize=(3,3))
names = {0: 'N', 1: 'S', 2: 'Y', 3: 'S', 4: 'U'}
nx.draw(g,
with_labels=True,
labels=names,
node_size=50*np.arange(5)+100,
node_color=50*np.arange(5)+100,
cmap='rainbow')
# -
# ### Graph layout and node positions
# The difficult part of drawing a graph
# is to determine the node positions.
#
# Graph drawing is an on-going research area.
#
# NetworkX provides several common way
# to generate the graph layout.
# That is, to compute the positions of nodes.
# [Choices for graph layouts](https://networkx.github.io/documentation/networkx-1.10/reference/drawing.html#module-networkx.drawing.layout):
# - `nx.circular_layout`
# - `nx.random_layout`
# - `nx.shell_layout`
# - `nx.spring_layout`
# - `nx.spectral_layout`
# +
g = nx.path_graph(5)
pos = nx.circular_layout(g)
pos
# -
# Use `pos` keyword in `nx.draw`
# to specify the node positions.
# +
g = nx.path_graph(5)
pos = nx.circular_layout(g)
fig = plt.figure(figsize=(3,3))
nx.draw(g,
with_labels=True,
pos=pos)
# +
g = nx.path_graph(5)
pos = nx.random_layout(g)
fig = plt.figure(figsize=(3,3))
nx.draw(g,
with_labels=True,
pos=pos)
# +
g = nx.path_graph(5)
pos = nx.spectral_layout(g)
fig = plt.figure(figsize=(3,3))
nx.draw(g,
with_labels=True,
pos=pos)
# -
# ##### Exercise
# Create a graph `g` with
# nodes `0, ..., 9` and
# edges
# `(0,1), (1,2), (2,3), (3,4), (4,5), (5,6)`
# `(6,0), (0,7), (7,8), (8,9), (9,7)`.
### your answere here
# ##### Exercise
# Obtain `g` by the following.
# ```Python
# g = nx.Graph()
# g.add_nodes_from([0,1,2,3,4,5,6,7,8,9])
# g.add_edges_from([(0,1), (1,2), (2,3), (3,4), (4,5), (5,6), (6,0), (0,7), (7,8), (8,9), (9,7)])
# ```
# Draw the graph `g`
# on a figure of size `(3,3)`
# using the spring layout.
### your answere here
# ##### Exercise
# Obtain `g` by the following.
# ```Python
# g = nx.Graph()
# g.add_nodes_from([0,1,2,3,4,5,6,7,8,9])
# g.add_edges_from([(0,1), (1,2), (2,3), (3,4), (4,5), (5,6), (6,0), (0,7), (7,8), (8,9), (9,7)])
# ```
# A **balanced partition** of a set $V$
# is a pair of two sets $X$ and $Y$ such that
# $X\cap Y=\emptyset$, $X\cup Y=V$, and $|X|=|Y|$.
#
# Find a balanced partition $(X,Y)$ of the node set of `g`
# with fewest edges between $X$ and $Y$.
# Hint: you might need the following.
# ```Python
# from itertools import combinations
# ```
### your answere here
# ##### Exercise
# Obtain `g` by the following.
# ```Python
# g = nx.random_graphs.erdos_renyi_graph(10,0.1)
# ```
# Explore the function `nx.connected_components` and
# print the node sets of each connected components.
### your answere here
# #### Exercise
# Obtain `g` by the following.
# ```Python
# g = nx.random_graphs.erdos_renyi_graph(10,0.1)
# ```
# Draw `g` on a figure of size `(3,3)` and
# use node colors to distinguish the connected components.
### your answere here
# ##### Exercise
# Obtain `g1, ..., g6` by the following.
# ```Python
# g1 = nx.path_graph(1)
# g2 = nx.path_graph(2)
# g3 = nx.path_graph(3)
# g4 = nx.cycle_graph(4)
# g5 = nx.complete_bipartite_graph(1,4)
# g6 = nx.grid_2d_graph(2,3)
# ```
# Draw the six graphs
# on `subplots(2,3)`.
### your answere here
| Introduction-to-NetworkX.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
def print_match(s):
if prog.search(s)==None:
print("No match")
else:
print(prog.search(s).group())
prog = re.compile(r'A{3}')
print_match("ccAAAdd")
print_match("ccAAAAdd")
print_match("ccAAdd")
# +
prog = re.compile(r'A{2,4}B')
print_match("ccAAABdd")
print_match("ccABdd")
print_match("ccAABBBdd")
print_match("ccAAAAAAABdd")
# +
prog = re.compile(r'A{,3}B')
print_match("ccAAABdd")
print_match("ccABdd")
print_match("ccAABBBdd")
print_match("ccAAAAAAABdd")
# +
prog = re.compile(r'A{3,}B')
print_match("ccAAABdd")
print_match("ccABdd")
print_match("ccAABBBdd")
print_match("ccAAAAAAABdd")
# -
prog = re.compile(r'A{2,4}')
print_match("AAAAAAA")
prog = re.compile(r'A{2,4}?')
print_match("AAAAAAA")
| Chapter07/.ipynb_checkpoints/Exercise 7.23-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # IBM Final Capstone Project
# # -Clustering Traffic Accident by Road Features and Nearby Venues-
# ## 1. Introduction
# Travelling is one of the most important aspect in human life. Many human activity depends on travelling. Human needs to travel in order to fulfill their needs such as buying basic needs, business meeting and vacating. Traveling is not limited to human-travel only, other important aspect of travelling is logistic transportation which require travel from one place to other. Houston, as the most populous city in US state of Texas needs many travelling and transporting to fulfill their citizens basic needs.
# <br>
# <br>
# Nowadays, logistic transportation has becoming more important especially in this pandemic era which makes people self-limiting its mobility. COVID-19 pandemic force people to order its basic needs and requested it to be delivered into their places, this situation makes road traffic heavier than before. This happened in Houston also, and in order to preserve stable economy and human well being, Houston government as an organization to govern people needs to solve traffic accident problem to ensure its economic and people well-being.
# ## 2. Problem
# There are many traffic accident in Houston in the last 4 years. We can categorize it into 4 severity level = 1,2,3, and 4. Severity level 1 is the least sever than the rest. Because the most sever traffic accident is 'Severity level 4', this project will be aiming to gain segment and cluster from traffic accident data with severity level 4.
# ## 3. Data
# There are 2 used data in this project:
# 1. US Accident Data (2016-June 2020) by <NAME> in kaggle, link: [DOWNLOAD HERE](https://www.kaggle.com/sobhanmoosavi/us-accidents/download) <br> This data will be used to gain latitude and longitude of accidents and use the latitude and longitude to gain nearby venues by using Foursquare API. Furthermore, this data will also be used to gain road features in the accident places.
# 2. Nearby Venues Data exctracted from Foursquare by using Foursquare API <br> Nearby venues and road features will be used to cluster and segment the accident location. The resulted segmentation will help goverment to decide which segment have to be treated by which policy
# # 4. Methodology
# ## 4.1 Data Pre-Processing
# Before performing any analysis and modelling, data needs to be processed into ideal raw data that can be used into modelling and analysis. Data was prepcoessed by:<br>
# 1. Data Importing <br> Data was imported by 2 means of importing: Pandas importing for US accident data and Foursquare API importing for nearby venues data.
# 2. Data Cleaning <br> Data was cleaned from NAN value.
# 3. Data Filtering <br> Data was filtered into Houston Traffic Accident Data which have severity level of 4 only. Some of the unrelated and uneeded column is dropped for computing cost efficiency. Furthermore, data was filtered into 300 datapoints which was filtered by data which had the most distance(mi) because the maximum limit of Foursquare API call is 950 for free user. Duplicate data in latitude and longitude column is dropped.
# 4. Feature Selection <br> Feature selection for clustering, two main feature that was selected to cluster the data are nearby venues and road feature
# ## 4.2 Exploratory Data Analysis
# EDA is performed by descriptive statisctis and visualization.<br>
# 1. Data Points Exploration <br>
# <img src='Houston Map 1.PNG'/> <br>
# Data was filtered into 300 datapoinst which all of the data are the data which have severity level 4 and located in Houston City area. Folium visualization above shows many of traffic accident severity level 4 located in downtown of Houston City. Furthermore, sever traffic accident was mostly happened in Houston City main road.
# 2. Data Value Exploration <br>
# <img src='EDA Numeric.png'/><br>
# Data value was explored to know the relationship intra and inter columns. Severity have only one unique value because it was filtered to have only value of 4 before. Distance column have means and modus of 2 miles and have many outliers, same Lat and Lng. For outlier in Lat and Lng is not valid because these columns are representing location not continuous value like distance.
# <img src='EDA Numeric displot.png'/><br>
# Categorical data shows there are class imbalances in all of the road features. This is because many of the datapoints location is in the main road/highway of Houston City which does not have many road features. Furthermore, road features generally can reduce road traffic because it gives option to road controller and road users.
# <img src='EDA Categorical.png'/><br>
# ## 4.3 Data Modelling
# Raw datas which consist of US accident and nearby venues is modelled into one dataframe which consist of road features and nearby venues. Road features was extracted from US Traffic Accident in Kaggle and nearby venues was extracted from Foursquare API. Raw data from kaggle consist of 3.5 million rows with 49 columns, whereas data from Foursquare API is in JSON filetype which the information can be extracted by json library in python.<br>
# <br>
# US traffic accident data was filtered to have Houston City data only. Uneeded columns was dropped. US traffic accident data columns filtered from 49 columns into only 9 columns = ID, Severity, Distance, Crossing, Give_Way, Junction, Stop, Traffic_Light, Latitude and Longitude. Column selection was based on main concept of this report: clustering traffic accident based on road features and nearby venues. Distance, Crossing, Give_Way, Junction, Stop, Traffic_Light is the road feature,whereas ID, Severity, Latitude and Longitude is for identification. Because there are 2 columns for each latitude and longitude (start and end), Latitude and Longitude was processed by calculating the middle point of the accident location ((x1+x2)/2). Latitude and longitude will be used to extract nearby venues using Foursquare API. <br>
# <br>
# <img src='Table Raw Houston.PNG'/><br>
# US traffic accident data was dropped into 300 rows only. The selection was based on data which have severity of 4 and have the longest traffic distance. The data was sorted into descending in traffic distance column and selected only 300 which have the longest traffic distance. <br>
# <br>
# Nearby venues was extracted from Foursquare using Foursquare API. Using Latitude and Longitude of the accident to call nearby venues. The specification of the call is nearby venues within radius of 500 metres from accident latitude and longitude. Extracted nearby venues then merged into original dataset. The venue data then use one-hot encoding pd.get_dummies for category binary labelling and grouped by accident id with sum() function to know the total venues category that appear in one coordinate.
# <br>
# <img src='Table Merged.PNG'/><br>
# Nearby venues table then merged with road features table to get cluster-ready data
# ## 4.4 Clustering and Segmenting
# Clustering and Segmenting use SKLearn library. Because the data-ready still contains uneeded information (latitude and longitude), before segmenting and clustering, latitude and longitude was removed to gain pure data which can be clustered and segmented. Pure data then standardize before segmented and clustered to normalize distance between data value to gain more ideal cluster. <br>
# <br>
# <img src='Elbow Method True.PNG'/><br>
# Kmeans clustering algorithm from sklearn was used to cluster the model data, it is a common cluster algorithm used in python. The most optimal cluster is shown by elbow method diagram above. The most optimal cluster is shown by cluster where the slope decrease, cluster = 6.
# # 5. Result
# From data analysis and modelling above we get result:
# 1. Houston Traffic Accident from 2016 to June 2020 can be segmented and clustered into 6 cluster.<br>
# 2. Houston Traffic Accident cluster can be seen in visualization below:<br> <img src='Houston Traffic Accident Cluster.PNG'/><br>
# 3. The most nearby venues is mexican restaurant<br><img src='cluster most.PNG'/><br>
# # 6. Discussion
# US traffic accident data was dataset which was shared by <NAME> in Kaggle it contains US accident data from 2016 to June 2020 which was extracted from navigation apps. In this analysis, the data was added by nearby venues to segment it into numbers of clusters. The most optimal cluster is 6 cluster.<br>
# <br>
# Clustering algorithm that was used is KMeans clustering and from the cluster some insight can be retrieved. Cluster not dependents of location compared to downtonwn. It is because not every road in downtown area of Houston City have same road featuere and nearby venues. This cluster and segmentation can help local governor to adjust policy based on the cluster.
# <br>
# <br>
# KMeans algorithm was used to cluster traffic accident. Kmeans algorithm is a common clustering algorithm which in general, it change the cluster centroid iteratively based on nearby datapoints coordinate. Because of this characteristics, if US accident data other columns such as weather, twilight and environment condition be included into model data, it will change the results.
# <br>
# <br>
#
# # 7. Conclusions
# The results of this report is there are optimum of 6 cluster of Houston City Traffic Accident. Venues that probably affect accident is Hotal, Restaurant, and Store, this is because if there is a mentioned venues nearby, many people will travel to that area and thus causing the probability of accident increasing.<br>
# <br>
# Almost all of the accident which have highest severity level are in highway or main road of Houston City. According to Road Feature data, main road/highway have a limited amount of road feature, this probably made accident severity worse.
| Clustering Houston Traffic Accidents.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from __future__ import absolute_import, division, print_function, unicode_literals
import IPython.display as display
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import time
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# 0 = all messages are logged (default behavior)
# 1 = INFO messages are not printed
# 2 = INFO and WARNING messages are not printed
# 3 = INFO, WARNING, and ERROR messages are not printed
#On Mac you may encounter an error related to OMP, this is a workaround, but slows down the code
os.environ['KMP_DUPLICATE_LIB_OK']='True' #https://github.com/dmlc/xgboost/issues/1715
# -
import tensorflow as tf
AUTOTUNE = tf.data.experimental.AUTOTUNE
tf.__version__
from openbot import dataloader, data_augmentation, utils, train
# ## Set train and test dirs
# Define the dataset directory and give it a name. Inside the dataset folder, there should be two folders, `train_data` and `test_data`.
dataset_dir = "dataset"
dataset_name = "my_openbot"
train_data_dir = os.path.join(dataset_dir, "train_data")
test_data_dir = os.path.join(dataset_dir, "test_data")
# ## Hyperparameters
# You may have to tune the learning rate and batch size depending on your available compute resources and dataset. As a general rule of thumb, if you increase the batch size by a factor of n, you can increase the learning rate by a factor of sqrt(n). In order to accelerate training and make it more smooth, you should increase the batch size as much as possible. In our paper we used a batch size of 128. For debugging and hyperparamter tuning, you can set the number of epochs to a small value like 10. If you want to train a model which will achieve good performance, you should set it to 50 or more. In our paper we used 100.
# +
params = train.Hyperparameters()
params.MODEL = "pilot_net"
params.TRAIN_BATCH_SIZE = 16
params.TEST_BATCH_SIZE = 16
params.LEARNING_RATE = 0.0001
params.NUM_EPOCHS = 10
params.BATCH_NORM = True
params.FLIP_AUG = False
params.CMD_AUG = False
params.USE_LAST = False
# -
# ## Pre-process the dataset
tr = train.Training(params)
tr.train_data_dir = train_data_dir
tr.test_data_dir = test_data_dir
# Running this for the first time will take some time. This code will match image frames to the controls (labels) and indicator signals (commands). By default, data samples where the vehicle was stationary will be removed. If this is not desired, you need to pass `remove_zeros=False`. If you have made any changes to the sensor files, changed `remove_zeros` or moved your dataset to a new directory, you need to pass `redo_matching=True`.
train.process_data(tr, redo_matching=False, remove_zeros=True)
import threading
def broadcast(event, payload=None):
print(event, payload)
event = threading.Event()
my_callback = train.MyCallback(broadcast, event)
# In the next step, you can convert your dataset to a tfrecord, a data format optimized for training. You can skip this step if you already created a tfrecord before or if you want to train using the files directly.
train.create_tfrecord(my_callback)
# ## Load the dataset
# If you did not create a tfrecord and want to load and buffer files from disk directly, set `no_tf_record = True`.
no_tf_record = False
if no_tf_record:
tr.train_data_dir = train_data_dir
tr.test_data_dir = test_data_dir
train.load_data(tr, verbose=0)
else:
tr.train_data_dir = os.path.join(dataset_dir, "tfrecords/train.tfrec")
tr.test_data_dir = os.path.join(dataset_dir, "tfrecords/test.tfrec")
train.load_tfrecord(tr, verbose=0)
(image_batch, cmd_batch), label_batch = next(iter(tr.train_ds))
utils.show_train_batch(image_batch.numpy(), cmd_batch.numpy(), label_batch.numpy())
# ## Training
train.do_training(tr, my_callback, verbose=1)
# ## Evaluation
# The loss and mean absolute error should decrease. This indicates that the model is fitting the data well. The custom metrics (direction and angle) should go towards 1. These provide some additional insight to the training progress. The direction metric measures weather or not predictions are in the same direction as the labels. Similarly the angle metric measures if the prediction is within a small angle of the labels. The intuition is that driving in the right direction with the correct steering angle is most critical part for good final performance.
plt.plot(tr.history.history['loss'], label='loss')
plt.plot(tr.history.history['val_loss'], label = 'val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='lower right')
plt.savefig(os.path.join(tr.log_path,'loss.png'))
plt.plot(tr.history.history['mean_absolute_error'], label='mean_absolute_error')
plt.plot(tr.history.history['val_mean_absolute_error'], label = 'val_mean_absolute_error')
plt.xlabel('Epoch')
plt.ylabel('Mean Absolute Error')
plt.legend(loc='lower right')
plt.savefig(os.path.join(tr.log_path,'error.png'))
plt.plot(tr.history.history['direction_metric'], label='direction_metric')
plt.plot(tr.history.history['val_direction_metric'], label = 'val_direction_metric')
plt.xlabel('Epoch')
plt.ylabel('Direction Metric')
plt.legend(loc='lower right')
plt.savefig(os.path.join(tr.log_path,'direction.png'))
plt.plot(tr.history.history['angle_metric'], label='angle_metric')
plt.plot(tr.history.history['val_angle_metric'], label = 'val_angle_metric')
plt.xlabel('Epoch')
plt.ylabel('Angle Metric')
plt.legend(loc='lower right')
plt.savefig(os.path.join(tr.log_path,'angle.png'))
# Save tf lite models for best and last checkpoint
best_index = np.argmax(np.array(tr.history.history['val_angle_metric']) \
+ np.array(tr.history.history['val_direction_metric']))
best_checkpoint = str("cp-%04d.ckpt" % (best_index+1))
best_tflite = utils.generate_tflite(tr.checkpoint_path, best_checkpoint)
utils.save_tflite (best_tflite, tr.checkpoint_path, "best")
print("Best Checkpoint (val_angle: %s, val_direction: %s): %s" \
%(tr.history.history['val_angle_metric'][best_index],\
tr.history.history['val_direction_metric'][best_index],\
best_checkpoint))
last_checkpoint = sorted([d for d in os.listdir(tr.checkpoint_path) if os.path.isdir(os.path.join(tr.checkpoint_path, d))])[-1]
last_tflite = utils.generate_tflite(tr.checkpoint_path, last_checkpoint)
utils.save_tflite (last_tflite, tr.checkpoint_path, "last")
print("Last Checkpoint (val_angle: %s, val_direction: %s): %s" \
%(tr.history.history['val_angle_metric'][-1], \
tr.history.history['val_direction_metric'][-1], \
last_checkpoint))
# Evaluate the best model
best_model = utils.load_model(os.path.join(tr.checkpoint_path,best_checkpoint),tr.loss_fn,tr.metric_list, tr.custom_objects)
test_loss, test_acc, test_dir, test_ang = best_model.evaluate(tr.test_ds, steps=tr.image_count_test/tr.hyperparameters.TEST_BATCH_SIZE, verbose=1)
NUM_SAMPLES = 15
(image_batch, cmd_batch), label_batch = next(iter(tr.test_ds))
pred_batch = best_model.predict( (tf.slice(image_batch, [0, 0, 0, 0], [NUM_SAMPLES, -1, -1, -1]), tf.slice(cmd_batch, [0], [NUM_SAMPLES])) )
utils.show_test_batch(image_batch.numpy(), cmd_batch.numpy(), label_batch.numpy(), pred_batch)
utils.compare_tf_tflite(best_model,best_tflite)
# ## Save the notebook as HTML
utils.save_notebook()
current_file = 'policy_learning.ipynb'
output_file = os.path.join(tr.log_path,'notebook.html')
utils.output_HTML(current_file, output_file)
| policy/policy_learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Domain adaptation (5 pts total)
#
# In this seminar you will adapt a pre-trained machine translation model for the hotel review translation task you solved a few weeks ago.
#
# This time it comes with a few complications:
# * Harder task: __en -> ru__ instead of __ru -> en__
# * You are given a model pre-trained on WMT. Visit [statmt.org](http://statmt.org/) for more details.
# * The baseline model already includes attention and some hacks
#
# With luck and skills on your side, you will adapt it to improve hotel translation quality.
# !pip3 install subword-nmt &> log
# !wget https://github.com/yandexdataschool/nlp_course/raw/master/week09_da/data.tar.gz
# !wget https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/week09_da/utils.py
# !tar -xvzf data.tar.gz
# !mv data/* .
# !wget https://www.dropbox.com/s/xm73pjug7eq1rff/model-pretrained.npz?dl=1 -O model-pretrained.npz
# ### Data preprocessing
#
# We provide you with a pre-trained model that uses Byte Pair Encodings [(bpe)](https://github.com/rsennrich/subword-nmt) to segment rare words into sub-word units.
#
# It is important that we fine-tune our model using the same set of BPE rules.
# +
import numpy as np
from nltk.tokenize import WordPunctTokenizer
from subword_nmt.apply_bpe import BPE
tokenizer = WordPunctTokenizer()
def tokenize(x):
return ' '.join(tokenizer.tokenize(x.lower()))
bpe = {}
for lang in ['en', 'ru']:
bpe[lang] = BPE(open('./bpe_rules.' + lang))
# -
print(bpe['ru'].process_line(tokenize("Скажи: какого цвета глаза у ветра?")))
# +
data_inp_raw = list(open('./train.domain.en'))
data_out_raw = list(open('./train.domain.ru'))
print(data_inp_raw[0])
print(data_out_raw[0])
# -
# convert lines into space-separated tokenized bpe units
<YOUR CODE>
data_inp = <...>
data_out = <...>
assert data_inp[0] == 'cor@@ del@@ ia hotel is situated in t@@ bil@@ isi , a 3 - minute walk away from saint tr@@ inity church .'
assert data_out[500] == 'некоторые номера также располага@@ ют бал@@ ко@@ ном или тер@@ ра@@ со@@ й .'
from sklearn.model_selection import train_test_split
data_inp, data_out = map(np.array, [data_inp, data_out])
train_inp, dev_inp, train_out, dev_out = train_test_split(data_inp, data_out, test_size=3000,
random_state=42)
# ### Model
#
# For this assignment, you are given a pre-trained neural machine translation model:
# * bidirectional LSTM encoder
# * single LSTM decoder with additive attention
#
# It was trained till convergence on the general dataset of news, websites and literature.
#
# +
import numpy as np
import tensorflow as tf
import utils
tf.reset_default_graph()
sess = tf.InteractiveSession()
inp_voc = utils.Vocab(open('tokens.en').read().split('\n'))
out_voc = utils.Vocab(open('tokens.ru').read().split('\n'))
model = utils.Model('mod', inp_voc, out_voc)
utils.load(tf.trainable_variables(), 'model-pretrained.npz')
# -
src = 'i am the monument to all your sins'
src = bpe['en'].process_line(tokenize(src))
trans, _ = model.translate_lines([src])
print(trans)
# ### Estimate baseline quality
#
# As before, we shall estimate our model's quality using [BLEU](https://en.wikipedia.org/wiki/BLEU) metric.
#
# This metric simply computes which fraction of predicted n-grams is actually present in the reference translation. It does so for n=1,2,3 and 4 and computes the geometric average with penalty if translation is shorter than reference.
#
# One important thing about BLEU is that it is usually computed on a __corpora level__:
# * first you count precisions over the entire test set
# * then you do the geometric averaging and apply penalties
from nltk.translate.bleu_score import corpus_bleu
def bleu(references, translations):
""" Estimates corpora-level BLEU score of predicted translations given references """
return corpus_bleu([[ref.split()] for ref in references],
[trans.split() for trans in translations]) * 100
bleu(['a cat sat on a mat', 'i love bees'],
['a cat sat on a cat', 'i hate people'])
# __Task 1 (1 point):__ evaluate baseline BLEU
#
def evaluate(model, inp_lines, out_lines):
"""
Estimates model's corpora level bleu
:param inp_lines: a list of BPE strings in source language
:param out_lines: a list of BPE strings in target language
:returns: model's BLEU (float scalar)
Important:
* Make sure to de-BPEize both translations and references. You can do that with str.replace
* Use model.translate_lines with default max_len
* If you're low on RAM, split data in several batches and translate sequentially
"""
<YOUR CODE>
return <...>
evaluate(model, dev_inp[:500], dev_out[:500])
# ### Naive training (1.5 points)
#
# The simplest thing you can do in supervised domain adaptation is to simply fine-tune your model on the target domain data.
#
# Here's a reminder of what training objective looks like:
# $$ L = {\frac1{|D|}} \sum_{X, Y \in D} \sum_{y_t \in Y} - \log p(y_t \mid y_1, \dots, y_{t-1}, X, \theta) $$
#
# where $|D|$ is the __total length of all sequences__.
#
#
# +
from utils import select_values_over_last_axis
def compute_loss(model, inp, out, **flags):
"""
Compute loss (float32 scalar) as in the formula above
:param inp: input tokens matrix, int32[batch, time]
:param out: reference tokens matrix, int32[batch, time]
"""
first_state = model.encode(inp, **flags)
batch_size = tf.shape(inp)[0]
bos = tf.fill([batch_size], model.out_voc.bos_ix)
first_logits = tf.log(tf.one_hot(bos, len(model.out_voc)) + 1e-30)
def step(blob, y_prev):
h_prev = blob[:-1]
h_new, logits = model.decode(h_prev, y_prev, **flags)
return list(h_new) + [logits]
*states_seq, logits_seq = tf.scan(step,
initializer=list(first_state) + [first_logits],
elems=tf.transpose(out))
# gather state and logits, each of shape [time, batch, ...]
logits_seq = tf.concat((first_logits[None], logits_seq),axis=0)
#convert from [time, batch,...] to [batch, time, ...]
logits_seq = tf.transpose(logits_seq, [1, 0, 2])
logprobs_seq = tf.nn.log_softmax(logits_seq, dim=-1)
logp_out = select_values_over_last_axis(logprobs_seq, out)
mask = utils.infer_mask(out, out_voc.eos_ix)
return -tf.reduce_sum(logp_out * mask) / tf.reduce_sum(mask)
# -
# ### Training loop
# +
inp = tf.placeholder('int32', [None, None])
out = tf.placeholder('int32', [None, None])
loss = compute_loss(model, inp, out)
train_step = tf.train.AdamOptimizer().minimize(loss)
utils.initialize_uninitialized()
# +
import matplotlib.pyplot as plt
# %matplotlib inline
from IPython.display import clear_output
from tqdm import tqdm, trange
batch_size = 32
metrics = {'train_loss': [], 'dev_bleu': [] }
# +
for _ in trange(10000):
step = len(metrics['train_loss']) + 1
batch_ix = np.random.randint(len(train_inp), size=32)
feed_dict = {
inp: inp_voc.to_matrix(train_inp[batch_ix]),
out: out_voc.to_matrix(train_out[batch_ix]),
}
loss_t, _ = sess.run([loss, train_step], feed_dict)
metrics['train_loss'].append((step, loss_t))
if step % 100 == 0:
metrics['dev_bleu'].append((step, evaluate(model, dev_inp, dev_out)))
clear_output(True)
plt.figure(figsize=(12,4))
for i, (name, history) in enumerate(sorted(metrics.items())):
plt.subplot(1, len(metrics), i + 1)
plt.title(name)
plt.plot(*zip(*history))
plt.grid()
plt.show()
print("Mean loss=%.3f" % np.mean(metrics['train_loss'][-10:], axis=0)[1], flush=True)
# Note: it's okay if bleu oscillates up and down as long as it gets better on average over long term (e.g. 5k batches)
# -
assert np.mean(metrics['dev_bleu'][-10:], axis=0)[1] > 30, "We kind of need a higher bleu BLEU from you. Kind of right now."
#print translations of some random dev lines
<YOUR CODE>
# ### Domain adaptation with KL penalty (2.5 pts)
#
# The problem with fine-tuning is that model can stray too far from the original parameters and forget useful information. One way to mitigate this problem is to use KL penalty:
#
# $$ Loss = (1 - \lambda) \cdot L_{xent} + \lambda \cdot {1 \over N} \underset {x, y_t} \sum KL(P_{teacher}(y_t|x, y_0, ..., y_{t-1}) || P_{student}(y_t|x, y_0, ..., y_{t-1}))$$
# __Note:__ make sure you only optimize student weights (i.e. don't train teacher network)
# +
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = utils.Model('mod', inp_voc, out_voc)
utils.load(tf.trainable_variables(), 'model-pretrained.npz')
teacher = utils.Model('teacher', inp_voc, out_voc)
teacher_ckpt = np.load('model-pretrained.npz')
teacher_ckpt = { name.replace('mod/', 'teacher/'): teacher_ckpt[name] for name in teacher_ckpt}
np.savez('teacher.npz', **teacher_ckpt)
utils.load(tf.trainable_variables(), 'teacher.npz')
# +
def compute_loss_with_kl(model, teacher, inp, out, lambda_coeff=0.25, **flags):
"""
Compute loss (float32 scalar) as in the formula above
:param inp: input tokens matrix, int32[batch, time]
:param out: reference tokens matrix, int32[batch, time]
:param lambda_coeff: lambda from the formula above.
use lambda_coeff from outer scope
"""
loss = <YOUR CODE>
return loss
# -
### Do it yourself: create training step operations and
# feel free to copy the code from simple fine-tuning
<YOUR CODE>
<YOUR CODE: training loop>
### Do it yourself: estimate the final quality
# feel free to reuse the code from simple fine-tuning
<YOUR CODE>
# ### Bonus tasks:
# Both tasks start at 3 points for basic solution and a ton more if you do something as awesome as the stuff from the lecture.
#
# 1. __Domain adaptation with unlabeled data:__
# * In machine translation, it's relatively easy to obtain unparallel data. For the hotels task, there's almost 10x as large a corpora available if you
# * Download the full data [here](https://yadi.sk/d/zrYuTKQ63S33m3)
# * The dataset was originally provided by [Tilde](https://www.tilde.com/). Huge thanks to them! :)
# * The goal is simple: improve the model using the extra data. You can use proxy labels, pre-train as language model or do literally anything else.
# * Using extra out-of-domain data, whether parallel or not, is also encouraged. Here's [statmt](http://www.statmt.org/) with parallel corpora section at the bottom.
#
#
# 2. __Beam search:__
# * While it's not related to domain adaption, beam search is a good general way to improve model inference.
# * The key idea of beam search if to consider not top-1 but top-K hypotheses at each step
# * In the example below, k=4
# * Whenever a hypothesis in top-K is finished (with a `_EOS_`), record it and remember it's score.
# * Iterate until all hypotheses are already worse than the best finished hypo.
# !wget https://raw.githubusercontent.com/yandexdataschool/nlp_course/master/resources/beam_search.html 2> log
from IPython.display import HTML
# source: parlament does not support the amendment freeing tymoshenko
HTML('./beam_search.html')
| resources/_under_construction/19/week13_domain_adaptation/seminar.ipynb |