# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <p style="text-align: center;"> Variational Linear Systems Code </p>
# <p style="text-align: center;"> <NAME> </p>
# This notebook briefly demonstrates the current state of the Variational Linear Systems (VLS) code. All code is contained in `vls_pauli.py`, which defines a `PauliSystem` class.
# +
# =======
# imports
# =======
import time
import numpy as np
from vls_pauli import PauliSystem
from cirq import ParamResolver
# -
# # Creating a Linear System of Equations
# A `PauliSystem` consists of a matrix of the form
# \begin{equation}
# A = \sum_{k = 1}^{K} c_k \sigma_k
# \end{equation}
# where $c_k$ are complex coefficients and $\sigma_k$ are strings of Pauli operators. In code, we represent the matrix $A$ as arrays of strings corresponding to Pauli operators. For example, to represent the Pauli operators
# \begin{align}
# \sigma_1 &= \sigma_X \otimes \sigma_Y \otimes \sigma_X \otimes \sigma_Z \\
# \sigma_2 &= \sigma_Z \otimes \sigma_X \otimes \sigma_I \otimes \sigma_Y
# \end{align}
# we would write:
# specify the pauli operators of the matrix
Amat_ops = np.array([["X", "Y", "X", "Z"],
["Z", "X", "I", "Y"]])
# Coefficients $c_k$ are stored similarly as arrays of complex values:
# specify the coefficients of each term in the matrix
Amat_coeffs = np.array([0.5 + 0.5j, 1 - 2j])
# Finally, the solution vector
# \begin{equation}
# |b\rangle = U |0\rangle
# \end{equation}
# is represented by the unitary $U$ that (efficiently) prepares $|b\rangle$ from the ground state. For example, the unitary $U$ could be
# \begin{equation}
# U = \sigma_X \otimes \sigma_X \otimes \sigma_I \otimes \sigma_X,
# \end{equation}
# which we would represent in code as:
# specify the unitary that prepares the solution vector b
Umat_ops = np.array(["X", "X", "I", "X"])
# To create a `PauliSystem`, we then simply feed in `Amat_coeffs`, `Amat_ops`, and `Umat_ops`.
# create a linear system of equations
system = PauliSystem(Amat_coeffs, Amat_ops, Umat_ops)
# # Working with a `PauliSystem`
# The `PauliSystem` class can report basic information about the system:
print("Number of qubits in system:", system.num_qubits())
print("Size of matrix:", system.size())
# To see the actual matrix representation of the system (in the computational basis), we can do:
# get the matrix of the system
matrix = system.matrix()
print(matrix)
# We can also see the solution vector $|b\rangle$ by doing:
b = system.vector()
print(b)
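# As a quick sanity check, both objects can be rebuilt by hand from the Pauli-string representation with `np.kron`. The sketch below assumes the standard single-qubit Pauli matrices and that the first string entry is the leftmost (most significant) tensor factor; `PauliSystem`'s internal ordering may differ.

```python
import numpy as np
from functools import reduce

# standard single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(ops):
    """Tensor product of single-qubit Paulis; first entry is the leftmost factor."""
    return reduce(np.kron, (PAULIS[op] for op in ops))

coeffs = [0.5 + 0.5j, 1 - 2j]
ops = [["X", "Y", "X", "Z"], ["Z", "X", "I", "Y"]]

# A = sum_k c_k sigma_k
A = sum(c * pauli_string(o) for c, o in zip(coeffs, ops))

# |b> = U|0...0> is the first column of U, with U = X (x) X (x) I (x) X
U = pauli_string(["X", "X", "I", "X"])
b_check = U[:, 0]

print(A.shape)                     # (16, 16)
print(np.argmax(np.abs(b_check)))  # 13, i.e. the basis state |1101>
```

# If the conventions match, `A` agrees with `system.matrix()` and `b_check` with `system.vector()`.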
# # Creating an Ansatz
# Initially, the `PauliSystem` ansatz for $V$ is an empty circuit:
# print out the initial (empty) ansatz
system.ansatz
# We are free to pick whatever ansatz we wish. The method `PauliSystem.make_ansatz_circuit` is currently set to make an alternating two-qubit gate ansatz.
# make the alternating two-qubit gate ansatz and print it out
system.make_ansatz_circuit()
system.ansatz # note: circuits aren't printed to allow for scrollable outputs
# This circuit contains 48 parameters (4 qubits x 2 "gates" per qubit x 6 parameters per gate). (Note that the printed circuit gets cut off in the notebook; scroll side to side to see the entire circuit.) For our simple example, we will chop off some of the gates to make the optimization easier:
# remove some of the gates and print it out
system.ansatz = system.ansatz[:-9]
system.ansatz
# # Computing the Cost
# The local cost function is computed via the Hadamard Test. The local cost function can be written
# \begin{equation}
# C_1 = 1 - \frac{1}{n} \sum_{k = 1}^{K} \sum_{l \geq k}^{K} w_{k, l} c_k c_l^* \sum_{j = 1}^{n} \text{Re} \, \langle V_{k, l}^{(j)} \rangle
# \end{equation}
# where
# \begin{equation}
# \langle V_{k, l}^{(j)} \rangle := \langle0^{\otimes n}| V^\dagger A_k^\dagger U P_j U^\dagger A_l V |0^{\otimes n}\rangle
# \end{equation}
# and
# \begin{equation}
# w_{k, l} = \begin{cases}
# 1 \qquad \text{if } k = l\\
# 2 \qquad \text{otherwise}
# \end{cases} .
# \end{equation}
# Thus we have $n K^2$ different circuits to compute the cost. An example of one (the $k = 0$, $l = 1$, $j = 0$ term) is shown below:
system.make_hadamard_test_circuit(system.ops[0], system.ops[1], 0, "real")
# The circuit for computing the norm
# \begin{equation}
# \langle 0 | V^\dagger A_k^\dagger A_l V | 0 \rangle = \langle \psi | A_k^\dagger A_l | \psi \rangle
# \end{equation}
# for the example $k = 0$, $l = 1$ is shown below:
system.make_norm_circuit(system.ops[0], system.ops[1], "real")
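# The identity behind these circuits can be checked with plain numpy: for an ancilla prepared in $|+\rangle$ controlling a unitary $W$ on $|\psi\rangle$, the final Hadamard on the ancilla gives $p(0) - p(1) = \text{Re}\,\langle\psi|W|\psi\rangle$. A standalone single-qubit sketch, independent of `PauliSystem`:

```python
import numpy as np

rng = np.random.default_rng(0)

# random normalized state |psi> and a random unitary W (Q factor of a complex matrix)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
W, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

# after H, controlled-W, H the ancilla is measured with these probabilities
Wpsi = W @ psi
p0 = np.linalg.norm(psi + Wpsi) ** 2 / 4
p1 = np.linalg.norm(psi - Wpsi) ** 2 / 4

direct = np.vdot(psi, Wpsi).real  # Re <psi|W|psi> computed directly
print(np.isclose(p0 - p1, direct))  # True
```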
# To compute the cost, we can call `PauliSystem.cost` or `PauliSystem.eff_cost` (the latter exploits symmetries to compute the cost more efficiently) and pass in a set of angles to the ansatz gates:
# +
# =======================================
# compute the cost for some set of angles
# =======================================
# normalize the coefficients
system.normalize_coeffs()
# get some angles
angles = np.random.randn(24)
# compute the cost and time it
start = time.time()
cost = system.eff_cost(angles)
end = time.time() - start
# print out the results
print("Local cost C_1 =", cost)
print("Time to compute cost =", end, "seconds")
# -
# # Solving the System
# To solve the system, we minimize the cost function. We'll do this below with the Powell optimization algorithm.
start = time.time()
out = system.solve(x0=angles, opt_method="Powell")
end = time.time() - start
print("It took {} minutes to solve the system.".format(round(end / 60)))
print("Number of iterations of optimization method:", out["nit"])
print("Number of function evaluations:", out["nfev"])
# +
# get the optimal angles
opt_angles = out["x"]
# get a param resolver
param_resolver = ParamResolver(
{str(ii) : opt_angles[ii] for ii in range(len(opt_angles))}
)
sol_circ = system.ansatz.with_parameters_resolved_by(param_resolver)
sol_circ
# -
xhat = sol_circ.to_unitary_matrix()[:, 0]
bhat = np.dot(matrix, xhat)
print(bhat)
print(b)
print(np.linalg.norm(bhat - b))
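# The residual norm above depends on the overall scale of the vectors; a scale-free figure of merit is the squared overlap $|\langle b|\hat{b}\rangle|^2$ between the normalized vectors. A small sketch (toy vectors here; for the system above one would pass `b` and `bhat`):

```python
import numpy as np

def overlap(u, v):
    """Squared overlap |<u|v>|^2 between two (possibly unnormalized) vectors."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.abs(np.vdot(u, v)) ** 2

# toy vectors standing in for b and bhat
b_toy = np.array([0, 1, 0, 0], dtype=complex)
bhat_toy = b_toy + 0.05 * np.array([1, 0, 1, 0])

print(overlap(b_toy, b_toy))           # 1.0
print(overlap(b_toy, bhat_toy) < 1.0)  # True
```

# Values near 1 mean the circuit's output points along $|b\rangle$ regardless of normalization.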
# # Future Work
# * Normalize the cost to be between zero and one! (divide by N_Ax)
# * Better optimization methods.
# * Optimize over a subset of the parameters at a time, then loop through (and reoptimize).
# * Add random gates using simulated annealing.
# * Compute all $nK^2$ circuits in parallel.
# * Compute expectations of local observables at each cost iteration.
# * Allow for arbitrary unitaries (not just Paulis)
# pauli/vls_tutorial.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Running The Model
# ### Propagation Network
# ### Init
import os
import hack
from DatasetLoader import *
from DataGenerator import *
import pickle
with open('../Models/Dataset_Simple_scaler.pickle', 'rb') as handle:
    dataset_scaler = pickle.load(handle, encoding="latin1")
import os
TEST_PATH='../Test_Results/Prop-Network-Fixed'
if os.path.exists(TEST_PATH):
    print('This directory already exists')
else:
    os.mkdir(TEST_PATH)
    os.mkdir(TEST_PATH + '/saved_models')
## ALT
my_dataset9 = MyDataset(PATH='../Data/DATASET_FIXED_ONLY/9Objects/',n_of_scene=1100,n_of_exp=4,n_of_obj=9,f_size=8,n_of_rel_type=1,fr_size=240,scaler=dataset_scaler)
my_dataset6 = MyDataset(PATH='../Data/DATASET_FIXED_ONLY/6Objects/',n_of_scene=50,n_of_exp=4,n_of_obj=6,f_size=8,n_of_rel_type=1,fr_size=240,scaler=dataset_scaler)
my_dataset12 = MyDataset(PATH='../Data/DATASET_FIXED_ONLY/12Objects/',n_of_scene=50,n_of_exp=4,n_of_obj=12,f_size=8,n_of_rel_type=1,fr_size=240,scaler=dataset_scaler)
my_dataset9.divideDataset(9.0/11,1.5/11)
my_dataset6.divideDataset(0.0,0.5)
my_dataset12.divideDataset(0.0,0.5)
my_dataset_test6 = MyDataset2(PATH='../Data/DATASET_TEST_Simple/6Objects/',n_of_scene=50,n_of_exp=4,n_of_groups=7,n_of_obj=6,f_size=5,n_of_rel_type=1,fr_size=50,scaler=dataset_scaler)
my_dataset_test6.divideDataset(0,0)
my_dataset_test8 = MyDataset2(PATH='../Data/DATASET_TEST_Simple/8Objects/',n_of_scene=50,n_of_exp=4,n_of_groups=10,n_of_obj=8,f_size=5,n_of_rel_type=1,fr_size=50,scaler=dataset_scaler)
my_dataset_test8.divideDataset(0,0)
my_dataset_test9 = MyDataset2(PATH='../Data/DATASET_TEST_Simple/9Objects/',n_of_scene=50,n_of_exp=4,n_of_groups=12,n_of_obj=9,f_size=5,n_of_rel_type=1,fr_size=50,scaler=dataset_scaler)
my_dataset_test9.divideDataset(0,0)
# ### Network Creation
import hack
from Networks import *
Pns= PropagationNetwork()
Pn1=Pns.getModel(10,6,1)
# +
from Test import *
Pns.setModel(10,'../Models/PN_fixed.hdf5')
_=Pns.getModel(7,6,1)
_=Pns.getModel(9,6,1)
_=Pns.getModel(10,6,1)
_=Pns.getModel(13,6,1)
# -
# ### Running Model on Test Set Sparse
xy_origin_pos9,xy_calculated_pos9,r9,edge9=Test(my_dataset9,Pns,200,dataset_scaler.relation_threshold)
xy_origin_pos6,xy_calculated_pos6,r6,edge6=Test(my_dataset6,Pns,200,dataset_scaler.relation_threshold)
xy_origin_pos12,xy_calculated_pos12,r12,edge12=Test(my_dataset12,Pns,200,dataset_scaler.relation_threshold)
# ### Creating Test Videos
os.mkdir(TEST_PATH+'/TestVideos/')
os.mkdir(TEST_PATH+'/TestVideos/9-Sparse')
os.mkdir(TEST_PATH+'/TestVideos/6-Sparse')
os.mkdir(TEST_PATH+'/TestVideos/12-Sparse')
from IPython.display import clear_output
import os
# for ii in range(1,10,3):
# make_video_Fixed(xy_calculated_pos6[ii,:,:,:],r6[ii,:],edge6[ii],TEST_PATH+'/TestVideos/6-Sparse/test_'+str(ii)+'.mp4')
# make_video_Fixed(xy_origin_pos6[ii,:,:,:],r6[ii,:],edge6[ii],TEST_PATH+'/TestVideos/6-Sparse/true_'+str(ii)+'.mp4')
# clear_output()
# for ii in range(1,10,3):
# make_video_Fixed(xy_calculated_pos12[ii,:,:,:],r12[ii,:],edge12[ii],TEST_PATH+'/TestVideos/12-Sparse/test_'+str(ii)+'.mp4')
# make_video_Fixed(xy_origin_pos12[ii,:,:,:],r12[ii,:],edge12[ii],TEST_PATH+'/TestVideos/12-Sparse/true_'+str(ii)+'.mp4')
# clear_output()
# ### Running Model on Test Set Dense
xy_origin_pos6,xy_calculated_pos6,r6,edge6=Test(my_dataset_test6,Pns,50,dataset_scaler.relation_threshold)
# xy_origin_pos8,xy_calculated_pos8,r8,edge8=Test(my_dataset_test8,Pns,50,dataset_scaler.relation_threshold)
# xy_origin_pos9,xy_calculated_pos9,r9,edge9=Test(my_dataset_test9,Pns,50,dataset_scaler.relation_threshold)
os.mkdir(TEST_PATH+'/TestVideos/6-Dense')
os.mkdir(TEST_PATH+'/TestVideos/8-Dense')
os.mkdir(TEST_PATH+'/TestVideos/9-Dense')
from IPython.display import clear_output
import os
for ii in range(1,10,1):
    make_video_Fixed(xy_calculated_pos6[ii,:,:,:],r6[ii,:],edge6[ii],TEST_PATH+'/TestVideos/6-Dense/test_'+str(ii)+'.mp4')
    make_video_Fixed(xy_origin_pos6[ii,:,:,:],r6[ii,:],edge6[ii],TEST_PATH+'/TestVideos/6-Dense/true_'+str(ii)+'.mp4')
    clear_output()
# for ii in range(1,10,3):
# make_video_Fixed(xy_calculated_pos8[ii,:,:,:],r8[ii,:],edge8[ii],TEST_PATH+'/TestVideos/8-Dense/test_'+str(ii)+'.mp4')
# make_video_Fixed(xy_origin_pos8[ii,:,:,:],r8[ii,:],edge8[ii],TEST_PATH+'/TestVideos/8-Dense/true_'+str(ii)+'.mp4')
# clear_output()
# for ii in range(1,10,3):
# make_video_Fixed(xy_calculated_pos9[ii,:,:,:],r9[ii,:],edge9[ii],TEST_PATH+'/TestVideos/9-Dense/test_'+str(ii)+'.mp4')
# make_video_Fixed(xy_origin_pos9[ii,:,:,:],r9[ii,:],edge9[ii],TEST_PATH+'/TestVideos/9-Dense/true_'+str(ii)+'.mp4')
# clear_output()
# ## Training
TrainDg9_PN=DataGenerator(10,1,240,3600,my_dataset9.data_tr,my_dataset9.r_i_tr,my_dataset9.scaler.relation_threshold,True,64)
valDg9_PN=DataGenerator(10,1,240,600,my_dataset9.data_val,my_dataset9.r_i_val,my_dataset9.scaler.relation_threshold,False,128)
testDg_PN=DataGenerator(10,1,240,200,my_dataset9.data_test,my_dataset9.r_i_test,my_dataset9.scaler.relation_threshold,False,100,False)
# +
import hack
from Networks import *
import tensorflow as tf
Pns= PropagationNetwork()
Pn1=Pns.getModel(10,6,1)
from Callbacks import *
import os
gauss_callback=Change_Noise_Callback(TrainDg9_PN)
test_metrics= Test_My_Metrics_Callback(Pns,3,1,my_dataset9.scaler,dataset_0=my_dataset9,dataset_1=my_dataset6,dataset_2=my_dataset12)
plt_callback=PlotLosses(TEST_PATH+'/Networks_logs.csv',3,[10,7,13])
reduce_lr= tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.8,verbose=1, patience=20, mode='auto', cooldown=20)
save_model= tf.keras.callbacks.ModelCheckpoint(TEST_PATH+'/saved_models/weights.{epoch:02d}.hdf5', monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)
# -
Pn1.fit(x=TrainDg9_PN,
        # validation_data=valDg9_PN,
        epochs=250,
        use_multiprocessing=True,
        workers=32,
        callbacks=[reduce_lr,test_metrics,plt_callback,save_model],
        verbose=1)
len(TrainDg9_PN)
# Notebooks/Physics_Predictor_fixed.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="7eT9K7_bRrvu"
# # Visualizing Dense layer using ActivationMaximization
#
# [Open in Colab](https://colab.research.google.com/github/keisen/tf-keras-vis/blob/master/examples/visualize_dense_layer.ipynb)
# [View on GitHub](https://github.com/keisen/tf-keras-vis/blob/master/docs/examples/visualize_dense_layer.ipynb)
#
#
# Preparation
# -----------
#
# ### Load libraries
# + id="kDTk0hMNRrvu" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1624533765858, "user_tz": -540, "elapsed": 1969, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}} outputId="2a4e4356-32c4-40dd-e3c9-20a56e6b1562"
# %reload_ext autoreload
# %autoreload 2
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
import tensorflow as tf
from tf_keras_vis.utils import num_of_gpus
_, gpus = num_of_gpus()
print('Tensorflow recognized {} GPUs'.format(gpus))
# + [markdown] id="LjQnXREuRrvv"
# ### Load tf.keras.Model
#
# In this notebook, we use the VGG16 model; if you want to use another tf.keras.Model, you can do so by modifying the section below.
# + id="bex_VD7tRrvw" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1624533767271, "user_tz": -540, "elapsed": 1436, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}} outputId="0f4bc9d2-fd7e-49ee-fc0c-bda598a64995"
from tensorflow.keras.applications.vgg16 import VGG16 as Model
# Load model
model = Model(weights='imagenet', include_top=True)
model.summary()
# + [markdown] id="y4q8wMmXRrvw"
# ## Implement functions required to use ActivationMaximization
#
# ### Model modifier
#
# When the softmax activation function is applied to the last layer of the model, it may obstruct generating the activation maps, so you should replace it with a linear activation function. Here, we create and use a `ReplaceToLinear` instance.
# + id="LTt_ov8wRrvw" executionInfo={"status": "ok", "timestamp": 1624533767272, "user_tz": -540, "elapsed": 6, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}}
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
replace2linear = ReplaceToLinear()
# Instead of using ReplaceToLinear instance,
# you can also define the function from scratch as follows:
def model_modifier_function(cloned_model):
    cloned_model.layers[-1].activation = tf.keras.activations.linear
# + [markdown] id="NMcRYF1hRrvx"
# ### Score function
#
# You **MUST** create a `Score` instance or define a score function that returns an arbitrary category value. Here, our score function returns the value corresponding to No.20 (Ouzel) of imagenet.
# + id="qvQ5oEyyRrvx" executionInfo={"status": "ok", "timestamp": 1624533767821, "user_tz": -540, "elapsed": 5, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}}
from tf_keras_vis.utils.scores import CategoricalScore
# 20 is the imagenet index corresponding to Ouzel.
score = CategoricalScore(20)
# Instead of using CategoricalScore object above,
# you can also define the function from scratch as follows:
def score_function(output):
    # The `output` variable refers to the output of the model,
    # so, in this case, `output` shape is `(1, 1000)` i.e., (samples, classes).
    return output[:, 20]
# + [markdown] id="HkLx--gzRrvw"
# ## Visualizing a specific output category
#
# ### Create ActivationMaximization Instance
#
# When the `clone` argument is True (the default), the `model` will be cloned, so the `model` instance will NOT be modified; however, the process may take a while.
# + id="TJReX1EBRrvw" executionInfo={"status": "ok", "timestamp": 1624533767820, "user_tz": -540, "elapsed": 553, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}}
from tf_keras_vis.activation_maximization import ActivationMaximization
activation_maximization = ActivationMaximization(model,
model_modifier=replace2linear,
clone=True)
# + [markdown] id="IeFXZGiVRrvx"
# ### Visualize
#
# ActivationMaximization will maximize the output of the score function. Here, as a result, we will get a visualized image that maximizes the model output corresponding to the No.20 (Ouzel) of imagenet.
# + id="QG7LDgmgRrvx" colab={"base_uri": "https://localhost:8080/", "height": 349} executionInfo={"status": "ok", "timestamp": 1624533777551, "user_tz": -540, "elapsed": 9734, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}} outputId="f0fcc125-6f7e-4790-8c59-700ac5f58b91"
# %%time
from tf_keras_vis.activation_maximization.callbacks import Progress
from tf_keras_vis.activation_maximization.input_modifiers import Jitter, Rotate2D, Scale
from tf_keras_vis.activation_maximization.regularizers import Norm, TotalVariation2D
# Generate maximized activation
activations = activation_maximization(score,
callbacks=[Progress()])
## Since v0.6.0, calling `astype()` is NOT necessary.
# activations = activations[0].astype(np.uint8)
# Render
f, ax = plt.subplots(figsize=(4, 4))
ax.imshow(activations[0])
ax.set_title('Ouzel', fontsize=16)
ax.axis('off')
plt.tight_layout()
plt.show()
# + [markdown] id="6R8YQAkORrvy"
# ## Visualizing specific output categories
#
# Then, let's visualize multiple categories at once!
# + [markdown] id="FnZAeMeuRrvy"
# ### Modify Score function
#
# Because the target you want to visualize has changed, you **MUST** create a new `Score` instance or redefine the score function so that it returns the new category values. Here, our score function returns the values corresponding to No.1 (Goldfish), No.294 (Bear) and No.413 (Assault rifle) of imagenet.
# + id="e9HOWKA0Rrvy" executionInfo={"status": "ok", "timestamp": 1624533777933, "user_tz": -540, "elapsed": 12, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}}
from tf_keras_vis.utils.scores import CategoricalScore
image_titles = ['Goldfish', 'Bear', 'Assault rifle']
scores = CategoricalScore([1, 294, 413])
# Instead of using CategoricalScore object above,
# you can also define the function from scratch as follows:
def score_function(output):
    # The `output` variable refers to the output of the model,
    # so, in this case, `output` shape is `(3, 1000)` i.e., (samples, classes).
    # Each sample is scored against its own target class.
    return (output[0, 1], output[1, 294], output[2, 413])
# + [markdown] id="EksPapwbRrvy"
# ### Create Seed-Input values
#
# And then, you MUST create a `seed_input` value. By default, when visualizing a specific output category, tf-keras-vis automatically generates a `seed_input` to visualize an image for each model input. When visualizing multiple images, you MUST manually create the `seed_input`.
# + id="wXBa2BuARrvy" executionInfo={"status": "ok", "timestamp": 1624533777934, "user_tz": -540, "elapsed": 10, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}}
# Create `seed_input` whose shape is (samples, height, width, channels).
seed_input = tf.random.uniform((3, 224, 224, 3), 0, 255)
# + [markdown] id="_-JWct2iRrvy"
# ### Visualize
# + colab={"base_uri": "https://localhost:8080/", "height": 360} id="R9UptZ5LCzXX" executionInfo={"status": "ok", "timestamp": 1624533794325, "user_tz": -540, "elapsed": 16400, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjpaWYUzbYg71o2yLv1rB7u5E5d0fdGKdx03dFTnWs=s64", "userId": "05953776188019812919"}} outputId="49b52eb1-5fa6-4512-fa84-47e122785a03"
# %%time
from tf_keras_vis.activation_maximization.input_modifiers import Jitter, Rotate2D, Scale
from tf_keras_vis.activation_maximization.regularizers import Norm, TotalVariation2D
from tf_keras_vis.activation_maximization.callbacks import Progress, PrintLogger
# Generate maximized activation
activations = activation_maximization(scores,
seed_input=seed_input,
callbacks=[Progress()])
## Since v0.6.0, calling `astype()` is NOT necessary.
# activations = activations[0].astype(np.uint8)
# Render
f, ax = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
for i, title in enumerate(image_titles):
    ax[i].set_title(title, fontsize=16)
    ax[i].imshow(activations[i])
    ax[i].axis('off')
plt.tight_layout()
plt.show()
# docs/examples/visualize_dense_layer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# source: http://iamtrask.github.io (modified)
import numpy as np
X_XOR = np.array([[0,0,1], [0,1,1], [1,0,1],[1,1,1]])
y_truth = np.array([[0,1,1,1]]).T
np.random.seed(1)
synapse_0 = 2*np.random.random((3,1)) - 1
def sigmoid(x):
    output = 1/(1+np.exp(-x))
    return output
def sigmoid_output_to_derivative(output):
    return output*(1-output)
for iter in range(10000):
    layer_1 = sigmoid(np.dot(X_XOR, synapse_0))
    layer_1_error = layer_1 - y_truth
    layer_1_delta = layer_1_error * sigmoid_output_to_derivative(layer_1)
    synapse_0_derivative = np.dot(X_XOR.T,layer_1_delta)
    synapse_0 -= synapse_0_derivative
print("Output After Training: \n", layer_1)
# -
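# The sigmoid outputs above are soft probabilities; thresholding at 0.5 recovers hard binary labels. The self-contained rerun below also notes that, despite the `X_XOR` name, the target column `[0,1,1,1]` is the OR of the first two inputs — a single-layer network like this one could not fit XOR.

```python
import numpy as np

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0, 1, 1, 1]]).T  # OR of the first two columns (the third acts as a bias)

np.random.seed(1)
w = 2 * np.random.random((3, 1)) - 1

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# same full-batch gradient descent as above
for _ in range(10000):
    out = sigmoid(X @ w)
    w -= X.T @ ((out - y) * out * (1 - out))

pred = (sigmoid(X @ w) > 0.5).astype(int)
print(pred.ravel())  # [0 1 1 1]
```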
# Simple_neural_network.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from IPython.core.debugger import Tracer; debug_here = Tracer()
xxx = [1,2,3,4,5,6]
yyy = [6,5,4,3,2,1]
debug_here()
xyx = zip(xxx,yyy)
xyx
# Ipython_core_debug_Trace.ipynb
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ kernelspec:
/ display_name: SQL
/ language: sql
/ name: SQL
/ ---
/ + [markdown] azdata_cell_guid="ce606c93-6723-4ff7-af7a-3c944e4cb281"
/ # Use Azure-SQL as a Key-Value Store
/
/ There is no specific feature needed to create a key-value store solution, as Memory-Optimized Tables can be used very efficiently for this kind of workload.
/ Memory-Optimized tables can be configured to be Durable or Non-Durable. In the latter case data is never persisted and they really behave like an in-memory cache. Here's some useful links:
/ - [In-Memory OLTP in Azure SQL Database](https://azure.microsoft.com/en-us/blog/in-memory-oltp-in-azure-sql-database/)
/ - [Transact-SQL Support for In-Memory OLTP](https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/transact-sql-support-for-in-memory-oltp)
/ - [Optimize performance by using in-memory technologies](https://docs.microsoft.com/en-us/azure/azure-sql/in-memory-oltp-overview)
/
/ + [markdown] azdata_cell_guid="56dad9fa-5027-4431-a504-65ccc6c87cd3"
/ ## Create Key-Value Store Table
/
/ Memory-Optimized Tables are actually *compiled* into a .DLL for maximum performance, and they use a lock-free strategy to guarantee consistency and isolation
/
/ + azdata_cell_guid="3717f9b0-021c-451f-89eb-3f2e9cb01328"
if (schema_id('cache') is null)
exec('create schema [cache];')
go
create table [cache].MemoryStore
(
    [key] bigint not null,
    [value] nvarchar(max) null,
    index IX_Hash_Key unique hash ([key]) with (bucket_count = 100000)
) with (memory_optimized = on, durability = schema_only);
go
/ + [markdown] azdata_cell_guid="5fd369b5-0ee3-4d8c-93fb-bdac3da78f77"
/ # Test Select / Insert / Update performances
/
/ To test the performance of the In-Memory table we want to minimize, or even zero out, all other external dependencies so we can focus on pure Azure SQL performance. For this reason it is useful to create a Stored Procedure using Native Compilation (so that it too will be compiled into a .DLL) that executes 100K SELECT/UPSERT operations as fast as possible. To simulate a more realistic workload, it will get the cached JSON document, update it, and then put it back into the In-Memory table acting as a Key-Value store cache.
/ + azdata_cell_guid="c2ee3450-77a7-483f-a49b-26e0fbca037f"
create or alter procedure cache.[Test]
with native_compilation, schemabinding
as
begin atomic with (transaction isolation level = snapshot, language = N'us_english')
    declare @i int = 0;
    declare @o int = 0;
    while (@i < 100000)
    begin
        declare @r int = cast(rand() * 100000 as int)
        declare @v nvarchar(max) = (select top(1) [value] from [cache].[MemoryStore] where [key]=@r);
        set @o += 1;
        if (@v is not null) begin
            declare @c int = cast(json_value(@v, '$.counter') as int) + 1;
            update [cache].[MemoryStore] set [value] = json_modify(@v, '$.counter', @c) where [key] = @r
            set @o += 1;
        end else begin
            declare @value nvarchar(max) = '{"value": "' + cast(sysdatetime() as nvarchar(max)) + '", "counter": 1}'
            insert into [cache].[MemoryStore] values (@r, @value)
            set @o += 1;
        end
        set @i += 1;
    end
    select total_operations = @o;
end
go
/ + [markdown] azdata_cell_guid="32688954-3613-4047-b840-39f19c18e138"
/
/ Run the procedure to execute the test, while taking notice of start and end time to calculate elapsed milliseconds. Remember, __100K iterations__ will be executed.
/ + azdata_cell_guid="8f16b141-4933-4069-843c-420c648ec407"
delete from [cache].[MemoryStore]
go
declare @s datetime2, @e datetime2;
set @s = sysutcdatetime();
exec cache.[Test];
set @e = sysutcdatetime();
select slo = databasepropertyex(db_name(), 'ServiceObjective'), elapsed_msec = datediff(millisecond, @s, @e)
go
/ + [markdown] azdata_cell_guid="ae332d29-023d-4d17-b5a9-f02598d1661a"
/ In something around __2.9__ seconds a 4 vCore Azure SQL database has been able to complete 100K iterations. As every iteration executed a SELECT followed by an INSERT or UPDATE, 200K operations have been executed in 2.9 seconds, which works out to close to __69K queries/sec__!
/ + [markdown] azdata_cell_guid="183dc7d4-eb0e-439d-83c1-56e2fa38e17b"
/ ## Cleanup
/ + azdata_cell_guid="95ec83f3-3966-4515-801b-6959e0918c57"
drop procedure [cache].[Test]
drop table [cache].[MemoryStore]
go
/ samples/06-key-value/key-value.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 3D plots
# +
# %matplotlib notebook
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was deprecated and later removed in Matplotlib
theta = np.linspace(-4 * np.pi, 4 * np.pi, 100)
z = np.linspace(-2, 2, 100)
r = z**2 + 1
x = r * np.sin(theta)
y = r * np.cos(theta)
ax.plot(x, y, z, label='parametric curve')
ax.legend()
# -
# day3/3D.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# +
# Require the packages
require(ggplot2)
library(repr)
options(repr.plot.width=10.5, repr.plot.height=4.5)
# +
results_dir <- "../resources/results/results_semisupervised_sensem_7k/014"
lemma_data <- data.frame(iteration=integer(), sense=character(), count=integer(), experiment=character())
for(exp in c("bow_logreg", "wordvec_mlp_2_0", "wordvecpos_mlp_2_0")) {
    data <- read.csv(paste(results_dir, exp, "targets_distribution", sep="/"), header = F)
    names(data) <- c("iteration", "sense", "count")
    data$experiment <- exp
    lemma_data <- rbind(lemma_data, data)
}
lemma_data$experiment <- factor(lemma_data$experiment)
levels(lemma_data$experiment) <- c("Bag-of-Words &\nLogistic Regression",
"Word Embeddings &\nMultilayer Perceptron",
"Word Embeddings and PoS &\nMultilayer Perceptron")
# -
p <- ggplot(lemma_data, aes(x=iteration, y=count, fill=sense))
p <- p + facet_wrap(~ experiment, scales = 'free')
p <- p + geom_area(position="fill")
p <- p + scale_x_continuous(breaks=seq(0, 20, 2))
p <- p + scale_y_continuous(breaks=seq(0, 1, 0.1), labels=seq(0, 100, 10))
p <- p + labs(title="Population percentage per sense for lemma \"base.n\"", y="Percent", x="Iteration Number")
p <- p + scale_fill_brewer(name="Sense", palette = "Accent", direction = 1,
breaks=c("alcanzar-09", "alcanzar-08", "alcanzar-06", "alcanzar-05",
"alcanzar-03", "alcanzar-02"))
p <- p + theme(
plot.title=element_text(size=15, face="bold", margin=margin(10, 0, 10, 0), vjust=1, lineheight=0.6),
strip.text.x=element_text(size=10),
axis.title.x=element_text(size=12, margin=margin(10, 0, 0, 0)),
axis.title.y=element_text(size=12, margin=margin(0, 10, 0, 0)),
legend.title=element_text(face="bold", size=13),
legend.text=element_text(size=11),
legend.key.height=unit(1.5,"line")
)
p
# Save the plot
ggsave("plots/dirigir.png", plot=p, width=10.5, height=4.5)
library(grid)
library(gridExtra)
options(repr.plot.width=10.5, repr.plot.height=18)
ggsave("plots/population_progres.png", plot=grid.arrange(p1, p2, p3, p4, ncol = 1), width=10.5, height=18)
# graphics/population_progress_R.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cartoframes
from cartoframes import carto_vl as vl
from cartoframes import CartoContext
context = CartoContext(
base_url='https://cartovl.carto.com/',
api_key='default_public'
)
# -
vl.Map(
[vl.Layer(
'higher_edu_by_county',
style={
'color': 'ramp(globalQuantiles($pct_higher_ed, 5), Emrld)',
'stroke-color': 'rgba(0,0,0,0.4)',
'stroke-width': 1
}
)],
context=context
)
vl.Map(
[vl.Layer(
'higher_edu_by_county',
style={
'color': 'ramp(globalQuantiles($pct_higher_ed, 5), reverse(Emrld))',
'stroke-color': 'rgba(0,0,0,0.4)',
'stroke-width': 1
}
)],
context=context
)
vl.Map(
[vl.Layer(
'higher_edu_by_county',
style={
'color': 'ramp(globalQuantiles($pct_higher_ed, 5), cb_YlGn)',
'stroke-color': 'rgba(0,0,0,0.4)',
'stroke-width': 1
}
)],
context=context
)
vl.Map(
[vl.Layer(
'higher_edu_by_county',
style={
'color': 'ramp(globalQuantiles($pct_higher_ed, 5), reverse(cb_YlGn))',
'stroke-color': 'rgba(0,0,0,0.4)',
'stroke-width': 1
}
)],
context=context
)
vl.Map(
[vl.Layer(
'higher_edu_by_county',
style={
'color': 'ramp(globalQuantiles($pct_higher_ed, 5),[#234a67, #fad780])',
'stroke-color': 'rgba(0,0,0,0.4)',
'stroke-width': 1
}
)],
context=context
)
# examples/debug/carto_vl/styling/color_palettes.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Moving data from numpy arrays to pandas DataFrames
# In our last notebook, we trained a model and compared our actual and predicted results.
#
# What may not have been evident is that we were working with two different objects: a **numpy array** and a **pandas DataFrame**
# To explore further, let's rerun the code from the previous notebook to create a trained model and get predicted values for our test data
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# +
# Load our data from the csv file
delays_df = pd.read_csv('Lots_of_flight_data.csv')
# Remove rows with null values since those will crash our linear regression model training
delays_df.dropna(inplace=True)
# Move our features into the X DataFrame
X = delays_df.loc[:,['DISTANCE','CRS_ELAPSED_TIME']]
# Move our labels into the y DataFrame
y = delays_df.loc[:,['ARR_DELAY']]
# Split our data into test and training DataFrames
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=42)
regressor = LinearRegression() # Create a scikit learn LinearRegression object
regressor.fit(X_train, y_train) # Use the fit method to train the model using your training data
y_pred = regressor.predict(X_test) # Generate predicted values for our test data
# -
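# As a quick numeric check (a hypothetical sketch, not part of the original notebook), one simple way to compare actual and predicted values is the mean absolute error:

```python
import numpy as np

# Toy stand-ins for y_test and y_pred (hypothetical values)
y_true = np.array([3.0, -1.0, 5.0])
y_hat = np.array([2.5, 0.0, 4.0])

# Mean absolute error: the average size of the prediction errors
mae = np.mean(np.abs(y_true - y_hat))
print(mae)
```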
# In the last notebook, you might have noticed that the predicted values in y_pred and the actual values in y_test display differently
y_pred
y_test
# Use **type()** to check the datatype of an object.
type(y_pred)
type(y_test)
# * **y_pred** is a numpy array
# * **y_test** is a pandas DataFrame
#
# Another way you might discover this is if you try to use the **head** method on **y_pred**.
#
# This will return an error, because **head** is a method of the DataFrame class, not of numpy arrays
y_pred.head()
# A one-dimensional numpy array is similar to a pandas Series
#
import numpy as np
airports_array = np.array(['Pearson','Changi','Narita'])
print(airports_array)
print(airports_array[2])
airports_series = pd.Series(['Pearson','Changi','Narita'])
print(airports_series)
print(airports_series[2])
# A two-dimensional numpy array is similar to a pandas DataFrame
airports_array = np.array([
['YYZ','Pearson'],
['SIN','Changi'],
['NRT','Narita']])
print(airports_array)
print(airports_array[0,0])
airports_df = pd.DataFrame([['YYZ','Pearson'],['SIN','Changi'],['NRT','Narita']])
print(airports_df)
print(airports_df.iloc[0,0])
# If you need the functionality of a DataFrame, you can move data from numpy objects to pandas objects and vice versa.
#
# In the example below we use the DataFrame constructor to read the contents of the numpy array *y_pred* into a DataFrame called *predicted_df*
#
# Then we can use the functionality of the DataFrame object
predicted_df = pd.DataFrame(y_pred)
predicted_df.head()
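# Going the other way (a sketch with made-up values, not from the flight data above), **DataFrame.to_numpy()** hands the underlying data back as a numpy array:

```python
import pandas as pd

# Hypothetical predicted delays stored in a DataFrame
predicted_df = pd.DataFrame([[3.5], [-1.2]], columns=['ARR_DELAY'])

# to_numpy() returns the DataFrame's values as a numpy array
back_to_array = predicted_df.to_numpy()
print(type(back_to_array))
```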
| even-more-python-for-beginners-data-tools/14 - NumPy vs Pandas/14 - Working with numpy and pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
# + pycharm={"is_executing": false}
s = Series(np.random.randn(1000))
print(s)
# + pycharm={"is_executing": false}
plt.hist(s, rwidth=0.9)
# -
plt.show()
# + pycharm={"is_executing": false}
s1 = Series(np.arange(10))
# + pycharm={"is_executing": false}
s1
# + pycharm={"is_executing": false}
plt.hist(s1, rwidth=0.9)
# -
plt.show()
plt.hist(s, rwidth=0.9, bins=20, color='r')
plt.show()
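# Under the hood, `plt.hist` bins the data the same way `np.histogram` does; a small sanity check (sketch with its own random data):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal(1000)

# np.histogram returns the counts per bin and the bin edges
# (20 bins means 21 edges, and the counts sum to the sample size)
counts, edges = np.histogram(data, bins=20)
print(counts.sum(), len(edges))
```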
# # KDE
type(s)
s.plot(kind='kde')
plt.show()
| matplotlib/3. Histogram and KDE Plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from datetime import datetime
from sqlalchemy import create_engine
import requests
from time import sleep
from pricing.service.scoring.lscore import LScoring
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/apiPricing")
con = engine.connect()
dfacomp = pd.read_sql("select * from acompanhamento", con)
con.close()
# +
# 1. Disbursement date
# -
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/creditoDigital")
con = engine.connect()
dfop = pd.read_sql("select * from desembolso", con)
con.close()
df_data = dfop[["controleParticipante", "dataDesembolso"]]
df_data.head()
df_data["dataDesembolso"] = df_data.apply(lambda x : x["dataDesembolso"].date(), axis=1)
df_data.shape
res = dfacomp.merge(df_data, left_on="controleParticipante", right_on="controleParticipante", how='left')
res[res["cnpj"]=='28071213000123']
# +
def get_numero_consulta(cnpj):
engine = create_engine("mysql+pymysql://capMaster:#<EMAIL>#@<EMAIL>:23306/varejo")
con = engine.connect()
query = "select data_ref, numero_consulta from consultas_idwall_operacoes where cnpj_cpf='{}'".format(cnpj)
df = pd.read_sql(query, con)
if df.empty:
return None
numero = df[df['data_ref']==df['data_ref'].max()]["numero_consulta"].iloc[0]
con.close()
return numero
def get_details(numero):
URL = "https://api-v2.idwall.co/relatorios"
authorization = "<PASSWORD>"
url_details = URL + "/{}".format(numero) + "/dados"
while True:
dets = requests.get(url_details, headers={"authorization": authorization})
djson = dets.json()
sleep(1)
if djson['result']['status'] == "CONCLUIDO":
break
return dets.json()
def get_idade(cnpj):
numero = get_numero_consulta(cnpj)
if numero is None:
return -1
js = get_details(numero)
data_abertura = js.get("result").get("cnpj").get("data_abertura")
data_abertura = data_abertura.replace("/", "-")
data = datetime.strptime(data_abertura, "%d-%m-%Y").date()
idade = ((datetime.now().date() - data).days/366)
idade_empresa = np.around(idade, 2)
return idade_empresa
# -
fr = []
for el in res["cnpj"].unique().tolist():
idade = get_idade(el)
fr.append(pd.DataFrame({"cnpj" : [el], "idade" : [idade]}))
df_idade = pd.concat(fr)
df_idade[df_idade['idade']==-1]
df_idade.head()
dfacomp = dfacomp.merge(df_idade, left_on='cnpj', right_on='cnpj', how='left')
df_asset = pd.read_excel('HistoricoCobranca.xlsx')
df_asset["cnpj"] = df_asset.apply(lambda x : x["CNPJ"].replace(".", "").replace("-", "").replace("/", ""), axis=1)
df1 = df_asset[["cnpj"]]
df1["flag_cobranca"] = 1
df1.shape
df1.drop_duplicates(inplace=True)
dfacomp = dfacomp.merge(df1, left_on="cnpj", right_on="cnpj", how="left")
dfacomp.fillna({"flag_cobranca" : 0}, inplace=True)
lista_fechou = df_asset[df_asset["JUSTIFICATIVA DO ALERTA"].isin(["Fechou a Loja", "Fechou a Empresa"])]["cnpj"].tolist()
df_idade[df_idade["cnpj"].isin(lista_fechou)]
dfacomp.columns
dfacomp.drop(columns=["milestones", "divida", "liquidacao", "score_temporal", "saldoDevedor", "mediaDia", "mediaEstimada"], axis=1, inplace=True)
dfacomp.drop(columns=["id", "valorPago", "valorDevido", "valorPresente", "taxaRetencao", "taxaRetencaoIdeal"], axis=1, inplace=True)
dfacomp.drop(columns=["valorAquisicao", "taxaEsperada", "taxaMin", "taxaEfetiva", "prazo", "prazoMax", "prazo_efetivo"], axis=1, inplace=True)
dfacomp.drop(columns=['duration', 'duration_esperada', 'duration_efetiva', 'moic_contratado',
'mediaFatAnterior', 'faturamentoMinimo', 'faturamentoMedio', 'entrada',
'custoFixo', 'custoTotal', 'custoCredito', 'tac', 'fluxoMin', 'fluxoMax', 'fluxoMedia'], axis=1, inplace=True)
dfacomp.columns
plt.hist(dfacomp["idade"])
plt.hist(dfacomp[dfacomp["status"]=="ALERTA"]["idade"])
plt.hist(dfacomp[dfacomp["status"]=="OTIMO"]["idade"])
dfacomp["status"].unique().tolist()
plt.hist(dfacomp[dfacomp["status"]=="BOM"]["idade"])
plt.hist(dfacomp[dfacomp["status"]=="NENHUM PAGAMENTO REALIZADO"]["idade"])
df_score = dfacomp[dfacomp["flag_cobranca"]==1][["cnpj", "produto", "score", "idade"]]
df_score.head()
df_score.groupby("produto").count()
# #### New score with the revenue-history correction
from pricing.utils import formata_cnpj
def get_dados(cnpj, produto):
if produto in ["tomatico", "padrao"]:
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#<EMAIL>:23306/credito-digital")
con = engine.connect()
else:
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#<EMAIL>:23306/varejo")
con = engine.connect()
query_wirecard = "select cpf_cnpj as cnpj, data, pgto_liquido as valor from fluxo_moip where cpf_cnpj='{}'".format(formata_cnpj(cnpj))
query_pv = "select cpf_cnpj as cnpj, data, valor from fluxo_pv where cpf_cnpj='{}'".format(formata_cnpj(cnpj))
query_tomatico = "select cnpj, dataFluxo as data, valorFluxo as valor from tb_Fluxo where cnpj='{}'".format(cnpj)
dict_query = {"tomatico" : query_tomatico,
"padrao" : query_tomatico,
"wirecard" : query_wirecard,
"moip" : query_wirecard,
"pagueveloz" : query_pv,
"creditoveloz" : query_pv
}
query = dict_query.get(produto)
df = pd.read_sql(query, con)
con.close()
df = df.groupby("data").sum().reset_index()
datas = pd.date_range(end=datetime(2019, 5, 1), periods=len(df), freq='MS')
datas = [el.date() for el in datas]
df["data"] = datas
dados = df[["data", "valor"]].to_dict("records")
body = {"dados" : dados, "id_produto" : "tomatico"}
return body
# +
import pandas as pd
import numpy as np
from datetime import datetime
from dateutil.relativedelta import relativedelta
from conector.mysql import mysql_engine
from sqlalchemy import create_engine
from pricing.service.scoring.base import BaseScoring
from werkzeug import exceptions
from scipy import stats
from pricing.utils import formata_cnpj
from datamanager import conn_pricing
class LScoring(BaseScoring):
def __init__(self, data=None, cnpj=None, produto=None):
self.cnpj = cnpj
self.produto = data.get("id_produto") if not data is None else produto
self.params = self.get_dados() if not self.cnpj is None else data.get("dados")
# self.params = data['dados']
# self.produto = data['id_produto']
self.faturamentos = None
self.razao_outlier = None
self.data_max = None
self.estabilidade = None
self.pesos = None
self.volatilidade = None
self.curva_score = None
self.score_crescimento = None
self.prop_queda = None
self.score_volatilidade = None
self.slope = None
self.erro = None
self.probabilidade_zeros = None
self.zscore = None
def get_dados(self):
if self.produto in ["tomatico", "padrao"]:
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@<EMAIL>:23306/credito-digital")
con = engine.connect()
else:
engine = create_engine("mysql+pymysql://capMaster:#<EMAIL>#@<EMAIL>:23306/varejo")
con = engine.connect()
query_wirecard = "select cpf_cnpj as cnpj, data, pgto_liquido as valor from fluxo_moip where cpf_cnpj='{}'".format(formata_cnpj(self.cnpj))
query_pv = "select cpf_cnpj as cnpj, data, valor from fluxo_pv where cpf_cnpj='{}'".format(formata_cnpj(self.cnpj))
query_tomatico = "select cnpj, dataFluxo as data, valorFluxo as valor from tb_Fluxo where cnpj='{}'".format(self.cnpj)
dict_query = {"tomatico" : query_tomatico,
"padrao" : query_tomatico,
"wirecard" : query_wirecard,
"moip" : query_wirecard,
"pagueveloz" : query_pv
}
query = dict_query.get(self.produto)
df = pd.read_sql(query, con)
con.close()
df = df.groupby("data").sum().reset_index()
try:
df["data"] = df.apply(lambda x : x["data"].date(), axis=1)
except:
pass
dados = df[["data", "valor"]].to_dict("records")
return dados
@classmethod
def validar_dados(cls, data):
if data is None:
raise exceptions.BadRequest("Missing data")
if not isinstance(data['dados'], list):
raise exceptions.UnprocessableEntity(
"Field 'dados' should be an array")
@staticmethod
def gera_periodo(periods=12):
now = datetime.now().date()
start = datetime(now.year, now.month, 1)
start = start - relativedelta(months=periods)
datas = pd.date_range(start=start, periods=periods, freq='MS')
datas = [el.date() for el in datas]
return datas
@staticmethod
def mensaliza(df):
df.index = pd.to_datetime(df.data)
df = df.resample('MS').sum().reset_index()
print(df)
return df
def isElegible(self):
df = pd.DataFrame(self.params)
df = self.mensaliza(df)
per = self.gera_periodo(periods=6)
print("periodo de elegibilidade : {}".format(per))
df = df[df['data'].isin(per)].copy()
lista_val = df['valor'].tolist()
if 0 in lista_val or len(df) < 6:
return None
return 1
def gera_serie(self, periods=12):
df = pd.DataFrame(self.params)
df = self.mensaliza(df)
df['data'] = df.data.dt.date
periodo_completo = self.gera_periodo(periods=periods)
df = df[df['data'].isin(periodo_completo)]
if df.empty:
self.faturamentos = df
return
data_min = df['data'].min()
datas = pd.date_range(
start=data_min, end=periodo_completo[-1], freq="MS")
datas = [el.date() for el in datas]
for data in datas:
if data not in df['data'].tolist():
df_extra = pd.DataFrame({"data": [data], "valor": [0]})
df = pd.concat([df, df_extra])
df.sort_values("data", inplace=True)
if self.faturamentos is None:
self.faturamentos = df
return
def outlier_6meses(self):
razao_outlier = self.faturamentos['valor'].mean(
)/np.mean(self.faturamentos['valor'].tolist()[:-1])
if self.razao_outlier is None:
self.razao_outlier = razao_outlier
return
def data_maxima(self):
res = dict(zip(list(self.faturamentos['valor'].diff())[
1:], self.faturamentos['data'].tolist()[0:-1]))
data_max = res.get(np.max(list(res.keys())))
if self.data_max is None:
self.data_max = data_max
return
def crescimento_efetivo(self):
df = self.faturamentos[self.faturamentos['data'] > self.data_max]
estabilidade = df['valor'].std()/df['valor'].iloc[0]
if self.estabilidade is None:
self.estabilidade = estabilidade
return
def calcula_pesos(self):
pesos = list(range(1, self.faturamentos.shape[0]))
if self.estabilidade <= 0.15:
dic_pesos = dict(
zip(self.faturamentos['data'].tolist()[:-1], pesos))
peso_max = np.max(list(dic_pesos.values()))
dic_pesos[self.data_max] = peso_max
if self.data_max - relativedelta(months=1) in list(dic_pesos.keys()):
p = dic_pesos.get(self.data_max - relativedelta(months=1))
else:
p = 0
keys = pd.date_range(start=self.data_max + relativedelta(months=1),
end=list(dic_pesos.keys())[-1], freq='MS')
keys = [el.date() for el in keys]
i = 1
for data in keys:
dic_pesos[data] = p + i
i += 1
else:
dic_pesos = dict(
zip(self.faturamentos['data'].tolist()[:-1], pesos))
if self.pesos is None:
self.pesos = dic_pesos
return
def calcula_volatilidade(self):
self.volatilidade = self.faturamentos['valor'].std(
)/self.faturamentos['valor'].mean()
return
# growth score
def lscore(self):
pesos = list(self.pesos.values())
if self.razao_outlier >= 2:
pesos[-1] = 1
dfcalc = self.faturamentos[['valor']].diff()
dfcalc.dropna(inplace=True)
dfcalc['pesos'] = pesos
dfcalc['tx'] = dfcalc['valor'] * dfcalc['pesos']
tx = dfcalc['tx'].sum() / dfcalc['pesos'].sum()
tx = tx/self.faturamentos['valor'].mean()
return tx
def calibracao(self):
eng = mysql_engine("apiPricing")
df = pd.read_sql("select * from apiPricing.calibracao_score", eng)
self.curva_score = df[['metrica',
'score', 'tipo_metrica', 'bandwidth']]
return
def get_score(self, metrica, tipo_metrica):
dfcal = self.curva_score[self.curva_score['tipo_metrica']
== tipo_metrica]
bw = dfcal['bandwidth'].iloc[0]
if tipo_metrica == 'lscore':
if metrica <= dfcal['metrica'].min():
return 0
if metrica >= dfcal['metrica'].max():
return 1000
else:
if metrica >= dfcal['metrica'].max():
return 0
if metrica <= dfcal["metrica"].min():
return 1000
return dfcal[(dfcal['metrica'] >= metrica-bw) & (dfcal['metrica'] <= metrica+bw)]['score'].mean()
def prop_quedas(self):
dt = self.faturamentos
df1 = dt[['valor']].diff()
df1.dropna(inplace=True)
df1['flag'] = df1.apply(lambda x: int(x['valor'] < 0), axis=1)
if 1 not in df1['flag'].tolist():
self.prop_queda = 0
if 0 not in df1["flag"].tolist():
self.prop_queda = 1
return
def calcula_tendencia(self):
dt = pd.DataFrame(self.params)
dt["valor"] = dt["valor"]/dt["valor"].max()
x = dt.index
y = dt['valor']
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
self.slope = slope
self.erro = std_err
return
# computes the probability of observing zero revenue
def probabilidade_faturamento_nulo(self):
_df = self.faturamentos
media = _df['valor'].mean()
_df['prop'] = _df['valor']/media
periodo_elegibilidade = self.gera_periodo(periods=6)
df_zeros = _df[~_df['data'].isin(periodo_elegibilidade)]
# any value below 20% of the mean value is treated as zero revenue
probabilidade = len(df_zeros[df_zeros['prop'] <= 0.2])/len(_df)
print("probabilidade_zeros :{}".format(probabilidade))
if self.probabilidade_zeros is None:
self.probabilidade_zeros = probabilidade
return
def calcula_zscore(self, score_inicial):
if self.probabilidade_zeros > 0:
n = len(self.faturamentos)
# considering a valid prob if we have at least 10 months
if n >= 10:
score = score_inicial * ((((-1) * n)/(n-6)) * self.probabilidade_zeros + 1)
if self.zscore is None:
self.zscore = score
print("ZSCORE : {}".format(score))
@property
def correcao(self):
return {6 : 0.7, 7 : 0.8, 8 : 0.9}
def get_correcao(self, score):
historico = len(self.faturamentos)
print("HISTORICO : {}".format(historico))
fator_correcao = self.correcao.get(historico, 1)
print('CORRECAO HISTORICO : {}'.format(fator_correcao))
return fator_correcao*score
def calcula(self):
if self.produto == 'tomatico' or self.produto == "padrao":
if not self.isElegible():
return {'score': np.nan}
self.gera_serie()
if self.faturamentos.empty:
return {"score" : np.nan}
now = datetime.now().date() - relativedelta(months=1)
data_proposta = datetime(now.year, now.month, 1).date()
if self.faturamentos[self.faturamentos['data'] == data_proposta]['valor'].iloc[0] == 0:
self.faturamentos = self.faturamentos[self.faturamentos['data'] != data_proposta]
self.data_maxima()
self.outlier_6meses()
self.calcula_volatilidade()
self.crescimento_efetivo()
self.calcula_pesos()
self.probabilidade_faturamento_nulo()
lscore = self.lscore()
self.prop_quedas()
self.calibracao()
score = self.get_score(metrica=lscore, tipo_metrica='lscore')
self.score_crescimento = score
if self.prop_queda == 0:
self.score_crescimento = 1000
self.calcula_zscore(self.score_crescimento)
if not self.zscore is None:
score = (self.score_crescimento + self.zscore)/2
else:
score = self.score_crescimento
score = self.get_correcao(score)
return {'score' : int(score)}
if self.prop_queda == 1:
self.calcula_zscore(self.score_crescimento)
if not self.zscore is None:
score = (self.zscore + self.score_crescimento)/2
else:
score = self.score_crescimento
score = self.get_correcao(score)
return {'score' : int(score)}
self.calcula_tendencia()
if self.slope < -0.2:
self.calcula_zscore(score)
if not self.zscore is None:
score = (self.zscore + self.score_crescimento)/2
else:
score = self.score_crescimento
score = self.get_correcao(score)
return {'score': int(score)}
if abs(self.slope) <= 0.01 and self.erro < 0.05:
self.score_volatilidade = 1000*(1-self.erro)
score = (2*self.score_crescimento + self.score_volatilidade)/3
self.calcula_zscore(score)
if not self.zscore is None:
score = (self.zscore + score)/2
score = self.get_correcao(score)
return {'score': int(score)}
self.params = self.faturamentos.sort_values('data', ascending=False).iloc[:6, :].sort_values('data').to_dict('records')
self.calcula_tendencia()
if self.slope < -0.2:
self.calcula_zscore(self.score_crescimento)
if not self.zscore is None:
score = (self.zscore + self.score_crescimento)/2
else:
score = self.score_crescimento
score = self.get_correcao(score)
return {'score': int(score)}
self.score_volatilidade = int(self.get_score(metrica=self.volatilidade, tipo_metrica='vscore'))
score = (2*self.score_crescimento + self.score_volatilidade)/3
print("SCORE INICIAL : {}".format(score))
self.calcula_zscore(score)
if not self.zscore is None:
score = (self.zscore + score)/2
score = self.get_correcao(score)
return {'score': int(score)}
# -
def get_novo_score(cnpj, produto):
body = get_dados(cnpj, produto)
ls = LScoring(body)
score = ls.calcula().get("score")
historico = len(ls.faturamentos)
return score, historico
fr = []
for el in df_score["cnpj"].tolist():
dt = df_score[df_score["cnpj"]==el]
produto = dt["produto"].iloc[0].lower()
score, hist = get_novo_score(el, produto)
dt["historico"] = hist
dt["score_correcao_historico"] = score
fr.append(dt)
df_score = pd.concat(fr)
df_score.head()
# #### 2. Debt Score
# +
'''
WIP: debt score
'''
# from pricing.service.scoring.lscore import LScoring
from sqlalchemy import create_engine
import numpy as np
import pandas as pd
import requests
from time import sleep
from datetime import datetime
from conector.mysql import mysql_engine, CaptalysDBContext
class DScoring(object):
def __init__(self, cnpj, produto):
self.cnpj = cnpj
self.doctype = 'cpf' if len(self.cnpj)<12 else 'cnpj'
self.produto = produto
self.lscore = None
self.baseline = 800
self.fator_elegibilidade = 3
self.faturamento_medio = None
self.calibracao_segmento = None
self.consulta = None
self.estados_dividas = None
self.dispersao_divida = None
self.idade_empresa = None
self.metricas = None
def score_mestre(self, shift=True):
ls = LScoring(cnpj=self.cnpj, produto=self.produto)
if shift:
_df = pd.DataFrame(ls.params)
datas = pd.date_range(end = (datetime.now().date() - relativedelta(months=1)).replace(day=1), periods=len(_df), freq='MS')
datas = [el.date() for el in datas]
_df['data'] = datas
ls.params = _df.to_dict("records")
lscore = ls.calcula().get('score')
fat_medio = ls.faturamentos['valor'].mean()
self.lscore = lscore
self.faturamento_medio = fat_medio
return
def set_calibracao(self):
delta = int(np.floor(0.8*self.lscore/4))
escala_score = {
"credito" : delta,
"processos" : 2*delta,
"infra" : 3*delta,
"outros" : 4*delta
}
if self.calibracao_segmento is None:
self.calibracao_segmento = escala_score
return
@property
def campos_divida(self):
return {
"restricoes" : ["data_ocorrencia", "modalidade_natureza", "natureza", "valor"],
"protestos" : ["data_anotacao", "natureza", "sub_judice_descricao", "valor"],
"pendencias" : ["data_ocorrencia", "modalidade", "natureza", "valor"],
"processos" : ["data_ocorrencia", "descricao_natureza", "natureza", "valor"],
"restricoes_financeiras" : ["data_ocorrencia", "modalidade_natureza", "natureza", "valor"]
}
@property
def campos_rename(self):
return {
"processos" : {"descricao_natureza" : "modalidade_natureza"},
"pendencias" : {"modalidade" : "modalidade_natureza"},
"protestos" : {'sub_judice_descricao' : "modalidade_natureza", "data_anotacao" : "data_ocorrencia"}
}
@property
def segmentos(self):
return {"credito" : ['EMPRESCONTA', 'EMPRESTIMO', 'CREDCARTAO', 'FINANCIAMENT',
'CREDITOEFINANCIAMENTO-FINANC'],
"processos" : ['EXCJUDTRAB', 'FISCALESTADUAL', 'EXECUCAO', 'FISCALFEDERAL',
'FISCALMUNICIPAL','EXECUCAO-JE', 'BUSCAEAPREENSAO'],
"infra" : ['FATAGUA', 'TELEFFX', 'TELEFFIXA', 'TELEFMOVEL', 'CONDOMINIO',
'ENERGIAELET', 'ALUGUEL', 'SERVTELEFON']
}
@property
def escala_impacto(self):
return {"credito" : {"i0" : 0.75, "i1" : 1},
"processos" : {"i0" : 0.5, "i1" : 0.75},
"infra" : {"i0" : 0.25, "i1" : 0.5},
"outros" : {"i0" : 0, "i1" : 0.25},
}
def get_numero_consulta(self):
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/varejo")
con = engine.connect()
query = "select data_ref, numero_consulta from consultas_idwall_operacoes where cnpj_cpf='{}'".format(self.cnpj)
df = pd.read_sql(query, con)
numero = df[df['data_ref']==df['data_ref'].max()]["numero_consulta"].iloc[0]
con.close()
self.numero_consulta = numero
return numero
@staticmethod
def get_details(numero):
URL = "https://api-v2.idwall.co/relatorios"
authorization = "b3818f92-5807-4acf-ade8-78a1f6d7996b"
url_details = URL + "/{}".format(numero) + "/dados"
while True:
dets = requests.get(url_details, headers={"authorization": authorization})
djson = dets.json()
sleep(1)
if djson['result']['status'] == "CONCLUIDO":
break
return dets.json()
@staticmethod
def formata_dados(df):
df['modalidade_natureza'] = df.apply(lambda x : x['modalidade_natureza'].replace(" ", "") if isinstance(x['modalidade_natureza'], str) else "OUTROS", axis=1)
df['valor'] = df.apply(lambda x : x['valor'].split("R$ ")[1].replace(",", "."), axis=1)
df["valor"] = df.apply(lambda x : float(x["valor"]), axis=1)
return df
def get_infos_dividas(self, js, tp_pendencia):
res = js.get("result").get(tp_pendencia)
if not res is None:
df = pd.DataFrame(res.get('itens'))
cols = self.campos_divida.get(tp_pendencia)
if "uf" in list(df.columns):
cols = cols + ["uf"]
df = df[cols].copy()
else:
df = df[cols]
df["uf"] = None
rename = self.campos_rename.get(tp_pendencia)
if not rename is None:
df.rename(columns = rename, inplace=True)
df["tipo"] = tp_pendencia
return df
return None
def gera_dados(self):
numero = self.get_numero_consulta()
js = self.get_details(numero)
self.consulta = js
fr = []
lista_pendencias = ["restricoes", "processos", "protestos", "pendencias", "restricoes_financeiras"]
for el in lista_pendencias:
res = self.get_infos_dividas(js, el)
if not res is None:
fr.append(res)
df = pd.concat(fr)
df = self.formata_dados(df)
self.estados_dividas = df["uf"].unique().tolist()
return df
def calcula_dispersao_divida(self):
uf_cnpj = self.consulta.get("result").get("cnpj").get("localizacao").get("estado")
lista_dispersao = [el for el in self.estados_dividas if el!= uf_cnpj]
dispersao = len(lista_dispersao)/4
self.dispersao_divida = dispersao
return
def get_idade(self):
data_abertura = self.consulta.get("result").get("cnpj").get("data_abertura")
data_abertura = data_abertura.replace("/", "-")
data = datetime.strptime(data_abertura, "%d-%m-%Y").date()
idade = ((datetime.now().date() - data).days/366)
self.idade_empresa = np.around(idade, 2)
return
def atribui_segmento(self, df):
df['segmento'] = df.apply(lambda x : 'processos' if x['tipo']=='processos'
else('credito' if x['modalidade_natureza'] in self.segmentos.get("credito")
else ('infra' if x['modalidade_natureza'] in self.segmentos.get("infra") else "outros")), axis=1)
return df
@staticmethod
def calcula_probabilidade(df):
dt = df.groupby("segmento").count().reset_index()[["segmento", "valor"]]
dt.columns = ["segmento", "ocorrencias"]
dt["probabilidade"] = dt["ocorrencias"]/dt["ocorrencias"].sum()
return dt
@staticmethod
def calcula_composicao(df):
dt = df.groupby("segmento").sum().reset_index()
dt.columns = ["segmento", "valor_divida"]
dt["composicao"] = dt["valor_divida"]/dt["valor_divida"].sum()
return dt
def calcula_pi(self, dfcalc):
dfcalc['pi'] = dfcalc['valor_divida']/dfcalc['fat_medio']
dfcalc['pi'] = (1/self.fator_elegibilidade)*dfcalc['pi']
return dfcalc
@staticmethod
def calcula_lambda(dfcalc):
dfcalc["lambda"] = dfcalc['composicao']*dfcalc['pi']
return dfcalc
@staticmethod
def impacto_segmento(lambda_, segmento, escala):
escala = escala.get(segmento)
i0 = escala.get("i0")
i1 = escala.get("i1")
return (i1 - i0)*lambda_ + i0
def calcula_impacto_segmento(self, dfcalc):
dfcalc['impacto_segmento'] = dfcalc.apply(lambda x : self.impacto_segmento(x['lambda'], x["segmento"], self.escala_impacto), axis=1)
return dfcalc
@staticmethod
def calcula_risco(dfcalc):
dfcalc["risco"] = dfcalc["probabilidade"]*dfcalc["impacto_segmento"]
return dfcalc
@staticmethod
def d_score(risco_, score_limite):
return -score_limite*risco_ + score_limite
def calcula_dscore(self, dfcalc):
escala = self.calibracao_segmento
dfcalc["dscore"] = dfcalc.apply(lambda x : self.d_score(x["risco"], escala.get(x["segmento"])), axis=1)
return dfcalc
def get_metricas(self, dfcalc):
segmentos = ["credito", "processos", "infra", "outros"]
final = {}
for el in segmentos:
dt = dfcalc[dfcalc["segmento"]==el]
res = {}
if dt.empty:
res["num_ocorr"] = 0
res["comp"] = 0
res["risco"] = 0
final[el] = res
else:
res["num_ocorr"] = dt["ocorrencias"].iloc[0]
res["comp"] = dt['composicao'].iloc[0]
res["risco"] = dt["risco"].iloc[0]
final[el] = res
self.metricas = final
return
def update_dataset(self):
df_metricas = pd.DataFrame()
df_metricas["cnpj"] = [self.cnpj]
df_metricas["num_ocorr_cr"] = [self.metricas.get('credito').get('num_ocorr')]
df_metricas["num_ocorr_proc"] = [self.metricas.get('processos').get('num_ocorr')]
df_metricas["num_ocorr_infra"] = [self.metricas.get('infra').get('num_ocorr')]
df_metricas["num_ocorr_out"] = [self.metricas.get('outros').get('num_ocorr')]
df_metricas["comp_cr"] = [self.metricas.get('credito').get('comp')]
df_metricas["comp_proc"] = [self.metricas.get('processos').get('comp')]
df_metricas["comp_infra"] = [self.metricas.get('infra').get('comp')]
df_metricas["comp_out"] = [self.metricas.get('outros').get('comp')]
df_metricas["risco_cr"] = [self.metricas.get('credito').get('risco')]
df_metricas["risco_proc"] = [self.metricas.get('processos').get('risco')]
df_metricas["risco_infra"] = [self.metricas.get('infra').get('risco')]
df_metricas["risco_out"] = [self.metricas.get('outros').get('risco')]
df_metricas["idade"] = [self.idade_empresa]
df_metricas["dispersao_divida"] = [self.dispersao_divida]
df_metricas["outlier"] = [None]
df_metricas["data_ref"] = datetime.now().date()
engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@<EMAIL>:23306/varejo")
con = engine.connect()
con.execute("delete from outlier_detection where cnpj='{}'".format(self.cnpj))
df_metricas.to_sql('outlier_detection', schema='varejo', con=con, if_exists='append', index=False)
con.close()
print("DATASET UPDATED!")
return
def calcula(self):
self.score_mestre()
self.set_calibracao()
df = self.gera_dados()
self.calcula_dispersao_divida()
self.get_idade()
df = self.atribui_segmento(df)
dfp = self.calcula_probabilidade(df)
dfc = self.calcula_composicao(df)
dfcalc = dfp.merge(dfc, left_on="segmento", right_on="segmento", how='left')
dfcalc['fat_medio'] = self.faturamento_medio
dfcalc = self.calcula_pi(dfcalc)
dfcalc = self.calcula_lambda(dfcalc)
dfcalc = self.calcula_impacto_segmento(dfcalc)
dfcalc = self.calcula_risco(dfcalc)
dfcalc = self.calcula_dscore(dfcalc)
self.get_metricas(dfcalc)
self.update_dataset()
dscore = dfcalc['dscore'].mean()
lista_segmentos = dfcalc["segmento"].tolist()
lista_dscore = dfcalc["dscore"].tolist()
lista_dscore = [int(el) for el in lista_dscore]
res = dict(zip(lista_segmentos, lista_dscore))
res["lscore"] = int(self.lscore)
res['dscore'] = int(dscore)
res['score'] = int((self.lscore + dscore)/2)
return res, dfcalc
if __name__ == '__main__':
ds = DScoring(cnpj='26203839000110', produto = "tomatico")
scores, _ = ds.calcula()
print(scores)
# -
ds = DScoring(cnpj="28505748000165", produto="tomatico")
scores2, dfcalc2 = ds.calcula()
scores
scores2
dfcalc2
dfcalc
43608.56/19011.638333
# +
# compute the new score for operations in collection, using the revenue-history correction
# check the number of transactions for operations in collection
# check the debt score for operations in collection
# -
| Modelagem/acompanhamento_score/valida_score.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: number_reader
# language: python
# name: number_reader
# ---
# +
import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
import tensorflow_datasets as tfds
print("TensorFlow version:", tf.__version__)
# +
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# +
(x_train_scale, y_train_scale), (x_test_scale, y_test_scale) = tfds.as_numpy(tfds.load(
'mnist_corrupted/scale',
split=['train', 'test'],
batch_size=-1,
as_supervised=True,
))
x_train_scale, x_test_scale = x_train_scale / 255.0, x_test_scale / 255.0
x_train_scale = np.reshape(x_train_scale, (60000, 28, 28))
x_test_scale = np.reshape(x_test_scale, (10000, 28, 28))
# +
(x_train_translate, y_train_translate), (x_test_translate, y_test_translate) = tfds.as_numpy(tfds.load(
'mnist_corrupted/translate',
split=['train', 'test'],
batch_size=-1,
as_supervised=True,
))
x_train_translate, x_test_translate = x_train_translate / 255.0, x_test_translate / 255.0
x_train_translate = np.reshape(x_train_translate, (60000, 28, 28))
x_test_translate = np.reshape(x_test_translate, (10000, 28, 28))
# -
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
predictions = model(x_train[:1]).numpy()
predictions
tf.nn.softmax(predictions).numpy()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_fn(y_train[:1], predictions).numpy()
model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
#Train the model: x_train holds the training images as a NumPy array, y_train holds the labels
# +
model.evaluate(x_test, y_test, verbose=2)
#Validation test, on test set
# +
model.evaluate(x_test_scale, y_test_scale, verbose=2)
#Evaluate on scale mnist dataset before training
# +
model.fit(x_train_scale, y_train_scale, epochs=5)
#Train using scale mnist dataset
# +
model.evaluate(x_test_scale, y_test_scale, verbose=2)
#Evaluate on scale mnist dataset after training
# +
model.evaluate(x_test_translate, y_test_translate, verbose=2)
#Evaluate on translate mnist dataset before training
# +
model.fit(x_train_translate, y_train_translate, epochs=5)
#Train using translate mnist dataset
# +
model.evaluate(x_test_translate, y_test_translate, verbose=2)
#Evaluate on translate mnist dataset after training
# +
model.evaluate(x_test_scale, y_test_scale, verbose=2)
#Evaluate performance on scale mnist dataset after training on translate mnist dataset
#Significant accuracy loss. Hence, we will combine and shuffle the datasets
# +
combined_x_data = np.concatenate((x_train, x_train_scale, x_train_translate))
combined_y_data = np.concatenate((y_train, y_train_scale, y_train_translate))
def unison_shuffled_copies(a, b):
assert len(a) == len(b)
p = np.random.permutation(len(a))
return a[p], b[p]
combined_x_data, combined_y_data = unison_shuffled_copies(combined_x_data, combined_y_data)
# -
model.fit(combined_x_data, combined_y_data, epochs =5)
model.evaluate(x_test, y_test, verbose=2)
model.evaluate(x_test_translate, y_test_translate, verbose=2)
model.evaluate(x_test_scale, y_test_scale, verbose=2)
probability_model = tf.keras.Sequential([
model,
tf.keras.layers.Softmax()
])
probability_model(x_test[:5])
print(x_train[0:1].shape)
pred = model(x_train[0:1], training=False)
print(pred)
print(np.argmax(pred[0]))
#Testing the model on first x_train image
#reference for what the drawn numbers should look like
for x_train_i in range(20):
plt.imshow(x_train[x_train_i])
plt.show()
pred = model(x_train[[x_train_i]], training=False)
print('Guessed number: {}'.format(np.argmax(pred[0])))
# +
from PIL import ImageGrab
#use to check path
print(os.getcwd() + r'\images\hand_drawn.png')
import pathlib
drawn_image_path = pathlib.Path(os.getcwd() + r'\images\hand_drawn\user_drawing.png')
def screenshot(widget):
x=root.winfo_rootx()+widget.winfo_x()
y=root.winfo_rooty()+widget.winfo_y()
x1=x+widget.winfo_width()
y1=y+widget.winfo_height()
ImageGrab.grab((x,y,x1,y1)).resize((28,28)).save(drawn_image_path, format='PNG')
# +
from tkinter import *
from tkinter import ttk
class Sketchpad(Canvas):
def __init__(self, parent, **kwargs):
super().__init__(parent, **kwargs)
self.bind("<Button-1>", self.add_oval)
self.bind("<B1-Motion>", self.add_oval)
def add_oval(self, event):
radius = 15
self.create_oval(event.x-radius, event.y+radius, event.x+radius, event.y-radius, fill='black')
def clear(self):
self.delete('all')
root = Tk()
root.columnconfigure(0, weight=1)
root.rowconfigure(0, weight=1)
mainframe = ttk.Frame(root, width=1000, height=500)
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
sketch = Sketchpad(mainframe, width=500, height=500, bg='white', highlightthickness=0)
sketch.grid(column=0, row=0, sticky='W')
button_frame = ttk.Frame(mainframe, width=200, height=500, borderwidth=5, relief='raised')
button_frame.grid(column=1, row=0, sticky='E')
button_frame.grid_propagate(False)
clear_button = ttk.Button(button_frame, text='Clear Drawing', command=sketch.clear)
clear_button.grid(column=1, row=0, padx=20, pady=20)
submit_button = ttk.Button(button_frame, text='Submit', command=lambda: screenshot(sketch))
submit_button.grid(column=1, row=1, padx=20, pady=20)
root.mainloop()
# +
# import PIL
# drawn_image = PIL.Image.open(drawn_image_path)
# drawn_image.show()
# print(drawn_image)
# im.show()
# +
import PIL.ImageOps
drawn_image = tf.keras.utils.load_img(
drawn_image_path, target_size=(28, 28), color_mode='grayscale')
drawn_image = PIL.ImageOps.invert(drawn_image)
drawn_image = np.array(drawn_image) / 255.0
drawn_img_array = tf.keras.utils.img_to_array(drawn_image)
drawn_img_array = tf.expand_dims(drawn_img_array, 0)
predictions = model(drawn_img_array, training=False)
print('Guessed number is {} with {}% confidence'.format(np.argmax(predictions[0]), max(tf.nn.softmax(predictions).numpy()[0])*100))
# print(tf.nn.softmax(predictions).numpy())
plt.imshow(drawn_image, cmap=plt.cm.binary)
plt.show()
# -
| number_reader_tensorflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from rechunker import rechunk
import s3fs
#import xarray as xr
import zarr
import dask.array as dsa
import shutil
from dask.diagnostics import ProgressBar
import numpy as np
import xarray as xr
import os
import fsspec
import pandas as pd
# +
#from dask_gateway import Gateway
#from dask.distributed import Client
# +
#gateway = Gateway()
#cluster = gateway.new_cluster(worker_memory=8)
#cluster.adapt(minimum=1, maximum=60)
#client = Client(cluster)
#cluster
# +
file_aws = 'https://mur-sst.s3.us-west-2.amazonaws.com/zarr-v1'
file_aws2 = 'https://mur-sst.s3.us-west-2.amazonaws.com/zarr'
fout3 = 'data/test_subset_v2.zarr'
rechunk_location = 'noaa-oisstv02r1/newgroupv2.zarr'
tmp_location = 'noaa-oisstv02r1/tmpv2.zarr'
# create a subset of the data to test rechunker
ds_aws = xr.open_zarr(file_aws,consolidated=True) #read in data
ds_aws = ds_aws.isel(time=slice(0,20),lat=slice(6000,7000),lon=slice(19000,20000)) #subset to reasonable size
ds_aws.to_zarr(fout3,consolidated=True) #output data
xlat,xlon = -26,18
print('v1',ds_aws.analysed_sst.sel(lat=xlat,lon=xlon).isel(time=0).compute().data)
# -
# load the data
ds_zarr = zarr.open_consolidated(fout3, mode='r')
print(zarr.tree(ds_zarr))
# +
#rechunker plan
#s3 = s3fs.S3FileSystem(client_kwargs=dict(region_name='us-east-2'), default_fill_cache=False, skip_instance_cache=True)
s3 = s3fs.S3FileSystem(client_kwargs=dict(region_name='us-west-2'),anon=False, default_fill_cache=False, skip_instance_cache=True)
s3_rechunk_store = s3fs.S3Map(root=rechunk_location, create=True, s3=s3)
# Note this path must exist in S3 or will raise rechunker assertion, `assert temp_store_or_group is not None`
s3_tmp_store = s3fs.S3Map(root=tmp_location, create=True, s3=s3)
# +
target_chunks = {
'analysed_sst': {'time': 20, 'lat': 100, 'lon': 100},
'analysis_error': {'time': 20, 'lat': 100, 'lon': 100},
'mask': {'time': 20, 'lat': 100, 'lon': 100},
'sea_ice_fraction': {'time': 20, 'lat': 100, 'lon': 100},
'lat': None,
'lon': None,
'time': None
}
max_mem = '2GB'
array_plan = rechunk(ds_zarr, target_chunks, max_mem, s3_rechunk_store, s3_tmp_store)
array_plan
# -
## CHECK filename here: does this output have NaN or the fill_value?
ds_aws = xr.open_zarr(s3_rechunk_store,consolidated=True) #read in data
xlat,xlon = -26,18
print('v1',ds_aws.analysed_sst.sel(lat=xlat,lon=xlon).isel(time=0).compute().data)
| make_zarr/test_mur_rechunker.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# Risa wants to show how assembly bias can be degenerate with cosmology for a fixed HOD. I'm gonna make a plot that shows that.
from pearce.emulator import OriginalRecipe, ExtraCrispy, SpicyBuffalo
from pearce.mocks import cat_dict
import numpy as np
from os import path
# +
sim_hps= {'boxno':0,'realization':1, 'system':'sherlock', 'downsample_factor': 1e-2, 'particles':True}
cat2 = cat_dict['testbox'](**sim_hps)
# -
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
# %matplotlib inline
import seaborn as sns
sns.set()
# +
training_file = '/scratch/users/swmclau2/xi_gm_cosmo/PearceRedMagicXiGMCosmoFixedNd.hdf5'
#training_file = '/u/ki/swmclau2/des/PearceRedMagicXiCosmoFixedNdLowMsat.hdf5'
test_file = '/scratch/users/swmclau2/xi_gm_cosmo_test/PearceRedMagicXiGMCosmoFixedNdTest.hdf5'
#test_file = '/u/ki/swmclau2/des/PearceRedMagicXiCosmoFixedNdLowMsatTest.hdf5'
#test_file = '/u/ki/swmclau2/des/xi_cosmo_tester/PearceRedMagicXiCosmoFixedNd_test.hdf5'
em_method = 'gp'
split_method = 'random'
# -
a = 1.0
z = 1.0/a - 1.0
fixed_params = {'z':z}#, 'cosmo': 3}#, 'r':0.53882047}
np.random.seed(0)
emu = SpicyBuffalo(training_file, method = em_method, fixed_params=fixed_params,
custom_mean_function = 'linear', downsample_factor = 0.1)
# + active=""
# emu = OriginalRecipe(training_file, method = em_method, fixed_params=fixed_params,
# custom_mean_function = None, downsample_factor = 0.0001)
# +
#hod_param_names = ['logM0', 'sigma_logM', 'logM1', 'alpha']
emulation_point = [('logM0', 13.5), ('sigma_logM', 0.25),
('alpha', 0.9),('logM1', 13.5)]#, ('logMmin', 12.233)]
#em_params = {key:test_point_dict[key] for key in hod_param_names}
#em_params = dict(zip(hod_param_names, x_point))
em_params = dict(emulation_point)
em_params.update(fixed_params)
# -
r_bins = np.logspace(-1.1, 1.6, 19)
rpoints = (r_bins[1:]+r_bins[:-1])/2.0
boxno, realization = 0,1
# +
fixed_params = {}#'f_c':1.0}#,'logM1': 13.8 }# 'z':0.0}
cosmo_params = {'simname':'testbox', 'boxno': boxno, 'realization': realization, 'scale_factors':[1.0], 'system': 'sherlock'}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
# +
# get cosmo params
try:
del em_params['logMmin']
except KeyError:
pass
cpv = cat._get_cosmo_param_names_vals()
cosmo_param_dict = {key: val for key, val in zip(cpv[0], cpv[1])}
em_params.update( cosmo_param_dict)
# -
y_emu = 10**emu.emulate_wrt_r(em_params)[0]
# +
varied_param_name = 'ln10As'
bounds = emu.get_param_bounds(varied_param_name)
Nvp = 100
vp_vals = np.linspace(bounds[0], bounds[1], Nvp)
varied_param_xis = []
for val in vp_vals:
em_params[varied_param_name] = val
varied_param_xis.append(10**emu.emulate_wrt_r(em_params))
# -
np.save('xi_gm_vals_1.npy', np.array(varied_param_xis))
vp_palette = sns.cubehelix_palette(Nvp)
# +
fig = plt.figure(figsize = (10,6))
#for val in chain_vals:
# plt.plot(rpoints, val[0]-y_calc_jk, c= 'm', alpha = 0.1 )
for i, (val, pval) in enumerate(zip(varied_param_xis, vp_vals)):
plt.plot(rpoints, val[0], color = vp_palette[i], alpha = 0.05)
#plt.plot(rpoints, MAP_xi, label = 'MAP')
#plt.errorbar(rpoints, y_calc_jk, yerr= y_err, c = 'k', label = 'Truth')
#plt.plot(rpoints, y_calc_mean , label = 'Mean')
#plt.plot(rpoints, y_emu, c = 'g',lw =2, label = 'Emu at Truth')
#plt.xscale('log')
plt.loglog()
plt.title(r'Varying $ \log(10^{10} A_s) =$ (%.3f, %.3f)'%bounds)#%varied_param_name)
plt.legend(loc = 'best')
plt.xlabel('r [Mpc]')
plt.ylabel(r'$\xi_{gm}(r)$')
plt.show()
# -
cat.load(1.0, HOD = 'hsabRedMagic', downsample_factor = 1e-2, particles = True)
hod_params = dict(emulation_point)
hod_params['f_c'] = 1.0
from scipy.optimize import minimize_scalar
def add_logMmin(hod_params, cat):
"""
In the fixed number density case, find the logMmin value that will match the nd given hod_params
    :param hod_params:
The other parameters besides logMmin
:param cat:
the catalog in question
:return:
None. hod_params will have logMmin added to it.
"""
hod_params['logMmin'] = 13.0 #initial guess
#cat.populate(hod_params) #may be overkill, but will ensure params are written everywhere
def func(logMmin, hod_params):
hod_params.update({'logMmin':logMmin})
return (cat.calc_analytic_nd(hod_params) - 1e-4)**2
res = minimize_scalar(func, bounds = (12, 16), args = (hod_params,), options = {'maxiter':100}, method = 'Bounded')
    # assuming this doesn't fail
hod_params['logMmin'] = res.x
add_logMmin(hod_params, cat)
hod_params['mean_occupation_centrals_assembias_param1'] = 0.0
hod_params['mean_occupation_satellites_assembias_param1'] = 0.0
hod_params
cat.populate(hod_params)
mean_xi = cat.calc_xi_gm(r_bins)
# +
fig = plt.figure(figsize = (10,6))
#for val in chain_vals:
# plt.plot(rpoints, val[0]-y_calc_jk, c= 'm', alpha = 0.1 )
for i, (val, pval) in enumerate(zip(varied_param_xis, vp_vals)):
plt.plot(rpoints, val[0]/mean_xi, color = vp_palette[i], alpha = 0.25)
#plt.plot(rpoints, MAP_xi, label = 'MAP')
#plt.errorbar(rpoints, y_calc_jk, yerr= y_err, c = 'k', label = 'Truth')
#plt.plot(rpoints, y_calc_mean , label = 'Mean')
#plt.plot(rpoints, y_emu, c = 'g',lw =2, label = 'Emu at Truth')
plt.xscale('log')
#plt.loglog()
plt.title(r'Varying $ \log(10^{10} A_s) =$ (%.3f, %.3f)'%bounds)#%varied_param_name)
plt.legend(loc = 'best')
plt.xlabel('r [Mpc]')
plt.ylabel(r'$\xi_{gm}(r)/\bar{\xi}_{gm}(r)$')
plt.ylim([0.5, 2.0])
plt.show()
# +
varied_param_name = 'mean_occupation_satellites_assembias_param1'
bounds = (-1, 1)
Nvp = 11
vp_vals = np.linspace(bounds[0], bounds[1], Nvp)
varied_param_xis = []
for val in vp_vals:
hod_params[varied_param_name] = val
cat.populate(hod_params)
varied_param_xis.append(cat.calc_xi_gm(r_bins))
# -
vp_palette = sns.cubehelix_palette(Nvp, start = 50)
np.save('xi_gm_vals_2.npy', varied_param_xis)
# +
fig = plt.figure(figsize = (10,6))
#for val in chain_vals:
# plt.plot(rpoints, val[0]-y_calc_jk, c= 'm', alpha = 0.1 )
for i, (val, pval) in enumerate(zip(varied_param_xis, vp_vals)):
plt.plot(rpoints, val, color = vp_palette[i], alpha = 1.0)
plt.plot(rpoints, mean_xi)
#plt.plot(rpoints, MAP_xi, label = 'MAP')
#plt.errorbar(rpoints, y_calc_jk, yerr= y_err, c = 'k', label = 'Truth')
#plt.plot(rpoints, y_calc_mean , label = 'Mean')
#plt.plot(rpoints, y_emu, c = 'g',lw =2, label = 'Emu at Truth')
#plt.xscale('log')
plt.loglog()
plt.title(r'Varying $\mathcal{A}_{sats} = $ (-1, 1)')#%varied_param_name)
plt.legend(loc = 'best')
plt.xlabel('r [Mpc]')
plt.ylabel(r'$\xi_{gm}(r)$')
plt.show()
# +
fig = plt.figure(figsize = (10,6))
#for val in chain_vals:
# plt.plot(rpoints, val[0]-y_calc_jk, c= 'm', alpha = 0.1 )
for i, (val, pval) in enumerate(zip(varied_param_xis, vp_vals)):
plt.plot(rpoints, val/mean_xi, color = vp_palette[i], alpha = 1.0)
#plt.plot(rpoints, MAP_xi, label = 'MAP')
#plt.errorbar(rpoints, y_calc_jk, yerr= y_err, c = 'k', label = 'Truth')
#plt.plot(rpoints, y_calc_mean , label = 'Mean')
#plt.plot(rpoints, y_emu, c = 'g',lw =2, label = 'Emu at Truth')
plt.xscale('log')
#plt.loglog()
plt.title(r'Varying $\mathcal{A}_{sats} = $ (-1, 1)')#%varied_param_name)
plt.legend(loc = 'best')
plt.xlabel('r [Mpc]')
plt.ylabel(r'$\xi_{gm}(r)/\bar{\xi}_{gm}(r)$')
plt.ylim([0.5, 2.0])
plt.show()
# -
| notebooks/Compare AB and Cosmology Emu Xigm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
url = 'https://cbdb.fas.harvard.edu/cbdbapi/person.php?name=%E6%9C%B1%E7%86%B9&o=json'
my_headers = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'zh-CN,zh;q=0.9',
'Host': 'cbdb.fas.harvard.edu',
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36'
}
def getJSON(url, headers):
""" Get JSON from the destination URL
@ param url: destination url, str
@ param headers: request headers, dict
@ return json: result, json
"""
res = requests.get(url, headers=headers)
    res.raise_for_status()  # raise an exception for HTTP errors
res.encoding = 'utf-8'
json = res.json()
return json
json = getJSON(url, headers=my_headers)
# Basic information
json['Package']['PersonAuthority']['PersonInfo']['Person']['BasicInfo']
# Native place (address) information
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonAddresses']['Address']
# Aliases
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonAliases']['Alias']
# Civil service examination (entry) information
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonEntryInfo']['Entry']
# Kinship information
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonKinshipInfo']['Kinship']
# Social associations
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonSocialAssociation']['Association'][0]
# Social status
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonSocialStatus']['SocialStatus']
# Writings
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonTexts']['Text']
# Related sources
json['Package']['PersonAuthority']['PersonInfo']['Person']['PersonSources']['Source']
| Blog/01_CBDB/json.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py37] *
# language: python
# name: conda-env-py37-py
# ---
#Use AdaBoost to generate the successive weight vectors u_t
import numpy as np
from sklearn.linear_model import LinearRegression #引入线性回归模型
from sklearn.linear_model import LogisticRegression #引入逻辑回归模型
f=np.loadtxt("hw2_adaboost_train.txt")
X_train=f[:,0:2]
y_train=f[:,2]
f=np.loadtxt("hw2_adaboost_test.txt")
X_test=f[:,0:2]
y_test=f[:,2]
# +
#For a one-off linear regression we can simply treat it as logistic regression
#The resulting classification performance turns out to be quite poor
model=LogisticRegression()
model.fit(X_train,y_train)
w=model.coef_.reshape(X_train.shape[1],1)
b=model.intercept_
print('w=',w)
print('b=',b)
def error(y,y_predict):
return sum(y!=y_predict)/y.shape[0]
y_predict=model.predict(X_test)
print(y_test[0:50])
print(y_predict[0:50])
# -
#Use adaptive boosting
u1=1/X_train.shape[0]
| hw6/linear aggregation and adaptive boosting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# Please **submit this Jupyter notebook through Canvas** no later than **Monday November 12, 12:59**, before the start of the lecture.
#
# Homework is in **groups of two**, and you are expected to hand in original work. Work that is copied from another group will not be accepted.
# # Exercise 0
# Write down the names + student ID of the people in your group.
# <NAME> (10639918)
#
# <NAME> (10759697)
# -----
# # Exercise 1
# ## (a)
# Let $A$ be the matrix $\begin{bmatrix} 1 & -1 & \alpha \\ 2 & 2 & 1 \\ 0 & \alpha & -3/2 \end{bmatrix}$. For which values of $\alpha$ is $A$ singular?
# $\alpha = -3/2$ or $\alpha = 2$, because for those values of $\alpha$ we have $\det(A) = 2\alpha^2 - \alpha - 6 = 0$, so the matrix is singular.
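# A quick numeric spot-check of this claim (a sketch; `det_A` is a helper name introduced here, computed by cofactor expansion along the first row):

```python
def det_A(alpha):
    # cofactor expansion along the first row of
    # [[1, -1, alpha], [2, 2, 1], [0, alpha, -3/2]]
    return (1 * (2 * -1.5 - 1 * alpha)
            + 1 * (2 * -1.5 - 1 * 0)
            + alpha * (2 * alpha - 2 * 0))

# det(A) simplifies to 2*alpha**2 - alpha - 6, which vanishes at alpha = 2 and alpha = -3/2
print(det_A(2.0), det_A(-1.5))  # -> 0.0 0.0
```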
# ## (b)
# Consider the following linear system of equations:
# $$
# \begin{align*}
# 2x + y + z & = 3 \\
# 2x - y + 3z &= 5 \\
# -2x + \alpha y + 3z &= 1.
# \end{align*}
# $$
# For what values of $\alpha$ does this system have an infinite number of solutions?
# The system has infinitely many solutions when the third equation is linearly dependent on the first two. This happens for $\alpha = -5$: the solution set becomes $(2 - z, z - 1, z)$, one solution for every real value of $z$.
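# One way to check the value $\alpha = -5$ numerically (a sketch; it verifies that the third row of the augmented matrix is then a linear combination of the first two, so the system has rank < 3 and therefore infinitely many solutions):

```python
# rows of the augmented matrix [coefficients | right-hand side]
r1 = (2.0, 1.0, 1.0, 3.0)     # 2x +  y +  z = 3
r2 = (2.0, -1.0, 3.0, 5.0)    # 2x -  y + 3z = 5

def row3(alpha):
    return (-2.0, alpha, 3.0, 1.0)  # -2x + alpha*y + 3z = 1

# for alpha = -5 the third row equals -3*r1 + 2*r2
combo = tuple(-3 * a + 2 * b for a, b in zip(r1, r2))
print(combo == row3(-5.0))  # -> True
```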
# ## (c)
# Denote the columns of an $n \times n$ matrix $A$ as $A_k$ for $k=1,\ldots,n$. We define the function $||A||_* = \max_k ||A_k||_2$. Show that $||A||_*$ is a norm, in that it satisfies the first three properties of a matrix norm (cf. §2.3.2).
# The first three properties:
# 1. $||A|| > 0$ if $A \neq 0$
#
# If $A \neq 0$, at least one entry of $A$ is nonzero. The 2-norm of the column containing that entry is the square root of a sum of squared absolute values, at least one of which is positive, so that column norm is positive. Hence the maximum over all columns is positive as well.
#
# 2. $||\gamma A|| = |\gamma| \cdot ||A||$ for any scalar $\gamma$.
#
# Multiplying $A$ by a scalar $\gamma$ multiplies every entry of every column by $\gamma$, so each column 2-norm is multiplied by $|\gamma|$ (the absolute value appears because the 2-norm squares each entry before summing).
#
# The maximum over the columns is therefore also multiplied by $|\gamma|$, giving $||\gamma A||_* = |\gamma| \, ||A||_*$.
#
# 3. $||A + B|| \le ||A|| + ||B||$
#
# For each column $k$, the triangle inequality for the vector 2-norm gives $||(A+B)_k||_2 \le ||A_k||_2 + ||B_k||_2 \le ||A||_* + ||B||_*$. Taking the maximum over $k$ yields $||A + B||_* \le ||A||_* + ||B||_*$, with equality possible (for example when $B = A$).
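# These three properties can be spot-checked numerically (a sketch; `norm_star` is a helper introduced here for the column-wise norm, applied to two concrete 2x2 matrices):

```python
import math

def norm_star(A):
    # ||A||_* = maximum over columns k of the column 2-norm ||A_k||_2
    n = len(A)
    return max(math.sqrt(sum(A[i][k] ** 2 for i in range(n))) for k in range(n))

A = [[1.0, -2.0], [3.0, 4.0]]
B = [[0.0, 5.0], [-1.0, 2.0]]
gamma = -2.5

print(norm_star(A) > 0)  # property 1: positivity
scaled = [[gamma * a for a in row] for row in A]
print(abs(norm_star(scaled) - abs(gamma) * norm_star(A)) < 1e-12)  # property 2: homogeneity
summed = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
print(norm_star(summed) <= norm_star(A) + norm_star(B))  # property 3: triangle inequality
```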
# ----
# # Exercise 2
# For solving linear systems such as $Ax = b$, it is unnecessary (and often unstable) to compute the inverse $A^{-1}$. Nonetheless, there can be situations where it is useful to compute $A^{-1}$ explicitly. One way to do so is by using the LU-decomposition of $A$.
# ## (a)
# Write an algorithm to compute $A^{-1}$ for a non-singular matrix $A$ using its LU-decomposition. You can use `scipy.linalg.lu` (which returns an LU-decomposition with _partial pivoting_, i.e., with a permutation matrix $P$) and the other `scipy.linalg.lu_*` functions, but not `scipy.linalg.inv` (or other methods for computing matrix inverses directly).
# +
import scipy
from scipy import linalg
import numpy as np
def getInverse(A):
# get LU factorisation of matrix A
lu, piv = scipy.linalg.lu_factor(A)
# define identity matrix
I = np.identity(len(A[0]))
# calculate inverse using L, U and the identity matrix
inv = scipy.linalg.lu_solve((lu, piv), I)
return inv
# example matrix
print(getInverse([[2, 5, 8, 7], [5, 2, 2, 8], [7, 5, 6, 6], [5, 4, 4, 8]]))
# -
# ## (b)
# What is the computational complexity of your algorithm, given that the input matrix has size $n \times n$?
# LU decomposition is $O(n^{3})$. Back and forward substitution cost $O(n^{2})$ per right-hand side, and we solve for the $n$ columns of the identity, so that step is $O(n^{3})$ as well. The total computational complexity is therefore $O(n^{3})$.
# ## (c)
# Apply your Python code to compute the inverse of the Hilbert matrix $H_n$ for $n=1, \ldots, 12$ (see https://en.wikipedia.org/wiki/Hilbert_matrix) -- you can use `scipy.linalg.hilbert`. This matrix is _very_ ill-conditioned, so computing its inverse is very hard for large $n$.
#
# Compare the inverse with the "true" inverse given by `scipy.linalg.invhilbert`. Output a (`plt.semilogy`) graph showing how the $\infty$-norm of their difference progresses for $n$.
# +
import matplotlib.pyplot as plt
from scipy.linalg import hilbert, invhilbert
n = 1
ns = []
norms = []
for i in list(range(0, 100)):
# create Hilbert matrix of size n
A = hilbert(n)
# calculate different inverses
ourinv = getInverse(A)
hilbertinv = invhilbert(n)
# calculate difference between inverse matrices
difference = abs(ourinv - hilbertinv)
# calculate infinity-norm of difference matrix
differencenorm = np.linalg.norm(difference, ord=np.inf)
ns.append(n)
norms.append(differencenorm)
n += 1
fig, ax = plt.subplots()
ax.semilogy(ns, norms)
ax.set_xlabel("n")
ax.set_ylabel("Infinity norm of difference matrix our inverse - \"true\" inverse")
plt.show()
# -
# ## (d)
# It is known that the $2$-condition number $cond_2(H_n)$ of the Hilbert matrix grows like $\mathcal O\left(\frac{(1+\sqrt{2})^{4n}}{\sqrt{n}}\right)$. Does the $\infty$-condition number (defined in Example 2.5) of $H_n$ grow in a similar way?
# For the matrix inverse, try both your own matrix inversion routine, and `scipy.linalg.invhilbert`. Output a (`plt.semilogy`) graph showing your results.
# +
import math
n = 1
ns = []
ourconds = []
hilbertconds = []
expectedgrowth = []
for i in list(range(0, 100)):
# create Hilbert matrix of size n
A = hilbert(n)
# calculate different inverses
ourinv = getInverse(A)
hilbertinv = invhilbert(n)
# calculate norms
matrixnorm = np.linalg.norm(A, ord=np.inf)
ournorm = np.linalg.norm(ourinv, ord=np.inf)
hilbertnorm = np.linalg.norm(hilbertinv, ord=np.inf)
ns.append(n)
# calculate infinity-condition numbers for both inverses
ourcond = matrixnorm * ournorm
ourconds.append(ourcond)
hilbertcond = matrixnorm * hilbertnorm
hilbertconds.append(hilbertcond)
# define expected growth for 2-condition number
    growth2cond = (1 + math.sqrt(2))**(4*n) / math.sqrt(n)
expectedgrowth.append(growth2cond)
n += 1
fig, ax = plt.subplots()
ax.semilogy(ns, ourconds, label = "Infinity-condition number (our inverse)")
ax.semilogy(ns, hilbertconds, label = "Infinity-condition number (Hilbert inverse)")
ax.semilogy(ns, expectedgrowth, label = "Growth 2-condition")
ax.set_xlabel("n")
ax.set_ylabel("Condition number")
ax.legend(fontsize = "medium")
plt.show()
| ComputationalScience/Numerical/.ipynb_checkpoints/deFeijter_Ridderikhoff_homework2-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
img = cv2.imread("morgan.jpg",1)
print(img)
print(img.shape)
print(type(img))
# +
import cv2
#coloured image
cv2.imshow("image",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
import cv2
img = cv2.imread("morgan.jpg",1)
#resizing of the image is performed (one way)
resize_img = cv2.resize(img , (600,600))
cv2.imshow("image",resize_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
import cv2
img = cv2.imread("morgan.jpg",1)
#resizing of the image is performed (other way)
resize_img = cv2.resize(img , (int(img.shape[1]/2),int(img.shape[0]/2)))
cv2.imshow("image",resize_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
#import opencv module
import cv2
#Face cascade classifier
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
#Eye cascade classifier
eyes_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
img = cv2.imread("morgan.jpg",1)
#converting the image to black and white (gray scale)
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#searching for the coordinates of the image
faces = face_cascade.detectMultiScale(gray_img, scaleFactor=1.05, minNeighbors=5)
print(type(faces))
print(faces)
for x,y,w,h in faces:
img = cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0),3)
roi_gray = gray_img[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eyes_cascade.detectMultiScale(roi_gray, 1.1, 3)
for (ex, ey, ew, eh) in eyes:
cv2.rectangle(roi_color, (ex,ey), (ex+ew, ey+eh), (0, 255, 0), 2)
resize_img = cv2.resize(img, (int(img.shape[1]/2), int(img.shape[0]/2)))
cv2.imshow("Gray",resize_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
import cv2
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img1 = cv2.imread("morgan.jpg",1)
img2 = cv2.imread("elon.jpg",1)
gray_img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray_img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
faces1 = face_cascade.detectMultiScale(gray_img1, scaleFactor=1.05, minNeighbors=5)
faces2 = face_cascade.detectMultiScale(gray_img2, scaleFactor=1.05, minNeighbors=5)
print(type(faces1))
print(faces1)
print(type(faces2))
print(faces2)
for x,y,w,h in faces1:
img1 = cv2.rectangle(img1, (x,y), (x+w,y+h), (0,255,0),3)
for x,y,w,h in faces2:
img2 = cv2.rectangle(img2, (x,y), (x+w,y+h), (0,255,0),3)
resize_img1 = cv2.resize(img1, (int(img1.shape[1]/2), int(img1.shape[0]/2)))
resize_img2 = cv2.resize(img2, (int(img2.shape[1]), int(img2.shape[0])))
cv2.imshow("Gray1",resize_img1)
cv2.imshow("Gray2",resize_img2)
cv2.waitKey(0)
cv2.waitKey(0)
cv2.destroyAllWindows()
# +
import cv2
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("ian.jpg",1)
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray_img, scaleFactor=1.05,minNeighbors = 5)
print(type(faces))
print(faces)
for x,y,w,h in faces:
img = cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0),3)
resize_img = cv2.resize(img, (int(img.shape[1]*2), int(img.shape[0]*2)))
cv2.imshow("Ian",resize_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
# -
| Face Detection through Image.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="vQKOfaCj_LTM"
# # **Implement Union, Intersection, complement and difference operations on Fuzzy sets.**
#
# + [markdown] id="K4L0xN0Y_kY1"
# ###**Fuzzy-Set Union Operation**
# + id="X9LByFwUzQ4r" colab={"base_uri": "https://localhost:8080/"} outputId="2ed7bda3-a8e8-4214-f892-148b7206f432"
# original program
A = {"X1": 0.6, "X2": 0.2, "X3":1, "X4":0.4}
print("Set A: ", A)
B = {"X1": 0.1, "X2": 0.8, "X3":0, "X4":0.9}
print("Set B: ",B)
C = {}
for key in A:
C[key] = (max(A[key],B[key]))
print("Union of A and B: ", C)
# + [markdown] id="yvg7dvj5ANAv"
# ###**Fuzzy-Set Intersection Operation**
# + colab={"base_uri": "https://localhost:8080/"} id="pHm-OrGCASKR" outputId="95176614-95c3-423c-97d3-3e7901a22c89"
A = {"X1": 0.6, "X2": 0.2, "X3":1, "X4":0.4}
print("Set A: ", A)
B = {"X1": 0.1, "X2": 0.8, "X3":0, "X4":0.9}
print("Set B: ",B)
C = {}
for key in A:
C[key] = (min(A[key],B[key]))
print("Intersection of A and B: ", C)
# + [markdown] id="IPq0cm-oAVWu"
# ###**Fuzzy-Set Complement Operation**
# + colab={"base_uri": "https://localhost:8080/"} id="YLSjNdMeAYfC" outputId="4fca1c39-2b17-4125-856c-117f3ba5679d"
A = {"X1": 0.6, "X2": 0.2, "X3":1, "X4":0.4}
print("Set A: ", A)
B = {"X1": 0.1, "X2": 0.8, "X3":0, "X4":0.9}
print("Set B: ",B)
C = {}
D = {}
for key in A:
C[key] = 1 - (A[key])
D[key] = 1 - (B[key])
print("Complement of A: ", C)
print("Complement of B: ", D)
# + [markdown] id="11uKBkhnAg1x"
# ###**Fuzzy-Set Difference Operation**
# + colab={"base_uri": "https://localhost:8080/"} id="s5CkzaJKAhne" outputId="117a5104-3add-4a9c-da44-2fcf8872be78"
A = {"X1": 0.6, "X2": 0.2, "X3":1, "X4":0.4}
print("Set A: ", A)
B = {"X1": 0.1, "X2": 0.8, "X3":0, "X4":0.9}
print("Set B: ",B)
C = {}
for key in A:
C[key] = (min(A[key],1-B[key]))
print("Difference of A and B: ", C)
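# The four element-wise operations above can also be collected into small helper functions (a sketch; the `fuzzy_*` names are introduced here, assuming both sets are dicts over the same keys):

```python
def fuzzy_union(A, B):
    return {k: max(A[k], B[k]) for k in A}

def fuzzy_intersection(A, B):
    return {k: min(A[k], B[k]) for k in A}

def fuzzy_complement(A):
    return {k: 1 - A[k] for k in A}

def fuzzy_difference(A, B):
    # A \ B is A intersected with the complement of B
    return {k: min(A[k], 1 - B[k]) for k in A}

A = {"X1": 0.6, "X2": 0.2, "X3": 1, "X4": 0.4}
B = {"X1": 0.1, "X2": 0.8, "X3": 0, "X4": 0.9}
print(fuzzy_difference(A, B))
```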
| EXPT_02_Implement_Union,_Intersection,_complement_and_difference_operations_on_Fuzzy_sets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# +
plants = ["index","Nuclear", "Oil", "Coal", "Coal + CCS", "IGCC", "IGCC + CCS", "CCGT", "CCGT + CCS", "Solid Biomass", 'S Biomass CCS', "BIGCC", "BIGCC + CCS", "Biogas", "Biogas + CCS", "Tidal", "Large Hydro", "Onshore", "Offshore", 'Solar PV', "CSP", "Geothermal", "Wave", "Fuel Cells", "CHP"]
results = pd.read_csv("/Users/alexanderkell/Documents/PhD/Projects/17-ftt-power-reinforcement/data/outputs/electricity_generated.csv", names=list(range(48)))
# results = pd.read_csv("/Users/alexanderkell/Documents/PhD/Projects/17-ftt-power-reinforcement/data/outputs/capacity_result.csv", names=list(range(48)))
uk_capacity = results.iloc[:,0:24].reset_index()
ireland_capacity = results.iloc[:,24:].reset_index()
uk_capacity.columns = plants
ireland_capacity.columns = plants
uk_capacity
# +
uk_capacity_long = pd.melt(uk_capacity, id_vars="index")
uk_capacity_long['index'] = 2013 + uk_capacity_long['index']/4
uk_capacity_long
# -
sns.lineplot(data=uk_capacity_long, x="index", y="value", hue="variable")
plt.ylabel("Capacity (GWh/y)")
plt.xlabel("Time steps")
plt.title("UK Electricity Generation")
sns.lineplot(data=uk_capacity_long[uk_capacity_long.value>10], x="index", y="value", hue="variable")
plt.ylabel("Capacity (GWh/y)")
plt.xlabel("Time steps")
plt.title("UK Electricity Generation")
# +
ireland_capacity_long = pd.melt(ireland_capacity, id_vars="index")
ireland_capacity_long['index'] = 2013 + ireland_capacity_long['index']/4
ireland_capacity_long
# -
sns.lineplot(data=ireland_capacity_long, x="index", y="value", hue="variable")
plt.ylabel("Capacity (GWh/y)")
plt.xlabel("Time steps")
plt.title("Ireland Electricity Generation")
sns.lineplot(data=ireland_capacity_long[(ireland_capacity_long['index']>60)& (ireland_capacity_long['value']>2)], x="index", y="value", hue="variable")
plt.ylabel("Capacity (GWh/y)")
plt.xlabel("Time steps")
plt.title("Ireland Electricity Generation")
sns.lineplot(data=ireland_capacity_long[ireland_capacity_long.value>10], x="index", y="value", hue="variable")
# ## Both together
# +
uk_capacity_long
ireland_capacity_long
both_capacity_long = uk_capacity_long.merge(ireland_capacity_long, on=["index",'variable'])
both_capacity_long['value_both'] = both_capacity_long['value_x'] + both_capacity_long['value_y']
# both_capacity_long
sns.lineplot(data=both_capacity_long, x="index", y="value_both", hue="variable")
plt.legend(loc=(1.04,0))
plt.axvline(x=2017, color="black")
both_capacity_long.to_csv("/Users/alexanderkell/Documents/PhD/Projects/17-ftt-power-reinforcement/notebooks/data/processed/both_capacity_long.csv")
# +
sns.lineplot(data=both_capacity_long[both_capacity_long['index']>2017], x="index", y="value_both", hue="variable")
plt.legend(loc=(1.04,0))
# +
uk_capacity_long
ireland_capacity_long
both_capacity_long = uk_capacity_long.merge(ireland_capacity_long, on=["index",'variable'])
both_capacity_long['value_both'] = both_capacity_long['value_x'] + both_capacity_long['value_y']
both_capacity_long
sns.lineplot(data=both_capacity_long[(both_capacity_long['index']>60) & (both_capacity_long['value_both']>1)], x="index", y="value_both", hue="variable")
plt.legend(loc=(1.04,0))
# -
plt.plot(both_capacity_long[both_capacity_long['index']>2017].groupby("index").value_both.sum())
sns.lineplot(data=both_capacity_long[(both_capacity_long['index']<200) & (both_capacity_long['value_both']>2)], x="index", y="value_both", hue="variable")
plt.legend(loc=(1.04,0))
# # Market Share
# +
def get_data(filename):
plants = ["index","Nuclear", "Oil", "Coal", "Coal + CCS", "IGCC", "IGCC + CCS", "CCGT", "CCGT + CCS", "Solid Biomass", 'S Biomass CCS', "BIGCC", "BIGCC + CCS", "Biogas", "Biogas + CCS", "Tidal", "Large Hydro", "Onshore", "Offshore", 'Solar PV', "CSP", "Geothermal", "Wave", "Fuel Cells", "CHP"]
results = pd.read_csv("/Users/alexanderkell/Documents/PhD/Projects/17-ftt-power-reinforcement/data/outputs/{}.csv".format(filename), names=list(range(48)))
uk_result = results.iloc[:,0:24].reset_index()
ireland_result = results.iloc[:,24:].reset_index()
uk_result.columns = plants
ireland_result.columns = plants
ireland_capacity_long = pd.melt(ireland_result, id_vars="index")
uk_capacity_long = pd.melt(uk_result, id_vars="index")
return (uk_capacity_long, ireland_capacity_long)
# -
uk_market_share, ireland_market_share = get_data("market_share")
sns.lineplot(data=uk_market_share, x="index", y="value", hue="variable")
sns.lineplot(data=ireland_market_share, x="index", y="value", hue="variable")
| notebooks/2.0-ajmk-visualising-results.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Comparing random forests and the multi-output meta estimator
#
#
# An example to compare multi-output regression with random forest and
# the `multioutput.MultiOutputRegressor <multiclass>` meta-estimator.
#
# This example illustrates the use of the
# `multioutput.MultiOutputRegressor <multiclass>` meta-estimator
# to perform multi-output regression. A random forest regressor is used,
# which supports multi-output regression natively, so the results can be
# compared.
#
# The random forest regressor will only ever predict values within the
# range of observations or closer to zero for each of the targets. As a
# result the predictions are biased towards the centre of the circle.
#
# Using a single underlying feature the model learns both the
# x and y coordinate as output.
#
# +
print(__doc__)
# Author: <NAME> <<EMAIL>>
#
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(200 * rng.rand(600, 1) - 100, axis=0)
y = np.array([np.pi * np.sin(X).ravel(), np.pi * np.cos(X).ravel()]).T
y += (0.5 - rng.rand(*y.shape))
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=400, test_size=200, random_state=4)
max_depth = 30
regr_multirf = MultiOutputRegressor(RandomForestRegressor(n_estimators=100,
max_depth=max_depth,
random_state=0))
regr_multirf.fit(X_train, y_train)
regr_rf = RandomForestRegressor(n_estimators=100, max_depth=max_depth,
random_state=2)
regr_rf.fit(X_train, y_train)
# Predict on new data
y_multirf = regr_multirf.predict(X_test)
y_rf = regr_rf.predict(X_test)
# Plot the results
plt.figure()
s = 50
a = 0.4
plt.scatter(y_test[:, 0], y_test[:, 1], edgecolor='k',
c="navy", s=s, marker="s", alpha=a, label="Data")
plt.scatter(y_multirf[:, 0], y_multirf[:, 1], edgecolor='k',
c="cornflowerblue", s=s, alpha=a,
label="Multi RF score=%.2f" % regr_multirf.score(X_test, y_test))
plt.scatter(y_rf[:, 0], y_rf[:, 1], edgecolor='k',
c="c", s=s, marker="^", alpha=a,
label="RF score=%.2f" % regr_rf.score(X_test, y_test))
plt.xlim([-6, 6])
plt.ylim([-6, 6])
plt.xlabel("target 1")
plt.ylabel("target 2")
plt.title("Comparing random forests and the multi-output meta estimator")
plt.legend()
plt.show()
| sklearn/sklearn learning/demonstration/auto_examples_jupyter/ensemble/plot_random_forest_regression_multioutput.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Student Alcohol Consumption
# ### Introduction:
#
# This time you will download a dataset from the UCI.
#
# ### Step 1. Import the necessary libraries
import pandas as pd
import numpy as np
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/04_Apply/Students_Alcohol_Consumption/student-mat.csv).
# ### Step 3. Assign it to a variable called df.
df = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/04_Apply/Students_Alcohol_Consumption/student-mat.csv')
df.head()
# ### Step 4. For the purpose of this exercise slice the dataframe from 'school' until the 'guardian' column
stud_alcohol = df.loc[:, 'school':'guardian']
stud_alcohol.head()
# ### Step 5. Create a lambda function that will capitalize strings.
cap_str = lambda x: x.capitalize()
# ### Step 6. Capitalize both Mjob and Fjob
stud_alcohol['Mjob'].apply(cap_str)
stud_alcohol["Fjob"].apply(cap_str)
# ### Step 7. Print the last elements of the data set.
stud_alcohol.tail()
# ### Step 8. Did you notice the original dataframe is still lowercase? Why is that? Fix it and capitalize Mjob and Fjob.
stud_alcohol['Mjob'] = stud_alcohol['Mjob'].apply(cap_str)
stud_alcohol['Fjob'] = stud_alcohol['Fjob'].apply(cap_str)
stud_alcohol.head()
# ### Step 9. Create a function called majority that returns a boolean value to a new column called legal_drinker (Consider majority as older than 17 years old)
majority = lambda x: x >= 18
stud_alcohol['legal_drinker'] = majority(stud_alcohol['age'])
stud_alcohol
# ### Step 10. Multiply every number of the dataset by 10.
# ##### I know this makes no sense, don't forget it is just an exercise
def times10(x):
    # applymap passes numpy scalars, so test for numpy integer types as well as int
    if isinstance(x, (int, np.integer)):
        return 10 * x
    return x
stud_alcohol.applymap(times10).head(10)
| 04_Apply/Students_Alcohol_Consumption/Exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from statsmodels.tsa.api import SimpleExpSmoothing
import statsmodels
import pickle
import matplotlib.pyplot as plt
import time
import warnings
import itertools
warnings.filterwarnings("ignore")
pd.options.display.max_rows = 9999
pd.options.display.max_columns = 100
def show_ts(ts, forecast=None, forecast2 = None, title="Forecast Plot"):
ax = ts.plot(label = "Observed", figsize=(10,3))
if not (forecast is None):
forecast.plot(ax=ax, label='Forecast')
plt.legend()
if not (forecast2 is None):
forecast2.plot(ax=ax, label='Forecast')
plt.legend()
ax.set_xlabel('Date')
ax.set_ylabel('Messages/Second')
plt.title(title)
plt.show()
durations_df = pd.read_csv("results/durations.csv")
# +
data_names = ["avazu","IoT","wiki_de","wiki_en","horton","retailrocket","taxi", "alibaba", "google"]
sampling_rates = ["1h","15min","5min"]
multipliers = [1,4,12]
forecast_horizons = [12,4,1]
train_test_split = 0.8
for data_name in data_names:
for i,sampling_rate in enumerate(sampling_rates):
print()
print()
print(data_name, sampling_rate)
multiplier = multipliers[i]
fh = forecast_horizons[i]
df = pd.read_csv("../data/"+data_name+"_"+sampling_rate+".csv", index_col=0, parse_dates=True)
df["t"] = df.messages
df = df.drop(["messages"], axis=1)
df = df.dropna()
        df = df.astype(int)
train = df.iloc[:int(len(df)*train_test_split)]
test = df.iloc[int(len(df)*train_test_split):]
print("Train shape:", train.shape)
print("Test shape:", test.shape)
start_time = time.time()
model = SimpleExpSmoothing(train.t, initialization_method="estimated").fit()
end_time = time.time()
training_duration = end_time-start_time
durations_df.loc[(durations_df.dataset == data_name) & (durations_df.sampling_rate == sampling_rate)\
, "SimpleExpSmoothing"] = training_duration
try:
results_df = pd.read_csv("results/"+ data_name + "_" + sampling_rate + "_results.csv", index_col=0, parse_dates=True)
except:
results_df = test.t.to_frame()
results_df["SimpleExpSmoothing"] = 0
results_df["SimpleExpSmoothing"].iloc[:fh] = model.forecast(fh).values
i = 1
start_time = time.time()
while i < len(results_df):
            ts = pd.concat([train.t, test.t.iloc[:i]])
model = SimpleExpSmoothing(ts, initialization_method="estimated").fit()
try:
results_df["SimpleExpSmoothing"].iloc[i:i+fh] += model.forecast(fh).values
except ValueError:
results_df["SimpleExpSmoothing"].iloc[i:] += model.forecast(len(results_df)-i).values
i += 1
end_time = time.time()
tuning_duration = (end_time - start_time) / len(results_df)
durations_df.loc[(durations_df.dataset == data_name) & (durations_df.sampling_rate == sampling_rate)\
, "SimpleExpSmoothing_tune"] = tuning_duration
great_divider = list(range(1,len(results_df)+1))
great_divider = list(map(lambda x: min(x,fh), great_divider))
results_df["SimpleExpSmoothing"] /= great_divider
show_ts(results_df.t, results_df.SimpleExpSmoothing)
results_df.to_csv("results/"+data_name+"_"+sampling_rate+"_results.csv")
durations_df.to_csv("results/durations.csv", index=False)
| classic_experiments/experiments_ses.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pandas import Series,DataFrame
# ## Handling Duplicate Data
df1 = DataFrame({'key1':['A'] * 2 + ['B'] * 3,
'key2': [2,2,2,3,3]})
df1
# True for rows that duplicate an earlier row, scanning from the top
df1.duplicated()
# #### Removing duplicates
df1.drop_duplicates()
df1.drop_duplicates(['key1'])
df1.drop_duplicates(['key1'], keep='last')
# ## Mapping
df2 = DataFrame({'city':['Alma','Brion','Fox'],
'altitude':[3155, 55000, 2434]})
df2
state_map = {'Alma':'Colorado','Brion':'Utah','Fox':'Myowing'}
# A new column can be added by looking each key up in the mapping
df2['state'] = df2['city'].map(state_map)
df2
df2['key1'] = [0,1,2]
df2
# ## Replacing Values
ser1 = Series([1,2,3,4,1,2,3,4])
ser1.replace(1, np.nan)
ser1.replace([1,4],[100,400])
ser1.replace([1,4],100)
ser1.replace({4:np.nan})
# ## Changing the Index
df3 = DataFrame(np.arange(12).reshape((3,4)),
index=['NY','LA','SF'],
columns=list('ABCD'))
df3
df3.index.map(str.lower)
df3.index = df3.index.map(str.lower)
df3
# Returns a renamed DataFrame (the original is unchanged)
df3.rename(index=str.title, columns=str.lower)
# Modify the original data in place
df3.rename(index=str.title, columns=str.lower, inplace=True)
df3
# ## Binning (categorization)
years = [1990,1991,1992,2008,2015,1986,2013,2008,1999]
decade_bins = [1960,1970,1980,1990,2000,2010]
decade_cat = pd.cut(years,decade_bins)
decade_cat
decade_cat.categories
pd.value_counts(decade_cat)
pd.cut(years, 2)
| learn/data_analytics/data_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pyspark import SparkContext, SparkConf
from pyspark.streaming import StreamingContext
import numpy as np
import matplotlib
# %matplotlib inline
from matplotlib import pyplot as plt
from IPython import display
import time
import os, shutil
conf = SparkConf()
conf.setAppName("HotAlgorithms")
conf.setMaster("spark://192.168.0.108:7077")
# +
# Stop any SparkContext left over from a previous run before creating a new one
try:
    sc.stop()
except NameError:
    pass
sc = SparkContext(conf = conf)
wait_time = 1
ssc = StreamingContext(sc, wait_time)
# Because updateStateByKey() is used, a checkpoint is needed to cache intermediate results from previous batches
# Checkpoint path
checkpoint_dir_path = "checkpoint"
# remove file
def del_file(checkpoint_dir_path):
ls = os.listdir(checkpoint_dir_path)
for i in ls:
file_path = os.path.join(checkpoint_dir_path, i)
if os.path.isfile(file_path):
os.remove(file_path)
else:
shutil.rmtree(file_path, True)
del_file(checkpoint_dir_path)
ssc.checkpoint(checkpoint_dir_path)
# +
# Directory monitored by Spark
text_file_path = "outputDir"
lines = ssc.textFileStream(text_file_path)
words = lines.flatMap(lambda line: line.split(';'))
words = words.filter(lambda x: x != "null")
# Map each word to (word, 1) and accumulate counts from previous batches
def updateFunction(newValues,runningCount):
if runningCount is None:
runningCount = 0
return sum(newValues,runningCount) #add the new values with the previous running count to get the new count
runningCounts = words.map(lambda x:(x,1)).updateStateByKey(updateFunction)
# +
# Output with foreachRDD, rendered as a bar chart
def output_histogram(time, RDD):
show_num = 10
RDD = RDD.sortBy(ascending = False, numPartitions = None, keyfunc = lambda x: x[1])
taken = RDD.take(show_num)
if len(taken) == 0:
return
algorithms = np.array([x[0] for x in taken])
frequency = np.array([x[1] for x in taken])
plt.rcParams['axes.unicode_minus']=False
plt.rcParams['figure.figsize'] = [20, 15]
def autolabel(rects):
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x()+rect.get_width()/2.-0.25, 1.01*height, '%s' % int(height))
plt.xticks(np.arange(len(algorithms)), algorithms)
rects = plt.bar(np.arange(len(frequency)), frequency, color=['r','g','b', 'c', 'm', 'y'])
autolabel(rects)
plt.title("Hot Algorithms: Top 10", fontsize=20)
plt.ylabel('Frequency', fontsize=16)
    plt.xlabel('Algorithms',fontsize=16)
plt.show()
display.clear_output(wait=True)
# +
# Output with foreachRDD, rendered as a pie chart
def output_pieChart(time, RDD):
show_num = 10
RDD = RDD.sortBy(ascending = False, numPartitions = None, keyfunc = lambda x: x[1])
taken = RDD.take(show_num)
if len(taken) == 0:
return
algorithms = np.array([x[0] for x in taken])
frequency = np.array([x[1] for x in taken])
plt.rcParams['axes.unicode_minus']=False
plt.rcParams['figure.figsize'] = [20, 10]
explode = (0,0,0,0,0,0,0,0,0,0)
colors=['deeppink','orchid','indigo', 'royalblue', 'slategray', 'cyan', 'springgreen','yellow','orange','tomato']
patches,l_text,p_text = plt.pie(frequency,
explode=explode,
labels=algorithms,
colors=colors,
labeldistance = 1.05,
autopct = '%3.1f%%',
shadow = False,
startangle = 90,
pctdistance = 0.6)
plt.title("Hot Algorithms: Top 10", fontsize=20)
plt.axis('equal')
plt.legend()
plt.show()
display.clear_output(wait=True)
# +
# Output as a bar chart
runningCounts.foreachRDD(output_histogram)
# Output as a pie chart
# runningCounts.foreachRDD(output_pieChart)
# -
ssc.start()
| HotAlgorithms.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# %nbdev_hide
# IN_COLAB = 'google.colab' in str(get_ipython())
# if IN_COLAB:
# # !pip install git+https://github.com/pete88b/nbdev_colab_helper.git
# from nbdev_colab_helper.core import *
# project_name = 'syntheticReplica'
# init_notebook(project_name)
# +
# %nbdev_hide
# %load_ext autoreload
# %autoreload 2
from nbdev import *
# %nbdev_default_export dirView
from nbdev.showdoc import *
from fastcore.nb_imports import *
# -
# %nbdev_export
from pathlib import Path
import random
import shutil
# # Directory Tools
#
# > Basic utilities for working with directories.
# ### **Create Directory**
# Create several directories with different names.<br>
#
# Args:
# * base_path : path to base directory
# * dir_list : list of directory names to be created
#
# Return:
# * None
# %nbdev_export
# Make dir, parents=True, exist_ok=True
def mkDir(base_path:Path, dir_list:list) -> None:
path_names = []
for name in dir_list:
name = (base_path).joinpath(name)
name.mkdir(parents=True, exist_ok=True)
path_names.append(name)
# %nbdev_export
# Display additional message
def additionalMssg(count:int) -> None:
if count == 0:
print("All images are ready to display, please proceed!")
elif count > 0:
        print(f'Files that are not JPG or PNG: {count}')
# ### **File Count**
# Count number of files within a directory.<br>
#
# Args:
# * path_dir : path to directory
#
# Return:
# * int
# %nbdev_export
# Count total quantity of files
def fileCount(path_dir:Path) -> int:
dir_size = len(list(path_dir.iterdir()))
print(f'Total number of items found: {dir_size}')
return dir_size
# %nbdev_export
# List path to subdirs or paths to files of a dir
def listFile(path_dir:Path) -> list:
return sorted(list(path_dir.iterdir()))
# %nbdev_export
# Enumerate list of file names and paths
def itemize(path_dir:Path) -> list:
return [i for i in enumerate(listFile(path_dir))]
# ### **Remove File**
#
# Remove a single file (example: ".txt", ".py", etc.).<br>
#
# Args:
# * id : file number from showDirInf() output
# * path_dir : path to directory
#
# Return:
# * None
# %nbdev_export
# Remove file with name and extension
def rmFile(id:list, path_dir:Path) -> None:
itemize(path_dir)
# Remove paths corresponding to id in the list.
list(map(lambda x: (itemize(path_dir)[x][1]).unlink(), id))
# ### **Remove File w/ Specific Extension**
#
# Remove any files with a specific extension, for example: ".txt".<br>
#
# Args:
# * path_dir : path to directory
# * extension : extension type
#
# Return:
# * None
# %nbdev_export
# Remove all files with a specific extension in a directory
def rmFileExt(path_dir:Path, extension:str) -> None:
enum = itemize(path_dir)
for i in range(len(enum)):
_, path = enum[i]
if path.is_file() and path.suffix == extension:
path.unlink()
            print(f'"{path}" removed!')
print(f'All "{extension}" removed!')
# ### **Remove Directory**
# Remove unwanted directory or subdirectory, for example: Colab autosave generated folder, ".ipynb_checkpoint".<br>
#
# Args:
# * path_dir : path to directory
# * dir : directory name
#
# Return:
# * None
# +
# %nbdev_export
# Remove directory or subdirectory
def rmDir(path_dir:Path, dir:str) -> None:
dir_list = listFile(path_dir)
folder = path_dir.joinpath(dir)
if folder in dir_list:
shutil.rmtree(folder)
        print(f'"{folder}" removed')
# -
# ### **Show Directory Information**
# Show file paths, names, and types, and indicate whether to remove.<br>
#
# Args:
# * path_dir : path to image directory<br>
# * suffix_list : list of acceptable image suffixes or extensions (remove if not listed)
#
# Return:
# * None
# %nbdev_export
# Display image file information within a directory
def showDirInf(path_dir:Path, suffix_list:list = ['.jpg', '.jpeg', '.png']) -> None:
count = 0
enum = itemize(path_dir)
for i in range(len(enum)):
id, path = enum[i]
if path.suffix not in suffix_list:
print(f'{id}: Not a JPG or PNG, please remove before proceeding -> {path.name}')
count += 1
else:
print(f'{id}: {path.name}')
print(f'\nPath to files: "{path.parents[0]}"')
fileCount(path_dir)
additionalMssg(count)
# %nbdev_hide
from nbdev.export import notebook2script
notebook2script("03_dirView.ipynb")
| 03_dirView.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.12 ('flyai_pytorch1_5')
# language: python
# name: python3
# ---
# # Dictionaries and Sets
#
# Dictionaries are a key data structure in Python, and Python's dict implementation is highly optimized (I have not yet worked out exactly how).
#
# ## Generic Mapping Types
#
# Dictionaries belong to the generic mapping types. Unlike sequence types, a dictionary always consists of key-value pairs.
#
# collections.abc defines the Mapping and MutableMapping abstract base classes, which specify the formal interface for dict and similar data structures. These base classes mainly serve as formal documentation, defining the minimal interface needed to build a mapping type. With isinstance you can check whether an object is a mapping in this broad sense.
#
# ### Hash Tables and Hashable Types
#
# A hash function gives faster access to the data associated with a particular key. In Python, immutable element types are generally hashable, e.g. str, bytes, and the numeric types. A tuple is hashable if all the elements it contains are hashable. A hashable type in Python must implement the \_\_hash__() method and also provide \_\_eq__().
tt = (1, 2, (3, 4))
print("\nHash of a tuple containing no mutable types:")
print(hash(tt))
tf = (1, 2, frozenset([3, 4]))
print("\nHash of a tuple containing a frozenset:")
print(hash(tf))
tl = (1, 2, [3, 4])
print("\nHash of a tuple containing a mutable type:")
try:
    print(hash(tl))
except TypeError as e:
    print(e)  # lists are unhashable, so this tuple is unhashable too
# ## dict Comprehensions
#
# Dict comprehensions, analogous to list comprehensions and generator expressions, are an effective way to initialize dictionaries.
# ## dict & dict-pro-plus-max-super
#
# dict is the most commonly used mapping type; defaultdict and OrderedDict are variants of dict that support some special behaviors. There are also ChainMap (holds several mappings and searches all of them during key lookup), Counter (a counter, very well suited to counting tasks), UserDict (an out-of-the-box base class for custom mapping types), and more.
#
# In general these variants offer everything dict does, plus richer functionality for different use cases.
#
# defaultdict has special support for unknown keys: looking up a key absent from the dict makes a plain dict raise an error, whereas defaultdict creates a default entry for the new key (extremely useful when building dictionaries — previously I would first test membership with if, then create or update the entry based on the result).
#
# OrderedDict adds support for ordering. A plain dict does not record the order of its key-value pairs — its entries are unordered — whereas OrderedDict provides extra ordering support, including FIFO-like behavior.
#
# ### get() & setdefault()
#
# get() avoids the KeyError raised when assigning through a key absent from the dict, but handling this case with get() takes several key lookups (one for get(), another for the assignment, plus a temporary variable to hold the retrieved value and the operations on it).
#
# setdefault() does all of the above in one step.
#
# ### defaultdict
#
# Sometimes a lookup of a key missing from the mapping should still return a default value; defaultdict implements the \_\_missing__ method to satisfy this need.
#
# defaultdict handles a key missing from the mapping as follows:
#
# * Call the factory specified at definition time to create a new object (e.g. if the default type is list, call list() to create a new list)
# * Insert the new object as the value of a new key-value pair in the original dict
# * Return a reference to the value of the newly created pair
#
# defaultdict relies on the default_factory method for this. Note that default_factory is only invoked inside \_\_getitem__; for a key "new_key" not present in the dict, calling get() directly returns None.
#
# \_\_getitem__ does not call default_factory directly; the flow is:
#
# * Evaluate defaultdict["new_key"], hoping to obtain the value for "new_key"
# * \_\_getitem__() runs and does not find "new_key" among the keys
# * \_\_missing__() is called to handle the unknown key "new_key"
# * default_factory() is called to give "new_key" a default value
#
# The crucial step is the implementation of \_\_missing__(). In fact, for any custom mapping type that should handle lookup and creation of unknown keys, implementing \_\_missing__() is all that is needed.
#
# ### dict and UserDict
#
# The most naive way to build a custom mapping type is to subclass dict. But dict is a built-in type, and using it as a parent class requires overriding many methods to avoid unexpected errors. UserDict is different: it is not actually a subclass of dict but a further wrapper around it — UserDict has an attribute named data, a dict instance responsible for storing the data, while the other methods handle the data. Compared with subclassing dict directly, this largely avoids accidental exceptions and infinite recursion (e.g. the \_\_missing__() infinite recursion caused by a missing key combined with an oversight).
#
# ### Immutable Mappings (read-only)
#
# In some scenarios a mapping should not be easy to change, i.e. the mapping is read-only.
#
# Python's MappingProxyType provides such a read-only class. Specifically, given a mapping object, it returns a read-only mapping view that is dynamic — the view's contents can be changed by modifying the original mapping, but the view itself cannot be modified directly.
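A minimal sketch of the defaultdict behavior described above, with list as the default_factory (note that get() bypasses \_\_missing__):

```python
from collections import defaultdict

dd = defaultdict(list)
dd["new_key"].append(1)  # __getitem__ -> __missing__ -> default_factory() == list()
print(dd["new_key"])     # [1]
print(dd.get("other"))   # None: get() does not invoke default_factory
print("other" in dd)     # False: get() did not create an entry
```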
# +
# ------------------- Now append the element 3 to the value stored under key "3"
# ------- Method 1 -------
# One membership test
# Two key lookups
# No temporary variable needed
test_dict = {"1": [1], "2": [1, 2], "3": [1, 2]}
if "3" not in test_dict:
    test_dict["3"] = []
test_dict["3"].append(3)
print(test_dict)
# ------- Method 2 -------
# Two key lookups
# Needs a temporary variable
test_dict = {"1": [1], "2": [1, 2], "3": [1, 2]}
temp_list = test_dict.get("3", [])
temp_list.append(3)
test_dict["3"] = temp_list
print(test_dict)
# ------- Method 3 -------
# One key lookup
# No temporary variable needed
test_dict = {"1": [1], "2": [1, 2], "3": [1, 2]}
test_dict.setdefault("3", []).append(3)
print(test_dict)
# ----------------------- dict and UserDict -----------------------
# Implement a dict that can be queried with digits in string form or with plain numbers:
# dict[1] and dict["1"] should return the same result
class strKeyDict0(dict):
    # A class inheriting from dict needs the following methods:
    # 1. __missing__(): if the key passed in is numeric, convert it to str
    #    and look it up a second time
    # 2. __contains__(): the `in` keyword calls this method; to be correct it
    #    must check both the key as given and its str form. Note that
    #    `key in dict` is NOT used here, because strKeyDict0 itself inherits
    #    from dict, and `key in dict` would make __contains__() recurse forever
    # 3. get(): although the [] operator calls __getitem__() rather than get(),
    #    get() is very common, and to keep get() and __getitem__() consistent in
    #    a custom dict it must be overridden — dict's own get() never calls
    #    __missing__() and simply returns None for an unknown key
    def __missing__(self, key):
        if isinstance(key, str):
            raise KeyError(key)
        return self[str(key)]
    def get(self, key, default=None):
        try:
            return self[key]
        except KeyError:
            return default
    def __contains__(self, key):
        return key in self.keys() or str(key) in self.keys()
from collections import UserDict
class strKeyDict(UserDict):
    # A class inheriting from UserDict needs the following methods:
    # 1. __missing__(): same implementation as in strKeyDict0
    # 2. __contains__(): unlike strKeyDict0, which inherits from dict,
    #    strKeyDict stores its data in self.data, so `str(key) in self.data`
    #    cannot trigger infinite recursion
    # 3. __setitem__(): the actual work is done by the dict-typed self.data;
    #    here we only need to pass in the (stringified) key and the value
    # 4. get() does not need to be overridden in strKeyDict, because UserDict
    #    inherits from the Mapping base class, and Mapping.get() behaves
    #    exactly like the get() written in strKeyDict0 above
    def __missing__(self, key):
        if isinstance(key, str):
            raise KeyError(key)
        return self[str(key)]
    def __contains__(self, key):
        return str(key) in self.data
    def __setitem__(self, key, item):
        self.data[str(key)] = item
# -
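The read-only but dynamic view described above can be sketched as:

```python
from types import MappingProxyType

d = {"a": 1}
view = MappingProxyType(d)
print(view["a"])   # 1
d["b"] = 2         # changes to the underlying dict show through the view
print(view["b"])   # 2
try:
    view["c"] = 3  # the view itself rejects modification
except TypeError as e:
    print("read-only:", e)
```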
# ## Sets
#
# Sets are a relatively new concept in Python, describing collections of **unique** objects. Sets have the following properties:
# 1. Uniqueness of elements: a set never contains two identical objects (uniqueness here means uniqueness of the hash)
# 2. The elements of a set must be hashable, but a set itself is not hashable
# 3. Sets support the binary set operations, such as intersection, union, and difference
#
# ### Set Comprehensions
#
# Analogous to list comprehensions and generator expressions.
#
# ### Set Operations
#
# Python sets support the usual mathematical set operations — difference, union, intersection, symmetric difference, etc.; they also support the various set/element predicates: membership, subset, proper subset, and so on; and they support other generic operations: adding elements, removing elements, iteration. Note that sets do not support slicing.
# ## The Implementation Behind Dictionaries and Sets
#
# Python's built-in dictionaries and sets rely on hash tables. The hash table gives dictionaries and sets fast lookup, but it also makes them unordered and means that not every Python object can be a dict key or a set element.
#
# The main questions are:
#
# 1. How do Python's dict, set, and list compare in efficiency?
# 2. Why are dict and set unordered?
# 3. Why can't every Python object serve as a dict key or a set element?
# 4. Why is the order of elements not fixed over the lifetime of a dict or set object?
# 5. Why must elements not be added or removed while iterating over a dict or set?
# 6. Why do dict and set occupy far more memory than a list holding the same data?
#
# ### The Hash Table Inside dict and set
#
# dict and set use a hash table to make element lookup efficient. Specifically, in a dict's hash table each key-value pair occupies one bucket, and each bucket consists of a reference to the key and a reference to the value.
#
# Looking up a particular element proceeds as follows:
# 1. Compute the hash of the key
# 2. Use part of the hash value to locate a bucket in the table
# 3. Check whether the bucket is empty
#    * If empty, raise KeyError
#    * If not, go to step 4
# 4. Check whether it is the element sought
#    * If it is, return the value stored in the bucket
#    * If not, go to step 5
# 5. A collision has occurred: use another part of the hash to locate a different row of the table, and return to step 3
#
# Adding a new element and updating an existing one follow essentially the same steps. For insertion, if the bucket is empty the element is stored there; otherwise the offset is increased until the next empty bucket is found. Also, **to keep the hash table sparse, once the number of elements grows past a threshold Python automatically copies the table into a larger space to avoid collisions**. For updates, the value is simply overwritten once the corresponding bucket is found.
#
# This analysis in fact already answers several of the questions about dict and set:
#
# 1. Precisely because dict and set store data in a hash table, element lookup does not require traversing all elements, which greatly improves search efficiency. And thanks to sparsity, lookup time does not grow linearly with the number of elements in the dict or set.
# 2. Because of hash-table storage, the elements of a dict or set have no fixed order, and inserting new elements may change the existing order
# 3. Since dict and set store data in a hash table, both types require their elements to be hashable. That is, in Python an element must support the \_\_hash__() function, and the hash value it returns must not change during the element's lifetime. Python additionally requires hashable elements to support the \_\_eq__() method for equality tests. These conditions mean that not every object can be a dict key or a set element.
# 4. Similar to 2: since inserting elements may well change the existing order, discussing element order in a dict or set is meaningless, unless a special type such as OrderedDict is used.
# 5. Because element order is not deterministic, adding or removing elements while looping over a dict or set may cause some elements to be skipped. When iterating over a dict with .keys(), .values(), or .items(), the dictionary views these methods return in Python 3 are dynamic: changes to the dict during the loop are reflected in the loop in real time, which obviously leads to unpredictable errors.
# 6. dict and set are a classic example of trading space for time. The hash table must stay sparse to avoid collisions as much as possible, so compared with a list, dict and set must maintain a larger memory footprint to keep the table sparse
# ## Summary
#
# 1. Python's built-in dict type carries important functionality in many tasks, and beyond the basic dict, the standard library provides quite a few special mapping types suited to different scenarios (defaultdict, OrderedDict, ChainMap, etc.)
# 2. Most mapping types provide the setdefault and update methods; the former conveniently handles unknown keys, and the latter makes batch updates of key-value pairs possible
# 3. The \_\_missing__ method is the interface for handling unknown keys; by customizing it you can specify exactly what should happen when an unknown key is encountered.
# 4. collections.abc provides abstract base classes for several mapping types, which can be used to test object types. In addition, MappingProxyType can be used to create immutable mapping objects
# 5. The implementation of dict and set relies on hash tables. Hash tables trade space for time, which gives dict and set very high efficiency; at the same time, the use of hash tables makes element order in a dict or set meaningless and means not every element can be a dict key or a set element.
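The operations listed above can be sketched on two small sets using Python's set operators:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5}

print(a | b)        # union: {1, 2, 3, 4, 5}
print(a & b)        # intersection: {3, 4}
print(a - b)        # difference: {1, 2}
print(a ^ b)        # symmetric difference: {1, 2, 5}
print({1, 2} <= a)  # subset test: True
```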
| 03_DictionariesAndSets/03_DictionariesAndSets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
from astropy.io import fits
from astropy.io import ascii
from astropy.table import Table,Column
import glob
import subprocess
import pandas as pd
# +
# Read in one file to look at it:
path = "/Volumes/MRD_Backup/SwiftData/Astrometry/LMC"
files = glob.glob(path+'/45*/?????_1_1.csv')
#print(files[0:5])
f1 = pd.read_csv(files[0])
print(f1.keys())
#f1
field = files[0].split('/')[-2]
fieldn = 'lmc-obs'+field+'-'+str(f1['Unnamed: 0'][0])
fieldnames = 'lmc-obs'+field+'-'+str(np.array(f1['Unnamed: 0']))
fieldnames
len(f1['Ra'])
# -
# ## Code to Combine All of Them:
# +
path = "/Volumes/MRD_Backup/SwiftData/Astrometry/LMC"
files = glob.glob(path+'/4*/?????_2_1.csv')
outfile = path+'/LMC_combined_2_1.csv'
#Prep the final columns:
name = []
obsid = []
ra = []
dec = []
uvw2_mag = []
uvw2_mag_err = []
uvm2_mag = []
uvm2_mag_err = []
uvw1_mag = []
uvw1_mag_err = []
Umag = []
e_Umag = []
Bmag = []
e_Bmag = []
Vmag = []
e_Vmag = []
Imag = []
e_Imag = []
Flag = []
Jmag = []
e_Jmag = []
Hmag = []
e_Hmag = []
Ksmag = []
e_Ksmag = []
# quality
uvw2_resid_frac = []
uvw2_saturated = []
uvw2_sss = []
uvw2_edge = []
uvm2_resid_frac = []
uvm2_saturated = []
uvm2_sss = []
uvm2_edge = []
uvw1_resid_frac = []
uvw1_saturated = []
uvw1_sss = []
uvw1_edge = []
mcps_flag = []
# locations
uvw2_key = []
uvw2_pix_x = []
uvw2_pix_y = []
uvm2_key = []
uvm2_pix_x = []
uvm2_pix_y = []
uvw1_key = []
uvw1_pix_x = []
uvw1_pix_y = []
for file in files:
f1 = pd.read_csv(file)
field = file.split('/')[-2]
sub_f = []
sub_name = []
for i in range(len(f1['Ra'])):
sub_f.append(field)
        name_sub = 'lmc-obs'+field+'-'+str(f1['Unnamed: 0'][i])
sub_name.append(name_sub)
name.extend(sub_name)
obsid.extend(sub_f)
ra.extend(f1['Ra'])
dec.extend(f1['Dec'])
uvw2_mag.extend(f1['UVW2_MAG'])
uvw2_mag_err.extend(f1['UVW2_MAG_ERR'])
uvm2_mag.extend(f1['UVM2_MAG'])
uvm2_mag_err.extend(f1['UVM2_MAG_ERR'])
uvw1_mag.extend(f1['UVW1_MAG'])
uvw1_mag_err.extend(f1['UVW1_MAG_ERR'])
Umag.extend(f1['Umag'])
e_Umag.extend(f1['e_Umag'])
Bmag.extend(f1['Bmag'])
e_Bmag.extend(f1['e_Bmag'])
Vmag.extend(f1['Vmag'])
e_Vmag.extend(f1['e_Vmag'])
Imag.extend(f1['Imag'])
e_Imag.extend(f1['e_Imag'])
Jmag.extend(f1['Jmag'])
e_Jmag.extend(f1['e_Jmag'])
Hmag.extend(f1['Hmag'])
e_Hmag.extend(f1['e_Hmag'])
Ksmag.extend(f1['Ksmag'])
e_Ksmag.extend(f1['e_Ksmag'])
uvw2_resid_frac.extend(f1['UVW2_RESID_FRAC'])
uvw2_saturated.extend(f1['UVW2_SATURATED'])
uvw2_sss.extend(f1['UVW2_SSS'])
uvw2_edge.extend(f1['UVW2_EDGE'])
uvm2_resid_frac.extend(f1['UVM2_RESID_FRAC'])
uvm2_saturated.extend(f1['UVM2_SATURATED'])
uvm2_sss.extend(f1['UVM2_SSS'])
uvm2_edge.extend(f1['UVM2_EDGE'])
uvw1_resid_frac.extend(f1['UVW1_RESID_FRAC'])
uvw1_saturated.extend(f1['UVW1_SATURATED'])
uvw1_sss.extend(f1['UVW1_SSS'])
uvw1_edge.extend(f1['UVW1_EDGE'])
mcps_flag.extend(f1['Flag'])
uvw2_key.extend(f1['UVW2_KEY'])
uvw2_pix_x.extend(f1['UVW2_PIX_X'])
uvw2_pix_y.extend(f1['UVW2_PIX_Y'])
uvm2_key.extend(f1['UVM2_KEY'])
uvm2_pix_x.extend(f1['UVM2_PIX_X'])
uvm2_pix_y.extend(f1['UVM2_PIX_Y'])
uvw1_key.extend(f1['UVW1_KEY'])
uvw1_pix_x.extend(f1['UVW1_PIX_X'])
uvw1_pix_y.extend(f1['UVW1_PIX_Y'])
#print(len(name))
#print(obsid)
# Write this out:
data_out = [name,obsid,ra,dec,uvw2_mag,uvw2_mag_err,uvm2_mag,uvm2_mag_err,uvw1_mag,uvw1_mag_err,
Umag,e_Umag,Bmag,e_Bmag,Vmag,e_Vmag,Imag,e_Imag,Jmag,e_Jmag,Hmag,e_Hmag,Ksmag,
e_Ksmag,uvw2_resid_frac,uvw2_saturated,uvw2_sss,uvw2_edge,uvm2_resid_frac,uvm2_saturated,
uvm2_sss,uvm2_edge,uvw1_resid_frac,uvw1_saturated,uvw1_sss,uvw1_edge,mcps_flag,uvw2_key,
uvw2_pix_x,uvw2_pix_y,uvm2_key,uvm2_pix_x,uvm2_pix_y,uvw1_key,uvw1_pix_x,uvw1_pix_y]
names = ['name','obsid','ra','dec','uvw2_mag','uvw2_mag_err','uvm2_mag','uvm2_mag_err','uvw1_mag',
'uvw1_mag_err','Umag','e_Umag','Bmag','e_Bmag','Vmag','e_Vmag','Imag','e_Imag','Jmag',
'e_Jmag','Hmag','e_Hmag','Ksmag','e_Ksmag','uvw2_resid_frac','uvw2_saturated','uvw2_sss',
'uvw2_edge','uvm2_resid_frac','uvm2_saturated','uvm2_sss','uvm2_edge','uvw1_resid_frac',
'uvw1_saturated','uvw1_sss','uvw1_edge','mcps_flag','uvw2_key','uvw2_pix_x','uvw2_pix_y',
'uvm2_key','uvm2_pix_x','uvm2_pix_y','uvw1_key','uvw1_pix_x','uvw1_pix_y']
ascii.write(data_out, outfile, names=names, delimiter=',', overwrite=True)
# -
print(len(name))
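# The write-out above keeps dozens of parallel lists in sync by hand. A safer pattern is to build one dict per row and let `csv.DictWriter` align the columns; a minimal stdlib sketch (the column names here are a made-up subset of the catalogue columns, not the real ones):

```python
import csv
import io

# Illustrative rows only; in the notebook each row would carry all ~46 columns.
rows = [
    {"name": "smc-obs-1", "ra": 12.5, "dec": -73.1},
    {"name": "smc-obs-2", "ra": 12.6, "dec": -73.2},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "ra", "dec"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

With row dicts, a missing or reordered column raises an error instead of silently misaligning the table.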
# source: Tractor/Collect_csvs.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
import emapy as epy
import sys
import pandas as pd
# +
locationAreaBoundingBox = (41.3248770036,2.0520401001,41.4829908452,2.2813796997)
barris = epy.getDatabase('barris', 'geojson','../data/raw/barris.geojson', '',True, 0, 1, 'cartodb_id')
allData = epy.getDatabaseFromOSM('restaurantes', 'amenity', False, True, locationAreaBoundingBox, 'bar')
T = [[barri, data['properties'], data['geometry']]
for barri in barris[1] for data in allData
if epy.coordInsidePolygon(data["geometry"][0],
data["geometry"][1],
epy.transformArrYXToXY(barris[1][barri]))]
# +
df = pd.DataFrame({'id' : [], 'data': []})
allId = dict()
for data in T:
key = int(float(data[0]))
if key in allId:
allId[key] += 1
else:
allId[key] = 1
for idBarri in barris[1]:
key = int(float(idBarri))
if key in allId:
row = [key, allId[key] * 1.0 / len(T)]
df.loc[len(df), ['id', 'data']] = row
else:
df.loc[len(df), ['id', 'data']] = [key,0]
# -
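# The manual counting loop above is equivalent to `collections.Counter`; a minimal sketch with toy rows (the first element plays the role of the barri id, and `* 1.0 /` keeps the division float-safe under the notebook's Python 2 kernel):

```python
from collections import Counter

# Toy stand-in for T: [barri_id, properties, geometry]
T = [["1", {}, None], ["1", {}, None], ["2", {}, None]]

counts = Counter(int(float(row[0])) for row in T)
fractions = {k: v * 1.0 / len(T) for k, v in counts.items()}
# counts == Counter({1: 2, 2: 1})
```

Barris with no bars still need the explicit zero-fill loop, since `Counter` only holds keys it has seen.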
bcn_map = epy.mapCreation(41.388790,2.158990)  # renamed from `map` to avoid shadowing the builtin
epy.mapChoropleth(bcn_map,
                  '../data/raw/barris.geojson',
                  'feature.properties.cartodb_id',
                  df,
                  'id',
                  'data',
                  'YlGn',
                  0.7,
                  0.3,
                  [],
                  'bars / barri')
epy.mapSave(bcn_map, '../reports/maps/mapOfBarsxBarri.html')
bcn_map
# source: notebooks/bars_barri.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This file is deprecated. Newer versions use categorization by topic and area. Use:
# - `ep-loader`
# - then `ep-topic-nested`
import pandas as pd, numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import lzma,json
f=lzma.open("ep/ep_meps_current.json.xz")
#http://parltrack.euwiki.org/dumps/ep_meps_current.json.xz
members=json.loads(f.read())
f=lzma.open("ep/ep_votes.json.xz")
#http://parltrack.euwiki.org/dumps/ep_votes.json.xz
votes=json.loads(f.read())
hu_countries={'Hungary':'Magyarország','Romania':'Románia'}
def hu_country(c):
if c in hu_countries: return hu_countries[c]
else: return c
countries=['Hungary','Romania']
eu={}
parties={}
groups={}
names={}
for j in members:
z='Constituencies'
w='Groups'
if z in j:
if j[z][0]['country'] in countries:
if j[z][0]['country'] not in eu:eu[j[z][0]['country']]={}
eu[j[z][0]['country']][j['UserID']]=j
names[j['Name']['full']]=j
for i in j[z]:
if type(i['party'])==str:
party=i['party']
else:
party=i['party'][0]
party=str(party)
start=int(str(i['start'])[:4])
end=int(str(i['end'])[:4])
if end==9999:
end=2019
if party not in parties:
parties[party]={'min':9999,'max':0}
parties[party]['min']=min(start,parties[party]['min'])
parties[party]['max']=max(end,parties[party]['max'])
if w in j:
for i in j[w]:
party=i['Organization']
party=str(party)
if type(i['groupid'])==str:
code=i['groupid']
else:
code=i['groupid'][0]
start=int(str(i['start'])[:4])
end=int(str(i['end'])[:4])
if end==9999:
end=2019
if party not in groups:
groups[party]={'min':9999,'max':0}
groups[party]['min']=min(start,groups[party]['min'])
groups[party]['max']=max(end,groups[party]['max'])
groups[party]['code']=code
groups
parties
with open('ep/export/json/names.json','w') as fo:
    fo.write(json.dumps(names))
with open('ep/export/json/groups.json','w') as fo:
    fo.write(json.dumps(groups))
with open('ep/export/json/parties.json','w') as fo:
    fo.write(json.dumps(parties))
def party_normalizer(party):
if party in ['ALDE','ELDR']: return 'ALDE'
elif party in ['ITS','ENF']: return 'ENF'
elif party in ['NA','NI',['NA', 'NI'],'-','Independent']: return 'N/A'
elif party in ['PPE','PPE-DE']: return 'EPP'
elif party in ['Verts/ALE']: return 'Greens'
elif party in ['S&D','PSE']: return 'S&D'
elif party in ['ALDE Romania','Partidul Conservator','Partidul Puterii Umaniste']: return 'ALDE RO'
elif party in ['Demokratikus Koalíció']: return 'DK'
elif party in ['Együtt 2014 - Párbeszéd Magyarországért']:return 'Együtt PM'
elif party in ['Fidesz-Magyar Polgári Szövetség',
'Fidesz-Magyar Polgári Szövetség-Keresztény Demokrata Néppárt',
'Fidesz-Magyar Polgári Szövetség-Kereszténydemokrata Néppárt',
'Kereszténydemokrata Néppárt']:return 'FIDESZ-KDNP'
elif party in ['Forumul Democrat al Germanitor din România']: return 'FDGR'
elif party in ['Jobbik Magyarországért Mozgalom']:return 'Jobbik'
elif party in ['Lehet Más A Politika']:return 'LMP'
elif party in ['Magyar Demokrata Fórum','Modern Magyarország Mozgalom',
'Szabad Demokraták Szövetsége']: return 'Egyéb'
elif party in ['Magyar Szocialista Párt']: return 'MSZP'
elif party in ['Partidul Democrat','Partidul Democrat-Liberal','Partidul Naţional Liberal',
'Partidul Liberal Democrat','PNL']: return'PNL'
elif party in ['Partidul Mișcarea Populară']: return 'PMP'
elif party in ['Partidul Naţional Ţaranesc Creştin Democrat']:return 'PNȚCD'
elif party in ['Partidul România Mare']:return 'PRM'
elif party in ['PSD','Partidul Social Democrat','Partidul Social Democrat + Partidul Conservator']:return 'PSD'
elif party in ['Romániai Magyar Demokrata Szövetség',
'Uniunea Democrată Maghiară din România']:return 'RMDSZ'
elif party in ['Uniunea Națională pentru Progresul României']: return 'UNPR'
else: return party
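# The long if/elif chain above can be flattened into a reverse alias table built once; a minimal sketch with only a few illustrative entries (the full mapping lives in `party_normalizer`, and its list-valued keys like `['NA', 'NI']` would need special handling since lists are unhashable):

```python
# Canonical name -> all aliases that should map to it (illustrative subset).
_ALIASES = {
    "ALDE": ["ALDE", "ELDR"],
    "EPP": ["PPE", "PPE-DE", "EPP"],
    "S&D": ["S&D", "PSE"],
}

# Invert into alias -> canonical for O(1) lookup.
_LOOKUP = {alias: canon for canon, aliases in _ALIASES.items() for alias in aliases}

def normalize(party):
    # Unknown parties pass through unchanged, like the final else above.
    return _LOOKUP.get(party, party)

normalize("PSE")  # -> "S&D"
```

The same table can then drive `party_normalizer2` by appending the emoji suffix, instead of duplicating the whole chain.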
def party_normalizer2(party):
if party in ['ALDE','ELDR']: return 'ALDE ⏩'
elif party in ['ITS','ENF']: return 'ENF 🌐'
elif party in ['N/A','NA','NI',['NA', 'NI'],'-','Independent']: return 'N/A 👤'
elif party in ['EPP','PPE','PPE-DE']: return 'EPP ⭐️'
elif party in ['Greens','Verts/ALE']: return 'Greens 🌻'
elif party in ['S&D','PSE']: return 'S&D 🔴'
elif party in ['ECR']: return 'ECR 🦁'
elif party in ['ALDE RO','ALDE Romania','Partidul Conservator','Partidul Puterii Umaniste']: return 'ALDE RO 🕊️'
elif party in ['DK','Demokratikus Koalíció']: return 'DK 🔵'
elif party in ['Együtt PM','Együtt 2014 - Párbeszéd Magyarországért']:return 'Együtt PM ✳️'
elif party in ['Fidesz-Magyar Polgári Szövetség',
'Fidesz-Magyar Polgári Szövetség-Keresztény Demokrata Néppárt',
'Fidesz-Magyar Polgári Szövetség-Kereszténydemokrata Néppárt',
'Kereszténydemokrata Néppárt','FIDESZ-KDNP']:return 'FIDESZ-KDNP 🍊'
elif party in ['Forumul Democrat al Germanitor din România','FDGR']: return 'FDGR ⚫️'
elif party in ['Jobbik Magyarországért Mozgalom','Jobbik']:return 'Jobbik ✅'
elif party in ['Lehet Más A Politika','LMP']:return 'LMP 🏃♂️'
elif party in ['Magyar Demokrata Fórum','Modern Magyarország Mozgalom',
'Szabad Demokraták Szövetsége','Egyéb']: return 'Egyéb ⭕️'
elif party in ['Magyar Szocialista Párt','MSZP']: return 'MSZP 🌸'
elif party in ['Partidul Democrat','Partidul Democrat-Liberal','Partidul Naţional Liberal',
'Partidul Liberal Democrat','PNL']: return'PNL 🔶'
elif party in ['Partidul Mișcarea Populară','PMP']: return 'PMP 🍏'
elif party in ['Partidul Naţional Ţaranesc Creştin Democrat','PNȚCD']:return 'PNȚCD ✳️'
elif party in ['Partidul România Mare','PRM']:return 'PRM 🔱'
elif party in ['PSD','Partidul Social Democrat','Partidul Social Democrat + Partidul Conservator']:return 'PSD 🌹'
elif party in ['Romániai Magyar Demokrata Szövetség',
'Uniunea Democrată Maghiară din România','RMDSZ']:return 'RMDSZ 🌷'
elif party in ['Uniunea Națională pentru Progresul României','UNPR']: return 'UNPR 🦅'
else: return party
party_image_links={
"ALDE":"alde.jpg",
"ECR":"ecr.jpg",
"ENF":"enf.jpg",
"N/A":"independent.png",
"EPP":"epp.jpg",
"S&D":"S&D.png",
"Greens":"greens.png",
"ALDE RO":"aldero.jpg",
"DK":"dk.png",
"Egyéb":"hun.jpg",
"Együtt PM":"egyutt.jpg",
"FDGR":"fdgr.jpg",
"FIDESZ-KDNP":"fidesz.png",
"Jobbik":"jobbik.png",
"LMP":"lmp.jpg",
"MSZP":"mszp.png",
"PMP":"pmp.png",
"PNL":"pnl.png",
"PNȚCD":"pntcd.png",
"PRM":"prm.png",
"PSD":"psd.png",
"RMDSZ":"rmdsz.jpg",
"UNPR":"unpr.jpg"
}
master_image_path='https://szekelydata.csaladen.es/ep/ep/img/'
def get_photo(name,allegiance_type2):
if allegiance_type2=='name':
return names[name]['Photo']
else:
if name in party_image_links:
return master_image_path+party_image_links[name]
else:
return ''
def get_photos(df,allegiance_type2):
photos=[]
for i in df['name2'].values:
photos.append(get_photo(i,allegiance_type2))
df['image']=photos
df=df[list(df.columns[:2])+list([df.columns[-1]])+list(df.columns[2:-1])]
return df
from colorthief import ColorThief
plt.style.use('fivethirtyeight')
print(plt.style.available)
def party_color(party,default_color="#000000"):
if party in party_image_links:
path='ep/img/'+party_image_links[party]
color_thief = ColorThief(path)
rgb_color=color_thief.get_color(quality=1)
return '#%02x%02x%02x' % rgb_color
else:
return default_color
party_color_links={}
for party in party_image_links:
party_color_links[party]=party_color(party)
def get_link_color(party,default_color="#000000"):
if party=='N/A': return '#444444'
elif party=='ENF': return '#777777'
elif party=='ALDE RO': return '#459ccc'
elif party=='FDGR': return '#961934'
elif party=='Jobbik': return '#3cb25a'
elif party in party_color_links:
return party_color_links[party]
else:
return default_color
for e,i in enumerate(party_color_links):
plt.plot([0,1],[e,e],color=get_link_color(i),lw=3,label=i)
plt.legend(fontsize=8,loc=3,framealpha=1)
for e,i in enumerate(party_color_links):
print(i+':',get_link_color(i))
for e,i in enumerate(party_color_links):
print(party_normalizer2(i)+':',get_link_color(i))
pnames=[]
for name in names:
dummy={'name':name}
dummy['country']=names[name]['Constituencies'][0]['country']
dummy['hucountry']=hu_country(dummy['country'])
dummy['party']=party_normalizer(names[name]['Constituencies'][0]['party'])
dummy['group']=party_normalizer(names[name]['Groups'][0]['groupid'])
dummy['party2']=party_normalizer2(names[name]['Constituencies'][0]['party'])
dummy['group2']=party_normalizer2(names[name]['Groups'][0]['groupid'])
dummy['partycolor']=get_link_color(dummy['party'])
dummy['groupcolor']=get_link_color(dummy['group'])
dummy['image']=get_photo(name,'name')
dummy['last']=name.split(' ')[-1]
dummy['members']=1
pnames.append(dummy)
with open('ep/export/json/pnames.json','w') as fo:
    fo.write(json.dumps(pnames))
def get_allegiance(allegiance,voteid,outcome,name):
    # NB: reads `j` (the current vote dict) from the enclosing loop to fill in
    # the vote's title/url/timestamp the first time this voteid is seen
    if voteid not in allegiance:
        allegiance[voteid]={'title':j['title'],'url':j['url'],'ts':j['ts']}
    if outcome not in allegiance[voteid]:
        allegiance[voteid][outcome]=[]
    allegiance[voteid][outcome].append(name)
    return allegiance
eu_allegiance={}
eu_vt={}
eu_joint_allegiance={}
eu_joint_vt={}
for country in countries:
hu=eu[country]
hu_allegiance={}
hu_vt={}
for j in votes:
ts=j['ts']
year=str(ts)[:4]
if year not in hu_vt:hu_vt[year]=[]
if year not in hu_allegiance:hu_allegiance[year]={'name':{},'group':{},'party':{}}
if year not in eu_joint_vt:eu_joint_vt[year]=[]
if year not in eu_joint_allegiance:eu_joint_allegiance[year]={'name':{},'group':{},'party':{}}
if j['title'] not in ["Modification de l'ordre du jour"]:
for outcome in ['For','Against']:
if outcome in j:
for group in j[outcome]['groups']:
for i in group['votes']:
if i['ep_id'] in hu:
dummy={}
dummy['vote']=j['voteid']
dummy['party']='-'
for k in hu[i['ep_id']]['Constituencies']:
if k['start']<ts<k['end']:
dummy['party']=k['party']
dummy['name']=hu[i['ep_id']]['Name']['full']
dummy['outcome']=outcome
dummy['group']=group['group']
dummy['party']=party_normalizer(dummy['party'])
dummy['group']=party_normalizer(dummy['group'])
dummy['title']=j['title']
dummy['url']=j['url']
dummy['ts']=ts
dummy['year']=year
hu_vt[year].append(dummy)
eu_joint_vt[year].append(dummy)
for allegiance_type in ['name','group','party']:
hu_allegiance[year][allegiance_type]=\
get_allegiance(hu_allegiance[year][allegiance_type],j['voteid'],
outcome,dummy[allegiance_type])
eu_joint_allegiance[year][allegiance_type]=\
get_allegiance(eu_joint_allegiance[year][allegiance_type],j['voteid'],
outcome,dummy[allegiance_type])
eu_allegiance[country]=hu_allegiance
eu_vt[country]=hu_vt
print(country)
name_votes={}
for country in countries:
for year in eu_vt[country]:
for vote in eu_vt[country][year]:
if vote['name'] not in name_votes:name_votes[vote['name']]={}
if year not in name_votes[vote['name']]:name_votes[vote['name']][year]=0
name_votes[vote['name']][year]+=1
tnames=[]
for name in name_votes:
for year in name_votes[name]:
dummy={'name':name}
dummy['country']=names[name]['Constituencies'][0]['country']
dummy['hucountry']=hu_country(dummy['country'])
dummy['party']=party_normalizer(names[name]['Constituencies'][0]['party'])
dummy['group']=party_normalizer(names[name]['Groups'][0]['groupid'])
dummy['party2']=party_normalizer2(names[name]['Constituencies'][0]['party'])
dummy['group2']=party_normalizer2(names[name]['Groups'][0]['groupid'])
dummy['partycolor']=get_link_color(dummy['party'])
dummy['groupcolor']=get_link_color(dummy['group'])
dummy['image']=get_photo(name,'name')
dummy['members']=1
dummy['year']=int(year)
dummy['last']=name.split(' ')[-1]
dummy['votes']=name_votes[name][year]
tnames.append(dummy)
with open('ep/export/json/tnames.json','w') as fo:
    fo.write(json.dumps(tnames))
# Joint allegiance
eu_allegiance['Joint']=eu_joint_allegiance
eu_vt['Joint']=eu_joint_vt
countries=countries+['Joint']
# Allegiance
def get_allegiance_matrix(key,vt,allegiance):
allegiance_matrix={}
initvote={'Same':0,'Opposite':0,'Total':0}
for j1 in vt:
outcome=j1['outcome']
name1=j1[key]
if name1 not in allegiance_matrix:allegiance_matrix[name1]={}
if outcome=='For':
for name2 in allegiance[j1['vote']]['For']:
if name2 not in allegiance_matrix[name1]:
allegiance_matrix[name1][name2]=dict(initvote)
allegiance_matrix[name1][name2]['Total']+=1
allegiance_matrix[name1][name2]['Same']+=1
if 'Against' in allegiance[j1['vote']]:
for name2 in allegiance[j1['vote']]['Against']:
if name2 not in allegiance_matrix[name1]:
allegiance_matrix[name1][name2]=dict(initvote)
allegiance_matrix[name1][name2]['Total']+=1
allegiance_matrix[name1][name2]['Opposite']+=1
elif outcome=='Against':
for name2 in allegiance[j1['vote']]['Against']:
if name2 not in allegiance_matrix[name1]:
allegiance_matrix[name1][name2]=dict(initvote)
allegiance_matrix[name1][name2]['Total']+=1
allegiance_matrix[name1][name2]['Same']+=1
if 'For' in allegiance[j1['vote']]:
for name2 in allegiance[j1['vote']]['For']:
if name2 not in allegiance_matrix[name1]:
allegiance_matrix[name1][name2]=dict(initvote)
allegiance_matrix[name1][name2]['Total']+=1
allegiance_matrix[name1][name2]['Opposite']+=1
for j in allegiance_matrix:
for i in allegiance_matrix[j]:
allegiance_matrix[j][i]['Same_perc']=np.round(allegiance_matrix[j][i]['Same']/allegiance_matrix[j][i]['Total'],3)
allegiance_matrix[j][i]['Opposite_perc']=np.round(allegiance_matrix[j][i]['Opposite']/allegiance_matrix[j][i]['Total'],3)
return allegiance_matrix
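# A toy, self-contained illustration of the Same/Opposite agreement metric that `get_allegiance_matrix` computes above (the names and votes here are made up): two voters agree on a vote when they are both For or both Against, and `Same_perc` is agreements divided by shared votes.

```python
# Hypothetical roll-call data: vote id -> who voted For / Against.
votes = {
    "v1": {"For": ["Anna", "Bela"], "Against": ["Csaba"]},
    "v2": {"For": ["Anna"], "Against": ["Bela", "Csaba"]},
}

def agreement(a, b):
    same = total = 0
    for v in votes.values():
        side_a = "For" if a in v["For"] else "Against" if a in v["Against"] else None
        side_b = "For" if b in v["For"] else "Against" if b in v["Against"] else None
        if side_a and side_b:  # only count votes where both took a side
            total += 1
            same += side_a == side_b
    return same / total

agreement("Anna", "Bela")   # 0.5: they agree on v1, disagree on v2
agreement("Anna", "Csaba")  # 0.0: opposite sides both times
```

The notebook's version additionally aggregates by party/group and keeps raw counts, but the ratio is the same idea.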
eu_allegiance_matrix={}
for country in countries:
for year in sorted(eu_vt[country]):
for allegiance_type1 in ['name','group','party']:
for allegiance_type2 in ['name','group','party']:
dummy=get_allegiance_matrix(allegiance_type1,eu_vt[country][year],
eu_allegiance[country][year][allegiance_type2])
if dummy!={}:
if country not in eu_allegiance_matrix:eu_allegiance_matrix[country]={}
if year not in eu_allegiance_matrix[country]:eu_allegiance_matrix[country][year]={}
if allegiance_type1 not in eu_allegiance_matrix[country][year]:
eu_allegiance_matrix[country][year][allegiance_type1]={}
if allegiance_type2 not in eu_allegiance_matrix[country][year][allegiance_type1]:
eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2]={}
eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2]=dummy
print(country,year)
with open('ep/export/json/eu_allegiance_matrix.json','w') as fo:
    fo.write(json.dumps(eu_allegiance_matrix))
# Listify dictionary
eu_allegiance_list=[]
for country in sorted(eu_allegiance_matrix):
for year in sorted(eu_allegiance_matrix[country]):
for allegiance_type1 in sorted(eu_allegiance_matrix[country][year]):
for allegiance_type2 in sorted(eu_allegiance_matrix[country][year][allegiance_type1]):
for name1 in sorted(eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2]):
for name2 in sorted(eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2][name1]):
dummy={'country':country,
'year':year,
'allegiance_type1':allegiance_type1,
'allegiance_type2':allegiance_type2,
'name1':name1,
'name2':name2}
for key in sorted(eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2][name1][name2]):
dummy[key]=eu_allegiance_matrix[country][year]\
[allegiance_type1][allegiance_type2][name1][name2][key]
if name1!=name2:
eu_allegiance_list.append(dummy)
with open('ep/export/json/eu_allegiance_list.json','w') as fo:
    fo.write(json.dumps(eu_allegiance_list))
# For Flourish
eu_allegiance_list=[]
for country in sorted(eu_allegiance_matrix):
for year in sorted(eu_allegiance_matrix[country]):
for allegiance_type1 in sorted(eu_allegiance_matrix[country][year]):
for allegiance_type2 in sorted(eu_allegiance_matrix[country][year][allegiance_type1]):
for name1 in sorted(eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2]):
for name2 in sorted(eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2][name1]):
dummy={'country':country,
'year':year,
'allegiance_type1':allegiance_type1,
'allegiance_type2':allegiance_type2,
'name1':name1,
'name2':name2}
for key in sorted(eu_allegiance_matrix[country][year][allegiance_type1][allegiance_type2][name1][name2]):
dummy[key]=eu_allegiance_matrix[country][year]\
[allegiance_type1][allegiance_type2][name1][name2][key]
if name1!=name2:
eu_allegiance_list.append(dummy)
for country in sorted(eu_allegiance_matrix):
for allegiance_type1 in sorted(eu_allegiance_matrix[country][year]):
for allegiance_type2 in sorted(eu_allegiance_matrix[country][year][allegiance_type1]):
print(country,allegiance_type1,allegiance_type2)
df=pd.DataFrame(eu_allegiance_list).set_index('allegiance_type1').loc[allegiance_type1]\
.set_index('allegiance_type2').loc[allegiance_type2].set_index('country').loc[country]\
.set_index(['name1','name2','year'])[['Same_perc']].unstack()
df=df['Same_perc'].reset_index()
df=get_photos(df,allegiance_type2)
df.to_excel('ep/export/flourish/'+country+'_'+allegiance_type1+'_'+allegiance_type2+'.xlsx')
# Clusterings
from scipy.cluster.hierarchy import dendrogram, linkage, fcluster
def dict_2_matrix(matrix,key,party_labels=False):
labels=sorted(matrix)
slabels=[]
for i in range(len(labels)):
label=labels[i]
if label in names:
if party_labels:
party=party_normalizer(names[label]['Constituencies'][0]['party'])
group=party_normalizer(names[label]['Groups'][0]['groupid'])
slabels.append(str(label)+u' | '+str(party)+' | '+str(group))
else:
slabels.append(label)
else:
slabels.append(label)
#extend to square matrix
inner_keys=matrix[sorted(matrix)[0]]
inner_keys=sorted(inner_keys[sorted(inner_keys)[0]])
for name1 in labels:
for name2 in labels:
if name2 not in matrix[name1]:
matrix[name1][name2]={i:0 for i in inner_keys}
return np.array([[matrix[name1][name2][key] for name2 in sorted(matrix[name1])] for name1 in labels]),slabels
def hier_cluster(matrix,level,th=1,key='Same_perc',party_labels=False,method='single', metric='euclidean',criterion='distance'):
X,labelList=dict_2_matrix(matrix[level][level],key,party_labels)
linked = linkage(X, method=method,metric=metric)
f=fcluster(linked, th, criterion)
labelList=[labelList[i]+' | '+str(f[i]) for i in range(len(labelList))]
return linked,labelList
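# A tiny illustration of the `linkage` + `fcluster` pipeline wrapped by `hier_cluster` above, on four 1-D points forming two obvious pairs (toy data, not the allegiance matrix):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two tight pairs: {0.0, 0.1} and {5.0, 5.1}.
X = np.array([[0.0], [0.1], [5.0], [5.1]])

# Build the hierarchical tree, then cut it into exactly two clusters,
# as hier_cluster does with criterion='maxclust'.
Z = linkage(X, method="complete")
labels = fcluster(Z, t=2, criterion="maxclust")
# labels puts each pair in its own cluster
```

With `criterion='distance'` (the notebook's default for the dendrograms), `t` is a height threshold on the tree instead of a cluster count.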
def dendro(matrix,level,th=1,key='Same_perc',party_labels=False,method='single', metric='euclidean'):
linked,labelList=hier_cluster(matrix,level,th,key,party_labels,method, metric)
plt.figure(figsize=(7, len(labelList)//4+1))
dendrogram(linked,
orientation='right',
labels=labelList,
p=4,
#truncate_mode='lastp',
#show_contracted=True,
color_threshold=th,
distance_sort='descending',
show_leaf_counts=True)
ax=plt.gca()
ax.grid(False)
ax.set_xticks([])
ax.set_xticklabels([])
plt.setp(ax.get_yticklabels(), fontsize=10)
ylbls = ax.get_ymajorticklabels()
num=-1
for lbl in ylbls:
l=lbl.get_text()
l=l[l.find('|')+2:]
l=l[:l.find('|')-1]
num+=1
lbl.set_color(get_link_color(l))
plt.show()
dendro(eu_allegiance_matrix['Hungary']['2018'],'name',2,'Same_perc',True,'complete','seuclidean')
dendro(eu_allegiance_matrix['Romania']['2018'],'name',3,'Same_perc',True,'complete','seuclidean')
dendro(eu_allegiance_matrix['Joint']['2018'],'name',4,'Same_perc',True,'complete','seuclidean')
dendro(eu_allegiance_matrix['Romania']['2018'],'party',2,'Same_perc',True,'complete','seuclidean')
dendro(eu_allegiance_matrix['Hungary']['2018'],'party',2,'Same_perc',True,'complete','seuclidean')
dendro(eu_allegiance_matrix['Joint']['2018'],'party',4,'Same_perc',True,'complete','seuclidean')
# Extract clusters
def get_unique_parent_node(nodes_children,node):
if node in leafs:
return node
elif len(nodes_children[node])>1:
return node
else:
return get_unique_parent_node(nodes_children,nodes_children[node][0])
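# A self-contained sketch of the chain-collapsing idea in `get_unique_parent_node` above: a node with exactly one child is transparent, so we walk down until we hit a leaf or a branching node. Unlike the notebook version, `leafs` is passed explicitly here instead of being read from the enclosing scope.

```python
def collapse(children, leafs, node):
    # Stop at leaves and at nodes that actually branch (or have no children).
    kids = children.get(node, [])
    if node in leafs or len(kids) != 1:
        return node
    return collapse(children, leafs, kids[0])

# "root" has a single child "a", which branches into the leaves "b" and "c",
# so the chain root -> a collapses to "a".
children = {"root": ["a"], "a": ["b", "c"]}
collapse(children, {"b", "c"}, "root")  # -> "a"
```

This is what keeps the exported tree from containing long runs of single-child intermediate cluster nodes.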
def get_unique_parent(node,node_dict,unique_node_set,root):
if node not in node_dict:
return root
elif node_dict[node] in unique_node_set:
return node_dict[node]
else:
return get_unique_parent(node_dict[node],node_dict,unique_node_set,root)
master_tree={}
nc_levels=10
key='Same_perc'
mpruned_nodes=[]
for country in countries:
for year in eu_allegiance_matrix[country]:
for allegiance in eu_allegiance_matrix[country][year]:
uid=country+year+allegiance
cluster_list=[]
clusterdummy={}
for nc in range(2,nc_levels):
hc,hlabels=hier_cluster(eu_allegiance_matrix[country][year],
allegiance,nc,key,True,'complete','seuclidean','maxclust')
for i in hlabels:
hi=i.split('|')
name=hi[0].strip()
cluster_no=hi[-1].strip()
if name not in clusterdummy:
clusterdummy[name]={}
clusterdummy[name]['name']=name
clusterdummy[name]['cluster_level_'+str(nc_levels)]=name
clusterdummy[name]['country']=country
clusterdummy[name]['cluster_level_1']=country
clusterdummy[name]['cluster_level_'+str(nc)]='c'+str(nc)+str(cluster_no)
cluster_list=list(clusterdummy.values())
#construct tree
leafs=sorted(clusterdummy)
nodes=[{'name':country}]
nodes_done=set()
nodes_children={}
for i in cluster_list:
for cluster_level in range(2,nc_levels+1):
node=i['cluster_level_'+str(cluster_level)]
parent=i['cluster_level_'+str(cluster_level-1)]
if node not in nodes_done:
dummy={}
nodes_done.add(node)
dummy['name']=node
dummy['parent']=parent
if parent not in nodes_children:nodes_children[parent]=[]
nodes_children[parent].append(node)
nodes.append(dummy)
#get unique nodes
node_dict={i['name']:i['parent'] for i in nodes[1:]}
unique_nodes={}
for node in nodes_children:
unique_nodes[node]=get_unique_parent_node(nodes_children,node)
unique_node_set=set(unique_nodes.values()).union(set(leafs))
#prune
pruned_nodes=[]
for i in nodes:
dummy=i
name=i['name']
if 'parent' not in i:
pruned_nodes.append(i)
elif i['name'] in unique_node_set:
dummy['parent']=get_unique_parent(name,node_dict,unique_node_set,nodes[0]['name'])
if name in leafs:
if allegiance=='name':
dummy['party']=party_normalizer(names[name]['Constituencies'][0]['party'])
dummy['group']=party_normalizer(names[name]['Groups'][0]['groupid'])
dummy['party2']=party_normalizer2(names[name]['Constituencies'][0]['party'])
dummy['group2']=party_normalizer2(names[name]['Groups'][0]['groupid'])
else:
dummy['party']=''
dummy['group']=''
dummy['party2']=''
dummy['group2']=''
dummy['image']=get_photo(name,allegiance)
pruned_nodes.append(dummy)
for i in pruned_nodes:
dummy=i
if 'party' in dummy:
dummy['partycolor']=get_link_color(dummy['party'])
if 'group' in dummy:
dummy['groupcolor']=get_link_color(dummy['group'])
dummy['country']=country
dummy['year']=year
dummy['allegiance']=allegiance
mpruned_nodes.append(dummy)
with open('ep/export/json/nodes.json','w') as fo:
    fo.write(json.dumps(mpruned_nodes))
# source: ep/ep.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import turtle as t
import numpy as np
import random
# +
screen = t.Screen()
## set up background color
screen.bgcolor('lightgreen')
## set screen title
screen.title("Rick's Program")
screen.tracer(3)
#set up player
player = t.Turtle()
player.color('blue')
player.shape('triangle')
player.penup()
## set up Goal
goal = t.Turtle()
goal.color('red')
goal.shape('circle')
goal.penup()
goal.setpos(-100,100)
speed = 1
## Define Function
def turnleft():
player.lt(30)
def turnright():
player.rt(30)
def speed5():
global speed
speed = 5
def speed1():
global speed
speed = 1
##keyboard Binding
t.listen()
t.onkey(turnleft, 'Left')
t.onkey(turnright, 'Right')
t.onkeypress(speed5, 'Up')
t.onkeyrelease(speed1, 'Up')
while True:
player.forward(speed)
d = np.sqrt((player.xcor()-goal.xcor())**2
+(player.ycor()-goal.ycor())**2)
if d<20:
goal.setpos(random.randint(-300,300)
,random.randint(-300,300))
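# The catch test in the loop above is a plain Euclidean distance; `math.hypot` computes it without the manual square root. A small sketch (the 20-pixel radius matches the `d<20` check above):

```python
import math

def caught(px, py, gx, gy, radius=20):
    # True when the player is within `radius` pixels of the goal.
    return math.hypot(px - gx, py - gy) < radius

caught(0, 0, 10, 10)    # True: distance ~14.1 < 20
caught(0, 0, 300, 0)    # False
```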
# source: Python_Class/.ipynb_checkpoints/Class_8-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="toWe1IoH7X35"
# # Text to Image tool
#
# Part of [Aphantasia](https://github.com/eps696/aphantasia) suite, made by <NAME> [[eps696](https://github.com/eps696)]
# Based on [CLIP](https://github.com/openai/CLIP) + FFT from [Lucent](https://github.com/greentfrapp/lucent).
# Thanks to [<NAME>](https://twitter.com/advadnoun), [<NAME>](https://twitter.com/jonathanfly), [<NAME>](https://twitter.com/htoyryla), [@eduwatch2](https://twitter.com/eduwatch2) for ideas.
#
# ## Features
# * generates massive detailed imagery, a la deepdream
# * high resolution (up to 12K on RTX 3090)
# * directly parameterized with [FFT](https://github.com/greentfrapp/lucent/blob/master/lucent/optvis/param/spatial.py) [Fourier] or DWT [wavelets] (no pretrained GANs)
# * various CLIP models (including multi-language from [SBERT](https://sbert.net))
# * starting/resuming process from saved FFT parameters or from an image
# * complex requests:
# * image and/or text as main prompts
# (composition similarity controlled with [LPIPS](https://github.com/richzhang/PerceptualSimilarity) loss)
# * separate text prompts for image style and to subtract (suppress) topics
# * criteria inversion (show "the opposite")
#
# + [markdown] id="QytcEMSKBtN-"
# **Run the cell below after each session restart**
#
# Mark `resume` and upload a `.pt` file if you're resuming from a saved snapshot. Or you can simply upload any image to start from.
# Resolution settings below will be overwritten in this case.
# + id="etzxXVZ_r-Nf" cellView="form"
#@title General setup
# # !pip install torchtext==0.8.0 torch==1.7.1 pytorch-lightning==1.2.2 torchvision==0.8.2 ftfy==5.8 regex
# !pip install ftfy==5.8 transformers==4.6.0
# !pip install gputil ffpb
import os
import io
import time
from math import exp
import random
import imageio
import numpy as np
import PIL
from base64 import b64encode
# import moviepy, moviepy.editor
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torch.autograd import Variable
from IPython.display import HTML, Image, display, clear_output
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import ipywidgets as ipy
from google.colab import output, files
import warnings
warnings.filterwarnings("ignore")
# !pip install git+https://github.com/openai/CLIP.git --no-deps
import clip
# !pip install sentence_transformers
from sentence_transformers import SentenceTransformer
# !pip install kornia
import kornia
# !pip install lpips
import lpips
# !pip install PyWavelets==1.1.1
# !pip install git+https://github.com/fbcotter/pytorch_wavelets
import pywt
from pytorch_wavelets import DWTForward, DWTInverse
# from pytorch_wavelets import DTCWTForward, DTCWTInverse
# %cd /content
# !pip install git+https://github.com/eps696/aphantasia
from aphantasia.image import to_valid_rgb, fft_image, img2fft, dwt_image, img2dwt, init_dwt, dwt_scale
from aphantasia.utils import slice_imgs, derivat, pad_up_to, basename, img_list, img_read, plot_text, txt_clean, checkout, old_torch
from aphantasia import transforms
from aphantasia.progress_bar import ProgressIPy as ProgressBar
clear_output()
resume = False #@param {type:"boolean"}
if resume:
resumed = files.upload()
resumed_filename = list(resumed)[0]
resumed_bytes = list(resumed.values())[0]
def makevid(seq_dir, size=None):
out_sequence = seq_dir + '/%04d.jpg'
out_video = seq_dir + '.mp4'
print('.. generating video ..')
# !ffmpeg -y -v warning -i $out_sequence -crf 20 $out_video
# moviepy.editor.ImageSequenceClip(img_list(seq_dir), fps=25).write_videofile(out_video, verbose=False)
data_url = "data:video/mp4;base64," + b64encode(open(out_video,'rb').read()).decode()
wh = '' if size is None else 'width=%d height=%d' % (size, size)
return """<video %s controls><source src="%s" type="video/mp4"></video>""" % (wh, data_url)
# Hardware check
# !ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
import GPUtil as GPU
gpu = GPU.getGPUs()[0]
# !nvidia-smi -L
print("GPU RAM {0:.0f}MB | Free {1:.0f}MB".format(gpu.memoryTotal, gpu.memoryFree))
print('\nDone!')
# + [markdown] id="CbJ9K4Cq8MtB"
# Type some `text` and/or upload some image to start.
# Describe `style`, which you'd like to apply to the imagery.
# Put to `subtract` the topics, which you would like to avoid in the result.
# `invert` the whole criteria, if you want to see "the totally opposite".
#
# Options for non-English languages (use only one of them!):
# `multilang` = use multi-language model, trained with ViT
# `translate` = use Google translate (works with any visual model)
# + id="JUvpdy8BWGuM" cellView="form"
#@title Input
text = "" #@param {type:"string"}
style = "" #@param {type:"string"}
subtract = "" #@param {type:"string"}
multilang = False #@param {type:"boolean"}
translate = False #@param {type:"boolean"}
invert = False #@param {type:"boolean"}
upload_image = False #@param {type:"boolean"}
if translate:
# !pip3 install googletrans==3.1.0a0
clear_output()
from googletrans import Translator
translator = Translator()
if upload_image:
uploaded = files.upload()
workdir = '_out'
tempdir = os.path.join(workdir, '%s-%s' % (txt_clean(text)[:50], txt_clean(style)[:50]))
# + [markdown] id="7xEoTPlyk1SD"
# ### Settings
# + [markdown] id="f3Sj0fxmtw6K"
# Select visual `model` (results do vary!). I prefer ViT for consistency (and it's the only native multi-language option).
# `align` option is about composition. `uniform` looks most adequate, `overscan` can make semi-seamless tileable texture.
# `use_wavelets` for DWT encoding instead of FFT. Select `wave` method if needed.
# `aug_transform` applies some augmentations, inhibiting image fragmentation & "graffiti" printing (slower, yet recommended).
# `sync` value adds LPIPS loss between the output and input image (if there's one), allowing to "redraw" it with controlled similarity.
# Decrease `samples` if you face OOM (it's the main RAM eater).
#
# Select optimizer: `_custom` options are more stable but noisy; pure `adam` is softer, but may spill some colored blur on longer runs.
# Setting `steps` much higher (1000-..) will elaborate details and make tones smoother, but may start throwing texts like graffiti.
# Tune `decay` (compositional softness) and `sharpness`, `colors` (saturation) and `contrast` as needed.
#
# Experimental tricks:
# `aug_noise` augmentation, `macro` (from 0 to 1) and `progressive_grow` (read more [here](https://github.com/eps696/aphantasia/issues/2)) may boost bigger forms, making composition less disperse.
# `no_text` tries to remove "graffiti" by subtracting plotted text prompt
# `enhance` boosts training consistency (of simultaneous samples) and steps progress. good start is 0.1~0.2.
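# The prompts above are combined in the training loss as signed cosine similarities (see `train()` further down): the main text adds, `style` adds with weight 0.5, `subtract` subtracts, and `invert` flips the sign. A minimal NumPy sketch with random stand-in vectors — these are hypothetical placeholders, not real CLIP embeddings:

```python
import numpy as np

def cos_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
v_img      = rng.normal(size=512)  # image embedding (stand-in)
v_text     = rng.normal(size=512)  # main text prompt embedding
v_style    = rng.normal(size=512)  # style prompt embedding
v_subtract = rng.normal(size=512)  # subtract prompt embedding

sign = -1.0  # minimizing the loss maximizes similarity; invert=True flips this
toy_loss = (sign * cos_sim(v_text, v_img)
            + sign * 0.5 * cos_sim(v_style, v_img)
            - sign * cos_sim(v_subtract, v_img))
print(toy_loss)
```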
# + id="Nq0wA-wc-P-s" cellView="form"
sideX = 1280 #@param {type:"integer"}
sideY = 720 #@param {type:"integer"}
#@markdown > Config
model = 'ViT-B/32' #@param ['ViT-B/16', 'ViT-B/32', 'RN101', 'RN50x16', 'RN50x4', 'RN50']
align = 'uniform' #@param ['central', 'uniform', 'overscan']
use_wavelets = False #@param {type:"boolean"}
wave = 'coif1' #@param ['db2', 'db3', 'db4', 'coif1', 'coif2', 'coif3', 'coif4']
aug_transform = True #@param {type:"boolean"}
sync = 0.4 #@param {type:"number"}
#@markdown > Look
decay = 1.5 #@param {type:"number"}
colors = 1.8 #@param {type:"number"}
contrast = 1.1 #@param {type:"number"}
sharpness = 0 #@param {type:"number"}
#@markdown > Training
steps = 150 #@param {type:"integer"}
samples = 200 #@param {type:"integer"}
learning_rate = .05 #@param {type:"number"}
optimizer = 'adam_custom' #@param ['adam', 'adam_custom', 'adamw', 'adamw_custom']
save_freq = 1 #@param {type:"integer"}
#@markdown > Tricks
aug_noise = 0. #@param {type:"number"}
no_text = 0.07 #@param {type:"number"}
enhance = 0. #@param {type:"number"}
macro = 0.4 #@param {type:"number"}
progressive_grow = False #@param {type:"boolean"}
if multilang: model = 'ViT-B/32' # sbert model is trained with ViT
diverse = -enhance
expand = abs(enhance)
# + [markdown] id="GP8RWuMFj8LO"
# ### Generate
# + cellView="form" id="81P3Vl6ijnQJ"
#@title run
# !rm -rf $tempdir
os.makedirs(tempdir, exist_ok=True)
shape = [1, 3, sideY, sideX]
if use_wavelets:
if resume is True:
if os.path.splitext(resumed_filename)[1].lower()[1:] in ['jpg','png','tif','bmp']:
img_in = imageio.imread(resumed_bytes)
init_pt = img2dwt(img_in, wave, 1.5)
scale = dwt_scale(init_pt, sharpness)
for i in range(len(init_pt)-1):
init_pt[i+1] /= (scale[i] * 10)
sideY, sideX = img_in.shape[0], img_in.shape[1]
else:
init_pt = torch.load(io.BytesIO(resumed_bytes))
# init_pt = [y.detach().cuda() for y in init_pt]
else:
init_pt, _, _, _ = init_dwt(None, shape, wave, colors)
shape = [1, 3, sideY, sideX]
params, image_f, _ = dwt_image(shape, wave, sharpness, colors, init_pt)
else:
if resume is True:
if os.path.splitext(resumed_filename)[1].lower()[1:] in ['jpg','png','tif','bmp']:
img_in = imageio.imread(resumed_bytes)
init_pt = img2fft(img_in, 1.5, 1.5) * 0.1
else:
init_pt = torch.load(io.BytesIO(resumed_bytes))
if isinstance(init_pt, list): init_pt = init_pt[0]
# init_pt = init_pt.cuda()
sideY, sideX = init_pt.shape[2], (init_pt.shape[3]-1)*2
else:
params_shape = [1, 3, sideY, sideX//2+1, 2]
init_pt = torch.randn(*params_shape) * 0.01
shape = [1, 3, sideY, sideX]
params, image_f, _ = fft_image(shape, 1, decay, init_pt)
image_f = to_valid_rgb(image_f, colors = colors)
if progressive_grow is True:
lr1 = learning_rate * 2
lr0 = lr1 * 0.01
else:
lr0 = learning_rate
if optimizer.lower() == 'adamw':
optimr = torch.optim.AdamW(params, lr0, weight_decay=0.01, amsgrad=True)
elif optimizer.lower() == 'adamw_custom':
optimr = torch.optim.AdamW(params, lr0, weight_decay=0.01, amsgrad=True, betas=(.0,.999))
elif optimizer.lower() == 'adam':
optimr = torch.optim.Adam(params, lr0)
else: # adam_custom
optimr = torch.optim.Adam(params, lr0, betas=(.0,.999))
if len(subtract) > 0:
samples = int(samples * 0.75)
print(' using %d samples,' % samples, optimizer, 'optimizer')
model_clip, _ = clip.load(model, jit=old_torch())
modsize = model_clip.visual.input_resolution
xmem = {'ViT-B/16':0.25, 'RN50':0.5, 'RN50x4':0.16, 'RN50x16':0.06, 'RN101':0.33}
if model in xmem.keys():
samples = int(samples * xmem[model])
if multilang is True:
model_lang = SentenceTransformer('clip-ViT-B-32-multilingual-v1').cuda()
def enc_text(txt):
if multilang is True:
emb = model_lang.encode([txt], convert_to_tensor=True, show_progress_bar=False)
else:
emb = model_clip.encode_text(clip.tokenize(txt).cuda())
return emb.detach().clone()
if diverse != 0:
samples = int(samples * 0.5)
if sync > 0 and upload_image:
samples = int(samples * 0.6)
sign = 1. if invert is True else -1.
if aug_transform is True:
trform_f = transforms.transforms_fast
samples = int(samples * 0.95)
else:
trform_f = transforms.normalize()
if upload_image:
in_img = list(uploaded.values())[0]
print(' image:', list(uploaded)[0])
img_in = torch.from_numpy(imageio.imread(in_img).astype(np.float32)/255.).unsqueeze(0).permute(0,3,1,2).cuda()[:,:3,:,:]
in_sliced = slice_imgs([img_in], samples, modsize, transforms.normalize(), align)[0]
img_enc = model_clip.encode_image(in_sliced).detach().clone()
if sync > 0:
align = 'overscan'
sim_loss = lpips.LPIPS(net='vgg', verbose=False).cuda()
sim_size = [sideY//4, sideX//4]
img_in = F.interpolate(img_in, sim_size).float()
# img_in = F.interpolate(img_in, (sideY, sideX)).float()
else:
del img_in
del in_sliced; torch.cuda.empty_cache()
if len(text) > 0:
print(' main topic:', text)
if translate:
text = translator.translate(text, dest='en').text
print(' translated to:', text)
txt_enc = enc_text(text)
if no_text > 0:
txt_plot = torch.from_numpy(plot_text(text, modsize)/255.).unsqueeze(0).permute(0,3,1,2).cuda()
txt_plot_enc = model_clip.encode_image(txt_plot).detach().clone()
if len(style) > 0:
print(' style:', style)
if translate:
style = translator.translate(style, dest='en').text
print(' translated to:', style)
txt_enc2 = enc_text(style)
if len(subtract) > 0:
print(' without:', subtract)
if translate:
subtract = translator.translate(subtract, dest='en').text
print(' translated to:', subtract)
txt_enc0 = enc_text(subtract)
if multilang is True: del model_lang
def save_img(img, fname=None):
img = np.array(img)[:,:,:]
img = np.transpose(img, (1,2,0))
img = np.clip(img*255, 0, 255).astype(np.uint8)
if fname is not None:
imageio.imsave(fname, np.array(img))
imageio.imsave('result.jpg', np.array(img))
def checkout(num):
with torch.no_grad():
img = image_f(contrast=contrast).cpu().numpy()[0]
# empirical tone mapping
if sync > 0 and upload_image:
img = img **1.3
# if sharpness != 0:
# img = img ** (1 + sharpness/2.)
save_img(img, os.path.join(tempdir, '%04d.jpg' % num))
outpic.clear_output()
with outpic:
display(Image('result.jpg'))
prev_enc = 0
def train(i):
loss = 0
noise = aug_noise * torch.randn(1, 1, *params[0].shape[2:4], 1).cuda() if aug_noise > 0 else None
img_out = image_f(noise)
img_sliced = slice_imgs([img_out], samples, modsize, trform_f, align, macro=macro)[0]
out_enc = model_clip.encode_image(img_sliced)
if len(text) > 0: # input text
loss += sign * torch.cosine_similarity(txt_enc, out_enc, dim=-1).mean()
if no_text > 0:
loss -= sign * no_text * torch.cosine_similarity(txt_plot_enc, out_enc, dim=-1).mean()
if len(style) > 0: # input text - style
loss += sign * 0.5 * torch.cosine_similarity(txt_enc2, out_enc, dim=-1).mean()
if len(subtract) > 0: # subtract text
loss -= sign * torch.cosine_similarity(txt_enc0, out_enc, dim=-1).mean()
if upload_image:
loss += sign * 0.5 * torch.cosine_similarity(img_enc, out_enc, dim=-1).mean()
if sync > 0 and upload_image: # image composition sync
prog_sync = (steps - i) / steps
loss += prog_sync * sync * sim_loss(F.interpolate(img_out, sim_size).float(), img_in, normalize=True).squeeze()
if sharpness != 0 and not use_wavelets: # mode = scharr|sobel|default
loss -= sharpness * derivat(img_out, mode='sobel')
# loss -= sharpness * derivat(img_sliced, mode='scharr')
if diverse != 0:
img_sliced = slice_imgs([image_f(noise)], samples, modsize, trform_f, align, macro=macro)[0]
out_enc2 = model_clip.encode_image(img_sliced)
loss += diverse * torch.cosine_similarity(out_enc, out_enc2, dim=-1).mean()
del out_enc2; torch.cuda.empty_cache()
if expand > 0:
global prev_enc
if i > 0:
loss += expand * torch.cosine_similarity(out_enc, prev_enc, dim=-1).mean()
prev_enc = out_enc.detach()
del img_out, img_sliced, out_enc; torch.cuda.empty_cache()
if progressive_grow is True:
lr_cur = lr0 + (i / steps) * (lr1 - lr0)
for g in optimr.param_groups:
g['lr'] = lr_cur
optimr.zero_grad()
loss.backward()
optimr.step()
if i % save_freq == 0:
checkout(i // save_freq)
outpic = ipy.Output()
outpic
pbar = ProgressBar(steps)
for i in range(steps):
train(i)
_ = pbar.upd()
HTML(makevid(tempdir))
torch.save(params, tempdir + '.pt')
files.download(tempdir + '.pt')
files.download(tempdir + '.mp4')
| Aphantasia.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="http://www.exalumnos.usm.cl/wp-content/uploads/2015/06/ISOTIPO-Color.jpg" title="Title text" width="20%" />
#
# <hr style="height:2px;border:none"/>
# <H1 align='center'> Image Interpolation </H1>
#
# <H3> INF-285 Computación Científica </H3>
# <H3> Author: <NAME></H3>
#
# Language: Python
#
# Topics:
#
# - Image Interpolation
# - Bicubic Interpolation
# - Lagrange, Newton, Spline
# <hr style="height:2px;border:none"/>
import numpy as np
import sympy as sp
from PIL import Image
from scipy import interpolate
import matplotlib.pyplot as plt
# ## Introduction
# In this assignment we will study an interpolation method called **Bicubic Interpolation**, frequently used on images. We will apply the method to increase the resolution of an image while trying to preserve the properties of the original version.
# ## Context
# Suppose you know $f$ and its derivatives $f_x$, $f_y$ and $f_{xy}$ at the coordinates $(0,0),(0,1),(1,0)$ and $(1,1)$ of a unit square. The surface that interpolates these 4 points is:
#
# $$
# p(x,y) = \sum\limits_{i=0}^3 \sum_{j=0}^3 a_{ij} x^i y^j.
# $$
#
# As can be seen, the interpolation problem reduces to determining the 16 coefficients $a_{ij}$, for which a total of $16$ equations are generated using the known values of $f$, $f_x$, $f_y$ and $f_{xy}$. For example, the first $4$ equations are:
#
# $$
# \begin{aligned}
# f(0,0)&=p(0,0)=a_{00},\\
# f(1,0)&=p(1,0)=a_{00}+a_{10}+a_{20}+a_{30},\\
# f(0,1)&=p(0,1)=a_{00}+a_{01}+a_{02}+a_{03},\\
# f(1,1)&=p(1,1)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=0}^{3}a_{ij}.
# \end{aligned}
# $$
#
# For the remaining $12$ equations we use:
#
# $$
# \begin{aligned}
# f_{x}(x,y)&=p_{x}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=0}^{3}a_{ij}ix^{i-1}y^{j},\\
# f_{y}(x,y)&=p_{y}(x,y)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=1}^{3}a_{ij}x^{i}jy^{j-1},\\
# f_{xy}(x,y)&=p_{xy}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=1}^{3}a_{ij}ix^{i-1}jy^{j-1}.
# \end{aligned}
# $$
#
#
# Once the equations are set up, the coefficients can be obtained by solving the problem $A\alpha=x$, where $\alpha=\left[\begin{smallmatrix}a_{00}&a_{10}&a_{20}&a_{30}&a_{01}&a_{11}&a_{21}&a_{31}&a_{02}&a_{12}&a_{22}&a_{32}&a_{03}&a_{13}&a_{23}&a_{33}\end{smallmatrix}\right]^T$ and ${\displaystyle x=\left[{\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&f_{x}(0,0)&f_{x}(1,0)&f_{x}(0,1)&f_{x}(1,1)&f_{y}(0,0)&f_{y}(1,0)&f_{y}(0,1)&f_{y}(1,1)&f_{xy}(0,0)&f_{xy}(1,0)&f_{xy}(0,1)&f_{xy}(1,1)\end{smallmatrix}}\right]^{T}}$.
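# To make the system $A\alpha=x$ concrete, it can be assembled and solved numerically. The sketch below builds the 16 equations from the monomial basis $x^i y^j$ and solves for $\alpha$; the corner data in `b` are made-up illustrative values, not from any real image.

```python
import numpy as np

# One row of A per equation: the monomials (and their derivatives) at (x, y),
# ordered like alpha: a00, a10, a20, a30, a01, a11, ... (i varies fastest)
def row_p(x, y):
    return [x**i * y**j for j in range(4) for i in range(4)]

def row_px(x, y):
    return [i * x**(i-1) * y**j if i > 0 else 0.0 for j in range(4) for i in range(4)]

def row_py(x, y):
    return [j * x**i * y**(j-1) if j > 0 else 0.0 for j in range(4) for i in range(4)]

def row_pxy(x, y):
    return [i * j * x**(i-1) * y**(j-1) if i > 0 and j > 0 else 0.0
            for j in range(4) for i in range(4)]

corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
A = np.array([row(x, y) for row in (row_p, row_px, row_py, row_pxy)
              for (x, y) in corners], dtype=float)

# Right-hand side: f, f_x, f_y, f_xy at the 4 corners (illustrative numbers)
b = np.array([1., 2., 3., 4.,   # f
              0., 0., 0., 0.,   # f_x
              0., 0., 0., 0.,   # f_y
              0., 0., 0., 0.])  # f_xy
alpha = np.linalg.solve(A, b)

def p(x, y, a):
    # evaluate the bicubic surface p(x, y) = sum a_ij x^i y^j
    return sum(a[j*4 + i] * x**i * y**j for j in range(4) for i in range(4))

print(p(0, 0, alpha), p(1, 1, alpha))  # reproduces f(0,0)=1 and f(1,1)=4
```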
#
# In a more applied context, we can use bicubic interpolation to increase the resolution of an image. Suppose we have the following image of size $5 \times 5$:
# <img src="assets/img1.png" width="20%"/>
# We can take $2 \times 2$ segments of the image as follows:
# <img src="assets/img2.png" width="50%"/>
# For each segment we can generate an interpolating surface via the bicubic interpolation algorithm. For the previous example we would be generating $16$ different interpolating surfaces. The idea is to use these surfaces to estimate the pixel values of a larger image. For example, we can convert the $5 \times 5$ image into a $9 \times 9$ image by adding a pixel between every pair of original pixels, plus one in the center so that no gap is left.
# <img src="assets/img3.png" width="50%"/>
# Here the green pixels are the same as in the original image, while the blue ones are obtained by evaluating each interpolating surface. Note that some blue pixels can be obtained from two different interpolating surfaces; in those cases the pixel values can be averaged, or one of the two simply kept.
#
# To work with bicubic interpolation we need to know the values of $f_x$, $f_y$ and $f_{xy}$. For images we only have access to the value of each pixel, so we must estimate these derivatives. To estimate $f_x$ we do the following:
# To estimate the value of $f_x$ at each pixel we interpolate with the known algorithms, using three pixels along the row direction; we then differentiate the resulting polynomial and finally evaluate it at the position of interest. The same idea applies to $f_y$, except that we now interpolate along the column direction.
# <img src="assets/img5.png" width="60%"/>
# For example, if we want the value of $f_x$ at position $(0,0)$ (left image), we perform a Lagrange interpolation using pixels $(0,0),(0,1)$ and $(0,2)$, differentiate the interpolating polynomial and evaluate it at $(0,0)$. On the other hand, if we want the value of $f_y$ at position $(0,0)$ (right image), we interpolate pixels $(0,0),(1,0)$ and $(2,0)$, then differentiate the interpolating polynomial and evaluate it at $(0,0)$.
# To obtain $f_{xy}$ we follow the same idea, except that this time the values of $f_y$ are used and interpolated along the row direction.
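# A small sketch of this derivative estimation, assuming three equally spaced pixels at $x=0,1,2$ (the intensity values below are made up):

```python
import numpy as np
from scipy import interpolate

x = np.array([0.0, 1.0, 2.0])     # three neighboring pixel positions along a row
y = np.array([10.0, 14.0, 22.0])  # made-up pixel intensities

L = interpolate.lagrange(x, y)    # degree-2 interpolating polynomial
dL = np.polyder(L)                # its derivative
fx_left = float(dL(0.0))          # estimate of f_x at the leftmost pixel
print(fx_left)                    # 2.0 for this quadratic data
```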
# # Questions
# ## 1. Bicubic interpolation
# ### 1.1 Obtaining derivatives (30 points)
#
# Implement the function `derivativeValues`, which receives as input an array of values, the interpolation method, and whether Chebyshev points are used. The function must return an array of the same dimension with the derivative values at those points.
#
# The interpolation methods are represented by the following values:
#
# * Lagrange interpolation: `'lagrange'`
# * Newton's divided differences: `'newton'`
# * Cubic spline: `'spline3'`
#
# +
def chebyshevNodes(n):
i = np.arange(1, n+1)
t = (2*i - 1) * np.pi / (2 * n)
return np.cos(t)
def newtonDD(x_i, y_i):
n = x_i.shape[-1]
pyramid = np.zeros((n, n)) # Create a square matrix to hold pyramid
pyramid[:,0] = y_i # first column is y
for j in range(1,n):
for i in range(n-j):
# create pyramid by updating other columns
pyramid[i][j] = (pyramid[i+1][j-1] - pyramid[i][j-1]) / (x_i[i+j] - x_i[i])
a = pyramid[0] # f[ ... ] coefficients
N = lambda x: a[0] + np.dot(a[1:], np.array([np.prod(x - x_i[:i]) for i in range(1, n)]))
return N
def calcular(values1,values2,values3, method, cheb,number):
y = np.array((values1,values2,values3))
x = np.array((0,1,2))
if cheb:
x = chebyshevNodes(3)
x.sort()
    xS = sp.symbols('x', real=True)
if(method == 'lagrange'):
L = interpolate.lagrange(x,y)
deriv = np.polyder(L)
return deriv(x[number])
if(method == 'newton'):
Pn = newtonDD(x, y)
L = Pn(xS)
deriv = sp.diff(L,xS)
if(method=='spline3'):
deriv = interpolate.CubicSpline(x, y)
deriv = deriv.derivative()
return deriv(x[number])
return deriv.evalf(subs = {xS : x[number]})
calcular_v = np.vectorize(calcular)
# receives a 1-dimensional row
def derivativeValues(fila,method,cheb):
"""
Parameters
----------
    fila: (int array) row of point values
method: (string) interpolation method
cheb: (boolean) if chebyshev points are used
Returns
-------
d: (float array) derivative value of interpolated points
"""
shape = fila.shape
nuevo = np.zeros(shape)
nuevo[1:shape[0]-1] = calcular_v(fila[0:shape[0]-2],fila[1:shape[0]-1],fila[2:shape[0]],method,cheb,1)
nuevo[0] = calcular_v(fila[0],fila[1],fila[2], method, cheb,0)
nuevo[shape[0]-1] = calcular_v(fila[shape[0]-3],fila[shape[0]-2],fila[shape[0]-1], method, cheb,2)
return nuevo
# -
#
# ### 1.2 Image interpolation (50 points)
# Implement the function `bicubicInterpolation`, which receives as input the image matrix, how many extra pixels to add between the original pixels, and the interpolation algorithm to use. The function must return the matrix of the image with the new dimensions. Note that the interpolation method must be applied to each RGB channel separately.
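# The output dimensions follow directly from the construction described above: inserting $k$ interior pixels between every adjacent pair turns $n$ pixels per side into $n(k+1)-k$. A quick check of this size formula:

```python
def new_size(n, interior_pixels):
    # n original pixels, with `interior_pixels` inserted between each adjacent pair
    return n * (interior_pixels + 1) - interior_pixels

print(new_size(5, 1))  # 9, matching the 5x5 -> 9x9 example above
```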
# +
def obtain_all_derivatives(image,method,cheb):
shape = image.shape
nuevo_x = np.zeros(shape)
nuevo_y = np.zeros(shape)
nuevo_xy = np.zeros(shape)
for i in range(shape[2]):
nuevo_y[:,:,i] = np.array([derivativeValues(n, method, cheb) for n in image[:,:,i].T]).T
nuevo_x[:,:,i] = np.array([derivativeValues(n, method, cheb) for n in image[:,:,i]])
nuevo_xy[:,:,i] = np.array([derivativeValues(n, method, cheb) for n in nuevo_y[:,:,i]])
return nuevo_x,nuevo_y,nuevo_xy
def bicubicInterpolation(image, interiorPixels, method,cheb):
"""
Parameters
----------
image: (nxnx3 array) image array in RGB format
    interiorPixels: (int) number of pixels to insert between each pair of original pixels
method: (string) interpolation method
cheb: (boolean) if chebyshev points are used
Returns
-------
newImage: (nxnx3 array) image array in RGB format
"""
matriz = np.array(((1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),(0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0),(-3,3,0,0,-2,-1,0,0,0,0,0,0,0,0,0,0),(2,-2,0,0,1,1,0,0,0,0,0,0,0,0,0,0),
(0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0),(0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0),(0,0,0,0,0,0,0,0,-3,3,0,0,-2,-1,0,0),
(0,0,0,0,0,0,0,0,2,-2,0,0,1,1,0,0),(-3,0,3,0,0,0,0,0,-2,0,-1,0,0,0,0,0),(0,0,0,0,-3,0,3,0,0,0,0,0,-2,0,-1,0),
(9,-9,-9,9,6,3,-6,-3,6,-6,3,-3,4,2,2,1),(-6,6,6,-6,-3,-3,3,3,-4,4,-2,2,-2,-2,-1,-1),(2,0,-2,0,0,0,0,0,1,0,1,0,0,0,0,0),
(0,0,0,0,2,0,-2,0,0,0,0,0,1,0,1,0),(-6,6,6,-6,-4,-2,4,2,-3,3,-3,3,-2,-1,-2,-1),(4,-4,-4,4,2,2,-2,-2,2,-2,2,-2,1,1,1,1)))
shape = image.shape
nueva_imagen = np.zeros((shape[0]*(interiorPixels+1)-interiorPixels,shape[1]*(interiorPixels+1)-interiorPixels,shape[2]),dtype=image.dtype)
nuevo_x, nuevo_y, nuevo_xy = obtain_all_derivatives(image,method,cheb)
for j in range(shape[0]-1):
for i in range(shape[0]-1):
for rgb in range(shape[2]):
array = np.array((image[i,j,rgb],image[i+1,j,rgb],image[i,j+1,rgb]
,image[i+1,j+1,rgb],nuevo_x[i,j,rgb],nuevo_x[i+1,j,rgb]
,nuevo_x[i,j+1,rgb],nuevo_x[i+1,j+1,rgb],nuevo_y[i,j,rgb]
,nuevo_y[i+1,j,rgb],nuevo_y[i,j+1,rgb],nuevo_y[i+1,j+1,rgb]
,nuevo_xy[i,j,rgb],nuevo_xy[i+1,j,rgb],nuevo_xy[i,j+1,rgb],nuevo_xy[i+1,j+1,rgb]))
a = matriz.dot(array.T)
P = lambda x,y: np.sum([a[i]*(x**(i%4))*y**(int(i/4)) for i in range(16)])
numero_fila = (interiorPixels + 1)*i
numero_columna = (interiorPixels+1)*j
                # fill in the new pixels
for cont in range(interiorPixels+2):
for cont1 in range(interiorPixels+2):
value = P(cont1/(interiorPixels+1),cont/(interiorPixels+1))
if(value > 255):
value = 255
if(value < 0):
value = 0
if(nueva_imagen[numero_fila+cont1,numero_columna+cont,rgb] != 0):
value = (nueva_imagen[numero_fila+cont1,numero_columna+cont,rgb]+value)/2
nueva_imagen[numero_fila+cont1,numero_columna+cont,rgb] = value
return nueva_imagen
img = Image.open('sunset.png')
img = img.convert('RGB')
array=np.array(img)
array_nuevo = bicubicInterpolation(array, 4, 'spline3',False)
# original
plt.imshow(img)
plt.show()
# interpolated
plt.imshow(array_nuevo)
plt.show()
# -
print("Original size: ", array.shape)
print("Interpolated: ", array_nuevo.shape)
# ## 2. Algorithm evaluation
#
#
# ### 2.1 Execution time
# Implement the function `timeInterpolation`, which measures the interpolation time of an image given the interpolation algorithm, in seconds. (5 points)
import time
def timeInterpolation(image, interiorPixels, method,cheb):
"""
Parameters
----------
image: (nxnx3 array) image array in RGB format
    interiorPixels: (int) number of pixels to insert between each pair of original pixels
method: (string) interpolation method
cheb: (boolean) if chebyshev points are used
Returns
-------
time: (float) time in seconds
"""
time1 = time.time()
bicubicInterpolation(image, interiorPixels, method,cheb)
time2 = time.time()
return time2-time1
# ***Question: Which method is generally the fastest? (5 points)***
# `'spline3'` is the fastest method
# ### 2.2 Error computation
# Implement the function `errorInterpolation`, which must compute the error of the obtained image by comparing it against a reference image. The error must be calculated using the SSIM (Structural Similarity) index. (5 points)
from skimage import metrics
def errorInterpolation(original,new):
"""
Parameters
----------
    original: (nxnx3 array) original image array in RGB format
    new: (nxnx3 array) new image array in RGB format obtained from interpolation
Returns
-------
error: (float) difference between images
"""
s = metrics.structural_similarity(original, new, multichannel = True)
return 1-s
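# A quick sanity check of the SSIM-based error, sketched with a small synthetic grayscale image (to avoid version-dependent multichannel arguments): identical images have SSIM $=1$, hence error $0$.

```python
import numpy as np
from skimage import metrics

toy_img = np.random.default_rng(0).integers(0, 256, size=(16, 16)).astype(np.uint8)
s = metrics.structural_similarity(toy_img, toy_img)
print(1 - s)  # 0.0: identical images are maximally similar
```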
# ***Question: Which method shows the smallest error? (5 points)***
# It depends.
# For gradient with 1 pixel, 'lagrange'.
# For gradient with 4 pixels, 'spline3'.
# For sunset with 1 pixel, 'lagrange'.
# For sunset with 2 pixels, 'lagrange'.
#
# Note that the errors are very similar, with differences between 10^-5 and 10^-6.
# References:
# chebyshevNodes(), newtonDD() taken from the course Jupyter notebook.
| Otros/BicubicInterpolation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Projet Corona (Python)
# language: python
# name: corona
# ---
# # Database exploration
# +
# #!conda install pandas  # in the console
from datetime import date, timedelta
import os
import pandas as pd
# +
# Root URL of the daily report files
BASE_URL = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_daily_reports/{}.csv'
# Dates for which the files are available
START_DATE = date(2020, 1, 22)
END_DATE = date(2020, 3, 20)
# Directory where the raw files are saved
RAWFILES_DIR = '../data/raw/'
PROCESSED_DIR = '../data/processed/'
# -
# ## File retrieval loop
# +
delta = END_DATE - START_DATE # as timedelta
for i in range(delta.days + 1):
day = START_DATE + timedelta(days=i)
day_label = day.strftime("%m-%d-%Y")
virus_df = pd.read_csv(BASE_URL.format(day_label), sep=",", parse_dates=["Last Update"])
virus_df.to_csv(os.path.join(RAWFILES_DIR, day_label + '.csv'), index=False)
#print(day_label)
# -
virus_df.dtypes
# ## Building the lat/long reference table
# +
import glob
df_list = []
# Read the retrieved files and keep those that have a lat/long
for file in glob.glob(os.path.join(RAWFILES_DIR, '*.csv')):
virus_df = pd.read_csv(file, sep=',')
if 'Latitude' in virus_df.columns and 'Longitude' in virus_df.columns:
df_list.append(virus_df)
all_df = pd.concat(df_list)
# Reference table for lat/long
(all_df[["Province/State", "Country/Region", "Latitude", "Longitude"]]
.drop_duplicates(subset=["Province/State", "Country/Region"])
.sort_values(by=["Country/Region", "Province/State"])
.to_csv(os.path.join(PROCESSED_DIR, "lat_long_table.csv"), index=False)
)
# -
(all_df[["Province/State", "Country/Region", "Latitude", "Longitude"]]
.drop_duplicates()
.shape
)
(all_df[["Province/State", "Country/Region", "Latitude", "Longitude"]]
.drop_duplicates()
.drop_duplicates(subset=["Province/State", "Country/Region"])
.shape
)
# 3 cities that have different latitude/longitude values
dedup_df = all_df[["Province/State", "Country/Region", "Latitude", "Longitude"]].drop_duplicates()
dedup_df[dedup_df.duplicated(subset=["Province/State", "Country/Region"], keep=False)]
# The 3 countries that "moved".
# ## Building a single table
data_catalog = {
'Last Update':["<M8[ns]"],
"Confirmed":["float64", "int64"],
"Deaths":["float64", "int64"],
"Recovered":["float64", "int64"],
"Latitude":["float64"],
"Longitude":["float64"]
}
# +
df_list = []
latlong_df = pd.read_csv(os.path.join(PROCESSED_DIR, "lat_long_table.csv"))
# Read the retrieved files and add lat/long to those missing it
for file in glob.glob(os.path.join(RAWFILES_DIR, '*.csv')):
virus_df = pd.read_csv(file, sep=',', parse_dates=["Last Update"])
if not('Latitude' in virus_df.columns and 'Longitude' in virus_df.columns):
virus_df = virus_df.merge(latlong_df, on=["Province/State", "Country/Region"], how='left')
    # Check the variable types when importing each file.
for field, types in data_catalog.items():
assert virus_df[field].dtypes in types, f"bad type for {field} in {file}"
df_list.append(virus_df.assign(source=os.path.basename(file)))
all_df = pd.concat(df_list)
# Save the full table
all_df.to_csv(os.path.join(PROCESSED_DIR, 'all_data.csv'), index=False)
| notebooks/exploration_source.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Checking your setup
# Run through this notebook to make sure your environment is properly setup. Be sure to launch Jupyter from inside the virtual environment.
from check_environment import run_checks
run_checks()
# *Note: Adapted from <NAME>'s [`check_env.ipynb` notebook](https://github.com/amueller/ml-workshop-1-of-4/blob/master/check_env.ipynb).*
| ch_01/checking_your_setup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Common setup
import zarr
from pyprojroot import here
import pandas as pd
import numpy as np
import allel
import yaml
import matplotlib.pyplot as plt
import functools
import seaborn as sns
import dask.array as da
import scipy.interpolate
import scipy.stats
import petl as etl
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
callset_haps_path = here() / 'data/external/ag1000g/phase2/AR1/haplotypes/main/zarr/ag1000g.phase2.ar1.haplotypes'
callset_haps = zarr.open_consolidated(str(callset_haps_path))
df_haps_a = pd.read_csv(here() / 'data/external/ag1000g/phase2/AR1/haplotypes/main/haplotypes.autosomes.meta.txt',
sep='\t', index_col=0)
df_haps_a.head()
df_haps_x = pd.read_csv(here() / 'data/external/ag1000g/phase2/AR1/haplotypes/main/haplotypes.X.meta.txt',
sep='\t', index_col=0)
df_haps_x.head()
callset_pass_path = here() / 'data/external/ag1000g/phase2/AR1/variation/main/zarr/pass/ag1000g.phase2.ar1.pass'
callset_pass = zarr.open_consolidated(str(callset_pass_path))
df_samples = pd.read_csv(here() / 'data/external/ag1000g/phase2/AR1/samples/samples.meta.txt',
sep='\t')
with open('pop_defs.yml', mode='r') as f:
pop_defs = yaml.safe_load(f)
import pyfasta
genome_path = here() / 'data/external/vectorbase/Anopheles-gambiae-PEST_CHROMOSOMES_AgamP4.fa'
genome = pyfasta.Fasta(str(genome_path), key_fn=lambda x: x.split()[0])
tbl_chromatin = [
('name', 'chrom', 'start', 'end'),
('CHX', 'X', 20009764, 24393108),
('CH2R', '2R', 58984778, 61545105),
('CH2L', '2L', 1, 2431617),
('PEU2L', '2L', 2487770, 5042389),
('IH2L', '2L', 5078962, 5788875),
('IH3R', '3R', 38988757, 41860198),
('CH3R', '3R', 52161877, 53200684),
('CH3L', '3L', 1, 1815119),
('PEU3L', '3L', 1896830, 4235209),
('IH3L', '3L', 4264713, 5031692)
]
seq_ids = '2R', '2L', '3R', '3L', 'X'
# +
def build_gmap():
# crude recombination rate lookup, keyed off chromatin state
# use units of cM / bp, assume 2 cM / Mbp == 2x10^-6 cM / bp
tbl_rr = (
etl.wrap(tbl_chromatin)
# extend heterochromatin on 2L - this is empirical, based on making vgsc peaks symmetrical
.update('end', 2840000, where=lambda r: r.name == 'CH2L')
.update('start', 2840001, where=lambda r: r.name == 'PEU2L')
.addfield('rr', lambda r: .5e-6 if 'H' in r.name else 2e-6)
)
# per-base map of recombination rates
rr_map = {seq_id: np.full(len(genome[seq_id]), fill_value=2e-6, dtype='f8')
for seq_id in seq_ids}
for row in tbl_rr.records():
rr_map[row.chrom][row.start - 1:row.end] = row.rr
# genetic map
gmap = {seq_id: np.cumsum(rr_map[seq_id]) for seq_id in seq_ids}
gmap['2'] = np.concatenate([gmap['2R'], gmap['2L'] + gmap['2R'][-1]])
gmap['3'] = np.concatenate([gmap['3R'], gmap['3L'] + gmap['3R'][-1]])
return gmap
gmap = build_gmap()
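# The cumulative-sum trick inside `build_gmap` can be checked on a toy contig: per-base recombination rates (in cM/bp, with the heterochromatic rate from the table above) accumulate into a genetic map position for every base. The 10-base contig here is purely illustrative.

```python
import numpy as np

rr = np.full(10, 2e-6)   # uniform 2 cM/Mbp background rate
rr[:5] = 0.5e-6          # first 5 bases at the 'heterochromatic' rate
gpos = np.cumsum(rr)     # genetic map position (cM) at each base

print(gpos[-1])          # total genetic length: 5*0.5e-6 + 5*2e-6 = 1.25e-05 cM
```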
| gwss/setup.ipynb |
/ ---
/ jupyter:
/ jupytext:
/ text_representation:
/ extension: .q
/ format_name: light
/ format_version: '1.5'
/ jupytext_version: 1.14.4
/ ---
/ + [markdown] cell_id="00001-1da2ae5a-c906-423b-a069-499ab4ba6fc6" deepnote_cell_type="text-cell-h1" tags=[]
/ # Imports
/ + cell_id="00000-f0416702-bac4-4b9b-9ef7-fcd8b89ba78b" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=7563 execution_start=1622218489757 source_hash="33f4f6e" tags=[]
from typing import Tuple, List
# import libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
# import machine learning packages
import torchvision
import torchvision.transforms as transforms
from torchvision.datasets import MNIST, FashionMNIST
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
/ + [markdown] cell_id="00003-239c8af4-6807-493e-a77e-5ddadb557c80" deepnote_cell_type="text-cell-h1" tags=[]
/ # Data retrieval
/ + cell_id="00002-dfbe287d-b846-4b1e-92d5-222aaf1f9d62" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=546 execution_start=1622218497329 source_hash="66eae2d2" tags=[]
# load the complete data set
X_train_all = pd.read_csv('train_set.csv')
X_val_all=pd.read_csv('val_set.csv')
X_test_all=pd.read_csv('test_set.csv')
# convert into numpy arrays
X_train_all= X_train_all.to_numpy()
X_val_all= X_val_all.to_numpy()
X_test_all=X_test_all.to_numpy()
print('Training set shape:')
print(f'X: {X_train_all.shape} ')
print('\nVal set shape:')
print(f'X: {X_val_all.shape} ')
print('\nTest set shape:')
print(f'X: {X_test_all.shape} ')
/ + [markdown] cell_id="00009-2538e82b-4145-4972-8e72-919542f6fe35" deepnote_cell_type="text-cell-h1" tags=[]
/ # Data reshape and transformation
/ + cell_id="00008-eec58721-58c0-43f1-8dba-1170396b7aad" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=21 execution_start=1622218497885 source_hash="cb09ca1a" tags=[]
# split the training data into input (X_train) and output (y_train)
X_train= X_train_all[:,0:110].astype(np.float32)
y_train= X_train_all[:,110].astype(np.float32)
# split the validation data into input (X_val) and output (y_val)
X_val= X_val_all[:,0:110].astype(np.float32)
y_val= X_val_all[:,110].astype(np.float32)
'''
The validation data plays the role of a test set here: we use it to get an idea
of the performance before the real test_set is evaluated on AIcrowd
'''
# split the test set data into input (X_test)
X_test= X_test_all.astype(np.float32)
# transform numpy array in a TensorDataset
'''
Use unsqueeze to counter this error msg : UserWarning: Using a target size (torch.Size([1])) that is different
to the input size (torch.Size([1, 1])). This will likely lead to incorrect results due to broadcasting.
'''
train = torch.utils.data.TensorDataset(torch.from_numpy(X_train),torch.unsqueeze(torch.from_numpy(y_train),1))
validation = torch.utils.data.TensorDataset(torch.from_numpy(X_val),torch.unsqueeze(torch.from_numpy(y_val),1))
test=torch.from_numpy(X_test)
# wrap the TensorDatasets in DataLoaders (batching and shuffling)
train_loader = torch.utils.data.DataLoader(train, batch_size=12, shuffle=True)
validation_loader = torch.utils.data.DataLoader(validation, batch_size=1, shuffle=True)
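# The docstring above explains why the targets need shape (batch, 1). A minimal numpy sketch of the same shape fix; `reshape(-1, 1)` plays the role of `torch.unsqueeze(t, 1)` here (toy values, illustrative only):

```python
import numpy as np

# Model outputs have shape (batch, 1); raw targets have shape (batch,).
y = np.array([1.0, 0.0, 1.0], dtype=np.float32)   # shape (3,)
y_col = y.reshape(-1, 1)                          # shape (3, 1), like torch.unsqueeze(t, 1)

print(y.shape, y_col.shape)
```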
# + [markdown] cell_id="00008-763caefb-a1df-4991-9969-257e399748ea" deepnote_cell_type="text-cell-h1" tags=[]
# # Neural network
# + [markdown] cell_id="00008-5d7956c7-2aca-482d-9389-e22fd60efebd" deepnote_cell_type="text-cell-p" tags=[]
# To prevent overfitting, we use dropout in the model.
# + cell_id="00009-2fc9b8f3-feeb-4d55-9efc-9bb09b46c739" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=18 execution_start=1622218497911 source_hash="52e5eec9" tags=[]
# define our model
class Model(nn.Module):
    # fully connected neural network
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(110, 66)
        self.fc2 = nn.Linear(66, 20)
        self.dropout = nn.Dropout(0.1)
        self.fc3 = nn.Linear(20, 1)
    # architecture of our model and activation function
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.softsign(self.fc1(x))
        x = F.softsign(self.fc2(x))
        x = self.dropout(x)
        x = self.fc3(x)
        return x
model = Model()
# + [markdown] cell_id="00010-3ea08dff-09e6-4683-b722-98454effd619" deepnote_cell_type="text-cell-h1" tags=[]
# # Loss & optimizer
# + cell_id="00008-1816fb52-e2fd-4386-b1d5-4a5258d03672" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=0 execution_start=1622218497935 source_hash="d516eda" tags=[]
loss_fn = nn.MSELoss(reduction='mean')  # size_average/reduce are deprecated aliases of reduction
optimizer = optim.RMSprop(model.parameters(),lr=0.0011)
#optimizer = optim.Adadelta(model.parameters(), lr=0.005, rho=0.95, eps=1e-06, weight_decay=0)
#optimizer = optim.AdamW(model.parameters(),lr=0.003,betas=(0.9, 0.999), eps=1e-8, weight_decay=0, amsgrad=False)
#optimizer = optim.Adagrad(model.parameters(), lr=0.004, lr_decay=0.00001, weight_decay=0, initial_accumulator_value=0, eps=1e-10)
#optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)
#optimizer = optim.Adam(model.parameters(),lr=0.003,betas=(0.9, 0.999), eps=1e-8, weight_decay=0, amsgrad=False)
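# RMSprop is the optimizer kept above. As a hedged numpy sketch (not the PyTorch implementation), its update rule divides each gradient by a running root-mean-square of past squared gradients; the function name and toy values below are illustrative only:

```python
import numpy as np

def rmsprop_step(w, g, v, lr=0.0011, alpha=0.99, eps=1e-8):
    """One RMSprop update: scale the gradient by a running RMS of past gradients."""
    v = alpha * v + (1 - alpha) * g**2      # running average of squared gradients
    w = w - lr * g / (np.sqrt(v) + eps)     # per-parameter adaptive step
    return w, v

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
g = np.array([0.5, -0.5])
w, v = rmsprop_step(w, g, v)
print(w, v)
```

The per-parameter scaling is why a single learning rate like 0.0011 can work across features with very different gradient magnitudes.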
# + [markdown] cell_id="00012-1800966f-8b2b-4ae7-b7cd-1c9c010e59f9" deepnote_cell_type="text-cell-h1" tags=[]
# # Loss metric function
# + [markdown] cell_id="00012-d052f541-17ba-46cf-a27c-1e08311b2990" deepnote_cell_type="text-cell-p" tags=[]
# We import this function from a course exercise: https://github.com/vita-epfl/introML-2021/blob/main/exercises/06-neural-nets/metrics.py
# + cell_id="00009-3518b0bd-ff11-4590-9654-98a321030064" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=42 execution_start=1622218497944 source_hash="fb6621b" tags=[]
# computes the loss per batch and keeps track of the running loss over an epoch
class LossMetric:
def __init__(self) -> None:
self.running_loss = 0
self.count = 0
def update(self, loss: float, batch_size: int) -> None:
self.running_loss += loss * batch_size
self.count += batch_size
def compute(self) -> float:
return self.running_loss / self.count
def reset(self) -> None:
self.running_loss = 0
self.count = 0
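# A quick usage sketch of the running-average pattern that LossMetric implements: each batch loss is weighted by its batch size, so the computed value is the true mean over samples, not over batches. Minimal self-contained copy with illustrative values:

```python
# Minimal re-statement of the running-average pattern used by LossMetric above.
class RunningLoss:
    def __init__(self):
        self.running_loss = 0.0
        self.count = 0
    def update(self, loss, batch_size):
        self.running_loss += loss * batch_size   # weight the batch mean by its size
        self.count += batch_size
    def compute(self):
        return self.running_loss / self.count

m = RunningLoss()
m.update(0.5, 10)   # batch of 10 with mean loss 0.5
m.update(1.0, 30)   # batch of 30 with mean loss 1.0
print(m.compute())  # (0.5*10 + 1.0*30) / 40 = 0.875
```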
# + [markdown] cell_id="00014-4ef3022d-603c-4075-a889-3f1ca38ee669" deepnote_cell_type="text-cell-h1" tags=[]
# # Model training
# + cell_id="00008-7477d3f9-b7b4-4c52-8c0f-663db1fc2c76" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=0 execution_start=1622218497987 source_hash="6141bda0" tags=[]
def train(model: torch.nn.Module, train_loader : torch.utils.data.DataLoader, loss_fn: torch.nn.Module, optimizer: torch.optim.Optimizer, epochs: int):
# initialization of the loss
loss_metric = LossMetric()
# sets the module in training mode
model.train()
for epoch in range(1,epochs+1):
# iterate through data
for data, target in train_loader:
# zero-out the gradients
optimizer.zero_grad()
# forward pass
out = model(data)
# compute the loss
loss = loss_fn(out,target)
# backward pass
loss.backward()
# optimizer step
optimizer.step()
# update metrics
loss_metric.update(loss.item(), data.shape[0])
# end of epoch, show loss
print("Train loss :"+ str(loss_metric.compute()))
loss_metric.reset()
# + cell_id="00010-8a8a1d47-e48e-418b-af3f-5d68e306af48" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=69406 execution_start=1622218497987 source_hash="e2cc4d15" tags=[]
train(model, train_loader, loss_fn, optimizer, epochs=50)
# + [markdown] cell_id="00018-76ad237e-c6df-4235-898f-3c04d6c997b4" deepnote_cell_type="text-cell-h1" tags=[]
# # Model evaluation
# + cell_id="00011-6dc96436-71ea-41e3-ba04-7261394efafb" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=1 execution_start=1622218567402 source_hash="5ce26495" tags=[]
def validation(model: torch.nn.Module, validation_loader : torch.utils.data.DataLoader):
MSEloss= nn.MSELoss(reduction='none')
model.eval()
loss_metric= []
with torch.no_grad():
# iterate through data
for data, target in validation_loader:
# forward pass and update our loss_metrics array with .tolist()
out = model(data)
loss_metric += MSEloss(out, target).tolist()
    # convert the list of per-sample losses to an array and take its mean
    loss_metric_array = np.asarray(loss_metric)
    final_loss = loss_metric_array.mean()
    print(final_loss)
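# The validation routine collects per-sample squared errors (reduction='none') and averages them at the end. A pure-Python sketch, with toy values, showing that this averaging equals the overall MSE:

```python
# Averaging per-sample squared errors equals the overall mean squared error.
preds   = [1.0, 2.0, 3.0]
targets = [1.5, 2.0, 2.0]

per_sample = [(p - t) ** 2 for p, t in zip(preds, targets)]
mse = sum(per_sample) / len(per_sample)
print(mse)  # (0.25 + 0.0 + 1.0) / 3
```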
# + cell_id="00012-4b6ee795-6a80-4ab1-aa9f-3f84923c122d" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=487 execution_start=1622218567407 source_hash="22d5650c" tags=[]
validation(model, validation_loader)
# + [markdown] cell_id="00021-c305b8a4-6b86-43d5-ba08-9ea8713f05d6" deepnote_cell_type="text-cell-h1" tags=[]
# # .csv submission file generation
# + cell_id="00014-312f0091-40d9-4a84-82be-8de76290e1c2" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=56 execution_start=1622218567897 source_hash="57b47345" tags=[]
ok = model(test).detach().numpy()
df = pd.DataFrame(ok,columns=["sat1_col"])
print(df)
print('***')
df.to_csv("sat1_col.csv", index = False)
| Milestone1/train_milestone1.ipynb |
# ---
# jupyter:
# jupytext:
# formats: ipynb,jl:hydrogen
# text_representation:
# extension: .jl
# format_name: hydrogen
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.2
# language: julia
# name: julia-1.6
# ---
# %%
t2j(s) = s
function t2j(s::Tuple)
a = (:block, :if, :tuple, :parameters, :kw, :quote, :(=), :(.=), :.)
s[1] in a && return Expr(s[1], t2j.(s[2:end])...)
s[1] === :lambda && return Expr(:(->), Expr(:tuple, s[2]...), t2j(s[3]))
s[1] === :for && return Expr(s[1], Expr(:(=), t2j(s[2]), t2j(s[3])), t2j(s[4]))
s[1] === :using && return Expr(s[1], Expr.(:., s[2:end])...)
Expr(:call, t2j.(s)...)
end
macro tupp(x) t2j(Core.eval(__module__, x)) end
# %%
(:for, :k, (:(:), 1, 5),
(:if, (:iseven, :k),
(:println, "even: ", :k),
(:println, "odd: ", :k))) |> t2j
# %%
@tupp (:for, :k, (:(:), 1, 5),
(:if, (:iseven, :k),
(:println, "even: ", :k),
(:println, "odd: ", :k)))
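# The `t2j`/`@tupp` machinery above turns tuple-encoded programs into Julia expressions. As a rough, hypothetical Python analog (not the Julia semantics), nested tuples whose head names an operation can be evaluated recursively; the operator table and `ev` name are illustrative only:

```python
import operator

# Hypothetical Python analog of t2j: a nested tuple whose head names an
# operation is evaluated recursively; anything else is treated as a literal.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '==': operator.eq}

def ev(s):
    if isinstance(s, tuple):
        head, *args = s
        if head == 'if':                      # special form: lazy branches
            cond, then, other = args
            return ev(then) if ev(cond) else ev(other)
        return OPS[head](*(ev(a) for a in args))
    return s

# ('if', ('==', 2, 2), ('+', 1, 2), 0)  ->  3
print(ev(('if', ('==', 2, 2), ('+', 1, 2), 0)))
```

Unlike this evaluator, `t2j` builds an `Expr` tree that Julia's compiler then handles, so it supports assignment, loops, and macros for free.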
# %%
(:block,
(:(=), :U, (:lambda, (:u,), (:u, :u))),
(:(=), :F, (:lambda, (:u,), (:lambda, (:n, :a, :b),
(:if, (:(==), :n, 0), :a,
((:u, :u), (:-, :n, 1), :b, (:+, :a, :b)))))),
(:(=), :Fib, (:U, :F)),
(:for, :k, (:(:), 1, 10),
(:println, "Fib(", :k, ") = ", (:Fib, :k, 0, 1)))) |> t2j
# %%
@tupp (:block,
(:(=), :U, (:lambda, (:u,), (:u, :u))),
(:(=), :F, (:lambda, (:u,), (:lambda, (:n, :a, :b),
(:if, (:(==), :n, 0), :a,
((:u, :u), (:-, :n, 1), :b, (:+, :a, :b)))))),
(:(=), :Fib, (:U, :F)),
(:for, :k, (:(:), 1, 10),
(:println, "Fib(", :k, ") = ", (:Fib, :k, 0, 1))))
# %%
(:block,
(:using, :Plots),
(:(=), :n, 20),
(:(=), :x, (:range, 0, 2, (:kw, :length, :n))),
(:(=), :y, (:+, (:., :sinpi, (:tuple, :x)), (:*, 0.2, (:randn, :n)))),
(:(=), :xs, (:range, 0, 2, (:kw, :length, 200))),
(:(=), :X, (:.^, :x, (:transpose, (:(:), 0, 3)))),
(:(=), :b, (:\, :X, :y)),
(:scatter, :x, :y, (:kw, :label, "sample")),
(:plot!, :xs, (:., :sinpi, (:tuple, :xs)),
(:kw, :label, "sinpi(x)"),
(:kw, :color, (:quote, :black)),
(:kw, :ls, (:quote, :dash))),
(:plot!, :xs, (:., :evalpoly, (:tuple, :xs, (:Ref, :b))),
(:kw, :label, "degree-3 polynomial"),
(:kw, :color, 2), (:kw, :lw, 2))) |> t2j
# %%
@tupp (:block,
(:using, :Plots),
(:(=), :n, 20),
(:(=), :x, (:range, 0, 2, (:kw, :length, :n))),
(:(=), :y, (:+, (:., :sinpi, (:tuple, :x)), (:*, 0.2, (:randn, :n)))),
(:(=), :xs, (:range, 0, 2, (:kw, :length, 200))),
(:(=), :X, (:.^, :x, (:transpose, (:(:), 0, 3)))),
(:(=), :b, (:\, :X, :y)),
(:scatter, :x, :y, (:kw, :label, "sample")),
(:plot!, :xs, (:., :sinpi, (:tuple, :xs)),
(:kw, :label, "sinpi(x)"),
(:kw, :color, (:quote, :black)),
(:kw, :ls, (:quote, :dash))),
(:plot!, :xs, (:., :evalpoly, (:tuple, :xs, (:Ref, :b))),
(:kw, :label, "degree-3 polynomial"),
(:kw, :color, 2), (:kw, :lw, 2)))
# %%
| 0017/tupp - tiny TUPle Processor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:genpen]
# language: python
# name: conda-env-genpen-py
# ---
# + tags=[]
import itertools
import numpy as np
import os
import seaborn as sns
from tqdm import tqdm
from dataclasses import asdict, dataclass, field
import vsketch
import shapely.geometry as sg
from shapely.geometry import box, MultiLineString, Point, MultiPoint, Polygon, MultiPolygon, LineString
import shapely.affinity as sa
import shapely.ops as so
import matplotlib.pyplot as plt
import pandas as pd
import vpype_cli
from typing import List, Generic
from genpen import genpen as gp
from genpen.utils import Paper
from scipy import stats as ss
import geopandas
from shapely.errors import TopologicalError
import functools
import vpype
from skimage import io
from pathlib import Path
from sklearn.preprocessing import minmax_scale
from skimage import feature
from skimage import exposure
from skimage import filters
from skimage.color import rgb2gray
from skimage.transform import rescale, resize, downscale_local_mean
from skimage.morphology import disk
from pyaxidraw import axidraw # import module
from PIL import Image
import cv2
from genpen.flow.field import *
from genpen.flow.particle import *
import time
from datetime import datetime
import pytz
tz = pytz.timezone('US/Pacific')
# %load_ext autoreload
# %autoreload 2
# +
import signal
class GracefulExiter():
def __init__(self):
self.state = False
signal.signal(signal.SIGINT, self.change_state)
def change_state(self, signum, frame):
print("exit flag set to True (repeat to exit now)")
signal.signal(signal.SIGINT, signal.SIG_DFL)
self.state = True
def exit(self):
return self.state
# -
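# A minimal usage sketch of the GracefulExiter pattern above: the first SIGINT only flips a flag, and the handler rearms the default so a second SIGINT exits for real. `signal.raise_signal` (Python 3.8+) simulates Ctrl-C; the class here is a self-contained copy with an illustrative name:

```python
import signal

class Exiter:
    """First Ctrl-C sets a flag; the next one falls through to the default handler."""
    def __init__(self):
        self.state = False
        signal.signal(signal.SIGINT, self._handler)
    def _handler(self, signum, frame):
        signal.signal(signal.SIGINT, signal.SIG_DFL)  # second SIGINT kills the process
        self.state = True

flag = Exiter()
signal.raise_signal(signal.SIGINT)  # deliver SIGINT to this process
print(flag.state)
```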
def plot_layer(ad, layer_number, wait_time=1.):
    # plot one layer and record its timing; use the parameter, not the loop variable
    ad.options.layer = layer_number
    t_start = datetime.now()
    ad.plot_run()
    t_end = datetime.now()
    time.sleep(wait_time)
    result = {
        'layer': layer_number,
        't_start': t_start,
        't_end': t_end,
    }
    return result
savedir = '/home/naka/art/plotter_svgs/'
filename = '20210602-231805-7a540-c98425-machine_gun.svg'
vsk = vsketch.Vsketch()
savepath = Path(savedir).joinpath(filename).as_posix()
doc = vpype.read_multilayer_svg(savepath, 0.1)
n_layers = len(doc.layers)
print(n_layers)
wait_time = 2.1
ad = axidraw.AxiDraw()
ad.plot_setup(savepath)
ad.options.mode = "layers"
ad.options.units = 2
ad.options.speed_pendown = 70
ad.update()
# + tags=[]
timing_info = []
stored_exception = None
flag = GracefulExiter()
for ii in tqdm(range(n_layers)):
    timing_info.append(plot_layer(ad, layer_number=ii, wait_time=wait_time))
if flag.exit():
break
# -
pd.DataFrame(timing_info).to_csv('/home/naka/art/axidraw_timing/test.csv')
| scratch/041_manual2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="gwcSTHMra-P-"
# ## Filtering Data
# + colab={} colab_type="code" id="nx-RvtVhbIdh"
import os
# Find the latest version of spark 3.0 from http://www.apache.org/dist/spark/ and enter as the spark version
# For example:
# spark_version = 'spark-3.0.3'
spark_version = 'spark-3.<enter version>'
os.environ['SPARK_VERSION']=spark_version
# Install Spark and Java
# !apt-get update
# !apt-get install openjdk-8-jdk-headless -qq > /dev/null
# !wget -q http://www.apache.org/dist/spark/$SPARK_VERSION/$SPARK_VERSION-bin-hadoop2.7.tgz
# !tar xf $SPARK_VERSION-bin-hadoop2.7.tgz
# !pip install -q findspark
# Set Environment Variables
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = f"/content/{spark_version}-bin-hadoop2.7"
# Start a SparkSession
import findspark
findspark.init()
# + colab={} colab_type="code" id="7RLRYd3QP2KF"
# Start Spark session
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("sparkFunctions").getOrCreate()
# + colab={"base_uri": "https://localhost:8080/", "height": 489} colab_type="code" id="8HZ7JR0Ia-QC" outputId="c46ea1ea-c46f-4888-e26f-31294c6a0e86"
from pyspark import SparkFiles
url ="https://s3.amazonaws.com/dataviz-curriculum/day_1/wine.csv"
spark.sparkContext.addFile(url)
df = spark.read.csv(SparkFiles.get("wine.csv"), sep=",", header=True)
# Show DataFrame
df.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 208} colab_type="code" id="Va2Sjj_oa-QM" outputId="1b8f644b-ac0b-43be-9aad-63fc0d55d1a8"
# Order a DataFrame by ascending values
df.orderBy(df["points"].asc()).show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 208} colab_type="code" id="LJhqS-3qQFRI" outputId="9e622cb5-fcb3-459d-b948-ed53a2e74343"
# Order a DataFrame by descending values
df.orderBy(df["points"].desc()).show(5)
# + colab={"base_uri": "https://localhost:8080/", "height": 121} colab_type="code" id="-pREWDG0a-QQ" outputId="6e7b2b9b-4f1b-426f-96ea-6411897c0fea"
# Import average function
from pyspark.sql.functions import avg
df.select(avg("points")).show()
# + colab={"base_uri": "https://localhost:8080/", "height": 489} colab_type="code" id="Iy7YqMo7a-QU" outputId="e503c7fe-f6c9-4f40-bb44-c994dc1b7982"
# Using filter
df.filter("price<20").show()
# + colab={"base_uri": "https://localhost:8080/", "height": 469} colab_type="code" id="ww7emkaea-QY" outputId="5f4b3c44-794a-4551-d805-06938675b706"
# Filter by price on certain columns
df.filter("price<20").select(['points','country', 'winery','price']).show()
# + [markdown] colab_type="text" id="WPuwAnZ_a-Qc"
# ### Using Python Comparison Operators
# + colab={"base_uri": "https://localhost:8080/", "height": 489} colab_type="code" id="6WYZQGNIa-Qd" outputId="95e84c8e-9d92-42c9-8dc0-9cc71d73761f"
# Same results only this time using python
df.filter(df["price"] < 200).show()
# + colab={"base_uri": "https://localhost:8080/", "height": 489} colab_type="code" id="JK41k68oa-Qh" outputId="ddffa3b8-105b-461f-fc5e-1c86892be0f8"
df.filter( (df["price"] < 200) | (df['points'] > 80) ).show()
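# The parentheses around each comparison matter: in Python, `|` and `&` bind tighter than `<` and `>`, so dropping them changes the parse. A hedged pandas analog of the same compound filter (toy data, not the wine dataset):

```python
import pandas as pd

# pandas analog of the Spark filter above: each comparison must be
# parenthesized because | binds tighter than < and >.
df = pd.DataFrame({"price": [150, 250, 250], "points": [75, 85, 70]})

mask = (df["price"] < 200) | (df["points"] > 80)
print(df[mask])
```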
# + colab={"base_uri": "https://localhost:8080/", "height": 489} colab_type="code" id="bmlEBelya-Qo" outputId="c7d74efa-e9e9-4ace-8ef3-b912a71ff6ca"
df.filter(df["country"] == "US").show()
# + colab={} colab_type="code" id="mU8w-D46cPtA"
| 01-Lesson-Plans/22-Big-Data/1/Activities/07-Ins_Pyspark_DataFrames_Filtering/Solved/spark_filtering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] heading_collapsed=true
# # 0.0. Imports
# + hidden=true
import json
import math
# import pylab
import random
import pickle
import requests
import datetime
import warnings
warnings.filterwarnings( 'ignore')
import inflection
import numpy as np
import pandas as pd
import seaborn as sns
import xgboost as xgb
from scipy import stats as ss
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import MinMaxScaler, LabelEncoder, RobustScaler
from flask import Flask, request, Response
from boruta import BorutaPy
from matplotlib import pyplot as plt
from matplotlib import gridspec
from IPython.display import Image
from IPython.core.display import HTML
from IPython.core.interactiveshell import InteractiveShell
# %pylab inline
# %matplotlib inline
plt.style.use( 'bmh' )
plt.rcParams['figure.figsize'] = [25, 12]
plt.rcParams['font.size'] = 24
display( HTML( '<style>.container { width:100% !important; }</style>') )
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option( 'display.expand_frame_repr', False )
sns.set();
# + [markdown] heading_collapsed=true hidden=true
# ## 0.1 Helper Functions
# + hidden=true
# + [markdown] heading_collapsed=true hidden=true
# ## 0.2. Loading Data
# + hidden=true
df_raw = pd.read_csv('data/heart_failure_clinical_records_dataset.csv')
# + [markdown] heading_collapsed=true hidden=true
# ## Attribute Information:
#
# Thirteen (13) clinical features:
#
# - **age**: age of the patient (years)
# - **anaemia**: decrease of red blood cells or hemoglobin (boolean)
# - **high blood pressure**: if the patient has hypertension (boolean)
# - **creatinine phosphokinase (CPK)**: level of the CPK enzyme in the blood (mcg/L)
# - **diabetes**: if the patient has diabetes (boolean)
# - **ejection fraction**: percentage of blood leaving the heart at each contraction (percentage)
# - **platelets**: platelets in the blood (kiloplatelets/mL)
# - **sex**: woman or man (binary)
# - **serum creatinine**: level of serum creatinine in the blood (mg/dL)
# - **serum sodium**: level of serum sodium in the blood (mEq/L)
# - **smoking**: if the patient smokes or not (boolean)
# - **time**: follow-up period (days)
# - **[target] death event**: if the patient deceased during the follow-up period (boolean)
# + hidden=true
df_raw.sample(5)
# + [markdown] heading_collapsed=true
# # 1.0. STEP 01 - DESCRIPTION OF DATA
# + hidden=true
df1 = df_raw.copy()
# + [markdown] heading_collapsed=true hidden=true
# ## 1.1. Rename Columns
# + hidden=true
# rename columns to lowercase snake_case
cols_old = ['age', 'anaemia','creatinine_phosphokinase', 'diabetes', 'ejection_fraction', 'high_blood_pressure', 'platelets', 'serum_creatinine','serum_sodium', 'sex', 'smoking', 'time', 'DEATH_EVENT']
snakecase = lambda x: inflection.underscore(x)
cols_new = list(map(snakecase, cols_old))
df1.columns = cols_new
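# `inflection.underscore` converts names like DEATH_EVENT or DeathEvent to snake_case. A stdlib-only sketch of the same idea (illustrative, not the library's exact code):

```python
import re

# Insert "_" at lower->Upper boundaries, then lowercase everything.
def underscore(name):
    name = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    name = re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', name)
    return name.lower()

print(underscore('DEATH_EVENT'))   # death_event
print(underscore('DeathEvent'))    # death_event
```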
# + hidden=true
df1.sample(5)
# + [markdown] heading_collapsed=true hidden=true
# ## 1.2. Data Dimensions
# + hidden=true
print('Number of Rows : {}'.format(df1.shape[0]))
print('Number of Cols : {}'.format(df1.shape[1]))
# + [markdown] heading_collapsed=true hidden=true
# ## 1.3. Data Types
# + hidden=true
df1.dtypes
# + [markdown] heading_collapsed=true hidden=true
# ## 1.4. Check NA
# + hidden=true
df1.isna().sum()
# + [markdown] heading_collapsed=true hidden=true
# ## 1.5. Fillout NA
# + hidden=true
# + [markdown] heading_collapsed=true hidden=true
# ## 1.6. Change Data Types
# + hidden=true
# + [markdown] heading_collapsed=true hidden=true
# ## 1.7. Descriptive Statistical
# + hidden=true
num_attributes = df1.select_dtypes( include=['int64', 'float64'] )
# + [markdown] hidden=true
# ### 1.7.1 Numerical Attributes
# + hidden=true
# Central Tendency - mean, median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T
# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.DataFrame( num_attributes.apply( max ) ).T
d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() - x.min() ) ).T
d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T
# concatenate
m = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
m.columns = ( ['attributes','min','max','range','mean','median','std','skew','kurtosis'] )
m
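# The range statistic in the table above is simply max - min. A pure-Python sanity check of the dispersion columns, with illustrative values:

```python
# range = max - min; mean is the sum divided by the count.
values = [14.0, 20.0, 25.0, 45.0, 60.0]

vmin, vmax = min(values), max(values)
vrange = vmax - vmin
mean = sum(values) / len(values)

print(vmin, vmax, vrange, mean)
```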
# + hidden=true
sns.distplot(df1['serum_sodium']);
# + [markdown] hidden=true
# ### 1.7.2. Categorical Attributes
# + hidden=true
# + [markdown] heading_collapsed=true
# # 2.0. STEP 02 - FEATURE ENGINEERING
# + hidden=true
df2 = df1.copy()
# + [markdown] heading_collapsed=true hidden=true
# ## 2.1. Hypothesis Mind Map
# + hidden=true
Image('img/MindMapHypothesis.png')
# + [markdown] heading_collapsed=true hidden=true
# ## 2.2. Creation of Hypotheses
# + [markdown] heading_collapsed=true hidden=true
# ### 2.2.1. Age Hypothesis
# + [markdown] hidden=true
# **1.** Men die more than women from heart attack
# + [markdown] heading_collapsed=true hidden=true
# ### 2.2.2. Sex Hypothesis
# + [markdown] hidden=true
# **1.** Men are more likely to die from heart disease than women.
# + [markdown] heading_collapsed=true hidden=true
# ### 2.2.3. Smoking Hypothesis
# + [markdown] hidden=true
# **1.** Men who smoke die more from heart attack than women.
# + [markdown] heading_collapsed=true hidden=true
# ### 2.2.4. Diabetes Hypothesis
# + [markdown] hidden=true
# **1.** People with Diabetes die more from heart attack than people without diabetes.
# + [markdown] heading_collapsed=true hidden=true
# ### 2.2.5. High Blood Pressure Hypothesis
# + [markdown] hidden=true
# **1.** Women with high blood pressure are more likely to die of a heart attack than men.
# + [markdown] heading_collapsed=true hidden=true
# ### 2.2.6. Anaemia Hypothesis
# + [markdown] hidden=true
# **1.** People with anaemia die more than people without anaemia.
| m02_v01_cardiac_insufficiency_prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import datetime as dt
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
# %matplotlib inline
import matplotlib.style as style
style.use('seaborn-whitegrid')
import os
import pprint
# import googlemaps
# import time
import pickle
from random import randint
from collections import defaultdict
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, KFold
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
# +
# # Web scraping libraries
# from bs4 import BeautifulSoup
# import selenium
# import requests
# from selenium import webdriver
# from selenium.webdriver.common.keys import Keys
# from selenium.common.exceptions import NoSuchElementException
# chromedriver = f"/Users/brenner/chromedriver" # path to the chromedriver executable
# os.environ["webdriver.chrome.driver"] = chromedriver
# +
# # Local environment variables
# # %load_ext dotenv
# # %dotenv
# # %load_ext autoreload
# # %autoreload 2
# -
# Set pandas options
pd.set_option('max_rows', 10)
# pd.set_option('max_colwidth', -1)
# pd.set_option('display.width', 150)
# pd.set_option('display.max_rows', None)
# pd.set_option('display.max_columns', None)
df = pd.read_csv('data/portland_listings.csv')
# #### Long form output of all fields so that we can see values for all variables
# +
# with pd.option_context('display.max_columns', 100, 'max_colwidth', -1):
# print(df.head())
# -
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
display(df.iloc[:2,:])
# #### Create a dataframe for just the host's property descriptions, summary and notes
# +
df_descriptions = df[['id','host_id', 'summary', 'space', 'description', 'neighborhood_overview',
'notes', 'host_about']]
# Save the new dataframe to CSV so we can use it later
# df_descriptions.to_csv('data/listing_descriptions.csv')
# +
#
# -
# #### Data exploration & cleaning
df.info()
df.shape
df['last_scraped'].value_counts()
fig, ax = plt.subplots(figsize=(8,4))
plt.hist(df['number_of_reviews'], bins=40, range=(0,250));
df['review_scores_rating'].head()
df.columns
df.head(1)
df2 = df[['id', 'host_id', 'host_listings_count',
'host_total_listings_count',
'neighbourhood', 'zipcode','latitude', 'longitude',
'property_type', 'room_type', 'accommodates',
'bathrooms', 'bedrooms', 'beds', 'bed_type', 'amenities',
'price', 'guests_included','number_of_reviews',
'review_scores_rating','review_scores_accuracy', 'review_scores_cleanliness',
'review_scores_checkin', 'review_scores_communication',
'review_scores_location', 'review_scores_value', 'instant_bookable',
    'reviews_per_month']].copy()  # copy so the column edits below avoid SettingWithCopyWarning
df2['instant_bookable'] = df2['instant_bookable'].apply(lambda x: 1 if x=='t' else 0)
df2.loc[:, ['accommodates',
'bathrooms', 'bedrooms', 'beds', 'bed_type', 'amenities',
'price','guests_included','number_of_reviews',
'review_scores_rating','review_scores_accuracy', 'review_scores_cleanliness',
'review_scores_checkin', 'review_scores_communication',
'review_scores_location', 'review_scores_value', 'instant_bookable',
'reviews_per_month']]
df2.info()
# #### Change price from a string to an integer, removing dollar sign and commas (e.g. $2,500 -> 2500)
# Change price from a string to an integer, removing dollar sign and commas (e.g. $2,500 -> 2500)
df2['price'] = df2['price'].apply(lambda x: int(x.replace('$','').replace(',','').split('.')[0]))
plt.hist(df2['price'], bins=25, range=(0,500));
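# The same cleaning step as the lambda above, stdlib only: strip "$" and ",", drop the cents, and convert to int (the helper name is illustrative):

```python
def clean_price(s):
    # "$2,500.00" -> "2500.00" -> "2500" -> 2500
    return int(s.replace('$', '').replace(',', '').split('.')[0])

print(clean_price('$2,500.00'))  # 2500
print(clean_price('$85.00'))     # 85
```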
# #### Identify and drop listings with 0 reviews, if desired
# Identify and drop listings with 0 reviews, if desired
zero_reviews = df[df['number_of_reviews']==0].index
df = df.drop(labels=zero_reviews, axis='index')  # reassign: drop() returns a new frame
df2.info()
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
display(df2.iloc[:2,:])
# #### Save newer, smaller dataframe
df2.to_csv('data/cleaned_df.csv')
| Listings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#https://medium.com/@acrosson/extracting-names-emails-and-phone-numbers-5d576354baa
# +
# Extract names , emails and phone numbers
# -
import re
import nltk
from nltk.corpus import stopwords
stop = stopwords.words('english')
string = """
Hey,
This week has been crazy. Attached is my report on IBM. Can you give it a quick read and provide some feedback.
Also, make sure you reach out to Claire (<EMAIL>).
You're the best.
Cheers,
<NAME>.
212-555-1234
"""
# +
def extract_phone_numbers(string):
r = re.compile(r'(\d{3}[-\.\s]??\d{3}[-\.\s]??\d{4}|\(\d{3}\)\s*\d{3}[-\.\s]??\d{4}|\d{3}[-\.\s]??\d{4})')
phone_numbers = r.findall(string)
return [re.sub(r'\D', '', number) for number in phone_numbers]
def extract_email_addresses(string):
r = re.compile(r'[\w\.-]+@[\w\.-]+')
return r.findall(string)
def ie_preprocess(document):
document = ' '.join([i for i in document.split() if i not in stop])
sentences = nltk.sent_tokenize(document)
sentences = [nltk.word_tokenize(sent) for sent in sentences]
sentences = [nltk.pos_tag(sent) for sent in sentences]
return sentences
def extract_names(document):
names = []
sentences = ie_preprocess(document)
for tagged_sentence in sentences:
for chunk in nltk.ne_chunk(tagged_sentence):
if type(chunk) == nltk.tree.Tree:
if chunk.label() == 'PERSON':
names.append(' '.join([c[0] for c in chunk]))
return names
# -
numbers = extract_phone_numbers(string)
emails = extract_email_addresses(string)
names = extract_names(string)
numbers
emails
names
| NLTK-master/Extract names,nos, form document.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Overview of the dataframe
import pandas
#Use pandas to read data from csv file into dataframe
#Parse_dates tells df to read particular column as a datetime object column
df = pandas.read_csv("reviews.csv", parse_dates=["Timestamp"])
#Access the first 5 elements using .head()
df.head()
#Tells us shape of the dataframe which is the number of rows and columns the dataframe has -> r,c
print(df.shape)
#Displays the names of the columns of the dataframe
print(df.columns)
#To find the distribution of a particular column and see the data analysed as a histogram -> ratings
df.hist('Rating')
#Selects all the course names from the data
df["Course Name"].unique()
# ## Selecting data from the dataframe
# ### Select column
#Select a specific column -> type = series
df['Rating']
# ### Select multiple columns
#Select multiple columns -> type = dataframe
df[['Course Name', 'Rating']]
# ### Select row
#Selects a specific row -> type = series
df.iloc[3]
# ### Select multiple rows
#Selects multiple rows -> type = dataframe
df.iloc[1:4]
# ### Select a particular section
#Select a particular section -> range of rows and columns
df[["Course Name","Rating"]].iloc[1:4]
# ### Select a particular cell
#Selects the row of a particular column thus choosing one cell
df['Timestamp'].iloc[2]
#Faster method to access a cell -> Pass index of row and then column name
df.at[2, "Rating"]
# ## Filtering data based on conditions
# ### One Condition
#All data where ratings are greater than 4.0 stored in df2
#Can calculate length using len(), count()
df2 = df[df["Rating"] > 4]
#Fetch only the ratings of those elements -> Can the find mean and other data from the dataset .mean()
df2["Rating"]
# ### Multiple Conditions
#Access data with multiple conditions
df3 = df[(df["Rating"] > 4) & (df["Course Name"] == 'The Complete Python Course: Build 10 Professional OOP Apps')]
df3
#Can access the ratings of the result and the mean as well
#df3["Rating"].mean()
# ## Time based filtering
# +
from datetime import datetime as dt
from pytz import utc
#We filter data based on specific dates -> But since the time zones need to match as well we use tzinfo=utc
df4 = df[(df["Timestamp"] >= dt(2020, 7, 1, tzinfo=utc)) & (df["Timestamp"] <= dt(2020, 12, 31, tzinfo=utc))]
df4
# -
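# Comparing timestamps only works when both sides carry the same timezone info; the stdlib's `timezone.utc` plays the same role as pytz's `utc` here (illustrative dates):

```python
from datetime import datetime, timezone

# Both bounds and the timestamp are timezone-aware, so comparison is valid.
start = datetime(2020, 7, 1, tzinfo=timezone.utc)
end = datetime(2020, 12, 31, tzinfo=timezone.utc)
ts = datetime(2020, 8, 15, tzinfo=timezone.utc)

print(start <= ts <= end)  # True
```

Mixing a naive datetime with an aware one in such a comparison raises a TypeError, which is why the filter passes `tzinfo=utc` on every bound.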
# ## Converting Data to Information
# ### Average rating
#Calculates the mean for the rating column
df["Rating"].mean()
# ### Average rating for a particular course
#Calculates the mean for the ratings of a specific course
df[df["Course Name"] == "Data Processing with Python"]["Rating"].mean()
# ### Average rating for a particular period
#Calculates the mean of the ratings for all courses over a particular timeframe
#utc is used to align timezones
df[(df["Timestamp"] >= dt(2020, 7, 1, tzinfo=utc)) &
(df["Timestamp"] <= dt(2020, 12, 31, tzinfo=utc))]["Rating"].mean()
# ### Average rating for a particular period for a particular course
#Calculates the mean of the ratings for a specific course over a particular timeframe
df[(df["Timestamp"] >= dt(2020, 1, 1, tzinfo=utc)) & (df["Timestamp"] <= dt(2021, 1, 1, tzinfo=utc)) &
(df["Course Name"] == '100 Python Exercises I: Evaluate and Improve Your Skills')]["Rating"].mean()
# ### Average of uncommented ratings
#Calculates the average of the ratings of all data that have no comments (NaN)
#isnull() returns all data that is NULL
df[df["Comment"].isnull()]["Rating"].mean()
# ### Average of commented ratings
#Calculates the average of the ratings of all data that have comments (NaN)
#notnull() returns all data that has value -> not NULL
df[df["Comment"].notnull()]["Rating"].mean()
# ### Number of uncommented ratings
#Calculates the number of ratings that are uncommented
df[df["Comment"].isnull()]["Rating"].count()
# ### Number of commented ratings
#Calculates the number of ratings that are commented
df[df["Comment"].notnull()]["Rating"].count()
# ### Number of commented ratings containing a certain word
#Calculates the number of ratings that have the word 'accent' in their comments
#It also sets na=False to skip any comments that are null -> Otherwise an error would occur
df[df["Comment"].str.contains("accent", na=False)]["Rating"].count()
# ### Average of commented ratings with "accent" in comment
#Calculates the average of the ratings that have the word 'accent' in their comments
df[df["Comment"].str.contains("accent", na=False)]["Rating"].mean()
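# A minimal sketch of the `na=False` behaviour with a made-up comments column: without it, the NaN row would produce a NaN in the mask, which cannot be used for boolean indexing.

```python
import pandas as pd

# Hypothetical comments column containing a missing value
df = pd.DataFrame({
    "Comment": ["great accent", None, "too fast"],
    "Rating": [5.0, 4.0, 3.0],
})

# na=False turns the missing comment into False instead of NaN
mask = df["Comment"].str.contains("accent", na=False)
print(df[mask]["Rating"].mean())  # -> 5.0
```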
| Practice/Data Analysis & Visualisation/Reviews.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Business Understanding
#
# Uber Technologies Inc. is a San Francisco-based company founded in 2009. It has 91 million active users and 3.9 million drivers worldwide. The company is represented with its app in a total of 63 countries.
# There must be a basic understanding of the objectives from the business perspective. This can then be applied to define appropriate data mining project requirements so that they can be realized. Uber tries to optimize forecasts based on supply and demand so that vehicle availability is always guaranteed, in order to maintain the service for its users. This dataset is intended to demonstrate the feasibility of this goal.
# # 2. Data and Data Preparation
#
# The dataset consists of four basic variables: dispatching base number, date, active_vehicles, and trips.
# The dispatching base number variable is a code assigned by the TLC that indicates an Uber base in New York City. Accordingly, the codes can be assigned to the following bases: B02512: Under, B02598: Behind, B02617: Next, B02682: Taste, B02764: After-NY, B02765: Grun, B02835: Brazen, B02836: Inside.
# From a first overview of the available data, one can already speculate about possible dependencies between the number of active vehicles, the number of trips, and the date.
# ## 2.1 Import of Relevant Modules
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
import math
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
sns.set()
from sklearn.linear_model import LinearRegression
# ## 2.2 Read Data
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
raw_data =pd.read_csv('https://storage.googleapis.com/ml-service-repository-datastorage/Forecast_of_required_vehicles_in_the_city_center_data.csv')
#Output record head
raw_data.head()
#data description
raw_data.describe(include='all')
# ## 2.3 Data Cleaning
#check null values
raw_data.isnull().sum()
raw_data.duplicated().sum()
# View data distributions for active vehicles
sns.distplot(raw_data['active_vehicles'])
# View data distributions for trips
sns.distplot(raw_data['trips'])
#Here 95% quantiles are generated for active vehicles & trips
#(separate variables: reusing one name would make the active_vehicles filter use the trips quantile)
quant_vehicles = raw_data['active_vehicles'].quantile(0.95)
quant_vehicles
quant_trips = raw_data['trips'].quantile(0.95)
quant_trips
#quantile containment for active_vehicles
data1 = raw_data[raw_data['active_vehicles'] < quant_vehicles]
# +
#Quantile containment for trips (chained on data1 so both filters apply)
data1 = data1[data1['trips'] < quant_trips]
# -
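# A minimal, self-contained sketch of the quantile trimming above on randomly generated toy data (not the Uber dataset); the second filter is chained on the first so both bounds apply.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "active_vehicles": rng.integers(100, 5000, size=1000),
    "trips": rng.integers(500, 30000, size=1000),
})

q_vehicles = toy["active_vehicles"].quantile(0.95)
q_trips = toy["trips"].quantile(0.95)

# Chain the second filter on the first; assigning both filters from the
# raw frame would silently discard the active_vehicles trimming
trimmed = toy[toy["active_vehicles"] < q_vehicles]
trimmed = trimmed[trimmed["trips"] < q_trips]
```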
data1.describe(include='all')
#A distplot of the quantile-trimmed data is generated & the skewed distribution is still visible
sns.distplot(data1['active_vehicles'])
#Next step is to narrow active_vehicles down to the 600-4000 range.
#On the left side you can see the main distribution of the dataset
data2 = data1[data1['active_vehicles'] <= 4000]
data2 = data2[data2['active_vehicles'] >= 600]
plt.xlim(0,4000)
sns.distplot(data2['active_vehicles'])
# +
#here a quantile is created for number of trips
#data2= data1[data1['trips']<quant1]
# -
sns.distplot(data2['trips'])
#Define value ranges for trips (chained so both bounds apply)
data2 = data1[data1['trips'] <= 20000]
data2 = data2[data2['trips'] >= 2000]
plt.xlim(1000,17000)
sns.distplot(data2['trips'])
# # 3. Data Preparation
#Convert the date column to datetime and the base number to string before resetting the index
data2 = data2.copy()
data2['date'] = pd.to_datetime(data2['date'])
data2['dispatching_base_number'] = data2['dispatching_base_number'].astype(str)
data_cleaned = data2.reset_index(drop=True)
data_cleaned.describe(include='all')
#Assuming the data has now been cleaned as far as possible, we continue with the OLS assumptions.
#First, a scatter plot showing the spread and density of the values
#Main feature: the data in the dataset is concentrated in the x=2000 & y=20000 range
#However, values in the range of 3000 to over 4000 must also be included in the model
plt.scatter(data_cleaned['active_vehicles'], data_cleaned['trips'])
#Related to statement(s) from the line above.
#Main distribution here is in the 2000 range
plt.scatter(data_cleaned['active_vehicles'], data_cleaned['trips'])
plt.xlim(0,2000)
plt.ylim(0,20000)
#A few more checks on the other variables
plt.scatter(data_cleaned['trips'], data_cleaned['dispatching_base_number'])
plt.scatter(data_cleaned['dispatching_base_number'], data_cleaned['active_vehicles'])
# +
#Transformation for continuous variables
#Log: the log transformation helps reduce the skewness of skewed data. See above
log_trips = np.log(data_cleaned['trips'])
data_cleaned['log_trips'] = log_trips
#Linearity is not present-> Violation of OLS Assumption
# -
plt.scatter(data_cleaned['dispatching_base_number'], data_cleaned['log_trips'])
#Output of the new columns -> the trips variable now appears twice ('trips' and 'log_trips') since the log transformation
data_cleaned.columns.values
# +
#data_cleaned = data_cleaned.drop(['trips'], axis=1)
# +
#Variance Inflation Factor (VIF).
#If the VIF is between 5 and 10, multicollinearity is probably present and you should consider dropping the variable.
from statsmodels.stats.outliers_influence import variance_inflation_factor
variables = data_cleaned[['active_vehicles', 'log_trips']]
vif = pd.DataFrame()
vif["VIF"]= [variance_inflation_factor(variables.values, i) for i in range(variables.shape[1])]
vif["Features"]= variables.columns
# -
#VIF is output and the values are <5
vif
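# As a cross-check of what `variance_inflation_factor` computes, here is a numpy-only sketch (with an intercept included in each auxiliary regression) on made-up data: an independent column gets a VIF near 1, a nearly collinear one a large VIF.

```python
import numpy as np

def vif(X):
    """VIF per column: regress each column on the others (plus an
    intercept) and return 1 / (1 - R^2)."""
    X = np.asarray(X, dtype=float)
    out = []
    for i in range(X.shape[1]):
        y = X[:, i]
        A = np.column_stack([np.ones(len(y)), np.delete(X, i, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = rng.normal(size=200)            # independent of a -> VIF near 1
c = a + 0.1 * rng.normal(size=200)  # nearly collinear with a -> large VIF
vifs = vif(np.column_stack([a, b, c]))
```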
data_cleaned.describe(include='all')
#Linearity could be improved by LOG transformation
plt.scatter(data_cleaned['active_vehicles'], data_cleaned['log_trips'])
# ## 3.1 Recoding of categorical variables
#Convert the categorical data into dummy/indicator variables. Focus on base number for a more general model
data_with_dummies = pd.get_dummies(data_cleaned, drop_first = True)
# Dummy variables head output
data_with_dummies.head()
data_with_dummies.columns.values
#Columns for the later model are provided
cols=['active_vehicles','log_trips','dispatching_base_number_B02598',
'dispatching_base_number_B02617', 'dispatching_base_number_B02682',
'dispatching_base_number_B02764', 'dispatching_base_number_B02765']
# +
#-> At this point it was decided that the variable 'Date' is no longer useful and too costly for model building
#-> It is therefore not considered for the model
# -
#Arrangement-> First column becomes target variable
data_preprocessed = data_with_dummies[cols]
data_preprocessed.head()
# ## 3.2 Create Test and Training Data
targets = data_preprocessed['active_vehicles']
inputs = data_preprocessed.drop(['active_vehicles'], axis=1)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(inputs)
input_scaled = scaler.transform(inputs)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(input_scaled, targets, test_size=0.20, random_state=365)
# # 4. Modelling and Evaluation
# ## 4.1 Linear Regression
reg = LinearRegression()
reg.fit(x_train, y_train)
y_hat = reg.predict(x_train)
#The division of values is also noticeable here
plt.scatter(y_train, y_hat)
plt.xlabel('Targets')
plt.ylabel('Predictions')
plt.show()
#Error terms: training values minus predicted values
sns.distplot(y_train - y_hat)
plt.title("Residuals")
#plt.show()
#R-squared value is important: the share of the target variable's variance explained by the model
reg.score(x_train, y_train)
# Important for the regression line
reg.intercept_
reg.score(x_test, y_test)
print('Training performance')
print(reg.score(x_train,y_train))
print('Test performance')
print(reg.score(x_test,y_test))
#Regression coefficients for the model
reg.coef_
#What weights are critical to the model:
reg_summary = pd.DataFrame(inputs.columns, columns = ['Features'])
reg_summary['Weights']= reg.coef_/1000
reg_summary
# +
#Testing the model
# -
y_hat_test = reg.predict(x_test)
plt.scatter(y_test, y_hat_test, alpha=0.2)
plt.xlabel('Targets Test')
plt.ylabel('Predictions Test')
plt.show()
#Collect the test-set predictions (y_hat_test) in a dataframe
df_pf = pd.DataFrame((y_hat_test), columns = ['Predictions'])
df_pf.head()
y_test=y_test.reset_index(drop=True)
y_test.head()
df_pf['Targets'] = y_test
df_pf.head()
#Overview of how much the predictions and the actual values differ:
y_pred= reg.predict(x_test)
test=pd.DataFrame({'Predicted':y_pred, 'Actual':y_test})
fig=plt.figure(figsize=(16,8))
test=test.reset_index()
test=test.drop(['index'],axis=1)
plt.plot(test[:50])
plt.legend(['Actual', 'Predicted'])
sns.jointplot(x='Actual', y='Predicted', data=test, kind='reg');
# ## 4.2 Evaluation
df_pf
df_pf['Residuals'] = df_pf['Targets']-df_pf['Predictions']
df_pf
#Residuals shows the deviation
#Display difference in percent for Prediction & Target
df_pf['Difference%']=np.absolute(df_pf['Residuals']/df_pf['Targets']*100)
df_pf
df_pf.describe()
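# The Difference% column above is the per-row absolute percentage error; its mean is the mean absolute percentage error (MAPE). A minimal sketch with made-up numbers:

```python
import numpy as np
import pandas as pd

# Hypothetical predictions and targets standing in for df_pf
df_pf = pd.DataFrame({
    "Predictions": [100.0, 220.0, 310.0],
    "Targets": [110.0, 200.0, 300.0],
})
df_pf["Residuals"] = df_pf["Targets"] - df_pf["Predictions"]
df_pf["Difference%"] = np.absolute(df_pf["Residuals"] / df_pf["Targets"] * 100)

mape = df_pf["Difference%"].mean()  # mean absolute percentage error
```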
| Forecast/Forecast of required vehicles in the city center/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn.model_selection import KFold
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.inspection import plot_partial_dependence, permutation_importance
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
import sys
sys.path.append("..")
from source.clean import general_cleaner, drop_columns
from source.transf_category import recode_cat, make_ordinal
from source.transf_numeric import tr_numeric
import source.transf_univ as dfp
import source.utility as ut
import source.report as rp
# +
df_train = pd.read_csv('../data/train.csv')
df_train['Target'] = np.log1p(df_train.SalePrice)
df_train = df_train[df_train.GrLivArea < 4500].copy().reset_index()
del df_train['SalePrice']
train_set, test_set = ut.make_test(df_train,
test_size=0.2, random_state=654,
strat_feat='Neighborhood')
y = train_set['Target'].copy()
del train_set['Target']
y_test = test_set['Target']
del test_set['Target']
# +
numeric_lasso = Pipeline([('fs', dfp.feat_sel('numeric')),
('imp', dfp.df_imputer(strategy='median')),
('transf', tr_numeric(lot=False,
bedroom=False,
SF_room=False))])
cat_lasso = Pipeline([('fs', dfp.feat_sel('category')),
('imp', dfp.df_imputer(strategy='most_frequent')),
('ord', make_ordinal(['BsmtQual', 'KitchenQual', 'ExterQual', 'HeatingQC'],
extra_cols=['BsmtExposure', 'BsmtCond', 'ExterCond'],
include_extra='include')),
('recode', recode_cat()),
('dummies', dfp.dummify(drop_first=True))])
processing_lasso = dfp.FeatureUnion_df(transformer_list=[('cat', cat_lasso),
('num', numeric_lasso)])
lasso_pipe = Pipeline([('gen_cl', general_cleaner()),
('proc', processing_lasso),
('scaler', dfp.df_scaler(method='standard')),
('dropper', drop_columns(lasso=True)),
('lasso', Lasso(alpha=0.001, tol=0.005))])
# -
lasso_pipe.fit(train_set.copy(), y)
# +
coefs = rp.get_coef(lasso_pipe)
coefs
# +
features = coefs.head(9).feat.values
proc = Pipeline([('gen_cl', general_cleaner()),
('proc', processing_lasso),
('scaler', dfp.df_scaler(method='standard')),
('dropper', drop_columns(lasso=True))])
tmp = proc.fit_transform(train_set.copy(), y)
ls_tm = Lasso(alpha=0.001, tol=0.005)
ls_tm.fit(tmp, y)
fig, ax = plt.subplots(3,3, figsize=(15,10))
plot_partial_dependence(ls_tm, tmp, features, ax=ax,
n_jobs=-1, grid_resolution=20)
fig.subplots_adjust(hspace=0.3)
# +
features = [('OverallQual', 'service_area'), ('OverallQual', 'GrLivArea')]
proc = Pipeline([('gen_cl', general_cleaner()),
('proc', processing_lasso),
('scaler', dfp.df_scaler(method='standard')),
('dropper', drop_columns(lasso=True))])
tmp = proc.fit_transform(train_set.copy(), y)
ls_tm = Lasso(alpha=0.001, tol=0.005)
ls_tm.fit(tmp, y)
fig, ax = plt.subplots(1,2, figsize=(12,6))
plot_partial_dependence(ls_tm, tmp, features, ax=ax,
n_jobs=-1, grid_resolution=20)
fig.subplots_adjust(hspace=0.3)
# +
numeric_forest = Pipeline([('fs', dfp.feat_sel('numeric')),
('imp', dfp.df_imputer(strategy='median')),
('transf', tr_numeric(SF_room=False,
bedroom=False,
lot=False))])
cat_forest = Pipeline([('fs', dfp.feat_sel('category')),
('imp', dfp.df_imputer(strategy='most_frequent')),
('ord', make_ordinal(['BsmtQual', 'KitchenQual', 'ExterQual', 'HeatingQC'],
extra_cols=['BsmtExposure', 'BsmtCond', 'ExterCond'],
include_extra='include')),
('recode', recode_cat()),
('dummies', dfp.dummify(drop_first=True))])
processing_forest = dfp.FeatureUnion_df(transformer_list=[('cat', cat_forest),
('num', numeric_forest)])
forest_pipe = Pipeline([('gen_cl', general_cleaner()),
('proc', processing_forest),
('scaler', dfp.df_scaler(method='robust')),
('dropper', drop_columns(forest=True)),
('forest', RandomForestRegressor(n_estimators=1500, max_depth=30,
max_features='sqrt',
n_jobs=-1, random_state=32))])
# -
forest_pipe.fit(train_set.copy(), y)
# +
coefs = rp.get_feature_importance(forest_pipe)
coefs
# +
features = coefs.head(9).feat.values
proc = Pipeline([('gen_cl', general_cleaner()),
('proc', processing_forest),
('scaler', dfp.df_scaler(method='robust')),
('dropper', drop_columns(forest=True))])
tmp = proc.fit_transform(train_set.copy(), y)
ls_tm = RandomForestRegressor(n_estimators=1500, max_depth=30,
max_features='sqrt',
n_jobs=-1, random_state=32)
ls_tm.fit(tmp, y)
fig, ax = plt.subplots(3,3, figsize=(15,10))
plot_partial_dependence(ls_tm, tmp, features, ax=ax,
n_jobs=-1, grid_resolution=50)
fig.subplots_adjust(hspace=0.3)
# +
features = [('OverallQual', 'service_area'), ('OverallQual', 'GrLivArea'),
('OverallQual', 'Neighborhood'), ('Neighborhood', 'service_area')]
fig, ax = plt.subplots(2,2, figsize=(12,12))
plot_partial_dependence(ls_tm, tmp, features, ax=ax,
n_jobs=-1, grid_resolution=20)
fig.subplots_adjust(hspace=0.3)
# -
result = permutation_importance(ls_tm, tmp, y, n_repeats=20,
random_state=42, n_jobs=-1)
# +
sorted_idx = result.importances_mean.argsort()
fig, ax = plt.subplots(figsize=(10, 15))
ax.boxplot(result.importances[sorted_idx].T,
vert=False, labels=tmp.columns[sorted_idx])
plt.show()
# -
| houseprice/notebooks_source/07 - Model interpretability.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img align="right" src="images/dans-small.png"/>
# <img align="right" src="images/tf-small.png"/>
# <img align="right" src="images/etcbc.png"/>
#
#
# # Kings and parallels
#
# # 0. Introduction
#
# ## 0.0 About
#
# This is joint work of <NAME> and <NAME>.
#
# Results of this work are included in:
#
# Naaijer, Martijn and <NAME> (2016). *Parallel Texts in the Hebrew Bible, New Methods and Visualizations*.
# Preprint on [arxiv](http://arxiv.org/abs/1603.01541). To be published in the
# [Journal of Datamining and Digitial Humanities](http://jdmdh.episciences.org).
#
# We perform a case study into 2 Kings 19-25 and passages that run partly parallel to these chapters.
# These variants occur in Chronicles, Isaiah and Jeremiah.
# We also examine the Qumran scroll 1QIsa<sup>a</sup> of the book Isaiah.
# In this notebook we collect the data and carry out preliminary analyses.
# We find the variants by using the database of cross references made by this
# notebook in SHEBANQ: [parallel](https://shebanq.ancient-data.org/tools?goto=parallel).
#
# **Section 1** is devoted to finding and viewing the parallels within the Masoretic texts.
#
# **Section 2** shows the differences between Isaiah in the MT and in the 1QIsa<sup>a</sup> scroll.
#
# ## 0.1 Results
#
# The results of **section 1** are:
# 1. Overview of parallels between 2 Kings 19-25 and the other books (as graph).
# [download as PDF](kings_parallels.pdf) or see below.
# 1. Finer analysis of similarities based on the edit distance between all pairs of sentences.
# 1. [Sentence similarities](kings_similarities.tsv) (8MB)
# 1. [Plot of sentence similarities](kings_similarities.pdf) or see below.
# 1. Synoptic view of the verses in 2 Kings 19-25 and their parallels in the other books.
# In [Hebrew script](kings_parallels_h.html) or [phonemic spelling](kings_parallels_p.html).
#
# The results of **section 2** are:
# 1. Synoptic views of the differences between Isaiah 37-39 in the Masoretic text and in 1QISa<sup>a</sup>.
# 1. [Isaiah 37](Isaiah-mt-1QIsaa_37.html)
# 1. [Isaiah 38](Isaiah-mt-1QIsaa_38.html)
# 1. [Isaiah 39](Isaiah-mt-1QIsaa_39.html).
# 1. Synoptic views of the differences of the whole of Isaiah in the Masoretic text and in 1QISa<sup>a</sup>.
# * [Isaiah](Isaiah-mt-1QIsaa.html)
#
# ## 0.2 The program
#
# The program starts in the next cell, with the loading of several modules.
# We shall not document all programming details, but restrict ourselves to the issues that concern
# the Hebrew texts and the patterns in the data associated with it.
# +
import os
import collections
import difflib
from Levenshtein import ratio
# (sudo -H) pip(3) install python-Levenshtein
# brew install freetype # on mac os x
# (sudo -H) pip(3) install matplotlib
from difflib import SequenceMatcher
import networkx as nx
import matplotlib.pyplot as plt
# %matplotlib inline
from tf.fabric import Fabric
from tf.transcription import Transcription
# -
# ## 0.3 Data source
#
# We use the BHSA database in its version 2016, downloadable from the GitHub repo
# [bhsa](https://github.com/ETCBC/bhsa).
# The format of the data obtained through GitHub is immediately ready to be used by Text-Fabric,
# and hence by this notebook as well.
#
# A previous version of this notebook was based on version `4b`,
# also downloadable from the BHSA repository.
#
# That version has also been archived at DANS, downloadable via DOI
# [10.17026/dans-z6y-skyh](http://dx.doi.org/10.17026/dans-z6y-skyh).
#
# The transcription of 1QIsa<sup>a</sup> is in a file produced by the ETCBC. This file is included
# [here](https://shebanq.ancient-data.org/shebanq/static/docs/tools/parallel/1QIsaa_an.txt)
# in SHEBANQ.
locations = "~/github/etcbc"
sources = ["bhsa", "parallels", "phono"]
version = "2017"
QISA_FILE = os.path.expanduser("{}/parallels/source/1QIsaa_an.txt".format(locations))
# ## 0.4 Data features
#
# We only use a few data features from the BHSA database. You see them in the code below.
# Their documentation can be found through the SHEBANQ help function or via this direct link:
# [Feature-doc](https://etcbc.github.io/bhsa/features/hebrew/2016/0_home.html).
# Here is the direct link to
# [otype](https://etcbc.github.io/bhsa/features/hebrew/2016/otype).
modules = ["{}/tf/{}".format(s, version) for s in sources]
TF = Fabric(locations=locations, modules=modules)
api = TF.load(
"""
language lex_utf8 verse
crossref
"""
)
api.makeAvailableIn(globals())
# ## 0.5 Config information.
#
# We use some constants to refer to the files we read and write.
#
# In particular, we use `crossref` feature, which has been composed by the
# [parallels](https://shebanq.ancient-data.org/shebanq/static/docs/tools/parallel/parallels.html)
# notebook.
#
# Throughout this notebook we use English names for the books of the Bible.
# +
# the language of the book names
LANG = "en"
# the book and chapters that are central to our study
REFBOOKS = {"2_Kings"}
REFCHAPTERS = set(range(19, 26))
# output files
NCOL_FILE = "kings_crossrefs.ncol" # graph of similar verses to 2 Kings 19-25
SIMILAR_FILE = "kings_similarities.tsv" # refined similarities based on sentences
# -
# ## 0.6 Book name index
#
# We want to discuss the portion of ``2_Kings`` that we are interested in separately from the other chapters.
# That is why we introduce a virtual book ``2_Kingsr`` for our reference chapters.
#
# The BHSA database uses Latin names for the Bible books.
# We translate them to conventional English names.
# +
bookNode = dict()
for b in F.otype.s("book"):
bookName = T.bookName(b, lang=LANG)
bookNode[bookName] = b
if bookName == "2_Kings":
bookNode[bookName + "r"] = b
def passage_key(p):
(bk, ch, vs) = p
return (
(-1, ch, vs) if bk in REFBOOKS and ch in REFCHAPTERS else (bookNode[bk], ch, vs)
)
# the format of verse references
PASSAGE_FMT = "{}~{}:{}"
PASSAGER_FMT = "{}r~{}:{}"  # used for the pseudo-book 2_Kingsr
# -
# # 1. Parallels within the Masoretic Text
# ## 1.1 Grep all parallel verses
#
# We find the verses that are similar to any verse in 2 Kings 19-25.
#
# Our method is to use one of the similarity matrices that has been computed by the parallels notebook.
# To be precise, we took the matrix computed for the SET method applied to verses. We then extract the similarities higher than 60. These specifications are stored in the variables `CHUNK_GREP`, `MATRIX_GREP` and
# `SIM_THRESHOLD_GREP`.
# Every similarity we find is a pair of verse references, at least one of which is in 2 Kings 19-25.
# That reference we render as being in the book `2_Kingsr` because in a later visualization we
# want to have our focus chapters in a separate column.
# +
allVerseNodes = set()
nInternal = 0
nAll = 0
crossrefs = set()
for vnX in F.otype.s("verse"):
(bkX, chX, vsX) = T.sectionFromNode(vnX, lang=LANG)
if bkX not in REFBOOKS or chX not in REFCHAPTERS:
continue
for (vnY, r) in E.crossref.t(vnX):
nAll += 1
(bkY, chY, vsY) = T.sectionFromNode(vnY, lang=LANG)
if bkY in REFBOOKS and chY in REFCHAPTERS:
nInternal += 1
continue
crossrefs.add(((bkX, chX, vsX), (bkY, chY, vsY), r))
allVerseNodes |= {vnX, vnY}
TF.info(
"{} external crossrefs saved; {} internal crossrefs skipped; from total {} crossrefs".format(
len(crossrefs),
nInternal,
nAll,
)
)
print(
"\n".join(
"{}r\t{}\t{}\t{}\t{}\t{}\t{}".format(*x[0], *x[1], round(x[2]))
for x in sorted(crossrefs)[0:10]
)
)
# -
# ## 1.2 Store similarities as a graph
# We want to visualize the similarities found so far as a graph,
# where the verses are nodes and the similarities are edges.
# To that end we have to store the similarities in a format such that graph software can read it.
#
# We write out the graph data as a file in `.ncol` format, and we will use the python package *networkx* to
# read and process that file.
#
# We also produce:
# * a set of all verses encountered
# * a set of all chapters encountered
# * a set of all books encountered
# +
TF.info("Exporting graph info, assembling sets")
ncolfile = open(NCOL_FILE, "w")
for (x, y, r) in sorted(
crossrefs,
key=lambda z: (
bookNode[z[0][0]],
z[0][1],
z[0][2],
bookNode[z[1][0]],
z[1][1],
z[1][2],
),
):
ncolfile.write(
"{} {} {}\n".format(PASSAGER_FMT.format(*x), PASSAGE_FMT.format(*y), round(r))
)
ncolfile.close()
allVerses = {(x[0][0] + "r", x[0][1], x[0][2]) for x in crossrefs} | {
x[1] for x in crossrefs
}
allChapters = {(x[0], x[1]) for x in allVerses}
allBooks = {x[0] for x in allChapters}
TF.info(
"{} edges, {} verses, {} chapters, {} books".format(
len(crossrefs),
len(allVerses),
len(allChapters),
len(allBooks),
)
)
print(" ".join(sorted(allBooks)))
# -
# ## 1.3 Graph visualization
#
# We now visualize the similarities in a graph, using networkx.
# The layout is done manually, not following any of the methods provided by networkx.
#
# The verses are put into columns by the book they occur in,
# and our focus chapters occupy a separate column, thanks to the pseudo book ``2_Kingsr``.
# ``2_Kings`` stands for the other chapters of 2 Kings.
# The rows of verses are ordered textually.
#
# Finally, we shift rows of verses up and down in order to align them with their parallel stretches.
# The thickness of the edges corresponds to the degree of similarity, and likewise, the blacker the edge, the more
# similar the pair of verses.
#
# Now we read the graph data file, adjust layout settings, plot it and save it as PDF.
# +
# read the graph data
g = nx.read_weighted_edgelist(NCOL_FILE)
# order the books for handy layout in columns
allBooksCust = """
Jeremiah 2_Chronicles Isaiah 2_Kingsr 1_Kings 2_Kings Ezekiel Haggai Leviticus
""".strip().split()
# specify colors of books
gcolors = {
"2_Kingsr": (0.9, 0.9, 0.9),
"Isaiah": (1.0, 0.3, 0.3),
"Jeremiah": (0.3, 1.0, 0.3),
"2_Chronicles": (1.0, 0.3, 1.0),
"1_Kings": (0.3, 0.3, 1.0),
"2_Kings": (1.0, 1.0, 0.3),
"Leviticus": (0.7, 0.7, 0.7),
"Ezekiel": (0.7, 0.7, 0.7),
"Haggai": (0.7, 0.7, 0.7),
}
# specify vertical positions of passages
offsetY = {
"2_Kingsr": 0,
"Isaiah": 0,
"Jeremiah": 90,
"2_Chronicles": 52,
"1_Kings": 55,
"2_Kings": 75,
"Leviticus": 65,
"Ezekiel": 65,
"Haggai": 60,
}
# compute positions of verses
ncolors = [gcolors[x.split("~")[0]] for x in g.nodes()]
nlabels = dict((x, x.split("~")[1]) for x in g.nodes())
ncols = len(allBooks)
posX = dict((x, i) for (i, x) in enumerate(allBooksCust))
verseLists = collections.defaultdict(lambda: [])
for (bk, ch, vs) in sorted(allVerses):
verseLists[bk].append("{}:{}".format(ch, vs))
nrows = max(len(verseLists[bk]) for bk in allBooksCust)
pos = {}
for bk in verseLists:
for (i, chvs) in enumerate(verseLists[bk]):
pos["{}~{}".format(bk, chvs)] = (posX[bk], i + offsetY[bk])
# start plotting
plt.figure(figsize=(18, 40))
nx.draw_networkx(
g,
pos,
width=[g.get_edge_data(*x)["weight"] / 40 for x in g.edges()],
edge_color=[g.get_edge_data(*x)["weight"] for x in g.edges()],
edge_cmap=plt.cm.Greys,
edge_vmin=50,
edge_vmax=100,
node_color=ncolors,
node_size=100,
labels=nlabels,
alpha=0.4,
linewidths=0,
)
plt.ylim(-2, 130)
bookFontSize = 12
plt.grid(b=True, which="both", axis="x")
plt.title("Parallels involving 2_Kings 19-25", fontsize=24)
plt.text(
-1,
70,
"""
The parallels with
1_Kings and the other
chapters of 2_Kings
are weaker and
more sporadic.
Note that there are
also links within
2_Kings 19-25.
All these are probably
similar verses but
not parallels.
""", # bbox=dict(width=145, height=200, facecolor='yellow', alpha=0.4), fontsize=12)
# suddenly the width and height keyword args are no longer accepted.
# bbox performs an auto fit
bbox=dict(facecolor="yellow", alpha=0.4),
fontsize=12,
)
# add additional book labels
for (ypos, books) in (
(-1, allBooksCust),
(51, ["2_Chronicles", "Isaiah", "1_Kings"]),
(61, ["Leviticus", "Ezekiel", "Haggai"]),
(72, ["2_Kings"]),
(89, ["2_Chronicles", "Jeremiah"]),
(101, ["2_Kings"]),
(128, allBooksCust),
):
for bk in books:
plt.text(
posX[bk], ypos, bk, fontsize=bookFontSize, horizontalalignment="center"
)
# save the plot as pdf
plt.savefig("kings_parallels.pdf")
# -
# From the graph above we read off which are the interesting passages to compare:
#
# 2_Kings 19-20:19 with Isaiah 37-39:8
#
# 2_Kings 21-23:3 with 2_Chronicles 33-34:31
#
# 2_Kings 23:31-25:30 with Jeremiah 52-52:34
focus_books = {"2_Kings", "Isaiah", "Jeremiah", "2_Chronicles"}
# ## 1.4 Similarity study
#
# We have a closer look at the similarities between the passages as far as they are contained in the focus books.
#
# We make a pairwise comparison of all sentences, based on the Levenshtein ratio (the similarity based on the [Levenshtein distance](https://en.wikipedia.org/wiki/Levenshtein_distance)).
#
# First we divide the material up into sentences and fetch their texts from the database.
#
# For this we use the function `T.text(nodes, fmt='text-orig-full')`, an
# [API function](https://github.com/Dans-labs/text-fabric/wiki/Api#text-representation)
# of Text-Fabric.
# ### 1.4.1 Getting the sentences
#
# For each verse, we list its sentences and store them in two lists:
# * `chunks` is a list of verse-sentence references
# * `chunk_data` is a list of the texts of the corresponding sentences.
#
# We order them in a special way: first all our reference verses, i.e. 2 Kings 19-25, and then the rest in normal order.
# +
TF.info("Identifying sentences")
crossrefs_lcs = set()
all_sentences = set()
for v in allVerseNodes:
for w in L.d(v, "word"):
all_sentences.add(L.u(w, "sentence")[0])
TF.info("Found {} sentences in {} verses".format(len(all_sentences), len(allVerseNodes)))
TF.info("Grouping sentences by verse")
focus_sentences = collections.defaultdict(set)
for s in all_sentences:
fw = L.d(s, "word")[0]
(bk, ch, vs) = T.sectionFromNode(fw, lang=LANG)
if bk not in focus_books:
continue
if bk in REFBOOKS and ch not in REFCHAPTERS:
continue
focus_sentences[(bk, ch, vs)].add(s)
TF.info("Getting sentence texts")
chunks = []
chunk_data = []
# in the next line we order chunks such that the reference sentences come first, i.e.
# the ones from 2 Kings 19-25.
# When computing distances, if one of these verses is involved, it will
# occur in the first column, unless an other reference verse is in the first column
# It will not happen that when a reference verse is involved,
# a non-reference verse occupies the first column
for ((bk, ch, vs), sents) in sorted(
focus_sentences.items(), key=lambda x: (passage_key(x[0]), x[1])
):
for (sn, s) in enumerate(sorted(sents)):
chunk_data.append("".join(T.text(L.d(s, "word"), fmt="text-orig-full")))
chunks.append((bk, ch, vs, sn))
TF.info("Done: {} sentences in {} verses".format(len(chunks), len(focus_sentences)))
for i in range(5):
print("{} {}".format(chunks[i], chunk_data[i]))
# -
# ### 1.4.2 Comparing the sentences
#
# We compare all sentences to each other.
# The next cell performs the pairwise comparison by calling the Levenshtein function `ratio` for each pair.
# The result is stored in the dictionary `chunk_dist`, keyed by a pair of indexes in the `chunk` list and valued by the similarity of that pair.
#
# Here is more information about the
# [Levenshtein module](http://www.coli.uni-saarland.de/courses/LT1/2011/slides/Python-Levenshtein.html#Levenshtein-ratio).
TF.info("Comparing sentences and filtering the similar ones")
chunk_dist = {}
total_chunks = len(chunks)
for i in range(total_chunks):
c_i = chunk_data[i]
for j in range(i + 1, total_chunks):
c_j = chunk_data[j]
chunk_dist[(i, j)] = round(100 * ratio(c_i, c_j))
TF.info("Done: {} distances".format(len(chunk_dist)))
# ### 1.4.3 Analyzing the similarities
#
# The Levenshtein ratio of arbitrary but real texts tends not to be zero, but is typically closer to 0.5.
# We present an overview of how the actual similarity values of our sentence pairs are distributed.
#
# We plot the number of comparisons against the similarities, where for each similarity value we plot the number of pairs that have at most that similarity value.
# +
TF.info("Analyzing similarities")
sim_levels = collections.Counter()
for ((i, j), sim) in chunk_dist.items():
sim_levels[sim] += 1
cumsum = 0
sim_levels_cum = []
start = 0
end = 100
for sim in reversed(range(start, end + 1)):
cumsum += sim_levels.get(sim, 0)
sim_levels_cum.append(cumsum)
cummax = sim_levels_cum[100]
x = range(len(sim_levels_cum))
fig = plt.figure(figsize=(40, 20))
plt.plot(x[start:end], sim_levels_cum[start:end])
plt.axis([start, end, 0, cummax])
plt.xticks(x[start:end], range(start, end), rotation="vertical")
plt.margins(0.2)
plt.subplots_adjust(bottom=0.15)
plt.title("cumulative similarities")
plt.savefig("kings_similarities.pdf")
# -
# The full similarity file is [here](kings_similarities.tsv) (8 MB).
# It is a tab separated file with 9 columns: 4 for book, chapter, verse, sentence number of the first sentence, another 4 for the second sentence, and the last column holds the similarity of both sentences.
# See the code below.
#
TF.info("Writing similarities to disk")
field_template = ("{}\t" * 8) + "{}\n"
with open(SIMILAR_FILE, "w") as f:
f.write(
field_template.format(
"book_1",
"chap_1",
"verse_1",
"sen_1",
"book_2",
"chap2",
"verse_2",
"sen_2",
"sim",
)
)
for ((i, j), sim) in sorted(chunk_dist.items()):
f.write(field_template.format(*chunks[i], *chunks[j], sim))
TF.info("Done")
# ## 1.5 Showing text differences
#
# We are going to construct a verse-by-verse table
# containing all parallels in text with difference markup.
# Red and green indicate material that is absent on the other side.
# Yellow marks material that corresponds on both sides but differs.
#
# For each verse we also give a list of the lexemes that are not shared by the two verses in that comparison.
#
# ### 1.5.1 Formatting
# The table will be formatted in HTML. Therefore we specify a stylesheet.
# +
css = """
<style type="text/css">
table.t {
width: 100%;
border-collapse: collapse;
}
table.h {
direction: rtl;
}
table.p {
direction: ltr;
}
tr.t.tb {
border-top: 2px solid #aaaaaa;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
tr.t.bb {
border-bottom: 2px solid #aaaaaa;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
th.t {
font-family: Verdana, Arial, sans-serif;
font-size: large;
vertical-align: middle;
text-align: center;
padding-left: 2em;
padding-right: 2em;
padding-top: 1ex;
padding-bottom: 2ex;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
}
td.t {
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
padding-left: 1em;
padding-right: 1em;
padding-top: 0.3ex;
padding-bottom: 0.5ex;
}
td.h {
font-family: Ezra SIL, SBL Hebrew, Verdana, sans-serif;
font-size: large;
line-height: 1.6;
text-align: right;
direction: rtl;
}
td.ld {
font-family: Ezra SIL, SBL Hebrew, Verdana, sans-serif;
font-size: medium;
line-height: 1.2;
text-align: right;
vertical-align: top;
direction: rtl;
width: 10%;
}
td.p {
font-family: Verdana, sans-serif;
font-size: large;
line-height: 1.3;
text-align: left;
direction: ltr;
}
td.vl {
font-family: Verdana, Arial, sans-serif;
font-size: small;
text-align: right;
vertical-align: top;
color: #aaaaaa;
width: 5%;
direction: ltr;
border-left: 2px solid #aaaaaa;
border-right: 2px solid #aaaaaa;
padding-left: 0.4em;
padding-right: 0.4em;
padding-top: 0.3ex;
padding-bottom: 0.5ex;
}
span.m {
background-color: #aaaaff;
}
span.f {
background-color: #ffaaaa;
}
span.x {
background-color: #ffffaa;
color: #bb0000;
}
span.delete {
background-color: #ffaaaa;
}
span.insert {
background-color: #aaffaa;
}
span.replace {
background-color: #ffff00;
}
</style>
"""
html_file_tpl = """<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>{}</title>
{}
</head>
<body>
{}
</body>
</html>"""
# -
# ### 1.5.2 Auxiliary functions
# The following auxiliary functions help to wrap the text into HTML.
# +
def lex_diff(c1, c2):
v1 = T.nodeFromSection(c1, lang=LANG)
v2 = T.nodeFromSection(c2, lang=LANG)
lex1 = {F.lex_utf8.v(w) for w in L.d(v1, "word")}
lex2 = {F.lex_utf8.v(w) for w in L.d(v2, "word")}
return (lex1 - lex2, lex2 - lex1)
compare_lexemes = {}
for (c1, c2, r) in crossrefs:
compare_lexemes[(c1, c2)] = lex_diff(c1, c2)
def print_label(vl, without_book=True):
bookrep = "" if without_book else "{} ".format(vl[0])
return "{}{}:{}".format(bookrep, vl[1], vl[2]) if vl[0] != "" else ""
def print_diff(a, b):
arep = ""
brep = ""
for (lb, ai, aj, bi, bj) in SequenceMatcher(
isjunk=None, a=a, b=b, autojunk=False
).get_opcodes():
if lb == "equal":
arep += a[ai:aj]
brep += b[bi:bj]
elif lb == "delete":
arep += '<span class="{}">{}</span>'.format(lb, a[ai:aj])
elif lb == "insert":
brep += '<span class="{}">{}</span>'.format(lb, b[bi:bj])
else:
arep += '<span class="{}">{}</span>'.format(lb, a[ai:aj])
brep += '<span class="{}">{}</span>'.format(lb, b[bi:bj])
return (arep, brep)
def get_vtext(v, hp):
return "".join(
"{}".format(
T.text(
L.d(v, "word"), fmt="text-orig-full" if hp == "h" else "text-phono-full"
)
)
)
def print_chunk(v1, v2, hp):
vn1 = T.nodeFromSection(v1, lang=LANG)
vn2 = T.nodeFromSection(v2, lang=LANG)
text1 = get_vtext(vn1, hp)
text2 = get_vtext(vn2, hp)
(lexdiff1, lexdiff2) = lex_diff(v1, v2)
(line1, line2) = print_diff(text1, text2)
return """
<tr class="t tb">
<td class="vl">{b1}</td>
<td class="t {hp}">{l1}</td>
<td class="t ld"><span class="delete">{ld1}</span></td>
<td class="t ld"><span class="insert">{ld2}</span></td>
<td class="t {hp}">{l2}</td>
<td class="vl">{b2}</td>
</tr>
""".format(
b1=print_label(v1, without_book=False),
l1=line1,
ld1=" ".join(sorted(lexdiff1)),
ld2=" ".join(sorted(lexdiff2)),
b2=print_label(v2, without_book=False),
l2=line2,
hp=hp,
)
def print_passage(cmp_list, hp):
result = []
for item in cmp_list:
result.append(print_chunk(item[0], item[1], hp))
return "\n".join(result)
def get_lex_summ(book, my_own_lex):
result = []
for (lex, n) in sorted(my_own_lex[book].items(), key=lambda x: (-x[1], x[0])):
result.append('<span class="ld">{}</span> {}<br/>'.format(lex, n))
return "\n".join(result)
def print_lexeme_summary(book1, book2, my_own_lex):
return """
<tr class="t tb">
<td class="vl"> </td>
<td class="t"> </td>
<td class="t ld"><span class="delete">{ldr}</span></td>
<td class="t ld"><span class="insert">{ldp}</span></td>
<td class="t"> </td>
<td class="vl"> </td>
</tr>
""".format(
ldr=get_lex_summ(book1, my_own_lex),
ldp=get_lex_summ(book2, my_own_lex),
)
def print_table(hp):
result = """
<table class="t {}">
""".format(
hp
)
result += print_passage(sorted(crossrefs), hp)
result += """
</table>
"""
return result
# -
# ### 1.5.3 Delivering results
# And here we put everything together and produce the HTML
# (in fully pointed Hebrew and in a phonetic transcription) and save it to file.
html_text_h = html_file_tpl.format(
"2 Kings 19-26 and parallels [Hebrew]",
css,
print_table("h"),
)
html_text_p = html_file_tpl.format(
"2 Kings 19-26 and parallels [phonetic]",
css,
print_table("p"),
)
ht = open("kings_parallels_h.html", "w")
ht.write(html_text_h)
ht.close()
ht = open("kings_parallels_p.html", "w")
ht.write(html_text_p)
ht.close()
# # 2. Qumran Scroll 1QIsa<sup>a</sup>
#
# Here we read the transcribed text of 1QIsa<sup>a</sup> and store it in the variable `qisa`.
TF.info("reading 1QIsaa")
qf = open(QISA_FILE)
qisa = collections.defaultdict(lambda: collections.defaultdict(lambda: []))
nwords = 0
for line in qf:
nwords += 1
(passage, word, xword) = line.strip().split()
(chapter, verse) = passage.split(",")
qisa[int(chapter)][int(verse)].append(Transcription.to_hebrew_x(word))
qf.close()
TF.info(
"{} words in {} chapters in {} verses".format(
nwords, len(qisa), sum(len(qisa[x]) for x in qisa)
)
)
print(" ".join(qisa[1][1]))
# ## 2.1 Auxiliary functions
# Here are the functions to make comparisons with the Qumran (1QIsa<sup>a</sup>) text.
#
# Note that the Qumran text is unpointed, so we compare it with an unpointed representation of the Masoretic text.
# We also strip the marks from the s(h)in letter, so that we ignore any distinction between sin and shin in both sources.
# +
wh = "10px"
wn = "10px"
ww = "200px"
diffhead = """
<head>
<meta http-equiv="Content-Type"
content="text/html; charset=UTF-8" />
<title></title>
<style type="text/css">
table.diff {{
font-family: Ezra SIL, SBL Hebrew, Verdana, sans-serif;
font-size: large;
text-align: right;
width: 100%;
}}
td, th {{padding-left: 4px; padding-right: 4px}}
td[nowrap] {{width: {ww}; min-width: {ww}; max-width: {ww}; }}
th.diff_next {{width: {wn}; min-width: {wn}; max-width: {wn}; }}
td.diff_next {{text-align:left; font-size: small; width: {wn}; min-width: {wn}; max-width: {wn}; }}
.diff_next {{background-color:#c0c0c0; font-size: small}}
th.diff_header {{width: {wh}; min-width: {wh}; max-width: {wh}; }}
td.diff_header {{text-align:left; width: {wh}; min-width: {wh}; max-width: {wh}; }}
.diff_header {{background-color:#e0e0e0; font-size: small}}
.diff_add {{background-color:#aaffaa}}
.diff_chg {{background-color:#ffff77}}
.diff_sub {{background-color:#ffaaaa}}
</style>
</head>
""".format(
wh=wh, wn=wn, ww=ww
)
def shin(x):
return x.replace("\uFB2A", "ש").replace("\uFB2B", "ש")
def lines_chapter_mt(ch):
vn = T.nodeFromSection(("Isaiah", ch, 1), lang=LANG)
cn = L.u(vn, "chapter")[0]
lines = []
for v in L.d(cn, "verse"):
# vl = F.verse.v(v)
text = (
T.text(L.d(v, "word"), fmt="text-orig-plain")
.replace("\u05BE", " ")
.replace("\u05C3", "")
) # maqef and sof pasuq
# lines.append('{} {}'.format(vl, text))
lines.append(text)
return lines
def lines_chapter_1q(ch):
lines = []
for v in qisa[ch]:
text = " ".join(qisa[ch][v])
# lines.append('{} {}'.format(v, shin(text.strip())))
lines.append(shin(text.strip()))
return lines
def compare_chapters(c1, c2, lb1, lb2, head=True):
dh = difflib.HtmlDiff(wrapcolumn=50)
table_html = dh.make_table(
c1,
c2,
fromdesc=lb1,
todesc=lb2,
context=False,
numlines=5,
)
htext = (
"""<html>{}<body>{}</body></html>""".format(diffhead, table_html)
if head
else table_html
)
return htext
def mt1q_chapter_diff(ch, head=True):
lines_mt = lines_chapter_mt(ch)
lines_1q = lines_chapter_1q(ch)
return compare_chapters(
lines_mt,
lines_1q,
"Isaiah {} MT".format(ch),
"Isaiah {} 1QIsa<sup>a</sup>".format(ch),
head=head,
)
# -
# ## 2.2 Delivery of results
# And next we produce the actual HTML results.
# +
TF.info("Writing chapter diffs")
for ch in range(37, 40):
ht = open("Isaiah-mt-1QIsaa_{}.html".format(ch), "w")
ht.write(mt1q_chapter_diff(ch))
ht.close()
# Now the whole of Isaiah
TF.info("Writing whole Isaiah")
ht = open("Isaiah-mt-1QIsaa.html", "w")
ht.write("""<html>{}<body>""".format(diffhead))
for ch in range(1, 67):
ht.write(mt1q_chapter_diff(ch, head=False))
ht.write("""</body></html>""")
ht.close()
TF.info("Done")
| programs/kings_ii.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
# %matplotlib inline
# ## Read Data
# +
traffic=pd.read_csv('../data/train_2.csv')
key=pd.read_csv('../data/key_2.csv')
sub=pd.read_csv('../data/sample_submission_2.csv')
# -
traffic.head()
# ## Extract meta data
# +
names=traffic.Page.values
langs=[]
access=[]
vtype=[]
entity=[]
for n in names:
elements=n.split('_')
langs.append(elements[-3])
access.append(elements[-2])
vtype.append(elements[-1])
entity.append(' '.join(elements[:-3]))
# -
# ### Visitor types
np.unique(vtype)
# ### Access types
np.unique(access)
# ### Domain types (Language types)
# There are only three domain types; for the '.wikipedia.org' domains, the language is the leading subdomain.
# 7 languages (de,en,es,fr,ja,ru,zh)
add_type1=sum(list(map((lambda address: '.wikipedia.org' in address), langs)))
add_type2=sum(list(map((lambda address: 'www.mediawiki.org' in address), langs)))
add_type3=sum(list(map((lambda address: 'commons.wikimedia.org' in address), langs)))
assert (add_type1+add_type2+add_type3)==len(langs),'more than 3 types of http address'
np.unique(langs)
# ### Example of entities
np.random.choice(entity,15)
# ## Build new dataframe
traffic['langs']=langs
traffic['vtype']=vtype
traffic['access']=access
traffic['entity']=entity
traffic.drop(['Page'],axis=1,inplace=True)
traffic.head()
# ### Save
# +
# traffic.to_csv('../data/cl_traffic.csv',index=False)
# +
# traffic=pd.read_csv('../data/cl_traffic.csv')
# -
traffic.head()
# ## Exploratory analysis
gpb_lang=traffic.groupby(['langs']).sum()
plt.figure(figsize=(20,5))
for i in range(len(gpb_lang)):
language=gpb_lang.index[i]
data=gpb_lang.iloc[i,:]
plt.plot(data,label=language)
plt.legend()
gpb_lang=traffic.groupby(['vtype']).sum()
plt.figure(figsize=(20,5))
for i in range(len(gpb_lang)):
language=gpb_lang.index[i]
data=gpb_lang.iloc[i,:]
plt.plot(data,label=language)
plt.legend()
gpb_lang=traffic.groupby(['access']).sum()
plt.figure(figsize=(20,5))
for i in range(len(gpb_lang)):
language=gpb_lang.index[i]
data=gpb_lang.iloc[i,:]
plt.plot(data,label=language)
plt.legend()
| data_clean/explor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Building Tilesets using Xarray-Spatial and Datashader
# Xarray-Spatial provides `render_tiles` which is a utility function for creating tilesets.
# +
import pandas as pd
import datashader as ds
import numpy as np
from xrspatial.tiles import render_tiles
# -
import geopandas as gpd
world = gpd.read_file(
gpd.datasets.get_path('naturalearth_lowres')
)
world = world.to_crs("EPSG:3857")
world = world[world.continent != 'Antarctica']
world.plot(figsize=(7, 7))
# ## Define Tiling Component Functions
# ### Create `load_data_func`
# - accepts `x_range` and `y_range` arguments which correspond to the ranges of the supertile being rendered.
# - returns a dataframe-like object (pd.Dataframe / dask.Dataframe)
# - this example `load_data_func` returns the slice of the `world` GeoDataFrame that falls within the requested ranges
# +
import pandas as pd
import numpy as np
def load_data_func(x_range, y_range):
    return world.cx[x_range[0]:x_range[1], y_range[0]:y_range[1]]
# -
# ### Create `rasterize_func`
# - accepts `df`, `x_range`, `y_range`, `height`, `width` arguments which correspond to the data, ranges, and plot dimensions of the supertile being rendered.
# - returns an `xr.DataArray` object representing the aggregate.
# +
import datashader as ds
from spatialpandas import GeoDataFrame
def rasterize_func(df, x_range, y_range, height, width):
spatialpandas_df = GeoDataFrame(df, geometry='geometry')
# aggregate
cvs = ds.Canvas(x_range=x_range, y_range=y_range,
plot_height=height, plot_width=width)
agg = cvs.polygons(spatialpandas_df, 'geometry')
return agg
# -
# ### Create `shader_func`
# - accepts `agg (xr.DataArray)`, `span (tuple(min, max))`. The span argument can be used to control color mapping / auto-ranging across supertiles.
# - returns a `ds.Image` object representing the shaded image.
# +
import datashader.transfer_functions as tf
from datashader.colors import viridis
def shader_func(agg, span=None):
img = tf.shade(agg, cmap=['black', 'teal'], span=span, how='log')
img = tf.set_background(img, 'black')
return img
# -
# ### Create `post_render_func`
# - accepts `img`, `extras` arguments which correspond to the output PIL.Image before it is written to disk (or S3), and additional image properties.
# - returns image `(PIL.Image)`
# - this is a good place to run any non-datashader-specific logic on each output tile.
def post_render_func(img, **kwargs):
    # e.g. log or stamp the tile coordinates before the tile is written
    info = "x={},y={},z={}".format(kwargs['x'], kwargs['y'], kwargs['z'])
    return img
# ## Render tiles to local filesystem
# +
full_extent_of_data = (-20e6, 20e6,
-20e6, 20e6)
output_path = '/Users/bcollins/temp/test_world/'
results = render_tiles(full_extent_of_data,
range(0, 8),
load_data_func=load_data_func,
rasterize_func=rasterize_func,
shader_func=shader_func,
post_render_func=post_render_func,
output_path=output_path,
color_ranging_strategy=(0,2))
# -
# ### Preview the tileset using Bokeh
# - Browse to the tile output directory and start an http server:
#
# ```bash
# $> cd test_tiles_output
# $> python -m http.server 10000
#
# Serving HTTP on 0.0.0.0 port 10000 (http://0.0.0.0:10000/) ...
# ```
#
# - build a `bokeh.plotting.Figure`
from bokeh.io import output_notebook, show, output_file
output_notebook()
# ### Preview Tiles
# +
xmin, ymin, xmax, ymax = full_extent_of_data
from bokeh.plotting import figure
from bokeh.models.tiles import WMTSTileSource
p = figure(width=800, height=800,
x_range=(int(-20e6), int(20e6)),
y_range=(int(-20e6), int(20e6)),
tools="pan,wheel_zoom,reset")
p.axis.visible = False
p.background_fill_color = 'black'
p.grid.grid_line_alpha = 0
p.add_tile(WMTSTileSource(url="http://localhost:10000/{Z}/{X}/{Y}.png"),
render_parents=False)
show(p)
# -
| examples/tiling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # GraphBLAS function signatures
# version 1.3 of the spec
#
# - mxm
# - vxm
# - mxv
# - ewise_mult
# - ewise_add
# - extract
# - assign
# - apply
# - reduce
# - transpose
# - kronecker
#
#
#
# ### General Notes:
# 1. The first item of every signature is the IN/OUT object. It will be the output, but is also used as the input if accumulation is specified.
# 2. Descriptors have 4 settings (1st arg transpose, 2nd arg transpose, Mask complement, Input Removed)
# ## mxm
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# op : Semiring
# A : Matrix
# B : Matrix
# desc : Descriptor (optional, ttcr)
# ```
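# The semantics above can be sketched with a hypothetical pure-Python model of sparse matrices as `{(row, col): value}` dicts. This illustrates only the mask/accum behavior of `mxm`, not the GraphBLAS C API:

```python
def mxm(C, A, B, add=lambda x, y: x + y, mul=lambda x, y: x * y,
        mask=None, accum=None):
    """C = A (+.*) B over the plus-times semiring by default.

    C, A, B are sparse matrices modeled as {(i, j): value} dicts.
    `mask` restricts which output entries may be written;
    `accum` combines existing C values with the new products.
    """
    out = {}
    for (i, k), a in A.items():
        for (k2, j), b in B.items():
            if k != k2:
                continue
            key = (i, j)
            out[key] = add(out[key], mul(a, b)) if key in out else mul(a, b)
    result = {}
    for key, v in out.items():
        if mask is not None and key not in mask:
            continue  # masked-out entries are not written
        result[key] = accum(C[key], v) if (accum and key in C) else v
    return result


A = {(0, 0): 1, (0, 1): 2}
B = {(0, 0): 3, (1, 0): 4}
print(mxm({}, A, B))  # {(0, 0): 11}  (1*3 + 2*4)
```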
# ## vxm
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : Semiring
# u : Vector
# A : Matrix
# desc : Descriptor (optional, otcr)
# ```
# ## mxv
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : Semiring
# A : Matrix
# u : Vector
# desc : Descriptor (optional, tocr)
# ```
# ## ewise_mult
# Note: _Operates on the intersection of indexes_
#
# Vector Variant
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : BinaryOp, Monoid, Semiring (mult only)
# u : Vector
# v : Vector
# desc : Descriptor (optional, oocr)
# ```
#
# Matrix Variant
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# op : BinaryOp, Monoid, Semiring (mult only)
# A : Matrix
# B : Matrix
# desc : Descriptor (optional, ttcr)
# ```
# ## ewise_add
# Note: _Operates on the union of indexes_
#
# Vector Variant
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : BinaryOp, Monoid, Semiring (add only)
# u : Vector
# v : Vector
# desc : Descriptor (optional, oocr)
# ```
#
# Matrix Variant
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# op : BinaryOp, Monoid, Semiring (add only)
# A : Matrix
# B : Matrix
# desc : Descriptor (optional, ttcr)
# ```
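# The intersection/union distinction between `ewise_mult` and `ewise_add` can be sketched on hypothetical dict-based sparse vectors (again a model of the semantics, not the C API):

```python
def ewise_mult(u, v, op=lambda a, b: a * b):
    # only indexes present in BOTH vectors produce output
    return {i: op(u[i], v[i]) for i in u.keys() & v.keys()}


def ewise_add(u, v, op=lambda a, b: a + b):
    # indexes present in EITHER vector produce output;
    # op combines values only where both are present
    out = dict(u)
    for i, val in v.items():
        out[i] = op(out[i], val) if i in out else val
    return out


u = {0: 2, 2: 3}
v = {2: 5, 4: 7}
print(ewise_mult(u, v))  # {2: 15}
print(ewise_add(u, v))   # {0: 2, 2: 8, 4: 7}
```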
# ## extract
# use GrB_ALL to indicate all indices (similar to [:] in numpy)
#
# Standard Vector Variant
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# u : Vector
# *indices : Index array (indexes from u)
# nindices : Index (int, must be same size as w)
# desc : Descriptor (optional, oocr)
# ```
#
# Standard Matrix Variant
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# A : Matrix
# *row_indices : Index array (indexes from A)
# nrows : Index (int, must be same size as C.nrows)
# *col_indices : Index array (indexes from A)
# ncols : Index (int, must be same size as C.ncols)
# desc : Descriptor (optional, tocr)
# ```
#
# Column Variant (rows are possible with transpose of A)
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# A : Matrix
# *row_indices : Index array (indexes from A)
# nrows : Index (int, must be same size as w)
# col_index : Index
# desc : Descriptor (optional, tocr)
# ```
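# A hypothetical dict-based sketch of the standard vector extract, where `w[k] = u[indices[k]]` and entries missing from `u` stay missing in `w`:

```python
def extract(u, indices):
    # w[k] = u[indices[k]] for each position k of the index array;
    # positions whose source entry is absent produce no output entry
    return {k: u[idx] for k, idx in enumerate(indices) if idx in u}


u = {0: 10, 2: 30, 5: 60}
print(extract(u, [5, 1, 2]))  # {0: 60, 2: 30}
```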
# ## assign
#
# Standard Vector Variant
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# u : Vector
# *indices : Index array (indexes into w)
# nindices : Index (int, must be same size as u)
# desc : Descriptor (optional, oocr)
# ```
#
# Standard Matrix Variant
# ```
# C : Matrix
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# A : Matrix
# *row_indices : Index array (indexes into C)
# nrows : Index (int, must be same size as A.nrows)
# *col_indices : Index array (indexes into C)
# ncols : Index (int, must be same size as A.ncols)
# desc : Descriptor (optional, tocr)
# ```
#
# Column Variant
# ```
# C : Matrix (input & output)
# mask : Matrix (optional)
# accum : BinaryOp (optional)
# u : Vector
# *row_indices : Index array (indexes into C)
# nrows : Index (int, must be same size as u)
# col_index : Index
# desc : Descriptor (optional, oocr)
# ```
#
# Row Variant
# ```
# C : Matrix (input & output)
# mask : Matrix (optional)
# accum : BinaryOp (optional)
# u : Vector
# row_index : Index
# *col_indices : Index array (indexes into C)
# ncols : Index (int, must be same size as u)
# desc : Descriptor (optional, oocr)
# ```
#
# Constant Vector Variant
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# val : <type> (Scalar)
# *indices : Index array (indexes into w)
# nindices : Index (int)
# desc : Descriptor (optional, oocr)
# ```
#
# Constant Matrix Variant
# ```
# C : Matrix (input & output)
# mask : Matrix (optional)
# accum : BinaryOp (optional)
# val : <type> Scalar
# *row_indices : Index array (indexes into C)
# nrows : Index (int)
# *col_indices : Index array (indexes into C)
# ncols : Index (int)
# desc : Descriptor (optional, oocr)
# ```
# ## apply
# Computes the transformation of elements using a unary function or a binary function bound to a scalar
#
# Vector Variant
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : UnaryOp
# u : Vector
# desc : Descriptor (optional, oocr)
# ```
#
# Matrix Variant
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# op : UnaryOp
# A : Matrix
# desc : Descriptor (optional, tocr)
# ```
#
# Vector BinaryOp Variant # 1
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : BinaryOp
# val : <type> Scalar
# u : Vector
# desc : Descriptor (optional, oocr)
# ```
#
# Vector BinaryOp Variant # 2
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : BinaryOp
# u : Vector
# val : <type> Scalar
# desc : Descriptor (optional, oocr)
# ```
#
# Matrix BinaryOp Variant # 1
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# op : BinaryOp
# val : <type> Scalar
# A : Matrix
# desc : Descriptor (optional, otcr)
# ```
#
# Matrix BinaryOp Variant # 2
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# op : BinaryOp
# A : Matrix
# val : <type> Scalar
# desc : Descriptor (optional, tocr)
# ```
# ## reduce
# Converts Matrix to Vector
#
# Matrix Row-wise Variant (columns are possible with transpose of A)
# ```
# w : Vector (input & output)
# mask : Vector (optional)
# accum : BinaryOp (optional)
# op : Monoid, BinaryOp
# A : Matrix
# desc : Descriptor (optional, tocr)
# ```
#
# Vector-to-Scalar Variant
# ```
# *val : <type> Scalar (input & output)
# accum : BinaryOp (optional)
# op : Monoid
# u : Vector
# desc : Descriptor (optional, oooo)
# ```
#
# Matrix-to-Scalar Variant
# ```
# *val : <type> Scalar (input & output)
# accum : BinaryOp (optional)
# op : Monoid
# A : Matrix
# desc : Descriptor (optional, oooo)
# ```
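# A hypothetical dict-based sketch of the row-wise matrix reduction, collapsing each row of a sparse matrix into a single vector entry with the monoid:

```python
def reduce_rows(A, monoid=lambda x, y: x + y):
    # fold each row of the sparse matrix A into one entry of vector w;
    # rows with no stored values produce no output entry
    w = {}
    for (i, _j), val in A.items():
        w[i] = monoid(w[i], val) if i in w else val
    return w


A = {(0, 0): 1, (0, 2): 4, (2, 1): 7}
print(reduce_rows(A))  # {0: 5, 2: 7}
```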
# ## transpose
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# A : Matrix
# desc : Descriptor (optional, tocr)
# ```
# ## kronecker
# ```
# C : Matrix (input & output)
# Mask : Matrix (optional)
# accum : BinaryOp (optional)
# op : BinaryOp, Monoid, Semiring (mult only)
# A : Matrix
# B : Matrix
# desc : Descriptor (optional, ttcr)
# ```
# # Object Methods
# ## Vector_new
# ```
# *v : Vector
# d : Type
# nsize : Index
# ```
#
# ## Matrix_new
# ```
# *A : Matrix
# d : Type
# nrows : Index
# ncols : Index
# ```
# ## Vector_dup
# ```
# *w : Vector
# u : Vector
# ```
#
# ## Matrix_dup
# ```
# *C : Matrix
# A : Matrix
# ```
# ## Vector_resize
# ```
# w : Vector
# nsize : Index
# ```
#
# ## Matrix_resize
# ```
# C : Matrix
# nrows : Index
# ncols : Index
# ```
# ## Vector_clear
# ```
# v : Vector
# ```
#
# ## Matrix_clear
# ```
# A : Matrix
# ```
# ## Vector_size
# ```
# *nsize : Index
# v : Vector
# ```
#
# ## Matrix_nrows
# ```
# *nrows : Index
# A : Matrix
# ```
#
# ## Matrix_ncols
# ```
# *ncols : Index
# A : Matrix
# ```
# ## Vector_nvals
# ```
# *nvals : Index
# v : Vector
# ```
#
# ## Matrix_nvals
# ```
# *nvals : Index
# A : Matrix
# ```
# ## Vector_build
# ```
# w : Vector
# *indices : Index
# *values : <type>
# n : Index (length of indices and values arrays)
# dup : BinaryOp # Allows for combining values with the same index
# ```
#
# ## Matrix_build
# ```
# C : Matrix
# *row_indices : Index
# *col_indices : Index
# *values : <type>
# n : Index (length of indices and values arrays)
# dup : BinaryOp # Allows for combining values with same index
# ```
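# A hypothetical dict-based sketch of `Vector_build`, showing how the `dup` BinaryOp combines values that share an index:

```python
import operator


def vector_build(indices, values, dup=lambda old, new: new):
    # duplicates at the same index are combined with `dup`
    # (default: last value wins)
    w = {}
    for i, val in zip(indices, values):
        w[i] = dup(w[i], val) if i in w else val
    return w


print(vector_build([0, 3, 0], [1, 2, 5], dup=operator.add))  # {0: 6, 3: 2}
```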
# ## Vector_setElement
# ```
# w : Vector
# val : <type> Scalar
# index : Index
# ```
#
# ## Matrix_setElement
# ```
# C : Matrix
# val : <type> Scalar
# row_index : Index
# col_index : Index
# ```
# ## Vector_removeElement
# ```
# w : Vector
# index : Index
# ```
#
# ## Matrix_removeElement
# ```
# C : Matrix
# row_index : Index
# col_index : Index
# ```
# ## Vector_extractElement
# ```
# *val : <type>
# u : Vector
# index : Index
# ```
#
# ## Matrix_extractElement
# ```
# *val : <type>
# A : Matrix
# row_index : Index
# col_index : Index
# ```
# ## Vector_extractTuples
# ```
# *indices : Index
# *values : <type>
# *n : Index (on input, indicates size of indices and values arrays; on output, how many values were written)
# v : Vector
# ```
#
# ## Matrix_extractTuples
# ```
# *row_indices : Index
# *col_indices : Index
# *values : <type>
# *n : Index (on input, indicates size of indices and values arrays; on output, how many values were written)
# A : Matrix
# ```
| notebooks/GraphBLAS Function signatures.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # AGE Samples
#
# ## Prepare
# ```
# import age
# ```
# ## Connect to PostgreSQL (with AGE extension)
# * Connect to PostgreSQL server
# * Load AGE and register agtype to db session (Psycopg2 driver)
# * Check whether the graph exists and set it. If not, age creates it.
#
# ```
# ag = age.connect(graph="(graph name}", host="{host}", port="{port}", dbname="{dbname}", user="{db username}", password="{password}")
#
# # or
# DSN = "host={host} port={port} dbname={dbname} user={db username} password={password}"
# ag = age.connect(graph="(graph name}", dsn=DSN)
#
# # or without a graph name: you can create a new graph later.
#
# ag = age.connect(host="{host}", port="{port}", dbname="{dbname}", user="{db username}", password="{password}")
#
# # And set the graph - if it does not exist yet, setGraph creates it.
# ag.setGraph("{graph name}")
# ```
# + tags=[]
import age
from age.gen.ageParser import *
GRAPH_NAME = "test_graph"
DSN = "host=172.17.0.2 port=5432 dbname=postgres user=postgres password=<PASSWORD>"
ag = age.connect(graph=GRAPH_NAME, dsn=DSN)
ag.setGraph(GRAPH_NAME)
cursor = ag.execCypher("MATCH (n) RETURN n")
for row in cursor:
print(row[0])
# -
# ---
# # API
#
# ### age.connect(graph:str=None, dsn:str=None, connection_factory=None, cursor_factory=None, **kwargs) -> Age
# > Connect to the PostgreSQL server
# Parameters : dsn={dsn} or
# host="{host}", port="{port}", dbname="{dbname}", user="{db username}", password="{password}"
#
# ### Age.commit() , Age.rollback()
# > If your statement changes data, you must call 'Age.commit()' explicitly; otherwise the change will not take effect.
# > When an execution error occurs, you must call 'Age.rollback()'.
#
# ### Age.close()
# > Closes connection to PostgreSQL.
#
# ### Age.execCypher(cypherStmt:str, cols:list=None, params:tuple=None) -> psycopg2.extensions.cursor :
# > Execute cypher statements to query or change data (CREATE, SET, REMOVE) with or without a result.
# > If your statement changes data, you must call 'Age.commit()' explicitly; otherwise the change will not take effect.
#
# > If your execution returns no result or only one result, you don't have to set the 'cols' argument.
# > But if it returns multiple columns, you have to pass the column names (and types) in the 'cols' argument.
#
# > cols : str list \[ 'colName {type}', ... \] : If a column's data type is not set, agtype is the default.
#
# ### Age.cypher(cursor:psycopg2.extensions.cursor, cypherStmt:str, cols:list=None, params:tuple=None) -> psycopg2.extensions.cursor :
# > If you want to execute many statements (possibly statements that change data) within one explicit transaction, you may use the Age.cypher(...) function.
#
# > To create the cursor and manage the transaction, you usually use a 'with' clause.
#
# > If your execution returns no result or only one result, you don't have to set the 'cols' argument.
# > But if it returns multiple columns, you have to pass the column names (and types) in the 'cols' argument.
#
# > cols : str list \[ 'colName {type}', ... \] : If a column's data type is not set, agtype is the default.
#
# ---
# ## Create & Change Vertices
#
# > If a cypher statement changes data (create, set, remove),
# you must use execCypher(cypherStmt, commit, *args).
#
# > If the **'commit'** argument is **True**: the statement takes effect automatically, but the cursor is closed after execution, so you cannot access the result.
# If **False**: you can access the result, but you must commit the session (ag.commit()) explicitly.
# (Otherwise the execution has no effect.)
#
#
# > execCypher(cypherStmt:str, commit:bool, *args)
#
# ```
# cursor = ag.execCypher("CREATE(...)", commit=False) # Cypher Create Statement
# ...
# # check result in cursor
# ...
# ag.commit() # commit explicitly
# ```
#
# +
# Create Vertices
ag.execCypher("CREATE (n:Person {name: 'Joe'})")
ag.execCypher("CREATE (n:Person {name: 'Smith'})")
# Execution with one agtype result
cursor = ag.execCypher("CREATE (n:Person {name: %s}) RETURN n", params=('Jack',))
for row in cursor:
print("CREATED: ", row[0])
cursor = ag.execCypher("CREATE (n:Person {name: %s, title: 'Developer'}) RETURN id(n)", params=('Andy',))
for row in cursor:
print("CREATED: ", row[0])
# Execution with one result as SQL TYPE
cursor = ag.execCypher("MATCH (n:Person {name: %s}) SET n.title=%s RETURN n.title", cols=["a VARCHAR"], params=('Smith','Manager',))
for row in cursor:
print("SET: ", row[0])
# Execution with one result as SQL TYPE
cursor = ag.execCypher("MATCH (n:Person {name: %s}) REMOVE n.title RETURN id(n)", cols=["a BIGINT"], params=('Smith',))
for row in cursor:
print("REMOVE Prop: ", row[0])
# You must commit explicitly
ag.commit()
# -
# ---
# ## Query Vertices
#
# > execCypher(cypherStmt:str, cols:list=None, params:tuple=None)
#
# ### Single result column
#
# ```
# cursor = ag.execCypher("MATCH (n:Person {name: %s) RETURN n", params('Andy',))
# for row in cursor:
# vertex = row[0]
# print(vertex.id, vertex["name"], vertex) # row has id, label, properties
# ```
#
# ### Multi result columns
#
# ```
# cursor = ag.execCypher("MATCH (n:Person) RETURN label(n), n.name", cols=['label VARCHAR', 'name'])
# for row in cursor:
# label = row[0]
# name = row[1]
# print(label, name)
# ```
#
#
# ### Vertex objects have id and label attributes, and __getitem__/__setitem__ for properties
# ```
# vertex.id
# vertex.label
# vertex["property_name"]
# ```
# +
# Query Vertices with parsed row cursor.
print("-- Query Vertices --------------------")
cursor = ag.execCypher("MATCH (n:Person) RETURN n")
for row in cursor:
vertex = row[0]
print(vertex)
print(vertex.id, vertex.label, vertex["name"])
print("-->", vertex)
# Query Vertices with with multi column
print("-- Query Vertices with with multi columns. --------------------")
cursor = ag.execCypher("MATCH (n:Person) RETURN label(n), n.name", cols=['label VARCHAR', 'name'])
for row in cursor:
    label = row[0]
    name = row[1]
    print(label, name)
# -
# ---
# ## Create Relation
#
# > execCypher(cypherStmt:str, commit:bool, *args)
#
#
# ```
# # Execute statement and handle results
# cursor = ag.execCypher("MATCH (a:Person), (b:Person) WHERE a.name = %s AND b.name = %s CREATE p=((a)-[r:workWith]->(b)) RETURN p", False, ('Andy', 'Smith',))
# ...
# # You can access the results in cursor
# ...
# ag.commit() # commit
# ```
#
# ```
# # Auto commit
# ag.execCypher("MATCH (a:Person), (b:Person) WHERE a.name = 'Andy' AND b.name = 'Tom' CREATE (a)-[r:workWith]->(b)", True)
#
# ```
#
# +
# Create Edges
ag.execCypher("MATCH (a:Person), (b:Person) WHERE a.name = 'Joe' AND b.name = 'Smith' CREATE (a)-[r:workWith {weight: 3}]->(b)")
ag.execCypher("MATCH (a:Person), (b:Person) WHERE a.name = 'Andy' AND b.name = 'Tom' CREATE (a)-[r:workWith {weight: 1}]->(b)")
ag.execCypher("MATCH (a:Person {name: 'Jack'}), (b:Person {name: 'Andy'}) CREATE (a)-[r:workWith {weight: 5}]->(b)")
ag.commit()
# With Params and Return
cursor = ag.execCypher("""MATCH (a:Person), (b:Person)
WHERE a.name = %s AND b.name = %s
CREATE p=((a)-[r:workWith]->(b))
RETURN p""",
params=('Andy', 'Smith',))
for row in cursor:
    print(row[0])
ag.commit()
# With many columns Return
cursor = ag.execCypher("""MATCH (a:Person {name: 'Joe'}), (b:Person {name: 'Jack'})
CREATE (a)-[r:workWith {weight: 5}]->(b)
RETURN a, r, b """, cols=['a','r', 'b'])
for row in cursor:
    print("(a)", row[0], ": (r)", row[1], ": (b)", row[2])
ag.commit()
# -
# ---
# ## Query Relations
#
# > With single column
# ```
# cursor = ag.execCypher("MATCH p=()-[:workWith]-() RETURN p")
# for row in cursor:
#     path = row[0]
#     print(path)
# ```
#
# > With multi columns
# ```
# cursor = ag.execCypher("MATCH p=(a)-[b]-(c) RETURN a,label(b),c", cols=["a","b VARCHAR","c"])
# for row in cursor:
#     start = row[0]
#     edge_label = row[1]  # label(b) comes back as a VARCHAR string, not an Edge object
#     end = row[2]
#     print(start["name"], edge_label, end["name"])
# ```
#
#
# ### An Edge object has `id`, `label`, `start_id` and `end_id` attributes, plus `__getitem__`/`__setitem__` for properties
# ```
# edge = path.rel
# edge.id
# edge.label
# edge.start_id
# edge.end_id
# edge["property_name"]
# edge.properties
# ```
# +
cursor = ag.execCypher("MATCH p=()-[:workWith]-() RETURN p")
for row in cursor:
    path = row[0]
    print("START:", path[0])
    print("EDGE:", path[1])
    print("END:", path[2])
print("-- [Query path with multi columns --------")
cursor = ag.execCypher("MATCH p=(a)-[b]-(c) WHERE b.weight > %s RETURN a, label(b), b.weight, c", cols=["a", "bl", "bw", "c"], params=(2,))
for row in cursor:
    start = row[0]
    edgel = row[1]
    edgew = row[2]
    end = row[3]
    print(start["name"], edgel, edgew, end["name"])
# -
# ---
# ## Many executions in one transaction & Multiple Edges
# +
with ag.connection.cursor() as cursor:
    try:
        ag.cypher(cursor, "CREATE (n:Country {name: %s}) ", params=('USA',))
        ag.cypher(cursor, "CREATE (n:Country {name: %s}) ", params=('France',))
        ag.cypher(cursor, "CREATE (n:Country {name: %s}) ", params=('Korea',))
        ag.cypher(cursor, "CREATE (n:Country {name: %s}) ", params=('Russia',))
        # You must commit explicitly after all executions.
        ag.connection.commit()
    except Exception as ex:
        ag.rollback()
        raise ex
with ag.connection.cursor() as cursor:
    try:
        # Create Edges
        ag.cypher(cursor, "MATCH (a:Country), (b:Country) WHERE a.name = 'USA' AND b.name = 'France' CREATE (a)-[r:distance {unit:'miles', value: 4760}]->(b)")
        ag.cypher(cursor, "MATCH (a:Country), (b:Country) WHERE a.name = 'France' AND b.name = 'Korea' CREATE (a)-[r:distance {unit: 'km', value: 9228}]->(b)")
        ag.cypher(cursor, "MATCH (a:Country {name: 'Korea'}), (b:Country {name: 'Russia'}) CREATE (a)-[r:distance {unit:'km', value: 3078}]->(b)")
        # You must commit explicitly
        ag.connection.commit()
    except Exception as ex:
        ag.rollback()
        raise ex
cursor = ag.execCypher("""MATCH p=(:Country {name:"USA"})-[:distance]-(:Country)-[:distance]-(:Country)
RETURN p""")
for row in cursor:
    path = row[0]
    indent = ""
    for e in path:
        if e.gtype == age.TP_VERTEX:
            print(indent, e.label, e["name"])
        elif e.gtype == age.TP_EDGE:
            print(indent, e.label, e["value"], e["unit"])
        else:
            print(indent, "Unknown element.", e)
        indent += " >"
# -
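# The explicit try/commit/except/rollback shape above is the standard DB-API transaction pattern, not something AGE-specific. As an illustration that runs without an AGE server, here is the same shape with the standard-library `sqlite3` module (table and values are hypothetical):

```python
import sqlite3

# Same batch-then-commit pattern as the Country example above,
# demonstrated on an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE country (name TEXT)")
try:
    cur = conn.cursor()
    for name in ("USA", "France", "Korea", "Russia"):
        cur.execute("INSERT INTO country (name) VALUES (?)", (name,))
    conn.commit()  # commit explicitly after all executions
except Exception:
    conn.rollback()  # undo the whole batch on any failure
    raise
count = conn.execute("SELECT COUNT(*) FROM country").fetchone()[0]
print(count)  # → 4
```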
# ---
# ## Query COLLECT
# +
with ag.connection.cursor() as cursor:
    ag.cypher(cursor, "MATCH (a)-[:workWith]-(c) WITH a as V, COLLECT(c) as CV RETURN V.name, CV", cols=["V", "CV"])
    for row in cursor:
        nm = row[0]
        collected = row[1]
        print(nm, "workWith", [i["name"] for i in collected])
for row in ag.execCypher("MATCH (a)-[:workWith]-(c) WITH a as V, COLLECT(c) as CV RETURN V.name, CV", cols=["V1", "CV"]):
    nm = row[0]
    collected = row[1]
    print(nm, "workWith", [i["name"] for i in collected])
# -
# ---
# ## Query Scalar or properties value
# +
# Query scalar value
print("-- Query scalar value --------------------")
for row in ag.execCypher("MATCH (n:Person) RETURN id(n)"):
    print(row[0])
# Query properties
print("-- Query properties --------------------")
for row in ag.execCypher("MATCH (n:Person) RETURN properties(n)"):
    print(row[0])
# Query properties value
print("-- Query property value --------------------")
for row in ag.execCypher("MATCH (n:Person {name: 'Andy'}) RETURN n.title"):
    print(row[0])
# -
# ## Close connection
# Clear test data
age.deleteGraph(ag.connection, GRAPH_NAME)
# connection close
ag.close()
| samples/apache-age-note.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
class Player:
    def __init__(self, name=None, points=0, team=None, hand=None, trump_setter=False):
        self.name = name
        self.points = points
        self.team = team
        self.hand = hand
        self.trump_setter = trump_setter

# deck, suit, rank
class Hand:
    def __init__(self, cards=None):
        # avoid sharing state between hands when no cards are passed in
        self.cards = cards if cards is not None else []
    def add_card(self, card):
        self.cards.append(card)
    def play_card(self, card):
        self.cards.remove(card)

class Card:
    def __init__(self, rank, suit, verbose_rank):
        self.suit = suit
        self.rank = rank
        self.verbose_rank = verbose_rank
    def __str__(self):
        return self.verbose_rank + " of " + self.suit

class GenericDeck:
    def __init__(self):
        self.cards = []  # was List(), which is undefined; use a plain list
        self.verbose = {
            "A": "Ace",
            "K": "King",
            "Q": "Queen",
            "J": "Jack",
            "1": "One",
            "2": "Two",
            "3": "Three",
            "4": "Four",
            "5": "Five",
            "6": "Six",
            "7": "Seven",
            "8": "Eight",
            "9": "Nine",
            "10": "Ten"
        }
        for rank in self.verbose:
            for suit in ["Hearts", "Diamonds", "Clubs", "Spades"]:
                self.cards.append(Card(rank, suit, self.verbose[rank]))

class Deck(GenericDeck):
    def __init__(self):
        super(Deck, self).__init__()
        self.cards = filter(lambda card: card.rank in ["J", "9", "A", "10", "K", "Q", "8", "7"], self.cards)
    def __print__(self):
        pass
# -
deck = Deck()
for card in deck.cards:
    print card
# +
class Animal(object):
    pass

class Dog(Animal):
    def __init__(self, name):
        self.name = name

class Person(object):
    def __init__(self, name):
        self.name = name

class Employee(Person):
    def __init__(self, name, salary):
        super(Employee, self).__init__(name)
        self.salary = salary
# -
Employee("Jeevan", "82k").salary
| 28.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.2.0
# language: julia
# name: julia-1.2
# ---
using BenchmarkTools
using Pandas
using PyCall
using PyPlot
using JSON
obj = JSON.parse(open("../benchmark.json"));
res = BenchmarkTools.load(IOBuffer(obj["benchmarkgroup"]));
# +
# https://seaborn.pydata.org/examples/grouped_boxplot.html
# # + simple boxplot for a 10-state HMM / 5000 observations
# +
df = []
for K in 2:2:10
    for f in ["forward", "backward", "viterbi"]
        push!(df, ["hmmbase", f, K, minimum(res[1]["hmmbase"][f]["false"]["$K"].times)])
    end
    for f in ["forward", "backward", "viterbi"]
        push!(df, ["hmmbase (log)", f, K, minimum(res[1]["hmmbase"][f]["true"]["$K"].times)])
    end
    for (f, fp) in ["_do_forward_pass" => "forward", "_do_backward_pass" => "backward", "_do_viterbi_pass" => "viterbi"]
        push!(df, ["hmmlearn", fp, K, minimum(res[1]["hmmlearn"][f]["$K"].times)])
    end
    for (f, fp) in [
        "PyObject <function HMMStatesPython._messages_forwards_normalized at 0x7fd27e075560>" => "forward",
        "PyObject <function HMMStatesPython._messages_backwards_normalized at 0x7fd27e075440>" => "backward",
        "viterbi" => "viterbi"
    ]
        push!(df, ["pyhsmm", fp, K, minimum(res[1]["pyhsmm"][f]["$K"].times)])
    end
end
df = DataFrame(df, columns = ["module", "method", "K", "mintime"]);
# -
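# For readers more familiar with Python, the same long-format table can be assembled with pandas. The timing values below are placeholders, not the real benchmark results from `benchmark.json`:

```python
import pandas as pd

# Build a long-format table: one row per (module, method, K) combination.
# Timings here are hypothetical placeholders.
rows = []
for K in (2, 10):
    for method in ("forward", "backward", "viterbi"):
        rows.append(["hmmbase", method, K, 1.0e6 * K])
        rows.append(["hmmlearn", method, K, 2.0e6 * K])

df = pd.DataFrame(rows, columns=["module", "method", "K", "mintime"])
# Same filter used for the bar plot below: only the 10-state results.
subset = df[df.K == 10]
print(len(subset))  # → 6
```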
sns = pyimport("seaborn")
sns.set_style("whitegrid")
figure(figsize=(7,4))
sns.barplot(x="method", y="mintime", hue="module", data=df[df.K .== 10], palette=sns.color_palette("Paired")[[2,1,7,5]])
gca().set(yscale="log")
xlabel("")
ylabel("Minimum Time")
yticks([1e6, 1e7, 1e8], ["1 ms", "10 ms", "100 ms"])
legend(frameon=false)
title("10-state HMM - 5000 observations")
savefig("benchmark_summary.png", bbox_inches = "tight", dpi = 300)
| benchmark/Plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sh
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Bash
# language: bash
# name: bash
# ---
# # Answers
#
# To run the commands alongside the answers you must first have run _all_ of the tutorial commands for that section and generated the output files they reference.
#
# ## Tutorial sections
#
# * [Introducing the tutorial dataset](#Introducing-the-tutorial-dataset)
# * [Mapping RNA-Seq reads to the genome using HISAT2](#Mapping-RNA-Seq-reads-to-the-genome-using-HISAT2)
# * [Visualising transcriptomes with IGV](#Visualising-transcriptomes-with-IGV)
# * [Transcript quantification with Kallisto](#Transcript-quantification-with-Kallisto)
# * [Identifying differentially expressed genes with Sleuth](#Identifying-differentially-expressed-genes-with-Sleuth)
# * [Interpreting the results](#Interpreting-the-results)
#
# ## Additional information
# * [Normalisation](#Normalisation)
# ***
#
# Let's go to our `data` directory.
cd data
# ***
# ## Introducing the tutorial dataset
# ### Q1: Why is there more than one FASTQ file per sample?
#
# There are **2** FASTQ files for each sample e.g. MT1\_**1**.fastq and MT1\_**2**.fastq. This is because this was **paired-end** sequence data.
ls MT1*.fastq
ls MT1*.fastq | wc -l
# With Illumina paired-end sequencing, you will have a fragment (hopefully longer than your reads) which is sequenced at both ends. This means that there will be a "left" read (sometimes known as _r1_) and its corresponding "right" read (sometimes known as _r2_). These are indicated by the **/1** and **/2** at the end of the read name in the **_1**.fastq and **_2**.fastq files respectively.
#
# So, the left read header _@HS18_08296:8:1101:1352:48181#4_ in **MT1_1.fastq** has the **/1** suffix:
#
# @HS18_08296:8:1101:1352:48181#4/1
#
# And, the corresponding right read header in **MT1_2.fastq** has the **/2** suffix:
#
# @HS18_08296:8:1101:1352:48181#4/2
# ### Q2: How many reads are there for MT1?
#
# There are **2.5 million** reads for MT1.
#
# Ideally, you should count the reads in both files. This is because sometimes we have singletons (reads without a mate) after preprocessing steps such as trimming.
#
# Our reads look like:
#
# @HS18_08296:8:1101:1352:48181#4/2
# ATCCGCCNANTTTNNNNATATAATTANNNGNAANNAANNNNAATNACANNNATTNNNNTAGNANNNGNNAGTNNACAAGGNTNNNNNNNAAAGNNNNNNN
# +
# 9AABE8A!D!DFA!!!!EE@CFCD@B!!!D!F6!!EE!!!!EE4!F5E!!!BEA!!!!B@6!A!!!D!!FD'!!C+D@@*!B!!!!!!!B>DE!!!!!!!
#
# With the FASTQ format there are four lines per read. So, as long as our files are not truncated we can count the number of lines and divide them by four.
wc -l MT1_1.fastq
# So, we can then divide this by four to give us 1.25 million. We will get the same for our r2 reads:
wc -l MT1_2.fastq
# So, we have 1.25 million *read pairs* or 2.5 million (1.25 x 2) *reads* for our MT1 sample.
# You may also have thought "aha, I can use the **@** which is at the start of the header"...
grep -c '^@' MT1_1.fastq
grep -c '^@' MT1_2.fastq
# But, wait, there are 1,343,714 left reads and 1,250,000 right reads...can that be right..._no_.
#
# Take a closer look at the quality scores on the fourth line...they also contain **@**. This is because the quality scores are Phred+33 encoded. For more information on quality score encoding see our [Data Formats and QC tutorial](../QC/formats.ipynb#FASTQ) or go [here](https://en.wikipedia.org/wiki/FASTQ_format#Encoding).
#
# Instead, we can use the earlier information about the left and right read suffixes: **/1** and **/2**. This can be with two commands:
grep -c '/1$' MT1_1.fastq
grep -c '/2$' MT1_2.fastq
# Or, with only one command:
grep -c '/[12]$' MT1*.fastq
# _Note: Don't forget to use the **`-c`** option for `grep` to count the occurrences and **`$`** to make sure you're only looking for /1 or /2 at the **end of the line**._
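# The same counting logic is easy to sketch in Python. The records below are two tiny hypothetical reads, not the tutorial data:

```python
# Two hypothetical left reads in FASTQ format (4 lines per read).
fastq_r1 = """@read1/1
ACGT
+
IIII
@read2/1
TTGA
+
IIII
"""

lines = fastq_r1.strip().split("\n")
# A well-formed FASTQ file has exactly 4 lines per read.
n_reads_by_lines = len(lines) // 4

# Counting '@' at line starts is unsafe (quality strings can begin with '@'),
# so count the /1 suffix on header lines instead, mirroring grep -c '/1$'.
n_reads_by_suffix = sum(1 for l in lines[::4] if l.endswith("/1"))

print(n_reads_by_lines, n_reads_by_suffix)  # → 2 2
```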
# ***
# ## Mapping RNA-Seq reads to the genome using HISAT2
# ### Map, convert (SAM to BAM), sort and index using the reads from the MT2 sample.
#
# You can do this with several, individual steps:
hisat2 --max-intronlen 10000 -x PccAS_v3_hisat2.idx -1 MT2_1.fastq -2 MT2_2.fastq -S MT2.sam
samtools view -b -o MT2.bam MT2.sam
samtools sort -o MT2_sorted.bam MT2.bam
samtools index MT2_sorted.bam
# Or, you can do it in one step:
hisat2 --max-intronlen 10000 -x PccAS_v3_hisat2.idx -1 MT2_1.fastq -2 MT2_2.fastq \
| samtools view -b - \
| samtools sort -o MT2_sorted.bam - \
&& samtools index MT2_sorted.bam
# Or, you could write a for loop to run the command above for all five of your samples:
for r1 in *_1.fastq
do
sample=${r1/_1.fastq/}
echo "Processing sample: "$sample
hisat2 --max-intronlen 10000 -x PccAS_v3_hisat2.idx -1 $sample"_1.fastq" -2 $sample"_2.fastq" \
| samtools view -b - \
| samtools sort -o $sample"_sorted.bam" - \
&& samtools index $sample"_sorted.bam"
done
# For more information on how this loop works, have a look at our [Unix tutorial](../Unix/index.ipynb) and [Running commands on multiple samples](running-commands-on-multiple-samples.ipynb).
# ### Q1: How many index files were generated when you ran `hisat2-build?`
#
#
# There are **8** HISAT2 index files for our reference genome.
ls data/*.ht2
ls data/*.ht2 | wc -l
# ### Q2: What was the _overall alignment rate_ for each of the MT samples (MT1 and MT2) to the reference genome?
#
# The _overall alignment rate_ for MT1 was **94.15%** and **91.69%** for MT2.
#
# 1250000 reads; of these:
# 1250000 (100.00%) were paired; of these:
# 105798 (8.46%) aligned concordantly 0 times
# 329468 (26.36%) aligned concordantly exactly 1 time
# 814734 (65.18%) aligned concordantly >1 times
# ----
# 105798 pairs aligned concordantly 0 times; of these:
# 1797 (1.70%) aligned discordantly 1 time
# ----
# 104001 pairs aligned 0 times concordantly or discordantly; of these:
# 208002 mates make up the pairs; of these:
# 146250 (70.31%) aligned 0 times
# 19845 (9.54%) aligned exactly 1 time
# 41907 (20.15%) aligned >1 times
# 94.15% overall alignment rate
#
#
# 1250000 reads; of these:
# 1250000 (100.00%) were paired; of these:
# 139557 (11.16%) aligned concordantly 0 times
# 483583 (38.69%) aligned concordantly exactly 1 time
# 626860 (50.15%) aligned concordantly >1 times
# ----
# 139557 pairs aligned concordantly 0 times; of these:
# 4965 (3.56%) aligned discordantly 1 time
# ----
# 134592 pairs aligned 0 times concordantly or discordantly; of these:
# 269184 mates make up the pairs; of these:
# 207729 (77.17%) aligned 0 times
# 28836 (10.71%) aligned exactly 1 time
# 32619 (12.12%) aligned >1 times
# 91.69% overall alignment rate
#
# _Note: If a read pair is concordantly aligned it means both reads in the pair align with the same chromosome/scaffold/contig, the reads are aligned in a proper orientation (typically ----> <----) and that the reads have an appropriate insert size._
# ### Q3: How many MT1 and MT2 reads were not aligned to the reference genome?
#
# **146,250 reads (5.85%)** and **207,729 reads (8.31%)** did not align to the reference genome for MT1 and MT2 respectively.
# Here is a brief summary of what the HISAT2 summary tells us for our MT2 sample and how we can tell which of the summary lines gives us this information:
#
# * We have **1,250,000 read pairs** or **2,500,000 reads** (_2 x 1,250,000 pairs_)
#
# ```
# 1250000 reads; of these:
# ```
#
#
# * All of our reads (100%) are paired - i.e. no reads without their mate
#
# ```
# 1250000 (100.00%) were paired; of these:
#
#
# ```
#
# * 1,110,443 pairs (88.84%) align concordantly one (38.69%) or more (50.15%) times
#
# ```
# 483583 (38.69%) aligned concordantly exactly 1 time
# 626860 (50.15%) aligned concordantly >1 times
# ```
#
#
# * 139,557 pairs (11.16%) or 279,114 reads (2 x 139,557) did not align _concordantly_ anywhere in the genome
#
# ```
# 139557 (11.16%) aligned concordantly 0 times
# ```
#
# * Of those 139,557 pairs, 4,965 pairs align _discordantly_ (3.56% of the 139,557 pairs)
#
# ```
# 139557 pairs aligned concordantly 0 times; of these:
# 4965 (3.56%) aligned discordantly 1 time
# ```
#
# * This leave us with 134,592 pairs (139,557 - 4,965) where both reads in the pair do not align to the genome (concordantly or discordantly)
#
# ```
# 134592 pairs aligned 0 times concordantly or discordantly; of these:
# ```
#
#
# * Of those 269,184 reads (2 x 134,592) we have 61,455 reads (22.83%) which align to the genome without their mate
#
# ```
# 269184 mates make up the pairs; of these:
# ...
# 28836 (10.71%) aligned exactly 1 time
# 32619 (12.12%) aligned >1 times
# ```
#
# * Leaving us with a **91.69% overall alignment rate**
#
# ```
# 91.69% overall alignment rate
# ```
#
# * That means **207,729 (8.31%)** of the **2,500,000 reads** (or 77.17% of the unaligned pairs) do not align anywhere in the genome
#
# ```
# 207729 (77.17%) aligned 0 times
# ```
# ***
# ## Visualising transcriptomes with IGV
#
# ### Q1: How many CDS features are there in "PCHAS_1402500"?
#
# There are **8** CDS features in PCHAS_1402500. You can get this in several ways:
#
# **Count the number of exons/CDS features in the gene annotation.**
#
# 
#
# **Count the number of CDS features in the GFF file.**
#
# First, get all of the CDS features for PCHAS_1402500.
grep -E "CDS.*PCHAS_1402500" PccAS_v3.gff3
# Then count the number of the CDS features for PCHAS_1402500.
grep -cE "CDS.*PCHAS_1402500" PccAS_v3.gff3
# ### Q2: Does the RNA-seq mapping agree with the gene model in blue?
#
# Yes. The peaks of the coverage tracks correspond to the annotated exon/CDS features.
#
# 
# ### Q3: Do you think this gene is differentially expressed and is looking at the coverage plots alone a reliable way to assess differential expression?
#
# Possibly. But, you can't tell differential expression by the counts alone as there may be differences in the sequencing depths of the samples.
# ***
# ## Transcript quantification with Kallisto
#
#
# ### Use kallisto quant four more times, for the MT2 sample and the three SBP samples.
#
# You can run the individual `kallisto quant` commands as you did for MT1 for each of the remaining samples:
kallisto quant -i PccAS_v3_kallisto -o MT2 -b 100 MT2_1.fastq MT2_2.fastq
kallisto quant -i PccAS_v3_kallisto -o SBP1 -b 100 SBP1_1.fastq SBP1_2.fastq
kallisto quant -i PccAS_v3_kallisto -o SBP2 -b 100 SBP2_1.fastq SBP2_2.fastq
kallisto quant -i PccAS_v3_kallisto -o SBP3 -b 100 SBP3_1.fastq SBP3_2.fastq
# Or, you can write a `for` loop which will run `kallisto quant` on all of the samples:
for r1 in *_1.fastq
do
echo $r1
sample=${r1/_1.fastq/}
echo "Quantifying transcripts for sample: "$sample
kallisto quant -i PccAS_v3_kallisto -o $sample -b 100 $sample'_1.fastq' $sample'_2.fastq'
done
# For more information on how this loop works, have a look at our [Unix tutorial](../Unix/index.ipynb) and [Running commands on multiple samples](running-commands-on-multiple-samples.ipynb).
# ### Q1: What _k_-mer length was used to build the Kallisto index?
#
# A _k_-mer length of **31** was used.
#
# Look at the output from `kallisto index`:
#
# [build] k-mer length: 31
#
# Or, look for the `-k` or `--kmer-size` option in the `kallisto index` usage:
kallisto index
# ### Q2: How many transcript sequences are there in _PccAS_v3_transcripts.fa_?
#
# There are **5177** transcript sequences.
#
# Look at the output from `kallisto quant`:
#
# [index] number of targets: 5,177
#
# Or, look for **n_targets** in one of the _run_info.json_ files:
cat MT1/run_info.json
# Or, you can run a `grep` on the transcript FASTA file and count the number of header lines:
grep -c ">" PccAS_v3_transcripts.fa
# ### Q3: What is the transcripts per million (TPM) value for PCHAS_1402500 in each of the samples?
#
# | Sample | Transcripts Per Million (TPM) |
# | :-: | :-: |
# | MT1 | 2342.23 |
# | MT2 | 1354.42 |
# | SBP1 | 2295.24 |
# | SBP2 | 3274.98 |
# | SBP3 | 2536.17 |
#
# You can look at each of the individual abundance files:
grep "^PCHAS_1402500" MT1/abundance.tsv
grep "^PCHAS_1402500" MT2/abundance.tsv
grep "^PCHAS_1402500" SBP1/abundance.tsv
grep "^PCHAS_1402500" SBP2/abundance.tsv
grep "^PCHAS_1402500" SBP3/abundance.tsv
# Or you can use a recursive `grep`:
grep -r "^PCHAS_1402500" .
# Or you can use a loop:
for r1 in *_1.fastq
do
sample=${r1/_1.fastq/}
echo $sample
grep PCHAS_1402500 $sample'/abundance.tsv'
done
# For more information on how this loop works, have a look at our [Unix tutorial](../Unix/index.ipynb) and [Running commands on multiple samples](running-commands-on-multiple-samples.ipynb).
# ### Q4: Do you think PCHAS_1402500 is differentially expressed?
#
# Probably not. We would need to run statistical tests to really be sure though.
# ***
# ## Identifying differentially expressed genes with Sleuth
#
#
# ### Q1: Is our gene from earlier, PCHAS_1402500, significantly differentially expressed?
#
# **No.**
#
# Look at the transcript table.
#
# 
#
# And the transcript view.
#
# 
#
# Although this gene looked like it was differentially expressed from the plots in IGV, our test did not show it to be so (q-value > 0.05). This might be because some samples tended to have more reads, so based on raw read counts, genes generally look up-regulated in the SBP samples.
#
# Alternatively, the reliability of only two biological replicates and the strength of the difference between the conditions was not sufficient to be statistically convincing. In the second case, increasing the number of biological replicates would give us more confidence about whether there really was a difference.
#
# In this case, it was the lower number of reads mapping to MT samples that mislead us in the IGV view. Luckily, careful normalisation and appropriate use of statistics saved the day!
grep PCHAS_1402500 kallisto.results | cut -f1,4
# ***
# ## Interpreting the results
#
# ### Q1: How many genes are more highly expressed in the SBP samples?
# **127**. We can use `awk` to filter our Kallisto/sleuth results and `wc -l` to count the number of lines returned.
awk -F"\t" '$4 < 0.01 && $5 > 0' kallisto.results | wc -l
# ### Q2: How many genes are more highly expressed in the MT samples?
#
# **169**. We can use `awk` to filter our Kallisto/sleuth results and `wc -l` to count the number of lines returned.
awk -F"\t" '$4 < 0.01 && $5 < 0' kallisto.results | wc -l
# ### Q3: Do you notice any particular genes that come up in the analysis?
#
# Genes from the ***cir*** family are upregulated in the MT samples.
#
# To get this we must first find out which genes (or gene descriptions) are seen most often in the genes which are more highly expressed in our SBP samples.
awk -F"\t" '$4 < 0.01 && $5 > 0 {print $2}' kallisto.results | sort | uniq -c | sort -nr
# Then, we summarise the genes more highly expressed in our MT samples.
awk -F"\t" '$4 < 0.01 && $5 < 0 {print $2}' kallisto.results | sort | uniq -c | sort -nr
# Perhaps the CIR proteins are interesting. Only 2 *cir* genes are upregulated in the SBP samples, compared with 25 in the MT samples.
#
# The ***cir*** family is a large, malaria-specific gene family which had previously been proposed to be involved in immune evasion (Lawton et al., 2012). Here, however, we see many of these genes upregulated in a form of the parasite which seems to cause the immune system to better control the parasite. This suggests that these genes interact with the immune system in a subtler way, preventing the immune system from damaging the host.
# ***
# ## Normalisation
#
# ### How we got the information to help the questions
#
# To answer the questions you needed the following for each sample:
#
# * Number of reads assigned to PCHAS_1402500
# * Length of exons in PCHAS_1402500 (bp)
# * Total number of reads mapping
# * Total RPK
#
# First, take a quick look at the first five lines of the `abundance.tsv` for MT1.
head -5 MT1/abundance.tsv
# There are five columns which give us information about the transcript abundances for our MT1 sample.
#
# | Column | Description |
# | --- | --- |
# | target_id | Unique transcript identifier |
# | length | Number of bases found in exons. |
# | eff_length | *Effective length*. Uses fragment length distribution to determine the effective number of positions that can be sampled on each transcript. |
# | est_counts | *Estimated counts*. This may not always be an integer as reads which map to multiple transcripts are fractionally assigned to each of the corresponding transcripts. |
# | tpm | *Transcripts per million*. Normalised value accounting for length and sequence depth bias. |
#
# First, look for your gene of interest, **PCHAS_1402500**. Run this as a loop to `grep` the information for all five samples.
for r1 in *_1.fastq
do
sample=${r1/_1.fastq/}
echo $sample
grep PCHAS_1402500 $sample'/abundance.tsv'
done
# Now you have the length (eff_length) and counts (est_counts) for PCHAS_1402500 for each of your samples. Next, you need to get the total number of reads mapped to each of your samples. You can use a loop to do this.
#
# In the loop below `samtools flagstat` gives you the number of mapped paired reads (reads with itself and mate mapped) and those where one read mapped but its mate didn't (singletons). It then uses `grep` to get the relevant lines and `awk` to add the mapped paired and singleton read totals together.
for r1 in *_1.fastq
do
sample=${r1/_1.fastq/}
total=` samtools flagstat $sample'_sorted.bam' | \
grep 'singletons\|with itself and mate mapped' | \
awk 'BEGIN{ count=0} \
{count+=$1} \
END{print count}'`
echo -e "$sample\t$total"
done
# Finally, to calculate the TPM values, you need the total RPK for each of your samples. Again we use a loop. Notice the use of `NR>2` in the `awk` command which tells it to skip the two header lines at the start of the file. You will also notice that we divide the eff_length by 1,000 so that it's in kilobases.
for r1 in *_1.fastq
do
sample=${r1/_1.fastq/}
awk -F"\t" -v sample="$sample" \
'BEGIN{total_rpk=0;} \
NR>2 \
{ \
rpk=$4/($3/1000); \
total_rpk+=rpk \
} \
END{print sample"\t"total_rpk}' $sample'/abundance.tsv'
done
# ### Q1: Using the `abundance.tsv` files generated by Kallisto and the information above, calculate the RPKM for PCHAS_1402500 in each of our five samples.
#
# | Sample | Per million scaling factor | Reads per million (RPM) | Per kilobase scaling factor | RPKM |
# | :-: | :-: | :-: | :-: | :-: |
# | MT1 | 2.353750 | 1079.528 | 3.697 | **292** |
# | MT2 | 2.292271 | 1479.581 | 3.709 | **398** |
# | SBP1 | 2.329235 | 6270.170 | 3.699 | **1695** |
# | SBP2 | 2.187718 | 7908.652 | 3.696 | **2140** |
# | SBP3 | 2.163979 | 6767.949 | 3.699 | **1830** |
#
#
# ### Q2: Using the `abundance.tsv` files generated by Kallisto and the information above, calculate the TPM for PCHAS_1402500 in each of our five samples.
#
# | Sample | Per kilobase scaling factor | Reads per kilobase (RPK) | TPM |
# | :-: | :-: | :-: | :-: |
# | MT1 | 3.697 | 687.30 | **2342** |
# | MT2 | 3.709 | 914.50 | **1354** |
# | SBP1 | 3.699 | 3947.87 | **2295** |
# | SBP2 | 3.696 | 4681.81 | **3275** |
# | SBP3 | 3.699 | 3959.78 | **2536** |
#
#
# ### Q3: Do these match the TPM values from Kallisto?
# **Yes.**
#
# Well, almost. They may be a couple out because we rounded up to make the calculations easier.
# ### Q4: Do you think PCHAS_1402500 is differentially expressed between the MT and SBP samples?
#
# **Probably not.**
#
# If we were to look at only the counts and RPKM values then it appears there is an 8 fold difference between the MT and SBP samples. However, when we look at the TPM values, they are much closer and so differential expression is less likely.
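# The RPKM and TPM arithmetic above can be sketched in a few lines of Python, using the MT1 numbers from the worked tables. The inputs below are taken (and rounded) from those tables, so treat them as illustrative values rather than exact Kallisto output:

```python
# Inputs from the MT1 worked answer (rounded):
est_counts = 2540.9      # est_counts for PCHAS_1402500 (abundance.tsv)
eff_length_kb = 3.697    # effective length in kilobases
total_reads = 2353750    # total mapped reads for the sample
total_rpk = 293467.0     # sum of RPK over all transcripts (assumed)

# RPKM: normalise for sequencing depth first, then transcript length.
rpm = est_counts / (total_reads / 1e6)
rpkm = rpm / eff_length_kb

# TPM: normalise for transcript length first, then sequencing depth.
rpk = est_counts / eff_length_kb
tpm = rpk / (total_rpk / 1e6)

print(round(rpkm), round(tpm))  # → 292 2342
```

The order of the two normalisations is the whole difference between the measures, which is why TPM values are comparable across samples while RPKM values are not.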
| RNA-Seq/answers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Plot ROCs
# +
from keras.applications.mobilenet import MobileNet
from keras.applications.mobilenet import preprocess_input as MobileNet_preprocess_input
from keras.applications.vgg19 import VGG19
from keras.applications.vgg19 import preprocess_input as VGG19_preprocess_input
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.applications.inception_resnet_v2 import preprocess_input as InceptionResNetV2_preprocess_input
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input as InceptionV3_preprocess_input
from keras.applications.mobilenet_v2 import MobileNetV2
from keras.applications.mobilenet_v2 import preprocess_input as MobileNetV2_preprocess_input
from keras.applications.nasnet import NASNetLarge
from keras.applications.nasnet import preprocess_input as NASNetLarge_preprocess_input
# %matplotlib inline
import matplotlib.pyplot as plt
import data_preparation
import params
import os
import reset
import gradient_accumulation
from utils import plot_train_metrics, save_model
from sklearn.metrics import roc_curve, auc
from train import create_data_generator, _create_base_model, create_simple_model, create_attention_model
metadata = data_preparation.load_metadata()
metadata, labels = data_preparation.preprocess_metadata(metadata)
train, valid = data_preparation.stratify_train_test_split(metadata)
# for these image sizes, we don't need gradient_accumulation to achieve BATCH_SIZE = 256
optimizer = 'adam'
if params.DEFAULT_OPTIMIZER != optimizer:
    optimizer = gradient_accumulation.AdamAccumulate(
        lr=params.LEARNING_RATE, accum_iters=params.ACCUMULATION_STEPS)
base_models = [
[MobileNet, params.MOBILENET_IMG_SIZE, MobileNet_preprocess_input],
[InceptionResNetV2, params.INCEPTIONRESNETV2_IMG_SIZE,
InceptionResNetV2_preprocess_input],
[VGG19, params.VGG19_IMG_SIZE, VGG19_preprocess_input],
[InceptionV3, params.INCEPTIONV3_IMG_SIZE, InceptionV3_preprocess_input],
[MobileNetV2, params.MOBILENETV2_IMG_SIZE, MobileNetV2_preprocess_input],
[NASNetLarge, params.NASNETLARGE_IMG_SIZE, NASNetLarge_preprocess_input],
]
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
def plot_model_ROC(_Model, input_shape, preprocessing_function,
train, valid, labels,
extend_model_callback, optimizer, name_prefix):
test_X, test_Y = next(create_data_generator(
valid, labels, 10000, preprocessing_function, target_size=input_shape))
baseModel = _create_base_model(_Model,
labels,
test_X.shape[1:],
trainable=False,
weights=None)
model = extend_model_callback(baseModel, labels, optimizer)
model_name =name_prefix+'_' + baseModel.name
weights = os.path.join(params.RESULTS_FOLDER, model_name, 'weights.best.hdf5')
print('Loading '+weights)
model.load_weights(weights, by_name=True)
model.trainable = False
pred_Y = model.predict(test_X, batch_size = 32, verbose = True)
fig, c_ax = plt.subplots(1,1, figsize = (9, 9))
for (idx, c_label) in enumerate(labels):
fpr, tpr, thresholds = roc_curve(test_Y[:,idx].astype(int), pred_Y[:,idx])
c_ax.plot(fpr, tpr, label = '%s (AUC:%0.2f)' % (c_label, auc(fpr, tpr)))
c_ax.legend()
c_ax.set_title(model_name+' ROC Curve')
c_ax.set_xlabel('False Positive Rate')
c_ax.set_ylabel('True Positive Rate')
ROC_image_file_path = os.path.join(params.RESULTS_FOLDER, model_name, model_name + '_ROC.png')
fig.savefig(ROC_image_file_path)
    print('Saved ROC plot at ' + ROC_image_file_path)
for [_Model, input_shape, preprocess_input] in base_models:
plot_model_ROC(_Model, input_shape, preprocess_input,
train, valid, labels,
create_simple_model, optimizer, 'simple')
for [_Model, input_shape, preprocess_input] in base_models:
plot_model_ROC(_Model, input_shape, preprocess_input,
train, valid, labels,
create_attention_model, optimizer, 'attention')
| src/ROCs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import Packages
from MLTrainer import MLTrainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# # Generate Classification Dataset
X, y = make_classification(random_state=42, n_samples=10000, n_features=5, n_classes=3,
n_informative=2, n_clusters_per_class=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# # Train Multiple Models Using MLTrainer
#
# Multiple scikit-learn models will be trained, either with GridSearchCV or without. The trained models, their parameters and cross-validation scores will be stored.
#
# Explanations of parameters of methods can be seen in the source code itself. Parameters are fully laid out when calling them in this notebook.
#
# Model parameters used for grid search are found in source code, default model parameters are used when grid search is not enabled.
#
# - `.cv_scores` contains a Pandas DataFrame of model names, parameters, mean cross-validation scores for each batch, and a remark. The remark only has an entry if the model could not be trained.
#
# - `.models` contains a list of trained model objects.
#
# - `.fit` trains multiple models on the training set and saves their mean cross-validation scores in a Pandas DataFrame.
#
# - `.evaluate` generates scores for the test set for each model, then saves classification reports, confusion matrices and label probabilities in CSV. Each model will have its own folder in the current directory.
models = MLTrainer(ensemble=True, linear=True, naive_bayes=True, neighbors=True, svm=True, decision_tree=True, seed=100)
models.fit(X=X_train, Y=y_train, n_folds=5, scoring="accuracy", n_jobs=-1, gridsearchcv=False, param_grids={}, greater_is_better=True)
models.cv_scores
models.models
preds = models.predict(X_test)
preds
pred_probas = models.predict_proba(X_test)
pred_probas
models.evaluate(test_X=X_test, test_Y=y_test, idx_label_dic=None, class_report="classf_report.csv", con_mat="confusion_matrix.csv", pred_proba="predictions_proba.csv")
| notebooks/MLTrainer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ocandataviz
# language: python
# name: ocandataviz
# ---
# # OpenData API
import requests
# ## Package List
packages = requests.get('https://open.canada.ca/data/en/api/action/package_list').json()['result']
len(packages)
# ## Recently Changed Packages
requests.get('https://open.canada.ca/data/en/api/action/recently_changed_packages_activity_list').json()
# ## Group List
requests.get('https://open.canada.ca/data/en/api/action/group_list').json()
# ## Tag List
requests.get('https://open.canada.ca/data/en/api/action/tag_list').json()
# ## Search For Packages Matching a Query
# +
requests.get('http://demo.ckan.org/api/3/action/package_search?q=spending').json()
requests.get('http://demo.ckan.org/api/3/action/resource_search?query=name:District%20Names').json()
| notebooks/OpenDataApi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.13 64-bit (''tensorflow'': conda)'
# name: python3
# ---
# ## Train and Validate
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, Embedding, Activation, Lambda, Bidirectional
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
import numpy as np
import gc
import pandas as pd
df = pd.read_csv("dna.csv")
def letter_to_index(letter):
_alphabet = 'ATGC'
return next((i for i, _letter in enumerate(_alphabet) if _letter == letter), None)
test_split = 0.1
maxlen = 150
df['sequence'] = df['sequence'].apply(lambda x: [int(letter_to_index(e)) for e in x])
df = df.reindex(np.random.permutation(df.index))
train_size = int(len(df) * (1 - test_split))
X_train = df['sequence'].values[:train_size]
y_train = np.array(df['target'].values[:train_size])
X_test = np.array(df['sequence'].values[train_size:])
y_test = np.array(df['target'].values[train_size:])
print('Average train sequence length: {}'.format(np.mean(list(map(len, X_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, X_test)), dtype=int)))
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
model = Sequential()
model.add(Embedding(input_dim = 4, output_dim = 50, input_length = X_train.shape[1], name='embedding_layer'))
model.add(Bidirectional(LSTM(62, return_sequences=True)))
model.add(Dropout(0.2))
model.add(Bidirectional(LSTM(62)))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100)
model.save("SAVED.h5")
| classify.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # Assignment 2
#
#
# In this assignment we will cover the following topics from the previous three lectures:
#
# 1) Training a simple Linear Model
#
# 2) Implementing Modules with Backprop functionality
#
# 3) Implementing Convolution Module on Numpy.
#
# 4) Implement Dropout/Different Optimizer setups.
#
# 5) Implementing Pool and Training on CIFAR10?
#
#
# It is crucial to get down to the nitty-gritty of the code to implement all of these. No external packages (like Caffe, PyTorch, etc.), which directly provide functions for these steps, are to be used.
# # Training a simple Linear Model
#
# In this section, you will write the code to train a Linear Model. The goal is to classify an input $x_n$ of size $n$ into one of $m$ classes. For this goal, you need to create the following parts:
#
# 1) **A weight matrix $W_{n\times m}$**, where the weights are multiplied with the input $x_n$ (a vector of size $n$) to find $m$ scores $S_m$ for the $m$ classes.
#
# 2) **The loss function**: We learnt two kinds of loss functions:
# * The hinge loss: This loss measures, for each sample, how many of the wrong classes were scored above the correct class score minus $\Delta$, and by how much. This leads to the formulation:
#
# $$
# L_i = \sum_{j\neq y_i} \max(0, s_j - s_{y_i} + \Delta)
# $$
#
# where $y_i$ is the correct class, and $s_j$ is the score for the $j$-th class (the $j$-th element of $S_m$)
#
# * The softmax Loss: By interpreting the scores as unnormalized log probabilities for each class, this loss tries to measure dissatisfaction with the scores in terms of the log probability of the right class:
#
# $$
# L_i = -\log\left(\frac{e^{f_{y_i}}}{ \sum_j e^{f_j} }\right) \hspace{0.5in} \text{or equivalently} \hspace{0.5in} L_i = -f_{y_i} + \log\sum_j e^{f_j}
# $$
#
# where $f_{y_i}$ is the $y_i$-th element of the score vector $W^T_{n \times m} \, x_n$
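As an illustration, both losses can be computed for a single sample in a few lines of NumPy (a sketch for intuition, not part of the assignment template):

```python
import numpy as np

def hinge_loss(scores, y, delta=1.0):
    # Sum of the margins by which wrong-class scores exceed s_y - delta.
    margins = np.maximum(0, scores - scores[y] + delta)
    margins[y] = 0  # the correct class does not contribute
    return margins.sum()

def softmax_loss(scores, y):
    # Numerically stable form: shift by the max before exponentiating.
    shifted = scores - scores.max()
    return -shifted[y] + np.log(np.exp(shifted).sum())

scores = np.array([3.0, 1.0, -1.0])
print(hinge_loss(scores, y=0))   # every wrong class is below s_y - delta, so 0.0
print(softmax_loss(scores, y=0))
```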
#
# 3) **Regularization term**: In addition to the loss, you need a regularization term to lead to a more distributed (in case of $L_2$) or sparse (in case of $L_1$) learning of the weights. For example, having $L_2$ regularization would imply that your loss has the following additional term:
#
# $$
# R(W) = \sum_k\sum_l W_{k,l}^2,
# $$
#
# making the total loss:
# $$
# L = \underbrace{ \frac{1}{N} \sum_i L_i }_\text{data loss} + \underbrace{ \lambda R(W) }_\text{regularization loss} \\\\
# $$
#
# 4) **An Optimization Procedure**: This refers to the process which tweaks the weight matrix $W_{n\times m}$ to reduce the loss function $L$. In our case, this is the mini-batch gradient descent algorithm. We adjust the weights $W_{n\times m}$ based on the gradient of the loss $L$ w.r.t. $W_{n\times m}$. This leads to:
#
# $$
# W_{t+1} = W_{t} - \alpha \frac{\partial L}{\partial W},
# $$
# where $\alpha$ is the learning rate. Additionally, as we will be doing "mini-batch" gradient descent, instead of finding the loss over the whole dataset, we find it only for a small sample of the training data at each learning step. Basically,
# $$
# W_{t+1} = W_{t} - \alpha \frac{\partial \sum_{i=1}^{b} L_{x_i}}{\partial W},
# $$
# where $b$ is the batch size.
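The update rule can be illustrated on a toy problem. The sketch below uses a linear least-squares loss in place of hinge/softmax (an assumption made purely to keep the gradient simple); the mechanics of the mini-batch step are the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, b, alpha = 4, 3, 8, 0.1
W = rng.standard_normal((n, m))          # weight matrix W_{n x m}
X = rng.standard_normal((b, n))          # one mini-batch of b samples
Y = rng.standard_normal((b, m))          # targets for the toy loss

for _ in range(200):
    S = X @ W                            # scores, shape (b, m)
    grad = X.T @ (S - Y) / b             # d(mean squared error)/dW
    W = W - alpha * grad                 # note the minus sign in the update
print(np.mean((X @ W - Y) ** 2))         # loss shrinks toward the least-squares optimum
```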
# # Question 1
#
# Train a **Single-Layer Classifier** for the MNIST dataset. Guidelines:
# * Use a loss of your choice.
# * Keep a validation split of the training set for finding the right value of $\lambda$ for the regularization, and to check for overfitting.
# * Finally, evaluate the classification performance on the test set.
#
# +
## Load The Mnist data:
# Download data from http://yann.lecun.com/exdb/mnist/
# load the data.
# split the data into train, and valid
# Now a function, which returns a generator random mini-batch of the input data
def get_minibatch_function(training_x, training_y):
def get_minibatch(training_x=training_x, training_y=training_y):
## Read generator functions if required.
## WRITE CODE HERE
yield mini_x,mini_y
return get_minibatch
# -
# Define the class Single Layer Classifier
class single_layer_classifier():
    def __init__(self, input_size, output_size):
        ## WRITE CODE HERE
        # Give the instance a weight matrix, initialized randomly.
        pass
# Define the forward function
def forward(self, input_x):
# get the scores
## WRITE CODE HERE
return scores
# Similarly a backward function
# we define 2 backward functions (as Loss = L1 + L2, grad(Loss) = grad(L1) + grad(L2))
def backward_from_loss(self, grad_from_loss):
# this function returns a matrix of the same size as the weights,
# where each element is the partial derivative of the loss w.r.t. the respective element of weight.
## WRITE CODE HERE
return grad_matrix
def backward_from_l2(self):
# this function returns a matrix of the same size as the weights,
# where each element is the partial derivative of the regularization_term
# w.r.t. the respective element of weight.
## WRITE CODE HERE
return grad_matrix
# BONUS
def grad_checker(input_x, grad_matrix):
# Guess what to do?
## WRITE CODE HERE
    if diff < threshold:
        return True
    else:
        return False
# +
# Now we need the loss functions,one which calculates the loss,
# and one which give the backward gradient
# Make any one of the suggested losses
def loss_forward(input_y,scores):
## WRITE CODE HERE
return loss
def loss_backward(loss):
# This part deals with the gradient from the loss to the weight matrix.
# for example, in case of softmax loss(-log(qc)), this part gives grad(loss) w.r.t. qc
## WRITE CODE HERE
return grad_from_loss
# +
# Finally the trainer:
# let it be for t iterations:
# make an instance of single_layer_classifier,
# get the mini-batch yielder.
for step, (input_x, input_y) in enumerate(get_minibatch()):
## Write code here for each iteration of training.
# +
# Find the performance on the validation set.
# find the top-1 accuracy on the validation set.
# +
# now make a trainer function based on the above code, which trains for 't' iteration,
# and returns the performance on the validation
def trainer(iterations, kwargs):
## WRITE CODE HERE
return top_1
# +
# Find the optimal lambda and iterations t
# +
# Train on whole dataset with these values,(from scratch)
# report final performance on mnist test set.
# Find the best performing class and the worst performing class.
# -
# # Implementing Backprop
#
# Now that you have had some experience with single layer networks, it's time to go to more complex architectures. But first we need to completely understand and implement backpropagation.
#
# ## Backpropagation:
#
# Simply put, backpropagation is a way of computing gradients of expressions through recursive application of the chain rule. If,
# $$
# L = f (g (h (\textbf{x})))
# $$
# then,
# $$
# \frac{\partial L}{\partial \textbf{x}} = \frac{\partial f}{\partial g} \times \frac{\partial g}{\partial h} \times\frac{\partial h}{\partial \textbf{x}}
# $$
#
# **Look at the class lecture for more detail.**
#
#
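As a quick sanity check on the chain-rule expression above, here is a tiny numeric sketch. The particular choices $f = \exp$, $g = \sin$, $h = (\cdot)^2$ are arbitrary illustrations, and the analytic gradient is compared against a finite-difference estimate:

```python
import math

def L(x):
    # L = f(g(h(x))) = exp(sin(x**2))
    return math.exp(math.sin(x ** 2))

def dL(x):
    # Chain rule: df/dg * dg/dh * dh/dx
    h = x ** 2
    return math.exp(math.sin(h)) * math.cos(h) * 2 * x

x, eps = 0.7, 1e-6
numeric = (L(x + eps) - L(x - eps)) / (2 * eps)   # central difference
print(abs(dL(x) - numeric))                       # tiny finite-difference error
```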
# # Question 2 : Scalar Backpropagation
#
# Evaluate the gradient of the following functions w.r.t. the input.
#
# 1) $$ f(x,y,z) = log(\sigma(\frac{cos(\pi \times \sigma(x))+sin(\pi \times \sigma(y/2))}{z^2}))$$
# where $\sigma$ is the sigmoid function. Find gradient for the following values:
# * $(x,y,z)$ = (1,2,3)
# * $(x,y,z)$ = (3,2,1)
# * $(x,y,z)$ = (12,23,31)
# * $(x,y,z)$ = (32,21,13)
#
# 2) $$ f(x,y,z) = -tan(z) + exp(4x^2 + 3x + 10) - x^{y^z} $$
# where $\exp$ is the exponential function. Find gradient for the following values:
# * $(x,y,z)$ = (-0.1 ,2 ,-3)
# * $(x,y,z)$ = (-3, 0.2,0.5)
# * $(x,y,z)$ = (1.2, -2.3, 3.1)
# * $(x,y,z)$ = (3.2, 2.1, -1.3)
#
# +
# To solve this problem, construct the computational graph (will help understanding the problem)(not part of assignment)
# Write each component of the graph as a class, with forward and backward functions.
# for eg:
class sigmoid():
    def __init__(self):
        pass
    def forward(self, x):
        # save values useful for backpropagation
        ## WRITE CODE HERE
        return out
    def backward(self, output_grad):
        # CAUTION: Carefully treat the input and output dimension variation. At worst, handle them with if statements.
        ## WRITE CODE HERE
        return grad
# Similarly create the classes for various sub-parts/elements of the graph.
# +
# Now write func_1_creator,
# which constructs the graph(all operators), forward and backward functions.
class func1():
    def __init__(self):
        # construct the graph here,
        # assign the instances of function modules to self.var
        pass
    def forward(self, x, y, z):
        # Using the graph elements' forward functions, get the output.
        return output
    def backward(self):
        # Use the saved outputs of each module, and backward() function calls
        return [grad_x, grad_y, grad_z]
# Similarly,
class func2():
    def __init__(self):
        # construct the graph here,
        # assign the instances of function modules to self.var
        pass
    def forward(self, x, y, z):
        # Using the graph elements' forward functions, get the output.
        return output
    def backward(self):
        # Use the saved outputs of each module, and backward() function calls
        return [grad_x, grad_y, grad_z]
# -
# ## Question 3 : Modular Vector Backpropagation
#
# * Construct a Linear Layer module, implementing the forward and backward functions for arbitrary sizes.
# * Construct a ReLU module, implementing the forward and backward functions for arbitrary sizes.
# * Create a 2 layer MLP using the constructed modules.
#
# * Modifying the functions built in Question 1 , train this two layer MLP for the same data set (MNIST).
# Class for Linear Layer (Refer code of pytorch/tensorflow package if required.)
# Class for ReLU
# Your 2 layer MLP
# Train the MLP
# Validation Performance
# Best Class and worst class performance.
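A hedged sketch of the two modules is shown below. The interface names (`forward`/`backward`, caching the input on `self`) are assumptions chosen to mirror the earlier questions, not a prescribed API:

```python
import numpy as np

class Linear:
    def __init__(self, n_in, n_out):
        self.W = np.random.randn(n_in, n_out) * 0.01
        self.b = np.zeros(n_out)
    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b
    def backward(self, grad_out):
        self.dW = self.x.T @ grad_out   # gradient w.r.t. weights
        self.db = grad_out.sum(axis=0)  # gradient w.r.t. bias
        return grad_out @ self.W.T      # gradient w.r.t. input

class ReLU:
    def forward(self, x):
        self.mask = x > 0               # remember which units were active
        return x * self.mask
    def backward(self, grad_out):
        return grad_out * self.mask     # gradient flows only through active units

x = np.random.randn(5, 4)
layer, act = Linear(4, 3), ReLU()
out = act.forward(layer.forward(x))
grad_in = layer.backward(act.backward(np.ones_like(out)))
print(out.shape, grad_in.shape)         # (5, 3) (5, 4)
```

Stacking two `Linear` modules with a `ReLU` in between gives the 2-layer MLP asked for above.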
# # After the lecture on Jan 31st.
#
# # Implementing Convolution Module on Numpy.
#
# * This topic will require you to implement the Convolution operation using Numpy.
# * You will implement <s>two</s> one method of doing it, an intuitive <s>and an optimised</s> way.
# * <s>Additional operations like dropout, batch norms.</s>
# * We will use the created Module for interesting task like Blurring, Bilateral Filtering.
# * Finally, we create the Backprop for this.
# * <s>Train a Conv model for the same MNIST dataset. (can be a script based training, instead of having it in jupyter notebook.)</s>
#
# ## Question 4
#
# <br>
# * Implement a naive Convolution module, with basic functionalities:
# * kernel_size,padding, stride, dilation
#
# * Test out the convolution layer created, by using it to do gaussian blurring on 10 random images of Cifar10 dataset
#
# * Bonus: Bilateral filtering can also be implemented using a 2-D convolution. Try bilateral filter for the space of (X,Y,Gray). (3D space, but not 3D conv), (no speed criteria), (Hint: You have multiple filters in each conv layer.)
#
## Define a class Convolution Layer, which is initialized with the various required params:
class convolution_layer():
    def __init__(self, **kwargs):
## Refer pytorch documentation/tensorflow documentation for the parameters for the layer.
## Bonus for implementing Groups, no-bias functionality.
## Random initialization of the weights
def forward(self,input):
        # Input preprocess (according to pad etc.). Input will be of size (Batch_size, in_channels, inp_height, inp_width)
# Reminder: Save Input for backward!
# Simple Conv operation:
# Loop over every location in inp_height * inp_width for the whole batch
# Output will be of the size (Batch_size, out_channels, out_height, out_width)
return output
def backward(self, grad_of_output_size):
# Naive Implementation
# Speed is not a concern
        # Hint: gradients from each independent operation can be summed
# return gradient of the size of the weight kernel
return grad
    def set_weights(self, new_weights):
        ## Replace the set of weights with the given 'new_weights'
        ## use this for setting weights for blurring, bilateral filtering etc.
        pass
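For reference, a minimal loop-based forward pass might look like the sketch below. It handles only zero padding and stride (no dilation, groups, or bias), and the function and parameter names are assumptions, not the full interface above:

```python
import numpy as np

def conv2d_naive(x, w, stride=1, pad=0):
    # x: (batch, in_ch, H, W); w: (out_ch, in_ch, kH, kW)
    x = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    B, C, H, W = x.shape
    O, _, kH, kW = w.shape
    oH, oW = (H - kH) // stride + 1, (W - kW) // stride + 1
    out = np.zeros((B, O, oH, oW))
    for b in range(B):
        for o in range(O):
            for i in range(oH):
                for j in range(oW):
                    # elementwise product of the kernel with the input patch
                    patch = x[b, :, i*stride:i*stride+kH, j*stride:j*stride+kW]
                    out[b, o, i, j] = (patch * w[o]).sum()
    return out

x = np.ones((1, 1, 4, 4))
w = np.ones((1, 1, 3, 3))
print(conv2d_naive(x, w).shape)   # (1, 1, 2, 2), every entry 9.0
```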
# +
## get cifar images
## Initialize a conv layer. Set weights for gaussian blurring
## generate output.
## use matplotlib.pyplot to show the results.
# -
## BONUS: Bilateral Filter.
# ## Question 5
# <br>
# Now we will use the created layer for training a simple Convolution Layer.
#
# * The goal is to make it learn a set of weights, by using the backpropagation function created. To test the backpropagation, instead of training a whole network, we will train only a single layer.
# * Take 100 cifar10 images. Generate a numpy array of size (20,3,5,5), with samples from uniform distribution (-1,1).Initialize a Convolution layer with 20 5$\times$5 kernels(input size 3) and set the generated weights as the layer weights. Save the output of these 100 images from this Convolution layer.
#
# * Now, initialize a new convolution layer, and use $L_2$ loss between the output of the network and the output generated in the previous step to get the same set of weights as the ones generated in the previous step.
#
# +
## First generate the random weight vector
## Init a conv layer with these weights
## For all images get output. Store in numpy array.
# +
# for part 2 we need to write a small L2 layer
class L2_loss():
    def __init__(self):
        pass  # nothing to initialize
    def forward(self, inp_1, inp_2):
        # input is of dimension (batch, channels, h, w)
        # calculate the L2 norm of inp_1 - inp_2
return output
def backward(self,output_grad):
# from the loss, and the input, get the grad at each location of the input.
# The grad is of the shape (batch,channels,h,w)
return grad
# Now Init a new conv layer and a L2 loss layer
# Train the new conv-layer using the L2 loss to get the earlier set of generated weights.
# Use batches.
# Print L2 dist between output from new convolution layer and the outputs generated initially.
| Assignment_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.4
# language: julia
# name: julia-1.6
# ---
# +
using Pkg
Pkg.precompile()
ENV["GENIE_ENV"] = "dev"
using SearchLight
using SearchLightPostgreSQL
using BitemporalPostgres
using Test
SearchLight.Configuration.load() |> SearchLight.connect
# -
using Pkg
Pkg.precompile()
ENV["GENIE_ENV"] = "dev"
using SearchLight
using SearchLightPostgreSQL
using BitemporalPostgres
using Test
using TimeZones
@testset "BitemporalPostgres unit tests" begin
w=Workflow()
w.tsw_validfrom=ZonedDateTime(2014, 5, 30,21,0,1,1, tz"Africa/Porto-Novo")
SearchLight.Configuration.load() |> SearchLight.connect
t=TestDummyComponent()
tr=TestDummyComponentRevision(description="blue")
ts=TestDummySubComponent(ref_super=t.id)
tsr=TestDummySubComponentRevision(description="blue")
create_entity!(w)
create_component!(t,tr,w)
create_subcomponent!(t,ts,tsr,w)
@test w.ref_history !== nothing
@test w.is_committed ==0
@test w.ref_version == tr.ref_validfrom
@test w.ref_version == tsr.ref_validfrom
commit_workflow!(w)
@test w.is_committed ==1
w2=Workflow(ref_history=w.ref_history,tsw_validfrom=ZonedDateTime(2015, 5, 30,21,0,1,1, tz"Africa/Porto-Novo"))
tr2=copy(tr)
tr.description="yellow"
update_entity!(w2)
update_component!(tr,tr2,w2)
@test w2.ref_version == tr2.ref_validfrom
commit_workflow!(w2)
@test w2.ref_version == tr.ref_invalidfrom
@test w2.is_committed ==1
@test SearchLight.query("""
SELECT i.* FROM histories h JOIN versions v ON v.ref_history = h.id
JOIN validityIntervals i ON i.ref_version = v.id
JOIN testdummyComponents p ON p.ref_history = h.id
JOIN testdummycomponentrevisions r ON r.ref_component = p.id AND r.ref_valid @> v.id
WHERE h.id = 1 and p.id = 1 AND i.tsrworld @> TIMESTAMPTZ '2015-05-30 20:00:01.001+00' AND i.tsrdb @> NOW()
""") != Nothing
| test/testtests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import eigh
# + [markdown] hide_input=true
# We have a shear type frame, with constant floor masses and constant storey stiffnesses. We have to specify just the number of stories
# -
ns = 10
# We form the structural matrices
ones = np.ones(ns)
M = np.diag(ones)
K = np.diag(ones*2)
K[-1,-1] = 1
K = K - np.diag(ones[1:], -1) - np.diag(ones[1:], +1)
evals, evecs = eigh(K, M)
evals[:3]
def eigenplot(ns, evecs, ne=None, norm=False, fig_ax=None, title=None):
if fig_ax is None:
fig, ax = plt.subplots(figsize=(4,6))
else:
fig, ax = fig_ax
if ne is None: ne=ns
x = np.arange(ns+1)
y = evecs/(np.abs(evecs).max(axis=0) if norm else 1)/np.sign(evecs[-1])
y = np.vstack((np.zeros(y.shape[1]), y))
for i, evec in enumerate((y.T)[:ne], 1):
ax.plot(evec, x, label='$\\psi_{%d}$'%i)
ax.legend()
ax.grid(b=1, axis='y')
ax.yaxis.set_major_locator(plt.MultipleLocator(1))
if title : ax.set_title(title)
if not fig_ax : fig.tight_layout()
eigenplot(ns, evecs, ne=3, norm=1)
# ## Matrix Iteration
D0 = np.linalg.inv(K)@M
S = np.diag(ones)
sevals, sevecs = [], []
for i in range(3):
D = D0@S
x = ones
w2old = 0
while True:
xh = D@x
temp = xh@M
w2 = (temp@x)/(temp@xh)
x = xh*w2
if abs(w2-w2old)/w2 < 1E-8 : break
w2old = w2
sevals.append(w2)
sevecs.append(x)
modal_m = x.T@M@x
S = S - np.outer(x,x)@M/modal_m
print(evals[:3])
print(sevals)
sevecs = np.array(sevecs).T
fig, axes = plt.subplots(1,2,figsize=(8,8))
eigenplot(ns, sevecs, norm=1, fig_ax=(fig, axes[0]), title='Matrix Iteration')
eigenplot(ns, evecs, ne=3, norm=1, fig_ax=(fig, axes[1]), title='"Exact" Eigenvectors')
# ## Ritz-Rayleigh
np.random.seed(20190402)
phi = D0@(np.random.random((ns,8))-0.5)
k, m = phi.T@K@phi, phi.T@M@phi
zevals, zevecs = eigh(k, m)
psi = phi@zevecs
print(zevals)
print(evals[:8])
fig, axes = plt.subplots(1,2,figsize=(8,8))
eigenplot(ns, psi, ne=5,norm=1, fig_ax=(fig, axes[0]), title='Rayleigh-Ritz')
eigenplot(ns, evecs, ne=5, norm=1, fig_ax=(fig, axes[1]), title='"Exact" Eigenvectors')
# ## Subspace Iteration no.1
#
# 4 Ritz vectors
np.random.seed(20190402)
psi = np.random.random((ns, 4))
for i in range(2):
phi = D0@psi
k, m = phi.T@K@phi, phi.T@M@phi
zevals, zevecs = eigh(k, m)
psi = phi@zevecs
print('Ex', evals[:4])
print('SI', zevals[:4])
fig, axes = plt.subplots(1,2,figsize=(8,8))
eigenplot(ns, psi, ne=4, norm=1, fig_ax=(fig, axes[0]), title='2 Subspace Iterations, M=4')
eigenplot(ns, evecs, ne=4, norm=1, fig_ax=(fig, axes[1]), title='"Exact" Eigenvectors')
# ## Subspace Iteration no. 2
#
# 8 Ritz vectors
np.random.seed(20190402)
psi = np.random.random((ns, 8))
for i in range(2):
phi = D0@psi
k, m = phi.T@K@phi, phi.T@M@phi
zevals, zevecs = eigh(k, m)
psi = phi@zevecs
print('Ex', evals[:4])
print('SI', zevals[:4])
fig, axes = plt.subplots(1,2,figsize=(8,8))
eigenplot(ns, psi, ne=4, norm=1, fig_ax=(fig, axes[0]), title='2 Subspace Iterations, M=8')
eigenplot(ns, evecs, ne=4, norm=1, fig_ax=(fig, axes[1]), title='"Exact" Eigenvectors')
ns = 1000
ones = np.ones(ns)
M = np.diag(ones)
K = np.diag(ones*2)
K[-1,-1] = 1
K = K - np.diag(ones[1:], -1) - np.diag(ones[1:], +1)
K = K*500
evals, evecs = eigh(K, M)
evals[:3]
D0 = np.linalg.inv(K)@M
np.random.seed(20190402)
psi = np.random.random((ns, 4))
for i in range(3):
phi = D0@psi
k, m = phi.T@K@phi, phi.T@M@phi
zevals, zevecs = eigh(k, m)
psi = phi@zevecs
print('Ex', evals[:4])
print('SI', zevals[:4])
#fig, axes = plt.subplots(1,2,figsize=(8,8))
#eigenplot(ns, psi, ne=4, norm=1, fig_ax=(fig, axes[0]), title='2 Subspace Iterations, M=4')
#eigenplot(ns, evecs, ne=4, norm=1, fig_ax=(fig, axes[1]), title='"Exact" Eigenvectors')
np.random.seed(20190402)
psi = np.random.random((ns, 6))
for i in range(3):
phi = D0@psi
k, m = phi.T@K@phi, phi.T@M@phi
zevals, zevecs = eigh(k, m)
psi = phi@zevecs
print('Ex', evals[:4])
print('SI', zevals[:4])
| dati_2019/08/MatrixIterationEtc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import renders as rs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
# %matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape))
except:
print("Dataset could not be loaded. Is the dataset missing?")
# -
data.head(3)
data.shape
# ## Data Exploration
# In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
#
# Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.
# **Description of Categories**
# - FRESH: annual spending (m.u.) on fresh products (Continuous)
# - MILK: annual spending (m.u.) on milk products (Continuous)
# - GROCERY: annual spending (m.u.) on grocery products (Continuous)
# - FROZEN: annual spending (m.u.) on frozen products (Continuous)
# - DETERGENTS_PAPER: annual spending (m.u.) on detergents and paper products (Continuous)
# - DELICATESSEN: annual spending (m.u.) on delicatessen products (Continuous)
# - "A store selling cold cuts, cheeses, and a variety of salads, as well as a selection of unusual or foreign prepared foods."
# Display a description of the dataset
stats = data.describe()
stats
data.isnull().sum()
# ### Implementation: Selecting Samples
# To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
data.loc[[9,4,2], :]
# Using data.loc to filter a pandas DataFrame
data.loc[[100, 200, 300],:]
data.columns
data.keys()
# **Logic in selecting the 3 samples: Quartiles**
# - As you can see previously (in the object "stats"), we have the data showing the first and third quartiles.
# - We can filter samples that are starkly different based on the quartiles.
# - This way we have two establishments that belong in the first and third quartiles respectively in, for example, the Frozen category.
# Fresh filter
fresh_q1 = 3127.750000
display(data.loc[data.Fresh < fresh_q1, :].head())
# Fresh filter
fresh_q1 = 3127.750000
display(data.loc[data.Fresh < fresh_q1, :].tail())
# Fresh filter
fresh_q1 = 3127.750000
display(data.loc[data.Fresh < fresh_q1, :].min())
# Frozen filter
frozen_q1 = 742.250000
display(data.loc[data.Frozen < frozen_q1, :].head())
# Frozen
frozen_q3 = 3554.250000
display(data.loc[data.Frozen > frozen_q3, :].head(7))
# Hence we'll be choosing:
# - 43: Very low "Fresh" and very high "Grocery"
# - 12: Very low "Frozen" and very high "Fresh"
# - 39: Very high "Frozen" and very low "Detergents_Paper"
# +
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [43, 12, 39]
# Create a DataFrame of the chosen samples
# .reset_index(drop = True) resets the index from 0, 1 and 2 instead of 100, 200 and 300
samples = pd.DataFrame(data.loc[indices], columns = data.columns).reset_index(drop = True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
# -
# **Comparison of Samples and Means**
# +
# Import Seaborn, a very powerful library for Data Visualisation
import seaborn as sns
# Get the means
mean_data = data.describe().loc['mean', :]
# Append means to the samples' data
samples_bar = samples.append(mean_data)
# Construct indices
samples_bar.index = indices + ['mean']
# Plot bar plot
samples_bar.plot(kind='bar', figsize=(14,8))
# -
samples_bar
# **Comparing Samples' Percentiles**
# +
# First, calculate the percentile ranks of the whole dataset.
percentiles = data.rank(pct=True)
# Then, round it up, and multiply by 100
percentiles = 100*percentiles.round(decimals=3)
# Select the indices you chose from the percentiles dataframe
percentiles = percentiles.iloc[indices]
# Now, create the heat map using the seaborn library
sns.heatmap(percentiles, vmin=1, vmax=99, annot=True)
# -
# ### Question 1
# Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
# *What kind of establishment (customer) could each of the three samples you've chosen represent?*
# **Hint:** Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant.
# **Answer:**
# - Index 0: Coffee Cafe
# - Low spending on "Fresh", "Frozen" and "Delicatessen".
# - Majority of spending on "Grocery", "Milk" and "Detergents_Paper".
# - With some spending on "Delicatessen", it may be a cafe establishment serving drinks, coffee perhaps, with some ready-made food as a complimentary product.
# - Index 1: Upscale Restaurant
# - Low spending on "Frozen".
# - Majority of spending is a mix of "Fresh", "Milk, and "Grocery"
# - This may be an upscale restaurant with almost no spending on frozen foods.
# - Most upscale restaurants only use fresh foods.
# - Index 2: Fresh Food Retailer
# - Majority of spending is on "Fresh" goods with little spending on everything else except on "Frozen".
# - This may be a grocery store specializing in fresh foods with some frozen goods.
# ### Implementation: Feature Relevance
# One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
#
# In the code block below, you will need to implement the following:
# - Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function.
# - Use `sklearn.model_selection.train_test_split` to split the dataset into training and testing sets.
# - Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`.
# - Import a decision tree regressor, set a `random_state`, and fit the learner to the training data.
# - Report the prediction score of the testing set using the regressor's `score` function.
# Existing features
data.columns
# Imports
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# +
# Create list to loop through
dep_vars = list(data.columns)
# Create loop to test each feature as a dependent variable
for var in dep_vars:
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop([var], axis = 1)
# Confirm drop
# display(new_data.head(2))
# Create feature Series (Vector)
new_feature = pd.DataFrame(data.loc[:, var])
# Confirm creation of new feature
# display(new_feature.head(2))
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, new_feature, test_size=0.25, random_state=42)
# TODO: Create a decision tree regressor and fit it to the training set
# Instantiate
dtr = DecisionTreeRegressor(random_state=42)
# Fit
dtr.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
# Returns R^2
score = dtr.score(X_test, y_test)
print('R2 score for {} as dependent variable: {}'.format(var, score))
# -
# ### Question 2
# *Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?*
# **Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data.
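# To make the hint concrete, here is a quick standalone illustration (not part of the project code): a model that predicts worse than simply guessing the mean gets a negative R^2.

```python
# Small, self-contained demonstration of R^2 behavior with sklearn
from sklearn.metrics import r2_score

y_true = [1.0, 2.0, 3.0]

print(r2_score(y_true, y_true))           # perfect fit -> 1.0
print(r2_score(y_true, [2.0, 2.0, 2.0]))  # always predict the mean -> 0.0
print(r2_score(y_true, [3.0, 2.0, 1.0]))  # worse than the mean -> -3.0
```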
# **Answer:**
# - I used a loop and predicted every single feature as a dependent variable with the results shown above.
# - As you can see, "Fresh", "Frozen" and "Delicatessen" as dependent variables have negative R2 scores.
# - Their negative scores imply that they are necessary for identifying customers' spending habits because the remaining features cannot explain the variation in them.
# - Similarly, "Milk" and "Detergents_Paper" have very low R2 scores.
# - Their low scores also imply that they are necessary for identifying customers' spending habits.
# - However, "Grocery" has an R2 score of 0.68.
#     - This is still a fairly low score in absolute terms, but it is much higher than that of the other features.
#     - "Grocery" may therefore not be as necessary as the other features for identifying customers' spending habits.
# - We will explore this further.
# ### Visualize Feature Distributions
# To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
# Produce a scatter matrix for each pair of features in the data
from pandas.plotting import scatter_matrix
scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# **Correlation Matrix**
# - This is to cross-reference with the scatter matrix above to draw more accurate insights from the data.
# - The higher the color is on the bar, the higher the correlation.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
def plot_corr(df,size=10):
'''Function plots a graphical correlation matrix for each pair of columns in the dataframe.
Input:
df: pandas DataFrame
size: vertical and horizontal size of the plot'''
corr = df.corr()
fig, ax = plt.subplots(figsize=(size, size))
    # Plot the correlation matrix (not the raw data) as a heat map
    cax = ax.matshow(corr, interpolation='nearest')
fig.colorbar(cax)
plt.xticks(range(len(corr.columns)), corr.columns);
plt.yticks(range(len(corr.columns)), corr.columns);
plot_corr(data)
# -
# ### Question 3
# *Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?*
# **Hint:** Is the data normally distributed? Where do most of the data points lie?
# **Answer:**
# I have plotted a correlation matrix to compare with the scatter matrix, to ensure this answer is as accurate as possible.
# - The following pairs of features show some correlation, as observed from the linear trends in the scatter plot and the high values in the correlation plot. They are ranked from strongest to weakest:
# - Grocery and Detergents_Paper.
# - Grocery and Milk.
# - Detergents_Paper and Milk (not too strong).
# - These strong correlations lend credence to our initial claim that "Grocery" may not be necessary for identifying customers' spending habits.
#     - Grocery's high correlation with Detergents_Paper and Milk corresponds to the relatively high R2 score obtained when regressing Grocery on all other features.
# - The data are **not normally distributed**, and there are many outliers.
#     - Most features are right-skewed: the bulk of the data points lie at lower values, with long tails to the right.
#     - This indicates that a scaling transformation is needed to make the features closer to normally distributed, since many clustering algorithms work best on roughly normal data.
# ## Data Preprocessing
# In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.
# ### Implementation: Feature Scaling
# If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
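# The Box-Cox option mentioned above is not used in this project, but a minimal sketch (assuming scipy is available; the synthetic data below is illustrative only) shows how it relates to the natural logarithm. Box-Cox requires strictly positive values, which holds for this spending data.

```python
# Fit the best power transformation to a right-skewed (lognormal) sample
import numpy as np
from scipy.stats import boxcox

rng = np.random.RandomState(42)
skewed = rng.lognormal(mean=3.0, sigma=1.0, size=500)  # right-skewed sample

transformed, best_lambda = boxcox(skewed)
print('Best power (lambda):', round(best_lambda, 3))
# A lambda near 0 means Box-Cox effectively chose the natural logarithm
```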
#
# In the code block below, you will need to implement the following:
# - Assign a copy of the data to `log_data` after applying a logarithm scaling. Use the `np.log` function for this.
# - Assign a copy of the sample data to `log_samples` after applying a logarithm scaling. Again, use `np.log`.
# +
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# -
# ### Observation
# After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
#
# Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
# Display the log-transformed sample data
display(log_samples)
plot_corr(data)
plot_corr(log_data)
# **Changes in correlations**
# - Grocery and Detergents_Paper has a weaker correlation.
# - Grocery and Milk has a slightly stronger correlation.
# - Detergents_Paper and Milk has a slightly stronger correlation.
# ### Implementation: Outlier Detection
# Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identfying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
#
# In the code block below, you will need to implement the following:
# - Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this.
# - Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`.
# - Assign the calculation of an outlier step for the given feature to `step`.
# - Optionally remove data points from the dataset by adding indices to the `outliers` list.
#
# **NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
# Once you have performed this implementation, the dataset will be stored in the variable `good_data`.
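# Before the implementation, here is a tiny standalone illustration (made-up values, not the wholesale data) of Tukey's 1.5 × IQR rule described above:

```python
# Flag values beyond 1.5 * IQR outside the interquartile range
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 100])  # 100 is clearly extreme

q1, q3 = np.percentile(x, 25), np.percentile(x, 75)
step = 1.5 * (q3 - q1)

outliers = x[(x < q1 - step) | (x > q3 + step)]
print(outliers)  # -> [100]
```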
# This is how np.percentile would work
# np.percentile(series, percentile)
np.percentile(data.loc[:, 'Milk'], 25)
# **Modified Code**
# - The code has been revamped to use pandas .loc method because I prefer the consistency!
import itertools
# +
# Select the indices for data points you wish to remove
outliers_lst = []
# For each feature find the data points with extreme high or low values
for feature in log_data.columns:
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data.loc[:, feature], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data.loc[:, feature], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = 1.5 * (Q3 - Q1)
# Display the outliers
print("Data points considered outliers for the feature '{}':".format(feature))
# The tilde sign ~ means not
# So here, we're finding any points outside of Q1 - step and Q3 + step
outliers_rows = log_data.loc[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step)), :]
# display(outliers_rows)
outliers_lst.append(list(outliers_rows.index))
outliers = list(itertools.chain.from_iterable(outliers_lst))
# List of unique outliers
# We use set()
# Sets are lists with no duplicate entries
uniq_outliers = list(set(outliers))
# List of duplicate outliers
dup_outliers = list(set([x for x in outliers if outliers.count(x) > 1]))
print('Outliers list:\n', uniq_outliers)
print('Length of outliers list:\n', len(uniq_outliers))
print('Duplicate list:\n', dup_outliers)
print('Length of duplicates list:\n', len(dup_outliers))
# Remove duplicate outliers
# Only 5 specified
good_data = log_data.drop(log_data.index[dup_outliers]).reset_index(drop = True)
# Original Data
print('Original shape of data:\n', data.shape)
# Processed Data
print('New shape of data:\n', good_data.shape)
# -
# **Notes**
# - Samples are not in this outliers' list.
# - The cleaned data is now a 435 x 6 matrix, instead of the original 440 x 6.
# ### Question 4
# *Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.*
# **Answer:**
# - Specifically there are 5 examples that have duplicates.
# - Their indices are in this array: [128, 65, 66, 75, 154].
# - They should be removed because they are outliers in more than one feature, not just one.
# - Hence, they are not representative of our general customers.
# **Further Readings**
# - http://www.theanalysisfactor.com/outliers-to-drop-or-not-to-drop/
# - Abstract Summary
# - If it is obvious that the outlier is due to incorrectly entered or measured data, you should drop the outlier.
# - If the outlier does not change the results but does affect assumptions, you may drop the outlier. But note that in a footnote of your paper.
# - More commonly, the outlier affects both results and assumptions. In this situation, it is not legitimate to simply drop the outlier. You may run the analysis both with and without it, but you should state in at least a footnote the dropping of any such data points and how the results changed.
# - If the outlier creates a significant association, you should drop the outlier and should not report any significance from your analysis.
# ## Feature Transformation
# In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
# ### Implementation: PCA
#
# Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
#
# In the code block below, you will need to implement the following:
# - Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`.
# - Apply a PCA transformation of the sample log-data `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
from sklearn.decomposition import PCA
# +
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
# Instantiate
pca = PCA(n_components=6)
# Fit
pca.fit(good_data)
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = rs.pca_results(good_data, pca)
# +
# DataFrame of results
display(pca_results)
# DataFrame
display(type(pca_results))
# Cumulative explained variance should add to 1
display(pca_results['Explained Variance'].cumsum())
# -
# ### Question 5
# *How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.*
# **Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the individual feature weights.
# **Answer:**
# - 70.68% of the variance in the data is explained by the first and second principal components.
# - 93.11% of the variance in the data is explained by the first four principal components.
# - Components breakdown:
# - The first principal component (PC1):
# - An increase in PC1 is associated with large increases in "Milk", "Grocery" and "Detergents_Paper" spending.
# - These features best represent PC1.
# - This is in line with our initial findings where the 3 features are highly correlated.
# - The second principal component (PC2):
# - An increase in PC2 is associated with large increases in "Fresh", "Frozen" and "Delicatessen" spending.
# - These features best represent PC2.
# - This makes sense, as PC1 captures a different set of features; in PC2, the PC1 features carry only small positive weights.
# - The third principal component (PC3):
# - An increase in PC3 is associated with a large increase in "Delicatessen" and a large decrease in "Fresh" spending.
# - These features best represent PC3.
# - The fourth principal component (PC4):
# - An increase in PC4 is associated with a large increase in "Frozen" and a large decrease in "Delicatessen" spending.
# - These features best represent PC4.
# **Further readings**
# - https://onlinecourses.science.psu.edu/stat505/node/54
# - http://setosa.io/ev/principal-component-analysis/
# ### Observation
# Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
# ### Implementation: Dimensionality Reduction
# When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
#
# In the code block below, you will need to implement the following:
# - Assign the results of fitting PCA in two dimensions with `good_data` to `pca`.
# - Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`.
# - Apply a PCA transformation of the sample log-data `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
# +
# TODO: Apply PCA by fitting the good data with only two dimensions
# Instantiate
pca = PCA(n_components=2)
pca.fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
# -
# ### Observation
# Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
# ## Clustering
#
# In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
# ### Question 6
# *What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?*
# **Answer:**
# - **K-Means Clustering Algorithm**
# - Intuition: points in the same cluster are closer to one another than to points in other clusters.
# - The goal is to minimize the distance between points within the same cluster.
# - Hard assignment
# - Points belong explicitly to clusters
# - Advantages
# - Easy to understand and implement.
# - Works well in practice.
# - Disadvantages
# - It may converge to a local optimum depending on the initialization of the clusters.
# - We can initialize multiple times.
# - It may be computationally expensive to compute Euclidean distances.
# - Yet we can easily use batch K-means to solve this.
# - It is susceptible to outliers.
# - We can pre-process our data to exclude outliers to solve this.
#
# - **Gaussian Mixture Model**
# - Soft assignment
# - There is no definite assignment of points to clusters.
# - Points have probabilities of belonging to clusters.
# - Advantages
# - There is greater flexibility due to clusters having unconstrained covariances.
# - In fact, K-means is a special case of the Gaussian Mixture Model.
# - It allows mixed memberships.
# - Due to the nature of soft assignments, a point can belong to two clusters with varying degree (probability).
# - Disadvantages
# - It may converge to a local optimum depending on the initialization of the clusters.
# - We can initialize multiple times
# - It is a much more complicated model to interpret.
#
# **Due to how there may be a mixed membership problem in our dataset where there is no clear demarcation, I believe we should start with the Gaussian Mixture Model**
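# The hard-vs-soft assignment difference described above can be seen in a toy, standalone sketch (synthetic blobs, not the wholesale dataset):

```python
# Contrast K-Means' hard integer labels with GMM's membership probabilities
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

# Hard assignment: each point gets exactly one integer label
hard_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Soft assignment: each point gets a probability of belonging to each cluster
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
soft_probs = gmm.predict_proba(X)  # each row sums to 1

print(hard_labels[:3])
print(soft_probs[:3].round(3))
```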
# **Further Readings**
# - http://home.deib.polimi.it/matteucc/Clustering/tutorial_html/mixture.html
# - http://www.nickgillian.com/wiki/pmwiki.php/GRT/GMMClassifier
# - http://scikit-learn.org/stable/modules/mixture.html#gmm-classifier
# - http://playwidtech.blogspot.hk/2013/02/k-means-clustering-advantages-and.html
# - https://sites.google.com/site/dataclusteringalgorithms/k-means-clustering-algorithm
# - http://www.improvedoutcomes.com/docs/WebSiteDocs/Clustering/K-Means_Clustering_Overview.htm
# - http://stats.stackexchange.com/questions/133656/how-to-understand-the-drawbacks-of-k-means
# - http://www.r-bloggers.com/k-means-clustering-is-not-a-free-lunch/
# - http://www.r-bloggers.com/pca-and-k-means-clustering-of-delta-aircraft/
# - https://www.quora.com/What-happens-when-you-pass-correlated-variables-to-a-k-means-clustering-Also-is-there-a-way-by-which-clustering-can-be-used-to-group-similar-pattern-observed-for-a-variable-over-time
# - https://shapeofdata.wordpress.com/2013/07/30/k-means/
# - http://mlg.eng.cam.ac.uk/tutorials/06/cb.pdf
# - https://www.quora.com/What-is-the-difference-between-K-means-and-the-mixture-model-of-Gaussian
# ### Implementation: Creating Clusters
# Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.
#
# In the code block below, you will need to implement the following:
# - Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`.
# - Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`.
# - Find the cluster centers using the algorithm's respective attribute and assign them to `centers`.
# - Predict the cluster for each sample data point in `pca_samples` and assign them to `sample_preds`.
# - Import sklearn.metrics.silhouette_score and calculate the silhouette score of `reduced_data` against `preds`.
# - Assign the silhouette score to `score` and print the result.
# Imports
from sklearn.mixture import GaussianMixture as GMM
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
# Create range of clusters
range_n_clusters = list(range(2,11))
print(range_n_clusters)
# **GMM Implementation**
# Loop through clusters
for n_clusters in range_n_clusters:
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = GMM(n_components=n_clusters).fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds, metric='mahalanobis')
print("For n_clusters = {}. The average silhouette_score is : {}".format(n_clusters, score))
lowest_bic = np.inf
bic = []
n_components_range = range(1, 7)
cv_types = ['spherical', 'tied', 'diag', 'full']
for cv_type in cv_types:
for n_components in n_components_range:
# Fit a mixture of Gaussians with EM
gmm = GMM(n_components=n_components, covariance_type=cv_type)
X = reduced_data
gmm.fit(X)
bic.append(gmm.bic(X))
if bic[-1] < lowest_bic:
lowest_bic = bic[-1]
best_gmm = gmm
bic
# **KMEANS Implementation**
# Loop through clusters
for n_clusters in range_n_clusters:
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = KMeans(n_clusters=n_clusters).fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.cluster_centers_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds, metric='euclidean')
print("For n_clusters = {}. The average silhouette_score is : {}".format(n_clusters, score))
# **Distance Metric**
# - The Silhouette Coefficient is calculated using the mean intra-cluster distance and the mean nearest-cluster distance for each sample. Therefore, it makes sense to use the same distance metric here as the one used in the clustering algorithm. This is **Euclidean for KMeans** and **Mahalanobis for general GMM**.
#
# **Metric for GMM**
# - [BIC](http://scikit-learn.org/stable/auto_examples/mixture/plot_gmm_selection.html) could sometimes be a better criterion for deciding on the optimal number of clusters.
# ### Question 7
# *Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?*
# **Answer: GMM**
#
# - For n_clusters = 2, the average silhouette score is 0.411818864386
# - For n_clusters = 3, the average silhouette score is 0.373560747175
# - For n_clusters = 4, the average silhouette score is 0.332870064265
# - For n_clusters = 5, the average silhouette score is 0.287122500285
# - For n_clusters = 6, the average silhouette score is 0.277900119783
# - For n_clusters = 7, the average silhouette score is 0.322542962762
# - For n_clusters = 8, the average silhouette score is 0.310386058749
# - For n_clusters = 9, the average silhouette score is 0.309984687201
# - For n_clusters = 10, the average silhouette score is 0.316531857813
#
# **The best score is obtained when the number of clusters is 2. Similarly, when we use KMeans, the best score is also obtained with the same number of clusters.**
# ### Cluster Visualization
# Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
# Extra code because we ran a loop on top and this resets to what we want
clusterer = GMM(n_components=2).fit(reduced_data)
preds = clusterer.predict(reduced_data)
centers = clusterer.means_
sample_preds = clusterer.predict(pca_samples)
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centers, pca_samples)
# ### Implementation: Data Recovery
# Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
#
# In the code block below, you will need to implement the following:
# - Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`.
# - Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
#
# ### To The Original Data
# +
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.columns)
true_centers.index = segments
display(true_centers)
# -
# ### Question 8
# Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?*
# **Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`.
# +
# Clusters' deviation from median
display(true_centers - data.median())
# Clusters' deviation from mean
# As you can see, this is not a meaningful comparison for Segment 1 where everything is negative
display(true_centers - data.mean())
# -
# **Answer:**
# - We will compare deviations from the median, with reference to the statistical description of the dataset at the beginning of this project, since the mean is sensitive to outliers and would not yield meaningful comparisons.
# - Segment 0:
# - Establishments in this segment have above median spending on "Milk", "Grocery" and "Detergents_Paper".
# - This could represent restaurants and cafes.
# - Segment 1:
# - Establishments in this segment have above median spending on "Fresh" and "Frozen".
# - This could represent typical retailers such as markets specializing in fresh and frozen food.
# - This is typical in seafood or meat markets.
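# The median-vs-mean point made in the answer above can be seen in a tiny sketch (illustrative numbers, not from the dataset):

```python
# One extreme value drags the mean far more than the median
import numpy as np

spending = np.array([10, 12, 11, 13, 500])  # one extreme spender

print(np.mean(spending))    # 109.2 -- dragged upward by the outlier
print(np.median(spending))  # 12.0  -- barely affected
```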
# ### Question 9
# *For each sample point, which customer segment from* ***Question 8*** *best represents it? Are the predictions for each sample point consistent with this?*
#
# Run the code block below to find which cluster each sample point is predicted to be.
# Display the predictions
for i, pred in enumerate(sample_preds):
    print("Sample point", i, "predicted to be in Cluster", pred)
samples
# **Answer:**
# - Sample 1:
# - It is evident that this belongs to cluster 0 (segment 0), where spending on "Milk", "Grocery" and "Detergents_Paper" is high.
# - Sample 2:
# - This is tricky. Spending on "Milk", "Grocery" and "Detergents_Paper" is high, but spending on "Fresh" is high too.
# - Considering spending on "Frozen" is low, it makes sense to place it in cluster 0.
# - Sample 3:
# - It is evident that this belongs to cluster 1 because spending on "Fresh" and "Frozen" is high.
# ## Conclusion
# In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships.
# ### Question 10
# Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. *How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?*
# **Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
# **Answer:**
# - Impact on Segment 0
# - Intuitively, the impact on Segment 0's customers should be minimal.
# - This is because their products are mainly non-perishable products from "Grocery" to "Detergents_Paper".
# - However, the situation is complicated by this segment's high spending on "Milk" products, which are perishable.
# - But with advances in preservation, most "Milk" products last more than a week these days.
# - Impact on Segment 1
# - One would surmise that Segment 1's customers would be substantially affected by the change in delivery service.
# - This is because their products are highly perishable such as "Fresh" products including fruits, vegetables, seafood and meat.
# - We can formalize the impact by running an experiment to determine which group of customers is most affected.
# 1. Randomly sample 4 groups where we sample 2 groups from each cluster.
# - Group 0a, 0b would be the group experiencing the change and the control group respectively for cluster 0.
# - Group 1a, 1b would be the group experiencing the change and the control group respectively for cluster 1.
# 2. We will change the schedules for group 0a and 1a keeping the schedules for 0b and 1b unchanged.
# 3. We will have 2 metrics.
# 1. We will conduct customer satisfaction survey for all groups.
# 2. We will cross-reference their satisfaction level with their spending.
# 4. Clients experiencing a negative impact would have a low satisfaction level and a decreased or similar spending. And clients experiencing a positive impact would have a high satisfaction level and an increased or similar spending.
# - We can investigate anomalies where clients display contradictory signals like expressing a low satisfaction level and increasing spending, and vice versa.
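# Step 4's comparison can be formalized with a simple permutation test on per-customer spending changes, run separately for each cluster. A sketch with synthetic numbers (all figures below are made-up placeholders, not data from this project):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-customer percentage change in spending after the schedule
# switch (positive = spending went up); group "a" got the 3-day schedule,
# group "b" kept the 5-day schedule.
group_0a = rng.normal(loc=-1.0, scale=5.0, size=40)   # treated, cluster 0
group_0b = rng.normal(loc=0.0, scale=5.0, size=40)    # control, cluster 0

def perm_test(treated, control, n_perm=5000, seed=1):
    """Two-sided permutation test on the difference in mean change."""
    rng = np.random.default_rng(seed)
    observed = treated.mean() - control.mean()
    pooled = np.concatenate([treated, control])
    n = len(treated)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:n].mean() - pooled[n:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

diff, p = perm_test(group_0a, group_0b)
print(f"cluster 0 mean change difference: {diff:.2f}, p-value: {p:.3f}")
```

A small p-value for one cluster but not the other would indicate the schedule change affects that segment disproportionately.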
# **Further Readings**
# - https://www.quora.com/When-should-A-B-testing-not-be-trusted-to-make-decisions/answer/Edwin-Chen-1
# - http://multithreaded.stitchfix.com/blog/2015/05/26/significant-sample/
# - http://techblog.netflix.com/2016/04/its-all-about-testing-netflix.html
# - https://vwo.com/ab-testing/
# - http://stats.stackexchange.com/questions/192752/clustering-and-a-b-testing
# ### Question 11
# Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), we can consider *'customer segment'* as an **engineered feature** for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service.
# *How can the wholesale distributor label the new customers using only their estimated product spending and the* ***customer segment*** *data?*
# **Hint:** A supervised learner could be used to train on the original customers. What would be the target variable?
# **Answer:**
# - You could use support vector machines, naive Bayes, logistic regression or any other suitable classifier to label the new clients based on their estimated spending features.
# - 0: cluster 0.
# - 1: cluster 1.
# - Target variable would be the cluster group.
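# One minimal way to sketch the idea is a nearest-centroid rule: assign each new customer to the segment whose cluster center is closest in spending space. The centers and customer below are made-up placeholders, not this project's actual data; in practice you would train the supervised learner described above on the original customers.

```python
import numpy as np

# Hypothetical cluster centers in the original 6-feature spending space
# (Fresh, Milk, Grocery, Frozen, Detergents_Paper, Delicatessen)
centers = np.array([
    [4000., 8000., 12000., 1000., 5000., 1500.],  # segment 0
    [9000., 2000.,  2500., 3500.,  400., 1000.],  # segment 1
])

def assign_segment(spending, centers):
    """Label a new customer with the segment of the nearest cluster center."""
    dists = np.linalg.norm(centers - spending, axis=1)
    return int(np.argmin(dists))

new_customer = np.array([3500., 7500., 11000., 900., 4800., 1400.])
print("assigned to segment", assign_segment(new_customer, centers))  # -> 0
```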
# ### Visualizing Underlying Distributions
#
# At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
#
# Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
dup_outliers
# Display the clustering results based on 'Channel' data
rs.channel_results(reduced_data, dup_outliers, pca_samples)
# ### Question 12
# *How well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?*
# **Answer:**
# - The number of clusters is consistent with the underlying distribution, which shows 2 major clusters, so the clustering algorithm did well.
# - There are customer segments on the extreme left and right that would be classified as purely "Retailers" or "Hotels/Restaurants/Cafes" respectively.
# - This underlying classification is consistent with our observation where we noted cluster 0 customers are typically restaurants and cafes and cluster 1 customers are typically markets.
| customer_segments_python3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
# -
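# The `cvtColor` grayscale step above is, to a close approximation, a weighted sum of the channels using the ITU-R BT.601 luma weights. A numpy sketch of the same conversion, without OpenCV:

```python
import numpy as np

def bgr_to_gray(frame):
    """Approximate cv2.COLOR_BGR2GRAY: Y = 0.299*R + 0.587*G + 0.114*B."""
    b, g, r = frame[..., 0], frame[..., 1], frame[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[..., 2] = 255          # pure red in BGR channel order
print(bgr_to_gray(frame))    # every pixel 76 (int(0.299 * 255))
```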
| notebooks/opencv-webcam-stream/opencv-webcam-stream.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#phase 1
weight = 0 #w
input = 1 #x
pred = weight * input #y = x*w
print ("Prediction:" + str(pred))
#phase 2
input = 1 #x
weight = 0 #w
pred = weight * input #y = x*w
print ("Prediction:" + str(pred))
#measuring error
goal_pred = 1
error = (pred - goal_pred) ** 2
print ("Error:" + str(error))
# phase 3
input = 1 #x
weight = 0 #w
pred = weight * input #y = x*w
print ("Prediction:" + str(pred))
#measuring error
goal_pred = 1
error = (pred - goal_pred) ** 2
delta = pred - goal_pred
print ("Error:" + str(error) + ",Delta:" + str(delta))
# phase 4
input = 1
weight = 0
pred = weight * input
print ("Prediction:" + str(pred))
#measuring error
goal_pred = 1
error = (pred - goal_pred) ** 2
delta = pred - goal_pred
print ("Error:" + str(error) + ",Delta:" + str(delta))
#updating weight
weight = weight - delta
print ("Weight:" + str(weight))
print ("")
trainingData = [[4,4],
[5,5],
[6,6]]
weight = 0
alpha = 0.1
def neural_network(input, weight):
    output = input * weight
    return output

for iteration in range(18):
    for i in range(len(trainingData)):
        data = trainingData[i]
        input = data[0]
        goal_pred = data[1]
        pred = neural_network(input, weight)
        error = (pred - goal_pred) ** 2
        delta = (pred - goal_pred)
        print("Input:" + str(input) + ", Pred:" + str(pred) + ", Error:" + str(error) + ", W:" + str(weight))
        weight = weight - (delta * alpha)
testData = [[1,1],
[2,2],
[3,3]]
for i in range(len(testData)):
    data = testData[i]
    input = data[0]
    answer = data[1]
    pred = neural_network(input, weight)
    print("Input:" + str(input) + ", Pred:" + str(pred))
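# The `weight - delta * alpha` update above is, up to constant factors, gradient descent on the squared error: for error = (pred - goal_pred) ** 2 with pred = input * weight, the exact derivative is 2 * input * (pred - goal_pred). The notebook's `delta` drops the 2 and the input factor, which for positive inputs still points downhill. A quick finite-difference check of the exact derivative:

```python
def error(weight, x, goal):
    pred = x * weight
    return (pred - goal) ** 2

def analytic_grad(weight, x, goal):
    # d/dw (x*w - goal)^2 = 2 * x * (x*w - goal)
    return 2 * x * (x * weight - goal)

w, x, goal = 0.5, 4.0, 4.0
eps = 1e-6
# Central finite difference should agree with the analytic derivative
numeric = (error(w + eps, x, goal) - error(w - eps, x, goal)) / (2 * eps)
print(numeric, analytic_grad(w, x, goal))  # both close to -16.0
```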
| GrokkingDeepLearning/FirstNeuralNet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Harris Corner Detection
# ### Import resources and display image
# +
import matplotlib.pyplot as plt
import numpy as np
import cv2
# %matplotlib inline
# Read in the image
image = cv2.imread('images/waffle.jpg')
# Make a copy of the image
image_copy = np.copy(image)
# Change color to RGB (from BGR)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
plt.imshow(image_copy)
# -
# ### Detect corners
# +
# Corner detection relies on changes in intensity, so we convert the image to grayscale
gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
gray = np.float32(gray)
# Detect corners
# 2 --> size of the neighborhood to look at when identifying potential corners; 2 means a 2x2 neighborhood
# 3 --> aperture size of the Sobel_x and Sobel_y operators
# 0.04 --> a constant that helps determine which points are considered corners (typical value 0.04)
# Choosing a slightly lower value for the constant (< 0.04) will result in more corners being detected
# dst will have corners marked as bright points and non-corners as darker pixels
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
# Dilate corner image to enhance corner points
# Dilation enlarges bright regions or regions in the foreground
dst = cv2.dilate(dst,None)
plt.imshow(dst, cmap='gray')
# -
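# Under the hood, `cornerHarris` builds a structure tensor M from image gradients over each neighborhood and scores pixels with R = det(M) - k * trace(M)^2 (k = 0.04 above). A simplified numpy sketch of that score on a synthetic corner, using a crude 3x3 box window instead of OpenCV's exact windowing:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris score R = det(M) - k*trace(M)^2, with the structure tensor M
    box-summed over a 3x3 window; a simplified sketch of cv2.cornerHarris."""
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    def box(a):
        # Sum over a 3x3 neighborhood (borders left at zero)
        out = np.zeros_like(a)
        out[1:-1, 1:-1] = sum(a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        return out
    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

# Synthetic image: bright square on a dark background, corners at its edges
img = np.zeros((12, 12))
img[4:8, 4:8] = 1.0
r = harris_response(img)
corner = np.unravel_index(np.argmax(r), r.shape)
print("strongest response near", corner)
```

Edges score negative (one dominant gradient direction) while corners score positive, which is why thresholding `dst` isolates corners.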
# ### Extract and display strong corners
# +
def mark_corners(gray_img_detected_corners, img_orig, threshold):
    # Iterate through all the corners and draw them on the image (if they pass the threshold)
    for j in range(0, gray_img_detected_corners.shape[0]):
        for i in range(0, gray_img_detected_corners.shape[1]):
            if(gray_img_detected_corners[j,i] > threshold):
                # image, center pt, radius, color, thickness
                cv2.circle(img_orig, (i, j), 1, (0,255,0), 1)
    return img_orig
# Create an image copy to draw corners on
corner_image_1 = np.copy(image_copy)
corner_image_2 = np.copy(image_copy)
corner_image_3 = np.copy(image_copy)
## TODO: Define a threshold for extracting strong corners
# This value varies depending on the image and how many corners you want to detect
# Try changing this free parameter, 0.1, to be larger or smaller and see what happens
thresh_1 = 0.100*dst.max()
thresh_2 = 0.025*dst.max()
thresh_3 = 0.400*dst.max()
f, (p1, p2, p3) = plt.subplots(1, 3, figsize=(20,10))
p1.set_title("threshold = 0.100*dst.max()")
p1.imshow(mark_corners(dst, corner_image_1, thresh_1))
p2.set_title("threshold = 0.025*dst.max()")
p2.imshow(mark_corners(dst, corner_image_2, thresh_2))
p3.set_title("threshold = 0.400*dst.max()")
p3.imshow(mark_corners(dst, corner_image_3, thresh_3))
# -
| L04-03-Notebook-Find_the_corners.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "9635b9b2-98f4-4103-b7dc-b2a3cf5de4c1", "showTitle": false, "title": ""}
import warnings
warnings.filterwarnings("ignore")
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "c9ab0dcc-93f8-435c-9d13-44d9b0a17502", "showTitle": false, "title": ""}
dbutils.fs.cp("/FileStore/bakerloo_stops.csv", "file:///tmp/bakerloo_stops.csv")
dbutils.fs.cp("/FileStore/bakerloo_line.geojson", "file:///tmp/bakerloo_line.geojson")
dbutils.fs.cp("/FileStore/2020_02_btp_street.csv", "file:///tmp/2020_02_btp_street.csv")
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "29260e44-3211-4e79-b616-657ea6fbc45e", "showTitle": false, "title": ""}
import pandas as pd
import geopandas as gpd
df = pd.read_csv("file:///tmp/bakerloo_stops.csv")
bakerloo_stops = gpd.GeoDataFrame(
df["stn_name"], geometry = gpd.points_from_xy(df.stn_lon, df.stn_lat), crs = "EPSG:4326"
)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "4810207c-540e-47bc-9edf-d433e6e42205", "showTitle": false, "title": ""}
from shapely import wkt
bakerloo_stops["geometry"] = bakerloo_stops["geometry"].apply(wkt.dumps)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "ce684a81-8caf-453e-bc97-6b91e9ecffc4", "showTitle": false, "title": ""}
bakerloo_stops_df = spark.createDataFrame(bakerloo_stops)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "a3983ec3-7eee-476b-9bf3-dce026b391f0", "showTitle": false, "title": ""}
bakerloo_stops_df.show(5, False)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "46c2f677-e132-4fb6-a6f6-f225e9a3498e", "showTitle": false, "title": ""}
bakerloo_line = gpd.GeoDataFrame.from_file("file:///tmp/bakerloo_line.geojson")
bakerloo_line.to_crs(epsg = 4326, inplace = True)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "5131b769-b7fa-4e81-8ea9-af0161f0cd08", "showTitle": false, "title": ""}
from shapely.geometry import LineString
# https://stackoverflow.com/questions/62053253/how-to-split-a-linestring-to-segments
def segments(curve):
    return list(map(LineString, zip(curve.coords[:-1], curve.coords[1:])))
bakerloo_line_segments = segments(bakerloo_line.geometry[0])
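# The `segments` helper above simply pairs consecutive coordinates, so an n-point polyline yields n - 1 segments. The same pairing with plain tuples, as a dependency-free sanity check:

```python
# Pair consecutive coordinates exactly as segments() does, but with plain
# tuples instead of shapely LineStrings.
coords = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
pairs = list(zip(coords[:-1], coords[1:]))
for start, end in pairs:
    print(start, "->", end)
# 4 points -> 3 segments
```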
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "4969e128-a508-4c33-8a14-97c302889726", "showTitle": false, "title": ""}
bakerloo_sections = gpd.GeoDataFrame(
geometry = bakerloo_line_segments, crs = "EPSG:4326"
)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "12cb7c8f-9dd6-4311-b7f7-d9b88be1bc55", "showTitle": false, "title": ""}
bakerloo_sections["geometry"] = bakerloo_sections["geometry"].apply(wkt.dumps)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "cd90134e-a8d8-4407-bf8b-715ceeaa29dd", "showTitle": false, "title": ""}
bakerloo_sections_df = spark.createDataFrame(bakerloo_sections)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "79a64ce1-ea17-4fb6-a456-81abdfddf2c9", "showTitle": false, "title": ""}
bakerloo_sections_df.show(5)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "77b3c557-577c-4b7e-afbf-5495afbef2b2", "showTitle": false, "title": ""}
df = pd.read_csv("file:///tmp/2020_02_btp_street.csv")
crimes = gpd.GeoDataFrame(
df["Crime type"], geometry = gpd.points_from_xy(df.Longitude, df.Latitude), crs = "EPSG:4326"
)
crimes.rename(columns = {"Crime type" : "crime_type"}, inplace = True)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "81e677c4-afe8-4ff4-9174-370449ada869", "showTitle": false, "title": ""}
crimes["geometry"] = crimes["geometry"].apply(wkt.dumps)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "0927f0bb-d499-4328-a43f-670c13f66db4", "showTitle": false, "title": ""}
crimes_df = spark.createDataFrame(crimes)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "66cfd622-f50e-4bbc-850a-a3f207c5e81a", "showTitle": false, "title": ""}
crimes_df.show(5, False)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "efdc99a8-33f2-4281-9435-819eca91d5e7", "showTitle": false, "title": ""}
bakerloo_line_buff = gpd.GeoDataFrame(
geometry = bakerloo_line.buffer(0.005), crs = "EPSG:4326"
)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "b592576d-d543-4ae8-8f9e-30620488f843", "showTitle": false, "title": ""}
bakerloo_line_buff["geometry"] = bakerloo_line_buff["geometry"].apply(wkt.dumps)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "8519af7e-63d2-484a-9e37-dbe0164eb6a7", "showTitle": false, "title": ""}
bakerloo_line_buff_df = spark.createDataFrame(bakerloo_line_buff)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "50b4039d-a633-4493-a860-345dc86e6098", "showTitle": false, "title": ""}
bakerloo_line_buff_df.show()
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "370fe984-8641-446f-b142-3431dba0222b", "showTitle": false, "title": ""}
# %run ./Setup
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "fc598209-4cbe-42b8-bd8a-cb26def7bf45", "showTitle": false, "title": ""}
spark.conf.set("spark.datasource.singlestore.ddlEndpoint", cluster)
spark.conf.set("spark.datasource.singlestore.user", "admin")
spark.conf.set("spark.datasource.singlestore.password", password)
spark.conf.set("spark.datasource.singlestore.disablePushdown", "false")
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "a12b027e-5efb-4b1d-8ab2-8c6e92ca3900", "showTitle": false, "title": ""}
(bakerloo_stops_df.write
.format("singlestore")
.option("loadDataCompression", "LZ4")
.mode("overwrite")
.save("hot_routes.bakerloo_stops"))
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "10a4eb3e-dbcb-4ced-a06d-72b73dfb6cf9", "showTitle": false, "title": ""}
(bakerloo_sections_df.write
.format("singlestore")
.option("loadDataCompression", "LZ4")
.mode("overwrite")
.save("hot_routes.bakerloo_sections"))
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "119508ea-5425-4708-8858-393a1f194e9e", "showTitle": false, "title": ""}
(crimes_df.write
.format("singlestore")
.option("loadDataCompression", "LZ4")
.mode("overwrite")
.save("hot_routes.crimes"))
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "c69d3f86-8d6a-4492-a92a-8d6c75be88ef", "showTitle": false, "title": ""}
(bakerloo_line_buff_df.write
.format("singlestore")
.option("loadDataCompression", "LZ4")
.mode("overwrite")
.save("hot_routes.bakerloo_line_buff"))
| notebooks/Data Loader for Hot Routes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data block API foundations
# +
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# -
#export
from exp.nb_07a import *
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=600)
datasets.URLs.IMAGENETTE_160
# ## Image ItemList
# Previously we were reading the whole MNIST dataset into RAM at once, loading it as a pickle file. We can't do that for datasets larger than our RAM capacity, so instead we leave the images on disk and just grab the ones we need for each mini-batch as we use them.
#
# Let's use the [imagenette dataset](https://github.com/fastai/imagenette/blob/master/README.md) and build the data blocks we need along the way.
# ### Get images
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
path
# To be able to look at what's inside a directory from a notebook, we add the `.ls` method to `Path` with a monkey-patch.
#export
import PIL,os,mimetypes
Path.ls = lambda x: list(x.iterdir())
path.ls()
(path/'val').ls()
# Let's have a look inside a class folder (the first class is tench):
path_tench = path/'val'/'n01440764'
img_fn = path_tench.ls()[0]
img_fn
img = PIL.Image.open(img_fn)
img
plt.imshow(img)
import numpy
imga = numpy.array(img)
imga.shape
imga[:10,:10,0]
# Just in case there are other files in the directory (models, texts...) we want to keep only the images. Let's not write it out by hand, but instead use what's already on our computer (the MIME types database).
#export
image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))
' '.join(image_extensions)
#export
def setify(o): return o if isinstance(o,set) else set(listify(o))
test_eq(setify('aa'), {'aa'})
test_eq(setify(['aa',1]), {'aa',1})
test_eq(setify(None), set())
test_eq(setify(1), {1})
test_eq(setify({1}), {1})
# Now let's walk through the directories and grab all the images. The first private function grabs all the images inside a given directory and the second one walks (potentially recursively) through all the folders in `path`.
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=1325)
#export
def _get_files(p, fs, extensions=None):
    p = Path(p)
    res = [p/f for f in fs if not f.startswith('.')
           and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions)]
    return res
t = [o.name for o in os.scandir(path_tench)]
t = _get_files(path, t, extensions=image_extensions)
t[:3]
#export
def get_files(path, extensions=None, recurse=False, include=None):
    path = Path(path)
    extensions = setify(extensions)
    extensions = {e.lower() for e in extensions}
    if recurse:
        res = []
        for i,(p,d,f) in enumerate(os.walk(path)): # returns (dirpath, dirnames, filenames)
            if include is not None and i==0: d[:] = [o for o in d if o in include]
            else: d[:] = [o for o in d if not o.startswith('.')]
            res += _get_files(p, f, extensions)
        return res
    else:
        f = [o.name for o in os.scandir(path) if o.is_file()]
        return _get_files(path, f, extensions)
get_files(path_tench, image_extensions)[:3]
# We need the recurse argument when we start from `path` since the pictures are two levels below in directories.
get_files(path, image_extensions, recurse=True)[:3]
all_fns = get_files(path, image_extensions, recurse=True)
len(all_fns)
# Imagenet is 100 times bigger than imagenette, so we need this to be fast.
# %timeit -n 10 get_files(path, image_extensions, recurse=True)
# ## Prepare for modeling
# What we need to do:
#
# - Get files
# - Split validation set
# - random%, folder name, csv, ...
# - Label:
# - folder name, file name/re, csv, ...
# - Transform per image (optional)
# - Transform to tensor
# - DataLoader
# - Transform per batch (optional)
# - DataBunch
# - Add test set (optional)
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=1728)
# ### Get files
# We use the `ListContainer` class from notebook 06 to store our objects in an `ItemList`. The `get` method will need to be subclassed to explain how to access an element (open an image for instance), then the private `_get` method can allow us to apply any additional transform to it.
#
# `new` will be used in conjunction with `__getitem__` (that works for one index or a list of indices) to create the training and validation sets from a single stream when we split the data.
# +
#export
def compose(x, funcs, *args, order_key='_order', **kwargs):
    key = lambda o: getattr(o, order_key, 0)
    for f in sorted(listify(funcs), key=key): x = f(x, **kwargs)
    return x

class ItemList(ListContainer):
    def __init__(self, items, path='.', tfms=None):
        super().__init__(items)
        self.path,self.tfms = Path(path),tfms

    def __repr__(self): return f'{super().__repr__()}\nPath: {self.path}'

    def new(self, items, cls=None):
        if cls is None: cls=self.__class__
        return cls(items, self.path, tfms=self.tfms)

    def get(self, i): return i
    def _get(self, i): return compose(self.get(i), self.tfms)

    def __getitem__(self, idx):
        res = super().__getitem__(idx)
        if isinstance(res,list): return [self._get(o) for o in res]
        return self._get(res)

class ImageList(ItemList):
    @classmethod
    def from_files(cls, path, extensions=None, recurse=True, include=None, **kwargs):
        if extensions is None: extensions = image_extensions
        return cls(get_files(path, extensions, recurse=recurse, include=include), path, **kwargs)

    def get(self, fn): return PIL.Image.open(fn)
# -
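# Since `compose` sorts by `_order`, transforms run in attribute order regardless of their position in the list. A standalone sketch (with `listify` from the course library replaced by a plain list check, so it runs on its own):

```python
def compose(x, funcs, *args, order_key='_order', **kwargs):
    # Minimal standalone copy of the compose above; `listify` is replaced
    # by a plain list check.
    funcs = funcs if isinstance(funcs, (list, tuple)) else [funcs]
    key = lambda o: getattr(o, order_key, 0)
    for f in sorted(funcs, key=key):
        x = f(x, **kwargs)
    return x

def add_one(x): return x + 1
def double(x): return x * 2
double._order = 10          # runs after add_one, whatever the list order

print(compose(3, [double, add_one]))  # (3 + 1) * 2 = 8
```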
# Transforms aren't only used for data augmentation. To allow total flexibility, `ImageList` returns the raw PIL image. The first thing is to convert it to 'RGB' (or something else).
#
# Transforms only need to be functions that take an element of the `ItemList` and transform it. If they need state, they can be defined as a class. Also, having them as a class allows us to define an `_order` attribute (default 0) that is used to sort the transforms.
# +
#export
class Transform(): _order=0
class MakeRGB(Transform):
    def __call__(self, item): return item.convert('RGB')
def make_rgb(item): return item.convert('RGB')
# -
il = ImageList.from_files(path, tfms=make_rgb)
il
img = il[0]; img
# We can also index with a range or a list of integers:
il[:1]
# ### Split validation set
# Here, we need to split the files between those in the folder train and those in the folder val.
fn = il.items[0]; fn
# Since our filenames are `Path` objects, we can find the directory of the file with `.parent`. We need to go up two levels, since the last folder is the class name.
fn.parent.parent.name
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=2175)
# +
#export
def grandparent_splitter(fn, valid_name='valid', train_name='train'):
    gp = fn.parent.parent.name
    return True if gp==valid_name else False if gp==train_name else None

def split_by_func(items, f):
    mask = [f(o) for o in items]
    # `None` values will be filtered out
    f = [o for o,m in zip(items,mask) if m==False]
    t = [o for o,m in zip(items,mask) if m==True ]
    return f,t
# -
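# What `fn.parent.parent.name` picks out on an imagenette-style path (the filename below is a made-up example):

```python
from pathlib import PurePosixPath

# Hypothetical imagenette-style path: <root>/<split>/<class>/<file>
fn = PurePosixPath('imagenette-160/val/n01440764/example_image.JPEG')
print(fn.parent.parent.name)   # 'val' -> goes to the validation set
```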
splitter = partial(grandparent_splitter, valid_name='val')
# %time train,valid = split_by_func(il, splitter)
len(train),len(valid)
# Now that we can split our data, let's create the class that will contain it. It just needs two `ItemList` to be initialized, and we create a shortcut to all the unknown attributes by trying to grab them in the `train` `ItemList`.
#export
class SplitData():
    def __init__(self, train, valid): self.train,self.valid = train,valid

    def __getattr__(self,k): return getattr(self.train,k)
    #This is needed if we want to pickle SplitData and be able to load it back without recursion errors
    def __setstate__(self,data:Any): self.__dict__.update(data)

    @classmethod
    def split_by_func(cls, il, f):
        lists = map(il.new, split_by_func(il.items, f))
        return cls(*lists)

    def __repr__(self): return f'{self.__class__.__name__}\nTrain: {self.train}\nValid: {self.valid}\n'
sd = SplitData.split_by_func(il, splitter); sd
# ### Labeling
# Labeling has to be done *after* splitting, because it uses *training* set information to apply to the *validation* set, using a *Processor*.
#
# A *Processor* is a transformation that is applied to all the inputs once at initialization, with some *state* computed on the training set that is then applied without modification on the validation set (and maybe the test set or at inference time on a single item). For instance, it could be **processing texts** to **tokenize**, then **numericalize** them. In that case we want the validation set to be numericalized with exactly the same vocabulary as the training set.
#
# Another example is in **tabular data**, where we **fill missing values** with (for instance) the median computed on the training set. That statistic is stored in the inner state of the *Processor* and applied on the validation set.
#
# In our case, we want to **convert label strings to numbers** in a consistent and reproducible way. So we create a list of possible labels in the training set, and then convert our labels to numbers based on this *vocab*.
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=2368)
# +
#export
from collections import OrderedDict
def uniqueify(x, sort=False):
    res = list(OrderedDict.fromkeys(x).keys())
    if sort: res.sort()
    return res
# -
# First, let's define the processor. We also define a `ProcessedItemList` with an `obj` method that can get the unprocessed items: for instance a processed label will be an index between 0 and the number of classes - 1, the corresponding `obj` will be the name of the class. The first one is needed by the model for the training, but the second one is better for displaying the objects.
# +
#export
class Processor():
    def process(self, items): return items

class CategoryProcessor(Processor):
    def __init__(self): self.vocab=None

    def __call__(self, items):
        #The vocab is defined on the first use.
        if self.vocab is None:
            self.vocab = uniqueify(items)
            self.otoi  = {v:k for k,v in enumerate(self.vocab)}
        return [self.proc1(o) for o in items]
    def proc1(self, item): return self.otoi[item]

    def deprocess(self, idxs):
        assert self.vocab is not None
        return [self.deproc1(idx) for idx in idxs]
    def deproc1(self, idx): return self.vocab[idx]
# -
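# The processor pattern in miniature: the vocab is computed on the first (training) call and reused afterwards, so validation items map through the same indices. A self-contained sketch re-declaring minimal versions of the pieces above:

```python
from collections import OrderedDict

def uniqueify(x, sort=False):
    res = list(OrderedDict.fromkeys(x).keys())
    if sort: res.sort()
    return res

class CategoryProcessor:
    def __init__(self): self.vocab = None
    def __call__(self, items):
        if self.vocab is None:                      # state set on first use
            self.vocab = uniqueify(items)
            self.otoi = {v: k for k, v in enumerate(self.vocab)}
        return [self.otoi[o] for o in items]
    def deprocess(self, idxs): return [self.vocab[i] for i in idxs]

proc = CategoryProcessor()
train_idx = proc(['cat', 'dog', 'cat', 'fish'])   # builds the vocab
valid_idx = proc(['dog', 'fish'])                 # reuses the same vocab
print(train_idx, valid_idx, proc.deprocess(valid_idx))
# [0, 1, 0, 2] [1, 2] ['dog', 'fish']
```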
# Here we label according to the folders of the images, so simply `fn.parent.name`. We label the training set first with a newly created `CategoryProcessor` so that it computes its inner `vocab` on that set. Then we label the validation set using the same processor, which means it uses the same `vocab`. The end result is another `SplitData` object.
# +
#export
def parent_labeler(fn): return fn.parent.name

def _label_by_func(ds, f, cls=ItemList): return cls([f(o) for o in ds.items], path=ds.path)

#This is slightly different from what was seen during the lesson,
# we'll discuss the changes in lesson 11
class LabeledData():
    def process(self, il, proc): return il.new(compose(il.items, proc))

    def __init__(self, x, y, proc_x=None, proc_y=None):
        self.x,self.y = self.process(x, proc_x),self.process(y, proc_y)
        self.proc_x,self.proc_y = proc_x,proc_y

    def __repr__(self): return f'{self.__class__.__name__}\nx: {self.x}\ny: {self.y}\n'
    def __getitem__(self,idx): return self.x[idx],self.y[idx]
    def __len__(self): return len(self.x)

    def x_obj(self, idx): return self.obj(self.x, idx, self.proc_x)
    def y_obj(self, idx): return self.obj(self.y, idx, self.proc_y)

    def obj(self, items, idx, procs):
        isint = isinstance(idx, int) or (isinstance(idx,torch.LongTensor) and not idx.ndim)
        item = items[idx]
        for proc in reversed(listify(procs)):
            item = proc.deproc1(item) if isint else proc.deprocess(item)
        return item

    @classmethod
    def label_by_func(cls, il, f, proc_x=None, proc_y=None):
        return cls(il, _label_by_func(il, f), proc_x=proc_x, proc_y=proc_y)

def label_by_func(sd, f, proc_x=None, proc_y=None):
    train = LabeledData.label_by_func(sd.train, f, proc_x=proc_x, proc_y=proc_y)
    valid = LabeledData.label_by_func(sd.valid, f, proc_x=proc_x, proc_y=proc_y)
    return SplitData(train,valid)
# -
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
assert ll.train.proc_y is ll.valid.proc_y
ll.train.y
ll.train.y.items[0], ll.train.y_obj(0), ll.train.y_obj(slice(2))
ll
# ### Transform to tensor
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=3044)
ll.train[0]
ll.train[0][0]
# To be able to put all our images in a batch, we need them to have all the same size. We can do this easily in PIL.
ll.train[0][0].resize((128,128))
# The first transform resizes to a given size, then we convert the image to a byte tensor before converting it to float and dividing by 255. We will investigate data augmentation transforms at length in notebook 10.
# +
#export
class ResizeFixed(Transform):
    _order=10
    def __init__(self,size):
        if isinstance(size,int): size=(size,size)
        self.size = size

    def __call__(self, item): return item.resize(self.size, PIL.Image.BILINEAR)

def to_byte_tensor(item):
    res = torch.ByteTensor(torch.ByteStorage.from_buffer(item.tobytes()))
    w,h = item.size
    return res.view(h,w,-1).permute(2,0,1)
to_byte_tensor._order=20

def to_float_tensor(item): return item.float().div_(255.)
to_float_tensor._order=30
# -
# +
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, splitter)
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
# -
# Here is a little convenience function to show an image from the corresponding tensor.
#export
def show_image(im, figsize=(3,3)):
plt.figure(figsize=figsize)
plt.axis('off')
plt.imshow(im.permute(1,2,0))
x,y = ll.train[0]
x.shape
show_image(x)
# ## Modeling
# ### DataBunch
# Now we are ready to put our datasets together in a `DataBunch`.
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=3226)
bs=64
train_dl,valid_dl = get_dls(ll.train,ll.valid,bs, num_workers=4)
x,y = next(iter(train_dl))
x.shape
# We can still see the images in a batch and get the corresponding classes.
show_image(x[0])
ll.train.proc_y.vocab[y[0]]
y
# We change our `DataBunch` a little bit to add a few attributes: `c_in` (for channels in) and `c_out` (for channels out) instead of just `c`. This will help when we need to build our model.
#export
class DataBunch():
def __init__(self, train_dl, valid_dl, c_in=None, c_out=None):
self.train_dl,self.valid_dl,self.c_in,self.c_out = train_dl,valid_dl,c_in,c_out
@property
def train_ds(self): return self.train_dl.dataset
@property
def valid_ds(self): return self.valid_dl.dataset
# Then we define a function that goes directly from the `SplitData` to a `DataBunch`.
# +
#export
def databunchify(sd, bs, c_in=None, c_out=None, **kwargs):
dls = get_dls(sd.train, sd.valid, bs, **kwargs)
return DataBunch(*dls, c_in=c_in, c_out=c_out)
SplitData.to_databunch = databunchify
# -
# This gives us the full summary on how to grab our data and put it in a `DataBunch`:
# +
path = datasets.untar_data(datasets.URLs.IMAGENETTE_160)
tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)
# -
# ### Model
# [Jump_to lesson 11 video](https://course.fast.ai/videos/?lesson=11&t=3360)
cbfs = [partial(AvgStatsCallback,accuracy),
CudaCallback]
# We will normalize with the statistics from a batch.
m,s = x.mean((0,2,3)).cuda(),x.std((0,2,3)).cuda()
m,s
# +
#export
def normalize_chan(x, mean, std):
return (x-mean[...,None,None]) / std[...,None,None]
_m = tensor([0.47, 0.48, 0.45])
_s = tensor([0.29, 0.28, 0.30])
norm_imagenette = partial(normalize_chan, mean=_m.cuda(), std=_s.cuda())
# -
cbfs.append(partial(BatchTransformXCallback, norm_imagenette))
nfs = [64,64,128,256]
# We build our model using [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/abs/1812.01187), in particular: we don't use a big conv 7x7 at first but three 3x3 convs, and don't go directly from 3 channels to 64 but progressively add those.
# +
#export
import math
def prev_pow_2(x): return 2**math.floor(math.log2(x))
def get_cnn_layers(data, nfs, layer, **kwargs):
def f(ni, nf, stride=2): return layer(ni, nf, 3, stride=stride, **kwargs)
l1 = data.c_in
l2 = prev_pow_2(l1*3*3)
layers = [f(l1 , l2 , stride=1),
f(l2 , l2*2, stride=2),
f(l2*2, l2*4, stride=2)]
nfs = [l2*4] + nfs
layers += [f(nfs[i], nfs[i+1]) for i in range(len(nfs)-1)]
layers += [nn.AdaptiveAvgPool2d(1), Lambda(flatten),
nn.Linear(nfs[-1], data.c_out)]
return layers
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model)
return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
# -
sched = combine_scheds([0.3,0.7], cos_1cycle_anneal(0.1,0.3,0.05))
learn,run = get_learn_run(nfs, data, 0.2, conv_layer, cbs=cbfs+[
partial(ParamScheduler, 'lr', sched)
])
# Let's have a look at our model using Hooks. We print the layers and the shapes of their outputs.
#export
def model_summary(run, learn, data, find_all=False):
xb,yb = get_batch(data.valid_dl, run)
    device = next(learn.model.parameters()).device  # model may not be on the GPU yet
xb,yb = xb.to(device),yb.to(device)
mods = find_modules(learn.model, is_lin_layer) if find_all else learn.model.children()
f = lambda hook,mod,inp,out: print(f"{mod}\n{out.shape}\n")
with Hooks(mods, f) as hooks: learn.model(xb)
model_summary(run, learn, data)
# And we can train the model:
# %time run.fit(5, learn)
# The [leaderboard](https://github.com/fastai/imagenette/blob/master/README.md) as this notebook is written has ~85% accuracy for 5 epochs at 128px size, so we're definitely on the right track!
# ## Export
# !python notebook2script.py 08_data_block.ipynb
| nbs/dl2/08_data_block.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 05 - The Unreasonable Effectiveness of Linear Regression
#
#
# ## All You Need is Regression
#
# When dealing with causal inference, we saw how there are two potential outcomes for each individual: \\(Y_0\\) is the outcome the individual would have if he or she didn't take the treatment and \\(Y_1\\) is the outcome if he or she took the treatment. The act of setting the treatment \\(T\\) to 0 or 1 materializes one of the potential outcomes and makes it impossible for us to ever know the other one. This leads to the fact that the individual treatment effect \\(\tau_i = Y_{1i} - Y_{0i}\\) is unknowable. Still, the observed outcome can always be written as a switch between the two potential outcomes:
#
# $
# Y_i = Y_{0i} + T_i(Y_{1i} - Y_{0i}) = Y_{0i}(1-T_i) + T_i Y_{1i}
# $
#
# So, for now, let's focus, on the simpler task of estimating the average causal effect. With this in mind, we are accepting the fact that some people respond better than others to the treatment, but we are also accepting that we can't know who they are. Instead, we will just try to see if the treatment works, **on average**.
#
# $
# ATE = E[Y_1 - Y_0]
# $
#
# This will give us a simplified model, with a constant treatment effect \\(Y_{1i} = Y_{0i} + \kappa\\). If \\(\kappa\\) is positive, we will say that the treatment has, on average, a positive effect. Even if some people will respond badly to it, on average, the impact will be positive.
#
# Let's also recall that we can't simply estimate \\(E[Y_1 - Y_0]\\) with the difference in mean \\(E[Y|T=1] - E[Y|T=0]\\) due to bias. Bias often arises when the treated and untreated are different for reasons other than the treatment itself. One way to see this is on how they differ in the potential outcome \\(Y_0\\)
#
# $
# E[Y|T=1] - E[Y|T=0] = \underbrace{E[Y_1 - Y_0|T=1]}_{ATET} + \underbrace{\{ E[Y_0|T=1] - E[Y_0|T=0]\}}_{BIAS}
# $
#
# Previously, we saw how we can eliminate bias with Random Experiments, or **Randomised Controlled Trial** (RCT) as they are sometimes called. RCT forces the treated and the untreated to be equal and that's why the bias vanishes. We also saw how to place uncertainty levels around our estimates for the treatment effect. Namely, we looked at the case of online versus face-to-face classrooms, where \\(T=0\\) represent face-to-face lectures and \\(T=1\\) represent online ones. Students were randomly assigned to one of those 2 types of lectures and then their performance on an exam was evaluated. We've built an A/B testing function that could compare both groups, provide the average treatment effect and even place a confidence interval around it.
#
# Now, it's time to see that we can do all of that with the workhorse of causal inference: **Linear Regression**! Think of it this way. If comparing treated and untreated means was an apple for dessert, linear regression would be cold and creamy tiramisu. Or if comparing treated and untreated is a sad and old loaf of white wonder bread, linear regression would be a crusty, soft crumb country loaf sourdough baked by <NAME> himself.
#
# 
#
# Let's see how this beauty works. In the code below, we want to run the exact same analysis of comparing online vs face-to-face classes. But instead of doing all that math of confidence intervals, we just run a regression. More specifically, we estimate the following model:
#
# $
# exam_i = \beta_0 + \kappa \ Online_i + u_i
# $
#
# Notice that \\(Online\\) is our treatment indicator and hence a dummy variable: it is zero when the class is face-to-face and one when it's online. With that in mind, we can see that linear regression will recover \\(E[Y|T=0] = \beta_0\\) and \\(E[Y|T=1] = \beta_0 + \kappa \\). \\(\kappa\\) will be our ATE.
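# As a sanity check, here is a small self-contained simulation (synthetic data, not the classroom dataset) showing that OLS with a treatment dummy recovers exactly the group means and their difference:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate a randomized experiment: T is a fair coin flip, the true ATE is -5
n = 10_000
T = rng.integers(0, 2, size=n)
y = 80 - 5 * T + rng.normal(0, 3, size=n)

# OLS with an intercept: solve the normal equations (X'X) beta = X'y
X = np.column_stack([np.ones(n), T])
beta0, kappa = np.linalg.solve(X.T @ X, X.T @ y)

# the intercept equals the control-group mean, the slope equals the difference in means
assert np.isclose(beta0, y[T == 0].mean())
assert np.isclose(kappa, y[T == 1].mean() - y[T == 0].mean())
```

# With a single dummy regressor, this equality is exact, not approximate.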
# +
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import graphviz as gr
# %matplotlib inline
data = pd.read_csv("data/online_classroom.csv").query("format_blended==0")
result = smf.ols('falsexam ~ format_ol', data=data).fit()
result.summary().tables[1]
# -
# That's quite amazing. We are not only able to estimate the ATE, but we also get, for free, confidence intervals and P-Values out of it! More than that, we can see that regression is doing exactly what it is supposed to do: comparing \\(E[Y|T=0]\\) and \\(E[Y|T=1]\\). The intercept is exactly the sample mean when \\(T=0\\), \\(E[Y|T=0]\\), and the coefficient of the online format is exactly the sample difference in means \\(E[Y|T=1] - E[Y|T=0]\\). Don't trust me? No problem. You can see for yourself:
(data
.groupby("format_ol")
["falsexam"]
.mean())
# As expected. If you add to the intercept the ATE, that is, the parameter estimate of online format, you get the sample mean for the treated: \\(78.5475 + (-4.9122) = 73.635263\\).
#
# ## Regression Theory
#
# I don't intend to dive too deep into how linear regression is constructed and estimated. However, a little bit of theory will go a long way in explaining its power in causal inference. First of all, regression solves a theoretical best linear prediction problem. Let \\(\beta^*\\) be a vector of parameters:
#
# $
# \beta^* =\underset{\beta}{argmin} \ E[(Y_i - X_i'\beta)^2]
# $
#
# Linear regression finds the parameters that minimise the mean squared error (MSE).
#
# If you differentiate it and set it to zero, you will find that the linear solution to this problem is given by
#
# $
# \beta^* = E[X_i'X_i]^{-1}E[X_i' Y_i]
# $
#
# We can estimate this beta using the sample equivalent:
#
# $
# \hat{\beta} = (X'X)^{-1}X' Y
# $
#
# But don't take my word for it. If you are one of those that understand code better than formulas, try for yourself:
# +
X = data[["format_ol"]].assign(intercep=1)
y = data["falsexam"]
def regress(y, X):
return np.linalg.inv(X.T.dot(X)).dot(X.T.dot(y))
beta = regress(y, X)
beta
# -
# The formulas above are pretty general. However, it pays off to study the case where we only have one regressor. In causal inference, we often want to estimate the causal impact of a variable \\(T\\) on an outcome \\(y\\). So, we use regression with this single variable to estimate this effect. Even if we include other variables in the model, they are usually just auxiliary. Adding other variables can help us estimate the causal effect of the treatment, but we are not very interested in estimating their parameters.
#
# With a single regressor variable \\(T\\), the parameter associated to it will be given by
#
# $
# \beta_1 = \dfrac{Cov(Y_i, T_i)}{Var(T_i)}
# $
#
# If \\(T\\) is randomly assigned, \\(\beta_1\\) is the ATE.
kapa = data["falsexam"].cov(data["format_ol"]) / data["format_ol"].var()
kapa
# If we have more than one regressor, we can extend the previous formula to accommodate that. Let's say those other variables are just auxiliary and that we are truly interested only in estimating the parameter \\(\kappa\\) associated with \\(T\\).
#
# $
# y_i = \beta_0 + \kappa T_i + \beta_1 X_{1i} + ... +\beta_k X_{ki} + u_i
# $
#
# \\(\kappa\\) can be obtained with the following formula
#
# $
# \kappa = \dfrac{Cov(Y_i, \tilde{T_i})}{Var(\tilde{T_i})}
# $
#
# where \\(\tilde{T_i}\\) is the residual from a regression of \\(T_i\\) on all the other covariates \\(X_{1i}, ..., X_{ki}\\). Now, let's appreciate how cool this is. It means that the coefficient of a multivariate regression is the bivariate coefficient of the same regressor **after accounting for the effect of the other variables in the model**. In causal inference terms, \\(\kappa\\) is the bivariate coefficient of \\(T\\) after having used all other variables to predict it.
#
# This has a nice intuition behind it. If we can predict \\(T\\) using other variables, it means it's not random. However, we can make it so that \\(T\\) is as good as random once we control for other available variables. To do so, we use linear regression to predict it from the other variables and then we take the residuals of that regression \\(\tilde{T}\\). By definition, \\(\tilde{T}\\) cannot be predicted by the other variables \\(X\\) that we've already used to predict \\(T\\). Quite elegantly, \\(\tilde{T}\\) is a version of the treatment that is not associated with any other variable in \\(X\\).
#
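# To make this concrete, here is a small synthetic check (an illustrative sketch, not the chapter's data) that the multivariate coefficient on the treatment equals the bivariate formula applied to the residualised treatment:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# a confounder x drives both the treatment t and the outcome y
x = rng.normal(size=n)
t = 0.5 * x + rng.normal(size=n)
y = 1.0 + 2.0 * t + 3.0 * x + rng.normal(size=n)

# full multivariate regression: y ~ intercept + t + x; grab the coefficient on t
X = np.column_stack([np.ones(n), t, x])
kappa_full = np.linalg.lstsq(X, y, rcond=None)[0][1]

# partialling out: residualize t on the other covariates, then use the bivariate formula
Z = np.column_stack([np.ones(n), x])
t_tilde = t - Z @ np.linalg.lstsq(Z, t, rcond=None)[0]
kappa_fwl = np.cov(y, t_tilde)[0, 1] / t_tilde.var(ddof=1)

# the two routes agree up to floating point error
assert np.isclose(kappa_full, kappa_fwl)
```

# This equivalence is the Frisch-Waugh-Lovell theorem, and it holds exactly, not just on average.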
# By the way, this is also a property of linear regression. The residual is always orthogonal to, and therefore uncorrelated with, the variables in the model that created it:
e = y - X.dot(beta)
print("Orthogonality implies that the dot product is zero:", np.dot(e, X))
X[["format_ol"]].assign(e=e).corr()
# And what is even cooler is that these properties don't depend on anything! They are mathematical truths, regardless of what your data looks like.
#
# ## Regression For Non-Random Data
#
# So far, we have worked with random experiment data but, as we know, those are hard to come by. Experiments are very expensive to conduct, or simply infeasible. It's very hard to convince McKinsey & Co. to randomly provide their services free of charge so that we can, once and for all, distinguish the value their consulting services bring from the fact that those firms that can afford to pay them are already very well off.
#
#
# For this reason, we shall now delve into non random or observational data. In the following example, we will try to estimate the impact of an additional year of education on hourly wage. As you might have guessed, it is extremely hard to conduct an experiment with education. You can't simply randomize people to 4, 8 or 12 years of education. In this case observational data is all we have.
#
# First, let's estimate a very simple model. We will regress log hourly wages on years of education. We use logs here so that our parameter estimates have a percentage interpretation. With it, we will be able to say that 1 extra year of education yields a wage increase of x%.
#
# $
# log(hwage)_i = \beta_0 + \beta_1 educ_i + u_i
# $
wage = pd.read_csv("./data/wage.csv").dropna()
model_1 = smf.ols('np.log(wage) ~ educ', data=wage).fit()
model_1.summary().tables[1]
# The estimate of \\(\beta_1\\) is 0.0596, with a 95% confidence interval of (0.046, 0.073). This means the model predicts that wages will increase about 5.9% for every additional year of education. This percentage increase is in line with the belief that education impacts wages in an exponential fashion: we expect going from 8 to 9 years of education to be more rewarding than going from 2 to 3 years.
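# A small aside on reading log-linear coefficients (a numeric sketch, not part of the original analysis): the coefficient is in log points, and the exact implied percentage change is \\(e^{\beta_1} - 1\\), which is close to \\(\beta_1\\) itself when the coefficient is small:

```python
import numpy as np

beta1 = 0.0596  # estimated return to one extra year of education, in log points

# the exact percentage change implied by a log-linear model
exact_pct = np.exp(beta1) - 1
print(f"approximate: {beta1:.1%}, exact: {exact_pct:.1%}")  # approximate: 6.0%, exact: 6.1%
```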
# +
from matplotlib import pyplot as plt
from matplotlib import style
style.use("fivethirtyeight")
x = np.array(range(1, 25))
plt.plot(x, np.exp(2.2954 + 0.0529 * x))
plt.xlabel("Years of Education")
plt.ylabel("Hourly Wage")
plt.title("Impact of Education on Hourly Wage")
plt.show()
# -
# Of course, being able to estimate this simple model doesn't mean it's correct. Notice how careful I was with my words: I said the model **predicts** wage from education. I never said that this prediction was causal. In fact, by now, you probably have very serious reasons to believe this model is biased. Since our data didn't come from a random experiment, we don't know if those that got more education are comparable to those who got less. Going even further, from our understanding of how the world works, we can be quite certain that they are not comparable. Namely, we can argue that those with more years of education probably have richer parents, and that the increase we are seeing in wages as we increase education is just a reflection of how family wealth is associated with more years of education. Putting it in math terms, we think that \\(E[Y_0|T=0] < E[Y_0|T=1]\\), that is, those with more education would have higher income anyway, even without so many years of education. If you are really grim about education, you can argue that it can even *reduce* wages by keeping people out of the workforce and lowering their experience.
#
# Fortunately, in our data, we have access to lots of other variables. We can see the parents' education `meduc`, `feduc`, the `IQ` score for that person, the number of years of experience `exper` and the tenure of the person in his or her current company `tenure`. We even have some dummy variables for marriage and black ethnicity.
wage.head()
# We can include all those extra variables in a model and estimate it:
#
# $
# log(hwage)_i = \beta_0 + \kappa \ educ_i + \pmb{\beta}X_i + u_i
# $
#
# To understand how this helps with the bias problem, let's recap the bivariate breakdown of multivariate linear regression.
#
# $
# \kappa = \dfrac{Cov(Y_i, \tilde{T_i})}{Var(\tilde{T_i})}
# $
#
# This formula says that we can predict `educ` from the parents' education, from IQ, from experience and so on. After we do that, we'll be left with a version of `educ`, \\(\tilde{educ}\\), which is uncorrelated with all the variables included previously. This will break down arguments such as "people that have more years of education have it because they have higher IQ. It is not the case that education leads to higher wages. It is just the case that it is correlated with IQ, which is what drives wages". Well, if we include IQ in our model, then \\(\kappa\\) becomes the return of an additional year of education while keeping IQ fixed. Pause a little bit to understand what this implies. Even if we can't use randomised controlled trials to keep other factors equal between treated and untreated, regression can do this by including those same factors in the model, even if the data is not random!
# +
controls = ['IQ', 'exper', 'tenure', 'age', 'married', 'black',
'south', 'urban', 'sibs', 'brthord', 'meduc', 'feduc']
X = wage[controls].assign(intercep=1)
t = wage["educ"]
y = wage["lhwage"]
beta_aux = regress(t, X)
t_tilde = t - X.dot(beta_aux)
kappa = t_tilde.cov(y) / t_tilde.var()
kappa
# -
# This coefficient we've just estimated tells us that, for people with the same IQ, experience, tenure, age and so on, we should expect an additional year of education to be associated with a 4.11% increase in hourly wage. This confirms our suspicion that the first simple model with only `educ` was biased. It also confirms that this bias was overestimating the impact of education. Once we controlled for other factors, the estimated impact of education fell.
#
# If we are wiser and use software that other people wrote instead of coding everything ourselves, we can even place a confidence interval around this estimate.
model_2 = smf.ols('lhwage ~ educ +' + '+'.join(controls), data=wage).fit()
model_2.summary().tables[1]
# ## Omitted Variable or Confounding Bias
#
# The question that remains is: is this parameter we've estimated causal? Unfortunately, we can't say for sure. We can argue that the first simple model that regresses wage on education probably isn't. It omits important variables that are correlated both with education and with wages. Without controlling for them, the estimated impact of education is also capturing the impact of those other variables that were not included in the model.
#
# To better understand how this bias works, let's suppose the true model for how education affects wage looks a bit like this
#
# $
# Wage_i = \alpha + \kappa \ Educ_i + A_i'\beta + u_i
# $
#
# Wage is affected by education, whose impact is measured by the size of \\(\kappa\\), and by additional ability factors, denoted as the vector \\(A\\). If we omit ability from our model, our estimate for \\(\kappa\\) will look like this:
#
# $
# \dfrac{Cov(Wage_i, Educ_i)}{Var(Educ_i)} = \kappa + \beta'\delta_{Ability}
# $
#
# where \\(\delta_{A}\\) is the vector of coefficients from the regression of \\(A\\) on \\(Educ\\)
#
# The key point here is that it won't be exactly the \\(\kappa\\) that we want. Instead, it comes with this extra annoying term \\(\beta'\delta_{A}\\). This term is the impact of the omitted \\(A\\) on \\(Wage\\), \\(\beta\\), times the impact of the omitted on the included \\(Educ\\). This is so important for economists that <NAME> made a mantra out of it, so that students can recite it in meditation:
#
# ```
# "Short equals long
# plus the effect of omitted
# times the regression of omitted on included"
# ```
#
# Here, the short regression is the one that omits variables, while the long is the one that includes them. This formula or mantra gives us further insight into the nature of bias. First, the bias term will be zero if the omitted variables have no impact on the dependent variable \\(Y\\). This makes total sense. I don't need to control for stuff that is irrelevant for wages when trying to understand the impact of education on it (like how tall the lilies of the field are). Second, the bias term will also be zero if the omitted variables have no impact on the treatment variable. This also makes intuitive sense. If everything that impacts education has been included in the model, there is no way for the estimated impact of education to be mixed up with the effect of something else that also impacts wages.
#
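# The omitted variable bias formula can be checked numerically. Below is an illustrative simulation (synthetic data, not the wage dataset): the short regression coefficient comes out as the true \\(\kappa\\) plus the effect of ability times the regression of ability on education.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# true model: wage = alpha + kappa*educ + beta*ability + noise
ability = rng.normal(size=n)
educ = 1.0 * ability + rng.normal(size=n)   # ability also raises education
kappa, beta = 0.5, 2.0
wage = 1.0 + kappa * educ + beta * ability + rng.normal(size=n)

# short (biased) regression of wage on educ alone
short = np.cov(wage, educ)[0, 1] / educ.var(ddof=1)

# delta: regression of the omitted (ability) on the included (educ)
delta = np.cov(ability, educ)[0, 1] / educ.var(ddof=1)

# short ≈ long + effect of omitted × regression of omitted on included
assert np.isclose(short, kappa + beta * delta, atol=0.05)
```

# Here the short coefficient lands near 1.5 rather than the true 0.5, because the omitted ability pushes it up by \\(\beta \delta \approx 2 \times 0.5\\).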
# 
#
# To put it more succinctly, we say that **there is no OVB if all the confounding variables are accounted for in the model**. We can also leverage our knowledge about causal graphs here. A confounding variable is one that **causes both the treatment and the outcome**. In the wage example, IQ is a confounder. People with high IQ tend to complete more years of education because it's easier for them, so we can say that IQ causes education. People with high IQ also tend to be naturally more productive and consequently have higher wages, so IQ also causes wage. Since confounders are variables that affect both the treatment and the outcome, we mark them with an arrow going to T and Y. Here, I've denoted them with \\(W\\). I've also marked positive causation with red and negative causation with blue.
# +
g = gr.Digraph()
g.edge("W", "T"), g.edge("W", "Y"), g.edge("T", "Y")
g.edge("IQ", "Educ", color="red"), g.edge("IQ", "Wage", color="red"), g.edge("Educ", "Wage", color="red")
g.edge("Crime", "Police", color="red"), g.edge("Crime", "Violence", color="red"),
g.edge("Police", "Violence", color="blue")
g
# -
# Causal graphs are excellent for depicting our understanding of the world and for seeing how confounding bias works. In our first example, we have a graph where education causes wage: more education leads to higher wages. However, IQ also causes wage and it also causes education: high IQ causes both more education and higher wage. If we don't account for IQ in our model, some of its effect on wage will flow through the correlation with education. That will make the impact of education look higher than it actually is. This is an example of positive bias.
#
# Just to give another example, but with negative bias, consider the causal graph about the effect of police on city violence. What we usually see in the world is that cities with a larger police force also have more violence. Does this mean that the police are causing the violence? Well, it could be; I don't think it's worth getting into that discussion here. But there is also a strong possibility that a confounding variable is causing us to see a biased version of the impact of police on violence. It could be that increasing the police force decreases violence, but a third variable, crime, causes both more violence and a larger police force. If we don't account for it, the impact of crime on violence will flow through the police force, making it look like police increase violence. This is an example of negative bias.
#
# Causal graphs can also show us how both regression and randomised controlled trials correct for confounding bias. An RCT does so by severing the connection between the confounder and the treatment variable: by making \\(T\\) random, nothing, by definition, can cause it.
# +
g = gr.Digraph()
g.edge("W", "Y"), g.edge("T", "Y")
g.edge("IQ", "Wage", color="red"), g.edge("Educ", "Wage", color="red")
g
# -
# Regression, on the other hand, does so by comparing the effect of \\(T\\) while holding the confounder \\(W\\) fixed at a given level. With regression, it is not that W ceases to cause T and Y; it is just held fixed, so it can't drive changes in T and Y.
# +
g = gr.Digraph()
g.node("W=w"), g.edge("T", "Y")
g.node("IQ=x"), g.edge("Educ", "Wage", color="red")
g
# -
# Now, back to our question: is the parameter we've estimated for the impact of `educ` on wage causal? I'm sorry to break it to you, but that will depend on our ability to argue for or against the claim that all confounders have been included in the model. Personally, I think they haven't. For instance, we haven't included family wealth. Even if we included family education, that can only be seen as a proxy for wealth. We've also not accounted for factors like personal ambition. It could be that ambition is what causes both more years of education and higher wage, so it is a confounder. This goes to show that **causal inference with non-random or observational data should always be taken with a grain of salt**. We can never be sure that all confounders were accounted for.
#
# ## Key Ideas
#
# We've covered a lot of ground with regression. We saw how regression can be used to perform A/B testing and how it conveniently gives us confidence intervals. Then, we moved to study how regression solves a prediction problem and is the best linear approximation to the CEF (conditional expectation function). We've also discussed how, in the bivariate case, the regression treatment coefficient is the covariance between the treatment and the outcome divided by the variance of the treatment. Expanding to the multivariate case, we figured out how regression gives us a partialling out interpretation of the treatment coefficient: it can be interpreted as how the outcome would change with the treatment while keeping all other included variables constant. This is what economists love to refer to as *ceteris paribus*.
#
# Finally, we took a turn to understanding bias. We saw how `Short equals long plus the effect of omitted times the regression of omitted on included`. This sheds some light on how bias comes to be. We discovered that the source of omitted variable bias is confounding: a variable that affects both the treatment and the outcome. Lastly, we used causal graphs to see how RCTs and regression fix confounding.
#
# ## References
#
# I like to think of this entire book as a tribute to <NAME>, <NAME> and <NAME> for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
# * [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
# * [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
#
# I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
#
# * [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
# * [Mastering 'Metrics](https://www.masteringmetrics.com/)
#
# My final reference is <NAME> and <NAME>' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
#
# * [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
#
# 
#
#
# ## Contribute
#
# Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.
# If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).
| causal-inference-for-the-brave-and-true/05-The-Unreasonable-Effectiveness-of-Linear-Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
# Write a function that takes as input a list of numbers, and returns
# the list of values given by the softmax function.
def softmax(L):
return [np.exp(L[i])/sum(np.exp(L)) for i in range(len(L))]
# -
scores = np.array([2, 1 , 0])
softmax(scores)
for i in range(len(scores)):
print(np.exp(scores[i])/sum(np.exp(scores)))
L= [5,6,7]
softmax(L)
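# One caveat (an aside, not from the original exercise): `np.exp` overflows for large scores, so the list-comprehension version above returns `nan` for inputs like `[1000, 1001]`. A common fix is to subtract the maximum score before exponentiating, which leaves the ratios unchanged:

```python
import numpy as np

def softmax_stable(L):
    # subtracting the max does not change the result, but keeps exp() in range
    shifted = np.asarray(L, dtype=float) - np.max(L)
    exps = np.exp(shifted)
    return exps / exps.sum()

# matches the naive version on small inputs, and works where it would overflow
print(softmax_stable([2, 1, 0]))
print(softmax_stable([1000, 1001]))  # the naive version would return nan here
```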
# +
# solution
# def softmax(L):
# expL = np.exp(L)
# sumExpL = sum(expL)
# result = []
# for i in expL:
# result.append(i*1.0/sumExpL)
# return result
# -
def eq_quiz(w1, w2, b):
return w1*0.4 + w2*0.6 + b
eq_quiz(2,6,-2)
eq_quiz(3,5,-2.2)
eq_quiz(5,4,-3)
softmax([2.3999999999999995, 2.0, 1.4000000000000004])
# w1*Model I + w2*Model II - bias = 0
# Model 1 = 0.4
# Model 2 = 0.6
# Final probability of 0.88
softmax([eq_quiz(1,1,0), eq_quiz(-1,-1,0)])
softmax([eq_quiz(10,10,0), eq_quiz(-10,-10,0)])
| neural_networks/3_softmax.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.2 64-bit
# name: python_defaultSpec_1598401222013
# ---
import numpy as np
import keras
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental.preprocessing import Normalization
import pandas as pd
import matplotlib.pyplot as plt
from pandas import DataFrame, read_csv
# + tags=[]
df = pd.read_csv("life.csv")
# clean up data a bit
df.fillna(0, inplace=True)
# replace dates with sequential number
# will updates this when annual recurrence happens (could be interesting)
df["Day"] = range(1, 1+len(df))
# pre-process: change mood to 1-5 rating
mood = df["Mood Evening"]
mood = np.where(mood < 3, 1, mood)
mood = np.where((mood == 3) | (mood == 4), 2, mood)
mood = np.where((mood == 5) | (mood == 6), 3, mood)
mood = np.where((mood == 7) | (mood == 8), 4, mood)
mood = np.where((mood == 9) | (mood == 10), 5, mood)
df["Mood Evening"] = mood
#move label to end
df = df[[c for c in df if c not in ["Mood Morning", "Mood Evening"]] + ["Mood Morning", "Mood Evening"]]
# split in training and dev set
validation_raw = df.sample(frac=0.2, random_state=1)
train_raw = df.drop(validation_raw.index)
print(f"Found {len(train_raw)} samples for training and {len(validation_raw)} for validation.")
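# The chained `np.where` calls above collapse the raw 1-10 mood score into five bins. An equivalent and arguably clearer formulation (a sketch assuming the raw scores are integers in 1-10) uses `pd.cut`:

```python
import pandas as pd

raw = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# pair consecutive scores: (0,2]->1, (2,4]->2, (4,6]->3, (6,8]->4, (8,10]->5
binned = pd.cut(raw, bins=[0, 2, 4, 6, 8, 10], labels=[1, 2, 3, 4, 5]).astype(int)
print(binned.tolist())  # [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```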
# +
train_data = np.array(train_raw)[:, 1:22].astype(np.float32)
# shift mood ratings 1-5 to class indices 0-4 (the prediction below adds 1 back)
train_labels = np.array(train_raw)[:, -1].astype(np.float32) - 1
# evaluate on the held-out validation split, not the training data again
test_data = np.array(validation_raw)[:, 1:22].astype(np.float32)
test_labels = np.array(validation_raw)[:, -1].astype(np.float32) - 1
# + tags=[]
print(train_data.shape)
print(train_labels.shape)
# + tags=[]
model = tf.keras.models.Sequential()
model.add( tf.keras.layers.Dense(64, input_shape=[21], activation="relu"))
model.add( tf.keras.layers.BatchNormalization())
model.add( tf.keras.layers.Dense(32, activation="relu"))
model.add( tf.keras.layers.Dropout(0.1))
model.add( tf.keras.layers.Dense(5, activation="softmax"))
opt = keras.optimizers.Adam(learning_rate=0.001)
model.compile(
optimizer=opt,
loss="sparse_categorical_crossentropy",
metrics=["accuracy"]
)
# plot a horizontal graph of the model and write it to disk
keras.utils.plot_model(model, show_shapes=True,rankdir="LR", to_file="model.png")
# + tags=[]
model.summary()
# + tags=["outputPrepend"]
# train the model
history = model.fit(train_data, train_labels, epochs=400, batch_size=40, validation_data=(test_data, test_labels), verbose=1)
# + tags=[]
import matplotlib
from matplotlib import pyplot as plt
accuracy = history.history["accuracy"]
loss = history.history["loss"]
epochs = range(len(accuracy))
plt.plot(epochs,accuracy, "b",label="Training accuracy")
plt.title("Training accuracy")
plt.figure()
plt.plot(epochs, loss, "b",label="Training Loss")
plt.title("Training loss")
plt.legend()
plt.show()
# +
#model = tf.keras.models.load_model("model.h5")
# + tags=[]
#model.save("model.h5")
#model = tf.keras.models.load_model("model.h5")
### Example on how to do a prediction for a new day
sample = [(
0, #Max Pullup
0,
0,
0,
0, #Running
0, #Walk
0, #Bike
0, #Swim
0, #Tennis
0, #Violin
0, #Reading
0, #Study
0, #Dancing
0,
0, #Social
66, #Weight
0,
0,
8, #Sleep
0, #Spent
0 #Other
)]
pred = model.predict(sample)
print("Mood Evening predicted:" + str(np.argmax(pred)+1))
# -
| life/life.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Setup
# +
from functions import *
# important parameters
classes = ["Murakami", "Abe", "Kafka", "Soseki", "Yoshimoto"]
focus_class = "Kafka"
sentences_per_section = 20
words_per_section = 20*15
# -
# ### Data Loading and Prep
# +
all_texts_paths = load_texts()
texts, labels = [ ], [ ]
for text in all_texts_paths:
path_to_text, num_chapters = text[0], text[1]
book_name = path_to_text.split("/")[-1]
author_name = book_name.split("_")[0]
new_text_sections = export_text_sections(path_to_text, sentences_per_section)
texts += new_text_sections
for _ in range(len(new_text_sections)):
if author_name==focus_class: labels.append(1)
else: labels.append(0)
# plot_distribution(texts, labels)
# -
GLOVE_URL = "https://s3-ap-southeast-1.amazonaws.com/deeplearning-mat/glove.6B.100d.txt.zip"
GLOVE_DIR = keras.utils.get_file("glove.6B.100d.txt.zip", GLOVE_URL, cache_subdir="datasets", extract=True)
print("GloVe data present at", GLOVE_DIR)
GLOVE_DIR = GLOVE_DIR.replace(".zip", "")
# +
tokenizer = Tokenizer(filters="", lower=True, num_words=int(1e7))  # num_words must be an int
tokenizer.fit_on_texts(texts)
word_index = tokenizer.word_index
print("[INFO] Vocabulary size:", len(word_index))
# +
sequences = tokenizer.texts_to_sequences(texts)
data = pad_sequences(sequences, padding="pre", maxlen=(words_per_section))
#labels = to_categorical(np.asarray(labels))
#labels = np.asarray(labels).reshape((len(labels), 1))
labels = np.asarray(labels)
print("[INFO] Shape of data tensor:", data.shape)
print("[INFO] Shape of label tensor:", labels.shape)
x_train, x_val, y_train, y_val = train_test_split(data, labels, test_size=0.3, stratify=labels)
print('[INFO] Number of entries in each category:')
print("[INFO] Training:\t", len(y_train))
print("[INFO] Validation:\t", len(y_val))
# +
EMBEDDING_DIM = 100
embeddings_index = {}
f = open(GLOVE_DIR)
print("[i] (long) Loading GloVe from:",GLOVE_DIR,"...",end="")
for line in f:
values = line.split()
word = values[0]
embeddings_index[word] = np.asarray(values[1:], dtype='float32')
f.close()
print("Done.\n[+] Proceeding with Embedding Matrix...", end="")
embedding_matrix = np.random.random((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
print(" Completed!")
# -
# ### Model
# +
sequence_input = Input(shape=(words_per_section,), dtype='int32') # input to the model
embedding_layer = Embedding(len(word_index) + 1,
EMBEDDING_DIM,
weights=[embedding_matrix],
input_length=words_per_section,
trainable=True)
l_embed = embedding_layer(sequence_input)
l_act = LSTM(1, return_sequences=True, activation='relu',
kernel_regularizer=regularizers.l2(0.001),
activity_regularizer=regularizers.l1(0.001))(l_embed)
l_pool = GlobalAveragePooling1D(data_format='channels_first')(l_act)
preds = Dense(1, activation='sigmoid')(l_pool)
# -
model = Model(sequence_input, preds)
model.compile(loss="binary_crossentropy",
optimizer="rmsprop",
metrics=["acc"])
model.summary()
# ### Train Model
# +
print("Training Progress:\n")
opt = keras.optimizers.RMSprop(lr=0.002, decay=0.01)
model.compile(loss="binary_crossentropy",
optimizer=opt,
metrics=["acc"])
model_log = model.fit(x_train, y_train, validation_data=(x_val, y_val),
epochs=40, batch_size=128)
# -
# ### Interpretation
layer_name = str(model.layers[3].name)
print("Truncated model ends at:", layer_name)
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer(layer_name).output)
reverse_word_map = dict(map(reversed, tokenizer.word_index.items()))
# "positive" examples
start_index = 10
for index in range(start_index, start_index + 20):
print("Positive:", labels[index])
output = test_and_export_html(intermediate_layer_model,
model, reverse_word_map,
data[index], labels[index])
display(HTML(output))
# "negative" examples
start_index = 2500
for index in range(start_index, start_index + 20):
print("Positive:", labels[index])
output = test_and_export_html(intermediate_layer_model,
model, reverse_word_map,
data[index], labels[index])
display(HTML(output))
| Train Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Critical Point Lab
# > March 2021<br>
# > MSU Denver<br>
# > Junior Lab <br>
# > <NAME><br>
# > Dr. <NAME>
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
# %matplotlib inline
dfs = [
pd.read_csv('T_21C-P_844.csv'),
pd.read_csv('T_29C-P_8437.csv'),
pd.read_csv('T_40C-P_8398.csv'),
pd.read_csv('T_49C-P_8397.csv'),
]
# ## Overview
labels = [
'T = 21$^\circ$C, patm = 0.8440 bar',
'T = 29$^\circ$C, patm = 0.8437 bar',
'T = 40$^\circ$C, patm = 0.8398 bar',
'T = 49$^\circ$C, patm = 0.8397 bar',
]
for i in range(4):
plt.plot(dfs[i].loc[:, 'Vol (mm)'], dfs[i].loc[:, 'P (bar)'], label=labels[i])
plt.title('Pressure (bar) vs. Reduced Volume (mm)')
plt.xlabel('Reduced Volume (mm)')
plt.ylabel('Pressure (bar)')
plt.grid()
plt.legend()
# ## T=21°C, patm=0.8440 bar
df = dfs[0]
tC = 21.0 # Celsius, as seen in the hideous file names above ^^
tK = 273.15+tC
Pa = .844 # Atmospheric pressure (bar)
s = df.loc[:, 'Vol (mm)']
p = df.loc[:, 'P (bar)']
# +
# Plotting data
plt.plot(s, p, 'b.', label=f'Data ({labels[0]})')
plt.title('Pressure (bar) vs. Reduced Volume (mm)')
plt.xlabel('Reduced Volume (mm)')
plt.ylabel('Pressure (bar)')
# Nonlinear curve fitting
def func(s, Pc, Sc, Tc, a):
return -Pa + (8*tK*Pc*Sc)/(Tc*(3*(s-a) - Sc)) - 3*Pc*(Sc/(s-a))**2
popt, pcov = curve_fit(func, s, p, p0=(38, 5.5, 120, 0))
print(f'Pc, Sc, Tc, a: {popt}\n1-sigma errors: {np.sqrt(np.diagonal(pcov))}')
plt.plot(s, func(s, *popt), 'r--', label=f'Fit:\nPc = {popt[0]} bar\n'
f'Sc={popt[1]} mm\nTc={popt[2]} K\na={popt[3]} mm')
plt.grid()
plt.legend()
# -
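# The same van der Waals-style fit is repeated for each isotherm below, changing only `tC`, `Pa`, and the initial guess. A reusable helper can be sketched (a hypothetical refactor, parameter names follow the cells in this notebook), checked here on synthetic data generated from known parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def vdw_model(s, Pc, Sc, Tc, a, tK, Pa):
    # reduced van der Waals equation of state used in the fitting cells
    return -Pa + (8*tK*Pc*Sc)/(Tc*(3*(s - a) - Sc)) - 3*Pc*(Sc/(s - a))**2

def fit_isotherm(s, p, tC, Pa, p0=(38, 5.5, 120, 0)):
    """Fit one isotherm; returns optimal params and their 1-sigma errors."""
    tK = 273.15 + tC
    f = lambda s, Pc, Sc, Tc, a: vdw_model(s, Pc, Sc, Tc, a, tK, Pa)
    popt, pcov = curve_fit(f, s, p, p0=p0)
    return popt, np.sqrt(np.diag(pcov))

# round-trip check: noiseless data from known parameters should be recovered
s = np.linspace(10, 30, 50)
true = (38.0, 5.5, 120.0, 0.0)
p = vdw_model(s, *true, tK=273.15 + 21.0, Pa=0.844)
popt, perr = fit_isotherm(s, p, tC=21.0, Pa=0.844, p0=(35, 5, 110, 0.1))
```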
# ## T=29°C, patm=0.8437 bar
# +
df = dfs[1]
tC = 29.0 # Celsius
tK = 273.15+tC
Pa = .8437 # Atmospheric pressure (bar)
s = df.loc[:, 'Vol (mm)']
p = df.loc[:, 'P (bar)']
# Plotting data
plt.plot(s, p, 'b.', label=f'Data ({labels[1]})')
plt.title('Pressure (bar) vs. Reduced Volume (mm)')
plt.xlabel('Reduced Volume (mm)')
plt.ylabel('Pressure (bar)')
# Nonlinear curve fitting
def func(s, Pc, Sc, Tc, a):
return -Pa + (8*tK*Pc*Sc)/(Tc*(3*(s-a) - Sc)) - 3*Pc*(Sc/(s-a))**2
popt, pcov = curve_fit(func, s, p, p0=(38, 5.5, 120, 0))
print(f'Pc, Sc, Tc, a: {popt}\n1-sigma errors: {np.sqrt(np.diagonal(pcov))}')
plt.plot(s, func(s, *popt), 'r--', label=f'Fit:\nPc = {popt[0]} bar\n'
f'Sc={popt[1]} mm\nTc={popt[2]} K\na={popt[3]} mm')
plt.grid()
plt.legend()
# -
# ## T=40°C, patm=0.8398 bar
# +
df = dfs[2]
tC = 40.0 # Celsius
tK = 273.15+tC
Pa = .8398 # Atmospheric pressure (bar)
s = df.loc[:, 'Vol (mm)']
p = df.loc[:, 'P (bar)']
# Plotting data
plt.plot(s, p, 'b.', label=f'Data ({labels[2]})')
plt.title('Pressure (bar) vs. Reduced Volume (mm)')
plt.xlabel('Reduced Volume (mm)')
plt.ylabel('Pressure (bar)')
# Nonlinear curve fitting
def func(s, Pc, Sc, Tc, a):
return -Pa + (8*tK*Pc*Sc)/(Tc*(3*(s-a) - Sc)) - 3*Pc*(Sc/(s-a))**2
popt, pcov = curve_fit(func, s, p, p0=(38, 5.5, 120, 0))
print(f'Pc, Sc, Tc, a: {popt}\n1-sigma errors: {np.sqrt(np.diagonal(pcov))}')
plt.plot(s, func(s, *popt), 'r--', label=f'Fit:\nPc = {popt[0]} bar\n'
f'Sc={popt[1]} mm\nTc={popt[2]} K\na={popt[3]} mm')
plt.grid()
plt.legend()
# -
# ## T=49°C, patm=0.8397 bar
# +
df = dfs[3]
tC = 49.0 # Celsius
tK = 273.15+tC
Pa = .8397 # Atmospheric pressure (bar)
s = df.loc[:, 'Vol (mm)']
p = df.loc[:, 'P (bar)']
# Plotting data
plt.plot(s, p, 'b.', label=f'Data ({labels[3]})')
plt.title('Pressure (bar) vs. Reduced Volume (mm)')
plt.xlabel('Reduced Volume (mm)')
plt.ylabel('Pressure (bar)')
# Nonlinear curve fitting
def func(s, Pc, Sc, Tc, a):
return -Pa + (8*tK*Pc*Sc)/(Tc*(3*(s-a) - Sc)) - 3*Pc*(Sc/(s-a))**2
popt, pcov = curve_fit(func, s, p, p0=(38, 6, 320, 0))
print(f'Pc, Sc, Tc, a: {popt}\n1-sigma errors: {np.sqrt(np.diagonal(pcov))}')
plt.plot(s, func(s, *popt), 'r--', label=f'Fit:\nPc = {popt[0]} bar\n'
f'Sc={popt[1]} mm\nTc={popt[2]} K\na={popt[3]} mm')
plt.grid()
plt.legend()
# -
| Labs/CriticalPoint/CriticalPoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Update the image WCS and coordinates from file using Gaia
# This notebook shows how to use <a href="https://docs.astropy.org/en/stable/wcs/index.html">astropy.wcs</a> and <a href="https://astroquery.readthedocs.io/en/latest/gaia/gaia.html">astroquery.gaia</a> to update an image WCS using Gaia DR3.<br>
# The updated WCS is used to convert from pixel X,Y to celestial RA,DEC coordinates.
# +
# Import modules
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.size'] = 12
for axticks in ['xtick','ytick']:
plt.rcParams['{:}.direction'.format(axticks)] = 'in'
plt.rcParams['{:}.minor.visible'.format(axticks)] = True
from astropy.io import fits
from astropy import wcs as wcs_apy
from astropy.wcs import WCS
from astropy.table import Table
import astropy.units as u
from astropy.coordinates.sky_coordinate import SkyCoord
from astropy.units import Quantity
# Suppress warnings. Comment this out if you wish to see the warning messages
import warnings
warnings.filterwarnings('ignore')
from astroquery.gaia import Gaia
# -
# ## 1. Read in the input image and coordinates
# +
in_image = '../reduction/WFC3_U12591_F606W_adrz_drc_sci.fits.gz'
in_coords = '../phot/WFC3_U12591_F814W_adrz_drc_sci_ePSFs.dat'
local_cat = Table.read(in_coords, format ='ascii')
with fits.open(in_image) as hdu:
if len(hdu) >= 3:
ext_nr = 1
else:
ext_nr = 0
wcs = WCS(hdu[ext_nr].header)
data = hdu[ext_nr].data
data_hdr = fits.getheader(in_image)
# -
# ## 2. Convert from pixel to RA,DEC using the image WCS
# +
local_cat['RA'],local_cat['DEC'] = wcs.all_pix2world(local_cat['xcentroid'],local_cat['ycentroid'],0)
# Save the table
local_cat.write('local_cat.ascii', format='ascii', overwrite=True)
# +
# Visualize
RA_IMG , DEC_IMG = local_cat['RA'],local_cat['DEC']
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(projection=wcs)
dmin = np.nanpercentile(data,[16.1])*.3
dmax = np.nanpercentile(data,[84.9])*3
ax.imshow(data,origin='lower',cmap='Greys', vmin=dmin, vmax=dmax)
ax.scatter(RA_IMG ,DEC_IMG, marker='o',s=120, transform=ax.get_transform('fk5'),
facecolor='none', edgecolor='red')
# -
# ## 3. Query Gaia around the RA,DEC coordinates in the local catalog
# +
# Read in local table with RA,DEC
local_cat = Table.read('local_cat.ascii', format='ascii')
# Save the table in VOT format if not already done
try:
local_cat.write('local_cat.vot', format='votable') #, overwrite=True)
except:
pass
upload_resource = 'local_cat.vot'
# Define the RA,DEC coordinate search radius to match between Gaia and local_cat
# Some proper motion quality filtering is applied too.
search_r = 1*u.arcsec
search_r = search_r.to(u.degree).value
# Define the Gaia query. It uploads the local catalog
adql = "SELECT * \
FROM tap_upload.myVOtable AS myVOtable \
JOIN gaiaedr3.gaia_source AS gaia \
ON 1=CONTAINS(\
POINT('ICRS',myVOtable.ra,myVOtable.dec),\
CIRCLE('ICRS',gaia.ra,gaia.dec,{:})\
) \
WHERE abs(pmra_error/pmra)<0.10 \
AND abs(pmdec_error/pmdec)<0.10 \
AND pmra IS NOT NULL AND abs(pmra)>0 \
AND pmdec IS NOT NULL AND abs(pmdec)>0;".format(search_r)
job = Gaia.launch_job_async(adql, dump_to_file=False,
upload_resource=upload_resource,
upload_table_name='myVOtable', verbose=True
)
result_tbl = job.get_results()
result_tbl
# +
# Visualize
RA_IMG , DEC_IMG = result_tbl['ra'],result_tbl['dec']
RA_Gaia , DEC_Gaia = result_tbl['ra_2'],result_tbl['dec_2']
dRA = np.median((result_tbl['ra']-result_tbl['ra_2'])*3600.)
dDec = np.median((result_tbl['dec']-result_tbl['dec_2'])*3600.)
sdtRA = np.std((result_tbl['ra']-result_tbl['ra_2'])*3600.)
stdDec = np.std((result_tbl['dec']-result_tbl['dec_2'])*3600.)
fig = plt.figure(figsize=(16,8))
ax1 = fig.add_subplot(121, adjustable='box', projection=wcs)
ax2 = fig.add_subplot(122, adjustable='box')
dmin = np.nanpercentile(data,[16.1])*.3
dmax = np.nanpercentile(data,[84.9])*3
ax1.imshow(data,origin='lower',cmap='Greys', vmin=dmin, vmax=dmax)
ax1.scatter(RA_IMG ,DEC_IMG, marker='o',s=120, transform=ax1.get_transform('fk5'),
facecolor='none', edgecolor='red')
ax1.scatter(RA_Gaia , DEC_Gaia, marker='o',s=30, transform=ax1.get_transform('fk5'),
facecolor='none', edgecolor='orange')
ax1.set_xlabel('RA')
ax1.set_ylabel('DEC')
x_txt,y_txt = result_tbl['ra']-.75/3600,result_tbl['dec']-1.01/3600
for x,y,i,j in zip(x_txt,y_txt,result_tbl['id'],range(len(result_tbl))):
ax1.text(x-.1/3600.,y,i,transform=ax1.get_transform('fk5'))
ax2.plot((result_tbl['ra']-result_tbl['ra_2'])*3600.,
(result_tbl['dec']-result_tbl['dec_2'])*3600., 'o', color='orange',
markeredgecolor='black',
label='Median offset:\n$\Delta$RA = {:.3f}$\pm${:.3f}\"\n$\Delta$DEC= {:.3f}$\pm${:.3f}\"'.format(dRA,sdtRA,dDec,stdDec))
ax2.set_xlabel('$\Delta$RA [arcsec]',fontsize=12)
ax2.set_ylabel('$\Delta$DEC [arcsec]',fontsize=12)
ax2.legend(fontsize=12)
ax2.axhline(dDec,ls=':',color='grey')
ax2.axvline(dRA,ls=':',color='grey')
# -
# ## 4. Create the new WCS from the matched data
# +
# Create a SkyCoord object needed as input for fitting with wcs_apy.utils.fit_wcs_from_points
# If result_tbl is not a QTable with ra,dec units in deg
#ref_coos = SkyCoord(ra=result_tbl['ra_2']*u.degree, dec=result_tbl['dec_2']*u.degree)
ref_coos = SkyCoord(ra=result_tbl['ra_2'], dec=result_tbl['dec_2'])
# Get the pixel values corresponding to those RA,DEC
x_pix, y_pix = result_tbl['xcentroid'],result_tbl['ycentroid']
# +
# Update the WCS
updated_wcs = wcs_apy.utils.fit_wcs_from_points((x_pix, y_pix), ref_coos,
proj_point='center',
projection=wcs, sip_degree=None)
# -
# Have a look at the old and new
wcs,updated_wcs
# +
# OPTIONAL
# Copy, the old header
new_data_hdr = data_hdr.copy()
# Convert the new WCS object to a header object
tmp_hdr = updated_wcs.to_header()
# Update the new copy of the old header
new_data_hdr.update(tmp_hdr)
# Write to a new fits file
fits.writeto("New_image_w_updated_WCS.fits", data=data.data, header=new_data_hdr) #, overwrite=True)
# -
# ## 5. Use the updated WCS to update the RA DEC
local_cat['RA_Gaia'],local_cat['DEC_Gaia'] = updated_wcs.all_pix2world(local_cat['xcentroid'],local_cat['ycentroid'],0)
# +
# Convert other coordinates from the same image X,Y [pix] to RA,DEC
sup_local = '../phot/selCCs_wSSP.dat'
tab = Table.read(sup_local, format ='ascii')
tab['RA_Gaia'],tab['DEC_Gaia'] = updated_wcs.all_pix2world(tab['x606'],tab['y606'],0)
tab.meta['comments'] = 'RADEC coordinates updated with https://tinyurl.com/XY-to-RADEC-w-WCS'
tab.write('../phot/selCCs_wSSP_RADEC.ecsv', format = 'ascii.ecsv') #, comment = '#') #, overwrite = True
# -
| SomeAstroNBs/XY_to_RADEC_w_WCS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
import gym
import numpy as np
import tensorflow as tf
env = gym.make('Taxi-v2')
n_states = env.observation_space.n
n_actions = env.action_space.n
gamma = 0.99
e = 0.1
learning_rate = 1e-1
episodes = 5000
print(f'States: {n_states}\tActions: {n_actions}')
# +
tf.reset_default_graph()
X = tf.placeholder(shape=[1, n_states], dtype=tf.float32)
y = tf.placeholder(shape=[1, n_actions],dtype=tf.float32)
W = tf.Variable(tf.truncated_normal(shape=[n_states, n_actions], mean=0, stddev=0.5))
b = tf.Variable(tf.zeros(shape=[n_actions]))
# -
Q_vals = tf.matmul(X, W) + b
prediction = tf.argmax(Q_vals, axis=1)
loss = tf.reduce_sum(tf.squared_difference(y, Q_vals))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
reward_list = []
episode_list = []
for episode in range(episodes):
# reset agent's state
s = env.reset()
done = False
rewards, i = 0, 0
# The Q-Network
while i < 99:
i += 1
# Let the neural net predict an action for the current state
a, Q_old = sess.run([prediction, Q_vals], feed_dict={X: np.identity(n_states)[s:s+1]})
# take random actions every once in a while
if np.random.rand(1) < e:
a[0] = env.action_space.sample()
# take the action that transitions you to a new state (unlikely)
s1, r, done, _ = env.step(a[0])
# Get the Q values for the new state you are
Q_new = sess.run(Q_vals, feed_dict={X: np.identity(n_states)[s1:s1+1]})
# Update the previous state via Bellman's equation
target = np.copy(Q_old)
# Q[s, a] = r + ymax(Q[s'])
target[0, a[0]] = r + gamma*np.max(Q_new) # new value
# Train the network using the predicted value and updated value
sess.run(train, feed_dict={X: np.identity(n_states)[s:s+1], y: target})
rewards += r
if done:
e = 1 / ((episode/50) + 10) # reduce chance of random action as we train.
break
reward_list.append(rewards)
episode_list.append(i)
sys.stdout.write(f'\rEpisode: {episode+1:,}\tRewards: {rewards}\t%success: {sum(reward_list)/episodes:.2%}')
env.render()
target
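# The Bellman update used for the network target above (`Q[s, a] = r + γ·max(Q[s'])`) can be sketched in plain tabular form, without the function approximator:

```python
import numpy as np

def bellman_update(Q, s, a, r, s_next, gamma=0.99):
    # tabular version of the target built for the network above:
    # Q[s, a] <- r + gamma * max_a' Q[s', a']
    Q[s, a] = r + gamma * np.max(Q[s_next])
    return Q

Q = np.zeros((3, 2))   # toy table: 3 states, 2 actions
Q[1] = [0.5, 1.0]      # state 1 already has some value
Q = bellman_update(Q, s=0, a=0, r=2.0, s_next=1)
print(Q[0, 0])         # -> 2.0 + 0.99 * 1.0 = 2.99
```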
| Q-network.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# The following is based on the book by <NAME>, *Pensando Antes de Actuar: Fundamentos de Elección Racional*, 2009, and <NAME>, *Introduction to Probability and Statistics Using R*, 2014.
# The book by <NAME> has a GitHub repository: [gjkerns/IPSUR](https://github.com/gjkerns/IPSUR)
# **Notes:**
#
# * The *prob* package for *R* is used for the experiments described in these notes; although the same experiments can be built with base *R* functions, preference is given to showing that *R* offers packages for many applications.
#
# * On some lines `print` is not strictly necessary; it is only used to display the results of functions in a format similar to R's, since these notes were written with *JupyterLab* and *R*.
#
# * Be careful when using the *prob* package functions to build large probability spaces, such as rolling a die 9 times... (that experiment has about 10 million possible outcomes)
options(repr.plot.width=4, repr.plot.height=4) # this line is only needed for JupyterLab with R
library(prob)
# # Bayes' Theorem
# Suppose $E$ and $E^c$ denote the events of having or not having a viral disease, and that a blood test exists to detect this virus. Let $+$ denote the event of a **positive test** and $-$ a **negative** one. The probabilities $P(E)$ and $P(E^c)$ can be thought of as the **prior** or **initial** probabilities of having or not having the disease, respectively.
# The probability $P(E|+)$ quantifies the probability of having the disease given that the blood test came back positive for the virus; that is, it measures **how reliable the test result is**. Such probabilities are called **posterior** or **final** probabilities, since we already **have information** about the blood test. One way to compute it is the following:
# $$P(E|+) = \frac{P(E \cap +)}{P(+)} = \frac{P(+|E)P(E)}{P(+)} = \frac{P(+|E)P(E)}{P(+ \cap E) + P(+ \cap E^c)} = \frac{P(+|E)P(E)}{P(+|E)P(E) + P(+|E^c)P(E^c)}$$
# **Note:** observe that the **law of total probability** was used in the denominator to compute $P(+)$.
# Likewise, $P(E|+), P(E^c|+), P(E|-)$ and $P(E^c|-)$ are posterior or final probabilities. The posterior probabilities are a **revision of the original probabilities (the priors)**, updated with the new information from the blood test result.
# In contrast, the probability $P(+|E)$ is the probability that the test comes back positive given that we have the disease. This number measures **how reliable the blood test is**.
# The computation above for $P(E|+)$ is known as **Bayes' theorem** (also Bayes' formula or rule):
# Let $E, E^c$ be complementary events of a sample space and let $F$ be an event such that $P(F)\neq 0$; then the posterior or final probability of $E$ given $F$ is given by:
#
# $$P(E|F) = \frac{P(F|E)P(E)}{P(F)}$$
# or equivalently, $$P(E|F) = \frac{P(F|E)P(E)}{P(F|E)P(E) + P(F|E^c)P(E^c)}$$
# In general, if the sample space is the union of the mutually exclusive events $E_1, E_2,\dots , E_n$ and $F$ is an event with $P(F)\neq 0$, the formula above generalizes as follows:
#
# $$P(E_i|F) = \frac{P(F|E_i)P(E_i)}{P(F)} = \frac{P(F|E_i)P(E_i)}{\displaystyle \sum_{j=1}^nP(F|E_j)P(E_j)}$$
# **Comment:** the greatest contribution of **<NAME>** (1702-1761) lies in the idea of inverse inference, which goes from effects to causes rather than from causes to effects. If $E$ is the cause and $F$ the effect, $P(E|F)$ represents the probability of the cause given that the effect was observed. Bayes' theorem answers questions such as: given that the pavement is wet, what is the probability that it rained? Given that a witness is reliable, how reliable is their testimony? Given that unemployment rose, how likely is an economic recession?
# ## Examples
# 1) The test for detecting the presence of certain steroids in athletes' bodies has false positives, when the test is positive without steroids having been used, and false negatives, when the test is negative despite steroid use. Let $E$ and $E^c$ denote the events of having used steroids or not, and let $+$ and $-$ denote a positive or negative blood test result, respectively. Suppose it is estimated that 4% of a certain group of athletes have used steroids, so that: $$P(E)= 0.04, P(E^c) = 0.96$$
# and that the false positive and false negative rates are $0.06$ and $0.05$, respectively.
# This tells us that in $6\%$ of cases an athlete who does not use steroids gets a positive test, and in $5\%$ an athlete who has used them gets a negative one. Suppose an athlete is tested and the result is positive; what is the probability that they used steroids?
# **Solution:**
#
# (tree diagram)
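# The tree-diagram computation can be checked numerically. A short sketch of Bayes' formula with the numbers above (in Python, for illustration; the sensitivity is $P(+|E) = 1 - 0.05 = 0.95$):

```python
def posterior(prior, p_pos_given_e, p_pos_given_not_e):
    # Bayes' theorem: P(E|+) = P(+|E)P(E) / [P(+|E)P(E) + P(+|E^c)P(E^c)]
    num = p_pos_given_e * prior
    den = num + p_pos_given_not_e * (1 - prior)
    return num / den

# P(E) = 0.04; false negative rate 0.05 -> P(+|E) = 0.95;
# false positive rate P(+|E^c) = 0.06
p = posterior(0.04, 0.95, 0.06)
print(round(p, 4))  # -> 0.3975
```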
# 2) Consider the scenario of the previous example. Suppose a second test is performed, (conditionally) independent of the first. Compute $P(E|+ \cap +)$.
# 3) An albino man is on trial for a crime. The main witness claims to have seen the events through a window of their house and identifies an albino individual as the criminal. Testing the witness's credibility, it is determined that, at the distance and under the lighting conditions from which they witnessed the crime, they correctly identify an albino or non-albino person 95% of the time. Approximately one in 17,000 people has albinism. What is the probability that the albino man committed the crime, given that the witness identified him?
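# In example 3 the prior is tiny ($1/17000$), so even a 95%-accurate witness yields a small posterior; a quick numerical check (Python sketch) shows the base-rate effect:

```python
def posterior(prior, p_id_given_albino, p_id_given_not):
    # P(albino | identified as albino) via Bayes' theorem
    num = p_id_given_albino * prior
    den = num + p_id_given_not * (1 - prior)
    return num / den

prior = 1 / 17000                  # probability of albinism
p = posterior(prior, 0.95, 0.05)   # 95% correct IDs -> 5% misidentification
print(p)  # roughly 0.0011: the identification alone is weak evidence
```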
# ## Exercises
#
# 1) The following is a well-known mechanism for choosing a person from a small group. Say a group of 4 people wants to select one of them to perform some task, e.g., fetching beers from the cooler. To this end, 4 sticks are taken and one of them is shortened; thus there are three long sticks and one short one. The people take turns drawing sticks, without knowing which one is short, and whoever draws the short stick is the one chosen to go for the beers. Is there any advantage to drawing first, second, third, or fourth?
#
# 2) Consider the same scenario as example $3$, but instead of the suspect being an albino individual:
#
# a) The suspect is a red-haired individual, and the probability of being red-haired is $2\%$. Find $P(redhead | Tp)$, where $Tp$ stands for: "the witness identifies a person as red-haired."
#
# b) The suspect is a blond-haired individual, and the probability of having blond hair is $15\%$. Find $P(blond | Tr)$, where $Tr$ stands for: "the witness identifies a person as blond."
| R/clases/2_probabilidad/6_teorema_de_Bayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# <a href="http://cocl.us/pytorch_link_top">
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " />
# </a>
#
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" />
#
# <h1>Softmax Classifier</h1>
#
# <h2>Table of Contents</h2>
# <p>In this lab, you will use a single layer Softmax to classify handwritten digits from the MNIST database.</p>
#
# <ul>
# <li><a href="#Makeup_Data">Make some Data</a></li>
# <li><a href="#Classifier">Softmax Classifier</a></li>
# <li><a href="#Model">Define Softmax, Criterion Function, Optimizer, and Train the Model</a></li>
# <li><a href="#Result">Analyze Results</a></li>
# </ul>
# <p>Estimated Time Needed: <strong>25 min</strong></p>
#
# <hr>
#
# <h2>Preparation</h2>
#
# We'll need the following libraries
#
# +
# Import the libraries we need for this lab
# Using the following line code to install the torchvision library
# # !conda install -y torchvision
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import matplotlib.pylab as plt
import numpy as np
# -
# Use the following function to plot out the parameters of the Softmax function:
#
# +
# The function to plot parameters
def PlotParameters(model):
W = model.state_dict()['linear.weight'].data
w_min = W.min().item()
w_max = W.max().item()
fig, axes = plt.subplots(2, 5)
fig.subplots_adjust(hspace=0.01, wspace=0.1)
for i, ax in enumerate(axes.flat):
if i < 10:
# Set the label for the sub-plot.
ax.set_xlabel("class: {0}".format(i))
# Plot the image.
ax.imshow(W[i, :].view(28, 28), vmin=w_min, vmax=w_max, cmap='seismic')
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# -
# Use the following function to visualize the data:
#
# +
# Plot the data
def show_data(data_sample):
plt.imshow(data_sample[0].numpy().reshape(28, 28), cmap='gray')
plt.title('y = ' + str(data_sample[1].item()))
# plt.title('y = ' + str(data_sample[1]).item())
# -
# <!--Empty Space for separating topics-->
#
# <h2 id="Makeup_Data">Make Some Data</h2>
#
# Load the training dataset by setting the parameters <code>train</code> to <code>True</code> and convert it to a tensor by placing a transform object in the argument <code>transform</code>.
#
# +
# Create and print the training dataset
train_dataset = dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
print("Print the training dataset:\n ", train_dataset)
# -
# Load the testing dataset by setting the parameters <code>train</code> to <code>False</code> and convert it to a tensor by placing a transform object in the argument <code>transform</code>.
#
# +
# Create and print the validating dataset
validation_dataset = dsets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
print("Print the validating dataset:\n ", validation_dataset)
# -
# You can see that the data type is long:
#
# +
# Print the type of the element
print("Type of data element: ", train_dataset[0][1].type())
# -
# Each element in the rectangular tensor corresponds to a number that represents a pixel intensity as demonstrated by the following image:
#
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.32_image_values.png" width="550" alt="MNIST elements" />
#
# In this image, the values are inverted, i.e., black represents white.
#
# Print out the label of the fourth element:
#
# +
# Print the label
print("The label: ", train_dataset[3][1])
# -
# The result shows the number in the image is 1
#
# Plot the fourth sample:
#
# +
# Plot the image
show_data(train_dataset[3])
# -
# You see that it is a 1. Now, plot the third sample:
#
# +
# Plot the image
show_data(train_dataset[2])
# -
# <!--Empty Space for separating topics-->
#
# <h2 id="#Classifier">Build a Softmax Classifer</h2>
#
# Build a Softmax classifier class:
#
# +
# Define softmax classifier class
class SoftMax(nn.Module):
# Constructor
def __init__(self, input_size, output_size):
super(SoftMax, self).__init__()
self.linear = nn.Linear(input_size, output_size)
# Prediction
def forward(self, x):
z = self.linear(x)
return z
# -
# The Softmax classifier requires vector inputs, but each image tensor has shape 28x28 and must be flattened first.
#
# +
# Print the shape of train dataset
train_dataset[0][0].shape
# -
# Flatten the tensor as shown in this image:
#
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.3.2image_to_vector.gif" width="550" alt="Flattern Image" />
#
# The size of the tensor is now 784.
#
# <img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.3.2Imagetovector2.png" width="550" alt="Flattern Image" />
#
# Set the input size and output size:
#
# +
# Set input size and output size
input_dim = 28 * 28
output_dim = 10
# -
# <!--Empty Space for separating topics-->
#
# <h2 id="Model">Define the Softmax Classifier, Criterion Function, Optimizer, and Train the Model</h2>
#
# +
# Create the model
model = SoftMax(input_dim, output_dim)
print("Print the model:\n ", model)
# -
# View the size of the model parameters:
#
# +
# Print the parameters
print('W: ',list(model.parameters())[0].size())
print('b: ',list(model.parameters())[1].size())
# -
# You can convert the model parameters for each class to a rectangular grid:
#
# <a> <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/chapter3/3.3.2paramaters_to_image.gif" width="550" align="center"></a>
#
# Plot the model parameters for each class as a square image:
#
# +
# Plot the model parameters for each class
PlotParameters(model)
# -
# Define the learning rate, optimizer, criterion, data loader:
#
# +
# Define the learning rate, optimizer, criterion and data loader
learning_rate = 0.1
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=100)
validation_loader = torch.utils.data.DataLoader(dataset=validation_dataset, batch_size=5000)
# -
# Train the model and determine validation accuracy **(should take a few minutes)**:
#
# +
# Train the model
n_epochs = 10
loss_list = []
accuracy_list = []
N_test = len(validation_dataset)
def train_model(n_epochs):
for epoch in range(n_epochs):
for x, y in train_loader:
optimizer.zero_grad()
z = model(x.view(-1, 28 * 28))
loss = criterion(z, y)
loss.backward()
optimizer.step()
correct = 0
        # perform a prediction on the validation data
for x_test, y_test in validation_loader:
z = model(x_test.view(-1, 28 * 28))
_, yhat = torch.max(z.data, 1)
correct += (yhat == y_test).sum().item()
accuracy = correct / N_test
loss_list.append(loss.data)
accuracy_list.append(accuracy)
train_model(n_epochs)
# -
# <!--Empty Space for separating topics-->
#
# <h2 id="Result">Analyze Results</h2>
#
# Plot the loss and accuracy on the validation data:
#
# +
# Plot the loss and accuracy
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.plot(loss_list,color=color)
ax1.set_xlabel('epoch',color=color)
ax1.set_ylabel('total loss',color=color)
ax1.tick_params(axis='y', color=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('accuracy', color=color)
ax2.plot( accuracy_list, color=color)
ax2.tick_params(axis='y', color=color)
fig.tight_layout()
# -
# View the results of the parameters for each class after the training. You can see that they look like the corresponding numbers.
#
# +
# Plot the parameters
PlotParameters(model)
# -
# Plot the first five misclassified samples and the predicted probability of that class.
#
# Plot the misclassified samples
Softmax_fn=nn.Softmax(dim=-1)
count = 0
for x, y in validation_dataset:
z = model(x.reshape(-1, 28 * 28))
_, yhat = torch.max(z, 1)
if yhat != y:
        show_data((x, y))
plt.show()
print("yhat:", yhat)
print("probability of class ", torch.max(Softmax_fn(z)).item())
count += 1
if count >= 5:
break
# <!--Empty Space for separating topics-->
#
# Plot the first five correctly classified samples and the predicted probability of that class; note that the probabilities are much larger.
#
# Plot the classified samples
Softmax_fn=nn.Softmax(dim=-1)
count = 0
for x, y in validation_dataset:
z = model(x.reshape(-1, 28 * 28))
_, yhat = torch.max(z, 1)
if yhat == y:
        show_data((x, y))
plt.show()
print("yhat:", yhat)
print("probability of class ", torch.max(Softmax_fn(z)).item())
count += 1
if count >= 5:
break
# <a href="http://cocl.us/pytorch_link_bottom">
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
# </a>
#
# <h2>About the Authors:</h2>
#
# <a href="https://www.linkedin.com/in/joseph-s-50398b136/"><NAME></a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
#
# Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/"><NAME></a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a"><NAME></a>
#
# <hr>
#
# Copyright © 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
#
| 6.2lab_predicting__MNIST_using_Softmax_v2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import numpy as np
import os
import matplotlib.pyplot as plt
import keras
import tensorflow as tf
from keras.backend import tensorflow_backend
config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
session = tf.Session(config=config)
tensorflow_backend.set_session(session)
# -
from keras.preprocessing.image import array_to_img, img_to_array, list_pictures, load_img
# +
img1 = load_img("../Test_Data/mol_img1/11.png")
img1_array = img_to_array(img1)
print(img1_array.shape)
# +
img_path = "../Test_Data/mol_img1/"
tmp_list = ["11.png", "174.png", "6326782.png"]
print(img_path + "11.png")
for item in tmp_list:
print(img_path + item)
# +
img_list = []
for item in tmp_list:
tmp_img = load_img(img_path + item)
tmp_array = img_to_array(tmp_img)
img_list.append(tmp_array)
print(type(img_list))
img_array = np.array(img_list)
print(img_array.shape)
# +
text_1 = open("../Test_Data/list1.txt", "r")
mol_list = text_1.readlines()
text_1.close()
print(type(mol_list))
print(len(mol_list))
# -
print(mol_list[0:3])
# +
for i in range(len(mol_list)):
mol_list[i] = mol_list[i].strip()
print(mol_list[0:3])
# +
img_list = []
for item in mol_list:
tmp_img = load_img(img_path + item)
tmp_array = img_to_array(tmp_img)
img_list.append(tmp_array)
img_array = np.array(img_list)
print(img_array.shape)
# +
# Display the image (Keras version; not very pretty)
plt.imshow(img1)
plt.show()
# +
# Using Pillow instead (no visible difference)
from PIL import Image
im = Image.open("../Test_Data/mol_img1/11.png")
plt.imshow(im)
plt.show()
# -
| code/image_test180319.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - (c) <NAME>, 2021/02/08
# - MIT License
# ## Anomaly Detection on Equipment Vibration Data
# - The vibration data is not published because it is confidential
# +
import pandas as pd
from sklearn.preprocessing import StandardScaler
import glob
import numpy as np
# Get the list of files in the folder
files_normal = glob.glob('./data/train/*')
# Read the CSV files and store all training data in x_train
x_train = pd.DataFrame([])
for file_name in files_normal:
csv = pd.read_csv(filepath_or_buffer=file_name)
x_train = pd.concat([x_train, csv])
# Standardize (z-score) with StandardScaler
sc = StandardScaler()
sc.fit(x_train)
x_train_std = sc.transform(x_train)
# Check the number of data points and features
print("training data size: (#data points, #features) = (%d, %d)"% x_train.shape)
# +
# Similarly, prepare the test data and the validation data (for hyperparameter tuning)
files_normal = glob.glob('./data/test/Seg_D0*')
files_anomaly1 = glob.glob('./data/test/Seg_D2*_01_*A*')
files_anomaly2 = glob.glob('./data/test/Seg_D2*_01_*B*')
# Label normal data as 1 and anomalous data as -1, stored in y_test_true (test) and y_valid_true (validation)
x_test_normal, x_test_anomaly1, x_test_anomaly2, x_test, x_valid = (pd.DataFrame([]), pd.DataFrame([]),
pd.DataFrame([]), pd.DataFrame([]), pd.DataFrame([]))
y_test_true, y_valid_true = [], []
for file_name in files_normal:
csv = pd.read_csv(filepath_or_buffer=file_name)
x_test_normal = pd.concat([x_test_normal, csv])
for i in range(0,len(csv)):
y_test_true.append(1)
y_valid_true.append(1)
for file_name in files_anomaly1:
csv = pd.read_csv(filepath_or_buffer=file_name)
x_test_anomaly1 = pd.concat([x_test_anomaly1, csv])
for i in range(0,len(csv)):
y_test_true.append(-1)
for file_name in files_anomaly2:
csv = pd.read_csv(filepath_or_buffer=file_name)
x_test_anomaly2 = pd.concat([x_test_anomaly2, csv])
for i in range(0,len(csv)):
y_valid_true.append(-1)
# Build the test data x_test and validation data x_valid by combining normal and anomalous data, then standardize
x_test = pd.concat([x_test_normal, x_test_anomaly1])
x_valid = pd.concat([x_test_normal, x_test_anomaly2])
x_test_std = sc.transform(x_test)
x_valid_std = sc.transform(x_valid)
# Check the counts of normal data, anomalous data (test), total test data, and total validation data
print("data size: (#normal data, #anomaly data, #test total, #valid total) = (%d, %d, %d, %d)"%
      (x_test_normal.shape[0], x_test_anomaly1.shape[0], x_test.shape[0], x_valid.shape[0]))
# +
# Local Outlier Factor
from sklearn.neighbors import LocalOutlierFactor
# Vary the number of neighbors k for LOF and record the F-score on the validation data
idx, f_score = [], []
for k in range(1,11):
lof = LocalOutlierFactor(n_neighbors=k, novelty=True, contamination=0.1)
lof.fit(x_train_std)
f_score.append(validation(y_valid_true, lof.predict(x_valid_std)))
idx.append(k)
# Take the k that maximizes the F-score and refit LOF
plot_fscore_graph('n_neighbors', idx, f_score)
best_k = np.argmax(f_score)+1
lof = LocalOutlierFactor(n_neighbors=best_k, novelty=True, contamination=0.1)
lof.fit(x_train_std)
# Use the best number of neighbors and show the results on the test data
print("--------------------")
print("Local Outlier Factor result (n_neighbors=%d)" % best_k)
print("--------------------")
print_precision_recall_fscore(y_test_true, lof.predict(x_test_std))
print("--------------------")
print_roc_curve(y_test_true, lof.decision_function(x_test_std))
# +
# One-class SVM
from sklearn.svm import OneClassSVM
import itertools
# Hyperparameter lists to search
gamma = [0.001, 0.005, 0.01]
coef0 = [0.1, 1.0, 5.0]
degree = [1, 2, 3]
# Vary the RBF kernel bandwidth gamma and record the F-score on the validation data
idx, f_score = [], []
for r in gamma:
ocsvm_rbf = OneClassSVM(kernel='rbf', gamma=r)
ocsvm_rbf.fit(x_train_std)
f_score.append(validation(y_valid_true, ocsvm_rbf.predict(x_valid_std)))
# Take the gamma that maximizes the F-score and refit the One-class SVM (RBF kernel)
plot_fscore_graph('gamma', gamma, f_score)
best_rbf_gamma = gamma[np.argmax(f_score)]
print("RBF kernel(best); gamma:%2.4f, f-score:%.4f"% (best_rbf_gamma, np.max(f_score)))
ocsvm_rbf = OneClassSVM(kernel='rbf', gamma=best_rbf_gamma)
ocsvm_rbf.fit(x_train_std)
# Vary the polynomial kernel parameters (degree d, coefficient gamma, constant c) and record the validation F-score
idx, f_score = [],[]
for d, r, c in itertools.product(degree, gamma, coef0):
ocsvm_poly = OneClassSVM(kernel='poly', degree=d, gamma=r, coef0=c)
ocsvm_poly.fit(x_train_std)
f_score.append(validation(y_valid_true, ocsvm_poly.predict(x_valid_std)))
idx.append([d,r,c])
# Take the parameter combination that maximizes the F-score and refit the One-class SVM (polynomial kernel)
best_idx = idx[np.argmax(f_score)]
print("Polynomial kernel(best); degree:%1d, gamma:%.4f, coef0:%3.2f, f-score:%.4f" %
(best_idx[0], best_idx[1], best_idx[2], np.max(f_score)))
ocsvm_poly = OneClassSVM(kernel='poly', degree=best_idx[0], gamma=best_idx[1], coef0=best_idx[2])
ocsvm_poly.fit(x_train_std)
# Vary the sigmoid kernel parameters (gamma and coef0) and record the validation F-score
idx, f_score = [],[]
for r, c in itertools.product(gamma, coef0):
ocsvm_smd = OneClassSVM(kernel='sigmoid', gamma=r, coef0=c)
ocsvm_smd.fit(x_train_std)
f_score.append(validation(y_valid_true, ocsvm_smd.predict(x_valid_std)))
idx.append([r,c])
# Take the parameters that maximize the F-score and refit the One-class SVM (sigmoid kernel)
best_idx = idx[np.argmax(f_score)]
print("Sigmoid kernel(best); gamma:%.4f, coef0:%2.2f, f-score:%.4f" %
(best_idx[0], best_idx[1], np.max(f_score)))
ocsvm_smd = OneClassSVM(kernel='sigmoid', gamma=best_idx[0], coef0=best_idx[1])
ocsvm_smd.fit(x_train_std)
# Show the test results for the RBF, polynomial, and sigmoid kernels with their best parameters
print("--------------------")
print("One-class SVM result")
print("--------------------")
print("rbf kernel")
print_precision_recall_fscore(y_test_true, ocsvm_rbf.predict(x_test_std))
print("--------------------")
print("polynomial kernel")
print_precision_recall_fscore(y_test_true, ocsvm_poly.predict(x_test_std))
print("--------------------")
print("sigmoid kernel")
print_precision_recall_fscore(y_test_true, ocsvm_smd.predict(x_test_std))
print("--------------------")
print_roc_curve_svm(y_test_true, ocsvm_rbf.decision_function(x_test_std),
ocsvm_poly.decision_function(x_test_std), ocsvm_smd.decision_function(x_test_std))
# +
# IsolationForest (iForest)
from sklearn.ensemble import IsolationForest
# Hyperparameter list to search
estimators_params = [50, 100, 150, 200]
# Vary the number of ensemble estimators n_estimators and record the validation F-score
idx, f_score = [], []
for k in estimators_params:
IF = IsolationForest(n_estimators=k, random_state=2, contamination=0.1)
IF.fit(x_train_std)
f_score.append(validation(y_valid_true, IF.predict(x_valid_std)))
idx.append(k)
# Take the ensemble size that maximizes the F-score and refit iForest
plot_fscore_graph('n_estimators', idx, f_score)
best_k = idx[np.argmax(f_score)]
IF = IsolationForest(n_estimators=best_k, random_state=2, contamination=0.1)
IF.fit(x_train_std)
# Use the best ensemble size and show the results on the test data
print("--------------------")
print("IsolationForest result (n_estimators=%d)" % best_k)
print("--------------------")
print_precision_recall_fscore(y_test_true, IF.predict(x_test_std))
print("--------------------")
print_roc_curve(y_test_true, IF.decision_function(x_test_std))
# -
# Plot the ROC curves of LOF, SVM, and iForest together
print_roc_curve_all(y_test_true,
                    lof.decision_function(x_test_std),
ocsvm_rbf.decision_function(x_test_std),
IF.decision_function(x_test_std))
# ## Function Definitions (must be run before the cells above)
# +
from sklearn.metrics import precision_recall_fscore_support,confusion_matrix
# Print the average precision, average recall, average F-score, and the confusion matrix
def print_precision_recall_fscore(y_true, y_pred):
prec_rec_f = precision_recall_fscore_support(y_true, y_pred)
print("Ave. Precision %.4f, Ave. Recall %.4f, Ave. F-score %.4f"%
(np.average(prec_rec_f[0]), np.average(prec_rec_f[1]), np.average(prec_rec_f[2])))
print("Confusion Matrix")
df = pd.DataFrame(confusion_matrix(y_true, y_pred))
df.columns = [u'anomaly', u'normal']
print(df)
# +
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt
def plot(plt):
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.show()
# Plot the ROC curve from the true labels (y_true) and the decision function outputs
def print_roc_curve(y_true, decision_function):
fpr, tpr, thresholds = roc_curve(y_true, decision_function, pos_label=1)
roc_auc = roc_auc_score(y_true, decision_function)
plt.plot(fpr, tpr, 'k--',label='ROC for test data (AUC = %0.2f)' % roc_auc, lw=2, linestyle="-")
plot(plt)
# Same ROC plot, for comparing SVM kernels
def print_roc_curve_svm(y_true, decision_function_rbf, decision_function_poly, decision_function_smd):
fpr, tpr, thresholds = roc_curve(y_true, decision_function_rbf, pos_label=1)
roc_auc = roc_auc_score(y_true, decision_function_rbf)
plt.plot(fpr, tpr, 'k--',label='rbf kernel (AUC = %0.2f)' % roc_auc, lw=2, linestyle="-", color="r")
fpr, tpr, thresholds = roc_curve(y_true, decision_function_poly, pos_label=1)
roc_auc = roc_auc_score(y_true, decision_function_poly)
plt.plot(fpr, tpr, 'k--',label='poly kernel (AUC = %0.2f)' % roc_auc, lw=2, linestyle="-", color="g")
fpr, tpr, thresholds = roc_curve(y_true, decision_function_smd, pos_label=1)
roc_auc = roc_auc_score(y_true, decision_function_smd)
plt.plot(fpr, tpr, 'k--',label='sigmoid kernel (AUC = %0.2f)' % roc_auc, lw=2, linestyle="-", color="b")
plot(plt)
# Same ROC plot, for comparing the classifiers
def print_roc_curve_all(y_true, decision_function_lof, decision_function_ocsvm, decision_function_iForest):
fpr, tpr, thresholds = roc_curve(y_true, decision_function_lof, pos_label=1)
roc_auc = roc_auc_score(y_true, decision_function_lof)
plt.plot(fpr, tpr, 'k--',label='LOF (AUC = %0.2f)' % roc_auc, lw=2, linestyle="-", color="r")
fpr, tpr, thresholds = roc_curve(y_true, decision_function_ocsvm, pos_label=1)
roc_auc = roc_auc_score(y_true, decision_function_ocsvm)
plt.plot(fpr, tpr, 'k--',label='OCSVM (AUC = %0.2f)' % roc_auc, lw=2, linestyle="-", color="g")
fpr, tpr, thresholds = roc_curve(y_true, decision_function_iForest, pos_label=1)
roc_auc = roc_auc_score(y_true, decision_function_iForest)
plt.plot(fpr, tpr, 'k--',label='iForest (AUC = %0.2f)' % roc_auc, lw=2, linestyle="-", color="b")
plot(plt)
# +
from sklearn.metrics import precision_recall_fscore_support
# Return the average F-score over the normal and anomalous classes
def validation(y_valid_true, y_valid_pred):
prec_rec_f = precision_recall_fscore_support(y_valid_true, y_valid_pred)
return np.average(prec_rec_f[2])
# +
import matplotlib.pyplot as plt
# Plot the F-score as the hyperparameter (idx_name) varies
def plot_fscore_graph(idx_name, idx, f_score):
plt.plot(idx, f_score)
plt.xlabel(idx_name)
plt.ylabel('F-score')
plt.show()
# -
| ch7-2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt #Data visualization libraries
import seaborn as sns
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
import scipy.stats as ss  # needed for ss.linregress below
mes = list(range(1, 16))
publicidad = [10, 12, 8, 17, 10, 15, 10, 14, 19, 10, 11, 13, 16, 10, 12] # X: independent variable
pasajeros = [15, 17, 13, 23, 16, 21, 14, 20, 24, 17, 16, 18, 23, 15, 16] # Y: dependent variable
plt.scatter(publicidad, pasajeros)
plt.show()
data = {
"mes" : mes,
"publicidad" : publicidad,
"pasajeros" : pasajeros
}
df = pd.DataFrame(data)
df
linregress = ss.linregress(publicidad, pasajeros)
linregress
slope = linregress[0]  # slope of the fitted line
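# As a sanity check, the slope and intercept returned by `linregress` can be reproduced from the ordinary least-squares formulas, slope = cov(x, y) / var(x) and intercept = mean(y) - slope * mean(x) (a minimal sketch using only the standard library):

```python
def ols(x, y):
    # ordinary least squares for a single predictor
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

publicidad = [10, 12, 8, 17, 10, 15, 10, 14, 19, 10, 11, 13, 16, 10, 12]
pasajeros = [15, 17, 13, 23, 16, 21, 14, 20, 24, 17, 16, 18, 23, 15, 16]
slope, intercept = ols(publicidad, pasajeros)  # roughly 1.08 and 4.39
```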
# ***
consumidor = list(range(1,13))
ingreso = [24.3, 12.5, 31.2, 28, 35.1, 10.5, 23.2, 10, 8.5, 15.9, 14.7, 15] # X: independent variable
consumo = [16.2, 8.5, 15, 17, 24.2, 11.2, 15, 7.1, 3.5, 11.5, 10.7, 9.2] # Y: dependent variable
plt.scatter(ingreso, consumo)
plt.show()
linregress = ss.linregress(ingreso, consumo)
linregress
x = np.array(ingreso)
f = lambda x : linregress[0] * x + linregress[1]
y = [f(xi) for xi in x]
plt.plot(x, y)
plt.scatter(ingreso, consumo)
plt.show()
f(27.5)
# ***
unidades = [12.3, 8.3, 6.5, 4.8, 14.6, 14.6, 14.6, 6.5] # X: independent variable
costo = [6.2, 5.3, 4.1, 4.4, 5.2, 4.8, 5.9, 4.2] # Y: dependent variable
plt.scatter(unidades, costo)
plt.show()
linregress = ss.linregress(unidades, costo)
linregress
linregress[4]  # standard error of the estimated slope
| examples/.ipynb_checkpoints/Regresion lineal-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Unit testing
#
# ## 1 unittest framework
#
# The unittest unit testing framework was originally inspired by **JUnit**
#
# and has a similar flavor as major unit testing frameworks in other languages.
#
# It supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework.
#
# To achieve this, **unittest** supports some important concepts in an object-oriented way:
#
# ### test fixture:
#
# A test fixture represents the **preparation** needed to perform one or more tests, and any associate **cleanup actions**.
#
# This may involve, for example, creating temporary or proxy databases, directories, or starting a server process.
#
# > A fixture establishes a fixed, known **environment state** so that tests are repeatable and run as expected
#
# ### test case:
#
# A test case is the **individual** unit of testing.
#
# It checks for a **specific** response to a **particular** set of inputs.
#
# **unittest** provides a base class, **TestCase**, which may be used to create new test cases.
#
#
# ### test suite:
#
# A test suite is a **collection** of test cases, test suites, or both.
#
# It is used to **aggregate tests** that should be executed together.
#
#
# ### test runner:
#
# A test runner is a component which orchestrates the execution of tests and provides the outcome to the user.
#
#
#
# ## 2 Basic Test Structure
#
# Tests, as defined by <b>unittest</b>, have two parts:
#
# * <b>code to manage test “fixtures”</b>
#
# * <b>the test itself</b>.
#
# **Individual tests** are created by
#
# * subclassing **TestCase**
#
# * overriding or adding appropriate methods
#
# For example:
#
# * **unittest_simple.py**
# +
# %%file ./code/unittest/unittest_simple.py
import unittest
class SimplisticTest(unittest.TestCase):
def test_true(self):
self.assertTrue(True)
def test(self):
self.assertTrue(True)
if __name__ == '__main__':
unittest.main()
# -
# In this case, the <b>SimplisticTest</b> have
#
# * <b>test_true()</b>
#
# * <b>test()</b>
#
# methods would <b>fail if True is ever False</b>.
#
# The methods are defined with names that **start** with the letters **test**.
#
# This naming convention informs the <b>test runner</b> about which methods represent tests.
#
# ## 3 Running Tests
#
# The easiest way to run unittest tests is to include:
# ```python
# if __name__ == '__main__':
# unittest.main()
# ```
# at the bottom of each test file,
#
# then simply run the script directly from the **command line**:
# ```
# >python unittest_simple.py
# ```
# !python ./code/unittest/unittest_simple.py
# %run ./code/unittest/unittest_simple.py
#
# includes
#
# * <b>a status indicator for each test</b>
#
# * **"."** on the first line of output means that a test <b>passed</b>
#
# * <b>the amount of time the tests took</b>,
#
#
# For **more** detailed test results,
#
# **-v** option:
#
# ```
# >python unittest_simple.py -v
# ```
#
# %run ./code/unittest/unittest_simple.py -v
# You can run tests with more detailed information by passing in the verbosity argument:
# ```python
# unittest.main(verbosity=2)
# ```
# +
# %%file ./code/unittest/unittest_simple_more_detailed_information.py
import unittest
class SimplisticTest(unittest.TestCase):
def test_true(self):
self.assertTrue(True)
def test(self):
self.assertTrue(True)
if __name__ == '__main__':
unittest.main(verbosity=2)
# -
# %run ./code/unittest/unittest_simple_more_detailed_information.py
# ## 4 Test Outcomes
#
# Tests have 3 possible outcomes, described in the table below.
#
# | Outcome | Mark | Describe |
# |:--------:|----------:|--------:|
# | ok | **.** | The test passes |
# | FAIL | **F** | The test does not pass, and raises an **AssertionError** exception |
# | ERROR | **E** |The test raises an **exception** other than `AssertionError`.|
#
#
# For Example:
#
# * **unittest_outcomes.py**
#
# +
# %%file ./code/unittest/unittest_outcomes.py
import unittest
class OutcomesTest(unittest.TestCase):
# ok
def test_Pass(self):
return
# FAIL
def test_Fail(self):
# AssertionError exception.
self.assertFalse(True)
# ERROR
def test_Error(self):
# raises an exception other than AssertionError
raise RuntimeError('Test error!')
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/unittest_outcomes.py
# **When a test fails or generates an error**
#
# the **traceback** is included in the output.
# In the example above, <b>test_Fail()</b> fails, and the traceback <b>shows the line</b> with the failing code.
#
# It is up to the person reading the test output to look at the code to figure out the semantic meaning of the failed test, though.
#
# ### 4.1 fail with message
#
# To make it <b>easier to understand the nature of a test failure</b>,
#
# the <b>fail*() and assert*()</b> methods all accept an argument <b>msg</b>,
#
# which can be used to produce <b>a more detailed error message</b>
#
# Example:
#
# * **unittest_failwithmessage.py**
#
# +
# %%file ./code/unittest/unittest_failwithmessage.py
import unittest
class FailureMessageTest(unittest.TestCase):
def test_Fail(self):
self.assertFalse(True,'Should be False')
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/unittest_failwithmessage.py
# ## 5 Assert methods
#
# The **TestCase** class provides several assert methods to check for and report failures.
#
# * https://docs.python.org/3/library/unittest.html
# ### 5.1 Asserting Truth
#
#
# +
# %%file ./code/unittest/unittest_true.py
import unittest
class TruthTest(unittest.TestCase):
def testAssertTrue(self):
self.assertTrue(True)
def test_AssertFalse(self):
self.assertFalse(False)
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/unittest_true.py
# ### 5.2 Testing Equality
#
# As a special case, `unittest` includes methods for testing <b>the equality of two values</b>.
#
#
# +
# %%file ./code/unittest/unittest_equality.py
import unittest
class EqualityTest(unittest.TestCase):
def test_Equal(self):
self.assertEqual(1, 3)
def test_NotEqual(self):
self.assertNotEqual(2, 1)
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/unittest_equality.py
# These special tests are handy, since the values being <b>compared appear in the failure message</b> when a test fails.
# +
# %%file ./code/unittest/unittest_notequal.py
import unittest
class InequalityTest(unittest.TestCase):
def test_Equal(self):
self.assertNotEqual(1, 1)
def test_NotEqual(self):
self.assertEqual(2, 1)
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/unittest_notequal.py
# ### 5.3 Almost Equal
#
# In addition to strict equality, it is possible to test for
#
# **near equality of floating point numbers** using
#
# * assertNotAlmostEqual()
#
# * assertAlmostEqual()
# +
# %%file ./code/unittest/unittest_almostequal.py
import unittest
class AlmostEqualTest(unittest.TestCase):
def test_NotAlmostEqual(self):
self.assertNotAlmostEqual(1.11, 1.3, places=1)
def test_AlmostEqual(self):
# self.assertAlmostEqual(1.1, 1.3, places=0)
self.assertAlmostEqual(0.12345678, 0.12345679)
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/unittest_almostequal.py
# The arguments are the values to be compared, and **the number of decimal places** to use for the test.
#
# `assertAlmostEqual()` and `assertNotAlmostEqual()` have an optional parameter named
#
# **places**
#
# and the numbers are compared by **computing the difference rounded to number of decimal places**.
#
# default **places=7**,
#
# hence:
#
#
# ```python
# self.assertAlmostEqual(0.12345678, 0.12345679)  # passes
# ```
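# The rounding rule can be checked directly; this small sketch mimics what `assertAlmostEqual` does, per the documented behaviour:

```python
a, b = 0.12345678, 0.12345679
# assertAlmostEqual(a, b) passes: the difference, rounded to 7 places, is zero
assert round(a - b, 7) == 0
# 1.1 vs 1.3 would fail at places=1: the difference rounds to -0.2, not zero
assert round(1.1 - 1.3, 1) != 0
```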
# ## 6 Test Fixtures
#
# <b>Fixtures are resources needed by a test</b>
#
# * if you are writing several tests for the same class, those tests all need **an instance of that class** to use for testing.
#
#
# * test fixtures include `database` connections and temporary `files ` (many people would argue that using external resources makes such tests not “unit” tests, but they are still tests and still useful).
#
# **TestCase** includes a special hook to **configure** and **clean up** any fixtures needed by your tests.
#
# * To configure the **fixtures**, override **setUp()**.
#
# * To clean up, override **tearDown()**.
#
# ### 6.1 setUp()
#
# Method called to **prepare** the test fixture. This is called immediately **before** calling the test method;
#
# ### 6.2 tearDown()
#
# Method called immediately **after** the test method has been called and the result recorded.
#
# This is `called` even if the test method `raised an exception`, so the implementation in subclasses may need to be particularly careful about checking internal state.
#
# Any exception, other than `AssertionError` or `SkipTest,` raised by this method will be considered an `error` rather than a test failure.
#
# This method will only be called if the `setUp()` succeeds, whether the test method succeeded or not.
#
#
# * automatically call `setUp()` and `tearDown()`
#
# The testing framework will automatically call `setUp()` and `tearDown()` for **every single test** we run.
#
# * any `exception` raised by this `setUp()` and `tearDown()` will be considered an `error` rather than a test failure.
#
# Such a working environment for the testing code is called a `fixture`.
#
# +
# %%file ./code/unittest/unittest_fixtures.py
import unittest
class FixturesTest(unittest.TestCase):
def setUp(self):
print('In setUp()')
self.fixture = range(1, 10)
def tearDown(self):
print('In tearDown()')
del self.fixture
def test_fixture1(self):
print('in test1()')
self.assertEqual(self.fixture, range(1, 10))
def test_fixture2(self):
print('in test2()')
self.assertEqual(self.fixture, range(2, 10))
if __name__ == '__main__':
unittest.main()
# -
# When this sample test is run, you can see
#
# **the order of execution** of the `fixture` and `test` methods:
# %run ./code/unittest/unittest_fixtures.py
# ### 6.3 Any `exception` raised
#
# **Any `exception` raised by this `setUp()` or `tearDown()`**
#
# * This **tearDown()** method will **only** be called if the **setUp() succeeds**, whether the test method succeeded or not.
#
# * Any **exception** raised by this **setUp()** and **tearDown()** will be considered an **error** rather than **a test failure.**
# +
# %%file ./code/unittest/unittest_fixtures_exception.py
import unittest
class FixturesTest(unittest.TestCase):
def setUp(self):
print('In setUp()')
r=1/0
self.fixture = range(1, 10)
def tearDown(self):
print('In tearDown()')
r=1/0
del self.fixture
def test_fixture1(self):
print('in test1()')
self.assertEqual(self.fixture, range(1, 10))
def test_fixture2(self):
print('in test2()')
self.assertEqual(self.fixture, range(2, 10))
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/unittest_fixtures_exception.py
# ## 7 Test Suites
#
# **Test case instances** are grouped together according to the features they test.
#
# `unittest` provides a mechanism for this: the **test suite**, represented by unittest‘s **TestSuite** class.
#
# In most cases, calling `unittest.main()` will do the right thing and collect all the module’s test cases for you, and then execute them.
#
# However, should you want to `customize` the building of your test suite, you can do it yourself:
#
# +
# %%file ./code/unittest/test_TestSuite.py
import unittest
class EqualityTest(unittest.TestCase):
def test_Equal(self):
self.assertEqual(1, 3)
def test_NotEqual(self):
self.assertNotEqual(3, 3)
class AlmostEqualTest(unittest.TestCase):
def test_NotAlmostEqual(self):
self.assertNotAlmostEqual(1.2, 1.1, places=1)
def test_AlmostEqual(self):
self.assertAlmostEqual(1.1, 1.3, places=0)
def suiteEqual():
suite = unittest.TestSuite()
suite.addTest(EqualityTest('test_Equal'))
suite.addTest(AlmostEqualTest('test_AlmostEqual'))
return suite
def suiteNotEqual():
suite = unittest.TestSuite()
suite.addTest(EqualityTest('test_NotEqual'))
suite.addTest(AlmostEqualTest('test_NotAlmostEqual'))
return suite
if __name__ == '__main__':
unittest.main(defaultTest = 'suiteNotEqual')
#unittest.main(defaultTest = 'suiteEqual')
# -
# %run ./code/unittest/test_TestSuite.py
# ## 8 Test Algorithms
#
# Test Sorting Algorithms
#
# * Sorting Algorithms Module
# * Test Module
#
# ### 8.1 Sorting Algorithms Module
# +
# %%file ./code/unittest/sort.py
def select_sort(L):
length=len(L)
for i in range(length):
        min_idx = i # assume the first element is the smallest
for j in range(i+1, length):
if L[j]<L[min_idx] :
min_idx = j
if min_idx!=i:
L[i], L[min_idx] = L[min_idx], L[i]
def merge(left, right, compare = lambda x,y:x<y):
result = [] # the copy of the list.
i,j = 0, 0
while i < len(left) and j < len(right):
if compare(left[i], right[j]):
result.append(left[i])
i += 1
else:
result.append(right[j])
j += 1
while (i < len(left)):
result.append(left[i])
i += 1
while (j < len(right)):
result.append(right[j])
j += 1
return result
def merge_sort(L, compare = lambda x,y:x<y):
if len(L) < 2:
return L[:]
else:
middle = len(L)//2
left = merge_sort(L[:middle], compare)
right = merge_sort(L[middle:], compare)
return merge(left, right, compare)
# -
# ### 8.2 The Test Module
#
#
# +
# %%file ./code/unittest/test_sort.py
import unittest
from sort import *
class sortTest(unittest.TestCase):
def setUp(self):
self.L=[7, 4, 5, 9, 8, 2, 1]
self.sortedL=[1, 2, 4, 5, 7, 8, 9]
def test_select_sort(self):
select_sort(self.L)
self.assertEqual(self.L,self.sortedL)
def test_merge_sort(self):
L1=merge_sort(self.L)
self.assertEqual(L1,self.sortedL)
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/test_sort.py
# Keeping the test code in a **separate** module has several advantages:
#
# * The **test module** can be `run` **standalone**.
#
# * The **test code** can more easily be **separated from shipped code**.
#
# * If the **testing strategy** changes, there is **no need to change the source code**.
#
#
# ## 9 Black and Glass Box Testing
#
#
#
#
# ### 9.1 Black-Box Testing
# **Black-box** testing tests the functionality of an application by looking at its **specifications**.
#
#
# Consider the following code specification:
# ```python
# def union(set1, set2):
# """
# set1 and set2 are collections of objects, each of which might be empty.
# Each set has no duplicates within itself, but there may be objects that
# are in both sets. Objects are assumed to be of the same type.
#
# This function returns one set containing all elements from
# both input sets, but with no duplicates.
# """
# ```
#
# According to the **specifications**, the possibilities for set1 and set2 are as follows:
#
# * both sets are empty:
#
#     * set1 is an empty set; set2 is an empty set - ```union('','')```
#
# * one of the sets is empty and one has at least one object:
#
#     * set1 is an empty set; set2 is of size greater than or equal to 1 - ```union('','a')```
#
#     * set1 is of size greater than or equal to 1; set2 is an empty set - ```union('b','')```
#
# * both sets are not empty:
#
#     * set1 and set2 are both nonempty sets which do not contain any objects in common - ```union('a','b')```
#
#     * set1 and set2 are both nonempty sets which contain objects in common - ```union('ab','bcd')```
#
# %%file ./code/unittest/unionsets.py
def union(set1, set2):
"""
set1 and set2 are collections of objects, each of which might be empty.
Each set has no duplicates within itself, but there may be objects that
are in both sets. Objects are assumed to be of the same type.
This function returns one set containing all elements from
both input sets, but with no duplicates.
"""
if len(set1) == 0:
return set2
elif set1[0] in set2:
return union(set1[1:], set2)
else:
return set1[0] + union(set1[1:], set2)
# +
# %%file ./code/unittest/test_union_black_box.py
import unittest
from unionsets import union
class union_black_box(unittest.TestCase):
def setUp(self):
self.inputsets=[("",""),
("","a"),
("b",""),
("a","b"),
("ab","bcd")]
self.unionsets=["","a","b","ab","abcd"]
def test_union(self):
for i,item in enumerate(self.inputsets):
set1,set2=item
self.assertEqual(self.unionsets[i],union(set1,set2))
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/test_union_black_box.py
# ### 9.2 Glass-Box Testing
#
# **Glass-box** testing constructs the test cases from the **code** as written.
#
# #### 9.2.1 A path-complete glass box test suite
#
# A path-complete glass box test suite would find test cases that go through every possible path in the code.
#
# %%file ./code/unittest/max_three.py
def max_three(a,b,c) :
"""
a, b, and c are numbers
returns: the maximum of a, b, and c
"""
if a > b:
bigger = a
else:
bigger = b
if c > bigger:
bigger = c
return bigger
# In this case, that means finding all possibilities for the conditional tests $a > b$ and $c > bigger$.
#
# So, we end up with four possible paths :
#
# * Case 1: a > b and c > bigger - max_three(2, -10, 100)
# * Case 2: a > b and c <= bigger - max_three(6, 1, 5)
# * Case 3: a <= b and c > bigger - max_three(7, 9, 10)
# * Case 4: a <= b and c <= bigger - max_three(0, 40, 20)
#
#
# +
# %%file ./code/unittest/test_max_three_glass_box.py
import unittest
from max_three import *
class max_three_glass_box(unittest.TestCase):
def setUp(self):
self.testinputs=[(2, -10, 100),
(6, 1, 5),
(7, 9, 10),
(0, 40, 20) ]
self.okvalues=[100,6,10,40]
def test_max_three(self):
for i,item in enumerate(self.testinputs):
a,b,c=item
self.assertEqual(self.okvalues[i],max_three(a,b,c))
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/test_max_three_glass_box.py
# #### 9.2.2 A good glass box test suite
#
# A good glass box test suite would try to test a good sample of all the possible paths through the code.
#
#
# %%file ./code/unittest/unionsets.py
def union(set1, set2):
"""
set1 and set2 are collections of objects, each of which might be empty.
Each set has no duplicates within itself, but there may be objects that
are in both sets. Objects are assumed to be of the same type.
This function returns one set containing all elements from
both input sets, but with no duplicates.
"""
if len(set1) == 0:
return set2
elif set1[0] in set2:
return union(set1[1:], set2)
else:
return set1[0] + union(set1[1:], set2)
# So, it should contain tests that test
#
# * `if len(set1) == 0` : when set1 is empty - ```union('', 'abc')```
#
# * `elif set1[0] in set2` : when set1[0] is in set2 - ```union('a', 'abc')```
#
# * `else` : when set1[0] is not in set2 - ```union('d', 'abc')```
#
# * The test suite should also test when the `recursion depth` is 0, 1, and greater than 1.
#
# * Recursion depth = 0 - ```union('', 'abc')```
# * Recursion depth = 1 - ```union('a', 'abc'), union('d', 'abc')```
# * Recursion depth > 1 - ```union('ab', 'abc')```
#
# > Note that this test suite is **NOT path complete** because it would take essentially infinite time to test all possible **recursive depths**.
# +
# %%file ./code/unittest/test_union_glass_box.py
import unittest
from unionsets import union
class union_glass_box(unittest.TestCase):
def setUp(self):
self.inputsets=[("","abc"),
("a","abc"),
("d","abc"),
("ab","abc")]
self.unionsets=["abc","abc","dabc","abc"]
def test_union(self):
for i,item in enumerate(self.inputsets):
set1,set2=item
self.assertEqual(self.unionsets[i],union(set1,set2))
if __name__ == '__main__':
unittest.main()
# -
# %run ./code/unittest/test_union_glass_box.py
# ## Further Reading
#
# ### Python:unittest
#
# * [Python:unittest — Unit testing framework](https://docs.python.org/3/library/unittest.html)
#
# * [Pymotw:unittest — Automated Testing Framework](https://pymotw.com/3/unittest/index.html)
#
#
# ### Unit Test for C/C++
#
# * [Unit7-A-UnitTest_CPP.](./Unit7-A-UnitTest_CPP.ipynb)
#
#
| notebook/Unit7-2-Unittest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # My Python/Pandas note for data processing
# > Sang's personal Python pandas note.
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [cheatsheet,pandas,python]
# This is my personal Python/pandas note for data processing.
# # Pandas
#
#
import numpy as np
import pandas as pd
# ## Indexing
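# A couple of minimal examples to fill in this section (the data here is made up):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}, index=["x", "y", "z"])

df.loc["y", "b"]    # label-based selection
df.iloc[0, 0]       # position-based selection
df["a"].tolist()    # a single column as a plain list
```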
# ## Naming
# ## Group
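# A minimal `groupby` example to fill in this section (again, made-up data):

```python
import pandas as pd

df = pd.DataFrame({"team": ["a", "a", "b"], "score": [1, 2, 10]})
totals = df.groupby("team")["score"].sum()  # per-team totals
```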
# ## Timestamp
print(pd.Timestamp("2021-03-07",tz='America/Indianapolis').week)
print(pd.Timestamp("2021-03-07",tz='America/Indianapolis').dayofweek)
print(pd.Timestamp("2021-03-08",tz='America/Indianapolis').week)
print(pd.Timestamp("2021-03-08",tz='America/Indianapolis').dayofweek)
pd.Timestamp("2021-03-07",tz='America/Indianapolis').dayofweek
import torch
# # Python
# ## List comprehension with if
x = [1,2,3,4,5,4,3]
["Good" if i>=4 else "Neutral" if i==3 else "Bad" for i in x]
# # OOP simple example
# +
#class inheritance..
class Animal:
def __init__(self, name):
self.name = name
# Create a class Mammal, which inherits from Animal
class Mammal(Animal):
def __init__(self, name, animal_type):
self.animal_type = animal_type
super().__init__(name)  # reuse Animal's initializer instead of setting name directly
# Instantiate a mammal with name 'Daisy' and animal_type 'dog': daisy
daisy = Mammal("Daisy", "dog")
# Print both objects
print(daisy)
print(daisy.animal_type)
print(daisy.name)
print(dir(daisy)[-3:])
| _notebooks/2021-02-22-My-Python-Pandas-note.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python2
# ---
# +
from __future__ import division
from __future__ import print_function
import time
import tensorflow as tf
from gcn.utils import *
from gcn.models import GCN, MLP
# Set random seed
seed = 123
np.random.seed(seed)
tf.set_random_seed(seed)
# Load data
adj, features, y_train, y_val, y_test, train_mask, val_mask, test_mask = load_data('citeseer')
# +
# print(type(adj),type(adj[1,:]))
# print(adj[1,:], adj.shape)
# +
#features
# -
features = preprocess_features(features)
support = [preprocess_adj(adj)] # renormalized adjacency matrix, returned in sparse tuple form via sparse_to_tuple
num_supports = 1
model_func = GCN
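# `preprocess_adj` comes from `gcn.utils` and is not shown in this notebook; it applies the renormalization trick from the GCN paper. A dense NumPy sketch of that computation (my own illustration, not the library code, which works on SciPy sparse matrices):

```python
import numpy as np

# Renormalization trick: A_hat = D^{-1/2} (A + I) D^{-1/2}
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
A_tilde = A + np.eye(A.shape[0])           # add self-loops
d = A_tilde.sum(axis=1)                    # degrees including self-loops
D_inv_sqrt = np.diag(d ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetric normalized adjacency
```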
# Define placeholders
placeholders = {
'support': [tf.sparse_placeholder(tf.float32) for _ in range(num_supports)],
'features': tf.sparse_placeholder(tf.float32, shape=tf.constant(features[2], dtype=tf.int64)),
'labels': tf.placeholder(tf.float32, shape=(None, y_train.shape[1])),
'labels_mask': tf.placeholder(tf.int32),
'dropout': tf.placeholder_with_default(0., shape=()),
'num_features_nonzero': tf.placeholder(tf.int32) # helper variable for sparse dropout
}
# +
from gcn.layers import *
from gcn.metrics import *
flags = tf.app.flags
FLAGS = flags.FLAGS
class Model(object):
def __init__(self, **kwargs):
allowed_kwargs = {'name', 'logging'}
for kwarg in kwargs.keys():
assert kwarg in allowed_kwargs, 'Invalid keyword argument: ' + kwarg
name = kwargs.get('name')
if not name:
name = self.__class__.__name__.lower()
self.name = name
logging = kwargs.get('logging', False)
self.logging = logging
self.vars = {}
self.placeholders = {}
self.layers = []
self.activations = []
self.inputs = None
self.outputs = None
self.loss = 0
self.accuracy = 0
self.optimizer = None
self.opt_op = None
def _build(self):
raise NotImplementedError
# define it in the child class as needed
def build(self):
""" Wrapper for _build() """
with tf.variable_scope(self.name):
self._build() # first call the child class' build
# Build sequential layer model
self.activations.append(self.inputs)
for layer in self.layers:
hidden = layer(self.activations[-1])
self.activations.append(hidden)
self.outputs = self.activations[-1]
# Store model variables for easy access
variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=self.name)
self.vars = {var.name: var for var in variables}
# Build metrics
self._loss()
self._accuracy()
self.opt_op = self.optimizer.minimize(self.loss)
def predict(self):
pass
def _loss(self):
raise NotImplementedError
def _accuracy(self):
raise NotImplementedError
def save(self, sess=None):
if not sess:
raise AttributeError("TensorFlow session not provided.")
saver = tf.train.Saver(self.vars)
save_path = saver.save(sess, "tmp/%s.ckpt" % self.name)
print("Model saved in file: %s" % save_path)
def load(self, sess=None):
if not sess:
raise AttributeError("TensorFlow session not provided.")
saver = tf.train.Saver(self.vars)
save_path = "tmp/%s.ckpt" % self.name
saver.restore(sess, save_path)
print("Model restored from file: %s" % save_path)
class GCN(Model):
def __init__(self, placeholders, input_dim, **kwargs):
super(GCN, self).__init__(**kwargs)
self.learning_rate = 0.01
self.hidden1 = 16
self.weight_decay = 5e-4
self.inputs = placeholders['features']
self.input_dim = input_dim
# self.input_dim = self.inputs.get_shape().as_list()[1] # To be supported in future Tensorflow versions
self.output_dim = placeholders['labels'].get_shape().as_list()[1]
self.placeholders = placeholders
self.optimizer = tf.train.AdamOptimizer(self.learning_rate)
self.build() # from Model
def _loss(self):
# Weight decay loss
for var in self.layers[0].vars.values():
self.loss += self.weight_decay * tf.nn.l2_loss(var)
# Cross entropy error
self.loss += masked_softmax_cross_entropy(self.outputs, self.placeholders['labels'],
self.placeholders['labels_mask'])
def _accuracy(self):
self.accuracy = masked_accuracy(self.outputs, self.placeholders['labels'],
self.placeholders['labels_mask'])
def _build(self):
self.layers.append(GraphConvolution(input_dim=self.input_dim,
output_dim=self.hidden1,
placeholders=self.placeholders,
act=tf.nn.relu,
dropout=True,
sparse_inputs=True,
logging=self.logging))
self.layers.append(GraphConvolution(input_dim=self.hidden1,
output_dim=self.output_dim,
placeholders=self.placeholders,
act=lambda x: x,
dropout=True,
logging=self.logging))
def predict(self):
return tf.nn.softmax(self.outputs)
# -
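# As I read `gcn.metrics`, `masked_softmax_cross_entropy` computes per-node cross-entropy, zeroes out unlabeled nodes with the mask, and rescales by the mask mean so the loss magnitude stays comparable. A NumPy sketch of that behaviour (my reading of the library, not its actual code):

```python
import numpy as np

def masked_softmax_cross_entropy_np(logits, labels, mask):
    # numerically stable per-example softmax cross-entropy
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_example = -(labels * log_probs).sum(axis=1)
    # zero out unlabeled examples and rescale by the fraction labeled
    mask = mask.astype(float)
    mask /= mask.mean()
    return (per_example * mask).mean()
```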
# Create model
model = GCN(placeholders, input_dim=features[2][1], logging=True)
# Initialize session
sess = tf.Session()
# Define model evaluation function
def evaluate(features, support, labels, mask, placeholders):
t_test = time.time()
feed_dict_val = construct_feed_dict(features, support, labels, mask, placeholders)
outs_val = sess.run([model.loss, model.accuracy], feed_dict=feed_dict_val)
return outs_val[0], outs_val[1], (time.time() - t_test)
# +
# Init variables
sess.run(tf.global_variables_initializer())
cost_val = []
# +
# Train model
epochs = 20
dropout = 0.5
early_stopping = 10
for epoch in range(epochs):
t = time.time()
# Construct feed dictionary
feed_dict = construct_feed_dict(features, support, y_train, train_mask, placeholders)
feed_dict.update({placeholders['dropout']:dropout})
# Training step
outs = sess.run([model.opt_op, model.loss, model.accuracy], feed_dict=feed_dict)
# Validation
cost, acc, duration = evaluate(features, support, y_val, val_mask, placeholders)
cost_val.append(cost)
# Print results
print("Epoch:", '%04d' % (epoch + 1), "train_loss=", "{:.5f}".format(outs[1]),
"train_acc=", "{:.5f}".format(outs[2]), "val_loss=", "{:.5f}".format(cost),
"val_acc=", "{:.5f}".format(acc), "time=", "{:.5f}".format(time.time() - t))
if epoch > early_stopping and cost_val[-1] > np.mean(cost_val[-(early_stopping+1):-1]):
print("Early stopping...")
break
print("Optimization Finished!")
# Testing
test_cost, test_acc, test_duration = evaluate(features, support, y_test, test_mask, placeholders)
print("Test set results:", "cost=", "{:.5f}".format(test_cost),
"accuracy=", "{:.5f}".format(test_acc), "time=", "{:.5f}".format(test_duration))
# -
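# The early-stopping test in the training loop above, isolated as a small helper for clarity (a sketch of the same condition):

```python
import numpy as np

def should_stop(cost_val, patience=10):
    # stop once the newest validation cost exceeds the mean of the
    # `patience` validation costs recorded just before it
    if len(cost_val) <= patience + 1:
        return False
    return cost_val[-1] > np.mean(cost_val[-(patience + 1):-1])
```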
model.layers[0].vars.values()[0]
model.layers[1].vars.values()
print((sess.run(model.layers[0].vars.values()[0])))
sys.version_info > (3, 0)
# # what happens to the unlabeled data?
# everything is labeled; they just mask out some of the labels!
import numpy as np
import pickle as pkl
import networkx as nx
import scipy.sparse as sp
from scipy.sparse.linalg.eigen.arpack import eigsh
import sys
names = ['x', 'y', 'tx', 'ty', 'allx', 'ally', 'graph']
objects = []
for i in range(len(names)):
with open("data/ind.{}.{}".format('cora', names[i]), 'rb') as f:
if sys.version_info > (3, 0):
objects.append(pkl.load(f, encoding='latin1'))
else:
objects.append(pkl.load(f))
x, y, tx, ty, allx, ally, graph = tuple(objects)
x, len(y), len(y[0]), sum([1 for t in y if sum(t)==0]),tx, len(ty), len(ty[0]),sum([1 for t in ty if sum(t)==0]), allx, len(ally),len(ally[0]), sum([1 for t in ally if sum(t)!=1])
test_idx_reorder = parse_index_file("data/ind.{}.test.index".format('cora'))
test_idx_range = np.sort(test_idx_reorder)
features = sp.vstack((allx, tx)).tolil()
features[test_idx_reorder, :] = features[test_idx_range, :]
adj = nx.adjacency_matrix(nx.from_dict_of_lists(graph))
labels = np.vstack((ally, ty))
labels[test_idx_reorder, :] = labels[test_idx_range, :]
idx_test = test_idx_range.tolist()
idx_train = range(len(y))
idx_val = range(len(y), len(y)+500)
train_mask = sample_mask(idx_train, labels.shape[0])
val_mask = sample_mask(idx_val, labels.shape[0])
test_mask = sample_mask(idx_test, labels.shape[0])
y_train = np.zeros(labels.shape)
y_val = np.zeros(labels.shape)
y_test = np.zeros(labels.shape)
y_train[train_mask, :] = labels[train_mask, :]
y_val[val_mask, :] = labels[val_mask, :]
y_test[test_mask, :] = labels[test_mask, :]
features, len(y_train), len(y_train[0])
float(len(idx_train))/features.shape[0]
140.0/2708
sum([1 for t in y_train if sum(t)!=0])
len(idx_test)
sum([1 for t in y_test if sum(t)!=0])
| gcn/understand_training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="1YkGmGCi6oRu"
# ## **University of Toronto - CSC413 - Neural Networks and Deep Learning**
# ## **Programming Assignment 4 - StyleGAN2-Ada**
#
# This is a self-contained notebook that allows you to play around with a pre-trained StyleGAN2-Ada generator
#
# Disclaimer: some code was borrowed from the official StyleGAN documentation on GitHub https://github.com/NVlabs/stylegan
#
# Make sure to set your runtime to GPU
#
# Remember to save your progress periodically!
# + id="6-W5eReUVPnS"
# Run this for Google CoLab (use TensorFlow 1.x)
# %tensorflow_version 1.x
# clone StyleGAN2 Ada
# !git clone https://github.com/NVlabs/stylegan2-ada.git
# + id="iVNM_ERtVxA1"
#setup some environments (Do not change any of the following)
import sys
import pickle
import os
import numpy as np
from IPython.display import Image
import PIL.Image
from PIL import Image
import matplotlib.pyplot as plt
sys.path.insert(0, "/content/stylegan2-ada") #do not remove this line
import dnnlib
import dnnlib.tflib as tflib
import IPython.display
from google.colab import files
# + [markdown] id="DhU-cjsC6CdN"
# Next, we will load a pre-trained StyleGan2-ada network.
#
# Each of the following pre-trained network is specialized to generate one type of image.
# + id="ENrDl7ZTddyT"
# The pre-trained networks are stored as standard pickle files
# Uncomment one of the following URL to begin
# If you wish, you can also find other pre-trained networks online
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/ffhq.pkl" # Human faces
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/cifar10.pkl" # CIFAR10, these images are a bit too tiny for our experiment
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqwild.pkl" # wild animal pictures
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/metfaces.pkl" # European portrait paintings
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqcat.pkl" # cats
#URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/afhqdog.pkl" # dogs
tflib.init_tf() #this creates a default Tensorflow session
# we are now going to load the StyleGAN2-Ada model
# The following code downloads the file and unpickles it to yield 3 instances of dnnlib.tflib.Network.
with dnnlib.util.open_url(URL) as fp:
_G, _D, Gs = pickle.load(fp)
# Here is a brief description of _G, _D, Gs, for details see the official StyleGAN documentation
# _G = Instantaneous snapshot of the generator. Mainly useful for resuming a previous training run.
# _D = Instantaneous snapshot of the discriminator. Mainly useful for resuming a previous training run.
# Gs = Long-term average of the generator. Yields higher-quality results than the instantaneous snapshot.
# We will work with Gs
# + [markdown] id="Mk3EKnIyckPE"
# ## Part 1 Sampling and Identifying Fakes
#
# Open: https://github.com/NVlabs/stylegan and follow the instructions starting from *There are three ways to use the pre-trained generator....*
#
# Complete generate_latent_code and generate_images function in the Colab notebook to generate a small row of $3 - 5$ images.
#
# You do not need to include these images into your PDF submission.
#
# If you wish, you can use https://www.whichfaceisreal.com/learn.html as a guideline to spot imperfections in these images, e.g., the "blob artifact", and make a short remark for your attached images.
# + id="N7eAfVJDiB0y"
# Sample a batch of latent codes {z_1, ...., z_B}, B is your batch size.
def generate_latent_code(SEED, BATCH, LATENT_DIMENSION = 512):
"""
This function samples a batch of 512-dimensional random latent codes
- SEED: int
- BATCH: int that specifies the number of latent codes, Recommended batch_size is 3 - 6
- LATENT_DIMENSION is by default 512 (see Karras et al.)
You should use np.random.RandomState to construct a random number generator, say rnd
Then use rnd.randn along with your BATCH and LATENT_DIMENSION to generate your latent codes.
This samples a batch of latent codes from a normal distribution
https://numpy.org/doc/stable/reference/random/generated/numpy.random.RandomState.randn.html
Return latent_codes, which is a 2D array with dimensions BATCH times LATENT_DIMENSION
"""
################################################################################
########################## COMPLETE THE FOLLOWING ##############################
################################################################################
latent_codes = ...
################################################################################
return latent_codes
# + id="N8giN3BfibqG"
# Sample images from your latent codes https://github.com/NVlabs/stylegan
# You can use their default settings
################################################################################
########################## COMPLETE THE FOLLOWING ##############################
################################################################################
def generate_images(SEED, BATCH, TRUNCATION = 0.7):
"""
This function generates a batch of images from latent codes.
- SEED: int
- BATCH: int that specifies the number of latent codes to be generated
- TRUNCATION: float between [-1, 1] that decides the amount of clipping to apply to the latent code distribution
recommended setting is 0.7
You will use Gs.run() to sample images. See https://github.com/NVlabs/stylegan for details
You may use their default setting.
"""
# Sample a batch of latent code z using generate_latent_code function
latent_codes = ...
# Convert latent code into images by following https://github.com/NVlabs/stylegan
fmt = dict(...)
images = Gs.run(...)
return PIL.Image.fromarray(np.concatenate(images, axis=1) , 'RGB')
################################################################################
# + id="bdaxzuAFcTM8"
# Generate your images
generate_images(...)
# + [markdown] id="z9Jre5OXbUHD"
# ## **Part 2 Interpolation**
#
# Complete the interpolate_images function using linear interpolation between two latent codes,
# \begin{equation}
# z = r z_1 + (1-r) z_2, r \in [0, 1]
# \end{equation}
# and feeding this interpolation through the StyleGAN2-Ada generator Gs as done in generate_images. Include a small row of interpolation in your PDF submission as a screen shot if necessary to keep the file size small.
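# The linear interpolation itself can be sketched in NumPy before wiring it into `Gs.run` (the seeds and shapes here are illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
z1 = rng.randn(512)
z2 = rng.randn(512)

# r = 0 gives z2, r = 1 gives z1; intermediate r blends the two codes
ratios = np.linspace(0.0, 1.0, num=8)
z_interp = np.stack([r * z1 + (1 - r) * z2 for r in ratios])  # shape (8, 512)
```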
# + id="fUwCVd9mlDOr"
################################################################################
########################## COMPLETE THE FOLLOWING ##############################
################################################################################
def interpolate_images(SEED1, SEED2, INTERPOLATION, BATCH = 1, TRUNCATION = 0.7):
"""
- SEED1, SEED2: int, seed to use to generate the two latent codes
- INTERPOLATION: int, the number of interpolation between the two images, recommended setting 6 - 10
- BATCH: int, the number of latent code to generate. In this experiment, it is 1.
- TRUNCATION: float between [-1, 1] that decides the amount of clipping to apply to the latent code distribution
recommended setting is 0.7
You will interpolate between two latent code that you generate using the above formula
You can generate an interpolation variable using np.linspace
https://numpy.org/doc/stable/reference/generated/numpy.linspace.html
This function should return an interpolated image. Include a screenshot in your submission.
"""
latent_code_1 = ...
latent_code_2 = ...
images = Gs.run(...)
return PIL.Image.fromarray(np.concatenate(images, axis=1) , 'RGB')
################################################################################
# + id="fV4Df5J8lZLy"
# Create an interpolation of your generated images
interpolate_images(...)
# + [markdown] id="o5HUpzB0T5LN"
# After you have generated interpolated images, an interesting task would be to see how you can create a GIF. Feel free to explore a little bit more.
# + [markdown] id="CV2tsaAhaTzg"
# ## **Part 3 Style Mixing and Fine Control**
# In the final part, you will reproduce the famous style mixing example from the original StyleGAN paper.
#
# ### Step 1. We will first learn how to generate from sub-networks of the StyleGAN generator.
# + id="QG-xbj9qtMTA"
# You will generate images from sub-networks of the StyleGAN generator
# Similar to Gs, the sub-networks are represented as independent instances of dnnlib.tflib.Network
# Complete the function by following \url{https://github.com/NVlabs/stylegan}
# And look up Gs.components.mapping, Gs.components.synthesis, and Gs.get_var
# Remember to use the truncation trick as described in the handout after you obtain src_dlatents from Gs.components.mapping.run
def generate_from_subnetwork(src_seeds, LATENT_DIMENSION = 512):
"""
- src_seeds: a list of int, where each int is used to generate a latent code, e.g., [1,2,3]
- LATENT_DIMENSION: by default 512
You will complete the code snippet in the Write Your Code Here block
This generates several images from a sub-network of the generator.
To prevent mistakes, we have provided the variable names which corresponds to the ones in the StyleGAN documentation
You should use their convention.
"""
# default arguments to Gs.components.synthesis.run, this is given to you.
synthesis_kwargs = {
'output_transform': dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True),
'randomize_noise': False,
'minibatch_size': 4
}
############################################################################
########################## WRITE YOUR CODE HERE ############################
############################################################################
truncation = ...
src_latents = ...
src_dlatents = ...
w_avg = ...
src_dlatents = ...
all_images = Gs.components.synthesis.run(...)
############################################################################
return PIL.Image.fromarray(np.concatenate(all_images, axis=1) , 'RGB')
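# For reference, the truncation trick mentioned in the comments above pulls each intermediate latent toward the average latent `w_avg` (a sketch; follow the handout for the exact formulation to use in the assignment):

```python
import numpy as np

def truncate(dlatents, w_avg, psi=0.7):
    # psi = 1 leaves the latents untouched; psi = 0 collapses them onto w_avg
    return w_avg + psi * (dlatents - w_avg)
```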
# + id="BI0vEoYdtbJk"
# generate several images from the sub-network
generate_from_subnetwork(...)
# + [markdown] id="3QOHpSmoGrZI"
# ### Step 2. Initialize the col_seeds, row_seeds and col_styles and generate a grid of image.
#
# A recommended example for your experiment is as follows:
#
# * col_seeds = [1, 2, 3, 4, 5]
# * row_seeds = [6]
# * col_styles = [1, 2, 3, 4, 5]
#
# and
#
# * col_seeds = [1, 2, 3, 4, 5]
# * row_seeds = [6]
# * col_styles = [8, 9, 10, 11, 12]
#
# You will then incorporate your code from generate from sub_network into the cell below.
#
# Experiment with the col_styles variable. Explain what col_styles does, for instance, roughly describe what these numbers corresponds to. Create a simple experiment to backup your argument. Include **at maximum two** sets of images that illustrates the effect of changing col_styles and your explanation. Include them as screen shots to minimize the size of the file.
#
# Make reference to the original StyleGAN or the StyleGAN2 paper by Karras et al. as needed https://arxiv.org/pdf/1812.04948.pdf https://arxiv.org/pdf/1912.04958.pdf
# + id="TM4QvAkuwCjW"
################################################################################
####################COMPLETE THE NEXT THREE LINES###############################
################################################################################
col_seeds = ...
row_seeds = ...
col_styles = ...
################################################################################
src_seeds = list(set(row_seeds + col_seeds))
# default arguments to Gs.components.synthesis.run, do not change
synthesis_kwargs = {
'output_transform': dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True),
'randomize_noise': False,
'minibatch_size': 4
}
################################################################################
########################## COMPLETE THE FOLLOWING ##############################
################################################################################
# Copy the #### WRITE YOUR CODE HERE #### portion from generate_from_subnetwork()
all_images = Gs.components.synthesis.run(...)
################################################################################
# (Do not change)
image_dict = {(seed, seed): image for seed, image in zip(src_seeds, list(all_images))}
w_dict = {seed: w for seed, w in zip(src_seeds, list(src_dlatents))}
# Generating Images (Do not Change)
for row_seed in row_seeds:
for col_seed in col_seeds:
w = w_dict[row_seed].copy()
w[col_styles] = w_dict[col_seed][col_styles]
image = Gs.components.synthesis.run(w[np.newaxis], **synthesis_kwargs)[0]
image_dict[(row_seed, col_seed)] = image
# Create an Image Grid (Do not Change)
def create_grid_images():
_N, _C, H, W = Gs.output_shape
canvas = PIL.Image.new('RGB', (W * (len(col_seeds) + 1), H * (len(row_seeds) + 1)), 'black')
for row_idx, row_seed in enumerate([None] + row_seeds):
for col_idx, col_seed in enumerate([None] + col_seeds):
if row_seed is None and col_seed is None:
continue
key = (row_seed, col_seed)
if row_seed is None:
key = (col_seed, col_seed)
if col_seed is None:
key = (row_seed, row_seed)
canvas.paste(PIL.Image.fromarray(image_dict[key], 'RGB'), (W * col_idx, H * row_idx))
return canvas
# The following code will create your image, save it as a png, and display the image
# Run the following code after you have set your row_seed, col_seed and col_style
image_grid = create_grid_images()
image_grid.save('image_grid.png')
im = Image.open("image_grid.png")
im
| assets/assignments/a4-stylegan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from bots import bots
# + [markdown] pycharm={"name": "#%% md\n"}
# # Solution with a while loop
# This is almost always the fastest solution.
# + pycharm={"name": "#%%\n"}
serie_nummer = 1
oven = None
while oven != 0:
bot = bots.check_bot(serie_nummer)
print(bot)
serie_nummer = bot.oven_nummer
oven = bot.oven_nummer
bot
# -
# # Solution with a for loop
# This is usually a slower solution. It is only faster when the bot you are looking for appears very early in the loop, which is unlikely if you are searching a very large group.
# + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"}
for bot in bots:
print(bot)
if bot.oven_nummer == 0:
break
bot
# -
#
| nl/3. The Furnace Bots/solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import matplotlib.pyplot as plt
import numpy as np
import sys
sys.path.append("../common")
import functions
# +
img = cv2.imread("imori.jpg")
result = functions.motion_filter(img)
plt.subplot(1,2,1)
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.subplot(1,2,2)
plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB))
plt.show()
# -
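# `functions.motion_filter` is defined in the shared `../common/functions.py` module, which is not shown here. A typical motion filter convolves the image with a normalized diagonal kernel, roughly like this (my sketch, not the module's actual code):

```python
import numpy as np

def motion_kernel(k=3):
    # averaging kernel along the main diagonal; sums to 1 so brightness is preserved
    return np.eye(k) / k
```

# With OpenCV, such a kernel would be applied via `cv2.filter2D(img, -1, motion_kernel())`.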
| Question_11_20/.ipynb_checkpoints/read_functions_module-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # User interactions
#
#
# .. warning:: The callbacks in this example will not work when viewing the documentation on readthedocs. The interactions require a live Python kernel, so you should try this example locally.
# ## Reacting to User Interaction in the Kernel
#
# You can monitor user interactions, e.g. when a user clicks on a node, and have the kernel updated with the last interaction. See [the cytoscape.js documentation](https://js.cytoscape.org/#events/user-input-device-events) for a full list of supported events you can listen for. We need to create a new graph before we can add interactivity:
import ipycytoscape
# +
import json
# load the graph dictionary
with open("geneData.json") as fi:
json_file = json.load(fi)
# load a style dictionary, which we'll use to update the graph's style
with open("geneStyle.json") as fi:
s = json.load(fi)
# Create the cytoscape graph widget
cyto = ipycytoscape.CytoscapeWidget()
cyto.graph.add_graph_from_json(json_file)
cyto.set_style(s)
# -
# Display the graph.
cyto
# ## Adding interactivity
#
# The client is already listening for user interactions and sending them to the kernel. We can listen for these events in the kernel and respond with a callback. To register one, we call `CytoscapeWidget.on` with the graph element type and event name we are listening for, followed by a callback that takes as its only argument the JSON dictionary representation of the node or edge the user interacted with. Let's create a simple example that prints the type of interaction and the node's data to an `ipywidgets.Output` widget:
# +
from ipywidgets import Output
from IPython.display import display
from pprint import pformat
out = Output()
def log_clicks(node):
with out:
print(f'clicked: {pformat(node)}')
def log_mouseovers(node):
with out:
print(f'mouseover: {pformat(node)}')
cyto.on('node', 'click', log_clicks)
cyto.on('node', 'mouseover', log_mouseovers)
# -
# Now display the graph again, this time with the (initially empty) log output below it, and try mousing over the nodes and clicking on them. You should see a stream of messages corresponding to your actions.
# call `display` to show both widgets in one output cell
display(cyto)
display(out)
| examples/interaction-example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Multiple Linear Regression
#
# Check out the [Source](https://github.com/Avik-Jain/100-Days-Of-ML-Code/blob/master/Code/Day3_Multiple_Linear_Regression.md)
# ## Step-1: Data Preprocessing
# +
import pandas as pd
import numpy as np
data = pd.read_csv('../Datasets/50_Startups.csv')
X = data.iloc[:, :-1].values
Y = data.iloc[:, 4].values
# +
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.compose import ColumnTransformer
label_encoder_x_1 = LabelEncoder()
X[: , 3] = label_encoder_x_1.fit_transform(X[:,3])
transformer = ColumnTransformer(
transformers=[
("OneHot", # Just a name
OneHotEncoder(), # The transformer class
[1] # The column(s) to be applied on.
)
],
    remainder='passthrough' # do not apply anything to the remaining columns
)
X = transformer.fit_transform(X.tolist())
X = X.astype('float64')
X = X[:, 1:]
# +
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 1/4, random_state = 0)
# -
# ## Step-2: Fitting Multiple Linear Regression to the training data
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, Y_train)
# ## Step-3: Predicting Result
# +
y_pred = regressor.predict(X_test)
print(y_pred)
| Codes/Multiple Linear Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import config
import data
import iseq_prof
import plot
iseq_prof.turn_cache_on()
# -
with open(f"{config.root_dir}/../ready_organisms800.txt", "r") as file:
organisms = list(sorted([organism.strip() for organism in file]))
df = data.create_per_profile_df(config.root_dir, organisms)
dfe = data.create_per_profile_df_evalue(config.root_dir, organisms)
plot.plot_auc_dist(df, config.Per.profile)
plot.plot_auc_scatter(df, config.Per.profile)
plot.plot_score_boxplot(dfe, config.Per.profile)
plot.plot_pr_scatter(dfe, config.Per.profile, size_max=100)
| baseline/Profile-wise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Failure ratio of master jobs
# Jobs initiated in one-hour intervals over the last 14 days
# %matplotlib inline
import sys
import os
import datetime, calendar
import azure.cosmos.cosmos_client as cosmos_client
import pandas as pd
import matplotlib.pyplot as plt
# +
client = cosmos_client.CosmosClient(url_connection=os.environ['endpoint'], auth={'masterKey': os.environ['masterKey']})
database_link = 'dbs/' + os.environ['databaseId']
collection_link = database_link + '/colls/{}'.format(os.environ['containerId'])
# +
from datetime import datetime, timedelta
now = datetime.now()
one_hour = timedelta(hours=1)
# Number of hours we want to go back (14 days = 336 hours, plus a small margin)
hours_in_past = 338
reversed_last_hours = list(
map(
lambda i: (now - (i * one_hour)).isoformat(),
range(hours_in_past + 1)
)
)
last_hours = reversed_last_hours[::-1]
first_hour, *tail_hours = last_hours
query = {
"query": """
SELECT c.job_name, c.build_id, c.current_build_current_result, c.stage_timestamp, c._ts
FROM c
WHERE c.current_build_scheduled_time > '{0}Z'
and c.branch_name = 'master'
""".format(first_hour)
}
query_results = list(client.QueryItems(collection_link, query))
df = pd.DataFrame(query_results)
# +
def hour_builds(hour_df):
return pd.DataFrame(
hour_df
.sort_values(by='stage_timestamp')
.drop_duplicates('job_name', keep='last')
)
def hour_df(frame, hour_number):
return frame[
(frame['stage_timestamp'] > last_hours[hour_number])
& (frame['stage_timestamp'] < last_hours[hour_number + 1])
]
builds = list(
map(
lambda hour_number: hour_builds(hour_df(df, hour_number)),
list(range(len(last_hours) - 1))
)
)
# +
def hour_stats(i, current_build):
    total_rows = len(current_build)
    successes = len(current_build.loc[current_build['current_build_current_result'] == 'SUCCESS'])
    failures = len(current_build.loc[current_build['current_build_current_result'] == 'FAILURE'])
    aborted = len(current_build.loc[current_build['current_build_current_result'] == 'ABORTED'])
    unknowns = total_rows - successes - failures - aborted
    total = successes + failures + aborted
    success_ratio = 100 if total == 0 else round(successes/total * 100)
    return ({
        "success ratio": success_ratio,
        "stats": [successes, failures, aborted, unknowns],
        "stats_labels": ['SUCCESS', 'FAILURE', 'ABORTED', 'UNKNOWN'],
        "date": { "from": last_hours[i], "to" : last_hours[i+1] }
    })
all_hours_stats = [hour_stats(i,x) for i,x in enumerate(builds)]
# +
colors = {
'aborted': '#FF8C11',
'failures': '#E55934',
'successes': '#7CCE77',
}
horizontal_axis_labels = list(
map(
lambda all_hours_stats: datetime.fromisoformat(all_hours_stats['date']['from']).strftime('%d/%m %H:%M:%S'),
all_hours_stats
)
)
data = {
'successes': [0 if sum(item['stats']) == 0 else item['stats'][0] for item in all_hours_stats],
'failures': [0 if sum(item['stats']) == 0 else item['stats'][1] for item in all_hours_stats],
'aborted': [0 if sum(item['stats']) == 0 else item['stats'][2] for item in all_hours_stats],
}
fig, ax = plt.subplots()
fig.set_size_inches(20, 4)
ax.set_title('Cumulative jobs statuses')
ax.margins(0)
p_aborted = plt.bar(list(range(hours_in_past)), data['aborted'], linewidth=2, color=colors['aborted'], align='edge', width=1)
p_failures = plt.bar(list(range(hours_in_past)), data['failures'], linewidth=2, color=colors['failures'], align='edge', width=1, bottom=[data['aborted'][j] for j in range(len(data['successes']))])
p_success = plt.bar(list(range(hours_in_past)), data['successes'], linewidth=2, color=colors['successes'], align='edge', width=1, bottom=[data['failures'][j] + data['aborted'][j] for j in range(len(data['successes']))])
plt.xticks(range(hours_in_past), horizontal_axis_labels, rotation='vertical')
plt.legend((p_success[0], p_failures[0], p_aborted[0]), ('successes', 'failures', 'aborted'))
every_nth = 12
for n, label in enumerate(ax.xaxis.get_ticklabels()):
if n % every_nth != 0:
label.set_visible(False)
plt.show()
# +
data_ratio = {
'successes': [0 if sum(item['stats']) == 0 else (item['stats'][0]/sum(item['stats'])) for item in all_hours_stats],
'failures': [0 if sum(item['stats']) == 0 else (item['stats'][1]/sum(item['stats'])) for item in all_hours_stats],
'aborted': [0 if sum(item['stats']) == 0 else (item['stats'][2]/sum(item['stats'])) for item in all_hours_stats],
}
fig, ax = plt.subplots()
fig.set_size_inches(20, 4)
ax.set_title('Proportional jobs statuses')
ax.margins(0)
p_aborted = plt.bar(list(range(hours_in_past)), data_ratio['aborted'], linewidth=2, color=colors['aborted'], align='edge', width=1)
p_failures = plt.bar(list(range(hours_in_past)), data_ratio['failures'], linewidth=2, color=colors['failures'], align='edge', width=1, bottom=[data_ratio['aborted'][j] for j in range(len(data_ratio['successes']))])
p_success = plt.bar(list(range(hours_in_past)), data_ratio['successes'], linewidth=2, color=colors['successes'], align='edge', width=1, bottom=[data_ratio['failures'][j] + data_ratio['aborted'][j] for j in range(len(data_ratio['successes']))])
plt.xticks(range(hours_in_past), horizontal_axis_labels, rotation='vertical')
plt.legend((p_success[0], p_failures[0], p_aborted[0]), ('successes', 'failures', 'aborted'))
every_nth = 12
for n, label in enumerate(ax.xaxis.get_ticklabels()):
if n % every_nth != 0:
label.set_visible(False)
plt.show()
# -
| work/CI/exp2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# %pylab inline
from __future__ import division
import numpy as np
import deltasigma as ds
from scipy.signal import lti, ss2zpk, lfilter
# # MASH 2-2 cascade
# ## Introduction
# We will simulate here a 2-2 MASH cascade.
#
# The example is taken from <NAME>. The package used here -- `python-deltasigma` -- is a port of <NAME>'s MATLAB Delta-Sigma toolbox, available at: http://www.mathworks.com/matlabcentral/fileexchange. The credit goes to him for all algorithms employed.
# ## Modulator description
# Each modulator in the cascade is described by the ABCD matrix:
ABCD1 = [[1., 0., 1., -1.],
[1., 1., 0., -2.],
[0., 1., 0., 0.]]
ABCD1 = np.array(ABCD1, dtype=np.float32)
# Each quantizer has 9 levels.
#
# We need to describe the modulator in terms of its ABCD matrix:
ABCD = [[1, 0, 0, 0, 1, -1, 0],
[1, 1, 0, 0, 0, -2, 0],
[0, 1, 1, 0, 0, 0, -1],
[0, 0, 1, 1, 0, 0, -2],
[0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0]]
ABCD = np.array(ABCD, dtype=np.float_)
# The modulator has two quantizers, each with 9 levels, i.e. slightly more than 3 bits. For this reason `nlev` is set to an array.
nlev = [9, 9]
# ## Transfer functions
#
# We can now calculate the transfer functions associated with the modulator.
#
# Notice there will be 6 of them, **4 NTFs**:
#
# 1. $NTF_{0,0}$: from the quantization noise injected by the 1st quantizer, to the output of the 1st DSM.
# 2. $NTF_{1,0}$: from the quantization noise injected by the 1st quantizer, to the output of the 2nd DSM.
# 3. $NTF_{1,1}$: from the quantization noise injected by the 2nd quantizer, to the output of the 2nd DSM.
# 4. $NTF_{0,1}$: In principle there is also a transfer function from the quantization noise injected by the 2nd quantizer to the output of the 1st DSM. But since the signal connections between the blocks are unidirectional, noise added downstream cannot affect signals upstream, so this transfer function is null.
#
# And **2 STFs**:
#
# 1. $STF_0$: From the signal input to the output of the 1st DSM.
# 2. $STF_1$: From the signal input to the output of the 2nd DSM.
k = [1., 1.]
ntfs, stfs = ds.calculateTF(ABCD, k)
# ### Noise transfer to the first output
print "NTF_00:\n"
print ds.pretty_lti(ntfs[0, 0])
print "NTF_01:\n"
print ds.pretty_lti(ntfs[0, 1])
# ### Noise transfer to the second output
print "NTF_10:\n"
print ds.pretty_lti(ntfs[1, 0])
print "NTF_11:\n"
print ds.pretty_lti(ntfs[1, 1])
# ### NTF pole-zero plots
figure(figsize=(20, 6))
subplot(131)
title("$NTF_{0,0}$")
ds.plotPZ(ntfs[0, 0], showlist=True)
subplot(132)
title("$NTF_{1,0}$")
ds.plotPZ(ntfs[1, 0], showlist=True)
subplot(133)
title("$NTF_{1,1}$")
ds.plotPZ(ntfs[1, 1], showlist=True)
# ## Signal transfer functions
print "STF_0:\n"
print ds.pretty_lti(stfs[0])
print "\n\nSTF_1:\n"
print ds.pretty_lti(stfs[1])
# ### STF pole-zero plots
figure(figsize=(13, 4))
subplot(121)
title("$STF_{0}$")
ds.plotPZ(stfs[0], showlist=True)
subplot(122)
title("$STF_{1}$")
ds.plotPZ(stfs[1], showlist=True)
# ## Compensation of the quantization noise
#
# Overall, the outputs $V_1$ and $V_2$ are given by:
#
# $$V_1 = u\,z^{-2}+(1 - z^{-1})^2\,e_1$$
#
# $$V_2 = u\, z^{-4} -2 (1 - 0.5z^{-1})\,z^{-3}\,e_1 +(1 - z^{-1})^2\,e_2 $$
#
# It can be shown that, by combining $V_1$ and $V_2$, multiplying each of them respectively by:
#
# $$M_1 = 2z^{-3} - z^{-4}$$
#
# and
#
# $$M_2 = (1 - z^{-1})^2 $$
#
# and then summing the results, we obtain an overall output $V_{TOT}$ with expression:
#
# $$V_{TOT} = M_1V_1 + M_2V_2 = u\,z^{-4} + (1 - z^{-1})^4e_2.$$
#
# The terms in $e_1$ do not appear in the above equation because they cancel out: the second modulator compensates for the quantization noise introduced by the first. Overall, as the above equation shows, the system provides fourth-order noise shaping by employing two second-order DS loops.
#
# We briefly verify that numerically:
# +
def zpk_multiply(a, b):
za, pa, ka = ds._utils._get_zpk(a)
zb, pb, kb = ds._utils._get_zpk(b)
pa = pa.tolist() if hasattr(pa, 'tolist') else pa
pb = pb.tolist() if hasattr(pb, 'tolist') else pb
za = za.tolist() if hasattr(za, 'tolist') else za
zb = zb.tolist() if hasattr(zb, 'tolist') else zb
return ds.cancelPZ((za+zb, pa+pb, ka*kb))
v1n = zpk_multiply(ntfs[0, 0], ([2, -1], [1, 0, 0, 0, 0]))
v2n = zpk_multiply(ntfs[1, 0], ([1, 1], [0, 0], 1))
ntf_eq = zpk_multiply(ntfs[1, 1], ntfs[1, 1])
# compute v1n/v2n and check that it is equal to -1
res = zpk_multiply(v1n, (ds._utils._get_zpk(v2n)[1], ds._utils._get_zpk(v2n)[0], 1./ds._utils._get_zpk(v2n)[2]))
print "The quantization noise cancels out: %s" % (int(ds.pretty_lti(res)) == -1)
# -
# The improvement in the NTF of the cascaded system may be better visualized by plotting the spectra:
figure(figsize=(16, 6))
subplot(121)
ds.figureMagic(name='$NTF_{0,0} = NTF_{1,1}$')
ds.PlotExampleSpectrum(ntfs[1, 1], M=31)
ylabel('dBFS/NBW')
subplot(122)
ds.figureMagic(name='$M_1NTF_{0,0}+M_2\left(NTF_{1,0} + NTF_{1,1}\\right) = NTF_{0,0}^2$')
ds.PlotExampleSpectrum(ntf_eq, M=31)
#ds.PlotExampleSpectrum(ntfs[0, 0], M=31)
tight_layout()
# ## Numerical simulation of the 2-2 cascade and SNR improvement
# Previously we simulated the NTF of a single modulator and the *expected* equivalent NTF when the two outputs are filtered and combined. Here we simulate the cascade of modulators with the ABCD matrix, computing their outputs $v_1$ and $v_2$, which are then numerically filtered and combined. Lastly, we check that the SNR improvement is as expected.
#
# Notice we needed to scale down the amplitude of the input sine since a sine wave at -3dBFS was pushing the modulator to instability.
# The filtering transfer functions $M_1$ and $M_2$ need to be expressed in terms of coefficients of $z^{-1}$ to be passed to `scipy`'s `lfilter`.
#
# The coefficients are:
filtM1 = [0., 0., 0., 2., -1.]
filtM2 = [1., -2., 1.]
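# As a quick sanity check with plain `numpy` (independent of `deltasigma`), we can
# verify that these coefficient lists cancel the $e_1$ terms: convolving coefficient
# lists multiplies the corresponding polynomials in $z^{-1}$.

```python
import numpy as np

# M1 and M2 as coefficient lists in powers of z^-1
filtM1 = [0., 0., 0., 2., -1.]   # M1 = 2 z^-3 - z^-4
filtM2 = [1., -2., 1.]           # M2 = (1 - z^-1)^2

# e1 reaches V1 through (1 - z^-1)^2 and V2 through -(2 z^-3 - z^-4),
# so the total e1 contribution to M1*V1 + M2*V2 is:
e1_total = np.polyadd(np.polymul(filtM1, [1., -2., 1.]),
                      np.polymul(filtM2, [0., 0., 0., -2., 1.]))
print(e1_total)  # all zeros: e1 cancels exactly
```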
# +
figure(figsize=(16, 6))
M = nlev[0] - 1
osr = 64
f0 = 0.
f1, f2 = ds.ds_f1f2(OSR=64, f0=0., complex_flag=False)
delta = 2
Amp = ds.undbv(-3) # Test tone amplitude, relative to full-scale.
f = 0.3 # will be adjusted to a bin
N = 2**12
f1_bin = np.round(f1*N)
f2_bin = np.round(f2*N)
fin = np.round(((1 - f)/2*f1 + (f + 1)/2*f2) * N)
# input sine
t = np.arange(0, N).reshape((1, -1))
u = Amp*M*np.cos((2*np.pi/N)*fin*t)
# simulate! don't forget to pass a list (or tuple or ndarray)
# as nlev value or the simulation will not be aware of the
# multiple quantizers
vx, _, xmax, y = ds.simulateDSM(u, ABCD, nlev=nlev)
# separate output #1 and output #2
v1 = vx[0, :]
v2 = vx[1, :]
# filter and combine
vf = lfilter(filtM1, [1.], v1) + lfilter(filtM2, [1.], v2)
# compute the spectra
window = ds.ds_hann(N)
NBW = 1.5/N
spec0 = np.fft.fft(vf*window)/(M*N/2)/ds.undbv(-6)
spec1 = np.fft.fft(v1*window)/(M*N/2)/ds.undbv(-6)
spec2 = np.fft.fft(v2*window)/(M*N/2)/ds.undbv(-6)
freq = np.linspace(0, 0.5, N/2 + 1)
plt.plot(freq, ds.dbv(spec0[:N/2 + 1]), 'c', linewidth=1, label='VF')
plt.plot(freq, ds.dbv(spec2[:N/2 + 1]), '#fb8b00', linewidth=1, label='V2')
# smooth, calculate the theoretical response and the SNR for VF
spec0_smoothed = ds.circ_smooth(np.abs(spec0)**2., 16)
plt.plot(freq, ds.dbp(spec0_smoothed[:N/2 + 1]), 'b', linewidth=3)
Snn0 = np.abs(ds.evalTF(ntf_eq, np.exp(2j*np.pi*freq)))**2 * 2/12*(delta/M)**2
plt.plot(freq, ds.dbp(Snn0*NBW), 'm', linewidth=1)
snr0 = ds.calculateSNR(spec0[f1_bin:f2_bin + 1], fin - f1_bin)
msg = 'VF:\nSQNR = %.1fdB\n @ A = %.1fdBFS & osr = %.0f\n' % \
(snr0, ds.dbv(spec0[fin]), osr)
plt.text(f0 + 1 / osr, - 15, msg, horizontalalignment='left',
verticalalignment='center')
# smooth, calculate the theoretical response and the SNR for V1
spec1_smoothed = ds.circ_smooth(np.abs(spec1)**2., 16)
plt.plot(freq, ds.dbp(spec1_smoothed[:N/2 + 1]), '#d40000', linewidth=3)
Snn1 = np.abs(ds.evalTF(ntfs[0, 0], np.exp(2j*np.pi*freq)))**2 * 2/12*(delta/M)**2
plt.plot(freq, ds.dbp(Snn1*NBW), 'm', linewidth=1)
snr1 = ds.calculateSNR(spec1[f1_bin:f2_bin + 1], fin - f1_bin)
msg = 'V1:\nSQNR = %.1fdB\n @ A = %.1fdBFS & osr = %.0f\n' % \
(snr1, ds.dbv(spec1[fin]), osr)
plt.text(f0 + 1/osr, - 15-30, msg, horizontalalignment='left',
verticalalignment='center')
plt.text(0.5, - 135, 'NBW = %.1e ' % NBW, horizontalalignment='right',
verticalalignment='bottom')
ds.figureMagic((0, 0.5), 1./16, None, (-160, 0), 10, None)
legend()
title("Spectra"); xlabel("Normalized frequency $f \\rightarrow 1$");ylabel("dBFS/NBW");
# -
print "Overall the SNR improved by %g (!) at OSR=%d." % (snr0-snr1, osr)
# Notice that, as often happens, it is not immediately apparent by eye that the combined signal $v_f$ has a better SNR than $v_1$ (or $v_2$).
#
# In fact, consider the following plot of the signals from which the above spectra and SNRs were calculated:
figure(figsize=(14, 6))
plot(vf[100:800], label='$v_f$')
plot(v1[100:800], label='$v_1$')
plot(u[:, 100:800].T, 'r', label='$u$')
xlabel('sample #'); legend();
# ## Conclusions
#
# This notebook showed how it is possible, in the case of a Noise Shaping Multi-stage (MASH) cascade, to:
# * calculate the signal and noise transfer functions,
# * simulate the topology,
# * filter and combine the outputs and
# * evaluate the SNR improvement,
#
# with `python-deltasigma`.
# +
# #%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
# %load_ext version_information
# %reload_ext version_information
# %version_information numpy, scipy, matplotlib, deltasigma
# -
| examples/MASH_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Working locally with Jupyter,
# so the dataset path is local as well.
source_url = 'https://www.kaggle.com/jsphyg/weather-dataset-rattle-package'
import pandas as pd
import numpy as np
import pandas_profiling
import eli5
import shap
import category_encoders as ce
import matplotlib.pyplot as plt
from eli5.sklearn import PermutationImportance
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from pdpbox.pdp import pdp_isolate, pdp_plot
def load_in():
    weather = pd.read_csv('../../datasets/weatherAUS.csv') # Local
    # weather = pd.read_csv('weatherAUS.csv') # Colab
    return weather
def cel_to_far(x):
    '''Small function to convert Celsius to Fahrenheit'''
    return float(x * 1.8 + 32)
def null_dropper():
    # collect columns with too many nulls first, then drop them in one pass
    dropcols = [col for col in weather.columns
                if weather[col].isnull().sum() >= 10000]
    weather.drop(columns=dropcols, inplace=True)
def dataset_imputer(dataset, strategy):
imputer = SimpleImputer(strategy=strategy)
imputed_df = imputer.fit_transform(dataset)
return pd.DataFrame(imputed_df, columns=weather.columns)
def class_balancer():
yes_rain = []
for amount in rain['RISK_MM']:
if amount >= 0.2:
yes_rain.append('Yes')
else:
yes_rain.append('No')
rain['ExpectRain'] = yes_rain
return rain
def indexer():
rain['indexed_date'] = pd.to_datetime(rain['Date'],
infer_datetime_format=True)
rain.set_index('indexed_date', inplace=True)
rain.sort_index(ascending=True, axis=0, inplace=True)
return rain
def dt_breakout():
rain['Dates'] = pd.to_datetime(rain['Date'],
infer_datetime_format=True)
rain['Year'] = rain['Dates'].dt.year
rain['Month'] = rain['Dates'].dt.month
rain['Day'] = rain['Dates'].dt.day
rain.drop('Date', axis=1, inplace=True)
return rain
def get_season():
seasons = []
for number in rain['Month']:
if (number == 12) or (number == 1) or (number == 2):
seasons.append('Summer')
elif (number == 3) or (number == 4) or (number == 5):
seasons.append('Fall')
elif (number == 6) or (number ==7) or (number == 8):
seasons.append('Winter')
else:
seasons.append('Spring')
rain['Season'] = seasons
return rain
def translate():
rain['MinTemp'] = rain['MinTemp'].apply(cel_to_far)
rain['MaxTemp'] = rain['MaxTemp'].apply(cel_to_far)
rain['Temp9am'] = rain['Temp9am'].apply(cel_to_far)
rain['Temp3pm'] = rain['Temp3pm'].apply(cel_to_far)
return rain
def data_split():
train = pd.DataFrame(rain[:113755], columns=rain.columns)
test = pd.DataFrame(rain[113755:], columns=rain.columns)
return train, test
def encoding():
encoder = ce.OrdinalEncoder()
train_enc = encoder.fit_transform(train)
test_enc = encoder.transform(test)
return train_enc, test_enc
# -
#
# Load in data and wrangle, encode and split dataset.
weather = load_in()
original = weather.copy()
null_dropper()
rain = dataset_imputer(weather, 'most_frequent')
old_base = rain['RainTomorrow'].value_counts(normalize=True)
rain = class_balancer() # Rebalances classes for target
rain = indexer() # Sets index to datetime
rain = dt_breakout() # Extracts Day, Month, Year as new columns
rain = get_season() # Classifies observations into Seasons
rain = translate()  # Translates Celsius to Fahrenheit
train, test = data_split()
train_enc, test_enc = encoding()
#
# +
# New Target Classification Baseline:
new_base = rain['ExpectRain'].value_counts(normalize=True)
print(f"Old Baseline for Classification:\n {old_base}\n\n")
print(f"New Baseline for Classification:\n {new_base}")
# -
#
train, val = train_test_split(train_enc, train_size=.8, random_state=42)
rain.columns
# +
target = 'ExpectRain'
features = ['Location', 'Humidity3pm', 'Temp3pm', 'Year', 'Month', 'Day', 'Season', 'RainToday']
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test_enc[features]
y_test = test_enc[target]
# +
# Decision Tree Model:
dtc_model = DecisionTreeClassifier(max_depth=7, min_samples_leaf=2, random_state=24)
dtc_model.fit(X_train, y_train)
# +
# Decision Tree Accuracy Score:
dtc_val_pred = dtc_model.predict(X_val)
accuracy_score(dtc_val_pred, y_val)
# +
# Random Forest Classifier:
rfc_model = RandomForestClassifier(n_estimators=600, min_samples_leaf=2,
max_depth=None, criterion='gini', n_jobs=-1, random_state=42)
rfc_model.fit(X_train, y_train)
# +
# Random Forest Accuracy Score:
rfc_val_pred = rfc_model.predict(X_val)
accuracy_score(rfc_val_pred, y_val)
# -
# Logistic Regression Model:
lr_model = LogisticRegression(solver='saga', max_iter=1200, warm_start=True, n_jobs=-1)
lr_model.fit(X_train, y_train)
# +
# Logistic Regression Accuracy Score:
lr_val_pred = lr_model.predict(X_val)
accuracy_score(lr_val_pred, y_val)
# -
#
# +
# Best model Prediction on Test data:
rfc_test_pred = rfc_model.predict(X_test)
accuracy_score(rfc_test_pred, y_test)
# +
# Decision Tree Prediction on Test data:
dtc_test_pred = dtc_model.predict(X_test)
accuracy_score(dtc_test_pred, y_test)
# -
#
# Finding Permutation Importances for RandomForest model:
permuter = PermutationImportance(
rfc_model,
scoring='accuracy',
n_iter=5,
random_state=42
)
permuter.fit(X_val, y_val)
# Create a Series with the Feature names and importances, sorted:
feature_names = X_val.columns.tolist()
pd.Series(permuter.feature_importances_, feature_names).sort_values()
# Show the weights of the Permutation Importances:
eli5.show_weights(
permuter,
top=None,
feature_names=feature_names
)
#
# Checking individual predictions for Model explanation:
row = X_test.iloc[[1664]]
row
# True Class for row
y_test.iloc[[1664]]
# Model's Prediction for row:
rfc_model.predict(row)
probability = rfc_model.predict_proba(row)
probability
# Model Explanation for a small subset of Data
subset = X_val[22600:]
subset
#
# +
# Create the Tree Explainer and extract the Shapley Values
explainer = shap.TreeExplainer(rfc_model)
shap_values = explainer.shap_values(row, check_additivity=False)
shap_values[1]
# -
# Plot individual Shapley values for a single observation:
shap.initjs()
shap.force_plot(
base_value=explainer.expected_value[1],
shap_values=shap_values[1],
features=row.columns,
link='identity'
)
# +
# Plot summary for entire subset of data:
explainer2 = shap.TreeExplainer(rfc_model)
shap_values2 = explainer2.shap_values(subset, check_additivity=False)
shap_values2[1]
shap.initjs()
shap.summary_plot(shap_values2[1], subset);
# -
#
#
#
#
# +
# PD Plot for Single Feature
plt.rcParams['figure.dpi'] = 150
feature = 'Temp3pm'
isolated = pdp_isolate(
model=rfc_model,
dataset=X_val,
model_features=X_val.columns,
feature=feature,
num_grid_points=50
)
# -
pdp_plot(isolated,
feature_name=feature,
plot_lines=True,
frac_to_plot=0.01);
| Unit-2-Build-Week/Paul_Teeter_DSPT7_Unit2_BuildWeek.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # AxiScan Example
# Here we show an example of the AxiScan analysis pipeline.
# ## Import Code and Setup Plotting Defaults
# +
# Import basics
import numpy as np
import scipy.stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import pymultinest
import corner
# Plotting Settings
mpl.rcParams['figure.figsize'] = 20, 14 # default figure size
# %matplotlib inline
# %load_ext autoreload
# %autoreload 2
# Import the necessary MC generation and data analysis modules
from AxiScan import mc_gen # MC Generation
from AxiScan import scan # Data Analysis
import analysis_utilities as au # Convenient Utility Methods
# -
# # Step 1: Generate Monte Carlo
# ## Set the Parameters
# First we generate Monte Carlo data for a scenario in which the majority of the dark matter is contained in a bulk halo following Standard Halo Model parameters, with a subdominant fraction contained in the Sagittarius Stream. Although we have chosen a large signal strength to illustrate the analysis, this can easily be adjusted.
#
# This is accomplished by seeding an instance (`generator`) of the Generator class in `mc_gen` with arguments detailed below. Data on the $i^\text{th}$ day of data collection is generated by calling `generator.makePSD(i)`. The arguments for the Generator class are
#
# | Argument | Purpose |
# | ------------- | ------------- |
# | ma | ma/2pi is the axion mass [Hz] |
# | A | Proxy for the axion-photon coupling, $A \propto g_{a \gamma \gamma}^2$ |
# | v0_Halo | Velocity dispersion of the bulk halo [km/s] |
# | vDotMag_Halo | Speed of the sun with respect to the bulk halo [km/s]|
# | alpha_Halo | Bulk halo annual modulation scale, $\alpha \in [0, 1]$|
# | tbar_Halo     | Date parameter for the bulk halo annual modulation [days] |
# | v0_Sub        | Velocity dispersion of the substructure halo [km/s] |
# | vDotMag_Sub   | Speed of the sun with respect to the substructure halo [km/s]|
# | alpha_Sub     | Substructure halo annual modulation scale, $\alpha \in [0, 1]$|
# | tbar_Sub      | Date parameter for the substructure halo annual modulation [days] |
# | frac_Sub | Fraction of the axion DM in the substructure |
# | PSDback | Mean expected background Power Spectral Density |
# | freqs | Array of frequencies to calculate the PSD at [Hz] |
#
# The code generates data in the form of Power Spectral Densities (PSD).
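# The `Generator` internals are not shown here; as a rough, purely illustrative
# sketch of the convention (hypothetical numbers, not the AxiScan internals),
# a periodogram-style PSD estimate of a sampled time series looks like:

```python
import numpy as np

# Illustrative sketch only: one-sided PSD estimate of white noise,
# using the convention PSD = |FFT|^2 / (fs * N).
fs = 1.0e3                      # sampling frequency [Hz] (hypothetical)
N = 4096                        # number of samples
x = np.random.randn(N)          # unit-variance white noise
psd = np.abs(np.fft.rfft(x))**2 / (fs * N)
freqs_est = np.fft.rfftfreq(N, d=1.0/fs)
# the mean PSD level of unit-variance noise is roughly 1/fs (here ~1e-3)
```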
# +
########################
### Seed Values ###
########################
c = 299792.458  # speed of light [km/s]
# Physics Parameters
ma = 5.5e5*2*np.pi
A = 10000.0
PSDback= 163539.36
# Bulk SHM Parameters
v0_Halo = 220.0
vDotMag_Halo = 232.36
alpha_Halo = .49
tbar_Halo = 72.40
# Sagittarius Stream Parameters
v0_Sub = 10.0
vDotMag_Sub = 418.815
alpha_Sub = .65903
tbar_Sub = 279.51
frac_Sub = 0.0
# Data Output Size
freqs = np.linspace(.99999, 1.00001, 10000)*5.5e5
PSD_Data = np.zeros((365, len(freqs)))
collectionTime = 1/(freqs[1] - freqs[0])
stacked_per_day = 86400 / collectionTime
num_stacked = 365*stacked_per_day
# Instantiate the data generator
generator = mc_gen.Generator(ma, A, PSDback, v0_Halo, vDotMag_Halo, alpha_Halo, tbar_Halo,
v0_Sub, vDotMag_Sub, alpha_Sub, tbar_Sub, frac_Sub, freqs)
# -
# ## Generate the Data
# Here we fill the `PSD_Data` array with each day of collected data. Data is generated assuming that the entire 24 hours is used for data collection. If the collection time $T$, as inferred from the user-defined frequency resolution in `freqs`, is less than 24 hours, then the data generated for each day is constructed as $24$ hours / $T$ stacked copies of data collections of duration $T$.
#
# We then stack data over the course of the year. The data stacked over the duration of a year is used for simple scans for an axion signal, while the day-by-day data may be used for more sophisticated scans and parameter estimation.
# +
# Fill the PSD_Data array
for i in range(365):
PSD_Data[i] = np.array(generator.makePSD(i))
# Average over the days in the PSD_Data array for the simple scan
Stacked_PSD_Data = np.mean(PSD_Data, axis = 0)
plt.plot(freqs, Stacked_PSD_Data)
plt.xlabel('Frequency [Hz]')
plt.ylabel('PSD')
plt.show()
# -
# # Step 2: The Simple Scan
# ## Calculating the Test Statistic
# Next we analyze the MC data when stacked over the duration of a year. In this analysis, we only scan over values of A and ma, and we will assume the Axion DM to follow a bulk Standard Halo Model profile with no substructure present. These steps can be repeated on real data.
#
# The analysis is performed using `scan.TS_Scan`, which has the following arguments:
#
# | Argument | Purpose |
# | ------------- | ------------- |
# | Stacked_PSD_Data | Array of PSD data associated with the measurements when stacked over the duration of a year|
# | freqs | Array of frequencies associated with the data points [Hz] |
# |mass_TestSet | Range of axion masses scanned for in the analysis|
# | A_TestSet| Range of values of the A parameter scanned for at each mass|
# | PSDback | Mean expected background Power Spectral Density |
# | v0_Exp | Expected value of the SHM velocity dispersion [km/s]|
# | vObs_Exp | Expected value of the sun's speed with respect to the bulk SHM Halo [km/s]|
# | num_stacked | Total number of collections of duration T contained in the stacked data |
#
# The output of `scan.TS_Scan` is `TS_Array`, the value of the test statistic TS(`ma`, `A`) at each value of `ma` and `A` in `mass_TestSet` and `A_TestSet`.
# ## Defining the Scan Parameters
# Since we expect to be searching for a bulk SHM distribution, we take SHM parameters `v0_Exp = 220.0` and `vObs_Exp = 232.36`.
#
# The set of masses in `mass_TestSet` is taken to be points on a log-spaced grid beginning at the mass corresponding to the minimum frequency for which we have data, with a spacing factor of `1 + v0_Exp**2 /(2 c**2)`.
#
# The set of `A` in `A_TestSet` is determined by the value of `A` that an injected signal would need in order to produce a 5$\sigma$ detection. At a given mass-point, this value of `A` can be computed using [57] and [60] of 1711.10489. To ensure a sufficiently large range, we compute the maximum of this `A` over all masses, denoting it `A_max`. Then at each mass-point, we scan over values from `-A_max` to `5 * A_max`.
#
#
# +
# Expectation Parameters
v0_Exp = 220.0
vObs_Exp = 232.36
# Construct the range of masses to scan over
N_testMass = int(np.log(freqs[-50] / freqs[0]) / np.log(1. + v0_Exp**2. / 2. / c**2.))
mass_TestSet = (freqs[0]*(1. + v0_Exp**2. / 2. / c**2.)**np.arange(N_testMass) * 2*np.pi)
# Construct the range of signal strengths to scan over
Sigma_A = au.getSigma_A(mass_TestSet, 365, 86400, v0_Exp, vObs_Exp, PSDback)
N_indMasses = 4 * c**2 / (3 * v0_Exp**2) * np.log(np.amax(freqs)/np.amin(freqs))
TS_Thresh = scipy.stats.norm.ppf(1 - (1-scipy.stats.norm.cdf(5))/N_indMasses)**2
detection_Threshold = np.sqrt(TS_Thresh)*Sigma_A
A_TestSet = np.linspace(-1.0, 5.0, 101)*np.amax(detection_Threshold)
# Run the Scan
TS_Array = np.array(scan.TS_Scan(Stacked_PSD_Data, freqs, mass_TestSet, A_TestSet, PSDback, v0_Exp, vObs_Exp, num_stacked))
# -
# # Extracting Scan Values and Limits
# Now that we have obtained `TS_Array`, we can extract our maximum-likelihood estimates and the 95% limits of `A` at each `ma`.
#
# At a given `ma`, the maximum-likelihood estimate of A is given by
# \begin{equation}
# \hat A = \text{argmax}_{A} TS(m_a, A)
# \end{equation}
#
# At a given `ma`, the 95% limit on `A` is given by solving
# \begin{equation}
# TS(m_a, \hat A) - TS(m_a, A_{95\%}) = 2.71, \qquad A_{95\%} \geq \hat A
# \end{equation}
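# On a toy parabolic TS curve (purely illustrative values, not the notebook's scan output), the 2.71 shift works out as follows:

```python
import numpy as np

# Toy test statistic: parabola peaked at A_hat = 2.0 with sigma = 0.5
A = np.linspace(-1.0, 6.0, 701)
TS = 25.0 - (A - 2.0) ** 2 / 0.5 ** 2

A_hat = A[np.argmax(TS)]  # maximum-likelihood estimate

# One-sided 95% limit: the A >= A_hat where TS has dropped 2.71 below the max
mask = A >= A_hat
A_95 = A[mask][np.argmin(np.abs(TS[mask] - TS.max() + 2.706))]
# A_95 - A_hat is approximately 0.5 * sqrt(2.706) ~ 0.82
```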
# +
A_Limits = np.zeros(mass_TestSet.shape) # The expected 95% constraint
A_Scans = np.zeros(mass_TestSet.shape) # The TS maximizing value
for i in range(len(A_Limits)):
# Naive TS maximizing value
A_Scans[i] = A_TestSet[np.argmax(TS_Array[i])]
# Extracting the 95% constraint by a shift in the TS of 2.71
temp = np.copy(TS_Array[i])
temp[0:np.nanargmax(temp)] = float('nan')
temp -= np.nanmax(temp)
A_Limits[i] = A_TestSet[np.nanargmin(np.abs(temp+2.706))]
A_Limits = np.maximum(A_Limits, au.zScore(-1)*Sigma_A)
A_Scans = np.maximum(0, A_Scans)
# +
plt.subplot(2, 2, 1)
plt.title('Limits', size = 20)
plt.plot(mass_TestSet, A_Limits)
plt.fill_between(mass_TestSet, au.zScore(-1)*Sigma_A, au.zScore(2)*Sigma_A, color = 'yellow')
plt.fill_between(mass_TestSet, au.zScore(-1)*Sigma_A, au.zScore(1)*Sigma_A, color = 'limegreen')
plt.axvline(x=ma, ls = '--', c = 'black')
plt.subplot(2, 2, 2)
plt.title('MLE Values', size = 20)
plt.plot(mass_TestSet, A_Scans)
plt.plot(mass_TestSet, detection_Threshold)
plt.axvline(x=ma, ls = '--', c = 'black')
plt.show()
# -
# Above, we plot the results of the simple scan for an axion signal. In the left panel, we plot the resulting 95% constraints (solid black) against the expected 95% constraints (dashed black) and 1$\sigma$ (green) and 2$\sigma$ (yellow) containment determined by the Asimov dataset according to [56] of 1711.10489. In the right panel, we plot at each mass-point the MLE of `A` (solid black) and the value of A at the threshold of a 5$\sigma$ detection (dashed black).
# # Step 3: The MultiNest Scan
# Now that we have discovered a well-localized axion signal, we proceed to perform a MultiNest Scan over the data stacked at the level of a day. This will allow us to perform more detailed analysis of the signal parameters. For example, a MultiNest scan could be used to gain a more accurate estimate of `A` or `ma`, to study the annual modulation parameters, or to search for substructure. With sufficient computational resources, these could all be accomplished simultaneously.
#
# In the example below, we will perform a very basic MultiNest scan to gain a more accurate estimate of the `A` parameter under the assumption that all other signal parameters are known with perfect accuracy.
# +
# Basic Settings
nlive = 500
chains_dir = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/'
pymultinest_options = {'importance_nested_sampling': False,
'resume': False, 'verbose': True,
'sampling_efficiency': 'model',
'init_MPI': False, 'evidence_tolerance': 0.5,
'const_efficiency_mode': False}
# -
# ## A-Parameter Scan
# +
# Parameter to Scan Over
A_Prior = [.5*np.amax(A_Scans), 10*np.amax(A_Scans)]
# Formatting the prior cube as required by MultiNest
theta_min = [A_Prior[0]]
theta_max = [A_Prior[1]]
theta_interval = list(np.array(theta_max) - np.array(theta_min))
n_params = len(theta_min) # number of parameters to fit for
def prior_cube(cube, ndim=1, nparams=1):
""" Cube of priors - in the format required by MultiNest
"""
for i in range(ndim):
cube[i] = cube[i] * theta_interval[i] + theta_min[i]
return cube
# Defining the likelihood function in terms of fixed and floated parameters
def LL_Multinest(theta, ndim = 1, nparams = 1):
return scan.SHM_AnnualMod_ll(freqs, PSD_Data, ma, theta[0], v0_Halo, vDotMag_Halo,
alpha_Halo, tbar_Halo, PSDback, stacked_per_day)
# Run the MultiNest Scan
pymultinest.run(LL_Multinest, prior_cube, n_params,
outputfiles_basename=chains_dir,
n_live_points=nlive, **pymultinest_options)
# Plot the posteriors found by the MultiNest Scan
chain_file = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/post_equal_weights.dat'
chain = np.array(np.loadtxt(chain_file))[:, :-1]
# Now make a triangle plot using corner
corner.corner(chain, smooth=1.5,
labels = ['$A$'], truths = [A],
smooth1d=1, quantiles=[0.16, 0.5, 0.84], show_titles=True,
title_fmt='.2f', title_args={'fontsize': 14},
range=[1 for _ in range(chain.shape[1])],
plot_datapoints=False, verbose=False)
plt.show()
# -
# # Alpha_Halo Scan
# +
# Parameter to Scan Over
alpha_Prior = [0.0, 1.0]
# Formatting the prior cube as required by MultiNest
theta_min = [alpha_Prior[0]]
theta_max = [alpha_Prior[1]]
theta_interval = list(np.array(theta_max) - np.array(theta_min))
n_params = len(theta_min) # number of parameters to fit for
def prior_cube(cube, ndim=1, nparams=1):
""" Cube of priors - in the format required by MultiNest
"""
for i in range(ndim):
cube[i] = cube[i] * theta_interval[i] + theta_min[i]
return cube
# Defining the likelihood function in terms of fixed and floated parameters
def LL_Multinest(theta, ndim = 1, nparams = 1):
return scan.SHM_AnnualMod_ll(freqs, PSD_Data, ma, A, v0_Halo, vDotMag_Halo,
theta[0], tbar_Halo, PSDback, stacked_per_day)
# Run the MultiNest Scan
pymultinest.run(LL_Multinest, prior_cube, n_params,
outputfiles_basename=chains_dir,
n_live_points=nlive, **pymultinest_options)
# Plot the posteriors found by the MultiNest Scan
chain_file = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/post_equal_weights.dat'
chain = np.array(np.loadtxt(chain_file))[:, :-1]
# Now make a triangle plot using corner
corner.corner(chain, smooth=1.5,
labels = ['alpha_Halo'], truths = [alpha_Halo],
smooth1d=1, quantiles=[0.16, 0.5, 0.84], show_titles=True,
title_fmt='.2f', title_args={'fontsize': 14},
range=[1 for _ in range(chain.shape[1])],
plot_datapoints=False, verbose=False)
plt.show()
# -
# # tbar_Halo Scan
# +
# Parameter to Scan Over
tbar_Prior = [0, 365.0]
# Formatting the prior cube as required by MultiNest
theta_min = [tbar_Prior[0]]
theta_max = [tbar_Prior[1]]
theta_interval = list(np.array(theta_max) - np.array(theta_min))
n_params = len(theta_min) # number of parameters to fit for
def prior_cube(cube, ndim=1, nparams=1):
""" Cube of priors - in the format required by MultiNest
"""
for i in range(ndim):
cube[i] = cube[i] * theta_interval[i] + theta_min[i]
return cube
# Defining the likelihood function in terms of fixed and floated parameters
def LL_Multinest(theta, ndim = 1, nparams = 1):
return scan.SHM_AnnualMod_ll(freqs, PSD_Data, ma, A, v0_Halo, vDotMag_Halo,
alpha_Halo, theta[0], PSDback, stacked_per_day)
# Run the MultiNest Scan
pymultinest.run(LL_Multinest, prior_cube, n_params,
outputfiles_basename=chains_dir,
n_live_points=nlive, **pymultinest_options)
# Plot the posteriors found by the MultiNest Scan
chain_file = '/nfs/turbo/bsafdi/fosterjw/github/AxiScan/examples/chains/post_equal_weights.dat'
chain = np.array(np.loadtxt(chain_file))[:, :-1]
# Now make a triangle plot using corner
corner.corner(chain, smooth=1.5,
labels = ['tbar_Halo'], truths = [tbar_Halo],
smooth1d=1, quantiles=[0.16, 0.5, 0.84], show_titles=True,
title_fmt='.2f', title_args={'fontsize': 14},
range=[1 for _ in range(chain.shape[1])],
plot_datapoints=False, verbose=False)
plt.show()
# -
| examples/Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import random
import string
import matplotlib.pyplot as plt
keywords = pd.DataFrame(np.random.randint(low=0, high=10, size=(100, 2)),
                        columns=['Keywords', 'Timestamp'])
keywords = keywords.sort_values(['Timestamp'], ascending=[1])
keywords.head()
# +
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(100, 10)),
                  columns=['time.start', 'time.end', 'frame.start', 'frame.end',
                           'transcript', 'frames.before', 'frames.after',
                           'angry.change', 'surprise.change', 'happy.change'])
#If you want random keywords
#df["keywords"] = [''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10)) for x in range(100)]
#Randomly sample from finite list of keywords
possible_keywords = ['a', 'b', 'c', 'd', 'e']
df["keywords"] = [random.choice(possible_keywords) for x in range(100)]
# -
df = df.sort_values(['time.start'], ascending=[1])
df.head()
#This computes the correlation matrix
df_keywords_dummies = pd.get_dummies(df)
corr_matrix = df_keywords_dummies.iloc[:,7:].corr()
corr_matrix
#This outputs the top keywords for changing the three emotions
top_keywords = pd.DataFrame(index=['angry','surprise','happy'],columns=["keyword","correlation"])
keyword_index = ['angry','surprise','happy']
for x in range(0,3):
top_keywords.loc[keyword_index[x],"keyword"] = corr_matrix.iloc[x,3:].idxmax()
top_keywords.loc[keyword_index[x],"correlation"] = np.max(corr_matrix.iloc[x,3:],axis=0)
top_keywords
# +
# Plots a radar chart.
from math import pi
import matplotlib.pyplot as plt
def make_plot(values):
# Set data
cat = ['Happy', 'Sad', 'Angry']
N = len(cat)
x_as = [n / float(N) * 2 * pi for n in range(N)]
# Because our chart will be circular we need to append a copy of the first
# value of each list at the end of each list with data
values += values[:1]
x_as += x_as[:1]
# Set color of axes
plt.rc('axes', linewidth=0.5, edgecolor="#888888")
# Create polar plot
ax = plt.subplot(111, polar=True)
# Set clockwise rotation starting from the top:
ax.set_theta_offset(pi / 2)
ax.set_theta_direction(-1)
# Set position of y-labels
ax.set_rlabel_position(0)
# Set color and linestyle of grid
ax.xaxis.grid(True, color="#888888", linestyle='solid', linewidth=0.5)
ax.yaxis.grid(True, color="#888888", linestyle='solid', linewidth=0.5)
# Set number of radial axes and remove labels
plt.xticks(x_as[:-1], [])
# Set yticks
plt.yticks([20, 40, 60, 80, 100], ["20", "40", "60", "80", "100"])
# Plot data
ax.plot(x_as, values, linewidth=0, linestyle='solid', zorder=3)
# Fill area
ax.fill(x_as, values, 'b', alpha=0.3)
# Set axes limits
plt.ylim(0, 100)
# Draw ytick labels to make sure they fit properly
for i in range(N):
angle_rad = i / float(N) * 2 * pi
if angle_rad == 0:
ha, distance_ax = "center", 10
elif 0 < angle_rad < pi:
ha, distance_ax = "left", 1
elif angle_rad == pi:
ha, distance_ax = "center", 1
else:
ha, distance_ax = "right", 1
ax.text(angle_rad, 100 + distance_ax, cat[i], size=10, horizontalalignment=ha, verticalalignment="center")
# Show polar plot
plt.show()
#modified code from https://stackoverflow.com/questions/42227409/tutorial-for-python-radar-chart-plot
# -
#Example of what each radar plot will look like
make_plot(list(top_keywords["correlation"] * 1000))
| ipynb/emotion_keywords_correlation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
df = pd.read_csv(r'D:\Downloads\spam.csv')  # raw string avoids backslash-escape issues on Windows
df.head()
#separating the data on the basis of ham & spam
df.groupby('Category').describe()
#converting 'Category' into a numeric label (ML models can't work with raw text)
df['spam'] = df['Category'].apply(lambda x: 1 if x=='spam' else 0)
df.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df.Message,df.spam, test_size = 0.25)
# +
#we also need to convert 'Message' into numeric form; this is done with the count vectorisation technique
from sklearn.feature_extraction.text import CountVectorizer
v = CountVectorizer()
X_train_count = v.fit_transform(X_train.values)
X_train_count.toarray()[:3]
# -
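# Conceptually, count vectorisation maps each message to a vector of token counts over a shared vocabulary. A minimal pure-Python sketch of the idea (toy messages, not the real dataset — `CountVectorizer` does this at scale with sparse matrices):

```python
from collections import Counter

def count_vectorize(messages):
    """Toy bag-of-words: map each message to a vector of token counts
    over the shared vocabulary built from all messages."""
    vocab = sorted({tok for m in messages for tok in m.lower().split()})
    vectors = []
    for m in messages:
        counts = Counter(m.lower().split())
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors

vocab, vecs = count_vectorize(["free prize now", "call me now"])
# vocab is the sorted token list; each row of vecs counts those tokens
```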
from sklearn.naive_bayes import MultinomialNB
model = MultinomialNB()
model.fit(X_train_count,y_train)
# +
emails = ['Ok lar... Joking wif u oni...',
          'You have been specially selected to receive a 2000 pound award! Call 08712402050 BEFORE the lines close. Cost 10ppm. 16+. T&Cs apply. AG Promo']
emails_count = v.transform(emails)
model.predict(emails_count)
# -
X_test_count = v.transform(X_test)
model.score(X_test_count,y_test)
# +
#using pipeline
from sklearn.pipeline import Pipeline
clf = Pipeline([
('vectorizer',CountVectorizer()),
('nb',MultinomialNB())
])
# -
clf.fit(X_train,y_train)
clf.score(X_test,y_test)
clf.predict(emails)
| Email Spam Detector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#import pandas to read in the CSV file
import pandas as pd
# ### Identify Duplicate
# Procedure code
#
# 1. Sort the data by all of the variables (denoted hereafter as X)
# a. It does not matter which order the variables are sorted in.
# 2. Create two new variables eqn and eqsize
# a. For the first case set eqn to 1
# 3. Test for equality between each case and the one that precedes it [X=lag(X)]
# a. If X =lag(X) then eqn=lag(eqn)+1
# b. If X /=lag(X) then eqn=1
# 4. Reverse sort the data
# 5. For each case test if eqn<lag(eqn)
# a. If eqn < lag(eqn) then eqsize=lag(eqsize)
# b. If eqn >= lag(eqn) then eqsize=eqn
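# The five steps above can be sketched compactly in plain Python (a hypothetical helper for illustration, not the notebook's implementation that follows):

```python
def mark_duplicates(rows):
    """Assign EQN (running count within a duplicate group) and EQSIZE
    (total size of the group) to each row, following steps 1-5 above."""
    ordered = sorted(rows)                      # step 1: sort by all variables
    eqn = []
    for i, row in enumerate(ordered):           # steps 2-3: running count
        if i > 0 and row == ordered[i - 1]:
            eqn.append(eqn[-1] + 1)             # X == lag(X): increment
        else:
            eqn.append(1)                       # X != lag(X): reset to 1
    eqsize = [0] * len(ordered)
    for i in reversed(range(len(ordered))):     # steps 4-5: reverse pass
        if i + 1 < len(ordered) and eqn[i] < eqn[i + 1]:
            eqsize[i] = eqsize[i + 1]           # propagate group size back
        else:
            eqsize[i] = eqn[i]
    return list(zip(ordered, eqn, eqsize))

# Two identical rows form a group of size 2; the singleton has size 1
print(mark_duplicates([(1, 2), (1, 2), (3, 4)]))
```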
#new sort function that sorts in ascending order (selection sort)
#mylist - the list to sort; it will be empty after sorting
#emptylist - filled with the sorted values; len(emptylist) == len(mylist before sorting)
def sorting(mylist, emptylist):
while mylist:
minimum = mylist[0] #start with the first case
for x in mylist:
if x < minimum:
minimum = x #change the minimum value if any x is less than the first case
#ind = mylist.index(minimum)
emptylist.append(minimum) #append the minimum to emptylist
mylist.remove(minimum) #remove the minimum from the original list
data = pd.read_csv('data.csv') #read in data using pandas
test = data.T.to_dict('list') #transpose this dataframe and turn into a dictionary
gender = [] #build a list of cases, appending by gender
for key, value in test.items():
if value[9] == 1: #append gender == 1
gender.append(value)
if value[9] == 2: #then gender == 2
gender.append(value)
gender_sorted = [] #create an empty list to put in the sorted cases
sorting(gender, gender_sorted) #use the sorting function created
q = gender_sorted #assign to q for ease of reference
len(q) #check length is as expected
a = 0 #index of first case (X)
b = 1 #index of lag(X)
q[a] = q[a], 1 #create a second variable (EQN) for X
#Test for equality between each case and the lag
try:
while q:
if q[a][0] == q[b]: #if the first case is same as its lag
#x = int(q[a][1]) + 1
q[b] = q[b], int(q[a][1]) + 1 #make the EQN of the lag == EQN of the first case and continue
#if above is satisfied add 1 to a and b
a+=1
b+=1
else:
q[b] = q[b], 1 #if above is not satisfied, make the EQN of the lag == 1
#if above is satisfied add 1 to a and b
a+=1
b+=1
except IndexError: #catch index error
print ("Finished - All Duplicates Identified")
#reverse sort: did this by appending starting from the last index
new = [] #new list to append to
ind = len(q) - 1 #the index is the len of old list(q) minus 1
for x in q:
new.append(list(q[ind]))
ind-=1
q = new #reassign to q, again for ease of reference
#create a new variable EQSIZE to assign duplicate number
q[0] = [q[0][0], q[0][1], q[0][1]] #for the first case this is equal to EQN
a = 0 #index of first case (X)
b = 1 #index of lag(X)
try:
while q:
if q[b][1] < q[a][1]: #if EQN of the first case is more than its lag,
q[b] = [q[b][0],q[b][1],q[a][2]] #make lag EQSIZE == first case EQSIZE
#if above is satisfied add 1 to a and b
a+=1
b+=1
else:
#q[b][1] >= q[a][1]
q[b] = [q[b][0],q[b][1],q[b][1]] #else make lag EQSIZE == lag EQN
#if above is satisfied add 1 to a and b
a+=1
b+=1
except IndexError:
print ("Finished - EQSIZE IDENTIFIED")
df = pd.DataFrame(q) #make q a dataframe
a = list(df.pop(0)) #drop the first column (which is the original data)
df1 = pd.DataFrame(a) #make the original data back into a dataframe
result = pd.concat([df1, df], axis=1, sort=False) #merge original data and the 2 new columns (EQN and EQSIZE)
#add the column names
result.columns = ['CARS', 'HHSPTYPE', 'TENURE', 'PERSINHH', 'ECONPRIM', 'ETHGROUP', 'LTILL',
'MSTATUS', 'OCCPATN', 'SEX', 'SOCLASS', 'SEGROUP', 'EQN', 'EQSIZE']
#view new data
result
| sorting code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2.7.18 64-bit (system)
# name: python3
# ---
# +
# What version of Python do you have?
import sys
import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf
print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
gpu = len(tf.config.list_physical_devices('GPU'))>0
print("GPU is", "available" if gpu else "NOT AVAILABLE")
| jon_dl/test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # KubeFlow Pipeline: Github Issue Summarization using Tensor2Tensor
#
# This notebook assumes that you have already set up a GKE cluster with Kubeflow installed as per this codelab: [g.co/codelabs/kubecon18](g.co/codelabs/kubecon18). Currently, this notebook must be run from the Kubeflow JupyterHub installation, as described in the codelab.
#
# In this notebook, we will show how to:
#
# * Interactively define a KubeFlow Pipeline using the Pipelines Python SDK
# * Submit and run the pipeline
# * Add a step in the pipeline
#
# This example pipeline trains a [Tensor2Tensor](https://github.com/tensorflow/tensor2tensor/) model on Github issue data, learning to predict issue titles from issue bodies. It then exports the trained model and deploys the exported model to [Tensorflow Serving](https://github.com/tensorflow/serving).
# The final step in the pipeline launches a web app which interacts with the TF-Serving instance in order to get model predictions.
# ## Setup
# Do some installations and imports, and set some variables. Set the `WORKING_DIR` to a path under the Cloud Storage bucket you created earlier. The Pipelines SDK is bundled with the notebook server image, but we'll make sure that we're using the most current version for this example. You may need to restart your kernel after the SDK update.
# !pip install -U kfp
# +
import kfp # the Pipelines SDK.
from kfp import compiler
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.components as comp
from kfp.dsl.types import Integer, GCSPath, String
import kfp.notebook
# +
# Define some pipeline input variables.
WORKING_DIR = 'gs://YOUR_GCS_BUCKET/t2t/notebooks' # Such as gs://bucket/object/path
PROJECT_NAME = 'YOUR_PROJECT'
GITHUB_TOKEN = '<PASSWORD>' # needed for prediction, to grab issue data from GH
DEPLOY_WEBAPP = 'false'
# -
# ## Create an *Experiment* in the Kubeflow Pipeline System
#
# The Kubeflow Pipeline system requires an "Experiment" to group pipeline runs. You can create a new experiment, or call `client.list_experiments()` to get existing ones.
# Note that this notebook should be running in JupyterHub in the same cluster as the pipeline system.
# Otherwise, additional config would be required to connect.
client = kfp.Client()
client.list_experiments()
exp = client.create_experiment(name='t2t_notebook')
# ## Define a Pipeline
#
# Authoring a pipeline is like authoring a normal Python function. The pipeline function describes the topology of the pipeline. The pipeline components (steps) are container-based. For this pipeline, we're using a mix of predefined components loaded from their [component definition files](https://www.kubeflow.org/docs/pipelines/sdk/component-development/), and some components defined via [the `dsl.ContainerOp` constructor](https://www.kubeflow.org/docs/pipelines/sdk/build-component/). For this codelab, we've prebuilt all the components' containers.
#
# While not shown here, there are other ways to build Kubeflow Pipeline components as well, including converting stand-alone python functions to containers via [`kfp.components.func_to_container_op(func)`](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/). You can read more [here](https://www.kubeflow.org/docs/pipelines/sdk/).
#
# This pipeline has several steps:
#
# - An existing model checkpoint is copied to your bucket.
# - Dataset metadata is logged to the Kubeflow metadata server.
# - A [Tensor2Tensor](https://github.com/tensorflow/tensor2tensor/) model is trained using preprocessed data. (Training starts from the existing model checkpoint copied in the first step, then trains for a few more hundred steps-- it would take too long to fully train it now). When it finishes, it exports the model in a form suitable for serving by [TensorFlow serving](https://github.com/tensorflow/serving/).
# - Training metadata is logged to the metadata server.
# - The next step in the pipeline deploys a TensorFlow-serving instance using that model.
# - The last step launches a web app for interacting with the served model to retrieve predictions.
#
# We'll first define some constants and load some components from their definition files.
# +
COPY_ACTION = 'copy_data'
TRAIN_ACTION = 'train'
WORKSPACE_NAME = 'ws_gh_summ'
DATASET = 'dataset'
MODEL = 'model'
copydata_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/amygdala/kubeflow-examples/ghpl_update/github_issue_summarization/pipelines/components/t2t/datacopy_component.yaml' # pylint: disable=line-too-long
)
train_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/amygdala/kubeflow-examples/ghpl_update/github_issue_summarization/pipelines/components/t2t/train_component.yaml' # pylint: disable=line-too-long
)
metadata_log_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/examples/master/github_issue_summarization/pipelines/components/t2t/metadata_log_component.yaml' # pylint: disable=line-too-long
)
# -
# Next, we'll define the pipeline itself.
# +
@dsl.pipeline(
name='Github issue summarization',
description='Demonstrate Tensor2Tensor-based training and TF-Serving'
)
def gh_summ( #pylint: disable=unused-argument
train_steps: Integer = 2019300,
project: String = 'YOUR_PROJECT_HERE',
github_token: String = 'YOUR_GITHUB_TOKEN_HERE',
working_dir: GCSPath = 'YOUR_GCS_DIR_HERE',
checkpoint_dir: GCSPath = 'gs://aju-dev-demos-codelabs/kubecon/model_output_tbase.bak2019000/',
deploy_webapp: String = 'true',
data_dir: GCSPath = 'gs://aju-dev-demos-codelabs/kubecon/t2t_data_gh_all/'
):
copydata = copydata_op(
data_dir=data_dir,
checkpoint_dir=checkpoint_dir,
model_dir='%s/%s/model_output' % (working_dir, dsl.RUN_ID_PLACEHOLDER),
action=COPY_ACTION,
).apply(gcp.use_gcp_secret('user-gcp-sa'))
log_dataset = metadata_log_op(
log_type=DATASET,
workspace_name=WORKSPACE_NAME,
run_name=dsl.RUN_ID_PLACEHOLDER,
data_uri=data_dir
)
train = train_op(
data_dir=data_dir,
model_dir=copydata.outputs['copy_output_path'],
action=TRAIN_ACTION, train_steps=train_steps,
deploy_webapp=deploy_webapp
).apply(gcp.use_gcp_secret('user-gcp-sa'))
log_model = metadata_log_op(
log_type=MODEL,
workspace_name=WORKSPACE_NAME,
run_name=dsl.RUN_ID_PLACEHOLDER,
model_uri=train.outputs['train_output_path']
)
serve = dsl.ContainerOp(
name='serve',
image='gcr.io/google-samples/ml-pipeline-kubeflow-tfserve',
arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
"--model_path", train.outputs['train_output_path']
]
)
log_dataset.after(copydata)
log_model.after(train)
train.set_gpu_limit(1)
train.set_memory_limit('48G')
with dsl.Condition(train.outputs['launch_server'] == 'true'):
webapp = dsl.ContainerOp(
name='webapp',
image='gcr.io/google-samples/ml-pipeline-webapp-launcher:v2ap',
arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
"--github_token", github_token]
)
webapp.after(serve)
# -
# ## Submit an experiment *run*
compiler.Compiler().compile(gh_summ, 'ghsumm.tar.gz')
# The call below will run the compiled pipeline. We won't actually do that now, but instead we'll add a new step to the pipeline, then run it.
# +
# You'd uncomment this call to actually run the pipeline.
# run = client.run_pipeline(exp.id, 'ghsumm', 'ghsumm.tar.gz',
# params={'working-dir': WORKING_DIR,
# 'github-token': GITHUB_TOKEN,
# 'project': PROJECT_NAME})
# -
# ## Add a step to the pipeline
#
# Next, let's add a new step to the pipeline above. As currently defined, the pipeline accesses a directory of pre-processed data as input to training. Let's see how we could include the pre-processing as part of the pipeline.
#
# We're going to cheat a bit, as processing the full dataset will take too long for this workshop, so we'll use a smaller sample. For that reason, you won't actually make use of the generated data from this step (we'll stick to using the full dataset for training), but this shows how you could do so if we had more time.
# First, we'll define the new pipeline step. Note the last line of this new function, which gives this step's pod the credentials to access GCP.
# defining the new data preprocessing pipeline step.
# Note the last line, which gives this step's pod the credentials to access GCP
def preproc_op(data_dir, project):
return dsl.ContainerOp(
name='datagen',
image='gcr.io/google-samples/ml-pipeline-t2tproc',
arguments=[ "--data-dir", data_dir, "--project", project]
).apply(gcp.use_gcp_secret('user-gcp-sa'))
# ### Modify the pipeline to add the new step
#
# Now, we'll redefine the pipeline to add the new step. We're reusing the component ops defined above.
# +
# Then define a new Pipeline. It's almost the same as the original one,
# but with the addition of the data processing step.
@dsl.pipeline(
name='Github issue summarization',
description='Demonstrate Tensor2Tensor-based training and TF-Serving'
)
def gh_summ2(
train_steps: Integer = 2019300,
project: String = 'YOUR_PROJECT_HERE',
github_token: String = '<PASSWORD>',
working_dir: GCSPath = 'YOUR_GCS_DIR_HERE',
checkpoint_dir: GCSPath = 'gs://aju-dev-demos-codelabs/kubecon/model_output_tbase.bak2019000/',
deploy_webapp: String = 'true',
data_dir: GCSPath = 'gs://aju-dev-demos-codelabs/kubecon/t2t_data_gh_all/'
):
# The new pre-processing op.
preproc = preproc_op(project=project,
data_dir=('%s/%s/gh_data' % (working_dir, dsl.RUN_ID_PLACEHOLDER)))
copydata = copydata_op(
data_dir=data_dir,
checkpoint_dir=checkpoint_dir,
model_dir='%s/%s/model_output' % (working_dir, dsl.RUN_ID_PLACEHOLDER),
action=COPY_ACTION,
).apply(gcp.use_gcp_secret('user-gcp-sa'))
log_dataset = metadata_log_op(
log_type=DATASET,
workspace_name=WORKSPACE_NAME,
run_name=dsl.RUN_ID_PLACEHOLDER,
data_uri=data_dir
)
train = train_op(
data_dir=data_dir,
model_dir=copydata.outputs['copy_output_path'],
action=TRAIN_ACTION, train_steps=train_steps,
deploy_webapp=deploy_webapp
).apply(gcp.use_gcp_secret('user-gcp-sa'))
log_dataset.after(copydata)
train.after(preproc)
log_model = metadata_log_op(
log_type=MODEL,
workspace_name=WORKSPACE_NAME,
run_name=dsl.RUN_ID_PLACEHOLDER,
model_uri=train.outputs['train_output_path']
)
serve = dsl.ContainerOp(
name='serve',
image='gcr.io/google-samples/ml-pipeline-kubeflow-tfserve',
arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
"--model_path", train.outputs['train_output_path']
]
)
log_model.after(train)
train.set_gpu_limit(1)
train.set_memory_limit('48G')
with dsl.Condition(train.outputs['launch_server'] == 'true'):
webapp = dsl.ContainerOp(
name='webapp',
image='gcr.io/google-samples/ml-pipeline-webapp-launcher:v2ap',
arguments=["--model_name", 'ghsumm-%s' % (dsl.RUN_ID_PLACEHOLDER,),
"--github_token", github_token]
)
webapp.after(serve)
# -
# ### Compile the new pipeline definition and submit the run
compiler.Compiler().compile(gh_summ2, 'ghsumm2.tar.gz')
run = client.run_pipeline(exp.id, 'ghsumm2', 'ghsumm2.tar.gz',
params={'working-dir': WORKING_DIR,
'github-token': GITHUB_TOKEN,
'deploy-webapp': DEPLOY_WEBAPP,
'project': PROJECT_NAME})
# You should be able to see your newly defined pipeline run in the dashboard:
# 
#
# The new pipeline has the following structure:
# 
#
# Below is a screenshot of the pipeline running.
# 
# When this new pipeline finishes running, you'll be able to see your generated processed data files in GCS under the path: `WORKING_DIR/<pipeline_name>/gh_data`. There isn't time in the workshop to pre-process the full dataset, but if there had been, we could have defined our pipeline to read from that generated directory for its training input.
# -----------------------------
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| github_issue_summarization/pipelines/example_pipelines/pipelines-notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/reneholt/models/blob/master/colab/digit_detection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rOvvWAVTkMR7"
# # TensorFlow Train Pre-existing Model
# + [markdown] id="IRImnk_7WOq1"
# ### More models
# [This](https://tfhub.dev/tensorflow/collections/object_detection/1) collection contains TF2 object detection models that have been trained on the COCO 2017 dataset. [Here](https://tfhub.dev/s?module-type=image-object-detection) you can find all object detection models that are currently hosted on [tfhub.dev](https://tfhub.dev/).
# + [markdown] id="vPs64QA1Zdov"
# ## Imports and Setup
#
# Let's start with the base imports.
# + id="Xk4FU-jx9kc3"
# This Colab requires TF 2
# !pip install tensorflow
# + id="yn5_uV1HLvaz"
import os
import pathlib
import matplotlib
import matplotlib.pyplot as plt
import io
import scipy.misc
import numpy as np
from six import BytesIO
from PIL import Image, ImageDraw, ImageFont
from six.moves.urllib.request import urlopen
import tensorflow as tf
import tensorflow_hub as hub
tf.get_logger().setLevel('ERROR')
# + [markdown] id="IogyryF2lFBL"
# ## Utilities
#
# Run the following cell to create some utils that will be needed later:
#
# - Helper method to load an image
# - Map of Model Name to TF Hub handle
# - List of tuples with Human Keypoints for the COCO 2017 dataset. This is needed for models with keypoints.
# + id="-y9R0Xllefec"
# @title Run this!!
def load_image_into_numpy_array(path):
"""Load an image from file into a numpy array.
Puts image into numpy array to feed into tensorflow graph.
Note that by convention we put it into a numpy array with shape
(height, width, channels), where channels=3 for RGB.
Args:
path: the file path to the image
Returns:
uint8 numpy array with shape (img_height, img_width, 3)
"""
image = None
if(path.startswith('http')):
response = urlopen(path)
image_data = response.read()
image_data = BytesIO(image_data)
image = Image.open(image_data)
else:
image_data = tf.io.gfile.GFile(path, 'rb').read()
image = Image.open(BytesIO(image_data))
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(1, im_height, im_width, 3)).astype(np.uint8)
ALL_MODELS = {
'CenterNet HourGlass104 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512/1',
'CenterNet HourGlass104 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1',
'CenterNet HourGlass104 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024/1',
'CenterNet HourGlass104 Keypoints 1024x1024' : 'https://tfhub.dev/tensorflow/centernet/hourglass_1024x1024_kpts/1',
'CenterNet Resnet50 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512/1',
'CenterNet Resnet50 V1 FPN Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v1_fpn_512x512_kpts/1',
'CenterNet Resnet101 V1 FPN 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet101v1_fpn_512x512/1',
'CenterNet Resnet50 V2 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512/1',
'CenterNet Resnet50 V2 Keypoints 512x512' : 'https://tfhub.dev/tensorflow/centernet/resnet50v2_512x512_kpts/1',
'EfficientDet D0 512x512' : 'https://tfhub.dev/tensorflow/efficientdet/d0/1',
'EfficientDet D1 640x640' : 'https://tfhub.dev/tensorflow/efficientdet/d1/1',
'EfficientDet D2 768x768' : 'https://tfhub.dev/tensorflow/efficientdet/d2/1',
'EfficientDet D3 896x896' : 'https://tfhub.dev/tensorflow/efficientdet/d3/1',
'EfficientDet D4 1024x1024' : 'https://tfhub.dev/tensorflow/efficientdet/d4/1',
'EfficientDet D5 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d5/1',
'EfficientDet D6 1280x1280' : 'https://tfhub.dev/tensorflow/efficientdet/d6/1',
'EfficientDet D7 1536x1536' : 'https://tfhub.dev/tensorflow/efficientdet/d7/1',
'SSD MobileNet v2 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2',
'SSD MobileNet V1 FPN 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v1/fpn_640x640/1',
'SSD MobileNet V2 FPNLite 320x320' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_320x320/1',
'SSD MobileNet V2 FPNLite 640x640' : 'https://tfhub.dev/tensorflow/ssd_mobilenet_v2/fpnlite_640x640/1',
'SSD ResNet50 V1 FPN 640x640 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_640x640/1',
'SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)' : 'https://tfhub.dev/tensorflow/retinanet/resnet50_v1_fpn_1024x1024/1',
'SSD ResNet101 V1 FPN 640x640 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_640x640/1',
'SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)' : 'https://tfhub.dev/tensorflow/retinanet/resnet101_v1_fpn_1024x1024/1',
'SSD ResNet152 V1 FPN 640x640 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_640x640/1',
'SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)' : 'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1',
'Faster R-CNN ResNet50 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_640x640/1',
'Faster R-CNN ResNet50 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_1024x1024/1',
'Faster R-CNN ResNet50 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet50_v1_800x1333/1',
'Faster R-CNN ResNet101 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_640x640/1',
'Faster R-CNN ResNet101 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_1024x1024/1',
'Faster R-CNN ResNet101 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet101_v1_800x1333/1',
'Faster R-CNN ResNet152 V1 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1',
'Faster R-CNN ResNet152 V1 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_1024x1024/1',
'Faster R-CNN ResNet152 V1 800x1333' : 'https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_800x1333/1',
'Faster R-CNN Inception ResNet V2 640x640' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_640x640/1',
'Faster R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/faster_rcnn/inception_resnet_v2_1024x1024/1',
'Mask R-CNN Inception ResNet V2 1024x1024' : 'https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1'
}
IMAGES_FOR_TEST = {
'Beach' : 'models/research/object_detection/test_images/image2.jpg',
'Dogs' : 'models/research/object_detection/test_images/image1.jpg',
# By <NAME>, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
'Naxos Taverna' : 'https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
'Beatles' : 'https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg',
# By <NAME>, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
'Phones' : 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg',
# Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
'Birds' : 'https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg',
}
COCO17_HUMAN_POSE_KEYPOINTS = [(0, 1),
(0, 2),
(1, 3),
(2, 4),
(0, 5),
(0, 6),
(5, 7),
(7, 9),
(6, 8),
(8, 10),
(5, 6),
(5, 11),
(6, 12),
(11, 12),
(11, 13),
(13, 15),
(12, 14),
(14, 16)]
# + [markdown] id="14bNk1gzh0TN"
# ## Visualization tools
#
# To visualize the images with the proper detected boxes, keypoints and segmentation, we will use the TensorFlow Object Detection API. To install it we will clone the repo.
# + id="oi28cqGGFWnY"
# Clone the tensorflow models repository
# !git clone --depth 1 https://github.com/reneholt/models.git
# + [markdown] id="yX3pb_pXDjYA"
# Installing the Object Detection API
# + id="NwdsBdGhFanc" language="bash"
# sudo apt install -y protobuf-compiler
# cd models/research/
# protoc object_detection/protos/*.proto --python_out=.
# cp object_detection/packages/tf2/setup.py .
# python -m pip install .
#
# + [markdown] id="3yDNgIx-kV7X"
# Now we can import the dependencies we will need later
# + id="2JCeQU3fkayh"
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.utils import ops as utils_ops
# %matplotlib inline
# + [markdown] id="XGbacPPvG_-g"
# Create TF Record files
# + id="nuVcPQNtCb-_" language="bash"
# cd models/research/object_detection
# python xml_to_csv.py -i data/train -o data/train_labels.csv -l training
# python xml_to_csv.py -i data/test -o data/test_labels.csv
# python generate_tfrecord.py --csv_input=data/train_labels.csv --output_path=data/train.record --img_path=data/train --label_map training/label_map.pbtxt
# python generate_tfrecord.py --csv_input=data/test_labels.csv --output_path=data/test.record --img_path=data/test --label_map training/label_map.pbtxt
# + [markdown] id="Sp361DfL033h"
# Train Model
# + id="dlBlXAxa06mc" language="bash"
# cd models/research/object_detection
# python model_main_tf2.py --alsologtostderr --model_dir=training/train --train_dir=training/ --pipeline_config_path=training/pipeline.config
# + [markdown] id="NKtD0IeclbL5"
# ### Load label map data (for plotting).
#
# Label maps associate index numbers with category names, so that when our convolutional network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
#
# For simplicity, we will load the label map from the same repository from which we loaded the Object Detection API code.
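# Since any dictionary mapping integers to label dicts would work here, a hand-built category index is a valid substitute for the utility call; the labels below are hypothetical:

```python
# Minimal hand-built category index (hypothetical labels), in the same
# {id: {'id': ..., 'name': ...}} shape the visualization utilities expect.
category_index_example = {
    1: {'id': 1, 'name': 'zero'},
    2: {'id': 2, 'name': 'one'},
}
print(category_index_example[2]['name'])  # one
```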
# + id="5mucYUS6exUJ"
PATH_TO_LABELS = './models/research/object_detection/data/custom_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
# + [markdown] id="6917xnUSlp9x"
# ## Build a detection model and load pre-trained model weights
#
# Here we will choose which Object Detection model we will use.
# Select the architecture and it will be loaded automatically.
# If you want to change the model to try other architectures later, just change the next cell and execute the following ones.
#
# **Tip:** if you want to read more details about the selected model, you can follow the link (model handle) and read additional documentation on TF Hub. After you select a model, we will print the handle to make it easier.
# + id="HtwrSqvakTNn"
#@title Model Selection { display-mode: "form", run: "auto" }
model_display_name = 'Faster R-CNN ResNet152 V1 1024x1024' # @param ['CenterNet HourGlass104 512x512','CenterNet HourGlass104 Keypoints 512x512','CenterNet HourGlass104 1024x1024','CenterNet HourGlass104 Keypoints 1024x1024','CenterNet Resnet50 V1 FPN 512x512','CenterNet Resnet50 V1 FPN Keypoints 512x512','CenterNet Resnet101 V1 FPN 512x512','CenterNet Resnet50 V2 512x512','CenterNet Resnet50 V2 Keypoints 512x512','EfficientDet D0 512x512','EfficientDet D1 640x640','EfficientDet D2 768x768','EfficientDet D3 896x896','EfficientDet D4 1024x1024','EfficientDet D5 1280x1280','EfficientDet D6 1280x1280','EfficientDet D7 1536x1536','SSD MobileNet v2 320x320','SSD MobileNet V1 FPN 640x640','SSD MobileNet V2 FPNLite 320x320','SSD MobileNet V2 FPNLite 640x640','SSD ResNet50 V1 FPN 640x640 (RetinaNet50)','SSD ResNet50 V1 FPN 1024x1024 (RetinaNet50)','SSD ResNet101 V1 FPN 640x640 (RetinaNet101)','SSD ResNet101 V1 FPN 1024x1024 (RetinaNet101)','SSD ResNet152 V1 FPN 640x640 (RetinaNet152)','SSD ResNet152 V1 FPN 1024x1024 (RetinaNet152)','Faster R-CNN ResNet50 V1 640x640','Faster R-CNN ResNet50 V1 1024x1024','Faster R-CNN ResNet50 V1 800x1333','Faster R-CNN ResNet101 V1 640x640','Faster R-CNN ResNet101 V1 1024x1024','Faster R-CNN ResNet101 V1 800x1333','Faster R-CNN ResNet152 V1 640x640','Faster R-CNN ResNet152 V1 1024x1024','Faster R-CNN ResNet152 V1 800x1333','Faster R-CNN Inception ResNet V2 640x640','Faster R-CNN Inception ResNet V2 1024x1024','Mask R-CNN Inception ResNet V2 1024x1024']
model_handle = ALL_MODELS[model_display_name]
print('Selected model: ' + model_display_name)
print('Model Handle at TensorFlow Hub: {}'.format(model_handle))
# + [markdown] id="muhUt-wWL582"
# ## Loading the selected model from TensorFlow Hub
#
# Here we just need the model handle that was selected and use the TensorFlow Hub library to load it into memory.
#
# + id="rBuD07fLlcEO"
print('loading model...')
hub_model = hub.load(model_handle)
print('model loaded!')
# + [markdown] id="GIawRDKPPnd4"
# ## Loading an image
#
# Let's try the model on a simple image. To help with this, we provide a list of test images.
#
# Here are some simple things to try out if you are curious:
# * Try running inference on your own images, just upload them to colab and load the same way it's done in the cell below.
# * Modify some of the input images and see if detection still works. Some simple things to try out here include flipping the image horizontally, or converting to grayscale (note that we still expect the input image to have 3 channels).
#
# **Be careful:** when using images with an alpha channel, note that the model expects 3-channel images; the alpha channel will count as a 4th.
#
#
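# Dropping an alpha channel before inference is a one-line slice; this sketch uses a synthetic array rather than a real image:

```python
import numpy as np

# Hypothetical 4-channel (RGBA) image batch; keep only the first 3 channels.
rgba = np.zeros((1, 4, 4, 4), dtype=np.uint8)
rgb = rgba[..., :3]
print(rgb.shape)  # (1, 4, 4, 3)
```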
# + id="hX-AWUQ1wIEr"
#@title Image Selection (don't forget to execute the cell!) { display-mode: "form"}
selected_image = 'Beach' # @param ['Beach', 'Dogs', '<NAME>', 'Beatles', 'Phones', 'Birds']
flip_image_horizontally = False #@param {type:"boolean"}
convert_image_to_grayscale = False #@param {type:"boolean"}
image_path = IMAGES_FOR_TEST[selected_image]
image_np = load_image_into_numpy_array(image_path)
# Flip horizontally
if(flip_image_horizontally):
image_np[0] = np.fliplr(image_np[0]).copy()
# Convert image to grayscale
if(convert_image_to_grayscale):
image_np[0] = np.tile(
np.mean(image_np[0], 2, keepdims=True), (1, 1, 3)).astype(np.uint8)
plt.figure(figsize=(24,32))
plt.imshow(image_np[0])
plt.show()
# + [markdown] id="FTHsFjR6HNwb"
# ## Doing the inference
#
# To do the inference we just need to call our TF Hub loaded model.
#
# Things you can try:
# * Print out `result['detection_boxes']` and try to match the box locations to the boxes in the image. Notice that coordinates are given in normalized form (i.e., in the interval [0, 1]).
# * Inspect other output keys present in the result. Full documentation can be found on the model's documentation page (point your browser to the model handle printed earlier).
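# Converting a normalized box to pixel coordinates is simple arithmetic; the box and image size below are made up for illustration:

```python
# Hypothetical normalized box [ymin, xmin, ymax, xmax] on a 640x480 image.
ymin_n, xmin_n, ymax_n, xmax_n = 0.1, 0.2, 0.5, 0.8
height, width = 480, 640
ymin, xmin = int(ymin_n * height), int(xmin_n * width)
ymax, xmax = int(ymax_n * height), int(xmax_n * width)
print((xmin, ymin, xmax, ymax))  # (128, 48, 512, 240)
```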
# + id="Gb_siXKcnnGC"
# running inference
results = hub_model(image_np)
# different object detection models have additional results
# all of them are explained in the documentation
result = {key:value.numpy() for key,value in results.items()}
print(result.keys())
# + [markdown] id="IZ5VYaBoeeFM"
# ## Visualizing the results
#
# Here is where we will need the TensorFlow Object Detection API to draw the bounding boxes from the inference step (and the keypoints when available).
#
# The full documentation of this method can be seen [here](https://github.com/tensorflow/models/blob/master/research/object_detection/utils/visualization_utils.py).
#
# Here you can, for example, set `min_score_thresh` to other values (between 0 and 1) to allow more detections in or to filter out more detections.
# + id="2O7rV8g9s8Bz"
label_id_offset = 0
image_np_with_detections = image_np.copy()
# Use keypoints if available in detections
keypoints, keypoint_scores = None, None
if 'detection_keypoints' in result:
keypoints = result['detection_keypoints'][0]
keypoint_scores = result['detection_keypoint_scores'][0]
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
keypoints=keypoints,
keypoint_scores=keypoint_scores,
keypoint_edges=COCO17_HUMAN_POSE_KEYPOINTS)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_detections[0])
plt.show()
# + [markdown] id="Qaw6Xi08NpEP"
# ## [Optional]
#
# Among the available object detection models there's Mask R-CNN and the output of this model allows instance segmentation.
#
# To visualize it we will use the same method we did before but adding an additional parameter: `instance_masks=output_dict.get('detection_masks_reframed', None)`
#
# + id="zl3qdtR1OvM_"
# Handle models with masks:
image_np_with_mask = image_np.copy()
if 'detection_masks' in result:
# we need to convert np.arrays to tensors
detection_masks = tf.convert_to_tensor(result['detection_masks'][0])
detection_boxes = tf.convert_to_tensor(result['detection_boxes'][0])
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes,
image_np.shape[1], image_np.shape[2])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
result['detection_masks_reframed'] = detection_masks_reframed.numpy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_mask[0],
result['detection_boxes'][0],
(result['detection_classes'][0] + label_id_offset).astype(int),
result['detection_scores'][0],
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False,
instance_masks=result.get('detection_masks_reframed', None),
line_thickness=8)
plt.figure(figsize=(24,32))
plt.imshow(image_np_with_mask[0])
plt.show()
# Source notebook: colab/digit_detection.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + init_cell=true
# %logstop
# %logstart -rtq ~/.logs/ip.py append
# %matplotlib inline
import matplotlib
import seaborn as sns
sns.set()
matplotlib.rcParams['figure.dpi'] = 144
# -
from static_grader import grader
# # Program Flow exercises
#
# The objective of these exercises is to develop your ability to use iteration and conditional logic to build reusable functions. We will be extending our `get_primes` example from the [Program Flow notebook](../PY_ProgramFlow.ipynb) for testing whether much larger numbers are prime. Large primes are useful for encryption. It is too slow to test every possible factor of a large number to determine if it is prime, so we will take a different approach.
# ## Exercise 1: `mersenne_numbers`
#
# A Mersenne number is any number that can be written as $2^p - 1$ for some $p$. For example, 3 is a Mersenne number ($2^2 - 1$) as is 31 ($2^5 - 1$). We will see later on that it is easy to test if Mersenne numbers are prime.
#
# Write a function that accepts an exponent $p$ and returns the corresponding Mersenne number.
def mersenne_number(p):
return 2**p - 1
# Mersenne numbers can only be prime if their exponent, $p$, is prime. Make a list of the Mersenne numbers for all primes $p$ between 3 and 65 (there should be 17 of them).
#
# Hint: It may be useful to modify the `is_prime` and `get_primes` functions from [the Program Flow notebook](PY_ProgramFlow.ipynb) for use in this problem.
# we can make a list like this
my_list = [0, 1, 2]
print(my_list)
# +
# we can also make an empty list and add items to it
another_list = []
print(another_list)
for item in my_list:
another_list.append(item)
print(another_list)
# +
def is_prime(number):
    if number <= 1:
        return False
    for factor in range(2, number):
        if number % factor == 0:
            return False
    return True
prime_list = []
for num in range(3, 65):
if is_prime(num):
prime_list.append(num)
print(prime_list)
# -
# The next cell shows a dummy solution, a list of 17 sevens. Alter the next cell to make use of the functions you've defined above to create the appropriate list of Mersenne numbers.
mersennes = [2**p - 1 for p in prime_list]
print(mersennes)
grader.score.ip__mersenne_numbers(mersennes)
# ## Exercise 2: `lucas_lehmer`
#
# We can test if a Mersenne number is prime using the [Lucas-Lehmer test](https://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_primality_test). First let's write a function that generates the sequence used in the test. Given a Mersenne number with exponent $p$, the sequence can be defined as
#
# $$ n_0 = 4 $$
# $$ n_i = (n_{i-1}^2 - 2) \bmod (2^p - 1) $$
#
# Write a function that accepts the exponent $p$ of a Mersenne number and returns the Lucas-Lehmer sequence up to $i = p - 2$ (inclusive). Remember that the [modulo operation](https://en.wikipedia.org/wiki/Modulo_operation) is implemented in Python as `%`.
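# As a quick sanity check, the recurrence can be worked by hand for $p = 5$ (Mersenne number 31):

```python
# n0 = 4
# n1 = (4**2 - 2)  % 31 = 14
# n2 = (14**2 - 2) % 31 = 8
# n3 = (8**2 - 2)  % 31 = 0   -> 31 is prime
seq = [4]
for _ in range(5 - 2):
    seq.append((seq[-1] ** 2 - 2) % (2 ** 5 - 1))
print(seq)  # [4, 14, 8, 0]
```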
def lucas_lehmer(p):
    ll_seq = [4]
    if p > 2:
        for i in range(1, p - 1):
            n_i = (ll_seq[i - 1] ** 2 - 2) % (2 ** p - 1)
            ll_seq.append(n_i)
    return ll_seq
print(lucas_lehmer(10))
# Use your function to calculate the Lucas-Lehmer series for $p = 17$ and pass the result to the grader.
# +
ll_result = lucas_lehmer(17)
grader.score.ip__lucas_lehmer(ll_result)
# -
# ## Exercise 3: `mersenne_primes`
#
# For a given Mersenne number with exponent $p$, the number is prime if the Lucas-Lehmer series is 0 at position $p-2$. Write a function that tests if a Mersenne number with exponent $p$ is prime. Test if the Mersenne numbers with prime $p$ between 3 and 65 (i.e. 3, 5, 7, ..., 61) are prime. Your final answer should be a list of tuples consisting of `(Mersenne exponent, 0)` (or `1`) for each Mersenne number you test, where `0` and `1` are replacements for `False` and `True` respectively.
#
# _HINT: The `zip` function is useful for combining two lists into a list of tuples_
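# A quick look at what `zip` does with two lists (the values here are made up):

```python
exponents = [3, 5, 7]
flags = [1, 1, 1]
pairs = list(zip(exponents, flags))
print(pairs)  # [(3, 1), (5, 1), (7, 1)]
```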
l = [4, 14, 194, 806, 29, 839, 95, 839, 95]
for x in l:
print(x)
def ll_prime():
    prime_list = []
    for num in range(3, 65):
        if is_prime(num):
            prime_list.append(num)
    ll = []
    for p in prime_list:
        seq = lucas_lehmer(p)
        if seq[-1] == 0:
            ll.append(1)
        else:
            ll.append(0)
    return list(zip(prime_list, ll))
print(ll_prime())
# +
mersenne_primes = ll_prime()
print(mersenne_primes)
grader.score.ip__mersenne_primes(mersenne_primes)
# -
# ## Exercise 4: Optimize `is_prime`
#
# You might have noticed that the primality check `is_prime` we developed before is somewhat slow for large numbers. This is because we are doing a ton of extra work checking every possible factor of the tested number. We will use two optimizations to make an `is_prime_fast` function.
#
# The first optimization takes advantage of the fact that two is the only even prime. Thus we can check if a number is even and, as long as it's greater than 2, we know that it is not prime.
#
# Our second optimization takes advantage of the fact that when checking factors, we only need to check odd factors up to the square root of a number. Consider a number $n$ decomposed into factors $n=ab$. There are two cases: either $n$ is prime and, without loss of generality, $a=n, b=1$; or $n$ is not prime and $a,b \neq n,1$. In the latter case, if $a > \sqrt{n}$, then $b < \sqrt{n}$. So we only need to check all possible values of $b$ and we get the values of $a$ for free! This means that even this simple method of checking factors will grow in cost with the square root of the size of the number instead of linearly.
#
# Let's write the function to do this and check the speed! `is_prime_fast` will take a number and return whether or not it is prime.
#
# You will see the functions followed by a cell with an `assert` statement. These cells should run and produce no output, if they produce an error, then your function needs to be modified. Do not modify the assert statements, they are exactly as they should be!
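# The square-root bound can be checked on a small made-up composite before writing the full function:

```python
import math

# For the composite n = 91 = 7 * 13, the smaller factor is at most sqrt(n),
# so scanning odd candidates up to sqrt(n) is enough to find it.
n = 91
limit = int(math.sqrt(n))
found = [f for f in range(3, limit + 1, 2) if n % f == 0]
print(limit, found)  # 9 [7]
```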
import math
def is_prime_fast(number):
if number < 2:
return False
if number == 2:
return True
if number % 2 == 0:
return False
for factors in range(3, int(math.sqrt(number))+1, 2):
if number % factors == 0:
return False
return True
# Run the following cell to make sure it finds the same primes as the original function.
for n in range(10000):
assert is_prime(n) == is_prime_fast(n)
# Now let's check the timing; here we will use the `%%timeit` magic, which times the execution of a particular cell.
# %%timeit
is_prime(67867967)
# %%timeit
is_prime_fast(67867967)
# Now return a function which will find all prime numbers up to and including $n$. Submit this function to the grader.
def get_primes_fast(n):
    # range must run to n + 1 so that n itself is included
    r = []
    for i in range(n + 1):
        if is_prime_fast(i):
            r.append(i)
    return r
grader.score.ip__is_prime_fast(get_primes_fast)
# ## Exercise 5: sieve
#
# In this problem we will develop an even faster method which is known as the Sieve of Eratosthenes (although it will be more expensive in terms of memory). The Sieve of Eratosthenes is an example of dynamic programming, where the general idea is to not redo computations we have already done (read more about it [here](https://en.wikipedia.org/wiki/Dynamic_programming)). We will break this sieve down into several small functions.
#
# Our submission will be a list of all prime numbers less than 2000.
#
# The method works as follows (see [here](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) for more details)
#
# 1. Generate a list of all numbers between 0 and N; mark the numbers 0 and 1 to be not prime
# 2. Starting with $p=2$ (the first prime) mark all numbers of the form $np$ where $n>1$ and $np \le N$ to be not prime (they can't be prime since they are multiples of $p$!)
# 3. Find the smallest number greater than $p$ which is not marked and set that equal to $p$, then go back to step 2. Stop if there is no unmarked number greater than $p$ and less than $N+1$
#
# We will break this up into a few functions, our general strategy will be to use a Python `list` as our container although we could use other data structures. The index of this list will represent numbers.
#
# We have implemented a `sieve` function which will find all the prime numbers up to $n$. You will need to implement the functions which it calls. They are as follows
#
# * `list_true` Make a list of true values of length $n+1$ where the first two values are false (this corresponds with step 1 of the algorithm above)
# * `mark_false` takes a list of booleans and a number $p$. Mark all elements $2p,3p,...n$ false (this corresponds with step 2 of the algorithm above)
# * `find_next` Find the smallest `True` element in a list which has index greater than some $p$ (this corresponds with step 3 of the algorithm above)
# * `prime_from_list` Return indices of True values
#
# Remember that python lists are zero indexed. We have provided assertions below to help you assess whether your functions are functioning properly.
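# Before implementing the pieces, the whole algorithm can be sketched compactly; this is only a sketch that folds the three steps into one function, not the exercise's function layout:

```python
def sieve_sketch(n):
    flags = [False, False] + [True] * (n - 1)      # step 1: 0 and 1 are not prime
    p = 2
    while p is not None:
        for multiple in range(2 * p, n + 1, p):    # step 2: mark multiples of p
            flags[multiple] = False
        # step 3: find the next unmarked number greater than p (or stop)
        p = next((i for i in range(p + 1, n + 1) if flags[i]), None)
    return [i for i, flag in enumerate(flags) if flag]

print(sieve_sketch(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```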
def list_true(n):
truth = [False, False]
for i in range(2, n+1):
truth.append(True)
return truth
print(list_true(10))
assert len(list_true(20)) == 21
assert list_true(20)[0] is False
assert list_true(20)[1] is False
# Now we want to write a function which takes a list of elements and a number $p$ and marks elements false which are in the range $2p,3p ... N$.
def mark_false(bool_list, p):
    for i in range(p * 2, len(bool_list), p):
        bool_list[i] = False
    return bool_list
assert mark_false(list_true(6), 2) == [False, False, True, True, False, True, False]
# Now let's write a `find_next` function which returns the index of the first `True` element in a list whose index is greater than $p$.
def find_next(bool_list, p):
    for i in range(p + 1, len(bool_list)):
        if bool_list[i]:
            return i
    return None
assert find_next([True, True, True, True], 2) == 3
assert find_next([True, True, True, False], 2) is None
# Now given a list of `True` and `False` values, return the indices of the `True` values.
def prime_from_list(bool_list):
    primes = []
    for x in range(len(bool_list)):
        if bool_list[x]:
            primes.append(x)
    return primes
assert prime_from_list([False, False, True, True, False]) == [2, 3]
def sieve(n):
bool_list = list_true(n)
p = 2
while p is not None:
bool_list = mark_false(bool_list, p)
p = find_next(bool_list, p)
return prime_from_list(bool_list)
assert sieve(1000) == get_primes(0, 1000)
# %%timeit
sieve(1000)
# %%timeit
get_primes(0, 1000)
grader.score.ip__eratosthenes(sieve)
# *Copyright © 2020 The Data Incubator. All rights reserved.*
# Source notebook: datacourse/python/miniprojects/ip.ipynb