code stringlengths 38 801k | repo_path stringlengths 6 263 |
|---|---|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests as req
from config import weather_api_key
url = "http://api.openweathermap.org/data/2.5/weather?units=imperial&appid=" + weather_api_key
cities = ["Pittsburgh", "Austin", "New York", "Los Angeles", "Seattle"]
for city in cities:
try:
city_url = url + "&q=" + city
weather = req.get(city_url).json()
temp = weather["main"]["temp"]
except KeyError:
print("KeyError received for " + city)
continue
print("It is currently " + str(temp) + " degrees in " + city)
| try_and_except.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.0-rc3
# language: julia
# name: julia-0.4
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # A day with *(the)* Julia *(language)*
#
# ## Anรกlisis de datos
#
# + [markdown] slideshow={"slide_type": "fragment"}
# [**JuliaStats**](http://juliastats.github.io/) **Statistics** and **Machine Learning** made easy in Julia.
# + slideshow={"slide_type": "slide"}
# Pkg.add("DataFrames")
# + slideshow={"slide_type": "fragment"}
using DataFrames # DataFrames to represent tabular datasets
# Database-style joins and indexing
# Split-apply-combine operations, reshape and pivoting
# Formula and model frames
# + [markdown] slideshow={"slide_type": "fragment"}
# ##### [Iris data set](https://en.wikipedia.org/wiki/Iris_flower_data_set)
# + slideshow={"slide_type": "fragment"}
run(`head data/iris.csv`)
# + slideshow={"slide_type": "fragment"}
iris = readtable("data/iris.csv")
# + [markdown] slideshow={"slide_type": "slide"}
# Descripciรณn (estadรญstica) del dataset (columnas), similar a `summary` de *R*.
# + slideshow={"slide_type": "fragment"}
describe(iris)
# + slideshow={"slide_type": "slide"}
using Gadfly # Similar a ggplot2 de R
# + slideshow={"slide_type": "fragment"}
plot(iris, x="Species", y="PetalLength", color="Species", Geom.boxplot)
# + slideshow={"slide_type": "slide"}
plot(iris, color="Species", x="PetalLength", Geom.histogram)
# + slideshow={"slide_type": "slide"}
plot(iris, x=:PetalLength, y=:PetalWidth, color=:Species, Geom.point, Geom.smooth(method=:lm))
# + slideshow={"slide_type": "slide"}
# Pkg.add("GLM")
# + slideshow={"slide_type": "fragment"}
using GLM # Generalized linear models
linear = fit(LinearModel, PetalWidth ~ PetalLength, iris) # PetalLength en R: 0.4157554
# + slideshow={"slide_type": "slide"}
using Clustering
# + slideshow={"slide_type": "fragment"}
cl = kmeans(convert(Matrix{Float64}, iris[:, [:PetalWidth, :PetalLength]])', 3)
# + slideshow={"slide_type": "fragment"}
cl.centers
# + slideshow={"slide_type": "fragment"}
by(iris, :Species, df -> (mean(df[:PetalWidth]), mean(df[:PetalLength])))
| BahiaBlanca2015/A day with (the) Julia (language) [ Analisis de datos ].ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Leveraging Qiskit Runtime
# Iterative algorithms, such as the Variational Quantum Eigensolver (VQE), traditionally send one batch of circuits (one "job") to be executed on the quantum device in each iteration. Sending a job involves certain overhead, mainly
#
# * the time to process the requests and send the data (API overhead, usually about 10s)
# * the job queue time, that is how long you have to wait before it's your turn to run on the device (usually about 2min)
#
# If we send hundreds of jobs iteratively, this overhead quickly dominates the execution time of our algorithm.
# Qiskit Runtime allows us to tackle these issues and significantly speed up (especially) iterative algorithms. With Qiskit Runtime, one job does not contain only a batch of circuits but the _entire_ algorithm. That means we only experience the API overhead and queue wait _once_ instead of in every iteration! You'll be able to either upload algorithm parameters and delegate all the complexity to the cloud, where your program is executed, or upload your personal algorithm directly.
#
# For the VQE, the integration of Qiskit Runtime in your existing code is a piece of cake. There is a (almost) drop-in replacement, called `VQEProgram` for the `VQE` class.
#
# Let's see how you can leverage the runtime on a simple chemistry example: Finding the ground state energy of the lithium hydrate (LiH) molecule at a given bond distance.
# ## Problem specification: LiH molecule
#
# First, we specify the molecule whose ground state energy we seek. Here, we look at LiH with a bond distance of 2.5 ร
.
from qiskit_nature.drivers import PySCFDriver, UnitsType, Molecule
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
from qiskit_nature.converters.second_quantization import QubitConverter
from qiskit_nature.mappers.second_quantization import ParityMapper
from qiskit_nature.transformers import ActiveSpaceTransformer
# +
bond_distance = 2.5 # in Angstrom
# define molecule
molecule = Molecule(geometry=[['Li', [0., 0., 0.]],
['H', [0., 0., bond_distance]]],
charge=0,
multiplicity=1)
# specify driver
driver = PySCFDriver(molecule=molecule, unit=UnitsType.ANGSTROM, basis='sto3g')
q_molecule = driver.run()
# specify active space transformation
active_space_trafo = ActiveSpaceTransformer(num_electrons=(q_molecule.num_alpha, q_molecule.num_beta),
num_molecular_orbitals=3)
# define electronic structure problem
problem = ElectronicStructureProblem(driver, q_molecule_transformers=[active_space_trafo])
# construct qubit converter (parity mapping + 2-qubit reduction)
qubit_converter = QubitConverter(ParityMapper(), two_qubit_reduction=True)
# -
# ## Classical reference solution
# As a reference solution we can solve this system classically with the `NumPyEigensolver`.
# +
from qiskit.algorithms import NumPyMinimumEigensolver
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
np_solver = NumPyMinimumEigensolver()
np_groundstate_solver = GroundStateEigensolver(qubit_converter, np_solver)
np_result = np_groundstate_solver.solve(problem)
# -
import numpy as np
target_energy = np.real(np_result.eigenenergies + np_result.nuclear_repulsion_energy)[0]
print('Energy:', target_energy)
# ## VQE
#
# To run the VQE we need to select a parameterized quantum circuit acting as ansatz and a classical optimizer. Here we'll choose a heuristic, hardware efficient ansatz and the SPSA optimizer.
# +
from qiskit.circuit.library import EfficientSU2
ansatz = EfficientSU2(num_qubits=4, reps=1, entanglement='linear', insert_barriers=True)
ansatz.draw('mpl', style='iqx')
# +
from qiskit.algorithms.optimizers import SPSA
optimizer = SPSA(maxiter=100)
np.random.seed(5) # fix seed for reproducibility
initial_point = np.random.random(ansatz.num_parameters)
# -
# Before executing the VQE in the cloud using Qiskit Runtime, let's execute a local VQE first.
# +
from qiskit.providers.basicaer import QasmSimulatorPy # local simulator
from qiskit.algorithms import VQE
local_vqe = VQE(ansatz=ansatz,
optimizer=optimizer,
initial_point=initial_point,
quantum_instance=QasmSimulatorPy())
local_vqe_groundstate_solver = GroundStateEigensolver(qubit_converter, local_vqe)
local_vqe_result = local_vqe_groundstate_solver.solve(problem)
# -
print('Energy:', np.real(local_vqe_result.eigenenergies + local_vqe_result.nuclear_repulsion_energy)[0])
# ## Runtime VQE
#
# Let's exchange the eigensolver from a local VQE algorithm to a VQE executed using Qiskit Runtime -- simply by exchanging the `VQE` class by the `VQEProgram`.
#
# First, we'll have to load a provider to access Qiskit Runtime. **Note:** You have to replace the next cell with your provider.
# +
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(project='qiskit-runtime') # replace by your runtime provider
backend = provider.get_backend('ibmq_montreal') # select a backend that supports the runtime
# -
# Now we can set up the `VQEProgram`. In this first release, the optimizer must be provided as a dictionary, in future releases you'll be able to pass the same optimizer object as in the traditional VQE.
# +
from qiskit_nature.runtime import VQEProgram
# currently the VQEProgram supports only 'SPSA' and 'QN-SPSA'
optimizer = {
'name': 'QN-SPSA', # leverage the Quantum Natural SPSA
# 'name': 'SPSA', # set to ordinary SPSA
'maxiter': 100,
'resamplings': {1: 100}, # 100 samples of the QFI for the first step, then 1 sample per step
}
runtime_vqe = VQEProgram(ansatz=ansatz,
optimizer=optimizer,
initial_point=initial_point,
provider=provider,
backend=backend,
shots=1024,
measurement_error_mitigation=True) # use a complete measurement fitter for error mitigation
# -
runtime_vqe_groundstate_solver = GroundStateEigensolver(qubit_converter, runtime_vqe)
runtime_vqe_result = runtime_vqe_groundstate_solver.solve(problem)
print('Energy:', np.real(runtime_vqe_result.eigenenergies + runtime_vqe_result.nuclear_repulsion_energy)[0])
# If we are interested in the development of the energy, the `VQEProgram` allows access to the history of the optimizer, which contains the loss per iteration (along with the parameters and a timestamp).
# We can access this data via the `raw_result` attribute of the ground state solver.
vqeprogram_result = runtime_vqe_result.raw_result
history = vqeprogram_result.optimizer_history
loss = history['loss']
# +
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 14
# plot loss and reference value
plt.figure(figsize=(12,6))
plt.plot(loss + runtime_vqe_result.nuclear_repulsion_energy, label='Runtime VQE')
plt.axhline(y=target_energy + 0.2, color='tab:red', ls=':', label='Target + 200mH')
plt.axhline(y=target_energy, color='tab:red', ls='--', label='Target')
plt.legend(loc='best')
plt.xlabel('Iteration')
plt.ylabel('Energy [H]')
plt.title('VQE energy');
# -
| docs/tutorials/07_leveraging_qiskit_runtime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sched Square
#
# This tutorial includes everything you need to set up decision optimization engines, build constraint programming models.
#
#
# When you finish this tutorial, you'll have a foundational knowledge of _Prescriptive Analytics_.
#
# >This notebook is part of **[Prescriptive Analytics for Python](http://ibmdecisionoptimization.github.io/docplex-doc/)**
# >
# >It requires either an [installation of CPLEX Optimizers](http://ibmdecisionoptimization.github.io/docplex-doc/getting_started.html) or it can be run on [IBM Cloud Pak for Data as a Service](https://www.ibm.com/products/cloud-pak-for-data/as-a-service/) (Sign up for a [free IBM Cloud account](https://dataplatform.cloud.ibm.com/registration/stepone?context=wdp&apps=all>)
# and you can start using `IBM Cloud Pak for Data as a Service` right away).
# >
# > CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
# > - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
# > - <i>Python 3.x</i> runtime: Community edition
# > - <i>Python 3.x + DO</i> runtime: full edition
# > - <i>Cloud Pack for Data</i>: Community edition is installed by default. Please install `DO` addon in `Watson Studio Premium` for the full edition
#
# Table of contents:
#
# - [Describe the business problem](#Describe-the-business-problem)
# * [How decision optimization (prescriptive analytics) can help](#How--decision-optimization-can-help)
# * [Use decision optimization](#Use-decision-optimization)
# * [Step 1: Download the library](#Step-1:-Download-the-library)
# * [Step 2: Model the Data](#Step-2:-Model-the-data)
# * [Step 3: Set up the prescriptive model](#Step-3:-Set-up-the-prescriptive-model)
# * [Define the decision variables](#Define-the-decision-variables)
# * [Express the business constraints](#Express-the-business-constraints)
# * [Express the search phase](#Express-the-search-phase)
# * [Solve with Decision Optimization solve service](#Solve-with-Decision-Optimization-solve-service)
# * [Step 4: Investigate the solution and run an example analysis](#Step-4:-Investigate-the-solution-and-then-run-an-example-analysis)
# * [Summary](#Summary)
# ****
# ### Describe the business problem
#
# * The aim of the square example is to place a set of small squares of different sizes into a large square.
# *****
# ## How decision optimization can help
# * Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes.
#
# * Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
#
# * Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
# <br/>
#
# + For example:
# + Automate complex decisions and trade-offs to better manage limited resources.
# + Take advantage of a future opportunity or mitigate a future risk.
# + Proactively update recommendations based on changing events.
# + Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
#
# ## Use decision optimization
# ### Step 1: Download the library
#
# Run the following code to install Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
import sys
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
# !pip install docplex
else:
# !pip install --user docplex
# Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization.
# ### Step 2: Model the data
from docplex.cp.model import *
# Size of the englobing square
SIZE_SQUARE = 112
# Sizes of the sub-squares
SIZE_SUBSQUARE = [50, 42, 37, 35, 33, 29, 27, 25, 24, 19, 18, 17, 16, 15, 11, 9, 8, 7, 6, 4, 2]
# ### Step 3: Set up the prescriptive model
mdl = CpoModel(name="SchedSquare")
# #### Define the decision variables
# ##### Create array of variables for sub-squares
# +
x = []
y = []
rx = pulse((0, 0), 0)
ry = pulse((0, 0), 0)
for i in range(len(SIZE_SUBSQUARE)):
sq = SIZE_SUBSQUARE[i]
vx = interval_var(size=sq, name="X" + str(i))
vx.set_end((0, SIZE_SQUARE))
x.append(vx)
rx += pulse(vx, sq)
vy = interval_var(size=sq, name="Y" + str(i))
vy.set_end((0, SIZE_SQUARE))
y.append(vy)
ry += pulse(vy, sq)
# -
# #### Express the business constraints
# ##### Create dependencies between variables
for i in range(len(SIZE_SUBSQUARE)):
for j in range(i):
mdl.add((end_of(x[i]) <= start_of(x[j]))
| (end_of(x[j]) <= start_of(x[i]))
| (end_of(y[i]) <= start_of(y[j]))
| (end_of(y[j]) <= start_of(y[i])))
# ##### Set other constraints
mdl.add(always_in(rx, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
mdl.add(always_in(ry, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
# #### Express the search phase
mdl.set_search_phases([search_phase(x), search_phase(y)])
# #### Solve with Decision Optimization solve service
msol = mdl.solve(TimeLimit=20)
# ### Step 4: Investigate the solution and then run an example analysis
# #### Print Solution
print("Solution: ")
msol.print_solution()
# #### Import graphical tools
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
# *You can set __POP\_UP\_GRAPHIC=True__ if you prefer a pop up graphic window instead of an inline one.*
POP_UP_GRAPHIC=False
if msol and visu.is_visu_enabled():
import matplotlib.cm as cm
from matplotlib.patches import Polygon
if not POP_UP_GRAPHIC:
# %matplotlib inline
# Plot external square
print("Plotting squares....")
fig, ax = plt.subplots()
plt.plot((0, 0), (0, SIZE_SQUARE), (SIZE_SQUARE, SIZE_SQUARE), (SIZE_SQUARE, 0))
for i in range(len(SIZE_SUBSQUARE)):
# Display square i
(sx, sy) = (msol.get_var_solution(x[i]), msol.get_var_solution(y[i]))
(sx1, sx2, sy1, sy2) = (sx.get_start(), sx.get_end(), sy.get_start(), sy.get_end())
poly = Polygon([(sx1, sy1), (sx1, sy2), (sx2, sy2), (sx2, sy1)], fc=cm.Set2(float(i) / len(SIZE_SUBSQUARE)))
ax.add_patch(poly)
# Display identifier of square i at its center
ax.text(float(sx1 + sx2) / 2, float(sy1 + sy2) / 2, str(SIZE_SUBSQUARE[i]), ha='center', va='center')
plt.margins(0)
plt.show()
# ## Summary
#
# You learned how to set up and use the IBM Decision Optimization CPLEX Modeling for Python to formulate and solve a Constraint Programming model.
# #### References
# * [CPLEX Modeling for Python documentation](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)
# * [IBM Decision Optimization](https://www.ibm.com/analytics/decision-optimization)
# * Need help with DOcplex or to report a bug? Please go [here](https://stackoverflow.com/questions/tagged/docplex)
# * Contact us at <EMAIL>
# Copyright ยฉ 2017, 2021 IBM. IPLA licensed Sample Materials.
| examples/cp/jupyter/sched_square.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from pandas import read_csv
from matplotlib import pyplot
from statsmodels.tsa.ar_model import AR
from sklearn.metrics import mean_squared_error
series = read_csv('./daily-min-temperatures.csv', header=0, index_col=0)
# split dataset
X = series.values
train, test = X[1:len(X)-7], X[len(X)-7:]
# train autoregression
model = AR(train)
model_fit = model.fit()
# The optimal lag is selectd from training process
window = model_fit.k_ar
coef = model_fit.params
# History is the last lag observations
history = train[len(train)-window:]
# convert from ndarray to list
history = [history[i] for i in range(len(history))]
predictions = list()
# print(history)
print("length of history :", len(history))
# The coefficients are provided in an array with the intercept term followed by the coefficients for each lag variable starting at **t-1 to t-n**. We simply need to use them in the right order on the history of observations, as follows:
#
# yhat = b0 + b1*X1 + b2*X2 ... bn*Xn
#
for t in range(len(test)):
length = len(history)
lag = [history[i] for i in range(length-window,length)]
yhat = coef[0]
for d in range(window):
yhat += coef[d+1] * lag[window-d-1]
obs = test[t]
predictions.append(yhat)
history.append(obs)
print('predicted=%f, expected=%f' % (yhat, obs))
error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)
# plot
pyplot.plot(test)
pyplot.plot(predictions, color='red')
pyplot.show()
| jupyter/Time_Series_Test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Matplotlib workflow
#
# 
# %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
plt.plot();
plt.plot([1,2,3,4])
plt.show()
x = [1,2,3,4]
y = [11,22,33,44]
plt.plot(x,y);
# 1st method
fig = plt.figure() # creates a figure
ax = fig.add_subplot()
plt.show()
# 2nd method
fig = plt.figure()
ax = fig.add_axes([1,1,1,1])
ax.plot(x,y) #add some data
plt.show()
# 3rd method
fig, ax = plt.subplots()
ax.plot(x, y); #add some data
type(fig), type(ax)
# ### Matplotlib example workflow
# +
# 0 import matplotlib (did in cell 1)
# 1. Prepare Data
x = [1,2,3,4]
y = [11,22,33,44]
# 2. Setup plot
fig, ax = plt.subplots(figsize=(10,10))
# 3. Plot data
ax.plot(x,y)
# 4. Customize plot
ax.set(title = "Simple Plot",
xlabel = "x-axis",
ylabel = "y-axis")
# 5. Save & show (you save the whole figure)
fig.savefig("../data/sample-plot.png")
# -
# ### Making figures with NumPy arrays
# Create some data
x = np.linspace(0, 10, 100)
x[:20]
# Plot the data and create a line plot
fig, ax = plt.subplots()
ax.plot(x, x**2);
# Use same data to make a scatter
fig, ax = plt.subplots()
ax.scatter(x, np.exp(x));
# Another scatter plot
fig, ax = plt.subplots()
ax.scatter(x, np.sin(x));
# Make a plot from dictionary
nut_butter_prices = {"Almond butter": 10,
"Peanut butter": 8,
"Cashew butter": 12}
fig, ax = plt.subplots()
ax.bar(nut_butter_prices.keys(), height = nut_butter_prices.values())
ax.set(title = "Danยดs Nut Butter Store",
ylabel="Price ($)");
fig, ax = plt.subplots()
ax.barh(list(nut_butter_prices.keys()), list(nut_butter_prices.values()))
# make some data for histograms and plot it
x = np.random.randn(1000)
fig, ax = plt.subplots()
ax.hist(x);
# #### Two options for subplots
# +
# Subplot option 1
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows = 2,
ncols = 2,
figsize = (10, 5))
# Plot to each different axis
ax1.plot (x, x/2);
ax2.scatter(np.random.random(10), np.random.random(10));
ax3.bar(nut_butter_prices.keys(), nut_butter_prices.values());
ax4.hist(np.random.randn(1000));
# +
# Subplots option 2
fig, ax = plt.subplots(nrows = 2,
ncols = 2,
figsize = (10, 5))
# Plot to each different index
ax[0, 0].plot(x, x/2);
ax[0, 1].scatter(np.random.random(10), np.random.random(10));
ax[1, 0].bar(nut_butter_prices.keys(), nut_butter_prices.values());
ax[1, 1].hist(np.random.randn(1000));
# -
# ## Plotting from pandas DataFrames
# Make a dataframe
car_sales = pd.read_csv("../data/car-sales.csv")
car_sales.head()
# Example
ts = pd.Series(np.random.randn(1000),
index = pd.date_range("1/1/2021", periods = 1000))
ts = ts.cumsum()
ts.plot()
#remove the $ simbol from the Price table
car_sales["Price"] = car_sales["Price"].str.replace('[\$\,\.]', '')
car_sales
# Remove last two zeros
car_sales["Price"] = car_sales["Price"].str[:-2]
car_sales
car_sales["Sale Date"] = pd.date_range("1/1/2021", periods = len(car_sales))
car_sales
type(car_sales["Price"][0])
car_sales["Total Sales"] = car_sales["Price"].astype(int).cumsum()
car_sales.head()
# Letยดs plot the total sales
car_sales.plot(x = "Sale Date", y = "Total Sales");
# +
# Reassign price column to int
car_sales["Price"] = car_sales["Price"].astype(int)
# Plot scatter plot with price as numeric
car_sales.plot(x = "Odometer (KM)", y = "Price", kind = "scatter");
# -
# How about a bar graph?
x = np.random.rand(10, 4)
# Turn it into a dataframe
df = pd.DataFrame(x, columns = ['a', 'b', 'c', 'd'])
df
df.plot.bar();
df.plot(kind = "bar");
car_sales.head()
car_sales.plot(x = "Make", y = "Odometer (KM)", kind = "bar");
# +
# How about histograms?
car_sales["Odometer (KM)"].plot.hist();
# -
car_sales["Odometer (KM)"].plot(kind = "hist");
car_sales["Odometer (KM)"].plot.hist(bins=10);
# Letยดs try on another dataset
heart_disease = pd.read_csv("../data/heart-disease.csv")
heart_disease.head()
# Create a histogram of age
heart_disease["age"].plot.hist(bins = 10);
heart_disease.plot.hist(subplots = True, figsize=(10, 30));
# ### Wich one shold you use? (pyplot vs matplotlib OO method?)
#
# * When plotting something quickly, okay to use the pyplot method
# * When plotting something more advanced, use the OO method
over_50 = heart_disease[heart_disease["age"] > 50]
over_50.head()
# Pyplot method
over_50.plot(kind='scatter',
x = 'age',
y = 'chol',
c = 'target');
# +
# OO method with pyplot method
fix, ax = plt.subplots(figsize = (10, 6))
over_50.plot(kind = 'scatter',
x = 'age',
y = 'chol',
c = 'target',
ax = ax);
#ax.set_xlim([45, 100])
# +
# OO method from scratch
fig, ax = plt.subplots(figsize = (10, 6))
#Plot the data
scatter = ax.scatter(x = over_50["age"],
y = over_50["chol"],
c = over_50["target"])
# Customize the plot
ax.set(title = "Heart Disease and Cholesterol Levels",
xlabel = "Age",
ylabel = "Cholesterol")
# Add a legend
ax.legend(*scatter.legend_elements(), title="Target");
# Add a horizontal line
ax.axhline(over_50["chol"].mean(),
linestyle='--');
# -
over_50.head()
# +
# Subplot of chol, age, thalach
fig, (ax0, ax1) = plt.subplots(nrows = 2,
ncols = 1,
figsize = (10, 10),
sharex=True)
# Add data to ax0
scatter = ax0.scatter(x = over_50["age"],
y = over_50["chol"],
c = over_50["target"])
#customize ax0
ax0.set(title = "Heart Disease and cholesterol Levels",
ylabel = "Cholesterol")
#Add a legend to ax0
ax0.legend(*scatter.legend_elements(), title = "Target")
#Add a meanline
ax.axhline(y=over_50["chol"].mean(),
linestyle="--");
# Add data do ax1
scatter = ax1.scatter(x = over_50["age"],
y = over_50["thalach"],
c = over_50["target"])
#customize ax1
ax1.set(title = "Heart Diesease and Max Heart Rate",
xlabel = "Age",
ylabel = "Max Heart Rate")
# Add a Legend to ax1
ax1.legend(*scatter.legend_elements(), title = "Target")
#Add a mealine
ax1.axhline(y=over_50["thalach"].mean(),
linestyle = "--")
# Add a title to the figure
fig.suptitle("Heart Disease Analysis", fontsize = 16, fontweight = "bold");
# -
# ## Customizing Matplotlib plots and getting stylish
# See the different styles available
plt.style.available
car_sales["Price"].plot()
plt.style.use('seaborn-darkgrid')
car_sales["Price"].plot()
plt.style.use('seaborn')
car_sales["Price"].plot()
# +
# Set the Style
plt.style.use('seaborn-whitegrid')
# OO method from scratch
fig, ax = plt.subplots(figsize = (10, 6))
#Plot the data
scatter = ax.scatter(x = over_50["age"],
y = over_50["chol"],
c = over_50["target"],
cmap = "winter") #this changes the color sheme
# Customize the plot
ax.set(title = "Heart Disease and Cholesterol Levels",
xlabel = "Age",
ylabel = "Cholesterol")
# Add a legend
ax.legend(*scatter.legend_elements(), title="Target");
# Add a horizontal line
ax.axhline(over_50["chol"].mean(),
linestyle='--');
#see matplotlib color documentation
# +
# Customizing the y and x axis limitations
# Subplot of chol, age, thalach
fig, (ax0, ax1) = plt.subplots(nrows = 2,
ncols = 1,
figsize = (10, 10),
sharex=True)
# Add data to ax0
scatter = ax0.scatter(x = over_50["age"],
y = over_50["chol"],
c = over_50["target"],
cmap = "winter")
#customize ax0
ax0.set(title = "Heart Disease and cholesterol Levels",
ylabel = "Cholesterol")
# Change the x axis limits
ax0.set_xlim([50, 80])
#Add a legend to ax0
ax0.legend(*scatter.legend_elements(), title = "Target")
#Add a meanline
ax0.axhline(y=over_50["chol"].mean(),
linestyle="--");
# Add data do ax1
scatter = ax1.scatter(x = over_50["age"],
y = over_50["thalach"],
c = over_50["target"],
cmap = "winter")
#customize ax1
ax1.set(title = "Heart Diesease and Max Heart Rate",
xlabel = "Age",
ylabel = "Max Heart Rate")
# Change the ax1 axis limits
ax1.set_xlim([50, 80])
ax1.set_ylim([60,200])
# Add a Legend to ax1
ax1.legend(*scatter.legend_elements(), title = "Target")
#Add a mealine
ax1.axhline(y=over_50["thalach"].mean(),
linestyle = "--")
# Add a title to the figure
fig.suptitle("Heart Disease Analysis", fontsize = 16, fontweight = "bold");
# -
fig
fig.savefig("../data/heart-disease-analysis-plot.png")
| notebooks/Matplotlib Data Visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
#
# <img src="https://www.continuum.io/sites/default/files/dask_stacked.png" alt="DASK logo" align="right" style="width: 200px;"/>
#
# # DASK Tests
# Dask: http://dask.pydata.org/en/latest/
#
# Dask.distributed https://distributed.readthedocs.io/en/latest/
#
#
#
#
import dask
import distributed
import bokeh
import tornado
import numpy as np
# ## Versions
print('dask', dask.__version__)
print('distributed', distributed.__version__)
print('bokeh', bokeh.__version__ )
print('tornado', tornado.version)
print('numpy', np.__version__)
# ## Set up distributed workers
client = distributed.Client()
client
# ## Example
import dask.array as da
import dask.dot
A = da.random.random(100, chunks=(25,))
A
dask.dot.dot_graph(A.dask)
M = A.mean()
M
dask.dot.dot_graph(M.dask, rankdir='LR')
M.compute()
from distributed.diagnostics import progressbar
A = da.random.random(100000, chunks=(25,))
M = A.mean()
f = client.compute(M)
progressbar.progress(f, multi=False)
f = client.compute(M)
f
progressbar.progress(f)
| ContributedExamples/Dask Distributed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:analysis]
# language: python
# name: conda-env-analysis-py
# ---
# # Compute a global mean, annual mean timeseries from the CESM Large Ensemble
# +
# %matplotlib inline
import os
import socket
from tqdm import tqdm
import dask
import dask.distributed
import ncar_jobqueue
import xarray as xr
import numpy as np
import esmlab
import intake
import intake_esm
import matplotlib.pyplot as plt
# -
# ## Connect to the `intake-esm` data catalog
#
# An input file `cesm1-le-collection.yml` specifies where to look for files and assembles a database for the CESM-LE. `intake-esm` configuration settings are stored by default in ~/.intake_esm/config.yaml or locally in .intake_esm/config.yaml. Key things to specify are the `database_directory`, which is where the catalog data file (csv) is written to disk.
col = intake.open_esm_metadatastore(
collection_input_definition='cesm1-le-collection.yml',
overwrite_existing=False)
col.df.info()
# ## Compute grid weights for a global mean
#
# ### Load a dataset and read in the grid variables
# To compute a properly-weighted spatial mean, we need a cell-volume array. We'll pick out the necessary grid variables from a single file. First, let's get an arbitrary POP history file from the catalog.
arbitrary_pop_file = col.search(experiment='20C', stream='pop.h').query_results.file_fullpath.tolist()[0]
ds = xr.open_dataset(arbitrary_pop_file, decode_times=False, decode_coords=False)
grid_vars = ['KMT', 'z_t', 'TAREA', 'dz']
ds = ds.drop([v for v in ds.variables if v not in grid_vars]).compute()
ds
# ### Compute a 3D topography mask
# Now we'll compute the 3D volume field, masked appropriate by the topography.
#
# First step is to create the land mask.
# +
nk = len(ds.z_t)
nj = ds.KMT.shape[0]
ni = ds.KMT.shape[1]
# make 3D array of 0:km
k_vector_one_to_km = xr.DataArray(np.arange(0, nk), dims=('z_t'), coords={'z_t': ds.z_t})
ONES_3d = xr.DataArray(np.ones((nk, nj, ni)), dims=('z_t', 'nlat', 'nlon'), coords={'z_t': ds.z_t})
MASK = (k_vector_one_to_km * ONES_3d)
# mask out cells where k is below KMT
MASK = MASK.where(MASK <= ds.KMT - 1)
MASK = xr.where(MASK.notnull(), 1., 0.)
plt.figure()
MASK.isel(z_t=0).plot()
plt.title('Surface mask')
plt.figure()
MASK.isel(nlon=200).plot(yincrease=False)
plt.title('Pacific transect')
# -
# ### Compute the 3D volume field
#
# Now we'll compute the masked volume field by multiplying `z_t` by `TAREA` by the mask created above.
# +
MASKED_VOL = ds.dz * ds.TAREA * MASK
MASKED_VOL.attrs['units'] = 'cm^3'
MASKED_VOL.attrs['long_name'] = 'masked volume'
plt.figure()
MASKED_VOL.isel(z_t=0).plot()
plt.title('Surface mask')
plt.figure()
MASKED_VOL.isel(nlon=200).plot(yincrease=False)
plt.title('Pacific transect')
# -
# ## Compute global-mean, annual-means across the ensemble
#
# ### Find the ensemble members that have ocean biogeochemistry
# (several of the runs had corrupted BGC fields)
member_ids = col.search(experiment=['20C', 'RCP85'], has_ocean_bgc=True).query_results.ensemble.unique().tolist()
print(member_ids)
# ### Spin up a dask cluster
#
# We are using `ncar_jobqueue.NCARCluster`; this just passes thru to `dask_jobqueue.PBSCluster` or `dask_jobqueue.SLURMCluster` depending on whether you are on Cheyenne or a DAV machine.
#
# **Note**: `dask_jobqueue.SLURMCluster` does not work on Cheyenne compute nodes, though the cluster jobs will start giving the appearance of functionality.
#
# Default arguments to `ncar_jobqueue.NCARCluster` are set in `~/.config/dask/jobqueue.yaml`; you can over-ride these defaults by passing in arguments directly here.
cluster = ncar_jobqueue.NCARCluster(walltime="00:20:00", cores=36, memory='350GB', processes=9)
client = dask.distributed.Client(cluster)
n_workers = 9 * 7
cluster.scale(n_workers)
# After the worker jobs have started, it's possible to view the client attributes.
# !squeue -u $USER
# Paste the dashboard link into the `DASK DASHBOARD URL` in the `dask-labextension` at right, replacing the part that looks sort of IP-adress-ish with the URL in your browser, excluding the `/lab...` part.
client
# ### Compute
#
# We'll loop over the ensemble and compute one at a time. In theory it should be possible to compute all at once, but in practice this doesn't seem to work.
variable = ['O2']
experiments = ['20C', 'RCP85']
query = dict(ensemble=member_ids, experiment=experiments[1],
stream='pop.h', variable=variable, direct_access=True)
col_subset = col.search(**query)
col_subset.query_results.info()
# %time ds = col_subset.to_xarray(decode_times=False, decode_coords=False, chunks={'time': 30})
ds
ds_1 = ds.copy()
# %time dso = esmlab.climatology.compute_ann_mean(ds)
dso
# %time dso = esmlab.statistics.weighted_mean(dso, weights=MASKED_VOL, dim=['z_t', 'nlat', 'nlon'])
dso
# +
#cluster.close()
# -
# # %%time
# variable = ['O2']
# dsets = []
# for member_id in member_ids:
# print(f'working on ensemble member {member_id}')
#
# query = dict(ensemble=member_id, experiment=['20C', 'RCP85'],
# stream='pop.h', variable=variable, direct_access=True)
#
# col_subset = col.search(**query)
#
# # get a dataset
# ds = col_subset.to_xarray()
#
# # compute annual means
# dso = esmlab.climatology.compute_ann_mean(ds)
#
# # compute global average
# dso = esmlab.statistics.weighted_mean(dso, weights=MASKED_VOL, dim=['z_t', 'nlat', 'nlon'])
#
# # compute the dataset
# dso = dso.compute()
# dsets.append(dso)
#
#
# ensemble_dim = xr.DataArray(member_ids, dims='member_id', name='member_id')
# ds = xr.concat(dsets, dim=ensemble_dim)
# ds
# cluster.close()
# for member_id in member_ids:
# ds.O2.sel(member_id=member_id).plot()
# set(ds.coords) - set(ds.dims)
| notebooks/experimental/cesm-le-global-integral-casper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D5_NetworkCausality/student/W3D5_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# -
# # Tutorial 2: Correlations
# **Week 3, Day 5: Network Causality**
#
# **By Neuromatch Academy**
#
# **Content creators**: <NAME>, <NAME>, <NAME>
#
# **Content reviewers**: <NAME>, <NAME>, <NAME>, <NAME>
#
# + [markdown] colab_type="text"
# **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
#
# <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# -
# ---
# # Tutorial objectives
#
# This is tutorial 2 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this tutorial in bold:
#
# 1. Master definitions of causality
# 2. Understand that estimating causality is possible
# 3. Learn 4 different methods and understand when they fail
# 1. perturbations
# 2. **correlations**
# 3. simultaneous fitting/regression
# 5. instrumental variables
#
# ### Tutorial 2 objectives
#
# In tutorial 1, we implemented and explored the dynamical system of neurons we will be working with throughout all of the tutorials today. We also learned about the "gold standard" of measuring causal effects through random perturbations. As random perturbations are often not possible, we will now turn to alternative methods to attempt to measure causality. We will:
#
# - Learn how to estimate connectivity from observations assuming **correlations approximate causation**
# - Show that this only works when the network is small
#
# ### Tutorial 2 setting
#
# Often, we can't force neural activites or brain areas to be on or off. We just have to observe. Maybe we can get the correlation between two nodes -- is that good enough? The question we ask in this tutorial is **when is correlation a "good enough" substitute for causation?**
#
# The answer is not "never", actually, but "sometimes".
#
#
# ---
# # Setup
# + cellView="both"
import numpy as np
import matplotlib.pyplot as plt
# + cellView="form"
#@title Figure settings
import ipywidgets as widgets # interactive display
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# + cellView="form"
# @title Helper functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42, p=0.9):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
random_state (int): random seed for reproducibility
Returns:
A (np.ndarray): our 0.1 sparse connectivity matrix
"""
np.random.seed(random_state)
A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])
# set the timescale of the dynamical system to about 100 steps
_, s_vals, _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def see_neurons(A, ax):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
A = A.T # make up for opposite connectivity
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0, np.pi * 2, n, endpoint=False)
x, y = np.cos(thetas), np.sin(thetas),
ax.scatter(x, y, c='k', s=150)
A = A / A.max()
for i in range(n):
for j in range(n):
if A[i, j] > 0:
ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', alpha=A[i, j], head_width=.15,
width = A[i,j] / 25, shape='right', length_includes_head=True)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
- X has shape (n_neurons, timeteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps - 1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0, 1]
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
estimated_connectivity (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def plot_estimation_quality_vs_n_neurons(number_of_neurons):
"""
A wrapper function that calculates correlation between true and estimated connectivity
matrices for each number of neurons and plots
Args:
number_of_neurons (list): list of different number of neurons for modeling system
corr_func (function): Function for computing correlation
"""
corr_data = np.zeros((n_trials, len(number_of_neurons)))
for trial in range(n_trials):
print("simulating trial {} of {}".format(trial + 1, n_trials))
for j, size in enumerate(number_of_neurons):
corr = get_sys_corr(size, timesteps, trial)
corr_data[trial, j] = corr
corr_mean = corr_data.mean(axis=0)
corr_std = corr_data.std(axis=0)
plt.plot(number_of_neurons, corr_mean)
plt.fill_between(number_of_neurons,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlabel("Number of neurons")
plt.ylabel("Correlation")
plt.title("Similarity between A and R as a function of network size")
plt.show()
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
im = ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
ax.tick_params(labelsize=10)
ax.xaxis.label.set_size(15)
ax.yaxis.label.set_size(15)
cbar = ax.figure.colorbar(im, ax=ax, ticks=[0], shrink=.7)
cbar.ax.set_ylabel("Connectivity Strength", rotation=90,
labelpad= 20, va="bottom")
ax.set(xlabel="Connectivity from", ylabel="Connectivity to")
def plot_true_vs_estimated_connectivity(estimated_connectivity, true_connectivity, selected_neuron=None):
"""Visualize true vs estimated connectivity matrices
Args:
estimated_connectivity (ndarray): estimated connectivity (n_neurons by n_neurons)
true_connectivity (ndarray): ground-truth connectivity (n_neurons by n_neurons)
selected_neuron (int or None): None if plotting all connectivity, otherwise connectivity
from selected_neuron will be shown
"""
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
if selected_neuron is not None:
plot_connectivity_matrix(np.expand_dims(estimated_connectivity, axis=1), ax=axs[0])
plot_connectivity_matrix(true_connectivity[:, [selected_neuron]], ax=axs[1])
axs[0].set_xticks([0])
axs[1].set_xticks([0])
axs[0].set_xticklabels([selected_neuron])
axs[1].set_xticklabels([selected_neuron])
else:
plot_connectivity_matrix(estimated_connectivity, ax=axs[0])
plot_connectivity_matrix(true_connectivity, ax=axs[1])
axs[1].set(title="True connectivity")
axs[0].set(title="Estimated connectivity")
# -
# ---
# # Section 1: Small systems
#
#
# + cellView="form"
# @title Video 1: Correlation vs causation
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Ak4y1m7kk", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="vjBO-S7KNPI", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# ## Exercise 1: Try to approximate causation with correlation
#
# In small systems, correlation can look like causation. Let's attempt to recover the true connectivity matrix (A) just by correlating the neural state at each timestep with the previous state: $C=\vec{x_t}{\vec{x_{t+1}}^T}$.
#
# Complete this function to estimate the connectivity matrix of a single neuron by calculating the correlation coefficients with every other neuron at the next timestep. That is, please correlate two vectors: 1) the activity of a selected neuron at time $t$ 2) The activity of all other neurons at time t+1.
# +
def compute_connectivity_from_single_neuron(X, selected_neuron):
"""
Computes the connectivity matrix from a single neuron neurons using correlations
Args:
X (ndarray): the matrix of activities
selected_neuron (int): the index of the selected neuron
Returns:
estimated_connectivity (ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)
"""
# Extract the current activity of selected_neuron, t
current_activity = X[selected_neuron, :-1]
# Extract the observed outcomes of all the neurons
next_activity = X[:, 1:]
# Initialize estimated connectivity matrix
estimated_connectivity = np.zeros(n_neurons)
# Loop through all neurons
for neuron_idx in range(n_neurons):
# Get the activity of neuron_idx
this_output_activity = next_activity[neuron_idx]
########################################################################
## TODO: Estimate the neural correlations between
## this_output_activity and current_activity
## ------------------- ----------------
##
## Note that np.corrcoef returns the full correlation matrix; we want the
## top right corner, which we have already provided.
## FIll out function and remove
raise NotImplementedError('Compute neural correlations')
########################################################################
# Compute correlation
correlation = np.corrcoef(...)[0, 1]
# Store this neuron's correlation
estimated_connectivity[neuron_idx] = correlation
return estimated_connectivity
# Simulate a 6 neuron system for 5000 timesteps again.
n_neurons = 6
timesteps = 5000
selected_neuron = 1
# Invoke a helper function that generates our nxn causal connectivity matrix
A = create_connectivity(n_neurons)
# Invoke a helper function that simulates the neural activity
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
# estimated_connectivity = compute_connectivity_from_single_neuron(X, selected_neuron)
# plot_true_vs_estimated_connectivity(estimated_connectivity, A, selected_neuron)
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D5_NetworkCausality/solutions/W3D5_Tutorial2_Solution_b80b474b.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=486 height=341 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D5_NetworkCausality/static/W3D5_Tutorial2_Solution_b80b474b_0.png>
#
#
# -
# ## Section 1.1: Visualization for all neurons
#
# Hopefully you saw that it pretty much worked. We wrote a function that does what you just did but in matrix form, so it's a little faster. It also does all neurons at the same time (helper function `correlation_for_all_neurons`).
#
# + cellView="form"
#@markdown Execute this cell to visualize full estimated vs true connectivity
R = correlation_for_all_neurons(X)
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
see_neurons(A, axs[0])
plot_connectivity_matrix(A, ax=axs[1])
plt.suptitle("True connectivity matrix A")
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
see_neurons(R, axs[0])
plot_connectivity_matrix(R, ax=axs[1])
plt.suptitle("Estimated connectivity matrix R");
# -
# That pretty much worked too. Let's quantify how much it worked.
#
# We'll calculate the correlation coefficient between the true connectivity and the actual connectivity;
print("Correlation matrix of A and R:", np.corrcoef(A.flatten(), R.flatten())[0, 1])
# It *appears* in our system that correlation captures causality.
# + cellView="form"
# @title Video 2: Correlation ~ causation for small systems
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1XZ4y1u7FR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="eWLOnTUe9SM", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# **Video correction**: the connectivity graph plots and associated explanations in this and other videos show the wrong direction of connectivity (the arrows should be pointing the opposite direction). This has been fixed in the figures above.
# ---
# # Section 2: Large systems
#
# As our system becomes more complex however, correlation fails to capture causality.
# ## Section 2.1: Failure of correlation in complex systems
#
# Let's jump to a much bigger system. Instead of 6 neurons, we will now use 100 neurons. How does the estimation quality of the connectivity matrix change?
# + cellView="form"
# @markdown Execute this cell to simulate large system, estimate connectivity matrix with correlation and return estimation quality
# Simulate a 100 neuron system for 5000 timesteps.
n_neurons = 100
timesteps = 5000
random_state = 42
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
print("Correlation matrix of A and R:", np.corrcoef(A.flatten(), R.flatten())[0, 1])
fig, axs = plt.subplots(1, 2, figsize=(16, 8))
plot_connectivity_matrix(A, ax=axs[0])
axs[0].set_title("True connectivity matrix A")
plot_connectivity_matrix(R, ax=axs[1])
axs[1].set_title("Estimated connectivity matrix R");
# + cellView="form"
# @title Video 3: Correlation vs causation in large systems
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1uC4y1b76C", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="U4sV-7g8T08", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# ## Section 2.2: Correlation as a function of network size
#
#
# ### Interactive Demo: Connectivity estimation as a function of number of neurons
#
# Instead of looking at just a few neurons (6) or a lot of neurons (100), as above, we will now systematically vary the number of neurons and plot the resulting changes in correlation coefficient between the true and estimated connectivity matrices.
# + cellView="form"
#@markdown Execute this cell to enable demo
@widgets.interact(n_neurons=(6, 42, 3))
def plot_corrs(n_neurons=6):
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
timesteps = 2000
random_state = 42
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
corr = np.corrcoef(A.flatten(), R.flatten())[0, 1]
plot_connectivity_matrix(A, ax=axs[0])
plot_connectivity_matrix(R, ax=axs[1])
axs[0].set_title("True connectivity")
axs[1].set_title("Estimated connectivity")
axs[2].text(0, 0.5, "Correlation : {:.2f}".format(corr), size=15)
axs[2].axis('off')
# -
# Of course there is some variability due to randomness in $A$. Let's average over a few trials and find the relationship.
# + cellView="form"
#@markdown Execute this cell to plot connectivity estimation as a function of network size
n_trials = 5
timesteps = 1000 # shorter timesteps for faster running time
number_of_neurons = [5, 10, 25, 50, 100]
plot_estimation_quality_vs_n_neurons(number_of_neurons)
# -
# ### Interactive Demo: Connectivity estimation as a function of the sparsity of $A$
#
# You may rightly wonder if correlation only fails for large systems for certain types of $A$. In this interactive demo, you can examine connectivity estimation as a function of the sparsity of $A$. Does connectivity estimation get better or worse with less sparsity?
# + cellView="form"
#@title
#@markdown Execute this cell to enable demo
@widgets.interact(sparsity=(0.01, 0.99, .01))
def plot_corrs(sparsity=0.9):
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
timesteps = 2000
random_state = 42
n_neurons = 25
A = create_connectivity(n_neurons, random_state, sparsity)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
corr=np.corrcoef(A.flatten(), R.flatten())[0, 1]
plot_connectivity_matrix(A, ax=axs[0])
plot_connectivity_matrix(R, ax=axs[1])
axs[0].set_title("True connectivity")
axs[1].set_title("Estimated connectivity")
axs[2].text(0, 0.5, "Correlation : {:.2f}".format(corr), size=15)
axs[2].axis('off')
# -
# ---
# # Discussion questions
#
# Here are some questions you might chat about before break:
#
#
#
# * Think of a research paper you've written. Did it use previous causal knowledge (e.g. a mechanism), or ask a causal question? Try phrasing that causal relationship in the language of an intervention. ("*If I were to force $A$ to be $A'$, $B$ would...*")
# * What methods for interventions exist in your area of neuroscience?
# * Think about these common "filler" words. Do they imply a causal relationship, in its interventional definition? (*regulates, mediates, generates, modulates, shapes, underlies, produces, encodes, induces, enables, ensures, supports, promotes, determines*)
# * What dimensionality would you (very roughly) estimate the brain to be? Would you expect correlations between neurons to give you their connectivity? Why?
#
#
#
#
# ---
# # Summary
# + cellView="form"
# @title Video 4: Summary
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1KK4y1x74w", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="hRyAN3yak_U", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# Now for the takeaway. We know that for large systems correlation โ ย causation. But what about when we coarsely sample the large system? Do we get better at estimating the *effective* causal interaction between groups (=average of weights) from the correlation between the groups?
#
# From our simulation above, the answer appears to be no: as the number of neurons per group increases, we don't see any significant increase in our ability to estimate the causal interaction between groups.
# ---
# # Appendix
#
#
# ## Correlation as similarity metric
#
# We'd like to note here that though we make use of Pearson correlation coefficients throughout all of our tutorials to measure similarity between our estimated connectivity matrix $R$ and the ground truth connectivity $A$, this is not strictly correct usage of Pearson correlations as elements of $A$ are not normally distributed (they are in fact binary).
#
# We use Pearson correlations as they are quick and easy to compute within the Numpy framework and provide qualitatively similar results to other correlation metrics. Other ways to compute similarities:
# - [Spearman rank correlations](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient), which does not require normally distributed data
# - dichotomizing our estimated matrix $R$ by the median and then running concordance analysis, such as computing [Cohen's kappa](https://en.wikipedia.org/wiki/Cohen%27s_kappa)
#
# Another thing to consider: all we want is some measure of the similarity between $A$ and $R$. Element-wise comparisons are one way to do this, but are there other ways you can think of? What about matrix similarities?
#
#
# ---
# ## (Bonus) Advanced Section 3: Low resolution systems
#
# Please come back to this section only if you get through all the other material during tutorials (or later on your own). Or, just scroll through and see the example solutions.
#
# A common situation in neuroscience is that you observe the *average* activity of large groups of neurons. (Think fMRI, EEG, LFP, etc.)
# We're going to simulate this effect, and ask if correlations work to recover the average causal effect of groups of neurons or areas.
#
# **Note on the quality of the analogy**: This is not intended as a perfect analogy of the brain or fMRI. Instead, we want to ask: *in a big system in which correlations fail to estimate causality, can you at least recover average connectivity between groups?*
#
# **Some brainy differences to remember**:
# We are assuming that the connectivity is random. In real brains, the neurons that are averaged have correlated input and output connectivities. This will improve the correspondence between correlations and causality for the average effect because the system has a lower true dimensionality. However, in real brains the system is also order of magnitudes larger than what we examine here, and the experimenter never has the fully-observed system.
#
# ## Simulate a large system
#
# Execute the next cell to simulate a large system of 256 neurons for 10000 timesteps - it will take a bit of time to finish so move on as it runs.
#
# + cellView="form"
# @markdown Execute this cell to simulate a large system
n_neurons = 256
timesteps = 10000
random_state = 42
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# -
# ### Section 3.1: Coarsely sample the system
#
# We don't observe this system. Instead, we observe the average activity of groups.
#
# #### (Bonus) Exercise: Compute average activity across groups and compare resulting connectivity to the truth
#
# Let's get a new matrix `coarse_X` that has 16 groups, each reflecting the average activity of 16 neurons (since there are 256 neurons in total).
#
# We will then define the true coarse connectivity as the average of the neuronal connection strengths between groups. We'll compute the correlation between our coarsely sampled groups to estimate the connectivity and compare with the true connectivity.
# +
def get_coarse_corr(n_groups, X):
"""
A wrapper function for our correlation calculations between coarsely sampled
A and R.
Args:
n_groups (int): the number of groups. should divide the number of neurons evenly
X: the simulated system
Returns:
A single float correlation value representing the similarity between A and R
ndarray: estimated connectivity matrix
ndarray: true connectivity matrix
"""
############################################################################
## TODO: Insert your code here to get coarsely sampled X
# Fill out function then remove
raise NotImplementedError('Student exercise: please complete get_coarse_corr')
############################################################################
coarse_X = ...
# Make sure coarse_X is the right shape
assert coarse_X.shape == (n_groups, timesteps)
# Estimate connectivity from coarse system
R = correlation_for_all_neurons(coarse_X)
# Compute true coarse connectivity
coarse_A = A.reshape(n_groups, n_neurons // n_groups, n_groups, n_neurons // n_groups).mean(3).mean(1)
# Compute true vs estimated connectivity correlation
corr = np.corrcoef(coarse_A.flatten(), R.flatten())[0, 1]
return corr, R, coarse_A
n_groups = 16
# Uncomment below to test your function
# corr, R, coarse_A = get_coarse_corr(n_groups, X)
# plot_true_vs_estimated_connectivity(R, coarse_A)
# print("Correlation: {}".format(corr))
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D5_NetworkCausality/solutions/W3D5_Tutorial2_Solution_937da7b5.py)
#
# *Example output:*
#
# <img alt='Solution hint' align='left' width=698 height=300 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D5_NetworkCausality/static/W3D5_Tutorial2_Solution_937da7b5_1.png>
#
#
# -
# How close is the estimated coarse connectivity matrix to the truth?
# We will now look at the estimation quality for different levels of coarseness when averaged over 3 trials.
# + cellView="form"
#@markdown Execute this cell to visualize plot
n_neurons = 128
timesteps = 5000
n_trials = 3
groups = [2 ** i for i in range(2, int(np.log2(n_neurons)))]
corr_data = np.zeros((n_trials, len(groups)))
for trial in range(n_trials):
print("Trial {} out of {}".format(trial, n_trials))
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps, random_state=trial)
for j, n_groups in enumerate(groups):
corr_data[trial, j], _, _ = get_coarse_corr(n_groups, X)
corr_mean = corr_data.mean(axis=0)
corr_std = corr_data.std(axis=0)
plt.plot(np.divide(n_neurons, groups), corr_mean)
plt.fill_between(np.divide(n_neurons, groups),
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.ylim([-0.2, 1])
plt.xlabel("Number of neurons per group ({} total neurons)".format(n_neurons),
fontsize=15)
plt.ylabel("Correlation of estimated effective connectivity")
plt.title("Connectivity estimation performance vs coarseness of sampling")
plt.show()
# -
# We know that for large systems correlation โ ย causation. Here, we have looked at what happens we coarsely sample a large system. Do we get better at estimating the *effective* causal interaction between groups (=average of weights) from the correlation between the groups?
#
# From our simulation above, the answer appears to be no: as the number of neurons per group increases, we don't see any significant increase in our ability to estimate the causal interaction between groups.
| tutorials/W3D5_NetworkCausality/student/W3D5_Tutorial2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# The objective of this notebook is to show how to read and plot the data obtained with a vessel.
# %matplotlib inline
import netCDF4
from netCDF4 import num2date
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.basemap import Basemap
# # Data reading
# The data file is located in the *datafiles* directory.
datadir = './datafiles/'
datafile = 'GL_PR_ML_EXRE0065_2010.nc'
# We extract only the spatial coordinates:
with netCDF4.Dataset(datadir + datafile) as nc:
lon = nc.variables['LONGITUDE'][:]
lat = nc.variables['LATITUDE'][:]
print lon.shape
# # Location of the profiles
# In this first plot we want to see the location of the profiles obtained with the profiler.<br/>
# We create a Mercator projection using the coordinates we just read.
m = Basemap(projection='merc', llcrnrlat=lat.min()-0.5, urcrnrlat=lat.max()+0.5,
llcrnrlon=lon.min()-0.5, urcrnrlon=lon.max()+0.5, lat_ts=0.5*(lon.min()+lon.max()), resolution='h')
# Once we have the projection, the coordinates have to be changed into this projection:
lon2, lat2 = m(lon, lat)
# The locations of the vessel stations are added on a map with the coastline and the land mask.
# +
mpl.rcParams.update({'font.size': 16})
fig = plt.figure(figsize=(8,8))
m.plot(lon2, lat2, 'ko', ms=2)
m.drawcoastlines(linewidth=0.5, zorder=3)
m.fillcontinents(zorder=2)
m.drawparallels(np.arange(-90.,91.,0.5), labels=[1,0,0,0], zorder=1)
m.drawmeridians(np.arange(-180.,181.,0.5), labels=[0,0,1,0], zorder=1)
plt.show()
# -
# # Profile plot
# We read the temperature, salinity and depth variables.
with netCDF4.Dataset(datadir + datafile) as nc:
depth = nc.variables['DEPH'][:]
temperature = nc.variables['TEMP'][:]
temperature_name = nc.variables['TEMP'].long_name
temperature_units = nc.variables['TEMP'].units
salinity = nc.variables['PSAL'][:]
salinity_name = nc.variables['PSAL'].long_name
salinity_units = nc.variables['PSAL'].units
time = nc.variables['TIME'][:]
time_units = nc.variables['TIME'].units
print depth.shape
print temperature.shape
# +
nprofiles, ndepths = depth.shape
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
for nn in range(0, nprofiles):
plt.plot(temperature[nn,:], depth[nn,:], 'k-', linewidth=0.5)
plt.gca().invert_yaxis()
plt.show()
# -
# We observe different types of profiles. As the covered region is rather small, this may be because the measurements were done at different time of the year. The time variable will tell us.<br/>
# We create a plot of time versus temperature (first measurement of each profile).
dates = num2date(time, units=time_units)
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(dates, temperature[:,0], 'ko')
fig.autofmt_xdate()
plt.ylabel("%s (%s)" % (temperature_name, temperature_units))
plt.show()
# The graph confirms that we have data obtained during different periods:
# * July-August 2010,
# * December 2010.
# # T-S diagram
# The x and y labels for the plot are directly taken from the netCDF variable attributes.
fig = plt.figure(figsize=(8,8))
ax = plt.subplot(111)
plt.plot(temperature, salinity, 'ko', markersize=2)
plt.xlabel("%s (%s)" % (temperature_name, temperature_units))
plt.ylabel("%s (%s)" % (salinity_name, salinity_units))
plt.ylim(32, 36)
plt.grid()
plt.show()
# # 3-D plot
# We illustrate with a simple example how to have a 3-dimensional representation of the profiles.<br/>
# First we import the required modules.
from mpl_toolkits.mplot3d import Axes3D
# Then the plot is easily obtained by specifying the coordinates (x, y, z) and the variables (salinity) to be plotted.
# +
cmap = plt.cm.Spectral_r
norm = colors.Normalize(vmin=32, vmax=36)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, projection='3d')
for ntime in range(0, nprofiles):
plt.scatter(lon[ntime]*np.ones(ndepths), lat[ntime]*np.ones(ndepths), zs=-depth[ntime,:], zdir='z',
s=20, c=salinity[ntime,:], edgecolor='None', cmap=cmap, norm=norm)
plt.colorbar(cmap=cmap, norm=norm)
plt.show()
# -
| PythonNotebooks/PlatformPlots/.ipynb_checkpoints/plot_CMEMS_vessel-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## A multi-agent accounting system example: money creation with banks and households
# +
import abcFinance
import random
class Household(abcFinance.Agent):
def init(self, num_banks):
self.accounts = Ledger(residual_account_name='equity')
self.accounts.make_stock_account(['money holdings', 'loan liabilities'])
self.accounts.make_flow_account(['income', 'expenses'])
self.housebank = self.id % num_banks
def return_housebank(self):
return self.housebank
def return_money_holdings(self):
_, amount = self.accounts['money holdings'].get_balance()
return amount
def transfer_money(self, housebank_indices):
recipient = random.randrange(len(housebank_indices))
recipient_housebank = housebank_indices[recipient]
_, amount = self.accounts['money holdings'].get_balance()
amount = round(random.random() * amount)
if amount > 0:
self.send(('bank', self.housebank), 'Outtransfer',
{'amount': amount, 'recipient': recipient})
self.send(('bank', recipient_housebank), 'Intransfer',
{'amount': amount, 'sender': self.id})
def get_outside_money(self, amount):
self.send(('bank', self.housebank), '_autobook', dict(
debit=[('reserves', amount)],
credit=[('deposits', amount)],
text='Outside money endowment'))
self.accounts.book(debit=[('money holdings', amount)],
credit=([('equity', amount)]),
text='Outside money endowment')
def take_loan(self, amount):
self.send(('bank', self.housebank), 'loan_request', {'amount': amount})
class Bank(abcFinance.Agent):
def init(self):
self.accounts = Ledger(residual_account_name='equity')
self.accounts.make_stock_account(['reserves', 'claims', 'deposits', 'refinancing'])
self.accounts.make_flow_account(['interest income', 'interest expense'])
def handle_transfers(self, num_banks, housebank_indices):
intransfers = self.get_messages('Intransfer')
outtransfers = self.get_messages('Outtransfer')
# First, compute net transfers to each other bank
amounts_transfers = [0] * num_banks
sum_transfers = 0
for intransfer in intransfers:
sender = intransfer.content['sender']
sender_housebank = housebank_indices[sender]
if sender_housebank != self.id:
amount = intransfer.content['amount']
amounts_transfers[sender_housebank] += amount
sum_transfers += amount
for outtransfer in outtransfers:
recipient = outtransfer.content['recipient']
recipient_housebank = housebank_indices[recipient]
amount = outtransfer.content['amount']
# Directly book transfers between own clients
if recipient_housebank == self.id:
self.send(outtransfer.sender, '_autobook', dict(
debit=[('expenses', amount)],
credit=[('money holdings', amount)],
text='Transfer'))
self.send(('household', recipient), '_autobook', dict(
debit=[('money holdings', amount)],
credit=[('income', amount)],
text='Transfer'))
else:
amounts_transfers[recipient_housebank] -= amount
sum_transfers -= amount
# Compute net funding needs
_, reserves = self.accounts['reserves'].get_balance()
funding_need = - min(0, sum(amounts_transfers) + reserves)
# >> could be in separate function after checking if funding needs can be met
# Book transfers on clients' accounts
for outtransfer in outtransfers:
recipient = outtransfer.content['recipient']
sender = outtransfer.sender
recipient_housebank = housebank_indices[recipient]
amount = outtransfer.content['amount']
if recipient_housebank != self.id:
self.send(outtransfer.sender, '_autobook', dict(
debit=[('expenses', amount)],
credit=[('money holdings', amount)],
text='Transfer'))
self.send(('household', recipient), '_autobook', dict(
debit=[('money holdings', amount)],
credit=[('income', amount)],
text='Transfer'))
# Only book net transfers between banks (net settlement system)
for i in range(num_banks):
amount = -amounts_transfers[i]
if amount > 0:
self.accounts.book(debit=[('deposits', amount)],
credit=[('reserves', amount)],
text='Client transfer')
self.send(('bank', recipient_housebank), '_autobook', dict(
debit=[('reserves', amount)],
credit=[('deposits', amount)],
text='Client transfer'))
return funding_need
def get_funding(self, funding_needs):
self.accounts.book(debit=[('reserves', funding_needs[self.id])],
credit=[('refinancing', funding_needs[self.id])])
def give_loan(self):
for loan_request in self.get_messages('loan_request'):
amount = loan_request.content['amount']
self.accounts.book(debit=[('claims', amount)],
credit=[('deposits', amount)],
text='Loan')
self.send(loan_request.sender, '_autobook', dict(
debit=[('money holdings', amount)],
credit=[('loan liabilities', amount)],
text='Loan'))
| examples/money_creation/Under development/ABM_money_creation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
'''ไฝฟ็จ้ๅไผๅmnist-liteๆจกๅ'''
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
# -
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
model.summary()
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
# validation_data=(test_images, test_labels),
validation_split=0.1,
)
# +
# Clone and fine-tune pre-trained model with quantization aware training
# ๅฏนๆดไธชๆจกๅๅบ็จ้ๅๆ่ฏ่ฎญ็ปใ
# ็ๆ็ๆจกๅๅ
ทๆ้ๅๆ่ฏ๏ผไฝๆฒกๆ้ๅ๏ผไพๅฆๆ้ไธบfloat32่ไธๆฏint8๏ผ
import tensorflow_model_optimization as tfmot
# q_aware stands for for quantization aware.
q_aware_model = tfmot.quantization.keras.quantize_model(model)
q_aware_model.summary()
# `quantize_model` requires a recompile.
q_aware_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# ๅจๆจกๅๆ่ฆไธญๆๆๅพๅฑ้ฝไปฅโ quantโไธบๅ็ผใ
q_aware_model.summary()
# ๅจ่ฎญ็ปๆฐๆฎ็ๅญ้ไธไฝฟ็จ้ๅๆ็ฅ่ฎญ็ป่ฟ่กๅพฎ่ฐ
train_images_subset = train_images[0:1000] # out of 60000
train_labels_subset = train_labels[0:1000]
q_aware_model.fit(train_images_subset, train_labels_subset,
batch_size=500, epochs=1, validation_split=0.1)
# ไธๅบ็บฟ็ธๆฏ๏ผๅจ่ฟ่ก้ๅๆ็ฅ่ฎญ็ปๅ๏ผๆต่ฏๅ็กฎๆง็ๆๅคฑๅพๅฐ็่ณๆฒกๆ
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
_, q_aware_model_accuracy = q_aware_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Quant test accuracy:', q_aware_model_accuracy)
# -
# ไธบTFLiteๅ็ซฏๅๅปบ้ๅๆจกๅ,่ทๅพไธไธชๅ
ทๆint8ๆ้ๅuint8ๆฟๆดป็ๅฎ้
้ๅๆจกๅ
converter = tf.lite.TFLiteConverter.from_keras_model(q_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
# +
import numpy as np
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
# +
# ่ฏไผฐไบ้ๅๆจกๅ๏ผๅ็ฐTensorFlow็ๅ็กฎๆงไธ็ดๆ็ปญๅฐTFLiteๅ็ซฏ
interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()
test_accuracy = evaluate_model(interpreter)
print('Quant TFLite test_accuracy:', test_accuracy)
print('Quant TF test accuracy:', q_aware_model_accuracy)
# +
# ๅๅปบไธไธชๆตฎๅจTFLiteๆจกๅ๏ผ้ๅ็TFLiteๆจกๅๅคงๅฐๅทฎ4ๅ
# Create float TFLite model.
import tempfile
float_converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_tflite_model = float_converter.convert()
# Measure sizes of models.
_, float_file = tempfile.mkstemp('.tflite')
_, quant_file = tempfile.mkstemp('.tflite')
with open(quant_file, 'wb') as f:
f.write(quantized_tflite_model)
with open(float_file, 'wb') as f:
f.write(float_tflite_model)
print(float_file,quant_file)
print("Float model in Mb:", os.path.getsize(float_file) / float(2**20))
print("Quantized model in Mb:", os.path.getsize(quant_file) / float(2**20))
# +
'TFL้ๅ'
# ่ฝฌๆขไธบTensorFlow Liteๆจกๅ
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("./mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# ไฟๅญTensorFlow Liteๆจกๅ
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# ๅฏนๅฏผๅบๆจกๅ่ฟ่ก้ๅ
converter.optimizations = [tf.lite.Optimize.DEFAULT] # ่ฎพ็ฝฎไธบ้ๅฏนๅคงๅฐ่ฟ่กไผๅ
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
# size
print("Float model in Mb:", os.path.getsize(tflite_model_file) / float(2**20))
print("Quantized model in Mb:", os.path.getsize(tflite_model_quant_file) / float(2**20))
# -
# ls -lh {tflite_models_dir}
# +
# ไฝฟ็จPython TensorFlow Lite่งฃ้ๅจ่ฟ่กTensorFlow Liteๆจกๅใ
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file)) # ๅ ่ฝฝๆจกๅ
interpreter.allocate_tensors()
# ๅ ่ฝฝ้ๅๅ็ๆจกๅ
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
# ๆต่ฏๅไพ
num = 2
test_image = np.expand_dims(test_images[num], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[num])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[num]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
# -
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model2(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
# ๅฏนๅจๆ่ๅด้ๅๆจกๅ้ๅค่ฏไผฐ
print(evaluate_model(interpreter_quant))
| tfmodel_opt/quant_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import csv
from bs4 import BeautifulSoup
import pandas as pd
import requests as rq
import time
import random
import unicodedata
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import *
import string
import re
from math import *
path = os.getcwd()
movies = pd.DataFrame(pd.read_html(path + "\\movies-1.html")[0])
movies.drop('Id', inplace=True, axis = 1)
# +
# for i in range(len(movies)):
# try:
# response = rq.get(movies.URL[i])
# except rq.exceptions.RequestException as e:
# print(e)
# time.sleep(20*60 + 30)
# response = rq.get(movies.URL[i])
# soup = BeautifulSoup(response.text, 'html.parser')
# f = open('article_'+str(i)+'.html','w')
# f.write(str(soup))
# f.close()
# time.sleep(random.choice(range(1,6)))
# -
path1 = path + '\\Articles'
for i in range(10000):
article = open(path1+'\\article_'+str(i)+'.html', 'r')
soup = BeautifulSoup(article, 'html.parser')
d = {}
try:
for x in soup.find('table', class_="infobox vevent").find('th').find_all_next('th'):
d[x.text] = unicodedata.normalize('NFKD',x.next_sibling.get_text(separator = '<br/>').replace('<br/>', ',').strip())
except:
pass
title = str(soup.select('h1')[0].text)
start = soup.find('p')
intro = start.text.strip()
while len(intro) == 0:
start = start.find_next('p')
intro = start.text.strip()
for elem in start.next_siblings:
if elem.name != 'p':
break
intro += elem.text.strip()
try:
start = soup.find('h2').find_next('p')
plot = start.text.strip()
for elem in start.next_siblings:
if elem.name != 'p':
break
plot += elem.text.strip()
except:
plot = "NA"
try :
director = d['Directed by']
except:
director = "NA"
try :
producer = d['Produced by']
except:
producer = "NA"
try :
writer = d["Written by"]
except:
writer = "NA"
try :
starring = d["Starring"].strip()
except:
starring = "NA"
try :
music = d["Music by"]
except :
music = "NA"
try :
release_date = d["Release date"]
except :
release_date = "NA"
try :
run_time = d["Running time"]
except :
run_time = "NA"
try :
country = d["Country"]
except :
country = "NA"
try :
language = d["Language"]
except:
language = "NA"
try :
budget = d["Budget"]
except :
budget = "NA"
with open(path+"\\TSV\\article_" + str(i) + ".tsv", "w" ,encoding="utf-8") as out_file:
tsv_writer = csv.writer(out_file, delimiter='\t')
tsv_writer.writerow([title, intro, plot, director, producer, writer , starring, music, release_date,run_time, country , language , budget])
path2 = path+'\\TSV\\'
stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()
allwords = []
for i in range(0,10000):
with open(path2+"article_" + str(i) + ".tsv", encoding = "utf-8") as fd:
rd = csv.reader(fd, delimiter="\t", quotechar='"')
for row in rd:
if row :
tsv = row
text = ' '.join([tsv[1],tsv[2]])
text = text.lower()
words = word_tokenize(text) #devide the text into substrings
filtered1 = [w for w in words if not w in stop_words] #remove stop words
filtered2 = list(filter(lambda word: word not in string.punctuation, filtered1))
filtered3 = []
for word in filtered2:
try:
filtered3 += re.findall(r'\w+', word)
except:
pass
filtered3 = [stemmer.stem(w) for w in filtered3] #stemming
filtered4 = [c.replace("''", "").replace("``", "") for c in filtered3 ] #removing useless '' and `` characters
filtered4 = [f for f in filtered4 if len(f)>1]
with open(path + "\\WORDS\\final_" + str(i) + ".tsv", "w" ,encoding="utf-8") as out_file:
tsv_writer = csv.writer(out_file, delimiter='\t')
tsv_writer.writerow(filtered4)
allwords += filtered4
allwords = set(allwords)
path2 = path+'\\TSV\\'
f = open(path2+'article_0.tsv', encoding = 'utf8')
with open(path2+'article_9973.tsv', encoding = 'utf8') as fd:
rd = csv.reader(fd, delimiter="\t", quotechar='"')
for row in rd:
if row :
tsv = row
# +
import re
d = {}
if tsv[6] == 'NA':
d['Starring'] = '0'
else:
d['Starring'] = str(len(tsv[6].split(',')))
try:
d['Release Year'] = re.search(r'\d{4}', tsv[8]).group(0)
except:
d['Release Year'] = '0'
try:
d['Runtime'] = re.search(r'\d+.*',tsv[9]).group(0)
except:
d['Runtime'] = '0'
#some movies have running time expressed in reels, and the conversion in minutes is not univoque, so we'll just ignore those info
if re.search(r'min', d['Runtime']):
d['Runtime'] = re.search(r'\d+[\.|\,|:]*\d*', d['Runtime']).group(0)
d['Runtime'] = re.search(r'\d+', d['Runtime']).group(0)
else:
d['Runtime'] = 0
try:
d['Budget'] = re.findall(r'\$.*', tsv[12])[0]
except:
d['Budget'] = '0'
if re.search(r'mil', d['Budget']):
d['Budget'] = str(int(float(re.search(r'\d+[\.|\,]*\d*', d['Budget']).group(0))*10**6))
elif re.search(r',', d['Budget']):
d['Budget'] = d['Budget'].replace(',', '').replace('$', '')
else:
d['Budget'] = d['Budget'].replace('.', '').replace('$', '')
d
# -
q = dict()
q['runtime'] = 25
Runtimes = [6,12,15,57,100,132]
minrun = min(Runtimes)
maxrun = max(Runtimes)
runscore = exp(-(int(q['runtime'] -int(d['Runtime']))**2)/100)
runscore
# +
q = dict()
oldnew = input("Do you prefer an old movie or a new-released movie? Please type O for Old and N for New:")
if oldnew == "O" :
q["release"] = "O"
if oldnew == "N" :
q["release"] = "N"
year = input("Do you want to specify the release year ?Please type Y for Yes and N for No: ")
if year == "N" :
q["year"] = "NA"
if year == "Y" :
year = input("please specify the release date")
q["year"] = year
Runtime = input("Do you want to specify the length of the movie? Please type Y for Yes and N for No:")
if Runtime == "N" :
q["Runtime"] = "NA"
if Runtime == "Y" :
Runtime = input("please specify the length of the movie")
q["Runtime"] = Runtime
starring = input("Is number of stars an important factor for you? Please type Y for Yes and N for No:")
q["starring"] = starring
budget = input("Is movie budget an important factor for you? Please type Y for Yes and N for No:")
q["budget"] = budget
| Download wiki articles.ipynb |
# ---
# jupyter:
# jupytext:
# cell_metadata_filter: all
# notebook_metadata_filter: all,-language_info
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# toc:
# base_numbering: 1
# nav_menu: {}
# number_sections: true
# sideBar: true
# skip_h1_title: false
# title_cell: Table of Contents
# title_sidebar: Contents
# toc_cell: true
# toc_position: {}
# toc_section_display: true
# toc_window_display: true
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Functions" data-toc-modified-id="Functions-1"><span class="toc-item-num">1 </span>Functions</a></span><ul class="toc-item"><li><span><a href="#User-defined-functions" data-toc-modified-id="User-defined-functions-1.1"><span class="toc-item-num">1.1 </span>User-defined functions</a></span><ul class="toc-item"><li><span><a href="#Looping-over-arrays-in-user-defined-functions" data-toc-modified-id="Looping-over-arrays-in-user-defined-functions-1.1.1"><span class="toc-item-num">1.1.1 </span>Looping over arrays in user-defined functions</a></span></li><li><span><a href="#Fast-array-processing-in-user-defined-functions" data-toc-modified-id="Fast-array-processing-in-user-defined-functions-1.1.2"><span class="toc-item-num">1.1.2 </span>Fast array processing in user-defined functions</a></span></li><li><span><a href="#Functions-with-more-(or-less)-than-one-input-or-output" data-toc-modified-id="Functions-with-more-(or-less)-than-one-input-or-output-1.1.3"><span class="toc-item-num">1.1.3 </span>Functions with more (or less) than one input or output</a></span></li><li><span><a href="#Positional-and-keyword-arguments" data-toc-modified-id="Positional-and-keyword-arguments-1.1.4"><span class="toc-item-num">1.1.4 </span>Positional and keyword arguments</a></span></li><li><span><a href="#Variable-number-of-arguments" data-toc-modified-id="Variable-number-of-arguments-1.1.5"><span class="toc-item-num">1.1.5 </span>Variable number of arguments</a></span></li><li><span><a href="#Passing-data-to-and-from-functions" data-toc-modified-id="Passing-data-to-and-from-functions-1.1.6"><span class="toc-item-num">1.1.6 </span>Passing data to and from functions</a></span></li></ul></li><li><span><a href="#Methods-and-attributes" data-toc-modified-id="Methods-and-attributes-1.2"><span class="toc-item-num">1.2 </span>Methods and attributes</a></span></li><li><span><a href="#Example:-linear-least-squares-fitting" data-toc-modified-id="Example:-linear-least-squares-fitting-1.3"><span class="toc-item-num">1.3 </span>Example: linear least squares fitting</a></span><ul class="toc-item"><li><span><a href="#Linear-regression" data-toc-modified-id="Linear-regression-1.3.1"><span class="toc-item-num">1.3.1 </span>Linear regression</a></span></li><li><span><a href="#Linear-regression-with-weighting:-$\chi^2$" data-toc-modified-id="Linear-regression-with-weighting:-$\chi^2$-1.3.2"><span class="toc-item-num">1.3.2 </span>Linear regression with weighting: $\chi^2$</a></span></li></ul></li><li><span><a href="#Anonymous-functions-(lambda)" data-toc-modified-id="Anonymous-functions-(lambda)-1.4"><span class="toc-item-num">1.4 </span>Anonymous functions (lambda)</a></span></li><li><span><a href="#Exercises" data-toc-modified-id="Exercises-1.5"><span class="toc-item-num">1.5 </span>Exercises</a></span></li></ul></li></ul></div>
# -
# python
#
# Functions
# =========
#
# As you develop more complex computer code, it becomes increasingly
# important to organize your code into modular blocks. One important means
# for doing so is *user-defined* Python functions. User-defined functions
# are a lot like built-in functions that we have encountered in core
# Python as well as in NumPy and Matplotlib. The main difference is that
# user-defined functions are written by you. The idea is to define
# functions to simplify your code and to allow you to reuse the same code
# in different contexts.
#
# The number of ways that functions are used in programming is so varied
# that we cannot possibly enumerate all the possibilities. As our use of
# Python functions in scientific program is somewhat specialized, we
# introduce only a few of the possible uses of Python functions, ones that
# are the most common in scientific programming.
#
# single: functions; user defined
#
# User-defined functions
# ----------------------
#
# The NumPy package contains a plethora of mathematical functions. You can
# find a listing of the mathematical functions available through NumPy on
# the web page
# <http://docs.scipy.org/doc/numpy/reference/routines.math.html>. While
# the list may seem pretty exhaustive, you may nevertheless find that you
# need a function that is not available in the NumPy Python library. In
# those cases, you will want to write your own function.
#
# In studies of optics and signal processing one often runs into the sinc
# function, which is defined as
#
# $$\mathrm{sinc}\,x \equiv \frac{\sin x}{x} \;.$$
#
# Let's write a Python function for the sinc function. Here is our first
# attempt:
#
# ``` python
# def sinc(x):
# y = np.sin(x)/x
# return y
# ```
#
# Every function definition begins with the word `def` followed by the
# name you want to give to the function, `sinc` in this case, then a list
# of arguments enclosed in parentheses, and finally terminated with a
# colon. In this case there is only one argument, `x`, but in general
# there can be as many arguments as you want, including no arguments at
# all. For the moment, we will consider just the case of a single
# argument.
#
# The indented block of code following the first line defines what the
# function does. In this case, the first line calculates
# $\mathrm{sinc}\,x = \sin x/x$ and sets it equal to `y`. The `return`
# statement of the last line tells Python to return the value of `y` to
# the user.
#
# We can try it out in the IPython shell. First we type in the function
# definition.
#
# ``` ipython
# In [1]: def sinc(x):
# ...: y = sin(x)/x
# ...: return y
# ```
#
# Because we are doing this from the IPython shell, we don't need to
# import NumPy; it's preloaded. Now the function $\mathrm{sinc}\,x$ is
# available to be used from the IPython shell
#
# ``` ipython
# In [2]: sinc(4)
# Out[2]: -0.18920062382698205
#
# In [3]: a = sinc(1.2)
#
# In [4]: a
# Out[4]: 0.77669923830602194
#
# In [5]: sin(1.2)/1.2
# Out[5]: 0.77669923830602194
# ```
#
# Inputs and outputs 4 and 5 verify that the function does indeed give the
# same result as an explicit calculation of $\sin x/x$.
#
# You may have noticed that there is a problem with our definition of
# $\mathrm{sinc}\,x$ when `x=0.0`. Let's try it out and see what happens
#
# ``` ipython
# In [6]: sinc(0.0)
# Out[6]: nan
# ```
#
# IPython returns `nan` or "not a number", which occurs when Python
# attempts a division by zero, which is not defined. This is not the
# desired response as $\mathrm{sinc}\,x$ is, in fact, perfectly well
# defined for $x=0$. You can verify this using L'Hopital's rule, which you
# may have learned in your study of calculus, or you can ascertain the
# correct answer by calculating the Taylor series for $\mathrm{sinc}\,x$.
# Here is what we get
#
# $$\mathrm{sinc}\,x = \frac{\sin x}{x}
# = \frac{x - \frac{x^3}{3!} + \frac{x^5}{5!} + ...}{x}
# = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} + ... \;.$$
#
# From the Taylor series, it is clear that $\mathrm{sinc}\,x$ is
# well-defined at and near $x=0$ and that, in fact, $\mathrm{sinc}(0)=1$.
# Let's modify our function so that it gives the correct value for `x=0`.
#
# ``` ipython
# In [7]: def sinc(x):
# ...: if x==0.0:
# ...: y = 1.0
# ...: else:
# ...: y = sin(x)/x
# ...: return y
#
# In [8]: sinc(0)
# Out[8]: 1.0
#
# In [9]: sinc(1.2)
# Out[9]: 0.77669923830602194
# ```
#
# Now our function gives the correct value for `x=0` as well as for values
# different from zero.
#
# single: functions; looping over arrays
#
# ### Looping over arrays in user-defined functions
#
# The code for $\mathrm{sinc}\,x$ works just fine when the argument is a
# single number or a variable that represents a single number. However, if
# the argument is a NumPy array, we run into a problem, as illustrated
# below.
#
# ``` ipython
# In [10]: x = arange(0, 5., 0.5)
#
# In [11]: x
# Out[11]: array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5,
# 4. , 4.5])
#
# In [12]: sinc(x)
# ----------------------------------------------------------
# ValueError Traceback (most recent call last)
# ----> 1 sinc(x)
#
# 1 def sinc(x):
# ----> 2 if x==0.0:
# 3 y = 1.0
# 4 else:
# 5 y = np.sin(x)/x
#
# ValueError: The truth value of an array with more than one
# element is ambiguous.
# ```
#
# The `if` statement in Python is set up to evaluate the truth value of a
# single variable, not of multielement arrays. When Python is asked to
# evaluate the truth value for a multi-element array, it doesn't know what
# to do and therefore returns an error.
#
# An obvious way to handle this problem is to write the code so that it
# processes the array one element at a time, which you could do using a
# `for` loop, as illustrated below.
#
# ``` python
# def sinc(x):
# y = [] # creates an empty list to store results
# for xx in x: # loops over all elements in x array
# if xx==0.0: # adds result of 1.0 to y list if
# y += [1.0] # xx is zero
# else: # adds result of sin(xx)/xx to y list if
# y += [np.sin(xx)/xx] # xx is not zero
# return np.array(y) # converts y to array and returns array
#
# import numpy as np
# import matplotlib.pyplot as plt
#
# x = np.linspace(-10, 10, 256)
# y = sinc(x)
#
# plt.plot(x, y)
# plt.axhline(color="gray", zorder=-1)
# plt.axvline(color="gray", zorder=-1)
# plt.show()
# ```
#
# The `for` loop evaluates the elements of the `x` array one by one and
# appends the results to the list `y` one by one. When it is finished, it
# converts the list to an array and returns the array. The code following
# the function definition plots $\mathrm{sinc}\,x$ as a function of $x$.
#
# In the program above, you may have noticed that the NumPy library is
# imported *after* the `sinc(x)` function definition. As the function uses
# the NumPy functions `sin` and `array`, you may wonder how this program
# can work. Doesn't the `import numpy` statement have to be called before
# any NumPy functions are used? The answer it an emphatic "YES". What you
# need to understand is that the function definition is *not executed*
# when it is defined, nor can it be as it has no input `x` data to
# process. That part of the code is just a definition. The first time the
# code for the `sinc(x)` function is actually executed is when it is
# called on line 14 of the program, which occurs after the NumPy library
# is imported in line 10. The figure below shows the plot of the
# $\mathrm{sinc}\,x$ function generated by the above code.
#
# <figure>
# <img src="attachment:sinc.png" class="align-center" alt="" /><figcaption>Plot of user-defined <code>sinc(x)</code> function.</figcaption>
# </figure>
#
# single: functions; fast array processing single: conditionals; applied
# to arrays
#
# ### Fast array processing in user-defined functions
#
# While using loops to process arrays works just fine, it is usually not
# the best way to accomplish the task in Python. The reason is that loops
# in Python are executed rather slowly. To deal with this problem, the
# developers of NumPy introduced a number of functions designed to process
# arrays quickly and efficiently. For the present case, what we need is a
# conditional statement or function that can process arrays directly. The
# function we want is called `where` and it is a part of the NumPy
# library. There `where` function has the form
#
# ``` python
# where(condition, output if True, output if False)
# ```
#
# The first argument of the `where` function is a conditional statement
# involving an array. The `where` function applies the condition to the
# array element by element, and returns the second argument for those
# array elements for which the condition is `True`, and returns the third
# argument for those array elements that are `False`. We can apply it to
# the `sinc(x)` function as follows
#
# ``` python
# def sinc(x):
# z = np.where(x==0.0, 1.0, np.sin(x)/x)
# return z
# ```
#
# The `where` function creates the array `y` and sets the elements of `y`
# equal to 1.0 where the corresponding elements of `x` are zero, and
# otherwise sets the corresponding elements to `sin(x)/x`. This code
# executes much faster, 25 to 100 times, depending on the size of the
# array, than the code using a `for` loop. Moreover, the new code is much
# simpler to write and read. An additional benefit of the `where` function
# is that it can handle single variables and arrays equally well. The code
# we wrote for the sinc function with the `for` loop cannot handle single
# variables. Of course we could rewrite the code so that it did, but the
# code becomes even more clunky. It's better just to use NumPy's `where`
# function.
#
# #### The moral of the story
#
# The moral of the story is that you should avoid using `for` and `while`
# loops to process arrays in Python programs whenever an array-processing
# method is available. As a beginning Python programmer, you may not
# always see how to avoid loops, and indeed, avoiding them is not always
# possible, but you should look for ways to avoid loops, especially loops
# that iterate a large number of times. As you become more experienced,
# you will find that using array-processing methods in Python becomes more
# natural. Using them can greatly speed up the execution of your code,
# especially when working with large arrays.
#
# single: functions; multiple inputs and/or outputs
#
# ### Functions with more (or less) than one input or output
#
# Python functions can have any number of input arguments and can return
# any number of variables. For example, suppose you want a function that
# outputs $n$ $(x,y)$ coordinates around a circle of radius $r$ centered
# at the point $(x_0,y_0)$. The inputs to the function would be $r$,
# $x_0$, $y_0$, and $n$. The outputs would be the $n$ $(x,y)$ coordinates.
# The following code implements this function.
#
# ``` python
# def circle(r, x0, y0, n):
# theta = np.linspace(0., 2.*np.pi, n, endpoint=False)
# x = r * np.cos(theta)
# y = r * np.sin(theta)
# return x0+x, y0+y
# ```
#
# This function has four inputs and two outputs. In this case, the four
# inputs are simple numeric variables and the two outputs are NumPy
# arrays. In general, the inputs and outputs can be any combination of
# data types: arrays, lists, strings, *etc*. Of course, the body of the
# function must be written to be consistent with the prescribed data
# types.
#
# Functions can also return nothing to the calling program but just
# perform some task. For example, here is a program that clears the
# terminal screen
#
# ``` python
# import subprocess
# import platform
#
# def clear():
# subprocess.Popen( "cls" if platform.system() ==
# "Windows" else "clear", shell=True)
# ```
#
# The function is invoked by typing `clear()`. It has no inputs and no
# outputs but it performs a useful task. This function uses two standard
# Python libraries, `subprocess` and `platform` that are useful for
# performing computer system tasks. It's not important that you know
# anything about them at this point. We simply use them here to
# demonstrate a useful cross-platform function that has no inputs and
# returns no values.
#
# single: functions; arguments; keyword single: functions; arguments;
# positional
#
# ### Positional and keyword arguments
#
# It is often useful to have function arguments that have some default
# setting. This happens when you want an input to a function to have some
# standard value or setting most of the time, but you would like to
# reserve the possibility of giving it some value other than the default
# value.
#
# For example, in the program `circle` from the previous section, we might
# decide that under most circumstances, we want `n=12` points around the
# circle, like the points on a clock face, and we want the circle to be
# centered at the origin. In this case, we would rewrite the code to read
#
# ``` python
# def circle(r, x0=0.0, y0=0.0, n=12):
# theta = np.linspace(0., 2.*np.pi, n, endpoint=False)
# x = r * np.cos(theta)
# y = r * np.sin(theta)
# return x0+x, y0+y
# ```
#
# The default values of the arguments `x0`, `y0`, and `n` are specified in
# the argument of the function definition in the `def` line. Arguments
# whose default values are specified in this manner are called *keyword
# arguments*, and they can be omitted from the function call if the user
# is content using those values. For example, writing `circle(4)` is now a
# perfectly legal way to call the `circle` function and it would produce
# 12 $(x,y)$ coordinates centered about the origin $(x,y)=(0,0)$. On the
# other hand, if you want the values of `x0`, `y0`, and `n` to be
# something different from the default values, you can specify their
# values as you would have before.
#
# If you want to change only some of the keyword arguments, you can do so
# by using the keywords in the function call. For example, suppose you are
# content with have the circle centered on $(x,y)=(0,0)$ but you want only
# 6 points around the circle rather than 12. Then you would call the
# `circle` function as follows:
#
# ``` python
# circle(2, n=6)
# ```
#
# The unspecified keyword arguments keep their default values of zero but
# the number of points `n` around the circle is now 6 instead of the
# default value of 12.
#
# The normal arguments without keywords are called *positional arguments*;
# they have to appear *before* any keyword arguments and, when the
# function is called, must be supplied values in the same order as
# specified in the function definition. The keyword arguments, if
# supplied, can be supplied in any order providing they are supplied with
# their keywords. If supplied without their keywords, they too must be
# supplied in the order they appear in the function definition. The
# following function calls to `circle` both give the same output.
#
# ``` ipython
# In [13]: circle(3, n=3, y0=4, x0=-2)
# Out[13]: (array([ 1. , -3.5, -3.5]),
# array([ 4. , 6.59807621, 1.40192379]))
#
# In [14]: circle(3, -2, 4, 3) # w/o keywords, arguments
# # supplied in order
# Out[14]: (array([ 1. , -3.5, -3.5]), array([ 4. ,
# 6.59807621, 1.40192379]))
# ```
#
# By now you probably have noticed that we used the keyword argument
# `endpoint` in calling `linspace` in our definition of the `circle`
# function. The default value of `endpoint` is `True`, meaning that
# `linspace` includes the endpoint specified in the second argument of
# `linspace`. We set it equal to `False` so that the last point was not
# included. Do you see why?
#
# single: functions; arguments; variable number single: functions;
# arguments; \*args
#
# ### Variable number of arguments
#
# While it may seem odd, it is sometimes useful to leave the number of
# arguments unspecified. A simple example is a function that computes the
# product of an arbitrary number of numbers:
#
# ``` python
# def product(*args):
# print("args = {}".format(args))
# p = 1
# for num in args:
# p *= num
# return p
# ```
#
# ``` ipython
# In [15]: product(11., -2, 3)
# args = (11.0, -2, 3)
# Out[15]: -66.0
#
# In [16]: product(2.31, 7)
# args = (2.31, 7)
# Out[16]: 16.17
# ```
#
# The `print("args...)` statement in the function definition is not
# necessary, of course, but is put in to show that the argument `args` is
# a tuple inside the function. Here it used because one does not know
# ahead of time how many numbers are to be multiplied together.
#
# The `*args` argument is also quite useful in another context: when
# passing the name of a function as an argument in another function. In
# many cases, the function name that is passed may have a number of
# parameters that must also be passed but aren't known ahead of time. If
# this all sounds a bit confusing---functions calling other functions---a
# concrete example will help you understand.
#
# Suppose we have the following function that numerically computes the
# value of the derivative of an arbitrary function $f(x)$:
#
# ``` python
# def deriv(f, x, h=1.e-9, *params):
# return (f(x+h, *params)-f(x-h, *params))/(2.*h)
# ```
#
# The argument `*params` is an optional positional argument. We begin by
# demonstrating the use of the function `deriv` without using the optional
# `*params` argument. Suppose we want to compute the derivative of the
# function $f_0(x)=4x^5$. First, we define the function
#
# ``` python
# def f0(x):
# return 4.*x**5
# ```
#
# Now let's find the derivative of $f_0(x)=4x^5$ at $x=3$ using the
# function `deriv`:
#
# ``` ipython
# In [17]: deriv(f0, 3)
# Out[17]: 1620.0001482502557
# ```
#
# The exact result, given by evaluating $f_0^\prime(x)=20x^4$ at $x=3$ is
# 1620, so our function to numerically calculate the derivative works
# pretty well.
#
# Suppose we had defined a more general function $f_1(x)=ax^p$ as follows:
#
# ``` python
# def f1(x, a, p):
# return a*x**p
# ```
#
# Suppose we want to calculate the derivative of this function for a
# particular set of parameters $a$ and $p$. Now we face a problem, because
# it might seem that there is no way to pass the parameters $a$ and $p$ to
# the `deriv` function. Moreover, this is a generic problem for functions
# such as `deriv` that use a function as an input, because different
# functions you want to use as inputs generally come with different
# parameters. Therefore, we would like to write our program `deriv` so
# that it works, irrespective of how many parameters are needed to specify
# a particular function.
#
# This is what the optional positional argument `*params` defined in
# `deriv` is for: to pass parameters of `f1`, like $a$ and $b$, through
# `deriv`. To see how this works, let's set $a$ and $b$ to be 4 and 5,
# respectively, the same values we used in the definition of `f0`, so that
# we can compare the results:
#
# ``` ipython
# In [16]: deriv(f1, 3, 1.e-9, 4, 5)
# Out[16]: 1620.0001482502557
# ```
#
# We get the same answer as before, but this time we have used `deriv`
# with a more general form of the function $f_1(x)=ax^p$.
#
# The order of the parameters is important. The function `deriv` uses `x`,
# the first argument of `f1`, as its principal argument, and then uses `a`
# and `p`, in the same order that they are defined in the function `f1`,
# to fill in the additional arguments---the parameters---of the function
# `f1`.
#
# single: functions; arguments; \*\*kwargs
#
# Optional arguments must appear after the regular positional and keyword
# arguments in a function call. The order of the arguments must adhere to
# the following convention:
#
# ``` python
# def func(pos1, pos2, ..., keywd1, keywd2, ..., *args, **kwargs):
# ```
#
# That is, the order of arguments is: positional arguments first, then
# keyword arguments, then optional positional arguments (`*args`), then
# optional keyword arguments (`**kwargs`). Note that to use the `*params`
# argument, we had to explicitly include the keyword argument `h` even
# though we didn't need to change it from its default value.
#
# Python also allows for a variable number of keyword
# arguments---`**kwargs`---in a function call. While `*args` is a tuple,
# `kwargs` is a dictionary, so that the value of an optional keyword
# argument is accessed through its dictionary key.
#
# ### Passing data to and from functions
#
# Functions are like mini-programs within the larger programs that call
# them. Each function has a set of variables with certain names that are
# to some degree or other isolated from the calling program. We shall get
# more specific about just how isolated those variables are below, but
# before we do, we introduce the concept of a *namespace*. Each function
# has its own namespace, which is essentially a mapping of variable names
# to objects, like numerics, strings, lists, and so forth. It's a kind of
# dictionary. The calling program has its own namespace, distinct from
# that of any functions it calls. The distinctiveness of these namespaces
# plays an important role in how functions work, as we shall see below.
#
# #### Variables and arrays created entirely within a function
#
# An important feature of functions is that variables and arrays created
# *entirely within* a function cannot be seen by the program that calls
# the function unless the variable or array is explicitly passed to the
# calling program in the `return` statement. This is important because it
# means you can create and manipulate variables and arrays, giving them
# any name you please, without affecting any variables or arrays outside
# the function, even if the variables and arrays inside and outside a
# function share the same name.
#
# To see what how this works, let's rewrite our program to plot the sinc
# function using the sinc function definition that uses the `where`
# function.
#
# ``` python
# def sinc(x):
# z = np.where(x==0.0, 1.0, np.sin(x)/x)
# return z
#
# import numpy as np
# import matplotlib.pyplot as plt
#
# x = np.linspace(-10, 10, 256)
# y = sinc(x)
#
# plt.plot(x, y)
# plt.axhline(color="gray", zorder=-1)
# plt.axvline(color="gray", zorder=-1)
# plt.show()
# ```
#
# Running this program produces a plot like the plot of sinc shown in the
# previous section. Notice that the array variable `z` is only defined
# within the function definition of sinc. If we run the program from the
# IPython terminal, it produces the plot, of course. Then if we ask
# IPython to print out the arrays, `x`, `y`, and `z`, we get some
# interesting and informative results, as shown below.
#
# ``` ipython
# In [15]: run sinc3.py
#
# In [16]: x
# Out[16]: array([-10. , -9.99969482, -9.99938964, ...,
# 9.99938964, 9.99969482, 10. ])
#
# In [17]: y
# Out[17]: array([-0.05440211, -0.05437816, -0.0543542 , ...,
# -0.0543542 , -0.05437816, -0.05440211])
#
# In [18]: z
# ---------------------------------------------------------
# NameError Traceback (most recent call last)
#
# NameError: name 'z' is not defined
# ```
#
# When we type in `x` at the `In [16]:` prompt, IPython prints out the
# array `x` (some of the output is suppressed because the array `x` has
# many elements); similarly for `y`. But when we type `z` at the
# `In [18]:` prompt, IPython returns a `NameError` because `z` is not
# defined. The IPython terminal is working in the same *namespace* as the
# program. But the namespace of the sinc function is isolated from the
# namespace of the program that calls it, and therefore isolated from
# IPython. This also means that when the sinc function ends with
# `return z`, it doesn't return the name `z`, but instead assigns the
# values in the array `z` to the array `y`, as directed by the main
# program in line 9.
#
# #### Passing variables and arrays to functions: mutable and immutable objects
#
# What happens to a variable or an array passed to a function when the
# variable or array is *changed* within the function? It turns out that
# the answers are different depending on whether the variable passed is a
# simple numeric variable, string, or tuple, or whether it is an array or
# list. The program below illustrates the different ways that Python
# handles single variables *vs* the way it handles lists and arrays.
#
# ``` python
# def test(s, v, t, l, a):
# s = "I am doing fine"
# v = np.pi**2
# t = (1.1, 2.9)
# l[-1] = 'end'
# a[0] = 963.2
# return s, v, t, l, a
#
# import numpy as np
#
# s = "How do you do?"
# v = 5.0
# t = (97.5, 82.9, 66.7)
# l = [3.9, 5.7, 7.5, 9.3]
# a = np.array(l)
#
# print('*************')
# print("s = {0:s}".format(s))
# print("v = {0:5.2f}".format(v))
# print("t = {0:s}".format(t))
# print("l = {0:s}".format(l))
# print("a = "), # comma suppresses line feed
# print(a)
# print('*************')
# print('*call "test"*')
#
# s1, v1, t1, l1, a1 = test(s, v, t, l, a)
#
# print('*************')
# print("s1 = {0:s}".format(s1))
# print("v1 = {0:5.2f}".format(v1))
# print("t1 = {0:s}".format(t1))
# print("l1 = {0:s}".format(l1))
# print("a1 = "),
# print(a1)
# print('*************')
# print("s = {0:s}".format(s))
# print("v = {0:5.2f}".format(v))
# print("t = {0:s}".format(t))
# print("l = {0:s}".format(l))
# print("a = "), # comma suppresses line feed
# print(a)
# print('*************')
# ```
#
# The function `test` has five arguments, a string `s`, a numerical
# variable `v`, a tuple `t`, a list `l`, and a NumPy array `a`. `test`
# modifies each of these arguments and then returns the modified `s`, `v`,
# `t`, `l`, `a`. Running the program produces the following output.
#
# ``` ipython
# In [17]: run passingVars.py
# *************
# s = How do you do?
# v = 5.00
# t = (97.5, 82.9, 66.7)
# l = [3.9, 5.7, 7.5, 9.3]
# a = [ 3.9 5.7 7.5 9.3]
# *************
# *call "test"*
# *************
# s1 = I am doing fine
# v1 = 9.87
# t1 = (1.1, 2.9)
# l1 = [3.9, 5.7, 7.5, 'end']
# a1 = [ 963.2 5.7 7.5 9.3]
# *************
# s = How do you do?
# v = 5.00
# t = (97.5, 82.9, 66.7)
# l = [3.9, 5.7, 7.5, 'end']
# a = [ 963.2 5.7 7.5 9.3]
# *************
# ```
#
# The program prints out three blocks of variables separated by asterisks.
# The first block merely verifies that the contents of `s`, `v`, `t`, `l`,
# and `a` are those assigned in lines 10-13. Then the function `test` is
# called. The next block prints the output of the call to the function
# `test`, namely the variables `s1`, `v1`, `t1`, `l1`, and `a1`. The
# results verify that the function modified the inputs as directed by the
# `test` function.
#
# The third block prints out the variables `s`, `v`, `t`, `l`, and `a`
# from the calling program *after* the function `test` was called. These
# variables served as the inputs to the function `test`. Examining the
# output from the third printing block, we see that the values of the
# string `s`, the numeric variable `v`, and the contents of `t` are
# unchanged after the function call. This is probably what you would
# expect. On the other hand, we see that the list `l` and the array `a`
# are changed after the function call. This might surprise you! But these
# are important points to remember, so important that we summarize them in
# two bullet points here:
#
# > - Changes to string, variable, and tuple arguments of a function
# > within the function do not affect their values in the calling
# > program.
# > - Changes to values of elements in list and array arguments of a
# > function within the function are reflected in the values of the
# > same list and array elements in the calling function.
#
# The point is that simple numerics, strings and tuples are immutable
# while lists and arrays are mutable. Because immutable objects can't be
# changed, changing them within a function creates new objects with the
# same name inside of the function, but the old immutable objects that
# were used as arguments in the function call remain unchanged in the
# calling program. On the other hand, if elements of mutable objects like
# those in lists or arrays are changed, then those elements that are
# changed inside the function are also changed in the calling program.
#
# Methods and attributes
# ----------------------
#
# You have already encountered quite a number of functions that are part
# of either NumPy or Python or Matplotlib. But there is another way in
# which Python implements things that act like functions. To understand
# what they are, you need to understand that variables, strings, arrays,
# lists, and other such data structures in Python are not merely the
# numbers or strings we have defined them to be. They are *objects*. In
# general, an object in Python has associated with it a number of
# *attributes* and a number of specialized functions called *methods* that
# act on the object. How attributes and methods work with objects is best
# illustrated by example.
#
# Let's start with the NumPy array. A NumPy array is a Python object and
# therefore has associated with it a number of attributes and methods.
# Suppose, for example, we write `a = random.random(10)`, which creates an
# array of 10 uniformly distributed random numbers between 0 and 1. An
# example of an attribute of an array is the size or number of elements in
# the array. An attribute of an object in Python is accessed by typing the
# object name followed by a period followed by the attribute name. The
# code below illustrates how to access two different attributes of an
# array, it's size and its data type.
#
# ``` ipython
# In [18]: a = random.random(10)
#
# In [19]: a.size
# Out[19]: 10
#
# In [20]: a.dtype
# Out[20]: dtype('float64')
# ```
#
# Any object in Python can and in general does have a number of attributes
# that are accessed in just the way demonstrated above, with a period and
# the attribute name following the name of the particular object. In
# general, attributes involve properties of the object that are stored by
# Python with the object and require no computation. Python just looks up
# the attribute and returns its value.
#
# Objects in Python also have associated with them a number of specialized
# functions called *methods* that act on the object. In contrast to
# attributes, methods generally involve Python performing some kind of
# computation. Methods are accessed in a fashion similar to attributes, by
# appending a period followed the method's name, which is followed by a
# pair of open-close parentheses, consistent with methods being a kind of
# function that acts on the object. Often methods are used with no
# arguments, as methods by default act on the object whose name they
# follow. In some cases. however, methods can take arguments. Examples of
# methods for NumPy arrays are sorting, calculating the mean, or standard
# deviation of the array. The code below illustrates a few array methods.
#
# ``` ipython
# In [21]: a
# Out[21]:
# array([ 0.859057 , 0.27228037, 0.87780026, 0.14341207,
# 0.05067356, 0.83490135, 0.54844515, 0.33583966,
# 0.31527767, 0.15868803])
#
# In [22]: a.sum() # sum
# Out[22]: 4.3963751104791005
#
# In [23]: a.mean() # mean or average
# Out[23]: 0.43963751104791005
#
# In [24]: a.var() # variance
# Out[24]: 0.090819477333711512
#
# In [25]: a.std() # standard deviation
# Out[25]: 0.30136270063448711
#
# In [26]: a.sort() # sort small to large
#
# In [27]: a
# Out[27]:
# array([ 0.05067356, 0.14341207, 0.15868803, 0.27228037,
# 0.31527767, 0.33583966, 0.54844515, 0.83490135,
# 0.859057 , 0.87780026])
#
# In [28]: a.clip(0.3, 0.8)
# Out[29]:
# array([ 0.3 , 0.3 , 0.3 , 0.3 ,
# 0.31527767, 0.33583966, 0.54844515, 0.8 ,
# 0.8 , 0.8 ])
# ```
#
# The `clip()` method provides an example of a method that takes an
# argument, in this case the arguments are the lower and upper values to
# which array elements are cutoff if their values are outside the range
# set by these values.
#
# single: curve fitting; linear
#
# Example: linear least squares fitting
# -------------------------------------
#
# In this section we illustrate how to use functions and methods in the
# context of modeling experimental data.
#
# In science and engineering we often have some theoretical curve or
# *fitting function* that we would like to fit to some experimental data.
# In general, the fitting function is of the form $f(x; a, b, c, ...)$,
# where $x$ is the independent variable and $a$, $b$, $c$, ... are
# parameters to be adjusted so that the function $f(x; a, b, c, ...)$ best
# fits the experimental data. For example, suppose we had some data of the
# velocity *vs* time for a falling mass. If the mass falls only a short
# distance such that its velocity remains well below its terminal
# velocity, we can ignore air resistance. In this case, we expect the
# acceleration to be constant and the velocity to change linearly in time
# according to the equation
#
# $$v(t) = v_{0} - g t \;,$$
#
# where $g$ is the local gravitational acceleration. We can fit the data
# graphically, say by plotting it as shown below in Fig.
# `4.6<fig:FallingMassDataPlot>` and then drawing a line through the data.
# When we draw a straight line through a data, we try to minimize the
# distance between the points and the line, globally averaged over the
# whole data set.
#
# <figure>
# <img src="attachment:VelocityVsTimePlot.png" class="align-center" alt="" /><figcaption>Velocity <em>vs</em> time for falling mass.</figcaption>
# </figure>
#
# While this can give a reasonable estimate of the best fit to the data,
# the procedure is rather *ad hoc*. We would prefer to have a more
# well-defined analytical method for determining what constitutes a "best
# fit". One way to do that is to consider the sum
#
# $$S = \sum_{i}^{n} [y_{i} - f(x_{i}; a, b, c, ...)]^2 \;,$$
#
# where $y_{i}$ and $f(x_{i}; a, b, c, ...)$ are the values of the
# experimental data and the fitting function, respectively, at $x_{i}$,
# and $S$ is the square of their difference summed over all $n$ data
# points. The quantity $S$ is a sort of global measure of how much the the
# fit $f(x_{i}; a, b, c, ...)$ differs from the experimental data $y_{i}$.
#
# Notice that for a given set of data points $\{x_i, y_i\}$, $S$ is a
# function only of the fitting parameters $a, b, ...$, that is,
# $S=S(a, b, c, ...)$. One way of defining a *best* fit, then, is to find
# the values of the fitting parameters $a, b, ...$ that minimize the $S$.
#
# In principle, finding the values of the fitting parameters $a, b, ...$
# that minimize the $S$ is a simple matter. Just set the partial
# derivatives of $S$ with respect to the fitting parameter equal to zero
# and solve the resulting system of equations:
#
# $$\frac{\partial S}{\partial a} = 0 \;, \quad
# \frac{\partial S}{\partial b} = 0 \;, ...$$
#
# Because there are as many equations as there are fitting paramters, we
# should be able to solve the system of equations and find the values of
# the fitting parameters that minimize $S$. Solving those systems of
# equations is straightforward if the fitting function $f(x; a, b, ...)$
# is linear in the fitting parameters. Some examples of fitting functions
# linear in the fitting parameters are:
#
# $$\begin{aligned}
# f(x; a, b) &= a + bx \\
# f(x; a, b, c) &= a + bx + cx^2 \\
# f(x; a, b, c) &= a \sin x + b e^x + c e^{-x^2} \;.
# \end{aligned}$$
#
# For fitting functions such as these, taking the partial derivatives with
# respect to the fitting parameters, as proposed in `eq:sysSzero`, results
# in a set of algebraic equations that are linear in the fitting paramters
# $a, b, ...$ Because they are linear, these equations can be solved in a
# straightforward manner.
#
# For cases in which the fitting function is not linear in the fitting
# parameters, one can generally still find the values of the fitting
# parameters that minimize $S$ but finding them requires more work, which
# goes beyond our immediate interests here.
#
# ### Linear regression
#
# We start by considering the simplest case, fitting a straight line to a
# data set, such as the one shown in Fig. `4.6 <fig:FallingMassDataPlot>`
# above. Here the fitting function is $f(x) = a + bx$, which is linear in
# the fitting parameters $a$ and $b$. For a straight line, the sum in
# `eq:lsqrsum` becomes
#
# $$S(a,b) = \sum_{i} (y_{i} - a - bx_{i})^2 \;.$$
#
# Finding the best fit in this case corresponds to finding the values of
# the fitting parameters $a$ and $b$ for which $S(a,b)$ is a minimum. To
# find the minimum, we set the derivatives of $S(a,b)$ equal to zero:
#
# $$\begin{aligned}
# \frac{\partial S}{\partial a} &= \sum_{i}-2(y_{i}-a-bx_{i}) = 2 \left(na + b\sum_{i}x_{i} - \sum_{i}y_{i} \right) = 0 \\
# \frac{\partial S}{\partial b} &= \sum_{i}-2(y_{i}-a-bx_{i})\,x_{i} = 2 \left(a\sum_{i}x_{i} + b\sum_{i}x_{i}^2 - \sum_{i}x_{i}y_{i} \right) = 0
# \end{aligned}$$
#
# Dividing both equations by $2n$ leads to the equations
#
# $$\begin{aligned}
# a + b\bar{x} &= \bar{y}\\
# a\bar{x} + b\frac{1}{n}\sum_{i}x_{i}^2 &= \frac{1}{n}\sum_{i}x_{i}y_{i}
# \end{aligned}$$
#
# where
#
# $$\begin{aligned}
# \bar{x} &= \frac{1}{n}\sum_{i}x_{i}\\
# \bar{y} &= \frac{1}{n}\sum_{i}y_{i}\;.
# \end{aligned}$$
#
# Solving Eq. `eq:ablinreg` for the fitting parameters gives
#
# $$\begin{aligned}
# b &= \frac{\sum_{i}x_{i}y_{i} - n\bar{x}\bar{y}} {\sum_{i}x_{i}^2 - n \bar{x}^2}\\
# a &= \bar{y} - b\bar{x} \;.
# \end{aligned}$$
#
# Noting that $n\bar{y}=\sum_{i}y$ and $n\bar{x}=\sum_{i}x$, the results
# can be written as
#
# $$\begin{aligned}
# b &= \frac{\sum_{i}(x_{i}- \bar{x})\,y_{i}} {\sum_{i}(x_{i}- \bar{x})\,x_{i}} \\
# a &= \bar{y} - b\bar{x} \;.
# \end{aligned}$$
#
# While Eqs. `eq:b1` and `eq:b2` are equivalent analytically, Eq. `eq:b2`
# is preferred for numerical calculations because Eq. `eq:b2` is less
# sensitive to roundoff errors. Here is a Python function implementing
# this algorithm:
#
# def LineFit(x, y):
# ''' Returns slope and y-intercept of linear fit to (x,y)
# data set'''
# xavg = x.mean()
# slope = (y*(x-xavg)).sum()/(x*(x-xavg)).sum()
# yint = y.mean()-slope*xavg
# return slope, yint
#
# It's hard to imagine a simpler implementation of the linear regression
# algorithm.
#
# single: curve fitting; linear; with weighting
#
# ### Linear regression with weighting: $\chi^2$
#
# The linear regression routine of the previous section weights all data
# points equally. That is fine if the absolute uncertainty is the same for
# all data points. In many cases, however, the uncertainty is different
# for different points in a data set. In such cases, we would like to
# weight the data that has smaller uncertainty more heavily than those
# data that have greater uncertainty. For this case, there is a standard
# method of weighting and fitting data that is known as $\chi^2$ (or
# *chi-squared*) fitting. In this method we suppose that associated with
# each $(x_{i},y_{i})$ data point is an uncertainty in the value of
# $y_{i}$ of $\pm\sigma_{i}$. In this case, the "best fit" is defined as
# the the one with the set of fitting parameters that minimizes the sum
#
# $$\chi^2 = \sum_{i} \left(\frac{y_{i} - f(x_{i})} {\sigma_{i}}\right)^2 \;.$$
#
# Setting the uncertainties $\sigma_{i}=1$ for all data points yields the
# same sum $S$ we introduced in the previous section. In this case, all
# data points are weighted equally. However, if $\sigma_{i}$ varies from
# point to point, it is clear that those points with large $\sigma_{i}$
# contribute less to the sum than those with small $\sigma_{i}$. Thus,
# data points with large $\sigma_{i}$ are weighted less than those with
# small $\sigma_{i}$.
#
# To fit data to a straight line, we set $f(x) = a + bx$ and write
#
# $$\chi^2(a,b) = \sum_{i} \left(\frac{y_{i} - a -bx_{i}} {\sigma_{i}}\right)^2 \;.$$
#
# Finding the minimum for $\chi^2(a,b)$ follows the same procedure used
# for finding the minimum of $S(a,b)$ in the previous section. The result
# is
#
# $$\begin{aligned}
# b &= \frac{\sum_{i}(x_{i} - \hat{x})\,y_{i}/\sigma_{i}^2} {\sum_{i}(x_{i} - \hat{x})\,x_{i}/\sigma_{i}^2}\\
# a &= \hat{y} - b\hat{x} \;.
# \end{aligned}$$
#
# where
#
# $$\begin{aligned}
# \hat{x} &= \frac{\sum_{i}x_{i}/\sigma_{i}^2} {\sum_{i}1/\sigma_{i}^2}\\
# \hat{y} &= \frac{\sum_{i}y_{i}/\sigma_{i}^2} {\sum_{i}1/\sigma_{i}^2}\;.
# \end{aligned}$$
#
# For a fit to a straight line, the overall quality of the fit can be
# measured by the reduced chi-squared parameter
#
# $$\chi_{r}^2 = \frac{\chi^2}{n-2}$$
#
# where $\chi^2$ is given by Eq. `eq:chisq` evaluated at the optimal
# values of $a$ and $b$ given by Eq. `eq:abwchisq`. A good fit is
# characterized by $\chi_{r}^2 \approx 1$. This makes sense because if the
# uncertainties $\sigma_{i}$ have been properly estimated, then
# $[y_{i}-f(x_{i})]^2$ should on average be roughly equal to
# $\sigma_{i}^2$, so that the sum in Eq. `eq:chisq` should consist of $n$
# terms approximately equal to 1. Of course, if there were only 2 terms
# (<span class="title-ref">n=2</span>), then $\chi^2$ would be zero as the
# best straight line fit to two points is a perfect fit. That is
# essentially why $\chi_{r}^2$ is normalized using $n-2$ instead of $n$.
# If $\chi_{r}^2$ is significantly greater than 1, this indicates a poor
# fit to the fitting function (or an underestimation of the uncertainties
# $\sigma_{i}$). If $\chi_{r}^2$ is significantly less than 1, then it
# indicates that the uncertainties were probably overestimated (the fit
# and fitting function may or may not be good).
#
# <figure>
# <img src="attachment:VelocityVsTimeFit.png" class="align-center" alt="" /><figcaption>Fit using <span class="math inline"><em>ฯ</em><sup>2</sup></span> least squares fitting routine with data weighted by error bars.</figcaption>
# </figure>
#
# We can also get estimates of the uncertainties in our determination of
# the fitting parameters $a$ and $b$, although deriving the formulas is a
# bit more involved that we want to get into here. Therefore, we just give
# the results:
#
# $$\begin{aligned}
# \sigma_{b}^2 &= \frac{1} {\sum_{i}(x_{i} - \hat{x})\,x_{i}/\sigma_{i}^2}\\
# \sigma_{a}^2 &= \sigma_{b}^2 \frac{\sum_{i}x_{i}^2/\sigma_{i}^2} {\sum_{i}1/\sigma_{i}^2}\;.
# \end{aligned}$$
#
# The estimates of uncertainties in the fitting parameters depend
# explicitly on $\{\sigma_{i}\}$ and will only be meaningful if (*i*)
# $\chi_{r}^2 \approx 1$ and (*ii*) the estimates of the uncertainties
# $\sigma_{i}$ are accurate.
#
# You can find more information, including a derivation of Eq.
# `eq:absigma`, in *Data Reduction and Error Analysis for the Physical
# Sciences, 3rd ed* by <NAME> & <NAME>, McGraw-Hill, New
# York, 2003.
#
# single: anonymous functions single: anonymous functions; lambda
# expressions single: lambda expressions
#
# Anonymous functions (lambda)
# ----------------------------
#
# Python provides another way to generate functions called *lambda*
# expressions. A lambda expression is a kind of in-line function that can
# be generated on the fly to accomplish some small task. You can assign
# lambda functions a name, but you don't need to; hence, they are often
# called *anonymous* functions. A lambda uses the keyword `lambda` and has
# the general form
#
# ``` ipython
# lambda arg1, arg2, ... : output
# ```
#
# The arguments `arg1, arg2, ...` are inputs to a lambda, just as for a
# functions, and the output is an expression using the arguments.
#
# While lambda expressions need not be named, we illustrate their use by
# comparing a conventional Python function definition to a lambda
# expression to which we give a name. First, we define a conventional
# python function
#
# ``` ipython
# In [1]: def f(a, b):
# ...: return 3*a+b**2
#
# In [2]: f(2,3)
# Out[2]: 15
# ```
#
# Next, we define a lambda that does the same thing
#
# ``` ipython
# In [3]: g = lambda a, b : 3*a+b**2
#
# In [4]: g(2,3)
# Out[4]: 15
# ```
#
# The `lambda` defined by `g` does the same thing as the function `f`.
# Such `lambda` expressions are useful when you need a very short function
# definition, usually to be used locally only once or a few times.
#
# Sometimes lambda expressions are used in function arguments that call
# for a function *name*, as opposed to the function itself. Moreover, in
# cases where a the function to be integrated is already defined but is a
# function one independent variable and several parameters, the lambda
# expression can be a convenient way of fashioning a single variable
# function. Don't worry if this doesn't quite make sense to you right now.
# You will see examples of lambda expressions used in just this way in the
# section `numericalIntegration`.
#
# There are also a number of nifty programming tricks that can be
# implemented using `lambda` expressions, but we will not go into them
# here. Look up `lambdas` on the web if you are curious about their more
# exotic uses.
#
# Exercises
# ---------
#
# 1. Write a function that can return each of the first three spherical
# Bessel functions $j_n(x)$:
#
# $$\begin{aligned}
# j_0(x) &= \frac{\sin x}{x}\\
# j_1(x) &= \frac{\sin x}{x^2} - \frac{\cos x}{x}\\
# j_2(x) &= \left(\frac{3}{x^2}-1\right)\frac{\sin x}{x} - \frac{3\cos x}{x^2}
# \end{aligned}$$
#
# Your function should take as arguments a NumPy array $x$ and the
# order $n$, and should return an array of the designated order $n$
# spherical Bessel function. Take care to make sure that your
# functions behave properly at $x=0$.
#
# Demonstrate the use of your function by writing a Python routine
# that plots the three Bessel functions for $0 \le x \le 20$. Your
# plot should look like the one below. Something to think about: You
# might note that $j_1(x)$ can be written in terms of $j_0(x)$, and
# that $j_2(x)$ can be written in terms of $j_1(x)$ and $j_0(x)$. Can
# you take advantage of this to write a more efficient function for
# the calculations of $j_1(x)$ and $j_2(x)$?
#
# <figure>
# <img src="attachment:besselSph.png" class="align-center" alt="" />
# </figure>
#
# 2. 1. Write a function that simulates the rolling of $n$ dice. Use the
# NumPy function `random.random_integers(6)`, which generates a
# random integer between 1 and 6 with equal probability (like
# rolling fair dice). The input of your function should be the
# number of dice thrown each roll and the output should be the sum
# of the $n$ dice.
# 2. "Roll" 2 dice 10,000 times keeping track of all the sums of each
# set of rolls in a list. Then use your program to generate a
# histogram summarizing the rolls of two dice 10,000 times. The
# result should look like the histogram plotted below. Use the
# MatPlotLib function `hist` (see
# <http://matplotlib.org/api/pyplot_summary.html>) and set the
# number of bins in the histogram equal to the number of different
# possible outcomes of a roll of your dice. For example, the sum
# of two dice can be anything between 2 and 12, which corresponds
# to 11 possible outcomes. You should get a histogram that looks
# like the one below.
# 3. "Repeat part (b) using 3 dice and plot the resulting histogram.
#
# <figure>
# <img src="attachment:diceRoll2.png" class="align-center" alt="" />
# </figure>
#
# 3. Write a function to draw a circular smiley face with eyes, a nose,
# and a mouth. One argument should set the overall size of the face
# (the circle radius). Optional arguments should allow the user to
# specify the $(x,y)$ position of the face, whether the face is
# smiling or frowning, and the color of the lines. The default should
# be a smiling blue face centered at $(0,0)$. Once you write your
# function, write a program that calls it several times to produce a
# plot like the one below (creative improvisation is encouraged!). In
# producing your plot, you may find the call
# `plt.axes().set_aspect(1)` useful so that circles appear as circles
# and not ovals. You should only use MatPlotLib functions introduced
# in this text. To create a circle you can create an array of angles
# that goes from 0 to $2\pi$ and then produce the $x$ and $y$ arrays
# for your circle by taking the cosine and sine, respectively, of the
# array. Hint: You can use the same $(x,y)$ arrays to make the smile
# and frown as you used to make the circle by plotting appropriate
# slices of those arrays. You do not need to create new arrays.
#
# <figure>
# <img src="attachment:smiley.png" class="align-center" alt="" />
# </figure>
#
# 4. In the section `linfitfunc`, we showed that the best fit of a line
# $y = a + bx$ to a set of data $\{(x_i,y_i)\}$ is obtained for the
# values of $a$ and $b$ given by Eq. `eq:b2`. Those formulas were
# obtained by finding the values of $a$ and $b$ that minimized the sum
# in Eq. `eq:linreg1`. This approach and these formulas are valid when
# the uncertainties in the data are the same for all data points. The
# Python function `LineFit(x, y)` in the section `linfitfunc`
# implements Eq. `eq:b2`.
#
# 1. Write a new fitting function `LineFitWt(x, y)` that implements
# the formulas given in Eq. `eq:xychisq` that minimize the
# $\chi^2$ function give by Eq. `eq:chisqlin`. This more general
# approach is valid when the individual data points have different
# weightings *or* when they all have the same weighting. You
# should also write a function to calculate the reduced
# chi-squared $\chi_r^2$ defined by Eq. `eq:chisqlin`.
#
# 2. Write a Python program that reads in the data below, plots it,
# and fits it using the two fitting functions `LineFit(x, y)` and
# `LineFitWt(x, y)`. Your program should plot the data with error
# bars and with *both* fits with and without weighting, that is
# from `LineFit(x, y)` and `LineFitWt(x, y, dy)`. It should also
# report the results for both fits on the plot, similar to the
# output of the supplied program above, as well as the values of
# $\chi_r^2$, the reduce chi-squared value, for both fits. Explain
# why weighting the data gives a steeper or less steep slope than
# the fit without weighting.
#
# Velocity vs time data
# for a falling mass
# time (s) velocity (m/s) uncertainty (m/s)
# 2.23 139 16
# 4.78 123 16
# 7.21 115 4
# 9.37 96 9
# 11.64 62 17
# 14.23 54 17
# 16.55 10 12
# 18.70 -3 15
# 21.05 -13 18
# 23.21 -55 10
#
# 5. Modify the function `LineFitWt(x, y)` you wrote in Exercise 4 above
# so that in addition to returning the fitting parameters $a$ and $b$,
# it also returns the uncertainties in the fitting parameters
# $\sigma_a$ and $\sigma_b$ using the formulas given by Eq.
# `eq:absigma`. Use your new fitting function to find the
# uncertainties in the fitted slope and $y$-intercept for the data
# provided with Exercise 4.
| Book/chap7/chap7_funcs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="y8x49F0-VoxO" colab_type="code" colab={}
import numpy as np
X= np.load('/content/drive/My Drive/X.npy')
Y= np.load('/content/drive/My Drive/Y.npy')
# + id="RbwrIJWVhTjR" colab_type="code" outputId="20d97aa5-ddde-4188-ef51-5dbfe5f18a39" colab={"base_uri": "https://localhost:8080/", "height": 33}
X.shape
# + id="2_oxAFx8EDkI" colab_type="code" colab={}
#Recurrence plots code start
from pyts.multivariate.image import JointRecurrencePlot
from scipy import signal
rp = JointRecurrencePlot(threshold='distance', percentage=50)
# + id="zIAf8-UI-coq" colab_type="code" colab={}
def preprocess(series):
# original_Fs=3518
# F = int((original_Fs * 1) / downsample_to)
d_series = signal.resample_poly(series, up=1, down=6)
d_series = np.reshape(d_series, (1,d_series.shape[0]))
return d_series
# + id="s6pSUBm9-Q7k" colab_type="code" colab={}
def extract(arr):
data = preprocess(arr[0,:])
# print(arr[0,:].shape)
for i in range(1,64):
data = np.concatenate((data, preprocess(arr[i,:])), axis=0)
# print(data.shape)
return data
# + id="idw7FEz2ApBh" colab_type="code" colab={}
X_final = np.zeros((2160,64,587))
for i in range(X.shape[0]):
h=extract(X[i])
X_final[i] = h
# + id="nV13zAY3JQbK" colab_type="code" colab={}
X_image= np.zeros((2160,587,587))
for i in range(X.shape[0]):
m=np.reshape(X_final[i],(1,X_final.shape[1],X_final.shape[2]))
X_image[i] = rp.transform(m)
# + id="96sVAIqcwO5i" colab_type="code" outputId="a8dbb2bf-ad9f-4304-ebe4-ae1162240099" colab={"base_uri": "https://localhost:8080/", "height": 33}
X_image.shape
# + id="0upF-CMfJPrY" colab_type="code" colab={}
import matplotlib.pyplot as plt
# + id="5Y90OQayFwSH" colab_type="code" outputId="6d96956c-6202-4654-e9c0-a43a4535a712" colab={"base_uri": "https://localhost:8080/", "height": 368}
plt.figure(figsize=(5, 5))
plt.imshow(X_image[2159], cmap='binary', origin='lower')
plt.title('Joint Recurrence Plot', fontsize=18)
plt.tight_layout()
plt.show()
# + id="mxKwauVIHlmM" colab_type="code" outputId="91ab4d95-5b07-4b2b-b99c-36061d5df9e7" colab={"base_uri": "https://localhost:8080/", "height": 33}
x_train=X_image[:1680]
y_train=Y[:1680]
x_valid=X_image[1680:1920]
y_valid=Y[1680:1920]
x_test=X_image[1920:]
y_test=Y[1920:]
print(x_train.shape,y_train.shape, x_valid.shape, y_valid.shape, x_test.shape, y_test.shape)
# + id="EK6XoluptgaE" colab_type="code" colab={}
w, h = 587, 587
x_train = x_train.reshape(x_train.shape[0], w, h, 1)
x_valid = x_valid.reshape(x_valid.shape[0], w, h, 1)
x_test = x_test.reshape(x_test.shape[0], w, h, 1)
# + id="euf6SAnoC-GT" colab_type="code" outputId="39fc7f0a-67c6-49b2-8f94-cbf8e025bfa4" colab={"base_uri": "https://localhost:8080/", "height": 33}
print(x_train.shape,y_train.shape, x_valid.shape, y_valid.shape, x_test.shape, y_test.shape)
# + id="OBAftIpCAH7r" colab_type="code" outputId="1847e559-8ee0-4080-8f0e-4650cfa6bdc4" colab={"base_uri": "https://localhost:8080/", "height": 375}
import tensorflow as tf
np.random.seed(69)
model = tf.keras.Sequential()
# Must define the input shape in the first layer of the neural network
model.add(tf.keras.layers.Conv2D(filters=32, kernel_size=2, padding='same', activation='relu', input_shape=(587,587,1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
# model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=2, padding='same', activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
# model.add(tf.keras.layers.Dropout(0.3))
# model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=2, padding='same', activation='relu'))
# model.add(tf.keras.layers.MaxPooling2D(pool_size=2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(98, activation='relu'))
# model.add(tf.keras.layers.Dense(50, activation='relu'))
# model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(12, activation='softmax'))
# Take a look at the model summary
model.summary()
# + id="JYN9GXPoiGLp" colab_type="code" colab={}
from tensorflow.keras import optimizers
opt = optimizers.Adam(lr=0.0001)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
# + id="l0HJ0O2FiNtZ" colab_type="code" outputId="f05f6db1-de33-4aee-87be-85af7985d53c" colab={"base_uri": "https://localhost:8080/", "height": 475}
from tensorflow.keras.callbacks import ModelCheckpoint
mc = ModelCheckpoint('/content/drive/My Drive/Data/weights{epoch:08d}.h5', save_weights_only=True, period=1)
# checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose = 1, save_best_only=True)
model.fit(x_train,
y_train,
batch_size=10,
epochs=15,
validation_data=(x_valid, y_valid),
callbacks=[mc])
# + id="8W_DDfj5iRmi" colab_type="code" colab={}
# Load the weights with the best validation accuracy
model.load_weights('/content/drive/My Drive/weights00000010.h5')
# model.load_weights('model.weights.best.hdf5')
# + id="TqRVEkc6i3MQ" colab_type="code" outputId="24bfcbf9-46fe-4b9a-b53f-b9fbfde15255" colab={"base_uri": "https://localhost:8080/", "height": 50}
# Evaluate the model on test set
score = model.evaluate(x_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score)
# + id="LRdH0GkEjFpX" colab_type="code" colab={}
yhat=model.predict(x_test)
# + id="Aw7NT-eS3nbB" colab_type="code" outputId="0c7aacf2-9716-45ef-d28e-92a82a1d32aa" colab={"base_uri": "https://localhost:8080/", "height": 33}
yhat.shape
# + id="OXgBl81x0zj_" colab_type="code" outputId="88a91263-bb17-4db9-8322-a15a99c2f4b6" colab={"base_uri": "https://localhost:8080/", "height": 33}
y_test.shape
# + id="ddmCdsIL4-Mq" colab_type="code" outputId="5043a928-9166-40b0-995f-d<PASSWORD>" colab={"base_uri": "https://localhost:8080/", "height": 265}
cm = np.zeros((12,12), dtype=int)
np.add.at(cm, [y_test, yhat], 1)
cm
| models/RecurrencePlotswithCNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Ruby 2.6.5
# language: ruby
# name: ruby
# ---
# # Ruby version of inline chapter code snippets
#
# This notebook contains all small code snippets used within the book chapters translated to Ruby.
#
#
# See here for R version:
# http://xcelab.net/rmpubs/sr2/code.txt
# ## 0.1
puts( "All models are wrong, but some are useful." )
# ## 0.2
x = (1..2)
x = x.map { |i| i * 10 }
x = x.map { |x| Math.log(x) }
x = x.sum
x = Math.exp(x)
# ## 0.3
Math.log(0.01**200)
200 * Math.log(0.01)
# ## 0.4
# TODO: Implement a proper Ruby variant of R's `lm` function
# ## 0.5
# TODO: Port over cars dataset and fill this gap.
| notebooks/0_chapter_code_snippets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # sequence prediction sandbox
#
# +
import torch
from torch import nn
from itertools import product
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import utils as u
# -
# create all possible n-mers for 8
seqs8 = [''.join(x) for x in product(['A','C','G','T'], repeat=8)]
print('Total 8mers:',len(seqs8))
# if you want to down select
seqs8_200 = u.downselect_list(seqs8,200)
# +
score_dict = {
'A':20,
'C':17,
'G':14,
'T':11
}
def score_seqs(seqs):
data = []
for seq in seqs:
score = np.mean([score_dict[base] for base in seq])
data.append([seq,score])
df = pd.DataFrame(data, columns=['seq','score'])
return df
def score_seqs_motif(seqs):
data = []
for seq in seqs:
score = np.mean([score_dict[base] for base in seq])
if 'TAT' in seq:
score += 10
if 'GCG' in seq:
score -= 10
data.append([seq,score])
df = pd.DataFrame(data, columns=['seq','score'])
return df
# -
mer8 = score_seqs(seqs8)
mer8.head()
mer8_motif = score_seqs_motif(seqs8)
mer8_motif[mer8['seq']=='TGCGTTTT']
# +
plt.hist(mer8['score'].values,bins=20)
plt.title("8-mer score distribution")
plt.xlabel("seq score",fontsize=14)
plt.ylabel("count",fontsize=14)
plt.show()
plt.hist(mer8_motif['score'].values,bins=20)
plt.title("8-mer with Motifs score distribution")
plt.xlabel("seq score",fontsize=14)
plt.ylabel("count",fontsize=14)
plt.show()
# -
# ### Define some basic model archs for Linear and CNN
# +
class DNA_Linear_Shallow(nn.Module):
def __init__(self, seq_len):
super().__init__()
self.seq_len = seq_len
self.lin = nn.Linear(4*seq_len, 1)
def forward(self, xb):
# Linear wraps up the weights/bias dot product operations
out = self.lin(xb)
#print("Lin out shape:", out.shape)
return out
class DNA_Linear_Deep(nn.Module):
def __init__(self, seq_len,h1_size):
super().__init__()
self.seq_len = seq_len
self.lin = nn.Sequential(
nn.Linear(4*seq_len, h1_size),
nn.ReLU(inplace=True),
nn.Linear(h1_size, 1),
nn.ReLU(inplace=True)
)
def forward(self, xb):
# Linear wraps up the weights/bias dot product operations
out = self.lin(xb)
#print("Lin out shape:", out.shape)
return out
class DNA_CNN(nn.Module):
def __init__(self,
seq_len,
num_filters=31,
kernel_size=3):
super().__init__()
self.seq_len = seq_len
self.conv_net = nn.Sequential(
nn.Conv1d(4, num_filters, kernel_size=kernel_size),
nn.ReLU(inplace=True),
nn.Flatten(),
nn.Linear(num_filters*(seq_len-kernel_size+1), 10),
nn.ReLU(inplace=True),
nn.Linear(10, 1),
)
def forward(self, xb):
# reshape view to batch_ssize x 4channel x seq_len
# permute to put channel in correct order
xb = xb.view(-1,self.seq_len,4).permute(0,2,1)
#print(xb.shape)
out = self.conv_net(xb)
#print("CNN out shape:",out.shape)
return out
# +
def plot_train_test_hist(train_df, test_df,bins=10):
''' Check distribution of train/test scores, sanity check that its not skewed'''
plt.hist(train_df['score'].values,bins=bins,label="train")
plt.hist(test_df['score'].values,bins=bins,label='test')
plt.legend()
plt.xlabel("seq score",fontsize=14)
plt.ylabel("count",fontsize=14)
plt.show()
def quick_test8(model, oracle):
seqs1 = ['AAAAAAAA', 'CCCCCCCC','GGGGGGGG','TTTTTTTT']
seqs2 = ['AACCAACA','CCGGCGCG','GGGTAAGG', 'TTTCGTTT','TGTAATAC']
seqsTAT = ['TATAAAAA','CCTATCCC','GTATGGGG','TTTATTTT']
seqsGCG = ['AAGCGAAA','CGCGCCCC','GGGCGGGG','TTGCGTTT']
TATGCG = ['ATATGCGA','TGCGTATT']
for seqs in [seqs1, seqs2, seqsTAT, seqsGCG, TATGCG]:
u.quick_seq_pred(model, seqs, oracle)
print()
# -
# # Try with 8 mers
# # Single task Regression with Motifs
# ### Linear Model
mer8motif_train_dl, \
mer8motif_test_dl, \
mer8motif_train_df, \
mer8motif_test_df = u.build_dataloaders_single(mer8_motif, batch_size=13)
plot_train_test_hist(mer8motif_train_df, mer8motif_test_df,bins=20)
# +
seq_len = len(mer8motif_train_df['seq'].values[0])
hidden_layer_size = 24
mer8motif_model_lin_s = DNA_Linear_Shallow(seq_len)
mer8motif_train_losses_lin_s, mer8motif_test_losses_lin_s = u.run_model(
mer8motif_train_dl,
mer8motif_test_dl,
mer8motif_model_lin_s
)
# to plot loss
mer8motif_lin_s_data_label = list(zip([mer8motif_train_losses_lin_s,
mer8motif_test_losses_lin_s],
['Lin(sh) Train Loss',
'Lin(sh) Test Loss']))
u.quick_loss_plot(mer8motif_lin_s_data_label)
# -
oracle_8mer_motif = dict(mer8_motif[['seq','score']].values)
quick_test8(mer8motif_model_lin_s,oracle_8mer_motif)
# +
seq_len = len(mer8motif_train_df['seq'].values[0])
hidden_layer_size = 24
mer8motif_model_lin_d = DNA_Linear_Deep(seq_len,hidden_layer_size)
mer8motif_train_losses_lin_d, mer8motif_test_losses_lin_d = u.run_model(
mer8motif_train_dl,
mer8motif_test_dl,
mer8motif_model_lin_d
)
# to plot loss
mer8motif_lin_d_data_label = list(zip([mer8motif_train_losses_lin_d,
mer8motif_test_losses_lin_d],
['Lin(dp) Train Loss',
'Lin(dp) Test Loss']))
u.quick_loss_plot(mer8motif_lin_d_data_label)
# -
oracle_8mer_motif = dict(mer8_motif[['seq','score']].values)
quick_test8(mer8motif_model_lin_d,oracle_8mer_motif)
u.quick_loss_plot(
mer8motif_lin_s_data_label + \
mer8motif_lin_d_data_label
)
# ### CNN Model
# +
seq_len = len(mer8motif_train_df['seq'].values[0])
mer8motif_model_cnn = DNA_CNN(seq_len)
mer8motif_train_losses_cnn, \
mer8motif_test_losses_cnn = u.run_model(
mer8motif_train_dl,
mer8motif_test_dl,
mer8motif_model_cnn,
lr=0.01
)
# to plot loss
mer8motif_cnn_data_label = list(zip([mer8motif_train_losses_cnn,mer8motif_test_losses_cnn], ['CNN Train Loss','CNN Test Loss']))
u.quick_loss_plot(mer8motif_cnn_data_label)
# -
quick_test8(mer8motif_model_cnn, oracle_8mer_motif)
u.quick_loss_plot(
mer8motif_lin_s_data_label + \
mer8motif_lin_d_data_label + \
mer8motif_cnn_data_label
)
# +
models = [
("LinearShallow_8mer",mer8motif_model_lin_s),
("LinearDeep_8mer",mer8motif_model_lin_d),
("CNN_8mer",mer8motif_model_cnn),
]
seqs = mer8motif_test_df['seq'].values
task = "TATGCGmotif"
dfs = u.parity_pred(models, seqs, oracle_8mer_motif,task,alt=True)
# -
# # try to export to keras
import keras
from pytorch2keras import pytorch_to_keras
from torch.autograd import Variable
mer8motif_model_cnn
mer8motif_model_lin_d
# ## Current goal: convert the Deep Linear model (mer8motif_model_lin_d) to a keras model
# (and check that its predictions on specific example sequences are correct)
# ### STUCK - what should this dimension be??
# +
stuck_dim1 = (1,32) # (1,1,32) # (32,)
# ^^ I keep messing with this dimension
input_np = np.random.uniform(0, 1, stuck_dim1 )
print(input_np.shape)
print(input_np)
input_var = Variable(torch.FloatTensor(input_np))
input_var.shape
# -
dummy_input.shape
# +
stuck_dim2 = [(32)] # [(32)]
# ^^ and this dimension
k_model = pytorch_to_keras(mer8motif_model_lin_d, input_var, stuck_dim2, verbose=True)
#k_model = pytorch_to_keras(mer8motif_model_lin_d, dummy_input, stuck_dim2, verbose=True)
# -
k_model.summary()
# ### stuck: try to predict on some seqs
# +
# PERPETUAL DEATH
seqs = ["AAAAAAAA","TTTTTTTT","CCCCCCCC","GGGGGGGG","GGGTATGG","AAGCGAAA"]
ohe_seqs = [u.one_hot_encode(x) for x in seqs]
#print(ohe_seqs)
for seq in seqs:
#ohe_seq = np.array(u.one_hot_encode(seq))
ohe_seq = np.array(torch.from_numpy(u.one_hot_encode(seq).reshape(1, -1)).float())
res = k_model.predict(ohe_seq)
print(f"{seq}:{res}")
# -
np.array(dummy_input)
# try to save this and port to Scrambler notebook?
k_model.save('practice.h5')
k_reload = keras.models.load_model("practice.h5")
for seq in seqs:
#ohe_seq = np.array(u.one_hot_encode(seq))
ohe_seq = np.array(torch.from_numpy(u.one_hot_encode(seq).reshape(1, -1)).float())
res = k_model.predict(ohe_seq)
res2 = k_reload.predict(ohe_seq)
print(f"{seq}:{res}, {res2}")
dummy_input = torch.from_numpy(ohe_seqs[0].reshape(1, -1)).float()
dummy_output = mer8motif_model_lin_d(dummy_input)
print(dummy_output)
dummy_input.shape
k_model.
mer8motif_model_lin_d(torch.tensor(ohe_seqs).float())
mer8motif_model_cnn(torch.tensor(ohe_seqs).float())
keras.utils.plot_model(k_model, "my_first_model.png")
# +
# pytorch2keras example:
class TestConv2d(nn.Module):
"""
Module for Conv2d testing
"""
def __init__(self, inp=10, out=16, kernel_size=3):
super(TestConv2d, self).__init__()
self.conv2d = nn.Conv2d(inp, out, stride=1, kernel_size=kernel_size, bias=True)
def forward(self, x):
x = self.conv2d(x)
return x
model = TestConv2d()
# -
input_np_og = np.random.uniform(0, 1, (1, 10, 32,32))
input_np_og.shape
input_np_og[0].shape
input_var_og = Variable(torch.FloatTensor(input_np_og))
input_var_og.shape
k_model_og = pytorch_to_keras(model, input_var_og, [(10, 32,32)], verbose=True)
k_model_og.summary()
k_model_og.predict()
# +
# example:
in_channels = 10, out_channels = 16
# want? # NCHW: (1, 3, 725, 1920)
# me:
batch size = 13
in_channels = 4
out_channels = 32
seq_len - 8
# -
class DNA_CNN(nn.Module):
def __init__(self,
seq_len,
num_filters=32,
kernel_size=3):
super().__init__()
self.seq_len = seq_len
self.conv_net = nn.Sequential(
nn.Conv1d(4, num_filters, kernel_size=kernel_size),
nn.ReLU(inplace=True),
nn.Flatten(),
nn.Linear(num_filters*(seq_len-kernel_size+1), 10),
nn.ReLU(inplace=True),
nn.Linear(10, 1),
)
def forward(self, xb):
# reshape view to batch_ssize x 4channel x seq_len
# permute to put channel in correct order
xb = xb.view(-1,self.seq_len,4).permute(0,2,1)
#print(xb.shape)
out = self.conv_net(xb)
#print("CNN out shape:",out.shape)
return out
# ^^ https://github.com/gmalivenko/onnx2keras/issues/120
#
# Solution suggests either editing a file in onnx2keras/convolution_layers.py or that it's just a probelm with using conv1D
for xb, yb in mer8motif_train_dl:
print(xb.shape)
break
xbv = xb.view(-1,8,4).permute(0,2,1)
xbv.shape
mer8motif_model_cnn
# # Try onnx export
# PERPETUAL DEATH
seqs = ["AAAAAAAA","TTTTTTTT","CCCCCCCC","GGGGGGGG","GGGTATGG","AAGCGAAA"]
ohe_seqs = [u.one_hot_encode(x) for x in seqs]
dummy_input = torch.from_numpy(ohe_seqs[0].reshape(1, -1)).float()
dummy_output = mer8motif_model_lin_d(dummy_input)
print(dummy_output)
torch.onnx.export(mer8motif_model_lin_d,
dummy_input,
'model_simple.onnx',
input_names=['test_input'],
output_names=['test_output'])
import onnx
import onnx2keras
from onnx2keras import onnx_to_keras
onnx_model = onnx.load('model_simple.onnx')
k_model2 = onnx_to_keras(onnx_model, ['test_input'])
mer8motif_model_lin_d(torch.tensor(ohe_seq).float())
for seq in seqs:
#ohe_seq = np.array(u.one_hot_encode(seq))
ohe_seq1 = torch.Tensor(u.one_hot_encode(seq))
ohe_seq2 = np.array(torch.from_numpy(u.one_hot_encode(seq).reshape(1, -1)).float())
res = mer8motif_model_lin_d(ohe_seq1)
res2 = k_model2.predict(ohe_seq2)
print(f"{seq}:{res}, {res2}")
k_model2.save('practice.h5')
k_model2_reload = keras.models.load_model("practice.h5")
for seq in seqs:
#ohe_seq = np.array(u.one_hot_encode(seq))
ohe_seq1 = torch.Tensor(u.one_hot_encode(seq))
ohe_seq2 = np.array(torch.from_numpy(u.one_hot_encode(seq).reshape(1, -1)).float())
res = mer8motif_model_lin_d(ohe_seq1)
res2 = k_model2.predict(ohe_seq2)
res3 = k_model2_reload.predict(ohe_seq2)
print(f"{seq}:{res}, {res2}, {res3}")
test = keras.engine.sequential.Sequential(k_model)
test
test.save('test.h5')
test.get_config()
for seq in seqs:
#ohe_seq = np.array(u.one_hot_encode(seq))
ohe_seq = np.array(torch.from_numpy(u.one_hot_encode(seq).reshape(1, -1)).float())
res = k_model.predict(ohe_seq)
res2 = test.predict(ohe_seq)
print(f"{seq}:{res}, {res2}")
# # inspect
# # 8mer model cnn
conv_layers, model_weights, bias_weights = u.get_conv_layers_from_model(mer8motif_model_cnn)
u.view_filters(model_weights)
#train_seqs = list(mer8motif_train_df['seq'])# still using mer6 seqs is ok cuz just getting activations!
seqs8_5k = u.downselect_list(seqs8,5000)
filter_activations = u.get_filter_activations(seqs8_5k, conv_layers[0])
u.view_filters_and_logos(model_weights,filter_activations)
# # Try LSTMs
# +
# mer8motif_train_dl,\
# mer8motif_test_dl, \
# mer8motif_train_df, \
# mer8motif_test_df = u.build_dataloaders_single(mer8_motif,batch_size=11)
# change to batch size 11 so I can figure out the dimension errors
# +
# class DNA_LSTM(nn.Module):
# def __init__(self,
# seq_len,
# hidden_dim=10,
# layer1_dim=12,
# #layer2_dim=12
# ):
# super().__init__()
# self.seq_len = seq_len
# self.hidden_dim = hidden_dim
# self.hidden_init_values = None
# self.hidden = self.init_hidden() # tuple of hidden state and cell state
# self.rnn = nn.LSTM(4, hidden_dim,batch_first=True)
# # self.fc = nn.Sequential(
# # nn.ReLU(inplace=True),
# # #nn.Flatten(),
# # nn.Linear(hidden_dim, layer1_dim),
# # nn.ReLU(inplace=True),
# # nn.Linear(layer1_dim, 1),
# # )
# self.fc = nn.Linear(hidden_dim, 1)
# # self.rnn = nn.Sequential(
# # nn.LSTM(4, hidden_dim),
# # nn.ReLU(inplace=True),
# # nn.Flatten(),
# # nn.Linear(hidden_dim, layer1_dim),
# # nn.ReLU(inplace=True),
# # nn.Linear(layer1_dim, 1),
# # )
# def init_hidden(self):
# if self.hidden_init_values == None:
# self.hidden_init_values = (autograd.Variable(torch.randn(1, 1, self.hidden_dim)),
# autograd.Variable(torch.randn(1, 1, self.hidden_dim)))
# return self.hidden_init_values
# #hidden_state = torch.randn(n_layers, batch_size, hidden_dim)
# def forward(self, xb):
# # WRONG? reshape view to batch_ssize x 4channel x seq_len
# # for LSTM? reshape view to seq_len x batch_ssize x 4channel
# # permute to put channel in correct order
# print("original xb.shape:", xb.shape)
# print(xb)
# xb = xb.view(-1,self.seq_len,4)#.permute(1,0,2)
# print("re-viewed xb.shape:", xb.shape) # >> 11, 8, 4
# print(xb)
# #print(xb[0])
# #print("xb shape", xb.shape)
# # ** Init hidden temp **
# batch_size = xb.shape[0]
# print("batch_size:",batch_size)
# (h, c) = (torch.zeros(1, batch_size, self.hidden_dim), torch.zeros(1, batch_size, self.hidden_dim))
# # *******
# lstm_out, self.hidden = self.rnn(xb, (h,c)) # should this get H and C?
# print("lstm_out",lstm_out)
# print("^^^^^lstm_out shape:",lstm_out.shape) # >> 11, 8, 10
# print("lstm_out[-1] shape:",lstm_out[-1].shape)
# print("lstm_out[-1][-1] shape:",lstm_out[-1][-1].shape)
# print("hidden len:",len(self.hidden))
# print("hidden[0] shape:", self.hidden[0].shape)
# print("hidden[0][-1] shape:", self.hidden[0][-1].shape)
# print("hidden[0][-1][-1] shape:", self.hidden[0][-1][-1].shape)
# print("*****")
# A = lstm_out[-1][-1]
# B = self.hidden[0][-1][-1]
# print("lstm_out[-1][-1]:",A)
# print("self.hidden[0][-1][-1]",B)
# print("==?", A==B)
# print("*****")
# #linear_in = lstm_out.contiguous().view(-1, self.hidden_dim)
# #print("Linear In shape:", linear_in.shape)
# #print("self.hidden",self.hidden)
# #print(self.hidden[1].shape)
# #out = self.fc(linear_in)
# out = self.fc(lstm_out)
# #print("out",out)
# print("LSTM->FC out shape:",out.shape)
# return out
# -
class DNA_LSTM(nn.Module):
def __init__(self,seq_len,hidden_dim=10):
super().__init__()
self.seq_len = seq_len
self.hidden_dim = hidden_dim
self.hidden = None # when initialized, should be tuple of (hidden state, cell state)
self.rnn = nn.LSTM(4, hidden_dim,batch_first=True)
self.fc = nn.Linear(hidden_dim, 1)
def init_hidden(self,batch_size):
# initialize hidden and cell states with 0s
self.hidden = (torch.zeros(1, batch_size, self.hidden_dim),
torch.zeros(1, batch_size, self.hidden_dim))
return self.hidden
#hidden_state = torch.randn(n_layers, batch_size, hidden_dim)
def forward(self, xb,verbose=False):
if verbose:
print("original xb.shape:", xb.shape)
print(xb) # 11 x 32
# make the one-hot nucleotide vectors group together
xb = xb.view(-1,self.seq_len,4)
if verbose:
print("re-viewed xb.shape:", xb.shape) # >> 11 x 8 x 4
print(xb)
# ** Init hidden/cell states?? **
batch_size = xb.shape[0]
if verbose:
print("batch_size:",batch_size)
(h,c) = self.init_hidden(batch_size)
# *******
lstm_out, self.hidden = self.rnn(xb, (h,c)) # should this get H and C?
if verbose:
#print("lstm_out",lstm_out)
print("lstm_out shape:",lstm_out.shape) # >> 11, 8, 10
print("lstm_out[-1] shape:",lstm_out[-1].shape) # >> 8 x 10
print("lstm_out[-1][-1] shape:",lstm_out[-1][-1].shape) # 10
print("hidden len:",len(self.hidden)) # 2
print("hidden[0] shape:", self.hidden[0].shape) # >> 1 x 11 x 10
print("hidden[0][-1] shape:", self.hidden[0][-1].shape) # >> 11 X 10
print("hidden[0][-1][-1] shape:", self.hidden[0][-1][-1].shape) # >> 10
print("*****")
# These vectors should be the same, right?
A = lstm_out[-1][-1]
B = self.hidden[0][-1][-1]
print("lstm_out[-1][-1]:",A)
print("self.hidden[0][-1][-1]",B)
print("==?", A==B)
print("*****")
# attempt to get the last layer from each last position of
# all seqs in the batch? IS this the right thing to get?
last_layer = lstm_out[:,-1,:] # This is 11X10... and it makes FC out 11X1, which is what I want?
#last_layer = lstm_out[-1][-1].unsqueeze(0) # this was [10X1]? led to FC outoput being [1]?
if verbose:
print("last layer:", last_layer.shape)
out = self.fc(last_layer)
if verbose:
print("LSTM->FC out shape:",out.shape)
return out
# +
seq_len = len(mer8motif_train_df['seq'].values[0])
mer8motif_model_lstm = DNA_LSTM(seq_len)
mer8motif_model_lstm
# -
mer8motif_train_losses_lstm,\
mer8motif_test_losses_lstm = u.run_model(
mer8motif_train_dl,
mer8motif_test_dl,
mer8motif_model_lstm
)
quick_test8(mer8motif_model_lstm, oracle_8mer_motif)
mer8motif_lstm_data_label = list(zip([mer8motif_train_losses_lstm,mer8motif_test_losses_lstm], ['LSTM Train Loss','LSTM Test Loss']))
u.quick_loss_plot(mer8motif_lstm_data_label)
u.quick_loss_plot(
mer8motif_lin_s_data_label + \
mer8motif_lin_d_data_label + \
mer8motif_cnn_data_label + \
mer8motif_lstm_data_label
)
# # Try CNN + LSTM
class DNA_CNNLSTM(nn.Module):
def __init__(self,
seq_len,
hidden_dim=10,
num_filters=32,
kernel_size=3):
super().__init__()
self.seq_len = seq_len
self.conv_net = nn.Sequential(
nn.Conv1d(4, num_filters, kernel_size=kernel_size),
nn.ReLU(inplace=True),
)
self.hidden_dim = hidden_dim
self.hidden = None # when initialized, should be tuple of (hidden state, cell state)
self.rnn = nn.LSTM(num_filters, hidden_dim,batch_first=True)
self.fc = nn.Linear(hidden_dim, 1)
def init_hidden(self,batch_size):
# initialize hidden and cell states with 0s
self.hidden = (torch.zeros(1, batch_size, self.hidden_dim),
torch.zeros(1, batch_size, self.hidden_dim))
return self.hidden
#hidden_state = torch.randn(n_layers, batch_size, hidden_dim)
def forward(self, xb, verbose=False):
# reshape view to batch_ssize x 4channel x seq_len
# permute to put channel in correct order
xb = xb.view(-1,self.seq_len,4).permute(0,2,1)
if verbose:
print("xb reviewed shape:",xb.shape)
cnn_out = self.conv_net(xb)
if verbose:
print("CNN out shape:",cnn_out.shape)
cnn_out_perm = cnn_out.permute(0,2,1)
if verbose:
print("CNN permute out shape:",cnn_out_perm.shape)
batch_size = xb.shape[0]
if verbose:
print("batch_size:",batch_size)
(h,c) = self.init_hidden(batch_size)
lstm_out, self.hidden = self.rnn(cnn_out_perm, (h,c)) # should this get H and C?
last_layer = lstm_out[:,-1,:] # This is 11X10... and it makes FC out 11X1, which is what I want?
if verbose:
print("last layer:", last_layer.shape)
out = self.fc(last_layer)
if verbose:
print("LSTM->FC out shape:",out.shape)
return out
# +
seq_len = len(mer8motif_train_df['seq'].values[0])
mer8motif_model_cnnlstm = DNA_CNNLSTM(seq_len)
mer8motif_model_cnnlstm
# -
mer8motif_train_losses_cnnlstm,\
mer8motif_test_losses_cnnlstm = u.run_model(
mer8motif_train_dl,
mer8motif_test_dl,
mer8motif_model_cnnlstm,
)
# +
quick_test8(mer8motif_model_cnnlstm, oracle_8mer_motif)
mer8motif_cnnlstm_data_label = list(zip([mer8motif_train_losses_cnnlstm,mer8motif_test_losses_cnnlstm], ['CNN-LSTM Train Loss','CNN-LSTM Test Loss']))
u.quick_loss_plot(mer8motif_cnnlstm_data_label)
# -
u.quick_loss_plot(
mer8motif_lin_s_data_label + \
mer8motif_lin_d_data_label + \
mer8motif_cnn_data_label + \
mer8motif_lstm_data_label + \
mer8motif_cnnlstm_data_label
)
# +
models = [
("LinearShallow_8mer",mer8motif_model_lin_s),
("LinearDeep_8mer",mer8motif_model_lin_d),
("CNN_8mer",mer8motif_model_cnn),
("LSTM_8mer",mer8motif_model_lstm),
("CNN+LSTM_8mer",mer8motif_model_cnnlstm),
]
seqs = mer8motif_test_df['seq'].values
task = "TATGCGmotif"
dfs = u.parity_pred(models, seqs, oracle_8mer_motif,task,alt=True)
# -
| full_synthetic_data_analysis_SAVE2KERAS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Display of low level motion data from manual drive.
# ## Main Goal: gather positioning data of ideal racing line.
from matplotlib import pyplot as plt
from matplotlib.pyplot import figure
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
import numpy as np
import pandas as pd
import csv
# 1GO of data (doesn't contain vision input)
clean_rl_data = pd.read_csv("racingline_data.txt")
clean_rl_data.head()
# ## Ideal Racing Line
# +
# %matplotlib notebook
# %matplotlib notebook
# plotting trajectory
fig = plt.figure(figsize = (12, 10))
ax = fig.add_subplot(111, projection='3d') # Axes3D(fig)
x = np.array(clean_rl_data['x_pose'])
y = np.array(clean_rl_data['z_pose'])
z = np.array(clean_rl_data['y_pose'])
d = np.array(clean_rl_data['cld'])
plt.title('racing line Spain GP track w/ Lap Distance (meters)\n', size = 12)
ax.set_xlabel('X axis') # ax.set_xlabel('X axis', color='b')
ax.set_ylabel('Y axis') # ax.set_ylabel('Y axis', color='r')
ax.set_zlabel('Z axis') # ax.set_zlabel('Z axis', color='g')
ax.set_label('Lap Dist.') # ax.set_dlabel('D axis', color='g')
img = ax.scatter(x, y, z, c=d, cmap=plt.cm.gist_heat)
# plt.colorbar(img)
fig.colorbar(img, ax=ax, orientation='horizontal', fraction=.05)
plt.show()
# -
# ## Evolution of X,Y,Z (aka X,Z,Y from PCars 2 perpesctive)
plt.plot(clean_rl_data['cld'], clean_rl_data['x_pose'], label='x_pose')
plt.plot(clean_rl_data['cld'], clean_rl_data['z_pose'], label='y_pose')
plt.plot(clean_rl_data['cld'], clean_rl_data['y_pose'], label='z_pose')
plt.legend()
_ = plt.ylim()
# ## Evolution of Euler Angle and Angular velocity
# reminder: provides Pitch, Yaw, Roll angle of the
# +
fig, (ax1, ax2) = plt.subplots(2, figsize=(14, 10))
fig.suptitle('Euler angle (top) vs \n Angular veloticy (bottow)\n over current lap dist..', size = 15)
# ax1.title('Euler angle', size = 15)
ax1.plot(clean_rl_data['cld'], clean_rl_data['pitch'], label='pitch')
ax1.plot(clean_rl_data['cld'], clean_rl_data['yaw'], label='yaw')
ax1.plot(clean_rl_data['cld'], clean_rl_data['roll'], label='roll')
ax1.legend()
_ = plt.ylim()
# ax2.title('Angular veloticy', size = 15)
ax2.plot(clean_rl_data['cld'], clean_rl_data['pitch_velo'], label='pitch_velo')
ax2.plot(clean_rl_data['cld'], clean_rl_data['yaw_velo'], label='yaw_velo')
ax2.plot(clean_rl_data['cld'], clean_rl_data['roll_velo'], label='roll_velo')
ax2.legend()
_ = plt.ylim()
# -
# ## Evolution of Local Velocity and Local Acceleration
# +
fig, (ax1, ax2) = plt.subplots(2, figsize=(14, 10))
fig.suptitle('Local velocity (top) vs \n Local acceleration (bottow)\n over current lap dist..', size = 15)
# ax1.title('Local velocity', size = 15)
ax1.plot(clean_rl_data['cld'], clean_rl_data['L_R__velo'], label='L_R__velo') # (+)Left (-)Right
ax1.plot(clean_rl_data['cld'], clean_rl_data['U_D_velo'], label='D_U_velo') # (+)Down (-)Up
ax1.plot(clean_rl_data['cld'], clean_rl_data['B_A_velo'], label='B_A_velo') # (+)Braking (-)Acceleration
ax1.legend()
_ = plt.ylim()
# ax2.title('Local acceleration', size = 15)
ax2.plot(clean_rl_data['cld'], clean_rl_data['L_R_acc'], label='L_R_acc') # (+)Left (-)Right
ax2.plot(clean_rl_data['cld'], clean_rl_data['U_D_acc'], label='D_U_acc') # (+)Down (-)Up
ax2.plot(clean_rl_data['cld'], clean_rl_data['B_A_acc'], label='B_A_acc') # (+)Braking (-)Acceleration
ax2.legend()
_ = plt.ylim()
# -
| display_of_racing_line_spainGP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle
fname = '/scratche/home/apoorv/Temporal_KGQA/data/wikidata_big/questions/train.pickle'
train = pickle.load(open(fname, 'rb'))
import random
train[0]
len(train)
random.shuffle(train)
l = len(train)
train_10 = train[:int(0.1*l)]
train_30 = train[:int(0.3*l)]
train_50 = train[:int(0.5*l)]
train_70 = train[:int(0.7*l)]
data = [train_10, train_30, train_50, train_70]
filenames = ['train_10', 'train_30', 'train_50', 'train_70']
prefix = '/scratche/home/apoorv/Temporal_KGQA/data/wikidata_big/questions/'
postfix = '.pickle'
for questions, fname in zip(data, filenames):
file = prefix + fname + postfix
pickle.dump(questions, open(file, 'wb'))
| tkbc/make_small_train_split.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "445dd548-b84b-4343-819f-b11e53b94530", "showTitle": false, "title": ""}
# d
# # Spark Data Sources
#
# This notebook shows how to use Spark Data Sources Interface API to read file formats:
# * Parquet
# * JSON
# * CSV
# * Avro
# * ORC
# * Image
# * Binary
#
# A full list of DataSource methods is available [here](https://docs.databricks.com/spark/latest/data-sources/index.html#id1)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "feae3eab-a04b-4aa2-9ed4-e4c32db53081", "showTitle": false, "title": ""}
# ## Define paths for the various data sources
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "9ba911fe-9337-4417-9d27-3dd1f3c206fd", "showTitle": false, "title": ""}
parquet_file = "/databricks-datasets/learning-spark-v2/flights/summary-data/parquet/2010-summary.parquet"
json_file = "/databricks-datasets/learning-spark-v2/flights/summary-data/json/*"
csv_file = "/databricks-datasets/learning-spark-v2/flights/summary-data/csv/*"
orc_file = "/databricks-datasets/learning-spark-v2/flights/summary-data/orc/*"
avro_file = "/databricks-datasets/learning-spark-v2/flights/summary-data/avro/*"
schema = "DEST_COUNTRY_NAME STRING, ORIGIN_COUNTRY_NAME STRING, count INT"
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "bfb593c2-e1be-47e9-821f-518ca9720a2c", "showTitle": false, "title": ""}
# ## Parquet Data Source
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "f1e1c644-f73a-4532-81a9-a5f6c4aaa27d", "showTitle": false, "title": ""}
df = (spark
.read
.format("parquet")
.option("path", parquet_file)
.load())
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "4e55855e-8b89-45f7-8487-f88bf4aa16f5", "showTitle": false, "title": ""}
# Another way to read this same data using a variation of this API
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "3e042e91-c6cc-446a-a8b5-cdd5cfd6e65d", "showTitle": false, "title": ""}
df2 = spark.read.parquet(parquet_file)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "8b7070e0-cf9c-4763-b58d-7c218594172f", "showTitle": false, "title": ""}
df.show(10, False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "721034d4-bf29-47e9-9b2e-ce0526170ffd", "showTitle": false, "title": ""}
# ## Use SQL
#
# This will create an _unmanaged_ temporary view
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "d0a685e8-9815-4f70-bc0c-1d404786c5d7", "showTitle": false, "title": ""}
# %sql
CREATE OR REPLACE TEMPORARY VIEW us_delay_flights_tbl
USING parquet
OPTIONS (
path "/databricks-datasets/definitive-guide/data/flight-data/parquet/2010-summary.parquet"
)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "8b231f47-7c57-4326-ac65-8810c63e88c2", "showTitle": false, "title": ""}
# Use SQL to query the table
#
# The outcome should be the same as one read into the DataFrame above
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "9e08d62f-a09b-4980-8f26-437629539d10", "showTitle": false, "title": ""}
spark.sql("SELECT * FROM us_delay_flights_tbl").show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "6a43f650-a5ef-449b-8c76-b40341ec4437", "showTitle": false, "title": ""}
# ## JSON Data Source
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "206dad1b-64a3-4b31-b77b-82d48b7e1ba2", "showTitle": false, "title": ""}
df = spark.read.format("json").option("path", json_file).load()
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "2db970aa-4ffd-48f5-a829-4f8328548d1f", "showTitle": false, "title": ""}
df.show(10, truncate=False)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "023c3fd5-028d-4571-879e-008db36c7e17", "showTitle": false, "title": ""}
df2 = spark.read.json(json_file)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "c1cdac5c-562c-43a1-8c6b-db3343cb21ad", "showTitle": false, "title": ""}
df2.show(10, False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "29ba3ea9-27fc-4fee-9787-c75592c9c7c0", "showTitle": false, "title": ""}
# ## Use SQL
#
# This will create an _unmanaged_ temporary view
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "2a87271d-08b7-408c-839b-35d678d14eff", "showTitle": false, "title": ""}
# %sql
CREATE OR REPLACE TEMPORARY VIEW us_delay_flights_tbl
USING json
OPTIONS (
path "/databricks-datasets/learning-spark-v2/flights/summary-data/json/*"
)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "aa0cd29f-bc94-4fdd-b5ef-b8dd3eb6632d", "showTitle": false, "title": ""}
# Use SQL to query the table
#
# The outcome should be the same as one read into the DataFrame above
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "4a8415dd-9569-4709-a88b-9a087645071b", "showTitle": false, "title": ""}
spark.sql("SELECT * FROM us_delay_flights_tbl").show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "2487b993-a8f4-4c0f-a24c-085394be3b5f", "showTitle": false, "title": ""}
# ## CSV Data Source
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "235e4586-cd30-4de4-a9ab-470803a40462", "showTitle": false, "title": ""}
df = (spark
.read
.format("csv")
.option("header", "true")
.schema(schema)
.option("mode", "FAILFAST") # exit if any errors
.option("nullValue", "") # replace any null data field with โโ
.option("path", csv_file)
.load())
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "62e4a622-e299-4ac6-b9e9-3a47d220a94b", "showTitle": false, "title": ""}
df.show(10, truncate = False)
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "9d4bc3b6-d464-492c-8426-a0e6b461f9d7", "showTitle": false, "title": ""}
(df.write.format("parquet")
.mode("overwrite")
.option("path", "/tmp/data/parquet/df_parquet")
.option("compression", "snappy")
.save())
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "133fcbf4-b052-4ea9-a5cb-9ee746560ead", "showTitle": false, "title": ""}
# %fs ls /tmp/data/parquet/df_parquet
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "df9c9a1e-660b-4d4d-b6f0-aa63c4596c05", "showTitle": false, "title": ""}
df2 = (spark
.read
.option("header", "true")
.option("mode", "FAILFAST") # exit if any errors
.option("nullValue", "")
.schema(schema)
.csv(csv_file))
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "a7f3c36d-57a9-4d65-bfaf-e71bd2c6329c", "showTitle": false, "title": ""}
df2.show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "e36c9f3c-11bb-4746-bd73-45ca866d49d1", "showTitle": false, "title": ""}
# ## Use SQL
#
# This will create an _unmanaged_ temporary view
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "f3608b0d-dc32-4c6a-84c1-d012cadd73c3", "showTitle": false, "title": ""}
# %sql
CREATE OR REPLACE TEMPORARY VIEW us_delay_flights_tbl
USING csv
OPTIONS (
path "/databricks-datasets/learning-spark-v2/flights/summary-data/csv/*",
header "true",
inferSchema "true",
mode "FAILFAST"
)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "95fb3584-0b22-4398-84fd-0a6de84dbf22", "showTitle": false, "title": ""}
# Use SQL to query the table
#
# The outcome should be the same as one read into the DataFrame above
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "6d615327-6606-4471-8a55-6ff4e917eef8", "showTitle": false, "title": ""}
spark.sql("SELECT * FROM us_delay_flights_tbl").show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "83fd78b2-2181-4e31-a7ba-eaf4f9e281a7", "showTitle": false, "title": ""}
# ## ORC Data Source
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "45328661-83d8-4dd4-9c38-7d30da4c72a7", "showTitle": false, "title": ""}
df = (spark.read
.format("orc")
.option("path", orc_file)
.load())
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "b1a41e8d-3920-4d53-a93d-e25bfa8d9485", "showTitle": false, "title": ""}
df.show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "7093b190-25ba-4095-94e1-f7b8bcb7008e", "showTitle": false, "title": ""}
# ## Use SQL
#
# This will create an _unmanaged_ temporary view
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "74498126-17cb-4cce-b769-8f1dae28f79c", "showTitle": false, "title": ""}
# %sql
CREATE OR REPLACE TEMPORARY VIEW us_delay_flights_tbl
USING orc
OPTIONS (
path "/databricks-datasets/learning-spark-v2/flights/summary-data/orc/*"
)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "8f395d8d-2196-4da5-86b9-1f89cbab39e0", "showTitle": false, "title": ""}
# Use SQL to query the table
#
# The outcome should be the same as one read into the DataFrame above
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "bc7a0452-8b8a-4001-aa55-d45213939abd", "showTitle": false, "title": ""}
spark.sql("SELECT * FROM us_delay_flights_tbl").show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "cdfdf544-c9cf-48ad-bb3c-c9dd3ac90387", "showTitle": false, "title": ""}
# ## Avro Data Source
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "a7bdafc0-9117-4947-bf02-38172ffce996", "showTitle": false, "title": ""}
df = (spark.read
.format("avro")
.option("path", avro_file)
.load())
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "e2263c27-6989-4f0a-bc6e-d8bc3c678239", "showTitle": false, "title": ""}
df.show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "8ece2fc2-b64c-4b6a-aa78-7cee06d560ea", "showTitle": false, "title": ""}
# ## Use SQL
#
# This will create an _unmanaged_ temporary view
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "8305769e-8ea6-4406-8927-91a7adebb057", "showTitle": false, "title": ""}
# %sql
CREATE OR REPLACE TEMPORARY VIEW us_delay_flights_tbl
USING avro
OPTIONS (
path "/databricks-datasets/learning-spark-v2/flights/summary-data/avro/*"
)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "0c9c29ed-646f-4fef-bcc0-fbd100d2e718", "showTitle": false, "title": ""}
# Use SQL to query the table
#
# The outcome should be the same as the one read into the DataFrame above
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "18a2779e-c177-46f1-b5f0-91677ea9ccad", "showTitle": false, "title": ""}
spark.sql("SELECT * FROM us_delay_flights_tbl").show(10, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "848dbcbd-abd3-445c-95eb-032af261ed76", "showTitle": false, "title": ""}
# ## Image
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "c38dd396-38e0-441e-8528-b3cd9d5f6152", "showTitle": false, "title": ""}
from pyspark.ml import image
image_dir = "/databricks-datasets/cctvVideos/train_images/"
images_df = spark.read.format("image").load(image_dir)
images_df.printSchema()
images_df.select("image.height", "image.width", "image.nChannels", "image.mode", "label").show(5, truncate=False)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "e46f914e-570b-42bf-8353-560a2e1012ca", "showTitle": false, "title": ""}
# ## Binary
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "5bb932d0-fbba-4692-82cc-5ba07ab5cca0", "showTitle": false, "title": ""}
path = "/databricks-datasets/learning-spark-v2/cctvVideos/train_images/"
binary_files_df = (spark
.read
.format("binaryFile")
.option("pathGlobFilter", "*.jpg")
.load(path))
binary_files_df.show(5)
# + [markdown] application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "dd844ac4-da62-476f-bd6f-edf005204309", "showTitle": false, "title": ""}
# To ignore any partitioning data discovery in a directory, you can set the `recursiveFileLookup` to `true`.
# + application/vnd.databricks.v1+cell={"inputWidgets": {}, "nuid": "5093c9ce-c258-4b36-8451-35880b128441", "showTitle": false, "title": ""}
binary_files_df = (spark
.read
.format("binaryFile")
.option("pathGlobFilter", "*.jpg")
.option("recursiveFileLookup", "true")
.load(path))
binary_files_df.show(5)
| BookExamples/Chapter04/4-2 Spark Data Sources.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/michael-sam/image_classification_CNN/blob/main/image_classification_fashion_mnist_CNN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="pvWttTRFJq89"
# # Import libraries
# + colab={"base_uri": "https://localhost:8080/"} id="uUqjitVXYxYC" outputId="987795a3-239f-4114-b180-8ae4c2bf0eb1"
import tensorflow as tf
import keras
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
print(keras.__version__)
print(np.__version__)
import random
import time
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
# + [markdown] id="NIw6rDeOJzXV"
# # Brief EDA
# + colab={"base_uri": "https://localhost:8080/"} id="uUUkKMlIYxcD" outputId="19284013-b0e3-428e-d3f2-9fac30eedea1"
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
print("Shape of x_train: {}".format(x_train.shape))
print("Shape of y_train: {}".format(y_train.shape))
print()
print("Shape of x_test: {}".format(x_test.shape))
print("Shape of y_test: {}".format(y_test.shape))
# + colab={"base_uri": "https://localhost:8080/", "height": 218} id="E_eKewi3YxeY" outputId="4e3791b8-93c0-4400-c8c7-033fd4f4aef9"
#Inspect data
labelNames = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
sample = 1902
each = x_train[sample]
plt.figure(figsize=(3,3))
plt.imshow(each)
plt.colorbar()
plt.show()
print("Image (#{}): Which is label number '{}', or label '{}''".format(sample,y_train[sample], labelNames[y_train[sample]]))
# + colab={"base_uri": "https://localhost:8080/", "height": 729} id="vJr0raSiY3B7" outputId="cb57d7df-7728-4e68-ddc4-c11a80a93d83"
#Sample images
ROW = 7
COLUMN = 7
plt.figure(figsize=(10, 10))
for i in range(ROW * COLUMN):
temp = random.randint(0, len(x_train)+1)
image = x_train[temp]
plt.subplot(ROW, COLUMN, i+1)
plt.imshow(image, cmap='gray')
plt.xticks([])
plt.yticks([])
plt.xlabel(labelNames[y_train[temp]])
plt.tight_layout()
plt.show()
# + [markdown] id="2_Bdh9dCJ4WF"
# # Initialize CNN model
# + id="6YosX-gaY4lr"
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization, Input, UpSampling2D, GlobalAveragePooling2D
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers, regularizers
from sklearn import metrics
from keras.optimizers import Adam, SGD
from keras.models import Model, load_model
# + colab={"base_uri": "https://localhost:8080/"} id="YcWx44DxY7St" outputId="e44b35ea-46c4-4997-ced5-51fecf49347f"
# reshape the input for CNN model
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], x_train.shape[2], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], x_test.shape[1], 1)
print(x_train.shape)
print(x_test.shape)
# + id="QGq4uXTJY8fr"
# Normalize the data
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255.0
x_test /= 255.0
# + id="86LXNsSXY92L"
# One hot encode the outcome
num_classes = 10
y_train_tf = keras.utils.np_utils.to_categorical(y_train, num_classes)
y_test_tf = keras.utils.np_utils.to_categorical(y_test, num_classes)
# + colab={"base_uri": "https://localhost:8080/"} id="JlpEbHLmY_eT" outputId="8f74e5c1-f5cd-4ea0-bf0d-1b8db3d80b57"
y_train_tf.shape
# + colab={"base_uri": "https://localhost:8080/"} id="le8ACdj_ZAwr" outputId="ec700e64-df89-46aa-f8c3-163a8eaa03ac"
input_shape = (28, 28, 1)
#Build network
model = Sequential()
model.add(Conv2D(96, kernel_size=(3, 3), strides=(1, 1), padding='valid',
activation='relu', kernel_regularizer=keras.regularizers.l2(0.001), input_shape=input_shape))
model.add(Conv2D(96, kernel_size=(3, 3), strides=(1, 1), padding='valid',activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid'))
model.add(Dropout(0.45))
model.add(Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='valid',activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)))
model.add(Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='valid',activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid'))
model.add(Dropout(0.45))
model.add(Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)))
model.add(Conv2D(256, kernel_size=(3, 3), strides=(1, 1), padding='valid',activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid'))
model.add(Dropout(0.45))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(y_train_tf.shape[1], activation='softmax'))
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="3BJRVwTnMCSC" outputId="4e68d8ac-2d10-431f-abca-6b92fef79db3"
from keras.utils.vis_utils import plot_model
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
# + colab={"base_uri": "https://localhost:8080/"} id="WH_1Zf8DZEvN" outputId="643eab43-96a5-41ba-b145-95b9a9a0d8c6"
checkpointer = ModelCheckpoint(filepath="save/cnn_1.hdf5", verbose=0, save_best_only=True) # save best model
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto')
model.compile(loss="categorical_crossentropy", optimizer=Adam(learning_rate=0.001), metrics=['accuracy'])
start = time.time()
history = model.fit(x_train,y_train_tf, validation_split=0.2, callbacks=[monitor,checkpointer],
verbose=1,epochs=25, batch_size=64, shuffle=True)
end = time.time()
cnn_time = end-start
print("Total training time is {:0.2f} minute".format(cnn_time/60.0))
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="HJGrC_rMZExP" outputId="03cdd17b-120e-4136-cdc3-a401dc9df1ee"
# plot summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# + id="gFAvpqBKpOGc"
model.load_weights("save/cnn_1.hdf5")
# + [markdown] id="-RA5oxM2KEhk"
# # Confusion matrix
# + id="dQoBmq4fpOI9"
# cm is the confusion matrix, names are the names of the classes.
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names, rotation=90)
plt.yticks(tick_marks, names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# + colab={"base_uri": "https://localhost:8080/", "height": 855} id="1mc7QN01pVYl" outputId="58305d53-355a-47aa-912c-be7697e9a2d2"
cnn_pred_mnist = model.predict(x_test)
cnn_pred_mnist = np.argmax(cnn_pred_mnist,axis=1)
y_true = np.argmax(y_test_tf,axis=1)
cnn_f1_mnist = metrics.f1_score(y_true, cnn_pred_mnist, average= "weighted")
cnn_accuracy_mnist = metrics.accuracy_score(y_true, cnn_pred_mnist)
cnn_cm_mnist = metrics.confusion_matrix(y_true, cnn_pred_mnist)
print("-----------------Convolutional Neural Network Report---------------")
print("Weighted F1 score: {}".format(cnn_f1_mnist))
print("Accuracy score: {}".format(cnn_accuracy_mnist))
print("Confusion matrix: \n", cnn_cm_mnist)
print('Plotting confusion matrix')
plt.figure()
plot_confusion_matrix(cnn_cm_mnist, labelNames)
plt.show()
print(metrics.classification_report(y_true, cnn_pred_mnist, digits=4))
# + [markdown] id="Lbrgq8bfKMDH"
# # Micro, Macro and Weighted F1-score
# + colab={"base_uri": "https://localhost:8080/"} id="R-bcoGT0pVa9" outputId="d3ef614a-6b60-4af9-c683-8ca25c66aa03"
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Micro Precision: {:.6f}'.format(precision_score(y_true, cnn_pred_mnist, average='micro')))
print('Micro Recall: {:.6f}'.format(recall_score(y_true, cnn_pred_mnist, average='micro')))
print('Micro F1-score: {:.6f}\n'.format(f1_score(y_true, cnn_pred_mnist, average='micro')))
print('Macro Precision: {:.6f}'.format(precision_score(y_true, cnn_pred_mnist, average='macro')))
print('Macro Recall: {:.6f}'.format(recall_score(y_true, cnn_pred_mnist, average='macro')))
print('Macro F1-score: {:.6f}\n'.format(f1_score(y_true, cnn_pred_mnist, average='macro')))
print('Weighted Precision: {:.6f}'.format(precision_score(y_true, cnn_pred_mnist, average='weighted')))
print('Weighted Recall: {:.6f}'.format(recall_score(y_true, cnn_pred_mnist, average='weighted')))
print('Weighted F1-score: {:.6f}'.format(f1_score(y_true, cnn_pred_mnist, average='weighted')))
# + [markdown] id="SBtqtxw8JGyh"
# **Which aggregation method to use?**
#
#
# > For a multi-class problem, we should refer/use weighted Precision, Recall and F1-score to better account for classes that may be unequal in our dataset.
#
# > Macro F1-scores give equal weight to precision and recall and might be misleading in such cases. However for this particular assignment, because each class has exactly 1000 items, macro-F1 is the same as weighted-F1.
#
# > Micro-F1 is the least suitable for multi-class problem as it does not consider each class individually and calculates the metrics globally. Mathematically, micro-F1 = micro-precision = micro-recall = accuracy
#
# **Further explanation below**
#
# **1. Macro**
#
# > Macro-averaged F1-score, or Macro-F1 calculates metrics for each class individually and then takes unweighted mean of the measures (i.e. a simple arithmetic mean of our per-class F1-scores). Macro-averaged precision and the macro-averaged recall are computed in a similar way.
#
# **2. Weighted**
#
# > When averaging the macro-F1, we gave equal weights to each class. In weighted-average F1-score, or weighted-F1, we weight the F1-score of each class by the number of samples from that class. We do the same for weighted precision and weighted recall.
#
# **3. Micro**
#
# > The last variant is the micro-averaged F1-score, or the micro-F1. To calculate the micro-F1, we first compute micro-averaged precision and micro-averaged recall over all the samples, and then combine the two. We โmicro-averageโ by considering all the samples together.
#
# > Precision is the proportion of True Positives out of the Predicted Positives (TP/(TP+FP)). We consider all the correctly predicted samples to be True Positives. As for False Positives, since we are looking at all the classes together, each prediction error is a False Positive for the class that was predicted. The total number of False Positives is thus the total number of prediction errors.
#
# > As for recall, which is the proportion of True Positives out of the actual Positives (TP/(TP+FN)). The TP is as before. As for number of False Negatives, each prediction error (X is misclassified as Y) is a False Positive for Y, and a False Negative for X. Thus, the total number of False Negatives is again the total number of prediction errors, and so recall is the same as precision.
#
# > Since precision=recall in the micro-averaging case, they are also equal to their harmonic mean. Moreover, this is also the classifierโs overall accuracy: the proportion of correctly classified samples out of all the samples.
#
# > ```micro-F1 = micro-precision = micro-recall = accuracy```
# + [markdown] id="N33SruIiKUtZ"
# # Creating a CNN model with 2 branches
# + id="8z7VRYLXZEzm"
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# + colab={"base_uri": "https://localhost:8080/"} id="I4yib8MOrSst" outputId="c6094308-9664-4d33-e77f-4cd058614448"
input_shape = (28, 28, 1)
inputs = keras.Input(shape=input_shape, name="img")
x = layers.Conv2D(96, kernel_size=(3, 3), activation="relu")(inputs)
x = layers.MaxPooling2D(pool_size=(2, 2))(x)
block_1_output = layers.Flatten()(x)
x = layers.Flatten()(inputs)
block_2_output = layers.Dense(512, activation='relu')(x)
x = layers.concatenate([block_1_output, block_2_output])
x = layers.Dense(512, activation="relu")(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(512, activation="relu")(x)
x = layers.Dropout(0.5)(x)
# outputs = layers.Dense(10)(x)
outputs = layers.Dense(y_train_tf.shape[1], activation='softmax')(x)
model = keras.Model(inputs, outputs, name="branched_model")
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="3OMxoe3UZh3z" outputId="8cac9cf0-dbea-49af-9904-4f09a577a3bd"
keras.utils.plot_model(model, "branched_model.png", show_shapes=True)
# + colab={"base_uri": "https://localhost:8080/"} id="Odq_4F7urcMN" outputId="36718687-2145-4513-cd8b-5b307e4bd234"
checkpointer = ModelCheckpoint(filepath="save/branched_model.hdf5", verbose=0, save_best_only=True) # save best model
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto')
model.compile(loss="categorical_crossentropy", optimizer=Adam(learning_rate=0.001), metrics=['accuracy'])
start = time.time()
history = model.fit(x_train,y_train_tf, validation_split=0.2, callbacks=[monitor,checkpointer],
verbose=1,epochs=25, batch_size=64, shuffle=True)
end = time.time()
branched_model_time = end-start
print("Total training time is {:0.2f} minute".format(branched_model_time/60.0))
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="5DfKWbTv_I9y" outputId="9cc46489-1549-4b5a-eb80-06f8c95602f5"
# plot summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# + id="SdSwlH4X_NmE"
model.load_weights("save/branched_model.hdf5")
# + colab={"base_uri": "https://localhost:8080/", "height": 855} id="oOk-VIJA_Nof" outputId="c8e3b9ba-2a2a-490c-9a42-66351112cdb1"
cnn_pred_mnist = model.predict(x_test)
cnn_pred_mnist = np.argmax(cnn_pred_mnist,axis=1)
y_true = np.argmax(y_test_tf,axis=1)
cnn_f1_mnist = metrics.f1_score(y_true, cnn_pred_mnist, average= "weighted")
cnn_accuracy_mnist = metrics.accuracy_score(y_true, cnn_pred_mnist)
cnn_cm_mnist = metrics.confusion_matrix(y_true, cnn_pred_mnist)
print("-----------------Convolutional Neural Network Report---------------")
print("Weighted F1 score: {}".format(cnn_f1_mnist))
print("Accuracy score: {}".format(cnn_accuracy_mnist))
print("Confusion matrix: \n", cnn_cm_mnist)
print('Plotting confusion matrix')
plt.figure()
plot_confusion_matrix(cnn_cm_mnist, labelNames)
plt.show()
print(metrics.classification_report(y_true, cnn_pred_mnist, digits=4))
# + [markdown] id="GWNYj4kOG38c"
# ### References:
#
# <NAME>, [Intuitive CNN Creation for Fashion Image Multi-class Classification](https://towardsdatascience.com/intuitively-create-cnn-for-fashion-image-multi-class-classification-6e31421d5227)
#
# <NAME>, [Image Classification with Fashion-MNIST and CIFAR-10](http://athena.ecs.csus.edu/~hoangkh/Image%20Classification%20with%20Fashion-MNIST%20and%20CIFAR-10.html)
#
# <NAME>, [Confusion Matrix for Your Multi-Class Machine Learning Model](https://towardsdatascience.com/confusion-matrix-for-your-multi-class-machine-learning-model-ff9aa3bf7826)
#
# <NAME>, [Multi-Class Metrics Made Simple, Part II: the F1-score](https://towardsdatascience.com/multi-class-metrics-made-simple-part-ii-the-f1-score-ebe8b2c2ca1)
#
# + id="4vOb9tJPIiwr"
| image_classification_fashion_mnist_CNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
piloto = pd.read_csv ('dados/Dados4.csv',sep=';',index_col='Amostra',decimal= ',',engine = 'python',skipfooter = 4)
producao = pd.read_csv ('dados/Dados5.csv',sep=';',index_col='Amostra',decimal= ',')
# # Fase piloto
# +
# Calcula a mรฉdia das mรฉdias das amostras
media = piloto.mean(axis=1).mean(axis=0)
# Calcula a mรฉdia dos desvios padrรตes das amostras
desvio = piloto.std(axis=1).mean(axis=0)
# Calcula o tamanho da amostra
tamanho_piloto = piloto.shape[0]
# Calcula o desvio padrรฃo amostral dividindo a mรฉdia dos desvios-padrรตes das amostras pela raiz do tamanho das amostras
desvio_amostral = desvio / np.sqrt(tamanho_piloto)
# Calcula os limites superior e inferior de controle
LIC = media - 3 * desvio_amostral
LSC = media + 3 * desvio_amostral
# Definiรงรฃo do tamanho da figura, marcaรงรฃo do eixo 'x' e seu tรญtulo
grafico_1 = plt.figure(figsize = (10,6))
plt.xticks(np.arange(1, tamanho_piloto, 2))
plt.title('Grรกfico de controle de mรฉdias de fase piloto')
# Plota os dados da fase piloto e linhas de controle
piloto.mean(axis=1).plot(marker="o",label ='Mรฉdia de observaรงรตes')
plt.axhline(media, color = 'black', linestyle = 'dashed', linewidth = 2,label= 'LM')
plt.axhline(LSC, color = 'r', linestyle = 'dashed', linewidth = 2,label = 'LSC')
plt.axhline(LIC, color = 'r', linestyle = 'dashed', linewidth = 2,label = 'LIC')
# Cria a legenda e coloca-a do lado exterior da figura
plt.legend(loc = 'center left', bbox_to_anchor = (1, 0.5));
# +
# Cria lista com valores fora dos limites de controle + 3ฯ
fora_3 = piloto.index[(piloto.mean(axis=1) >= LIC) & (piloto.mean(axis=1) <= LSC) == False].tolist()
# Verifica se hรก pontos fora dos limites de controle e os elimina
if len(fora_3) > 0:
piloto.drop(fora_3,inplace = True)
# +
# Reseta o index e adiciona 1 a cada para que nรฃo comece de 0
piloto.reset_index(drop=True, inplace=True)
piloto.index += 1
# Calcula a mรฉdia das mรฉdias das amostras
media = piloto.mean(axis=1).mean(axis=0)
# Calcula a mรฉdia dos desvios padrรตes das amostras
desvio = piloto.std(axis=1).mean(axis=0)
# Calcula o tamanho da amostra
tamanho_piloto = piloto.shape[0]
# Calcula o desvio padrรฃo amostral dividindo a mรฉdia dos desvios-padrรตes das amostras pela raiz do tamanho das amostras
desvio_amostral = desvio / np.sqrt(tamanho_piloto)
# Calcula os limites superior e inferior de controle
LIC = media - 3 * desvio_amostral
LSC = media + 3 * desvio_amostral
LIC2 = media - 2 * desvio_amostral
LSC2 = media + 2 * desvio_amostral
# Definiรงรฃo do tamanho da figura, marcaรงรฃo do eixo 'x' e seu tรญtulo
grafico_2 = plt.figure(figsize = (10,6))
plt.xticks(np.arange(1, tamanho_piloto, 2))
plt.title('Grรกfico de controle de mรฉdias de fase piloto')
plt.xlabel('Amostra')
# Plota os dados da fase piloto e linhas de controle
piloto.mean(axis=1).plot(marker="o",label ='Mรฉdia de observaรงรตes')
plt.axhline(media, color = 'black', linestyle = 'dashed', linewidth = 2,label= 'LM')
plt.axhline(LSC, color = 'r', linestyle = 'dashed', linewidth = 2,label = 'LSC')
plt.axhline(LIC, color = 'r', linestyle = 'dashed', linewidth = 2,label = 'LIC')
# Cria a legenda e coloca-a do lado exterior da figura
plt.legend(loc = 'center left', bbox_to_anchor = (1, 0.5));
# -
# # Fase de produรงรฃo
# +
# Definiรงรฃo do tamanho da figura, marcaรงรฃo do eixo 'x' e seu tรญtulo
grafico_3 = plt.figure(figsize = (10,6))
plt.xticks(np.arange(1,40,2))
plt.title('Grรกfico de controle de mรฉdias de fase de produรงรฃo')
# Plota os dados da fase piloto e linhas de controle
ax = producao.mean(axis=1).plot(marker="o",label ='Mรฉdia de observaรงรตes')
plt.axhline(media, color = 'black', linestyle = 'dashed', linewidth = 2,label= 'LM')
plt.axhline(LSC, color = 'r', linestyle = 'dashed', linewidth = 2,label = 'LSC')
plt.axhline(LIC, color = 'r', linestyle = 'dashed', linewidth = 2,label = 'LIC')
plt.axhline(LSC2, color = 'orange', linestyle = 'dashed', linewidth = 2,label = 'LM + 2ฯ')
plt.axhline(LIC2, color = 'orange', linestyle = 'dashed', linewidth = 2,label = 'LM - 2ฯ')
# Cria a legenda e coloca-a do lado exterior da figura
leg = plt.legend(loc = 'center left', bbox_to_anchor = (1, 0.5));
# +
# Cria lista com valores fora dos limites de controle + 3ฯ
fora_3_producao = producao.index[(producao.mean(axis=1) >= LIC) & (producao.mean(axis=1) <= LSC) == False].tolist()
# Cria lista com valores fora dos limites de controle + 2ฯ
fora_2_producao = producao.index[(producao.mean(axis=1) >= LIC2) & (producao.mean(axis=1) <= LSC2) == False].tolist()
# +
# Cria lista com valores onde hรก 2 mรฉdias amostrais consecutivas fora do intervalo
# entre mรฉdia amostral ยฑ 2 desvios-padrรฃo amostral.
conseq = []
for i in range(len(fora_2_producao)-1):
if fora_2_producao[i]+1 == fora_2_producao[i+1]:
conseq.append(fora_2_producao[i])
conseq.append(fora_2_producao[i+1])
conseq_2 = list(dict.fromkeys(conseq))
# +
# Cria lista com valores onde hรก hรก 3 mรฉdias amostrais consecutivas em que, pelo menos,
# 2 valores estรฃo fora do intervalo entre mรฉdia amostral ยฑ 2 desvios-padrรฃo amostral.
conseq_3 = []
for i in range(2,40):
if i in fora_2_producao and (i-1 in fora_2_producao or i+1 in fora_2_producao):
list = []
list.append(i-1)
list.append(i)
list.append(i+1)
conseq_3.append(list)
elif (i not in fora_2_producao) and (i-1 in fora_2_producao) and (i+1 in fora_2_producao):
list = []
list.append(i-1)
list.append(i)
list.append(i+1)
conseq_3.append(list)
# -
# Cria arquivo pdf com nome 'Grรกficos.pdf' e salva os 3 grรกficos em cada pรกgina.
with PdfPages('graficos/Atividade_4_Graficos.pdf') as pdf:
pdf.savefig(grafico_1, bbox_inches='tight')
pdf.savefig(grafico_2, bbox_inches='tight')
pdf.savefig(grafico_3, bbox_inches='tight')
# Cria arquivo txt com nome 'Informaรงรตes.txt' e salva os valores LM, LSC, LIC e os
# resultados das verificaรงรตesde valores fora dos intervalos
with open('informacoes/Atividade_4_Informacoes.txt', 'w') as text_file:
text_file.write(' LM = {:.2f}\n'.format(media))
text_file.write(' LIC = {:.2f}\n'.format(LIC))
text_file.write(' LSC = {:.2f}\n'.format(LSC))
text_file.write(' As amostras: {} apresentam mรฉdias amostrais fora do intervalo entre mรฉdia amostral ยฑ 3 desvios-padrรฃo amostral;\n'.format(fora_3_producao))
text_file.write(' As amostras: {} apresentam 2 mรฉdias amostrais consecutivas fora do intervalo entre mรฉdia amostral ยฑ 2 desvios-padrรฃo amostral;\n'.format(conseq_2))
text_file.write(' As amostras: {} apresentam pelo menos 2 valores que estรฃo fora do intervalo entre mรฉdia amostral ยฑ 2 desvios-padrรฃo amostral;\n'.format(conseq_3))
| Joao_Gabriel_Atividade_4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Using a single Table vs EArray + Table
# The PyTables community keep asking what can be considered a FAQ. Namely, should I use a single Table for storing my data, or should I split it in a Table and an Array?
#
# Although there is not a totally general answer, the study below address this for the common case where one has 'raw data' and other data that can be considered 'meta'. See for example: https://groups.google.com/forum/#!topic/pytables-users/vBEiaRzp3gI
import numpy as np
import tables
tables.print_versions()
LEN_PMT = int(1.2e6)
NPMTS = 12
NEVENTS = 10
# !rm PMT*.h5
# +
def gaussian(x, mu, sig):
return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))
x = np.linspace(0, 1, 1e7)
rd = (gaussian(x, 1, 1.) * 1e6).astype(np.int32)
def raw_data(length):
# Return the actual data that you think it represents PM waveforms better
#return np.arange(length, dtype=np.int32)
return rd[:length]
# -
# ## Using Tables to store everything
class PMTRD(tables.IsDescription):
# event_id = tables.Int32Col(pos=1, indexed=True)
event_id = tables.Int32Col(pos=1)
npmt = tables.Int8Col(pos=2)
pmtrd = tables.Int32Col(shape=LEN_PMT, pos=3)
def one_table(filename, filters):
with tables.open_file("{}-{}-{}.h5".format(filename, filters.complib, filters.complevel), "w", filters=filters) as h5t:
pmt = h5t.create_table(h5t.root, "pmt", PMTRD, expectedrows=NEVENTS*NPMTS)
pmtr = pmt.row
for i in range(NEVENTS):
for j in range(NPMTS):
pmtr['event_id'] = i
pmtr['npmt'] = j
pmtr['pmtrd'] = raw_data(LEN_PMT)
pmtr.append()
# Using no compression
# %time one_table("PMTs", tables.Filters(complib="zlib", complevel=0))
# Using Zlib (level 5) compression
# %time one_table("PMTs", tables.Filters(complib="zlib", complevel=5))
# Using Blosc (level 9) compression
# %time one_table("PMTs", tables.Filters(complib="blosc:lz4", complevel=9))
# ls -sh *.h5
# So, using no compression leads to best speed, whereas Zlib can compress data by ~32x. Zlib is ~3x slower than using no compression though. On its hand, the Blosc compressor is faster but it can barely compress the dataset.
# ## Using EArrays for storing raw data and Table for other metadata
def rawdata_earray(filename, filters):
with tables.open_file("{}-{}.h5".format(filename, filters.complib), "w", filters=filters) as h5a:
pmtrd = h5a.create_earray(h5a.root, "pmtrd", tables.Int32Atom(), shape=(0, NPMTS, LEN_PMT),
chunkshape=(1,1,LEN_PMT))
for i in range(NEVENTS):
rdata = []
for j in range(NPMTS):
rdata.append(raw_data(LEN_PMT))
pmtrd.append(np.array(rdata).reshape(1, NPMTS, LEN_PMT))
pmtrd.flush()
# Using no compression
# %time rawdata_earray("PMTAs", tables.Filters(complib="zlib", complevel=0))
# Using Zlib (level 5) compression
# %time rawdata_earray("PMTAs", tables.Filters(complib="zlib", complevel=5))
# Using Blosc (level 5) compression
# %time rawdata_earray("PMTAs", tables.Filters(complib="blosc:lz4", complevel=9))
# !ls -sh *.h5
# We see that by using the Blosc compressor one can achieve around 10x faster output operation wrt Zlib, although the compression ratio can be somewhat smaller (but still pretty good).
# +
# Add the event IDs in a separate table in the same file
class PMTRD(tables.IsDescription):
# event_id = tables.Int32Col(pos=1, indexed=True)
event_id = tables.Int32Col(pos=1)
npmt = tables.Int8Col(pos=2)
def add_table(filename, filters):
with tables.open_file("{}-{}.h5".format(filename, filters.complib), "a", filters=filters) as h5a:
pmt = h5a.create_table(h5a.root, "pmt", PMTRD)
pmtr = pmt.row
for i in range(NEVENTS):
for j in range(NPMTS):
pmtr['event_id'] = i
pmtr['npmt'] = j
pmtr.append()
# -
# Using no compression
# %time add_table("PMTAs", tables.Filters(complib="zlib", complevel=0))
# Using Zlib (level 5) compression
# %time add_table("PMTAs", tables.Filters(complib="zlib", complevel=5))
# Using Blosc (level 9) compression
# %time add_table("PMTAs", tables.Filters(complib="blosc:lz4", complevel=9))
# !ls -sh *.h5
# After adding the table we continue to see that a better compression ratio is achieved for EArray + Table with respect to a single Table. Also, Blosc can make writing files significantly faster than not using compression (it has to write less).
# ## Retrieving data from a single Table
def read_single_table(complib, complevel):
with tables.open_file("PMTs-{}-{}.h5".format(complib, complevel), "r") as h5t:
pmt = h5t.root.pmt
for i, row in enumerate(pmt):
event_id, npmt, pmtrd = row["event_id"], row["npmt"], row["pmtrd"][:]
if i % 20 == 0:
print(event_id, npmt, pmtrd[0:5])
# %time read_single_table("None", 0)
# %time read_single_table("zlib", 5)
# %time read_single_table("blosc:lz4", 9)
# As Blosc could not compress the table, it has a performance that is worse (quite worse actually) to the uncompressed table. On its hand, Zlib can be more than 3x slower for reading than without compression.
# ## Retrieving data from the EArray + Table
def read_earray_table(complib, complevel):
with tables.open_file("PMTAs-{}.h5".format(complib, "r")) as h5a:
pmt = h5a.root.pmt
pmtrd_ = h5a.root.pmtrd
for i, row in enumerate(pmt):
event_id, npmt = row["event_id"], row["npmt"]
pmtrd = pmtrd_[event_id, npmt]
if i % 20 == 0:
print(event_id, npmt, pmtrd[0:5])
# %time read_earray_table("None", 0)
# %time read_earray_table("zlib", 5)
# %time read_earray_table("blosc:lz4", 9)
# So, the EArray + Table takes a similar time to read than a pure Table approach when no compression is used. And for some reason, when Zlib is used for compressing the data, the EArray + Table scenario degrades read speed significantly. However, when the Blosc compression is used, the EArray + Table works actually faster than for the single Table.
# ## Some plots on speeds and sizes
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# %matplotlib inline
# Let's have a look at the speeds at which data can be stored and read using the different paradigms:
# +
fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=False, sharey=False)
fig.set_size_inches(w=8, h=7, forward=False)
f = .550 # conversion factor to GB/s
rects1 = ax1.bar(np.arange(3), f / np.array([.323, 3.17, .599]), 0.25, color='r')
rects2 = ax1.bar(np.arange(3) + 0.25, f / np.array([.250, 2.74, .258]), 0.25, color='y')
_ = ax1.set_ylabel('GB/s')
_ = ax1.set_xticks(np.arange(3) + 0.25)
_ = ax1.set_xticklabels(('No Compr', 'Zlib', 'Blosc'))
_ = ax1.legend((rects1[0], rects2[0]), ('Single Table', 'EArray+Table'), loc=9)
_ = ax1.set_title('Speed to store data')
rects1 = ax2.bar(np.arange(3), f / np.array([.099, .592, .782]), 0.25, color='r')
rects2 = ax2.bar(np.arange(3) + 0.25, f / np.array([.082, 1.09, .171]), 0.25, color='y')
_ = ax2.set_ylabel('GB/s')
_ = ax2.set_xticks(np.arange(3) + 0.25)
_ = ax2.set_xticklabels(('No Compr', 'Zlib', 'Blosc'))
_ = ax2.legend((rects1[0], rects2[0]), ('Single Table', 'EArray+Table'), loc=9)
_ = ax2.set_title('Speed to read data')
# -
# And now, see the different sizes for the final files:
fig, ax1 = plt.subplots()
fig.set_size_inches(w=8, h=5)
rects1 = ax1.bar(np.arange(3), np.array([550, 17, 42]), 0.25, color='r')
rects2 = ax1.bar(np.arange(3) + 0.25, np.array([550, 4.1, 9]), 0.25, color='y')
_ = ax1.set_ylabel('MB')
_ = ax1.set_xticks(np.arange(3) + 0.25)
_ = ax1.set_xticklabels(('No Compr', 'Zlib', 'Blosc'))
_ = ax1.legend((rects1[0], rects2[0]), ('Single Table', 'EArray+Table'), loc=9)
_ = ax1.set_title('Size for stored datasets')
# ## Conclusions
# The main conclusion here is that, whenever you have a lot of data to dump (typically in the form of an array), a combination of an EArray + Table is preferred instead of a single Table. The reason for this is that HDF5 can store the former arrangement more efficiently, and that fast compressors like Blosc works way better too.
# The deep explanation on why using an EArray to store the raw data gives these advanatges is because we are physically (not only logically!) separating data that is highly related (like the result of some measurements) and also is homogeneous (of type Int32 in this case). This manual separation is critical for getting better compression ratios and faster speeds too, specially when using fast compressors like Blosc.
# Finally, although meaningful, this experiment is based on a pure synthetic dataset. It is always wise to use your own data in order to get your conclusions. It is specially recommended to have a look at the different compressors that comes with PyTables and see which one fits better to your needs: http://www.pytables.org/usersguide/libref/helper_classes.html
| examples/Single_Table-vs-EArray_Table.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pyplot as plt
import pickle
import numpy as np
import tools
from pylab import *
import matplotlib.animation as animation
import matplotlib as mpl
import numpy as np
import os
import standard.analysis as sa
import tools
import matplotlib.pyplot as plt
import task
import tensorflow as tf
from model import FullModel
# %matplotlib inline
mpl.rcParams['font.size'] = 15
mpl.rcParams['pdf.fonttype'] = 42
mpl.rcParams['ps.fonttype'] = 42
mpl.rcParams['font.family'] = 'arial'
path = './files_temp/test'
npath = os.path.join(path,'000000')
w_glo = tools.load_pickle(path, 'w_glo')
plt.hist(w_glo[0].flatten(), bins=50)
plt.ylim([0, 5000])
plt.xlim([0, .5])
len(w_glo)
glo_in, glo_out, kc_in, kc_out, results = sa.load_activity(os.path.join(path,'000000'))
kc_in[:,:20]
kc_in.mean()
(kc_out>0).mean()
np.std(kc_in, axis=0).mean()
plt.imshow(kc_out[:100,:200])
plt.figure(figsize=(8,8))
plt.imshow(wglos[-1][:50,:20], cmap = 'jet', vmin=0, vmax = .05)
plt.colorbar()
np.sum(wglos[-1],axis=0)
x = tools.load_all_results(path, argLast=True)
plt.plot(x['hist'].T)
plt.ylim([0, 5000])
x['thres']
x.keys()
| notebooks/Scratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Puzzles from adventofcode.com 2015
# ## Day 2: I Was Told There Would Be No Math
def compute_feets(text):
l, w, h = [int(v) for v in text.split('x')]
f1 = l*w
f2 = w*h
f3 = h*l
return 2*(f1 + f2 + f3) + min(f1, f2, f3)
# Test examples
compute_feets("1x1x10")
# Real input
with open('day2/input.txt', 'rt') as gifts:
total_feets = 0
for gift in gifts:
total_feets += compute_feets(gift.rstrip())
print("They should order {} square feet of wrapping paper".format(total_feets))
def compute_ribbon(text):
l, w, h = [int(v) for v in text.split('x')]
return 2*(l + w + h - max(l, w, h)) + l*w*h
# Test examples
compute_ribbon("2x3x4")
# Real input
with open('day2/input.txt', 'rt') as gifts:
total_ribbon = 0
for gift in gifts:
total_ribbon += compute_ribbon(gift.rstrip())
print("They should order {} feet of ribbon".format(total_ribbon))
| 2015/jordi/Day 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exoplanet systems
# Here we'll discuss how to instantiate an exoplanet system and compute its full light curve. Currently, all orbital stuff lives in the `starry.kepler` module, which implements a simple Keplerian solver. This works for systems of exoplanets orbiting stars, moons orbiting planets, and binary star systems. Keep in mind, however, that the primary object is assumed to sit at the origin, and the secondary objects are assumed to be massless. A more flexible N-body solver will be added in the next version of the code, so stay tuned!
# ## Creating a star
# This is as easy as it gets:
# + nbsphinx="hidden"
# %matplotlib inline
# -
import starry
star = starry.kepler.Primary()
# A `kepler.Primary` object is just a body that sits at the origin and either occults or gets occulted by other bodies, so you don't have too much freedom in settings its properties. For one, its radius and luminosity are fixed at unity: this sets the units for the properties of the `Secondary` bodies, since those bodies' radii, semi-major axes, and luminosities are specified in units of the `Primary`. You can, however, specify spherical harmonic and/or limb darkening coefficients, an axis of rotation, and a rotational period. In fact, the `kepler.Primary` class is a subclass of the `Map` class, so everything you can do with a `Map` you can do here, too. Check out the `Spotted star` tutorial for some ideas.
#
# As a simple example, let's set the limb darkening coefficients of the star. Let's give the stellar map a linear and a quadratic limb-darkening coefficient:
star[1] = 0.40
star[2] = 0.26
# Here's what that looks like:
star.show()
# ## Creating a planet
# Let's create a (very) hot Jupiter with some interesting properties:
planet = starry.kepler.Secondary(lmax=5)
planet.r = 0.1
planet.L = 5e-3
planet.porb = 1
planet.prot = 1
planet.a = 30
planet.Omega = 30
planet.ecc = 0.3
planet.w = 30
# Here we've instantiated a planet with a fifth degree surface map, a radius that is one-tenth that of the star, a luminosity that is one-two-hundredth that of the star, an orbital period of 1 day, a rotational period of 1 day, and a semi-major axis that's thirty times the stellar radius. This is essentially a tidally-locked hot Jupiter.
#
# **NOTE:** *The default rotational period for planets is **infinity**, so if you don't specify* `prot` *, the planet's surface map will not rotate as the planet orbits the star and there will be no phase curve variation. For planets whose emission tracks the star--i.e., a hot Jupiter with a hotspot--set* `prot=porb` *. Also note that the default luminosity is *zero*, so make sure to set that as well if you want any emission from the planet!*
#
# There are a bunch of other settings related to the orbit, so check out the docs for those. By default, planets are given zero eccentricity, edge-on inclination, and zero obliquity, and $t = 0$ corresponds to a transiting configuration.
#
# OK, the next thing we get to do is specify the planet's map. For simplicity, let's just create a random one:
import numpy as np
np.random.seed(123)
for l in range(1, planet.lmax + 1):
for m in range(-l, l + 1):
planet[l, m] = 0.01 * np.random.randn()
planet.show()
# Note that when instantiating a map for a `starry.kepler.Secondary` instance, **the map should be defined as it would appear during occultation by the primary (i.e., secondary eclipse).** That is, the image you see above is the full dayside of the planet, with the orbital plane slicing the planet's equator and ecliptic north pointing up. **This is true even for planets that aren't edge on and don't have secondary eclipses.** The map is always defined as if it were seen edge on, with the orbital plane along the $xy$ plane and the planet behind the primary. Check out the [viewing geometry tutorial](geometry.html) for more information.
#
# Now, it's probably a good idea to ensure we didn't end up with negative specific intensity anywhere:
planet.is_physical()
# This routine performs gradient descent to try to find the global minimum of the map, and returns `True` if the minimum is greater than or equal to zero. Since it's determining this numerically, it's probably a good idea to avoid doing this repeatedly (say, in an MCMC problem).
# ## Creating a system
# Now that we have a star and a planet, we can instantiate a planetary system:
system = starry.kepler.System(star, planet)
# The first argument to a `starry.System` call is a `kepler.Primary` object, followed by any number of `kepler.Secondary` objects.
#
# There are some other system attributes you can set--notably an exposure time (`exptime`)--if the exposure time of your data is long enough to affect the light curve shape. Check out the docs for more information.
# ## Computing light curves
# We're ready to compute the full system light curve:
time = np.linspace(-0.25, 3.25, 10000)
# %time system.compute(time)
# Cool -- `starry` computed 10,000 cadences in 15 ms. Let's check it out:
import matplotlib.pyplot as pl
# %matplotlib inline
fig, ax = pl.subplots(1, figsize=(14, 3))
ax.set_xlabel('Time [days]', fontsize=16)
ax.set_ylabel('System Flux', fontsize=16)
ax.plot(time, system.lightcurve);
# We can also plot the stellar and planetary light curves individually:
# This will show you only the transits
fig, ax = pl.subplots(1, figsize=(14, 3))
ax.set_xlabel('Time [days]', fontsize=16)
ax.set_ylabel('Planet Flux', fontsize=16)
ax.plot(time, star.lightcurve);
# This will show you only the planet's flux (phase curve + secondary eclipse)
fig, ax = pl.subplots(1, figsize=(14, 3))
ax.set_xlabel('Time [days]', fontsize=16)
ax.set_ylabel('Planet Flux', fontsize=16)
ax.plot(time, planet.lightcurve);
# And, just for fun, the planet's orbit (the sky plane is the $xy$ plane, with $y$ pointing up and $x$ pointing to the right; $z$ points toward the observer):
# +
fig, ax = pl.subplots(1, figsize=(14, 4.25))
ax.plot(time, planet.X, label='x')
ax.plot(time, planet.Y, label='y')
ax.plot(time, planet.Z, label='z')
ax.set_ylabel(r'Position [R$_*$]', fontsize=16);
ax.set_xlabel(r'Time [days]', fontsize=16);
ax.legend();
fig, ax = pl.subplots(1,3, sharex=True, sharey=True, figsize=(14, 4.25))
ax[0].plot(planet.X, planet.Y)
ax[1].plot(planet.X, planet.Z)
ax[2].plot(planet.Z, planet.Y)
for n in [0, 1, 2]:
ax[n].scatter(0, 0, marker='*', color='k', s=100, zorder=10)
ax[0].set_xlabel(r'x [R$_*$]', fontsize=16);
ax[0].set_ylabel(r'y [R$_*$]', fontsize=16);
ax[1].set_xlabel(r'x [R$_*$]', fontsize=16);
ax[1].set_ylabel(r'z [R$_*$]', fontsize=16);
ax[2].set_xlabel(r'z [R$_*$]', fontsize=16);
ax[2].set_ylabel(r'y [R$_*$]', fontsize=16);
# -
# ## Comparison to `batman`
# One last thing we can do is compare a simple transit calculation to what we'd get with the `batman` code [(Kreidberg 2015)](https://astro.uchicago.edu/~kreidberg/batman/), a widely used and well-tested light curve tool.
# First, let's define all the system parameters:
u1 = 0.4 # Stellar linear limb darkening coefficient
u2 = 0.26 # Stellar quadratic limb darkening coefficient
rplanet = 0.1 # Planet radius in units of stellar radius
inc = 89.95 # Planet orbital inclination
P = 50 # Planet orbital period in days
a = 300 # Planet semi-major axis in units of stellar radius
# We'll evaluate the light curve on the following time grid:
npts = 500
time = np.linspace(-0.1, 0.1, npts)
# Let's evaluate the `starry` light curve for this system:
# +
# Instantiate the star
star = starry.kepler.Primary()
star[1] = u1
star[2] = u2
# Instantiate the planet
planet = starry.kepler.Secondary()
planet.r = rplanet
planet.inc = inc
planet.porb = P
planet.a = a
planet.lambda0 = 90
# Instantiate the system
system = starry.kepler.System(star, planet)
# Compute and store the light curve
system.compute(time)
flux_starry = system.lightcurve
# -
# And now the `batman` light curve:
import batman
params = batman.TransitParams()
params.limb_dark = "quadratic"
params.u = [u1, u2]
params.t0 = 0.
params.ecc = 0.
params.w = 90.
params.rp = rplanet
params.a = a
params.per = P
params.inc = inc
m = batman.TransitModel(params, time)
flux_batman = m.light_curve(params)
# Let's plot the two light curves:
fig, ax = pl.subplots(1, figsize=(14, 6))
ax.set_xlabel('Time [days]', fontsize=16)
ax.set_ylabel('Stellar Flux', fontsize=16)
ax.plot(time, flux_starry, label='starry', lw=3);
ax.plot(time, flux_batman, '--', label='batman', lw=3);
ax.legend(fontsize=14);
# Here is the difference between the two models:
fig, ax = pl.subplots(1, figsize=(14, 6))
ax.set_xlabel('Time [days]', fontsize=16)
ax.set_ylabel('Residuals', fontsize=16)
ax.plot(time, flux_starry - flux_batman);
# It's on the order of a few parts per billion, which is quite small. The oscillations are due to the fact that `batman` uses [Hasting's approximation](https://github.com/lkreidberg/batman/blob/master/c_src/_quadratic_ld.c#L304) to compute the elliptic integrals, which is slightly faster but leads to small errors. In practice, however, the two models are equivalent for exoplanet transit modeling.
# **One final note.** You may have noticed that some of the parameters accepted by `batman` and `starry` are slightly different. While `batman` accepts the transit mid-point `t0` as an input parameter, `starry` accepts the mean longitude `lambda0` at the reference time `tref` instead. This is a bit more general and useful, for instance, in the case of non-transiting bodies. By default, `tref=0` and `lambda0=90`, corresponding to a transit time of `t0=0`. For eccentric orbits things can get a little tricky converting between these two conventions, but the way to do is to remember that the **true anomaly at the transit midpoint is**
#
# $f_{t_0} = \frac{\pi}{2} - \varpi$
#
#
# where $\varpi$ is the longitude of pericenter, the `w` attribute of a `starry.kepler.Secondary` instance. Solving [Kepler's equation](https://en.wikipedia.org/wiki/Eccentric_anomaly) can then yield the time of the transit mid-point.
# For circular orbits, the true anomaly, eccentric anomaly, and mean anomaly are all the same ($f = E = M$), and since $M = \lambda - \varpi$, we have $\lambda = \frac{\pi}{2}$ at the transit mid-point, as expected.
| sparse/repos/rodluger/starry/binder/Basics 3 - Exoplanet systems.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <img src="../../img/ods_stickers.jpg">
# ## ะัะบััััะน ะบััั ะฟะพ ะผะฐัะธะฝะฝะพะผั ะพะฑััะตะฝะธั. ะกะตััะธั โ 3
#
# ### <center> ะะฒัะพั ะผะฐัะตัะธะฐะปะฐ: <NAME>
# ## <center> ะะฝะดะธะฒะธะดัะฐะปัะฝัะน ะฟัะพะตะบั ะฟะพ ะฐะฝะฐะปะธะทั ะดะฐะฝะฝัั
</center>
# **ะะปะฐะฝ ะธััะปะตะดะพะฒะฐะฝะธั**
# - ะะฟะธัะฐะฝะธะต ะฝะฐะฑะพัะฐ ะดะฐะฝะฝัั
ะธ ะฟัะธะทะฝะฐะบะพะฒ
# - ะะตัะฒะธัะฝัะน ะฐะฝะฐะปะธะท ะฟัะธะทะฝะฐะบะพะฒ
# - ะะตัะฒะธัะฝัะน ะฒะธะทัะฐะปัะฝัะน ะฐะฝะฐะปะธะท ะฟัะธะทะฝะฐะบะพะฒ
# - ะะฐะบะพะฝะพะผะตัะฝะพััะธ, "ะธะฝัะฐะนัั", ะพัะพะฑะตะฝะฝะพััะธ ะดะฐะฝะฝัั
# - ะัะตะดะพะฑัะฐะฑะพัะบะฐ ะดะฐะฝะฝัั
# - ะกะพะทะดะฐะฝะธะต ะฝะพะฒัั
ะฟัะธะทะฝะฐะบะพะฒ ะธ ะพะฟะธัะฐะฝะธะต ััะพะณะพ ะฟัะพัะตััะฐ
# - ะัะพัั-ะฒะฐะปะธะดะฐัะธั, ะฟะพะดะฑะพั ะฟะฐัะฐะผะตััะพะฒ
# - ะะพัััะพะตะฝะธะต ะบัะธะฒัั
ะฒะฐะปะธะดะฐัะธะธ ะธ ะพะฑััะตะฝะธั
# - ะัะพะณะฝะพะท ะดะปั ัะตััะพะฒะพะน ะธะปะธ ะพัะปะพะถะตะฝะฝะพะน ะฒัะฑะพัะบะธ
# - ะัะตะฝะบะฐ ะผะพะดะตะปะธ ั ะพะฟะธัะฐะฝะธะตะผ ะฒัะฑัะฐะฝะฝะพะน ะผะตััะธะบะธ
# - ะัะฒะพะดั
#
# ะะพะปะตะต ะดะตัะฐะปัะฝะพะต ะพะฟะธัะฐะฝะธะต [ััั](https://goo.gl/cJbw7V).
# +
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
import scipy
from statsmodels.stats.weightstats import *
from sklearn.linear_model import RidgeCV, Ridge, Lasso, LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, KFold
from sklearn.preprocessing import StandardScaler, LabelBinarizer, PolynomialFeatures
from sklearn.metrics import mean_absolute_error, make_scorer
from xgboost import XGBClassifier
from hyperopt import fmin,tpe, hp, STATUS_OK, Trials
import xgboost as xgb
from sklearn.model_selection import learning_curve, validation_curve
# %matplotlib inline
# -
# ### ะงะฐััั 1. ะะฟะธัะฐะฝะธะต ะฝะฐะฑะพัะฐ ะดะฐะฝะฝัั
ะธ ะฟัะธะทะฝะฐะบะพะฒ
# ะะฐัะฐัะตั ัะพะดะตัะถะธั ะธะฝัะพัะผะฐัะธั ะพ 53940 ะฑัะธะปะปะธะฐะฝัะฐั
. ะะพ ะฝะตะบะพัะพััะผ ั
ะฐัะฐะบัะตัะธััะธะบะฐะผ (ะพะฑ ััะพะผ ะฟะพะทะถะต) ะฑัะดะตะผ ะฟัะตะดัะบะฐะทัะฒะฐัั ััะพะธะผะพััั. ะะฐะฝะฝัะต ะผะพะถะฝะพ ัะบะฐัะฐัั <a href='https://www.kaggle.com/shivam2503/diamonds/data'>ะทะดะตัั</a>.
#
# ะก ัะพัะบะธ ะทัะตะฝะธั ะฑะธะทะฝะตัะฐ, ัะตะฝะฝะพััั ะทะฐะดะฐัะธ ะฟะพะฝััะฝะฐ - ะฟะพ ั
ะฐัะฐะบัะตัะธััะธะบะฐะผ ะฑัะธะปะปะธะฐะฝัะฐ ะฟัะตะดัะบะฐะทะฐัั, ัะบะพะปัะบะพ ะดะพะปะปะฐัะพะฒ ะทะฐ ะฝะตะณะพ ะผะพะถะฝะพ ะฟะพะปััะธัั. ะั ะฑะธะทะฝะตัะฐ ั ะดะฐะปัะบ, ะฟะพััะพะผั ะธะฝัะตัะตั ัะธััะพ ัะฟะพััะธะฒะฝัะน: ัะฐะทะพะฑัะฐัััั, ะบะฐะบะธะต ั
ะฐัะฐะบัะตัะธััะธะบะธ ะธ ะบะฐะบ ะฒะปะธััั ะฝะฐ ััะพะธะผะพััั ััะธั
ะบะฐะผะตัะบะพะฒ =)
#
# <b>ะัะธะทะฝะฐะบะธ</b>
# - carat - ะฒะตั ะฑัะธะปะปะธะฐะฝัะฐ ะฒ ะบะฐัะฐัะฐั
, ะฒะตัะตััะฒะตะฝะฝัะน
# - cut - ะบะฐัะตััะฒะพ ะพะณัะฐะฝะบะธ, ะบะฐัะตะณะพัะธะฐะปัะฝัะน. ะัะธะฝะธะผะฐะตั ะฟััั ะฒะพะทะผะพะถะฝัั
ะทะฝะฐัะตะฝะธะน: Fair, Good, Very Good, Premium, Ideal
# - color - "ัะฒะตั" ะฑัะธะปะปะธะฐะฝัะฐ. ะะฐัะตะณะพัะธะฐะปัะฝัะน ะฟัะธะทะฝะฐะบ, ะฟัะธะฝะธะผะฐะตั ะทะฝะฐัะตะฝะธั J,I,H,G,F,E,D (ะพั ั
ัะดัะตะณะพ (J) ะบ ะปัััะตะผั (D))
# - clarity - ัะธััะพัะฐ ะฑัะธะปะปะธะฐะฝัะฐ. ะะฐัะตะณะพัะธะฐะปัะฝัะน ะฟัะธะทะฝะฐะบ, ะฟัะธะฝะธะผะฐะตั ะทะฝะฐัะตะฝะธั I1 (ั
ัะดัะธะน), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (ะปัััะธะน)
# - x,y,z - ััะธ ะฟัะธะทะฝะฐะบะฐ, ั
ะฐัะฐะบัะตัะธะทััะธะต ัะฐะทะผะตัั ะฑัะธะปะปะธะฐะฝัะฐ, ะฒะตัะตััะฒะตะฝะฝัะต
# - depth - ะฟัะธะทะฝะฐะบ, ะบะพัะพััะน ะฒัััะธััะฒะฐะตััั ะฝะฐ ะพัะฝะพะฒะต ััะตั
ะฟัะตะดัะดััะธั
ะฟะพ ัะพัะผัะปะต 2 * z / (x + y), ะฒะตัะตััะฒะตะฝะฝัะน
# - table - ะพัะฝะพัะตะฝะธะต ัะธัะธะฝั ะฒะตัั
ะฝะตะน ะณัะฐะฝะธ ะฑัะธะปะปะธะฐะฝัั ะบ ะตะณะพ ะผะฐะบัะธะผะฐะปัะฝะพะน ัะธัะธะฝะต, ะฒ ะฟัะพัะตะฝัะฐั
#
#
# <b>ะฆะตะปะตะฒะพะน ะฟัะธะทะฝะฐะบ</b>: price - ััะพะธะผะพััั ะฑัะธะปะปะธะฐะฝัะฐ ะฒ ะดะพะปะปะฐัะฐั
#
#
# ### ะงะฐััั 2. ะะตัะฒะธัะฝัะน ะฐะฝะฐะปะธะท ะฟัะธะทะฝะฐะบะพะฒ
#ะทะฐะณััะถะฐะตะผ dataset
diamonds_df = pd.read_csv('../../data/diamonds.csv')
diamonds_df.head()
diamonds_df.describe()
# ะะธะดะฝะพ, ััะพ ะผะฐัััะฐะฑั ะฟัะธะทะฝะฐะบะพะฒ ะพัะปะธัะฐัััั. ะ ะดะฐะปัะฝะตะนัะตะผ ะฝัะถะฝะพ ะฑัะดะตั ะฟัะธะผะตะฝะธัั StandartScaler
diamonds_df.info()
# ะ ะดะฐะฝะฝัั
ะพััััััะฒััั ะฟัะพะฟััะบะธ. ะัะพะณะพ, ะธะผะตะตััั 6 ะฒะตัะตััะฒะตะฝะฝัั
, 1 ัะตะปะพัะธัะปะตะฝะฝัะน (unnamed: 0 ะฝะต ััะธัะฐะตะผ) ะธ 3 ะบะฐัะตะณะพัะธะฐะปัะฝัั
ะฟัะธะทะฝะฐะบะฐ.
# ### ะะฝะฐะปะธะท ัะตะปะพัะธัะปะตะฝะฝัั
ะธ ะฒะตัะตััะฒะตะฝะฝัั
ะฟัะธะทะฝะฐะบะพะฒ
real_features = ['carat', 'depth', 'table', 'x', 'y', 'z','price']
# ะะทััะธะผ ะบะพััะตะปััะธั ะฒะตัะตััะฒะตะฝะฝัั
ะฟัะธะทะฝะฐะบะพะฒ ะธ ัะตะปะตะฒะพะน ะฟะตัะตะผะตะฝะฝะพะน
sns.heatmap(diamonds_df[real_features].corr(method='spearman'));
# ะัะธะทะฝะฐะบะธ carat, x,y,z ะธะผะตัั ะฑะพะปัััั ะบะพััะตะปััะธั, ะบะฐะบ ะผะตะถะดั ัะพะฑะพะน, ัะฐะบ ะธ ั ัะตะปะตะฒะพะน ะฟะตัะตะผะตะฝะฝะพะน, ััะพ ะฝะต ัะดะธะฒะธัะตะปัะฝะพ. ะัะธ ััะพะผ, ะบะพััะตะปััะธั ัะตะปะตะฒะพะน ะฟะตัะตะผะตะฝะฝะพะน ะธ ะฟัะธะทะฝะฐะบะพะฒ depth, table ะฟะพััะธ ะพััััััะฒัะตั
# #### ะะฝะฐะปะธะท ะบะฐัะตะณะพัะธะฐะปัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ
cat_features = ['cut','color','clarity']
# +
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(16, 10))
for idx, feature in enumerate(cat_features):
sns.countplot(diamonds_df[feature], ax=axes[idx % 3], label=feature)
# -
# ะ ะตะฐะปัะฝัะต ะทะฝะฐัะตะฝะธั ะบะฐัะตะณะพัะธะฐะปัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ ะฝะต ะพัะปะธัะฐัััั ะพั ัะตั
, ััะพ ะทะฐัะฒะปะตะฝั ะฒ ะพะฟะธัะฐะฝะธะธ. ะัะพะผะต ัะพะณะพ, ะฒะธะดะฝะพ, ััะพ ัะฝะธะบะฐะปัะฝัั
ะทะฝะฐัะตะฝะธะน ะฝะต ะผะฝะพะณะพ, ัะฐะบ ััะพ One Hot encoding ะดะพะปะถะตะฝ ะพัะปะธัะฝะพ ััะฐะฑะพัะฐัั.
# #### ะะฝะฐะปะธะท ัะตะปะตะฒะพะณะพ ะฟัะธะทะฝะฐะบะฐ
sns.distplot(diamonds_df['price'])
# ะ ะฐัะฟัะตะดะตะปะตะฝะธะต ะธะผะตะตั ััะถะตะปัะน ะฟัะฐะฒัะน ั
ะฒะพัั. ะัะธะผะตะฝะธะผ ะปะพะณะฐัะธัะผะธัะพะฒะฐะฝะธะต.
sns.distplot(diamonds_df['price'].map(np.log1p))
# ะะพะผะพะณะปะพ ััะพ ะฝะต ัะธะปัะฝะพ: ะฟะพะปััะธะปะพัั ะฑะธะผะพะดะฐะปัะฝะพะต ัะฐัะฟัะตะดะตะปะตะฝะธะต. ะะฐัะพ ั
ะฒะพัั ะธััะตะท =) ะะปั ะฝะฐะณะปัะดะฝะพััะธ, ะฟะพัััะพะธะผ QQ ะณัะฐัะธะบ
stats.probplot(diamonds_df['price'], dist="norm", plot=plt);
# #### ะัะฒะพะดั
# - ะะตัะตััะฒะตะฝะฝัะต ะฟัะธะทะฝะฐะบะธ (carat, depth, table, x, y, z) ะผะฐัััะฐะฑะธััะตะผ
# - ะ ะบะฐัะตะณะพัะธะฐะปัะฝัะผ ะฟัะธะทะฝะฐะบะฐะผ ('cut','color','clarity') ะฟัะธะผะตะฝัะตะผ one hot encoding
# - ะฆะตะปะตะฒัั ะฟะตัะตะผะตะฝะฝัั ะปะพะณะฐัะธัะผะธััะตะผ
# ### ะงะฐััั 3. ะะตัะฒะธัะฝัะน ะฒะธะทัะฐะปัะฝัะน ะฐะฝะฐะปะธะท ะฟัะธะทะฝะฐะบะพะฒ
# #### ะะฝะฐะปะธะท ัะตะปะพัะธัะปะตะฝะฝัั
ะธ ะฒะตัะตััะฒะตะฝะฝัั
ะฟัะธะทะฝะฐะบะพะฒ
# +
# ะะฐัะฝะตะผ ั ะฟะพัััะพะตะฝะธั ะณะธััะพะณัะฐะผะผ ะฒะตัะตััะฒะตะฝะฝัั
ะฟัะธะทะฝะฐะบะพะฒ
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(16, 10))
for idx, feature in enumerate(real_features[:-1]): #price ัะธัะพะฒะฐัั ะฝะต ะฑัะดะตะผ
sns.distplot(diamonds_df[feature], ax=axes[idx // 3, idx % 3], label=feature)
# -
# ะ ะฐัะฟัะตะดะตะปะตะฝะธะต ะฟัะธะทะฝะฐะบะพะฒ depth, table, y, z ะพัะดะฐะปะตะฝะฝะพ, ะฝะพ ะฝะฐะฟะพะผะธะฝะฐะตั ะบะพะปะพะบะพะป. ะฃ depth ั
ะฒะพััั ััะถะตะปะพะฒะฐัั ะดะปั ะฝะพัะผะฐะปัะฝะพะณะพ ัะฐัะฟัะตะดะตะปะตะฝะธั; carat ะธ table ัะบะพัะตะต ะฑะธะผะพะดะฐะปัะฝัะต. ะัะพะผะต ัะพะณะพ, ั ะฝะธั
ััะถะตะปัะต ะฟัะฐะฒัะต ั
ะฒะพััั, ัะฐะบ ััะพ np.log1p ะฝะต ะฟะพะผะตัะฐะตั. ะะพ ะณัะฐัะธะบะฐะผ ะฒััะต ะฝะต ะฒะธะดะฝะพ ะฒัะฑัะพัะพะฒ. ะัะพะฒะตัะธะผ, ััะพ ััะพ ะดะตะนััะฒะธัะตะปัะฝะพ ัะฐะบ, ั ะฟะพะผะพััั boxplot
# +
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(16, 10))
for idx, feature in enumerate(real_features[:-1]): #price ัะธัะพะฒะฐัั ะฝะต ะฑัะดะตะผ
sns.boxplot(diamonds_df[feature], ax=axes[idx // 3, idx % 3], orient='v')
# -
# ะะฐะบะธั
-ะปะธะฑะพ ัะตััะตะทะฝัั
ะฐะฝะพะผะฐะปะธะน ะฒ ัะฐััะผะฐััะธะฒะฐะตะผัั
ะดะฐะฝะฝัั
ะฝะตั. ะะฐ ะฒััะบะธะน ัะปััะฐะน ะฟะพัะผะพััะธะผ ะฑัะธะปะปะธะฐะฝั ั y=60, z = 32 ะธ carat > 4. ะัะปะธ ะพะฝ ััะพะธั ะดะพัะพะณะพ, ัะพ ะทะฐ ะฒัะฑัะพั ะตะณะพ ััะธัะฐัั ะฝะต ะฑัะดะตะผ.
diamonds_df[diamonds_df['y'] > 55].head()
diamonds_df[diamonds_df['z'] > 30].head()
diamonds_df[diamonds_df['carat'] > 4].head()
# ะะธะดะฝะพ, ััะพ ััะพ ะฟัะพััะพ ะพัะตะฝั ะดะพัะพะณะธะต ะบะฐะผะฝะธ. ะะพัะผะพััะธะผ, ะบะฐะบ ัะฐััะผะฐััะธะฒะฐะตะผัะต ะฟัะธะทะฝะฐะบะธ ะฒะทะฐะธะผะพัะฒัะทะฐะฝั ั ัะตะปะตะฒะพะน ะฟะตัะผะตะฝะฝะพะน
sns.pairplot(diamonds_df[real_features], diag_kind="kde")
# - ะฒะตั ะฑัะธะปะปะธะฐะฝัะฐ ะฟะพะบะฐะทัะฒะฐะตั ััะตะฟะตะฝะฝัั ะทะฐะฒะธัะธะผะพััั ะพั ะตะณะพ ัะฐะทะผะตัะพะฒ
# - depth ะธ table ะฟะพััะธ ะฝะธะบะฐะบ ะฝะต ะฒะทะฐะธะผะพัะฒัะทะฐะฝั ั ะพััะฐะปัะฝัะผะธ ะฟัะธะทะฝะฐะบะฐะผะธ, ะฒ ัะพะผ ัะธัะปะต ะธ ัะตะปะตะฒัะผ
# - x,y,z ัะฒัะทะฐะฝั ะผะตะถะดั ัะพะฑะพะน ะปะธะฝะตะนะฝะพ
# - ัะตะฝะฐ ะปะธะฝะตะนะฝะพ ะทะฐะฒะธัะธั ะพั ัะฐะทะผะตัะพะฒ
# - ะทะฐะฒะธัะธะผะพััั ะผะตะถะดั ัะตะฝะพะน ะธ ะฒะตัะพะผ ัะปะพะถะฝะพ ะฝะฐะทะฒะฐัั ะปะธะฝะตะนะฝะพะน, ะฝะพ ะผะพะฝะพัะพะฝะฝัะน ััะตะฝะด ะตััั
# #### ะะฝะฐะปะธะท ะบะฐัะตะณะพัะธะฐะปัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ
# ะะพัะผะพััะธะผ, ะบะฐะบ ัะตะปะตะฒะฐั ะฟะตัะตะผะตะฝะฝะฐั ะทะฐะฒะธัะธั ะพั ะบะฐัะตะณะพัะธะฐะปัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ
# +
# ัะฒะตั ะฑัะธะปะปะธะฐะฝัะฐ
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(16, 10))
for idx, (color, sub_df) in enumerate(pd.groupby(diamonds_df, 'color')):
ax = sns.distplot(sub_df['price'], kde=False, ax=axes[idx // 3, idx % 3])
ax.set(xlabel=color)
# -
# ะ ะฐัะฟัะตะดะตะปะตะฝะธั ะดะปั ะฒัะตั
ะทะฝะฐัะตะฝะธะน ัะฒะตัะพะฒ ะธะผะตัั ััะถะตะปัะน ะฟัะฐะฒัะน ั
ะฒะพัั ะธ ะฝะต ัะธะปัะฝะพ ะพัะปะธัะฐัััั ะดััะณ ะพั ะดััะณะฐ.
# +
# ัะธััะพัะฐ ะฑัะธะปะปะธะฐะฝัะฐ
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(16, 10))
for idx, (clarity, sub_df) in enumerate(pd.groupby(diamonds_df, 'clarity')):
ax = sns.distplot(sub_df['price'], kde=False, ax=axes[idx // 3, idx % 3])
ax.set(xlabel=clarity)
# -
# ะฅะฒะพััั ั ะฒัะตั
ััะถะตะปัะต, ะฝะพ ั SI1,SI2 ะฟัะธัััััะฒััั ะดะพะฟะพะปะฝะธัะตะปัะฝัะต ะฟะธะบะธ ะฒ ัะฐะนะพะฝะต 5000.
# +
# ะบะฐัะตััะฒะพ ะพะณัะฐะฝะบะธ
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(16, 10))
for idx, (cut, sub_df) in enumerate(pd.groupby(diamonds_df, 'cut')):
ax = sns.distplot(sub_df['price'], kde=False, ax=axes[idx // 3, idx % 3])
ax.set(xlabel=cut)
# -
# ะ ัะฝะพะฒะฐ ะฟะธะบะธ ะฒ ัะฐะนะพะฝะต 5000 (ั Good ะธ Premium). ะ ะฒ ัะตะปะพะผ ะณัะฐัะธะบะธ ะฟะพั
ะพะถะธ.
# ะะฐัะธััะตะผ boxplot ะดะปั ะบะฐะถะดะพะณะพ ะทะฝะฐัะตะฝะธั
# +
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 10))
# ะัะพะฑัะฐะทะธะผ ัััะพะบะธ ะฒ ัะธัะปะฐ ะฒ ะฟะพััะดะบะต ะพั ั
ัะดัะตะณะพ ะบ ะปัััะตะผั. ะขะฐะบ ัะดะพะฑะฝะตะต ะฝะฐ ะณัะฐัะธะบะต ัะผะพััะตัั
df = diamonds_df.copy()
df['color'] = df['color'].map({'J': 0, 'I': 1, 'H': 2, 'G': 3, 'F': 4, 'E': 5, 'D': 6})
df['clarity'] = df['clarity'].map({'I1': 0, 'SI2': 1, 'SI1': 2, 'VS2': 3, 'VS1': 4, 'VVS2': 5, 'VVS1': 6, 'IF': 7 })
df['cut'] = df['cut'].map({'Fair': 0, 'Good': 1, 'Very Good': 2, 'Premium': 3, 'Ideal': 4})
for idx, feature in enumerate(cat_features):
sns.boxplot(x=feature, y='price',data=df,hue=feature, ax=axes[idx])
# -
# ะขัั ัะถะต ะธะฝัะตัะตัะฝะตะต. ะะฐัะฝะตะผ ั ะพะณัะฐะฝะบะธ. ะะธะดะฝะพ, ััะพ ะผะตะดะธะฐะฝะฐ ะผะฐะบัะธะผะฐะปัะฝะฐ ะดะปั Very Good ะธ Premium. ะะปั ideal ะผะตะดะธะฐะฝะฝะพะต ะทะฝะฐัะตะฝะธะต ัะตะฝั ะณะพัะฐะทะดะพ ะผะตะฝััะต. ะะฝะฐะปะพะณะธัะฝัะต ะฝะฐะฑะปัะดะตะฝะธั ะผะพะถะฝะพ ัะดะตะปะฐัั ะดะปั ัะฒะตัะฐ ะธ ัะธััะพัั. ะะพะทะผะพะถะฝะพ, ะฑัะธะปะปะธะฐะฝัั ั ะฝะฐะธะปัััะธะผะธ ัะฒะพะนััะฒะฐะผะธ ะฝะฐ ะพัะตะฝั ะฑะพะปััะธะต, ะธ, ัะพะพัะฒะตััะฒะตะฝะฝะพ, ะธั
ัะตะฝะฐ ะฝะธะถะต. ะัะพะฒะตัะธะผ ััะพ.
# +
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 10))
for idx, feature in enumerate(cat_features):
sns.boxplot(x=feature, y='carat',data=df,hue=feature, ax=axes[idx])
# -
# ะะตะนััะฒะธัะตะปัะฝะพ, ะผะตะดะธะฐะฝะฝะพะต ะทะฝะฐัะตะฝะธะต ะฒะตัะฐ ะดะปั ะฑัะธะปะปะธะฐะฝัะพะฒ ั ะพัะตะฝั ั
ะพัะพัะธะผะธ ั
ะฐัะฐะบัะตัะธััะธะบะฐะผะธ ะผะตะฝััะต, ัะตะผ ะดะปั ะฑัะธะปะปะธะฐะฝัะพะฒ ั ะฟะปะพั
ะธะผะธ ั
ะฐัะฐะบะตััะธััะธะบะฐะผะธ. ะะฐะฟะพัะปะตะดะพะบ, ะฟะพัะผะพััะธะผ ัะบะพะปัะบะพ ะฑัะธะปะปะธะฐะฝัะพะฒ ั ัะพะน ะธะปะธ ะธะฝะพะน ั
ะฐัะฐะบะตัะธััะธะบะพะน ะฟัะธัััััะฒัะตั ะฒ ะดะฐะฝะฝัั
.
# +
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(16, 10))
for idx, feature in enumerate(cat_features):
sns.countplot(df[feature], ax=axes[idx % 3], label=feature)
# -
# ะะธะดะฝะพ, ััะพ ะพัะตะฝั ะผะฐะปะพ ะบะฐะผะฝะตะน ั ะฟะปะพั
ะพะน ะพะณัะฐะฝะบะพะน. ะขะฐะบะถะต ะผะฐะปะพ ะบะฐะผะฝะตะน ั ะฟะปะพั
ะธะผะธ ัะฒะตัะพะฒัะผะธ ั
ะฐัะฐะบะตัะธััะธะบะฐะผะธ. ะะพ ะธ ะฝะต ะพัะตะฝั ะผะฝะพะณะพ ั ะธะดะตะฐะปัะฝัะผะธ. ะ ะฐัะฟัะตะดะตะปะตะฝะธะต ัะธััะพัั ะบะฐะผะฝั ะฝะฐะฟะพะผะธะฝะฐะตั ะปะฐะฟะปะฐัะพะฒัะบะพะต ัะฐัะฟัะตะดะตะปะตะฝะธะต.
# ### ะงะฐััั 4. ะะฐะบะพะฝะพะผะตัะฝะพััะธ, "ะธะฝัะฐะนัั", ะพัะพะฑะตะฝะฝะพััะธ ะดะฐะฝะฝัั
# #### ะัะฝะพะฒะฝัะต ะฒัะฒะพะดั ะฟะพ ะฟัะตะดัะดััะธะผ ะฟัะฝะบัะฐะผ:
# - ะะปััะตะฒัะต ะฟัะธะทะฝะฐะบะธ ะดะปั ะฟัะพะณะฝะพะทะธัะพะฒะฐะฝะธั: ะฒะตั ะธ ัะฐะทะผะตัั ะฑัะธะปะปะธะฐะฝัะฐ (carat, x, y, z). ะะพ ะณัะฐัะธะบะฐะผ ะฒะธะดะฝะพ, ััะพ ะตััั ะผะพะฝะพัะพะฝะฝะฐั ะทะฐะฒะธัะธะผะพััั ััะธั
ะฟัะธะทะฝะฐะบะพะฒ ะธ ัะตะฝั. ะงัะพ ะปะพะณะธัะฝะพ
# - ะัะธะทะฝะฐะบะธ depth ะธ table ะฟะพััะธ ะฝะต ะฒะปะธััั ะฝะฐ ััะพะธะผะพััั ะบะฐะผะฝั
# - ะัะบะปััะธัะตะปัะฝะพ ะฟะพ ะบะฐัะตะณะพัะธะฐะปัะฝัะผ ะฟัะธะทะฝะฐะบะฐะผ ัะปะพะถะฝะพ ััะพ-ะปะธะฑะพ ัะบะฐะทะฐัั ะพ ัะตะปะตะฒะพะน ะฟะตัะตะผะตะฝะฝะพะน. ะะดะฝะฐะบะพ ะฒะธะดะฝะพ, ััะพ ัะตะผ ะปัััะต ะฑัะธะปะปะธะฐะฝั ั ัะพัะบะธ ะทัะตะฝะธั ััะธั
ะฟัะธะทะฝะฐะบะพะฒ, ัะตะผ ะฑะพะปััะต ะฒะตัะพััะฝะพััั ัะพะณะพ, ััะพ ะพะฝ ะฑัะดะตั ะฝะต ะพัะตะฝั ะฑะพะปััะพะณะพ ัะฐะทะผะตัะฐ
# - ะัะฑัะพัั ะฒ ะดะฐะฝะฝัั
ะพััััััะฒััั
# - ะขะฐะบ ะบะฐะบ ั ัะตะปะตะฒะพะน ะฟะตัะตะผะตะฝะฝะพะน ะพัะตะฝั ััะถะตะปัะน ะฟัะฐะฒัะน ั
ะฒะพัั, ะฒ ะบะฐัะตััะฒะต ะผะตััะธะบะธ ะฑัะดะตะผ ะธัะฟะพะปัะทะพะฒะฐัั ััะตะดะฝัั ะฐะฑัะพะปััะฝัั ะพัะธะฑะบั, ะฐ ะฝะต ะบะฒะฐะดัะฐัะธัะฝัั.
# - ะะธะดะฝะพ, ััะพ ะทะฐะฒะธัะธะผะพััั ะพั ะบะปััะตะฒัั
ะฟัะธะทะฝะฐะบะพะฒ ะฑะปะธะทะบะฐ ะบ ะปะธะฝะตะนะฝะพะน. ะะพััะพะผั ะฒ ะบะฐัะตััะฒะต ะฑะตะนะทะปะฐะนะฝะฐ ะฑัะดะตะผ ะธัะฟะพะปัะทะพะฒะฐัั ะปะธะฝะตะนะฝัั ัะตะณัะตััะธั.
# - ะะพะปะตะต ัะพะณะพ, ะฟัะธะทะฝะฐะบะพะฒ ะฝะต ัะฐะบ ัะถ ะธ ะผะฝะพะณะพ, ะฟะพััะพะผั ะฑัะดะตะผ ัะฐััะผะฐััะธะฒะฐัั ัะฐะบะถะต ัะปััะฐะนะฝัะน ะปะตั ะธ ะณัะฐะดะธะตะฝัะฝัะน ะฑัััะธะฝะณ (ััั ะพะฝ ะดะพะปะถะตะฝ ะทะฐัะฐัะธัั =)). ะ ัะปััะฐะนะฝัะน ะปะตั ะธะฝะตัะตัะตัะตะฝ ะธัะบะปััะธัะตะปัะฝะพ ะดะปั ััะฐะฒะฝะตะฝะธั ั ะฑัััะธะฝะณะพะผ
# ### ะงะฐััั 5. ะัะตะดะพะฑัะฐะฑะพัะบะฐ ะดะฐะฝะฝัั
# +
# ะะปั ะฝะฐัะฐะปะฐ, ะฒัะดะตะปะธะผ ะฒัะฑะพัะบั ะดะปั ัะตััะธัะพะฒะฐะฝะธั
X = diamonds_df.drop(['price'], axis=1).values[:,1:] # ะพััะตะบะฐะตะผ ะธะฝะดะตะบั
y = diamonds_df['price']
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=4444, shuffle=True)
# +
# ะฟัะธะทะฝะฐะบะธ ั ะธะฝะดะตะบัะฐะผะธ 1,2,3 ะบะฐัะตะณะพัะธะฐะปัะฝัะต. ะัะธะผะตะฝะธะผ ะบ ะฝะธะผ ohe
label_bin = LabelBinarizer()
X_train_cut_ohe = label_bin.fit_transform(X_train[:,1])
X_test_cut_ohe = label_bin.transform(X_test[:,1])
X_train_color_ohe = label_bin.fit_transform(X_train[:,2])
X_test_color_ohe = label_bin.transform(X_test[:,2])
X_train_clarity_ohe = label_bin.fit_transform(X_train[:,3])
X_test_clarity_ohe = label_bin.transform(X_test[:,3])
# carat, x ะธ ัะตะปะตะฒัั ะฟะตัะตะผะตะฝะฝัั ะปะพะณะฐัะธัะผะธััะตะผ
log_vect = np.vectorize(np.log1p)
X_train_ัarat_log = log_vect(X_train[:,0]).reshape(-1,1)
X_test_ัarat_log = log_vect(X_test[:,0]).reshape(-1,1)
X_train_x_log = log_vect(X_train[:,6]).reshape(-1,1)
X_test_x_log = log_vect(X_test[:,6]).reshape(-1,1)
y_train_log = log_vect(y_train)
y_test_log = log_vect(y_test)
# ะผะฐัััะฐะฑะธัะธัะตะผ ะฒะตัะตััะฒะตะฝะฝัะต ะฟัะธะทะฝะฐะบะธ
scaler = StandardScaler()
X_train_real = np.hstack((X_train_ัarat_log, X_train_x_log, X_train[:,[7,8,4,5]]))
X_test_real = np.hstack((X_test_ัarat_log, X_test_x_log, X_test[:,[7,8,4,5]]))
X_train_real_scaled = scaler.fit_transform(X_train_real)
X_test_real_scaled = scaler.transform(X_test_real)
# ะ ะบะฐัะตััะฒะต ะดะพะฟะพะปะฝะธัะตะปัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ ะฑัะดะตะผ ัะฐััะผะฐััะธะฒะฐัั ะฟะพะปะธะฝะพะผะธะฐะปัะฝัะต ะฟัะธะทะฝะฐะบะธ
#ะะฐะฝะฝัะต ะฟัะธะทะฝะฐะบะธ ะดะพะปะถะฝั ัะปัััะธัั ะบะฐัะตััะฒะพ ะปะธะฝะตะนะฝะพะน ะผะพะดะตะปะธ.
X_train_additional = PolynomialFeatures().fit_transform(X_train_real)
X_test_additional = PolynomialFeatures().fit_transform(X_test_real)
X_train_additional_scaled = scaler.fit_transform(X_train_additional)
X_test_additional_scaled = scaler.transform(X_test_additional)
# ะะฑัะตะดะธะฝัะตะผ ะฒัะต ะฟัะตะพะฑัะฐะทะพะฒะฐะฝะฝัะต ะฟัะธะทะฝะฐะบะธ
X_train_transformed = np.hstack((X_train_real_scaled,X_train_cut_ohe, X_train_color_ohe, X_train_clarity_ohe))
X_test_transformed = np.hstack((X_test_real_scaled,X_test_cut_ohe, X_test_color_ohe, X_test_clarity_ohe))
# -
# ### ะงะฐััั 6. ะกะพะทะดะฐะฝะธะต ะฝะพะฒัั
ะฟัะธะทะฝะฐะบะพะฒ ะธ ะพะฟะธัะฐะฝะธะต ััะพะณะพ ะฟัะพัะตััะฐ
# ะกะผะพััะธ ะฟัะตะดัะดััะธะน ะฟัะฝะบั
# ### ะงะฐััั 7. ะัะพัั-ะฒะฐะปะธะดะฐัะธั, ะฟะพะดะฑะพั ะฟะฐัะฐะผะตััะพะฒ
# ะ ะฐััะผะพััะธะผ ัะฝะฐัะฐะปะฐ ะปะธะฝะตะนะฝัั ะผะพะดะตะปั. ะะฐะฝะฝัะต ัะฐะทะดะตะปะธะผ ะฝะฐ 5 ัะพะปะดะพะฒ. ะก ะฟะพะผะพััั RidgeCV ะธ LassoCV ะฑัะดะตะผ ะพะฟัะธะผะธะทะธัะพะฒะฐัั ัะธะปั ัะตะณัะปััะธะทะฐัะธะธ.
# ััะฝะบัะธั ะฟะพัะตัั ะดะปั ัะฐััะผะฐััะธะฒะฐะตะผะพะน ะทะฐะดะฐัะธ. ะัะธะฑะบั ัะผะพััะธะผ ะฝะฐ ะธัั
ะพะดะฝัั
ะดะฐะฝะฝัั
def mean_absolute_exp_error(model, X,y):
return -mean_absolute_error(np.expm1(model.predict(X)), np.expm1(y))
cv = KFold(n_splits=5, shuffle=True, random_state=4444)
alphas = np.logspace(-5,2,100)
ridge_cv = RidgeCV(alphas=alphas, scoring=mean_absolute_exp_error, cv=cv)
lasso_cv = LassoCV(alphas=alphas, cv=cv, random_state=4444)
ridge_cv.fit(X_train_transformed, y_train_log)
lasso_cv.fit(X_train_transformed, y_train_log)
print('Optimized alpha: Ridge = %f, Lasso = %f' % (ridge_cv.alpha_, lasso_cv.alpha_))
score_ridge = mean_absolute_error(y_test, np.expm1(ridge_cv.predict(X_test_transformed)))
score_lasso = mean_absolute_error(y_test, np.expm1(lasso_cv.predict(X_test_transformed)))
print('Ridge regression score = %f' % score_ridge)
print('Lasso regression score = %f' % score_lasso)
# ะะฑะฐ ะผะตัะพะดะฐ ะฟะพะบะฐะทะฐะปะธ ัั
ะพะถะธะน ัะตะทัะปััะฐั. ะงัะพ ะฑัะดะตั, ะตัะปะธ ะผั ะดะพะฑะฐะฒะธะผ ะฝะพะฒัะต ะฟัะธะทะฝะฐะบะธ?
X_train_transformed_add = np.hstack((X_train_transformed, X_train_additional_scaled))
X_test_transformed_add = np.hstack((X_test_transformed, X_test_additional_scaled))
ridge_cv.fit(X_train_transformed_add, y_train_log)
lasso_cv.fit(X_train_transformed_add, y_train_log)
print('Optimized alpha: Ridge = %f, Lasso = %f' % (ridge_cv.alpha_, lasso_cv.alpha_))
score_ridge = mean_absolute_error(y_test, np.expm1(ridge_cv.predict(X_test_transformed_add)))
score_lasso = mean_absolute_error(y_test, np.expm1(lasso_cv.predict(X_test_transformed_add)))
print('Ridge regression score = %f' % score_ridge)
print('Lasso regression score = %f' % score_lasso)
# ะัะธะฑะบะฐ ะทะฝะฐัะธัะตะปัะฝะพ ัะผะตะฝััะธะปะฐัั. ะะพัััะพะธะผ ะบัะธะฒัะต ะฒะฐะปะธะดะฐัะธะธ ะธ ะพะฑััะตะฝะธั
# +
# %%time
# ะบะพะด ะธะท ััะฐััะธ ะฝะฐ ั
ะฐะฑัะต
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
model = Ridge(random_state=4444)
alphas = np.logspace(1,2,10) + 10 # ะัะปะธ ะบะพัััะธัะธะตะฝั ัะตะณัะปััะธะทะฐัะธะธ ะผะฐะป, ัะพ ะทะฝะฐัะตะฝะธั ะฟะพะปััะฐัััั ะทะฐะพะฑะปะพัะฝัะผะธ
val_train, val_test = validation_curve(model, X_train_transformed_add, y_train_log,'alpha', alphas, cv=cv,scoring=mean_absolute_exp_error)
plot_with_err(alphas, -val_train, label='training scores')
plot_with_err(alphas, -val_test, label='validation scores')
plt.xlabel(r'$\alpha$'); plt.ylabel('MAE')
plt.legend();
# -
# ะกัะดั ะฟะพ ะบัะธะฒัะผ ะฒะฐะปะธะดะฐัะธะธ, ะผะพะดะตะปั ะฝะตะดะพะพะฑััะธะปะฐัั: ะพัะธะฑะบะธ ะปะตะถะฐั ะฑะปะธะทะบะพ ะดััะณ ะบ ะดััะณั.
# +
# ะบะพะด ะธะท ััะฐััะธ ะฝะฐ ั
ะฐะฑัะต
def plot_learning_curve(model, X,y):
train_sizes = np.linspace(0.05, 1, 20)
N_train, val_train, val_test = learning_curve(model,X, y, train_sizes=train_sizes, cv=5,scoring=mean_absolute_exp_error, random_state=4444)
plot_with_err(N_train, -val_train, label='training scores')
plot_with_err(N_train, -val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('MAE')
plt.legend()
# -
model = Ridge(alpha=52.140083,random_state=4444)
plot_learning_curve(model, X_train_transformed_add, y_train_log)
# ะัะธะฒัะต ะปะตะถะฐั ะฑะปะธะทะบะพ ะดััะณ ะบ ะดััะณั ะฟะพััะธ ั ัะฐะผะพะณะพ ะฝะฐัะฐะปะฐ. ะัะฒะพะด: ะฝะฐะฑะปัะดะตะฝะธะน ั ะฝะฐั ะดะพััะฐัะพัะฝะพ, ะฝัะถะฝะพ ะดะฒะธะณะฐัััั ะฒ ััะพัะพะฝั ััะปะพะถะฝะตะฝะธั ะผะพะดะตะปะธ
# #### ะกะปััะฐะนะฝัะน ะปะตั
# ะกะปััะฐะนะฝัะน ะปะตั ะดะพะปะถะตะฝ ั
ะพัะพัะพ ัะฐะฑะพัะฐัั "ะธะท ะบะพัะพะฑะบะธ". ะะพััะพะผั ะฑัะดะตะผ ะพะฟัะธะผะธะทะธัะพะฒะฐัั ัะพะปัะบะพ ัะธัะปะพ ะดะตัะตะฒัะตะฒ.
#
# +
# %%time
model = RandomForestRegressor(n_estimators=100, random_state=4444)
n_estimators = [10,25,50,100,250,500,1000]
val_train, val_test = validation_curve(model, X_train_transformed, y_train_log,'n_estimators', n_estimators, cv=cv,scoring=mean_absolute_exp_error)
plot_with_err(n_estimators, -val_train, label='training scores')
plot_with_err(n_estimators, -val_test, label='validation scores')
plt.xlabel('n_estimators'); plt.ylabel('MAE')
plt.legend();
# -
# ะะธะดะฝะพ, ััะพ ะฝะฐัะธะฝะฐั ั 200 ะดะตัะตะฒัะตะฒ ะบะฐัะตััะฒะพ ะฟัะฐะบัะธัะตัะบะธ ะฝะต ะธะทะผะตะฝัะตััั. ะะพััะพะผั ะฒ ะบะฐัะตััะฒะต ะตัะต ะพะดะฝะพะน ะผะพะดะตะปะธ ะฑัะดะตะผ ัะฐััะผะฐััะธะฒะฐัั ัะปััะฐะนะฝัะน ะปะตั ะธะผะตะฝะฝะพ ั ัะฐะบะธะผ ะบะพะปะธัะตััะฒะพะผ ะดะตัะตะฒัะตะฒ.
forest_model = RandomForestRegressor(n_estimators=200, random_state=4444)
forest_model.fit(X_train_transformed, y_train_log)
forest_prediction = np.expm1(forest_model.predict(X_test_transformed))
score = mean_absolute_error(y_test, forest_prediction)
print('Random forest score: %f' % score)
# ะฟะพัะผะพััะธะผ ะฝะฐ ะฒะฐะถะฝะพััั ะฟัะธะทะฝะฐะบะพะฒ
np.argsort(forest_model.feature_importances_)
# ะะตัะฒัะต ัะตัััะต ััะพะปะฑัะฐ ะพะฑััะฐััะตะน ะฒัะฑะพัะบะธ ัะพะพัะฒะตัััะฒััั ะฟัะธะทะฝะฐะบะฐะผ carat, x,y,z. ะะฐะบ ะธ ะฟัะตะดะฟะพะปะฐะณะฐะปะพัั ะฒ ะฝะฐัะฐะปะต, 3 ะธะท 4 ััะธั
ะฟัะธะทะฝะฐะบะพะฒ ะธะผะตัั ะฝะฐะธะฑะพะปัััั ะฒะฐะถะฝะพััั ะดะปั ะผะพะดะตะปะธ
# %%time
# ะะพัััะพะธะผ, ัะฐะบะถะต, ะบัะธะฒัั ะพะฑััะตะฝะธั
plot_learning_curve(model, X_train_transformed, y_train_log)
# ะัะฐัะธะบ ะฒัั
ะพะดะธั ะฝะฐ ะฟะพะปะบั, ัะฐะบ ััะพ ะฑะพะปััะต ะดะฐะฝะฝัั
ะฝะฐะผ ะฝะต ะฝัะถะฝะพ
# #### boosting. ะ ััะพ boosting?
X_train_boosting, X_valid_boosting, y_train_boosting, y_valid_boosting = train_test_split(
X_train_transformed, y_train_log, test_size=0.3, random_state=4444)
# +
def score(params):
from sklearn.metrics import log_loss
print("Training with params:")
print(params)
params['max_depth'] = int(params['max_depth'])
dtrain = xgb.DMatrix(X_train_boosting, label=y_train_boosting)
dvalid = xgb.DMatrix(X_valid_boosting, label=y_valid_boosting)
model = xgb.train(params, dtrain, params['num_round'])
predictions = model.predict(dvalid).reshape((X_valid_boosting.shape[0], 1))
score = mean_absolute_error(np.expm1(y_valid_boosting), np.expm1(predictions))
# score = mean_absolute_error(y_valid_boosting, predictions)
print("\tScore {0}\n\n".format(score))
return {'loss': score, 'status': STATUS_OK}
def optimize(trials):
space = {
'num_round': 200,
'learning_rate': hp.quniform('eta', 0.05, 0.5, 0.005),
'max_depth': hp.quniform('max_depth', 3, 14, 1),
'min_child_weight': hp.quniform('min_child_weight', 1, 10, 1),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'gamma': hp.quniform('gamma', 0.5, 1, 0.01),
'colsample_bytree': hp.quniform('colsample_bytree', 0.4, 1, 0.05),
'eval_metric': 'mae',
'objective': 'reg:linear',
'nthread' : 4,
'silent' : 1,
'seed': 4444
}
best = fmin(score, space, algo=tpe.suggest,trials=trials, max_evals=100)
return best
# -
# %%time
# ะะฟัะธะผะธะทะฐัะธั ะฟะฐัะฐะผะตััะพะฒ
trials = Trials()
best_params = optimize(trials)
best_params
params = {
'num_round': 200,
'colsample_bytree': 0.65,
'eta': 0.145,
'gamma': 0.55,
'max_depth': 10,
'min_child_weight': 4.0,
'subsample': 1.0,
'eval_metric': 'mae',
'objective': 'reg:linear',
'nthread' : 4,
'silent' : 1,
'seed': 4444}
dtrain = xgb.DMatrix(X_train_transformed, label=y_train_log)
dvalid = xgb.DMatrix(X_test_transformed, label=y_test_log)
boosting_model = xgb.train(params, dtrain, params['num_round'])
predictions = boosting_model.predict(dvalid).reshape((X_test_transformed.shape[0], 1))
score = mean_absolute_error(y_test, np.expm1(predictions))
print('Boosting score: %f' % score)
# ### ะงะฐััั 8. ะะพัััะพะตะฝะธะต ะบัะธะฒัั
ะฒะฐะปะธะดะฐัะธะธ ะธ ะพะฑััะตะฝะธั
# ะ ะฑะพะปััะพะผ ะบะพะปะธัะตััะฒะต ะฒ ะฟัะตะดัะดััะตะผ ะฟัะฝะบัะต
# ### ะงะฐััั 9. ะัะพะณะฝะพะท ะดะปั ัะตััะพะฒะพะน ะธะปะธ ะพัะปะพะถะตะฝะฝะพะน ะฒัะฑะพัะบะธ
# ะ ะฑะพะปััะพะผ ะบะพะปะธัะตััะฒะต ะฒ ัะฐััะธ 7
# ### ะงะฐััั 10. ะัะตะฝะบะฐ ะผะพะดะตะปะธ ั ะพะฟะธัะฐะฝะธะตะผ ะฒัะฑัะฐะฝะฝะพะน ะผะตััะธะบะธ
# ะัะธะฒะตะดะตะผ ัะตะทัะปััะฐัั ัะฐะทะปะธัะฝัั
ะผะพะดะตะปะตะน ะฝะฐ ัะตััะพะฒะพะน ะฒัะฑะพัะบะต.
# ะะฐะบ ัะถะต ะพะณะพะฒะฐัะธะฒะฐะปะพัั ัะฐะฝะตะต, ะฒ ะบะฐัะตััะฒะต ะผะตััะธะบะธ ะธัะฟะพะปัะทัะตะผ MAE
pure_ridge = Ridge(random_state=4444, alpha=0.00001) # ะณัะตะฑะฝะตะฒะฐั ัะตะณัะตััะธั ะฝะฐ ะธัั
ะพะดะฝัั
ะดะฐะฝะฝัั
pure_ridge.fit(X_train_transformed, y_train_log)
pure_ridge_score = mean_absolute_error(y_test, np.expm1(pure_ridge.predict(X_test_transformed)))
print('Ridge regression score: %f' % pure_ridge_score)
poly_ridge = Ridge(random_state=4444, alpha=52.140083) # ะณัะตะฑะฝะตะฒะฐั ัะตะณัะตััะธั ั ะฟะพะปะธะฝะพะผะธะฐะปัะฝัะผะธ ะฟัะธะทะฝะฐะบะฐะผะธ
poly_ridge.fit(X_train_transformed_add, y_train_log)
poly_ridge_score = mean_absolute_error(y_test, np.expm1(poly_ridge.predict(X_test_transformed_add)))
print('Ridge regression score with poly features: %f' % poly_ridge_score)
forest_score = mean_absolute_error(y_test, np.expm1(forest_model.predict(X_test_transformed)))
print('Random forest score: %f' % forest_score)
boosting_score = mean_absolute_error(y_test, np.expm1(boosting_model.predict(dvalid)))
print('XGBoost score: %f' % boosting_score)
# ะ ะตะทัะปััะฐัั ะฑะปะธะทะบะธ ะบ ัะตะผ, ััะพ ะฟะพะปััะฐะปะธัั ะฝะฐ ะบัะพัั-ะฒะฐะปะธะดะฐัะธะธ. ะขะฐะบ ััะพ ะฒัั ั
ะพัะพัะพ =)
# ### ะงะฐััั 11. ะัะฒะพะดั
# ะ ะดะฐะฝะฝะพะผ ะฟัะพะตะบัะต ัะฐััะผะฐััะธะฒะฐะปะธัั ะดะพััะฐัะพัะฝะพ "ะฟัะพัััะต" ะดะฐะฝะฝัะต, ะฟะพััะพะผั ะพัะฝะพะฒะฝะพะน ัะฟะพั ะฑัะป ัะดะตะปะฐะฝ ะฝะฐ ะฟัะธะผะตะฝะตะฝะธะต ัะฐะทะปะธัะฝัั
ะผะพะดะตะปะตะน ะดะปั ะธั
ะฐะฝะฐะปะธะทะฐ. ะก ะพะดะฝะพะน ััะพัะพะฝั, ัะปััะฐะนะฝัะน ะปะตั ะฑะตะท ะบะฐะบะพะน-ะปะธะฑะพ ะฝะฐัััะพะนะบะธ ะณะธะฟะตัะฟะฐัะฐะผะตััะพะฒ ะฟะพะบะฐะทะฐะป ะปัััะธะน ัะตะทัะปััะฐั. ะก ะดััะณะพะน ััะพัะพะฝั, ะตัะปะธ ะฟะพััะฐัะธัั ะฑะพะปััะต ะฒัะตะผะตะฝะธ ะฝะฐ ะพะฟัะธะผะธะทะฐัะธั ะณัะฐะดะธะตะฝัะฝะพะณะพ ะฑัััะธะฝะณะฐ, ะฒะพะทะผะพะถะฝะพ, ะพะฝ ัะผะพะถะตั ะฟะพะบะฐะทะฐัั ัะตะทัะปััะฐั ะปัััะต, ัะตะผ ั ัะปััะฐะนะฝะพะณะพ ะปะตัะฐ. ะกัะพะธั, ัะฐะบะถะต, ะพัะผะตัะธัั ะปะธะฝะตะนะฝัั ะผะพะดะตะปั: ะฟะพัะปะต ะดะพะฑะฐะฒะปะตะฝะธั ะฟะพะปะธะฝะพะผะธะฐะปัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ ะพะฝะฐ ะฟะพะบะฐะทะฐะปะฐ ะพัะตะฝั ะฝะตะฟะปะพั
ะพะน ัะตะทัะปััะฐั (ะตัะปะธ ััะฐะฒะฝะธะฒะฐัั ั ะผะพะดะตะปัั ะฑะตะท ะดะพะฟะพะปะฝะธัะตะปัะฝัั
ะฟัะธะทะฝะฐะบะพะฒ =)). ะะฐัะพ ัะปะพะถะฝะพััั ะณะพัะฐะทะดะพ ะผะตะฝััะต. ะัะปะธ ะฒะดััะณ ะบะพะผั-ัะพ ะฟะพ ะถะธะทะฝะธ ะฟัะธะดะตััั ะพัะตะฝะธะฒะฐัั ะฑัะธะปะปะธะฐะฝัั, ะผะพะถะตัะต ัะผะตะปะพ ะธัะฟะพะปัะทะพะฒะฐัั ะฟัะตะดะปะพะถะตะฝะฝัั ะผะพะดะตะปั ัะปััะฐะนะฝะพะณะพ ะปะตัะฐ. ะ ััะตะดะฝะตะผ ะฑัะดะตัะต ัะตัััั ะฟะพ 275 $ ั ะพะดะฝะพะณะพ ะบะฐะผััะบะฐ :p
#
# ะกะฟะฐัะธะฑะพ ะทะฐ ะฒะฝะธะผะฐะฝะธะต!
| jupyter_russian/projects_individual/project_diamonds_kamenev.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + active=""
# .. _categorical_tutorial:
#
# .. currentmodule:: seaborn
# -
# # Plotting with categorical data
# + active=""
# We :ref:`previously <regression_tutorial>` learned how to use scatterplots and regression model fits to visualize the relationship between two variables and how it changes across levels of additional categorical variables. However, what if one of the main variables you are interested in is categorical? In this case, the scatterplot and regression model approach won't work. There are several options, however, for visualizing such a relationship, which we will discuss in this tutorial.
#
# It's useful to divide seaborn's categorical plots into three groups: those that show each observation at each level of the categorical variable, those that show an abstract representation of each *distribution* of observations, and those that apply a statistical estimation to show a measure of central tendency and confidence interval. The first includes the functions :func:`swarmplot` and :func:`stripplot`, the second includes :func:`boxplot` and :func:`violinplot`, and the third includes :func:`barplot` and :func:`pointplot`. These functions all share a basic API for how they accept data, although each has specific parameters that control the particulars of the visualization that is applied to that data.
#
# Much like the relationship between :func:`regplot` and :func:`lmplot`, in seaborn there are both relatively low-level and relatively high-level approaches for making categorical plots. The functions named above are all low-level in that they plot onto a specific matplotlib axes. There is also the higher-level :func:`factorplot`, which combines these functions with a :class:`FacetGrid` to apply a categorical plot across a grid of figure panels.
#
# It is easiest and best to invoke these functions with a DataFrame that is in `"tidy" <http://vita.had.co.nz/papers/tidy-data.pdf>`_ format, although the lower-level functions also accept wide-form DataFrames or simple vectors of observations. See below for examples.
# -
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid", color_codes=True)
np.random.seed(sum(map(ord, "categorical")))
titanic = sns.load_dataset("titanic")
tips = sns.load_dataset("tips")
iris = sns.load_dataset("iris")
# + active=""
# Categorical scatterplots
# ------------------------
#
# A simple way to show the the values of some quantitative variable across the levels of a categorical variable uses :func:`stripplot`, which generalizes a scatterplot to the case where one of the variables is categorical:
# -
sns.stripplot(x="day", y="total_bill", data=tips);
# + active=""
# In a strip plot, the scatterplot points will usually overlap. This makes it difficult to see the full distribution of data. One easy solution is to adjust the positions (only along the categorical axis) using some random "jitter":
# -
sns.stripplot(x="day", y="total_bill", data=tips, jitter=True);
# + active=""
# A different approach would be to use the function :func:`swarmplot`, which positions each scatterplot point on the categorical axis with an algorithm that avoids overlapping points:
# -
sns.swarmplot(x="day", y="total_bill", data=tips);
# + active=""
# It's also possible to add a nested categorical variable with the ``hue`` parameter. Above the color and position on the categorical axis are redundant, but now each provides information about one of the two variables:
# -
sns.swarmplot(x="day", y="total_bill", hue="sex", data=tips);
# + active=""
# In general, the seaborn categorical plotting functions try to infer the order of categories from the data. If your data have a pandas ``Categorical`` datatype, then the default order of the categories can be set there. For other datatypes, string-typed categories will be plotted in the order they appear in the DataFrame, but categories that look numerical will be sorted:
# -
sns.swarmplot(x="size", y="total_bill", data=tips);
# + active=""
# With these plots, it's often helpful to put the categorical variable on the vertical axis (this is particularly useful when the category names are relatively long or there are many categories). You can force an orientation using the ``orient`` keyword, but usually plot orientation can be inferred from the datatypes of the variables passed to ``x`` and/or ``y``:
# -
sns.swarmplot(x="total_bill", y="day", hue="time", data=tips);
# + active=""
# Distributions of observations within categories
# -----------------------------------------------
#
# At a certain point, the categorical scatterplot approach becomes limited in the information it can provide about the distribution of values within each category. There are several ways to summarize this information in ways that facilitate easy comparisons across the category levels. These generalize some of the approaches we discussed in the :ref:`chapter <distribution_tutorial>` to the case where we want to quickly compare across several distributions.
#
# Boxplots
# ^^^^^^^^
#
# The first is the familiar :func:`boxplot`. This kind of plot shows the three quartile values of the distribution along with extreme values. The "whiskers" extend to points that lie within 1.5 IQRs of the lower and upper quartile, and then observations that fall outside this range are displayed independently. Importantly, this means that each value in the boxplot corresponds to an actual observation in the data:
# -
sns.boxplot(x="day", y="total_bill", hue="time", data=tips);
# + active=""
# For boxplots, the assumption when using a ``hue`` variable is that it is nested within the ``x`` or ``y`` variable. This means that by default, the boxes for different levels of ``hue`` will be offset, as you can see above. If your ``hue`` variable is not nested, you can set the ``dodge`` parameter to disable offsetting:
# -
tips["weekend"] = tips["day"].isin(["Sat", "Sun"])
sns.boxplot(x="day", y="total_bill", hue="weekend", data=tips, dodge=False);
# + active=""
# Violinplots
# ^^^^^^^^^^^
#
# A different approach is a :func:`violinplot`, which combines a boxplot with the kernel density estimation procedure described in the :ref:`distributions <distribution_tutorial>` tutorial:
# -
sns.violinplot(x="total_bill", y="day", hue="time", data=tips);
# + active=""
# This approach uses the kernel density estimate to provide a better description of the distribution of values. Additionally, the quartile and whikser values from the boxplot are shown inside the violin. Because the violinplot uses a KDE, there are some other parameters that may need tweaking, adding some complexity relative to the straightforward boxplot:
# -
sns.violinplot(x="total_bill", y="day", hue="time", data=tips,
bw=.1, scale="count", scale_hue=False);
# + active=""
# It's also possible to "split" the violins when the hue parameter has only two levels, which can allow for a more efficient use of space:
# -
sns.violinplot(x="day", y="total_bill", hue="sex", data=tips, split=True);
# + active=""
# Finally, there are several options for the plot that is drawn on the interior of the violins, including ways to show each individual observation instead of the summary boxplot values:
# -
sns.violinplot(x="day", y="total_bill", hue="sex", data=tips,
split=True, inner="stick", palette="Set3");
# + active=""
# It can also be useful to combine :func:`swarmplot` or :func:`swarmplot` with :func:`violinplot` or :func:`boxplot` to show each observation along with a summary of the distribution:
# -
sns.violinplot(x="day", y="total_bill", data=tips, inner=None)
sns.swarmplot(x="day", y="total_bill", data=tips, color="w", alpha=.5);
# + active=""
# Statistical estimation within categories
# ----------------------------------------
#
# Often, rather than showing the distribution within each category, you might want to show the central tendency of the values. Seaborn has two main ways to show this information, but importantly, the basic API for these functions is identical to that for the ones discussed above.
#
# Bar plots
# ^^^^^^^^^
#
# A familiar style of plot that accomplishes this goal is a bar plot. In seaborn, the :func:`barplot` function operates on a full dataset and shows an arbitrary estimate, using the mean by default. When there are multiple observations in each category, it also uses bootstrapping to compute a confidence interval around the estimate and plots that using error bars:
# -
sns.barplot(x="sex", y="survived", hue="class", data=titanic);
# + active=""
# A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable. In seaborn, it's easy to do so with the :func:`countplot` function:
# -
sns.countplot(x="deck", data=titanic, palette="Greens_d");
# + active=""
# Both :func:`barplot` and :func:`countplot` can be invoked with all of the options discussed above, along with others that are demonstrated in the detailed documentation for each function:
# -
sns.countplot(y="deck", hue="class", data=titanic, palette="Greens_d");
# + active=""
# Point plots
# ^^^^^^^^^^^
#
# An alternative style for visualizing the same information is offered by the :func:`pointplot` function. This function also encodes the value of the estimate with height on the other axis, but rather than show a full bar it just plots the point estimate and confidence interval. Additionally, pointplot connects points from the same ``hue`` category. This makes it easy to see how the main relationship is changing as a function of a second variable, because your eyes are quite good at picking up on differences of slopes:
# -
sns.pointplot(x="sex", y="survived", hue="class", data=titanic);
# + active=""
# To make figures that reproduce well in black and white, it can be good to use different markers and line styles for the levels of the ``hue`` category:
# -
sns.pointplot(x="class", y="survived", hue="sex", data=titanic,
palette={"male": "g", "female": "m"},
markers=["^", "o"], linestyles=["-", "--"]);
# + active=""
# Plotting "wide-form" data
# -------------------------
#
# While using "long-form" or "tidy" data is preferred, these functions can also by applied to "wide-form" data in a variety of formats, including pandas DataFrames or two-dimensional numpy arrays. These objects should be passed directly to the ``data`` parameter:
# -
sns.boxplot(data=iris, orient="h");
# + active=""
# Additionally, these functions accept vectors of Pandas or numpy objects rather than variables in a ``DataFrame``:
# -
sns.violinplot(x=iris.species, y=iris.sepal_length);
# + active=""
# To control the size and shape of plots made by the functions discussed above, you must set up the figure yourself using matplotlib commands. Of course, this also means that the plots can happily coexist in a multi-panel figure with other kinds of plots:
# -
f, ax = plt.subplots(figsize=(7, 3))
sns.countplot(y="deck", data=titanic, color="c");
# + active=""
# Drawing multi-panel categorical plots
# -------------------------------------
#
# As we mentioned above, there are two ways to draw categorical plots in seaborn. Similar to the duality in the regression plots, you can either use the functions introduced above, or the higher-level function :func:`factorplot`, which combines these functions with a :func:`FacetGrid` to add the ability to examine additional categories through the larger structure of the figure. By default, :func:`factorplot` produces a :func:`pointplot`:
# -
sns.factorplot(x="day", y="total_bill", hue="smoker", data=tips);
# + active=""
# However, the ``kind`` parameter lets you chose any of the kinds of plots discussed above:
# -
sns.factorplot(x="day", y="total_bill", hue="smoker", data=tips, kind="bar");
# + active=""
# The main advantage of using a :func:`factorplot` is that it is very easy to "facet" the plot and investigate the role of other categorical variables:
# -
sns.factorplot(x="day", y="total_bill", hue="smoker",
col="time", data=tips, kind="swarm");
# + active=""
# Any kind of plot can be drawn. Because of the way :class:`FacetGrid` works, to change the size and shape of the figure you need to specify the ``size`` and ``aspect`` arguments, which apply to each facet:
# -
sns.factorplot(x="time", y="total_bill", hue="smoker",
col="day", data=tips, kind="box", size=4, aspect=.5);
# + active=""
# It is important to note that you could also make this plot by using :func:`boxplot` and :class:`FacetGrid` directly. However, special care must be taken to ensure that the order of the categorical variables is enforced in each facet, either by using data with a ``Categorical`` datatype or by passing ``order`` and ``hue_order``.
#
# Because of the generalized API of the categorical plots, they should be easy to apply to other more complex contexts. For example, they are easily combined with a :class:`PairGrid` to show categorical relationships across several different variables:
# -
g = sns.PairGrid(tips,
x_vars=["smoker", "time", "sex"],
y_vars=["total_bill", "tip"],
aspect=.75, size=3.5)
g.map(sns.violinplot, palette="pastel");
| doc/tutorial/categorical.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Imporing libabries
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# Importing rbl data set
df = pd.read_csv('./stock_data/RBLBANK.csv',parse_dates=['Date']).set_index('Date')
df.head()
# Filling NULL values
df['Daily_return'] = (df['Close Price']).pct_change()
df['Daily_return'] = df['Daily_return'].replace([np.inf, -np.inf],np.nan)
df.dropna(inplace = True)
df['Daily_return'].isnull().sum()
# Daily mean of RBL
avg_daily_mean = df['Daily_return'].mean()
daily_mean = round(avg_daily_mean * 100, 2)
daily_mean
# Daily standard deviation of RBL
std_daily_mean = df['Daily_return'].std()
daily_std = round(std_daily_mean* 100, 2)
daily_std
# Annual mean and standard deviation of RBL
annual_mean = avg_daily_mean * 252
annual_stddev = std_daily_mean * math.sqrt(252)
round(annual_mean* 100, 2), round(annual_stddev* 100, 2)
def volatality_return(df):
df['Log_Ret'] = np.log(df['Close Price'] / df['Close Price'].shift(1))
df['Volatility'] = df['Log_Ret'].rolling(window=252).std() * np.sqrt(252)
pass
# # Importing data from multiple sectors
# * It - Mindtree
# * Textiles - Raymond
# * Bank - RBl Bank
# * Entertainment - PVR
# * Suzlon - Energy
# +
# It - mindtree
MINDTREE = pd.read_csv('./stock_data/MINDTREE.csv')
MINDTREE['Date']=MINDTREE['Date'].astype(dtype='datetime64')
MINDTREE=MINDTREE[MINDTREE['Series']=='EQ']
MINDTREE.drop(['Series'],axis=1,inplace=True)
MINDTREE.head()
# +
# Textiles - raymond
RAYMOND = pd.read_csv('./stock_data/RAYMOND.csv')
RAYMOND['Date']=RAYMOND['Date'].astype(dtype='datetime64')
RAYMOND=RAYMOND[RAYMOND['Series']=='EQ']
RAYMOND.drop(['Series'],axis=1,inplace=True)
RAYMOND.head()
# +
# Bank - RBl Bank
RBL = pd.read_csv('./stock_data/RBLBANK.csv')
RBL['Date']=RBL['Date'].astype(dtype='datetime64')
RBL=RBL[RBL['Series']=='EQ']
RBL.drop(['Series'],axis=1,inplace=True)
RBL.head()
# +
# Entertainment - pvr
PVR = pd.read_csv('./stock_data/PVR.csv')
PVR['Date']=PVR['Date'].astype(dtype='datetime64')
PVR=PVR[PVR['Series']=='EQ']
PVR.drop(['Series'],axis=1,inplace=True)
PVR.head()
# +
# Suzlon - energy
SUZLON = pd.read_csv('./stock_data/SUZLON.csv')
SUZLON['Date']=SUZLON['Date'].astype(dtype='datetime64')
SUZLON=SUZLON[SUZLON['Series']=='EQ']
SUZLON.drop(['Series'],axis=1,inplace=True)
SUZLON.head()
# -
# List of selected stocks
li = ['mindtree','raymond','rblbank','pvr','suzlon']
li = [x.upper() for x in li]
li_csv = ['./stock_data/'+x+'.csv' for x in li]
# +
# Data frame of selected stocks
def read_csv(filename):
return pd.read_csv(filename, parse_dates=['Date'])['Close Price']
df = pd.DataFrame()
for fname in li_csv:
df[fname.split('/')[-1][:-4]] = read_csv(fname)
df.head()
# -
# Creating equal weights
equal_weights = np.full(df.shape[1], 1/df.shape[1])
equal_weights
# +
# Annual return on portfolio with equal weights
def portfolio_annual_returns(df, weights):
return np.sum(df.pct_change().mean() * weights ) * 252
round(portfolio_annual_returns(df, equal_weights), 2 )
# -
# Covariance of the portfolio
portfolio_covariance = df.pct_change().cov()
portfolio_covariance
# +
# Annual voltatility
def portfolio_annual_volatility(portfolio, weights):
return np.sqrt(np.dot(weights.T, np.dot(df.pct_change().cov(), weights)) * np.sqrt(252))
round(portfolio_annual_volatility(df, equal_weights), 2)
# +
# sharpe ratio of portfolio
def portfolio_sharpe(df, weights ):
return portfolio_annual_returns(df, weights ) / portfolio_annual_volatility(df, weights )
round(portfolio_sharpe(df, equal_weights), 2)
# +
print("Portfolio Annualized Mean Return: ", round(portfolio_annual_returns(df, equal_weights), 2))
print("Portfolio Annualized Volatility: ", round(portfolio_annual_volatility(df, equal_weights), 2))
# +
# Normalizing weights
def normalize_weights(weights):
for i in range(0,3):
weights = np.round(weights, 3)
weights /= weights.sum()
return np.asarray(weights)
def random_weights():
weights = np.random.rand(df.shape[1])
return normalize_weights(weights)
random_weights()
# +
scatter_data = pd.DataFrame()
for i in range(0, 2500):
weights = random_weights()
returns = portfolio_annual_returns(df, weights )
volatility = portfolio_annual_volatility(df, weights )
sharpe = returns / volatility
scatter_data = scatter_data.append([{ "weights": weights,
"returns": returns,
"volatility": volatility,
"sharpe": sharpe }])
scatter_data.reset_index(inplace=True, drop=True)
scatter_data.head()
# -
# Portfolio with highest sharpe ratio
point_max_sharpe = scatter_data.loc[scatter_data['sharpe'].idxmax()]
point_max_sharpe
point_max_sharpe['weights']
# Portfolio with lowest volatility
point_min_volatility = scatter_data.loc[ scatter_data['volatility'].idxmin() ]
point_min_volatility
# +
# Monty-Carlo Simulation1
fig, ax = plt.subplots(figsize=(20, 10), nrows=1, ncols=1)
plt.scatter(
scatter_data.volatility,
scatter_data.returns,
c = scatter_data.sharpe)
plt.title('Portfolo Weightings - Monty-Carlo Simulation')
plt.ylabel('Annualized Return')
plt.xlabel('Annualized Volatility')
plt.colorbar()
# Mark the 2 portfolios where
plt.scatter(point_max_sharpe.volatility, point_max_sharpe.returns, marker=(5,1,0), c='b', s=200, label = 'P1 Sharpe ratio at the highest')
plt.scatter(point_min_volatility.volatility, point_min_volatility.returns, marker=(5,1,0), c='r', s=200, label = 'P2 Volatility is the lowest')
plt.legend()
plt.show()
| Module5/Carrer_Launcher_Module_5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to SHAP for explaining regression models
# ## CHAPTER 06 - *Introduction to model interpretability using SHAP*
#
# From **Applied Machine Learning Explainability Techniques** by [**<NAME>**](https://www.linkedin.com/in/aditya-bhattacharya-b59155b6/), published by **Packt**
# ### Objective
#
# In this notebook, let us get familiar with the SHAP (SHapley Additive exPlanation) framework for explaining regression models, based on the concepts discussed in Chapter 6 - Introduction to model interpretability using SHAP.
# ### Installing the modules
# Install the following libraries in Google Colab or your local environment, if not already installed.
# !pip install --upgrade pandas numpy matplotlib seaborn scikit-learn shap
# ### Loading the modules
# +
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
np.random.seed(123)
import seaborn as sns
import matplotlib.pyplot as plt
import shap
print(f"Shap version used: {shap.__version__}")
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
# -
# ### SHAP Errata
# Although, I really liked the idea of the SHAP algorithm, but unfortunately the SHAP Python framework has alot of issues, especially with the latest version (0.40.0) that I am using while making this tutorial.
#
# There are many pending pull requests which the project owners would need to review and approve and it was sad for me to see that the project owners are not actively taking care of the project.
# If you run across any issue, please check the open issues for SHAP first at: https://github.com/slundberg/shap/issues
# Currently, when I am writing this book, there are more than 1.3K open issues and close to 100 unreviewed pull requests.
#
# I myself, have come across few issues which running the framework and had to dig deep into the code of the framework, and make necessary changes to remove the error. Although the errors are quite trivial, but it is difficult for novice learners to identify and fix these by themselves. So, I am including the SHAP ERRATA section to resolve the known issues encountered by me, during the writing process.
#
# Please check https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques/blob/main/Chapter06/SHAP_ERRATA/ReadMe.md before proceeding.
# ### About the data
# **Red Wine Quality Dataset - Kaggle**
#
# The dataset is related to the red variant of the Portuguese "Vinho Verde" wine. For more details, consult the reference [Cortez et al., 2009]. Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.). We will use this dataset for solving a regression problem.
#
# **Citation** -
# *<NAME>, <NAME>, <NAME>, <NAME> and <NAME>. Modeling wine preferences by data mining from physicochemical properties.
# In Decision Support Systems, Elsevier, 47(4):547-553, 2009.*
#
# - Original Source - https://archive.ics.uci.edu/ml/datasets/wine+quality
# - Kaggle Source - https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009
#
# ### Loading the data
# We will read the training data
data = pd.read_csv('dataset/winequality-red.csv')
data.head()
data.shape
data.columns
data.info()
data.describe()
# Now that we see that our dataset has 10 features with all float values, we do not need to do any encoding of features. We can do data normalization for getting more accurate models. But our focus is on the model interpretability part using SHAP. So, we will skip the EDA and normalization part and build a simple ML model which might not be very accurate using the Random Forest algorithm. We still need to check for missing values, as a dataset with missing values can cause problems later.
# Dropping missing values
data.fillna(0,inplace=True)
data.shape
# Handling missing values
sns.displot(
data=data.isna().melt(value_name="missing"),
y="variable",
hue="missing",
multiple="fill",
aspect=1.5
)
plt.show()
# No missing values found in the data. We can proceed with the model training process.
# ### Training the model
features = data.drop(columns=['quality'])
labels = data['quality']
# Dividing the data into training-test set with 80:20 split ratio
x_train,x_test,y_train,y_test = train_test_split(features,labels,test_size=0.2, random_state=123)
model = RandomForestRegressor(n_estimators=2000, max_depth=30, random_state=123)
model.fit(x_train, y_train)
model.score(x_test, y_test)
# The coefficient of determination ($R^2$ coefficient) is around 0.5. The scores indicate that we do not have a very good ML model. Hence, it is even more important to explain such models. Let's define the prediction probability function (*f*) now, which we will used by the SHAP framework.
# ### Using SHAP for model interpretability
# Let's use SHAP to provide model interpretability. We will use the visualization methods for both global and local explainability.
explainer = shap.Explainer(model)
shap_values = explainer(x_test)
# ### Model interpretability using SHAP visualizations
# SHAP provides many visualization options to explain the working of the model. But some of the visualization might look sophisticated, so I will try to explain what the visualizations are trying to tell in the next section.
# #### Global interpretability with feature importance
plt.title('Feature Importance using SHAP')
shap.plots.bar(shap_values, show=True, max_display=12)
# Feature importance is one of the most common model explainability method. Instead of calculating feature importance in terms of learned model weighted, SHAP provides a model-agnostic approach to highlight the most influential features based on the mean absolute SHAP values.
# - **Pro** - Easy to interpret and very helpful to identify the dominating features based on the collective interactive with other features using SHAP values.
# - **Con** - Since it shows features based on mean absolute Shap values, it is hard to identify features which are positively influencing the model and negatively influencing the model.
# #### Global interpretability with heatmap plots
plt.title('SHAP Heatmap Plot')
shap.plots.heatmap(shap_values, max_display=12, show=False)
plt.gcf().axes[-1].set_box_aspect(100)
plt.ylabel('Features')
plt.show()
# Heatmaps are essential to understand the overall impact all features on the model. The heatmap shown above shows how the quality of the alcohol increases with higher amount of alcohol and sulphates as these features have higher shap values, indicated by the red region. The *f(x)* curve on the top, shows how the predicted value of wine quality incraeses with the increase in the number of data instances. This might be because the alcohol content and sulphate content is positively influencing the outcome.
# - **Pro** - Shows the most influential features and how the overall prediction varies with the data instances.
# - **Con** - Can be complicated to interpret for any common user.
# #### Global interpretability with Cohort plots
# **Please note** - Check the SHAP Errata section before proceeding with this section! I have encountered an error with the framework, which is a known issue when I am making this tutorial: https://github.com/slundberg/shap/issues/2325 and the solution is provided in https://github.com/yuuuxt/shap/commit/7c95fd2b48bcf9cad82b10f65e552a49b360afbd which is still an open Pull Request. Please try out the solution posted in the SHAP Errata secton, if you get any error while obtaining cohort values.
# index of the feature alcohol is 10
f_idx = 10
alcohol = ["High Alcohol" if shap_values[i].data[f_idx] >=10.0 else "Low Alcohol" for i in range(shap_values.shape[0])]
shap.plots.bar(shap_values.cohorts(alcohol).abs.mean(0), max_display=12)
# SHAP provides a unique way of forming sub-groups or cohorts to analyze feature importance. I found this to be a unique option in SHAP, which can be really helpful! But the same pros and cons remain like SHAP bar plots.
# #### Global interpretability with feature clustering
clustering = shap.utils.hclust(x_test, y_test)
shap.plots.bar(shap_values, clustering=clustering, clustering_cutoff=0.75, max_display = 12)
# This is another interesting visualization from SHAP. The clustering features helps us to visualize grouped features or features which have high interaction with each other. As we can see from the visual that *volatile acidity* and *citric acid* forms a group and *pH* and *fixed acidity* form another group, but collectively these sub groups interact with the *density* and forms a hierarchical cluster. So, this method uses *hierarchical clustering* to find features which have high interaction with each other and influences the model in a collective approach.
# #### Global interpretability with SHAP summary plots
shap.summary_plot(shap_values, x_test, plot_type="violin", show=False)
plt.gcf().axes[-1].set_box_aspect(10)
plt.show()
# SHAP Summary plots are interesting visualizations that does give alot of information and it is much better than feature importance bar plots. We can get the following information from SHAP summary plots.
# - Shapley value based feature importance - The plot shows the importance of the features in descending order of their importance based on Shapley values. The horizontal violin plot visualization shows the positive or the negative impact of the feature for some data points. But the impact of the feature values is displayed from high to low using interesting color combinations.
# - Feature correlation - The color of the horizontal violin plots for each feature shows the positive or negative correlation with the target outcome. From the plot, we can say that as alcohol is most depicted with the red color gradient, it has a positive correlation with the wine quality, whereas volatile acidity is depicted with blue color gradient, indicating that it has negative correlation with the target outcome, which is wine quality.
# #### Global interpretability with feature dependence plot
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(x_test)
shap.dependence_plot("pH", shap_values, x_test, show=False)
plt.gcf().axes[-1].set_box_aspect(50)
# The dependence plot helps us find out the partial dependence of a particular feature with another feature. From the visualization, we can interpret that for wines with higher alcohol content have slightly less pH values. This is an interesting plot to study interactions between the influential features.
# #### Local interpretability with force plot
# Initialization of javascript visualization in notebooks
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values[0,:], x_test.iloc[0,:], plot_cmap="PkYg")
# Force plots are interesting visualizations provided by SHAP for local interpretability of inference data. We can interpret the following information from this plot :
# - f(x) value of 6.57 is the model prediction outcome. This means that for the given inference data, the model has predicted the outcome as 6.57 as the wine quality.
# - The base value of 5.631 is the mean wine quality value predicted by the model. This means, if the features where totally absent, the model would just have given an average outcome as the base value.
# - Feature impact - The features that are pushing the model outcome to the higher side are shown in pink. So, the feature *sulphates*, *total sulfur dioxide* and *alcohol* are positively influencing the model to get a higher outcome. Whereas the features shown in green are pushing the model outcome to a lower value. This means the feature *free sulfur dioxide* is actually negatively influencing the model.
# #### Local interpretability using Waterfall plots
# **Please note** - Check the SHAP Errata section before proceeding with this section! I have encountered an error with the framework, which is a known issue when I am making this tutorial:GitHub Issue link - https://github.com/slundberg/shap/issues/2261. Please try out the solution posted in the SHAP Errata secton, if you get any error while using the Waterfall plot.
# +
figure = plt.figure(figsize=(25,12))
ax1 = figure.add_subplot(121)
explainer = shap.Explainer(model, x_test)
shap_values = explainer(x_test)
# For the test observation with index 0
shap.plots.waterfall(shap_values[0], max_display = 12, show=False)
ax1.title.set_text(f'The First Observation.\nModel prediction: {model.predict(x_test)[0]}')
ax2 = figure.add_subplot(122)
# Similarly for the test observation with index 1
shap.plots.bar(shap_values[1], max_display = 12, show=False)
ax2.title.set_text(f'The Second Observation.\n Model prediction: {model.predict(x_test)[1]}')
plt.tight_layout()
plt.show()
# -
shap.plots.waterfall(shap_values[1], max_display = 12)
# The waterfall plot is another interesting local interpretability plot available in SHAP. It is better than Feature Importance plots as it shows how each feature value of the local inference data contributes positively or negatively towards the model prediction. As we can see from the plot, for the second test observation, the alcohol value is lower than the mean value and hence it has a negative shapley value. The negative shapley value indicates that although *alcohol* is the most influential feature in the dataset, but since for the current inference data instance, the value of *alcohol* is lower than the average value, it is negatively influencing the model to give a lower prediction value for the *wine quality*. But compared to a waterfall plots, SHAP barplots are centered at 0.
shap.plots.bar(shap_values[0], max_display = 12)
# #### Local interpretability with decision plot
# +
expected_value = explainer.expected_value
figure = plt.figure(figsize=(10,5))
ax1 = figure.add_subplot(121)
shap_values = explainer.shap_values(x_test)[0]
shap.decision_plot(expected_value, shap_values, x_test, show=False)
ax1.title.set_text(f'The First Observation.\nModel prediction: {model.predict(x_test)[0]}')
ax2 = figure.add_subplot(122)
shap_values = explainer.shap_values(x_test)[1]
shap.decision_plot(expected_value, shap_values, x_test, show=False)
ax2.title.set_text(f'The Second Observation.\n Model prediction: {model.predict(x_test)[1]}')
plt.tight_layout()
plt.show()
# -
# As observed previously, force plots, barplots and even waterfall plots do not show the mean values of all the features and the mean predicted outcome. Also, if the dataset contains too many features, then force plots are extremely difficult to interpret. That is when decision plots are more useful for local explanations. It shows where the current local prediction of the model is higher or lower than the average predicted outcome and the most influential features, and how value of such features in the local data instance is affecting towards the model outcome. In the two examples shown, in both the cases the features *alcohol*, *total sulfur dioxide* and *sulphates* are either positively impacting or negatively impacting the model, thus leading to a higher or lower model predicton respectively. Other features are not significantly impacting the model's decision.
# ## Final Thoughts
# In this tutorial, we have discussed many interesting visualization options from SHAP. We have also seen how SHAP can be used to explain regression models. But I expected the framework to be better maintained by the project owners, so while trying it out, you would need to be careful and may want to look at the known issues if you encounter an error. Otherwise, it is very interesting, especially it considers the collective effect of features. In the next chapter, we will go through many more tutorials on SHAP to explain different types of models.
# ## Reference
# 1. Red Wine Quality Dataset - Kaggle - https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009
# 2. SHAP GitHub Project - https://github.com/slundberg/shap
# 3. SHAP Documentations - https://shap.readthedocs.io/en/latest/index.html
# 4. Some of the utility functions and code are taken from the GitHub Repository of the author - <NAME> https://github.com/adib0073
| Chapter06/Intro_to_SHAP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Relational Databases and pandas
import pandas as pd
# +
###
# -
# Imagine this notebook contains all of the gathering code from this entire lesson, plus the assessing and cleaning code done behind the scenes, and that the final product is a merged master DataFrame called *df*.
df = pd.read_csv('bestofrt_master.csv')
df.head(3)
# +
###
# -
# ### 1. Connect to a database
from sqlalchemy import create_engine
# Create SQLAlchemy Engine and empty bestofrt database
# bestofrt.db will not show up in the Jupyter Notebook dashboard yet
engine = create_engine('sqlite:///bestofrt.db')
# ### 2. Store pandas DataFrame in database
# Store the data in the cleaned master dataset (bestofrt_master) in that database.
# Store cleaned master DataFrame ('df') in a table called master in bestofrt.db
# bestofrt.db will be visible now in the Jupyter Notebook dashboard
df.to_sql('master', engine, index=False)
# ### 3. Read database data into a pandas DataFrame
# Read the brand new data in that database back into a pandas DataFrame.
df_gather = pd.read_sql('SELECT * FROM master', engine)
df_gather.head(3)
| Data_Wrangling/Gathering Data/gathering (7).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Predicting UFC Fights With Supervised Learning
# <NAME> - Oregon, USA - November 7, 2019
#
# This project focuses on UFC fight prediction using supervised learning models. The data comes from Kaggle (https://www.kaggle.com/rajeevw/ufcdata). A big thank you to the originator of this data, <NAME>. It is detailed and well put-together with zero missing data.
#
# Below in quotes is info about the two original datasets directly from its Kaggle page:
#
# " This is a list of every UFC fight in the history of the organisation. Every row contains information about both fighters, fight details and the winner. The data was scraped from ufcstats website. After fightmetric ceased to exist, this came into picture. I saw that there was a lot of information on the website about every fight and every event and there were no existing ways of capturing all this. I used beautifulsoup to scrape the data and pandas to process it. It was a long and arduous process, please forgive any mistakes. I have provided the raw files incase anybody wants to process it differently. This is my first time creating a dataset, any suggestions and corrections are welcome! Incase anyone wants to check out the work, I have all uploaded all the code files, including the scraping module here.
#
# Each row is a compilation of both fighter stats. Fighters are represented by 'red' and 'blue' (for red and blue corner). So for instance, red fighter has the complied average stats of all the fights except the current one. The stats include damage done by the red fighter on the opponent and the damage done by the opponent on the fighter (represented by 'opp' in the columns) in all the fights this particular red fighter has had, except this one as it has not occured yet (in the data). Same information exists for blue fighter. The target variable is 'Winner' which is the only column that tells you what happened. Here are some column definitions. "
#
#
# ### Overview
# 1. __Explore Original Datasets__
# > 1. Size and shape
# > 2. Sample view
# > 3. Missing data
# 2. __Create New Variables and Clean Data__
# > 1. Combine and create new variables
# > 2. Parse date/time
# > 3. Create dummy binary columns for 'Winner' category
# > 4. (Optional) trim dataset to include only 2011-2019 and four men's weight classes: featherweight, lightweight, welterweight, middleweight
# > 5. Create subset dataframe of key variables
# 3. __Exploratory Data Analysis__
# > 1. Basic statistics
# > 2. Bar plot
# - total wins (red vs blue)
# > 3. Count plot
# - weight classes
# > 4. Distribution plots
# - total fights (red vs blue)
# - wins (red vs blue)
# - age (red vs blue)
# > 5. Pair plots
# - offense and defense (red vs blue) compared to red wins
# - win % and finish % (red vs blue) compared to red wins
# > 6. Correlation matrix of key variables
# 4. __Supervised Learning__
# > 1. Define and preprocess data
# > 2. Support vector machine
# > 3. Naive Bayes
# > 4. Logistic regression
# > 5. Decision tree/random forest
# 5. __Summary and Conclusion__
# 6. __Acknowledgments__
# +
# import libraries
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
# +
# import original kaggle datasets
df_clean = pd.read_csv(r'C:\Users\AP\Desktop\ufc-fight-stats-clean.csv')
df_raw = pd.read_csv(r'C:\Users\AP\Desktop\ufc-fight-stats.csv')
# change all columns to lower case for ease and consistency of typing
df_clean.columns = map(str.lower, df_clean.columns)
df_raw.columns = map(str.lower, df_raw.columns)
# -
# ------------------
#
# ### Explore Original Datasets
#
# #### Pre-processed Dataset
#
# 1. Size and shape
# 2. Sample view
# 3. Missing data
# basic size and shape of preprocessed dataset
df_clean.info()
# #### Observations
# - The dataset contains 160 columns and approximately 3600 rows.
# sample view of dataset
df_clean.head()
# +
# quantify missing data
total_missing = df_clean.isnull().sum().sort_values(ascending=False)
percent_missing = (df_clean.isnull().sum()/df_clean.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total_missing, percent_missing], axis=1, keys=['Count', 'Percent'])
missing_data.head()
# -
# #### Raw Dataset
#
# 1. Size and shape
# 2. Sample view
# 3. Missing data
# basic size and shape of dataset
df_raw.info()
# #### Observations
# - The raw dataset has 145 columns and approximately 5100 rows.
# sample view of dataset
df_raw.head()
# +
# quantify missing data
total_missing = df_raw.isnull().sum().sort_values(ascending=True)
percent_missing = (df_raw.isnull().sum()/df_raw.isnull().count()).sort_values(ascending=True)
missing_data = pd.concat([total_missing, percent_missing], axis=1, keys=['Count', 'Percent'])
missing_data.head()
# -
# #### Observations
# - There are several differences between the two datasets. The raw set contains variables not found in the preprocessed version. This includes each fighter's name, who refereed the bout, and the date and location of the fight. The preprocessed version drops these variables and adds some more detailed fight metrics.
#
# - We need to combine some categories from each dataset. First, we will parse the date/time column in the raw set and add it to the preprocessed set.
#
# - No missing data! Thank you to the originator of this data, <NAME>.
#
# - Let's clean the data and create/combine new variables based on my intuitions from years of training and watching mixed martial arts.
# --------------------
#
# # Create New Variables and Clean Data
#
# 1. Combine and create new variables
# 2. Parse date/time
# 3. Create dummy binary columns for 'Winner' category
# 4. (Optional) trim dataset to include only 2011-2019 and four weight classes: featherweight, lightweight, welterweight, middleweight
# 5. Create subset dataframe of key variables
#
# ### Creat Key Variables
# - __Winner:__ winner of fight (red or blue corner)
# - __Win red:__ binary (1 for red win, 0 for red loss)
# - __Experience score:__ interaction between total fights and total rounds fought
# - __Streak score:__ interaction between current and longest win streak
# - __Win %:__ total wins divided by total fights
# - __Finish %:__ percentage of fights that end in KO/TKO, submission, or doctor's stoppage
# - __Decision %:__ percentage of fights that end in judges' decision
# - __Offense score:__ interaction between % significant strikes landed, submission attempts, takedowns landed, and knockdowns
# - __Defense score:__ interaction between % significant strikes absorbed, submission attempts against, and opponent takedowns landed
# +
# create new variables
# r = red corner
# b = blue corner
# win %
df_clean['r_win_pct'] = df_clean.r_wins / (df_clean.r_wins + df_clean.r_losses + df_clean.r_draw)
df_clean['b_win_pct'] = df_clean.b_wins / (df_clean.b_wins + df_clean.b_losses + df_clean.b_draw)
# total fights
df_clean['r_total_fights'] = df_clean.r_wins + df_clean.r_losses + df_clean.r_draw
df_clean['b_total_fights'] = df_clean.b_wins + df_clean.b_losses + df_clean.b_draw
# finish %
df_clean['r_finish_pct'] = (df_clean['r_win_by_ko/tko'] + df_clean.r_win_by_submission +
df_clean.r_win_by_tko_doctor_stoppage) / df_clean.r_total_fights
df_clean['b_finish_pct'] = (df_clean['b_win_by_ko/tko'] + df_clean.b_win_by_submission +
df_clean.b_win_by_tko_doctor_stoppage) / df_clean.b_total_fights
# decision %
df_clean['r_decision_pct'] = (df_clean.r_win_by_decision_majority + df_clean.r_win_by_decision_split +
df_clean.r_win_by_decision_unanimous) / df_clean.r_total_fights
df_clean['b_decision_pct'] = (df_clean.b_win_by_decision_majority + df_clean.b_win_by_decision_split +
df_clean.b_win_by_decision_unanimous) / df_clean.b_total_fights
# total strikes landed %
df_clean['r_total_str_pct'] = df_clean.r_avg_total_str_landed / df_clean.r_avg_total_str_att
df_clean['b_total_str_pct'] = df_clean.b_avg_total_str_landed / df_clean.b_avg_total_str_att
# total strikes absorbed %
df_clean['r_opp_total_str_pct'] = df_clean.r_avg_opp_total_str_landed / df_clean.r_avg_opp_total_str_att
df_clean['b_opp_total_str_pct'] = df_clean.b_avg_opp_total_str_landed / df_clean.b_avg_opp_total_str_att
# overall streak score
df_clean['r_streak'] = df_clean.r_current_win_streak * df_clean.r_longest_win_streak
df_clean['b_streak'] = df_clean.b_current_win_streak * df_clean.b_longest_win_streak
# offense score
df_clean['r_offense'] = df_clean.r_avg_sig_str_pct * df_clean.r_avg_kd * df_clean.r_avg_sub_att * df_clean.r_avg_td_pct
df_clean['b_offense'] = df_clean.b_avg_sig_str_pct * df_clean.r_avg_kd * df_clean.b_avg_sub_att * df_clean.b_avg_td_pct
# defense score
df_clean['r_defense'] = df_clean.r_avg_opp_sig_str_pct * df_clean.r_avg_opp_sub_att * df_clean.r_avg_opp_td_pct
df_clean['b_defense'] = df_clean.b_avg_opp_sig_str_pct * df_clean.b_avg_opp_sub_att * df_clean.b_avg_opp_td_pct
# experience score
df_clean['r_experience'] = df_clean.r_total_fights * df_clean.r_total_rounds_fought
df_clean['b_experience'] = df_clean.b_total_fights * df_clean.b_total_rounds_fought
# +
# parse date/time into separate columns
df_clean['date'] = pd.to_datetime(df_raw.date)
df_clean['day'] = df_clean.date.dt.day
df_clean['month'] = df_clean.date.dt.month
df_clean['year'] = df_clean.date.dt.year
# +
# create binary winner columns
df_dum_win = pd.concat([df_clean, pd.get_dummies(df_clean.winner, prefix='win', dummy_na=True)], axis=1)
# combine dummy columns to raw dataset
df_clean = pd.concat([df_dum_win, df_raw], axis=1)
# convert columns to lowercase
df_clean.columns = map(str.lower, df_clean.columns)
# +
# drop duplicate columns
df_clean = df_clean.loc[:,~df_clean.columns.duplicated()]
# drop null rows
df_clean.dropna(axis=0, inplace=True)
# ----- OPTIONAL ----- comment or un-comment the code to turn and turn off and run the cell again
# drop all rows before 2011 for lack of detailed stats
df_clean = df_clean[(df_clean.year > 2011) & (df_clean.year < 2020)]
# ----- OPTIONAL ----- comment or un-comment the code to turn and turn off and run the cell again
#drop all weight classes except featherweight(145 lb), lightweight(155 lb),
# welterweight(170 lb), and middleweight(185 lb)
#df_clean = df_clean.loc[df_clean.weight_class.isin(['Featherweight', 'Lightweight', 'Welterweight', 'Middleweight'])]
# -
# create new dataframe of key variables and rearrange by similarity groups
df_keys = df_clean[['winner',
'win_red',
'r_experience',
'r_streak',
'r_win_pct',
'r_finish_pct',
'r_decision_pct',
'r_offense',
'r_defense',
'b_experience',
'b_streak',
'b_win_pct',
'b_finish_pct',
'b_decision_pct',
'b_offense',
'b_defense',
]]
# basic size and shape of newly created clean dataframe
df_clean.info()
# #### Observations
# - The new clean dataset contains approximately 200 columns and 3100 rows
# sample view of newly created clean dataframe
df_clean.head()
# sample view of newly created subset of key variables dataframe
df_keys.info()
# #### Observations
# - The dataset of key variables for modeling has 16 columns and approximately 3300 rows
# - All feature variables are continuous floats
# - Target variable option #1: 'winner' as categorical (red or blue)
# - Target variable option #2: 'win_red' as numerical (1 for red win, 0 for red loss)
# sample view of newly created subset of key variables
df_keys.head()
# -----------------------
#
# # Exploratory Data Analysis
#
# 1. Basic stats
# 2. Bar plot
# > - wins (red vs blue)
# 3. Count plot
# > - weight classes
# 4. Distribution plots
# > - total fights (red vs blue)
# > - total wins (red vs blue)
# > - age (red vs blue)
# 5. Pair plots
# > - offense and defense (red vs blue) compared to red wins
# > - win % and finish % (red vs blue) compared to red wins
# 6. Correlation matrix
# basic statistics
df_keys.describe()
# #### Observations
# - Except for the 'experience' and 'streak' variables, all standard deviations are small. Outliers should be checked for in these two variables.
# - All of the variables besides 'experience' seem to contain zeros as their minimum. Something does not seem right here. Again, outliers should be investigated.
# +
# bar chart red vs blue total wins
plt.figure(figsize=(8,4))
sns.countplot(df_clean.winner)
plt.title('Total Win Count')
plt.xlabel('Winner')
plt.ylabel('Count')
plt.show()
# total win count
count = df_clean.winner.value_counts()
print('Total Win Count')
print('')
print(count)
print('')
print('')
# win %
print('Win %')
print('')
print(count / (count[0] + count[1]))
# -
# #### Observations
#
# - Out of approximately 3100 total fights, the red corner has won just under 2000 of them, or 64%.
# - The red corner is historically reserved for the favored, more experienced of the two fighters, so this makes sense.
# - The above chart is simple but important. Remember our goal is to predict the outcome of a fight. Also remember that the red corner is typically the favored, more experienced fighter. This means that if your only strategy for predicting fights was always choosing the red corner, you would be correct 64% of the time. This number is now our baseline score to beat. If any of the machine learning models score better than 64% accuracy, it could be considered a success. Anything below 64% and the models are worthless because we could always fall back on choosing red every time.
# +
# countplot of weight classes
plt.figure(figsize=(8,4))
sns.countplot(df_clean.weight_class, order=df_clean.weight_class.value_counts().index)
plt.title('Total Fight Count by Weight Class')
plt.xlabel('Weight Class')
plt.xticks(rotation='vertical')
plt.ylabel('Fight Count')
plt.show()
# print totals
print(df_clean.weight_class.value_counts())
# -
# #### Observations
# - Lightweight (155 lbs) and welterweight (170 lbs) are the most common weight classes and are almost equal in count at approximately 560 each out of 3100 total fights, occuring 36% of the time.
# - Featherweight (145 lbs) and middleweight (185 lbs) are the next two runnerups and also almost equal each other in count at approximately 375 fights each out of 3100 total fights, occuring 24% of the time.
# - The featherweight, lightweight, welterweight, and middleweight divisions account for approximately 60% of all fights.
# +
# distributions comparison
# total fights distribution
fig, ax = plt.subplots(1, figsize=(8, 4))
sns.distplot(df_clean.b_total_fights)
sns.distplot(df_clean.r_total_fights)
plt.title('Total Fights Distribution')
plt.xlabel('# Fights')
plt.legend(labels=['Blue','Red'], loc="upper right")
# wins distribution
fig, ax = plt.subplots(1, figsize=(8, 4))
sns.distplot(df_clean.b_wins)
sns.distplot(df_clean.r_wins)
plt.title('Wins Distribution')
plt.xlabel('# Wins')
plt.legend(labels=['Blue','Red'], loc="upper right")
# age distribution
fig, ax = plt.subplots(1, figsize=(8, 4))
sns.distplot(df_clean.b_age)
sns.distplot(df_clean.r_age)
plt.title('Age Distribution')
plt.xlabel('Age')
plt.legend(labels=['Blue','Red'], loc="upper right")
plt.show()
# calculate red and blue mean and mode ages
r_mean_age = df_clean.r_age.mean()
r_mode_age = df_clean.r_age.mode()
b_mean_age = df_clean.b_age.mean()
b_mode_age = df_clean.b_age.mode()
# print red and blue mean ages
print('Mean Fighter Age')
print('')
print('Red: ', (r_mean_age))
print('Blue: ', (b_mean_age))
# -
# #### Observations
# - The red and blue corner distributions have similar shapes to each other in their respective graphs.
# - There are more blue fighters with < 5 wins than red fighters, and there are more red fighters with > 5 wins than blue fighters. This makes sense, as historically the red corner has been reserved for the favored, more experienced fighter.
# - The mean age of red and blue are essentially equal at 30 years old. This is surprising. I would have expected the red corner to have a slightly higher mean age since the red corner is typically reserved for the favored, more experienced fighter.
# +
# pairplot red vs blue offense and defense
sns.pairplot(df_keys[['winner',
'b_offense',
'r_offense',
'b_defense',
'r_defense',
]], hue='winner')
plt.show()
# -
# #### Observations
# - The above pairplot reveals quite a few outliers that should be investigated.
# +
# pairplot red vs blue win % and finish % compared to red wins
sns.pairplot(df_keys[['winner',
'r_win_pct',
'b_win_pct',
'r_finish_pct',
'b_finish_pct',
]], hue='winner')
plt.show()
# +
# key variables correlation
corr = df_keys.corr()
# generate mask for upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# plot heatmap correlation
plt.figure(figsize=(25,10))
sns.heatmap(corr, mask=mask, annot=True, cbar_kws={"shrink": .75}, center=0)
plt.show()
# -
# #### Observations
# - Surprisingly, none of the variables seem to be linearly correlated with the target variable. This does not mean we can rule out non-linear correlation at the moment.
# - Some variables are correlated with each other. Most notably, 'win %' and 'finish %'. This makes sense since if a fighter has a higher 'finish %' it almost guarantees a relatively high 'win %'. It is probably not common to see a fighter with a high 'win %' and a very low 'finish %'. The UFC greatly values the entertainment factor when putting on shows, not just the caliber of fighters. A fighter with a high win % but always goes to decision typically gets cut from the promotion. It is not enough to win fights; one is also required to be entertaining as well.
# --------------------------
#
# # Supervised Learning
# 1. Define and preprocess data
# 2. Support vector machines
# 3. Naive Bayes
# 4. Logistic regression
# 5. Decision tree/random forest
# import libraries
import scipy
import sklearn
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import svm
from sklearn import linear_model
from sklearn import tree
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import mean_absolute_error
from statsmodels.tools.eval_measures import mse, rmse
from statsmodels.tsa.stattools import acf
# +
# define and preprocess data before modeling
# target variable and feature set
Y = df_keys.win_red
X = df_keys[['r_experience',
'r_win_pct',
'r_finish_pct',
'r_offense',
'r_defense',
'b_experience',
'b_win_pct',
'b_finish_pct',
'b_offense',
'b_defense'
]]
# train/test split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=123)
# define standard scaler
sc = StandardScaler()
# fit standard scaler
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# -
# ### Why Support Vector Machines
# - Common algorithm for predicting a categorical outcome, which is our goal
# - Good at finding solutions in non-linear data, which is possible with in this case
# +
# support vector machines
# fit model
model = svm.SVC()
results = model.fit(X_train, y_train)
# predict
y_preds = results.predict(X_test)
# print results
print('Train Set Observations: {}'.format(X_train.shape[0]))
print('Test Set Observations: {}'.format(X_test.shape[0]))
print('')
print('')
print('Support Vector Machine Accuracy Score')
print('')
print('Train Set: ', accuracy_score(y_train, model.predict(X_train)))
print('Test Set: ', accuracy_score(y_test, model.predict(X_test)))
# -
# #### Observations
# - Train and test set are similar at approximately 64% and 68%, which indicates the model is not overfitting.
# - Not a particularly high accuracy score, but so far it performs better than the baseline strategy of always choosing the red corner to win (64% accuracy).
# ### Why Naive Bayes
# - Common classification algorithm for predicting a categorical outcome, which is our goal
# - Assumes independent variables, probably not the case with this dataset
# - Curiosity without high hopes
# +
# naive bayes
# fit to model
model = GaussianNB()
model.fit(X_train, y_train)
# print results
print('Naive Bayes Accuracy Score')
print('')
print('Train Set: ', accuracy_score(y_train, model.predict(X_train)))
print('Test Set: ', accuracy_score(y_test, model.predict(X_test)))
# -
# #### Observations
# - Train and test set accuracy scores are similar, but 43% is a terrible score. You could achieve far better results by simply choosing the red corner to win every fight (64% accuracy).
# - Naive Bayes may not be the best option here
# ### Why Logistic Regression
# - Common algorithm for predicting a categorical outcome, which is our goal
# - Good at predicting the probability of binary outcomes, which is our goal
# +
# logistic regression
# fit model
model = LogisticRegression()
model.fit(X_train, y_train)
print('Logistic Regression Accuracy Score')
print('')
print('Train Set: ', accuracy_score(y_train, model.predict(X_train)))
print('Test Set: ', accuracy_score(y_test, model.predict(X_test)))
# -
# #### Observations
# - Test set accuracy increased by 5% over the train set.
# - Logistic regression and support vector machine have performed the best so far at 68%, beating our baseline score of 64% accuracy.
# ### Why Decision Tree and Random Forest
# - Common algorithm for predicting a categorical outcome, which is our goal
# - Good at learning non-linear relationships, which our dataset could potentially possess
# +
# decision tree
tree_model = DecisionTreeClassifier()
rf_model = RandomForestClassifier()
# fit models
tree_model.fit(X_train, y_train)
rf_model.fit(X_train, y_train)
# print results
print('Decision Tree Accuracy Score')
print('')
print('Train Set: ', accuracy_score(y_train, tree_model.predict(X_train)))
print('Test Set: ', accuracy_score(y_test, tree_model.predict(X_test)))
print('')
print('')
print('Random Forest Accuracy Score')
print('')
print('Train Set: ', accuracy_score(y_train, rf_model.predict(X_train)))
print('Test Set: ', accuracy_score(y_test, rf_model.predict(X_test)))
# -
# #### Observations
# - Accuracy for both decision tree and random forest train set were very high at 99% and 98%, respectively. This suggests the model could be overfitting. It performs well on the known training data, but severely underperforms on the new test set.
# - Accuracy for both test sets fell dramatically to 57% and 56%.
# - The train and test sets could possibly have different distributions.
# # Summary and Conclusion
#
# After loading the two original datasets, we discovered that there were some distinct variables in each, and we needed some from both. After joining the datasets, duplicate variables were dropped, which left a clean new set to work with. New variables were then created and combined. Finally, a subset dataframe of key variables was created for modeling.
#
# Next came exploratory data analysis. We found out that the red corner wins on average 64% of the time. We chose this as our baseline prediction score to beat. Some other interesting facts arose throughout this phase of the process:
# - Total fight count is dominated by just four weight classes: featherweight (145 lbs), lightweight (155 lbs), welterweight (170 lbs), and middleweight (185 lbs), and account for 60% of all fights.
# - Mean fighter age is 30 years old, which was a bit surprising to learn. Most people think of fighting as a young man's game. This result appears to refute that statement.
# - No single variable was found to be highly linearly correlated with the target variable. This was very surprising to find out. Professional fighting is a volatile sport. If red consistently wins greater than 50% there should presumably be some combination of features that puts them at a 64% win rate.
#
# Our goal of this project was to predict the outcome of UFC fights using supervised learning. Four models were used: support vector machines, naive Bayes, logistic regression, and decision tree/random forest. Both naive Bayes and decision tree/random forest scored terribly and far below the baseline-to-beat of 64% accuracy. Support vector machines and logistic regression scored roughly equal to 64% on their train sets but scored on the test set with 68% accuracy.
#
# A score of 68% beats our initial baseline accuracy score of 64%. A small success but a success nonetheless. I believe this score could be improved by implementing the following strategy:
# 1. Address and correct outliers
# 2. Further refining or combining of features with a focus on win/finish %, height/reach advantage, and fighting style (striker, wrestler)
# 3. Identifying the "typical" fighter profile in more detail. So far we know it is a male approximately 30 years old who fights in one of the four main weight classes.
# 4. Deeper exploratory data analysis to discover not-so-obvious correlations and connections between variables
# 5. Further model parameter tuning and experimenting with new models
#
# The main takeaway is this: it is theoretically possible to predict the winner of UFC fights with better accuracy than either pure chance or by choosing the red corner to win every time. However, professional fighting is an extremely volatile sport. Even a champion on a winning streak can lose from a split second minor mistake. Fighters commonly perform injured, severely impairing their potential while highlighting their opponent who may not warrant it. Even with unlimited amounts of data, it is entirely possible that predicting fights is a fool's errand.
# # Acknowledgments
# - <NAME> and his Kaggle dataset (https://www.kaggle.com/rajeevw/ufcdata)
# - <NAME> (Thinkful mentor)
# - Any of you who let me know about an error or typo in any of the above (for real, it would be appreciated)
| Capstone 2 - Predicting UFC Fights with Machine Learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # [DLPA17] [Deep Learning: A Practitioner's Approach](https://www.amazon.com/Deep-Learning-Practitioners-Josh-Patterson/dp/1491914254)
#
# * Practical; good intro to ML theory
# * Not *really* NN specific; if you understand one classifier deeply, it's easier to pick up others
# # [ML97] [Machine Learning](https://www.amazon.com/Learning-McGraw-Hill-International-Editions-Computer/dp/0071154671/)
# * Tom Mitchell's classic; academic focus
# # [FBP17] [Facebook Prophet](https://peerj.com/preprints/3190.pdf)
# * [Docs for R and Python](https://facebook.github.io/prophet/docs/installation.html)
#
# ## Abstract
# Forecasting is a common data science task that helps organizations with capacity
# planning, goal setting, and anomaly detection. Despite its importance, there are
# serious challenges associated with producing reliable and high quality forecasts โ
# especially when there are a variety of time series and analysts with expertise in
# time series modeling are relatively rare. To address these challenges, we describe
# a practical approach to forecasting โat scaleโ that combines configurable models
# with analyst-in-the-loop performance analysis. We propose a modular regression
# model with interpretable parameters that can be intuitively adjusted by analysts
# with domain knowledge about the time series. We describe performance analyses
# to compare and evaluate forecasting procedures, and automatically flag forecasts for
# manual review and adjustment. Tools that help analysts to use their expertise most
# effectively enable reliable, practical forecasting of business time series.
# # [GCP16] [Machine Learning with Financial Time Series Data](https://cloud.google.com/solutions/machine-learning-with-financial-time-series-data)
# * Predict & model SP500 closes based on markets that close earlier
#
#
# # [AS] Algorithm selection
#
# * [AS2]: [Which machine learning algorithm should I use?](https://blogs.sas.com/content/subconsciousmusings/2017/04/12/machine-learning-algorithm-use/)
# * [AS1]: [Choosing the right esimtator (scikit)](http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html)
# # [RMLT17] [FINTECH: CAN MACHINE LEARNING BE APPLIED TO TRADING?](http://www.newsweek.com/business-technology-trading-684303)
#
# ## Article Highlights
# ### (Q-Learning) able to arbitrage known advantage
# ### Overview
# Ritter explained: "I was really trying to answer the question, does machine learning have any application to trading at all, or no application; sort of a binary question. Can machine learning be applied to the problem of trading?
#
# "I reasoned that in a system that I know admits a profitable trading strategy, because I constructed it that way, can the machine find it. If the answer to that is no, then what chance would it have in the real world where you don't even necessarily know that a profitable strategy exists in the space you are looking at.
#
# "So luckily the answer to that was yes. In the system where I knew there was a profitable opportunity, the machine did learn to find it. So that then allows for further study."
#
# A question on many people's lips when it comes to machine learning and AI-driven automation concerns the role that will be left for humans.
#
# ### Human in the loop
# Ritter takes a sober view of this: "I think it's important to know what human beings are good at and what they are not good at. So human beings are typically not good at knowing what their true costs are.
#
# "For example, we all know that doing a large trade or a trade that's a large fraction of the volume can have an impact in the market; you'll move the price as a result of your trading. Well, how much?
#
# "Humans are not really good at answering that question. That's a better question for a mathematical model. So we sometimes get asked the question, does this mean humans are done? I think the answer to that is definitely not.
#
# "Humans are probably good at coming up with the idea for a new strategy. Take the technology in my paper, for example: where should we apply it? What should we apply it to? What should the signal that drives the trade be, if any?
#
# "But humans are not good at interacting with the microstructure; humans are not good at looking at a bid and an offer and saying, I think if I execute this many basis points here's what my impact will be. That kind of decision should be left to a machine really."
#
#
# ## [Paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3015609)
#
# ### Abstract
#
# Abstract. In multi-period trading with realistic market impact, determining
# the dynamic trading strategy that optimizes expected utility
# of final wealth is a hard problem. In this paper we show that, with an
# appropriate choice of the reward function, reinforcement learning techniques
# (specifically, Q-learning) can successfully handle the risk-averse
# case. We provide a proof of concept in the form of a simulated market
# which permits a statistical arbitrage even with trading costs. The
# Q-learning agent finds and exploits this arbitrage.
#
# ### Conclusion
# According to neoclassical finance theory, no investor should hold a portfolio
# (or follow a dynamic trading strategy) that does not maximize expected
# utility of final wealth, E[u(wT )]. The concave utility function u expresses the
# investorโs risk aversion, or preference for a less risky portfolio over a more
# risky one with the same expected return. Only a risk-neutral investor would
# maximize E[wT ]. The main contribution of this paper is that we show how
# to handle the risk-averse case in a model-free way using Q-learning. We provide
# a proof of concept in a controlled numerical simulation which permits
# an approximate arbitrage, and we verify that the Q-learning agent finds and
# exploits this arbitrage.
# It is instructive to consider how this differs from approaches such as
# Gหarleanu and Pedersen (2013); to use their approach in practice requires
# three models: a model of expected returns, a risk model which forecasts
# the variance, and a (pre-trade) transaction cost model. The methods of
# Gหarleanu and Pedersen (2013) provide an explicit solution only when the
# cost model is quadratic. By contrast, the methods of the present paper can,
# in principle, be applied without directly estimating any of these three models,
# or they can be applied in cases where one has an asset return model,
# but one wishes to use machine learning techniques to infer the cost function
# and the optimal strategy.
# # Interpreting models
# * [Ideas on interpreting machine learning](https://www.oreilly.com/ideas/ideas-on-interpreting-machine-learning)
# * [Making AI Interpretable with Generative Adversarial Networks](https://medium.com/square-corner-blog/making-ai-interpretable-with-generative-adversarial-networks-766abc953edf)
# # [FTS17] [FINANCIAL TIME SERIES FORECASTING โ A MACHINE LEARNING APPROACH](https://pdfs.semanticscholar.org/7955/af1f5b8226ff13f915bead877c181a2917dc.pdf)
#
# ## Survey paper. Includes annotated bibliography
# * Statistical methods
# * Regression: positive correlation between news to stock opens
# * ARIMA: autoregressive integrated moving average
# * ML (80+% is possible)
# * SVMs (more than NNs) successful at predicting price direction
# * Text-based DTs
# * RNNs and NEAT
# * ANNs
# * Engineer ~70 custom features (indices, technical indicators, etc.)
# * Skelarn feature selection (SelectKBest)
# * Used these classifiers (~80%, better on longer timescales)
# * **K-Nearest Neighbors**
# * **Logistic Regression**
# * Naive Bayes
# * Support Vector Classifier (SVC)
# * Decision Trees
# * Random Forest
# * Multi Layer Perceptron (MLP)
# * Ada Boost
# * QDA
# * These regression algorithms (.016 RMSE)
# * Linear Regressor
# * Support Vector Regressor
# * Decision Tree Regressor
# * Ada Boost Regressor
# * Random Forest Regressor
# * K-Nearest Neighbors
# * Bagging Regressor
#
# # [NNAT17] [Neural networks for algorithmic trading: enhancing classic strategies](https://medium.com/machine-learning-world/neural-networks-for-algorithmic-trading-enhancing-classic-strategies-a517f43109bf)
# * Use NNs to improve EMA crossover signal quality
# # [LHIB] [The Inductive Biases of Various Machine Learning Algorithms](http://www.lauradhamilton.com/inductive-biases-various-machine-learning-algorithms)
# # [FSP17] [Financial Series Prediction: Comparison Between Precision of Time Series Models and Machine Learning Methods](https://arxiv.org/pdf/1706.00948.pdf)
#
# * TL;DR - SVMs outperform ARIMA (traditional time series model) and outperform DNNs
#
# ## Abstract
# Investors collect information from trading market and make investing decision based on collected
# information, i.e. belief of future trend of securityโs price. Therefore, several mainstream trend analysis
# methodology come into being and develop gradually. However, precise trend predicting has long been a
# difficult problem because of overwhelming market information. Although traditional time series models
# like ARIMA and GARCH have been researched and proved to be effective in predicting, their performances
# are still far from satisfying. Machine learning, as an emerging research field in recent years, has brought
# about many incredible improvements in tasks such as regressing and classifying, and itโs also promising
# to exploit the methodology in financial time series predicting. In this paper, the predicting precision of
# financial time series between traditional time series models ARIMA, and mainstream machine learning
# models including logistic regression, multiple-layer perceptron, support vector machine along with deep
# learning model denoising auto-encoder are compared through experiment on real data sets composed of
# three stock index data including Dow 30, S&P 500 and Nasdaq. The result show
# # [Neural networks for algorithmic trading. Volatility forecasting and custom loss functions](https://codeburst.io/neural-networks-for-algorithmic-trading-volatility-forecasting-and-custom-loss-functions-c030e316ea7e)
#
# ## Conclusions
# In this tutorial I tried to show several very important moments in time series forecasting:
#
# 1. While having a general goal for forecasting financial time series, we can transform our data in different way in order to work with better time seriesโโโwe couldnโt work adequately with prices and returns, but forecasting variability of returns works not bad at all!
#
# 1. You have to design loss functions very carefully depending on the problem and check different hypothesis. For returns giving a penalty for wrong sign is good idea, MSE on log values is better for volatility.
# # [GitHub BenjiKCF/Neural-Network-with-Financial-Time-Series-Data](https://github.com/BenjiKCF/Neural-Network-with-Financial-Time-Series-Data)
# # Libraries for hyperparameter tuning
# * http://scikit-learn.org/stable/modules/grid_search.html#grid-search
# * https://github.com/EpistasisLab/tpot
# * https://github.com/automl/auto-sklearn
# # Libraries for feature selection
#
# * feature selection http://scikit-learn.org/stable/modules/feature_selection.html
# # [SPNS] [STOCK TREND PREDICTION USING NEWS SENTIMENT ANALYSIS](https://arxiv.org/pdf/1607.01958.pdf)
#
# * text mining
# * RF: 90%, SVM 86%, 83%
# * Optimized # of classes at ~5 (from prior research)
#
# Then after comparing their
# results, Random Forest worked very well for all test cases ranging from 88% to 92% accuracy.
# Accuracy followed by SVM is also considerable around 86%. Naive Bayes algorithm
# performance is around 83%
# # [Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques](http://iopscience.iop.org/article/10.1088/1757-899X/226/1/012117/pdf)
#
# ## Abstract
# The aim of this paper was to study the correlation between crude palm oil
# (CPO) price, selected vegetable oil prices (such as soybean oil, coconut oil, and olive
# oil, rapeseed oil and sunflower oil), crude oil and the monthly exchange rate.
# Comparative analysis was then performed on CPO price forecasting results using the
# machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude
# oil prices and monthly exchange rate data from January 1987 to February 2017 were
# utilized. Preliminary analysis showed a positive and high correlation between the CPO
# price and soy bean oil price and also between CPO price and crude oil price.
# Experiments were conducted using multi-layer perception, support vector regression
# and Holt Winter exponential smoothing techniques. The results were assessed by using
# criteria of root mean square error (RMSE), means absolute error (MAE), means
# absolute percentage error (MAPE) and Direction of accuracy (DA). Among these three
# techniques, support vector regression(SVR) with Sequential minimal optimization
# (SMO) algorithm showed relatively better results compared to multi-layer perceptron
# and Holt Winters exponential smoothing method.
#
# ## Conclusions
# Support vector regression, multi-layer perceptron and Holt Winter exponential smoothing were
# utilized in this study to forecast the CPO price using multivariate time series. The prediction results
# exhibits that the support vector regression had higher predicted accuracy compared to multi-layer
# perceptron and Holt Winter exponential smoothing methods. In this study nine attributes were
# chosen and the results of this analysis showed the strength of support vector regression in
# forecasting multivariate time series of CPO price. In future, more relevant attributes could be included
# to improve forecasting of the CPO price. Feature selection method also can be added in future studies
# in order to improve accuracy of CPO price forecasting.
#
#
# # [Machine Learning Strategies for Time Series Forecasting](http://www.ulb.ac.be/di/map/gbonte/ftp/time_ser.pdf)
#
# ## Slides
#
# time series [25].
# In the last two decades, machine learning models have drawn attention and
# have established themselves as serious contenders to classical statistical models
# in the forecasting community [1,43,61]. These models, also called black-box or
# data-driven models [40], are examples of nonparametric nonlinear models which
# use only historical data to learn the stochastic dependency between the past and
# the future. For instance, Werbos found that Artificial Neural Networks (ANNs)
# outperform the classical statistical methods such as linear regression and BoxJenkins
# approaches [59,60]. A similar study has been conducted by Lapedes and
# Farber [33] who conclude that ANNs can be successfully used for modeling and
# forecasting nonlinear time series. Later, other models appeared such as decision
# trees, support vector machines and nearest neighbor regression [29,3]. Moreover,
# the empirical accuracy of several machine learning models has been explored in a
# number of forecasting competitions under different data conditions (e.g. the NN3,
# NN5, and the annual ESTSP competitions [19,20,34,35]) creating interesting
# scientific debates in the area of data mining and forecasting [28,45,21].
#
# # [Time series prediction using SVMs: A Survey](https://pdfs.semanticscholar.org/4092/7c5d81988a1151639fad150cbc74f64e0d68.pdf)
#
# # [FTML10] [Financial Time Series Forecasting with Machine Learning Techniques: A Survey](http://epublications.bond.edu.au/cgi/viewcontent.cgi?article=1113&context=infotech_pubs)
#
# ## Abstract
#
# Stock index forecasting is vital for making informed investment decisions. This
# paper surveys recent literature in the domain of machine learning techniques and artificial
# intelligence used to forecast stock market movements. The publications are categorised
# according to the machine learning technique used, the forecasting timeframe, the input
# variables used, and the evaluation techniques employed. It is found that there is a consensus
# between researchers stressing the importance of stock index forecasting. *Artificial Neural
# Networks (ANNs)* are identified to be the dominant machine learning technique in this area.
# We conclude with possible future research directions
#
# ### Time frame
# 
#
# ### ML Algorithms
# 
#
# ### Model Inputs
# 
#
# ### Evaluation
# 
#
#
# ## Conclusion
#
# Selecting the right input variables is very important for machine learning techniques.
# Even the best machine learning technique can only learn from an input if there is
# actually some kind of correlation between input and output variable.
# Table 3 shows that over 75% of the reviewed papers rely in some form on
# lagged index data. The most commonly used parameters are daily opening, high, low
# and close prices. Also used often are technical indicators which are mathematical
# transformations of lagged index data. The most common technical indicators found in
# the surveyed literature are the simple moving average (SMA), exponential moving
# average (EMA), relative strength index (RSI), rate of change (ROC), moving average
# convergence / divergence (MACD), Williamโs oscillator and average true range
# (ATR)
# # [SAN11] [CS224N Final Project: Sentiment analysis of news articles for financial signal prediction](https://nlp.stanford.edu/courses/cs224n/2011/reports/nccohen-aatreya-jameszjj.pdf)
#
#
# ## Abstract
# Due to the volatility of the stock market,
# price fluctuations based on sentiment and news reports
# are common. Traders draw upon a wide variety of
# publicly-available information to inform their market
# decisions. For this project, we focused on the
# analysis of publicly-available news reports with the
# use of computers to provide advice to traders for
# stock trading. We developed Java data processing code
# and used the Stanford Classifier to quickly analyze
# financial news articles from The New York Times
# and predict sentiment in the articles. Two approaches
# were taken to produce sentiments for training and
# testing. A manual approach was tried using a human
# to read the articles and classifying the sentiment, and
# the automatic approach using market movements was
# used. These sentiments could be used to predict the
# daily market trend of the Standard & Poorโs 500 index
# or as inputs to a larger trading system"
#
# ## Conclusions
#
# "This work has demonstrated the difficulty of
# extracting financially-relevant sentiment information
# from news sources and using it as a market
# predictor. While news articles remain a useful sort
# of information for determining overall market sentiment,
# they are often difficult to analyze and, since
# they are often focused on conveying nuanced information,
# may contain mixed messages. Furthermore,
# the success of this model relies largely upon the
# exploitation of market inefficiencies, which often
# take a great deal of work to identify if they are to
# be reliable.
# Thus, while our system provides interesting analysis
# of market sentiment in hindsight, it is less effective
# when used for predictive purposes. Nonetheless,
# given the coarse signals produced by our
# model, it is important to note that it is not necessary
# to trade directly using the values produced from
# our model. The sentiment results we produce could
# instead be an input to another trading system or
# simply be given to human traders to aid their
# judgments."
#
#
# ## NYT Corpus at Google
# https://research.googleblog.com/2014/08/teaching-machines-to-read-between-lines.html
# # [ECML10] [An Empirical Comparison of Machine Learning Models for Time Series Forecasting](https://www.researchgate.net/profile/Nesreen_Ahmed3/publication/227612766_An_Empirical_Comparison_of_Machine_Learning_Models_for_Time_Series_Forecasting/links/00b7d526c47935d41b000000/An-Empirical-Comparison-of-Machine-Learning-Models-for-Time-Series-Forecasting.pdf)
#
# Includes valuable notes on preprocessing.
#
# ## Abstract
# In this work we present a large scale comparison study for the major machine
# learning models for time series forecasting. Specifically, we apply the models on
# the monthly M3 time series competition data (around a thousand time series).
# There have been very few, if any, large scale comparison studies for machine
# learning models for the regression or the time series forecasting problems, so we
# hope this study would fill this gap. The models considered are multilayer perceptron,
# Bayesian neural networks, radial basis functions, generalized regression
# neural networks (also called kernel regression), K-nearest neighbor regression,
# CART regression trees, support vector regression, and Gaussian processes. The
# study reveals significant differences between the different methods. The best
# two methods turned out to be the multilayer perceptron and the Gaussian process
# regression. In addition to model comparisons, we have tested different
# preprocessing methods and have shown that they have different impacts on the
# performance
#
# ## Conclusions (abbr.)
# The two best models turned out to be MLP (ANN) and GP (Gaussian Processes). This is an interesting result, as GP up until few years ago has not been a widely used or studied method. We believe that there is still room
# for improving GP in a way that may positively reflect on its performance
# # [A Comprehensive Review of Sentiment Analysis of Stocks](https://pdfs.semanticscholar.org/42b1/0e23482cd2a0dcbd4c9ac1295620d4c80be5.pdf)
#
# The algorithms, Naive Bayes and Support Vector Machine (SVM) are basic machine learning algorithms currently used however hybrid versions are upcoming
# # [Time Series Forecasting as Supervised Learning](https://machinelearningmastery.com/time-series-forecasting-supervised-learning/)
# * Multivariate vs univariate
# * One-step vs multi-step forecast
#
#
#
# # [Application of Machine Learning Techniques to Trading]( https://medium.com/auquan/https-medium-com-auquan-machine-learning-techniques-trading-b7120cee4f05)
#
# ```
# DIRECTION: identify if an asset is cheap/expensive/fair value
# ENTRY TRADE: if an asset is cheap/expensive, should you buy/sell it
# EXIT TRADE: if an asset is fair priced and if we hold a position in that asset(bought or sold it earlier), should you exit that position
# PRICE RANGE: which price (or range) to make this trade at
# QUANTITY: Amount of capital to trade(example shares of a stock)
# ```
| References.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''base'': conda)'
# name: python376jvsc74a57bd0b3ba2566441a7c06988d0923437866b63cedc61552a5af99d1f4fb67d367b25f
# ---
# # Find patterns by site
#
# - These errors are based on A and C: the same errors but repeated across multiple buildings at the same time.
# - In range errors (B): 0.1 <= `rmsle_scaled` <= 0.3
# - multiple buiding in range long term (B1): time_consequitve_error > 3 days
# - multiple building in range mid term (B2): 1 day < time_consequitve_error <= 3 days
# - multiple building in range short term (B3): time_consequitve_error = 1
# - multiple building in range fluctuatiom (B4):
# - Out of range errors (D): `rmsle_scaled` > 0.3
# - multiple buiding out of range long term (D1): time_consequitve_error > 3 days
# - multiple building out of range mid term (D2): 1 day < time_consequitve_error <= 3 days
# - multiple building out of range short term (D3): time_consequitve_error = 1
# - multiple building out of range fluctuatiom (D4):
# +
import sys
sys.path.append("..\\source\\")
import utils as utils
import glob
# Data and numbers
import pandas as pd
import numpy as np
import datetime as dt
from sklearn.preprocessing import MinMaxScaler
# Visualization
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import ticker
import matplotlib.dates as mdates
# %matplotlib inline
# What do we say to python warnings? NOT TODAY
import warnings
warnings.filterwarnings("ignore")
# -
path_data = "..\\data\\processed\\summary\\"
path_leak = "..\\data\\leaked\\"
path_meta = "..\\data\\original\\metadata\\"
path_arrays = "..\\data\\processed\\arrays\\"
path_res = "..\\results\\by_bdg\\"
# # Functions
def heatmaps_comp(df,df_bool):
"""This function plots two heatmaps side by side:
- The original heatmap\n
- A heatmap where only values over ยดtresh_errorยด are colored\n
df: original values for a meter-site.
df_bool: masked df for that meter-site, where only values over `tresh_error` are 1 (otherwise, 0)\n
"""
fig, axes = plt.subplots(1, 2, sharex = True, sharey=True, figsize=(16,8))
axes = axes.flatten()
# Get the data
y = np.linspace(0, len(df), len(df)+1)
x = pd.date_range(start='2017-01-01', end='2018-12-31')
cmap = plt.get_cmap('YlOrRd')
for i,data in enumerate([df_bool,df]):
# Plot
ax = axes[i]
data = data
qmesh = ax.pcolormesh(x, y, data, cmap=cmap, rasterized=True, vmin=0, vmax=1)
# Axis
plt.locator_params(axis='y', nbins=len(list(data.index)) + 1)
ax.axis('tight')
ax.xaxis_date() # Set up as dates
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b-%y')) # set date's format
ax.set_yticklabels(list(df_bool.index)) # omit building ID on y axis
# Color bar
cbar = fig.colorbar(qmesh, ax=ax)
cbar.set_label('Min-Max scaled RMSLE')
fig.suptitle(f"{meter} - site {site}", y = 1.015, fontsize=16)
plt.tight_layout()
plt.subplots_adjust(bottom=0.12)
#plt.pcolormesh(df, cmap='YlOrRd', rasterized=True)
return fig
def error_MB1(file, meter, t_bdg, t_days, plot=False):
"""
Calculates errors in multiple buildings.
file: path to single building errors type 1 array.\n
t_bdg: threshold for the proportion of buildings with high error values in a date to consider it an error.\n
t_days: days threshold to consider it an error type 1.\n
returns:
A sparse dataframe, with 1 on those values over `t_error`.\n
"""
# Load data
df = pd.read_csv(file).set_index("building_id")
# Replace missings with 0
df_bool = df.replace(np.nan,0)
# Create empty df to assign errors
df_bdg = df[df<0]
# Sum values for each date
by_date = pd.DataFrame(df_bool.sum()).reset_index().rename(columns={"index":"date",0:"sum_val"})
# Filter dates that has high error in more than t_bdg% of buildings
results = by_date[by_date.sum_val > t_bdg*len(df_bool.index)]
# error_type those dates in df_empty
df_bdg.loc[:,list(results.date)] = df_bool.loc[:,list(results.date)]
### GET HIGH ERRORS BY BDG ###
dfs = []
df_bdg = df_bdg.T
# Create empty df to assign errors
df_empty = df[df<0]
for col in df_bdg.columns:
# Select one building
df1 = df_bdg[[col]]
# Tag groups of consequtive equal numbers
df1['grp'] = (df1[col] != df1[col].shift()).cumsum()
# Filter values == 1 (error higher than t_error)
df1 = df1.loc[df1[col] == 1,].reset_index().rename(columns={"index":"date"}, index={"building_id":"index"})[["date","grp"]]
# Add bdg number
df1["building_id"] = col
# Get buildings with high error during period longer than t_days
by_group = df1[["date","grp"]].groupby("grp").count().reset_index() # group
groups = list(by_group.loc[by_group.date > t_days, "grp"]) # get list of groups
mt = df1[df1.grp.isin(groups) == True] # filter and get only those groups
mt["error"] = 1
# Append to list
dfs.append(mt)
# Concat all
df2 = pd.concat(dfs)
# Pivot
df2 = df2.pivot(index="date",columns="building_id", values="error").T
# Replace on df_empty
df_empty.update(df2)
if plot == True:
fig = heatmaps_comp(df_bool,df_empty.replace(np.nan,0))
return df_empty
def error_MB2(file, meter, t_bdg, t_days, plot=False):
"""
Calculates errors in multiple buildings.
file: path to single building errors type 1 array.\n
t_bdg: threshold for the proportion of buildings with high error values in a date to consider it an error.\n
t_days: days threshold to consider it an error type 1.\n
returns:
A sparse dataframe, with 1 on those values over `t_error`.\n
"""
# Load data
df = pd.read_csv(file).set_index("building_id")
# Replace missings with 0
df_bool = df.replace(np.nan,0)
# Create empty df to assign errors
df_bdg = df[df<0]
# Sum values for each date
by_date = pd.DataFrame(df_bool.sum()).reset_index().rename(columns={"index":"date",0:"sum_val"})
# Filter dates that has high error in more than t_bdg% of buildings
results = by_date[by_date.sum_val > t_bdg*len(df_bool.index)]
# error_type those dates in df_empty
df_bdg.loc[:,list(results.date)] = df_bool.loc[:,list(results.date)]
### GET HIGH ERRORS BY BDG ###
dfs = []
df_bdg = df_bdg.T
# Create empty df to assign errors
df_empty = df[df<0]
for col in df_bdg.columns:
# Select one building
df1 = df_bdg[[col]]
# Tag groups of consequtive equal numbers
df1['grp'] = (df1[col] != df1[col].shift()).cumsum()
# Filter values == 1 (error higher than t_error)
df1 = df1.loc[df1[col] == 1,].reset_index().rename(columns={"index":"date"}, index={"building_id":"index"})[["date","grp"]]
# Add bdg number
df1["building_id"] = col
# Get buildings with high error during period longer than t_days
by_group = df1[["date","grp"]].groupby("grp").count().reset_index() # group
groups = list(by_group.loc[(by_group.date > 1) & (by_group.date <= t_days), "grp"]) # get list of groups
mt = df1[df1.grp.isin(groups) == True] # filter and get only those groups
mt["error"] = 1
# Append to list
dfs.append(mt)
# Concat all
df2 = pd.concat(dfs)
# Pivot
df2 = df2.pivot(index="date",columns="building_id", values="error").T
# Replace on df_empty
df_empty.update(df2)
if plot == True:
fig = heatmaps_comp(df_bool,df_empty.replace(np.nan,0))
return df_empty
def error_MB3(file, meter, t_bdg, plot=False):
"""
Calculates errors in multiple buildings.
file: path to single building errors type 1 array.\n
t_bdg: threshold for the proportion of buildings with high error values in a date to consider it an error.\n
t_days: days threshold to consider it an error type 1.\n
returns:
A sparse dataframe, with 1 on those values over `t_error`.\n
"""
# Load data
df = pd.read_csv(file).set_index("building_id")
# Replace missings with 0
df_bool = df.replace(np.nan,0)
# Create empty df to assign errors
df_bdg = df[df<0]
# Sum values for each date
by_date = pd.DataFrame(df_bool.sum()).reset_index().rename(columns={"index":"date",0:"sum_val"})
# Filter dates that has high error in more than t_bdg% of buildings
results = by_date[by_date.sum_val > t_bdg*len(df_bool.index)]
# error_type those dates in df_empty
df_bdg.loc[:,list(results.date)] = df_bool.loc[:,list(results.date)]
### GET HIGH ERRORS BY BDG ###
dfs = []
df_bdg = df_bdg.T
# Create empty df to assign errors
df_empty = df[df<0]
for col in df_bdg.columns:
# Select one building
df1 = df_bdg[[col]]
# Tag groups of consequtive equal numbers
df1['grp'] = (df1[col] != df1[col].shift()).cumsum()
# Filter values == 1 (error higher than t_error)
df1 = df1.loc[df1[col] == 1,].reset_index().rename(columns={"index":"date"}, index={"building_id":"index"})[["date","grp"]]
# Add bdg number
df1["building_id"] = col
# Get buildings with high error during period longer than t_days
by_group = df1[["date","grp"]].groupby("grp").count().reset_index() # group
groups = list(by_group.loc[by_group.date == 1, "grp"]) # get list of groups
mt = df1[df1.grp.isin(groups) == True] # filter and get only those groups
mt["error"] = 1
# Append to list
dfs.append(mt)
# Concat all
df2 = pd.concat(dfs)
# Pivot
df2 = df2.pivot(index="date",columns="building_id", values="error").T
# Replace on df_empty
df_empty.update(df2)
if plot == True:
fig = heatmaps_comp(df_bool,df_empty.replace(np.nan,0))
return df_empty
def error_MB4(file, meter, t_bdg, w, t_w, plot=False):
"""
Calculates errors in multiple buildings.
file: path to single building errors type 1 array.\n
t_bdg: threshold for the proportion of buildings with high error values in a date to consider it an error.\n
w: time window to check error.\n
t_w: proportion of error during time window.\n
returns:
A sparse dataframe, with 1 on those values over `t_error`.\n
"""
# Load data
df = pd.read_csv(file).set_index("building_id")
# Replace missings with 0
df_bool = df.replace(np.nan,0)
# Create empty df to assign errors
df_bdg = df[df<0]
# Sum values for each date
by_date = pd.DataFrame(df_bool.sum()).reset_index().rename(columns={"index":"date",0:"sum_val"})
# Filter dates that has high error in more than t_bdg% of buildings
results = by_date[by_date.sum_val > t_bdg*len(df_bool.index)]
# error_type those dates in df_empty
df_bdg.loc[:,list(results.date)] = df_bool.loc[:,list(results.date)]
### CHECK IF t_w PROPORTION IS EXCEED DURING w TIME PERIOD ###
dfs = []
df_bdg = df_bdg.T
for col in df_bdg.columns:
# Select one building
df1 = df_bdg[[col]].reset_index().rename(columns={"index":'date'}).fillna(0)
# Calculate rolling sum
df1["rol_sum"] = df1[col].rolling(w).sum()
# Get index of last row of windows, which sum is over threshold
idx = df1[df1["rol_sum"] > w*t_w].index
# Mark whole window
if len(idx) > 0:
for ix in list(idx):
i0 = ix-w
it = ix
df1.loc[i0:it,"mark"] = df1.loc[i0:it,col] #copy original values in window
else:
df1["mark"] = np.nan
# Rename and complete df
df1 = df1.set_index("date").drop([col,"rol_sum"],axis=1).rename(columns={"mark":col})
# Append
dfs.append(df1)
#Create errors df
df_error = pd.concat(dfs,axis=1).replace(0,np.nan).T
if plot == True:
fig = heatmaps_comp(df,df_error.replace(np.nan,0))
return df_error
# # Choose meter
meters = ["chilledwater","electricity","hotwater","steam"]
# # In range errors (B)
# ## B1
# + tags=[]
name = "B1"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_A1_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB1(file, meter, 0.33, 3, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
# -
# ## B2
name = "B2"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_A2_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB2(file, meter, 0.33, 3, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
# ## B3
name = "B3"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_A3_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB3(file, meter, 0.33, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
# ## B4
name = "B4"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_A4_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB4(file, meter, 0.33, 30, 0.1, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
# # Out of range errors (D)
# ## D1
name = "D1"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_C1_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB1(file, meter, 0.33, 3, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
# ## D2
name = "D2"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_C2_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB2(file, meter, 0.33, 3, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
# ## D3
name = "D3"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_C3_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB3(file, meter, 0.33, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
# ## D4
name = "D4"
for meter in meters:
#Get list of files
files = glob.glob(path_res + f"{meter}_C4_*.csv")
for file in files:
# Site id
site = file.split("\\")[-1].split("_")[-1].split(".")[0].split("site")[1]
print(f"{meter} - site {site}")
# Create df
df = error_MB4(file, meter, 0.33, 30, 0.1, plot=False)
# Save df
df.to_csv(f"{path_res}\\{meter}_{name}_site{site}.csv")
print()
| notebooks/06_find-patterns-by-site.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.0
# language: julia
# name: julia-1.7
# ---
# + [markdown] tags=[]
# # Review of Statistics
#
# This notebook shows some basic statistics needed for this course in financial econometrics. Details are in the first chapter of the lecture notes (pdf).
#
# It uses the `Statistics` package (built in) for descriptive statistics (averages, autocorrelations, etc) and the [Distributions.jl](https://github.com/JuliaStats/Distributions.jl) package for statistical distributions (pdf, cdf, etc). For more stat functions, see the [StatsBase.jl](https://github.com/JuliaStats/StatsBase.jl) package. (Not used here.)
# -
# ## Load Packages and Extra Functions
# +
using Printf, Statistics, DelimitedFiles, Distributions
include("jlFiles/printmat.jl") #prettier matrix printing
# +
using Plots, LaTeXStrings #packages for plotting and LaTeX
#pyplot(size=(600,400)) #use pyplot() or gr()
gr(size=(480,320))
default(fmt = :png) # :svg gives prettier plots
# -
# # Distributions
# ## Probability Density Function (pdf)
#
# The cells below calculate and plot pdfs of some distributions often used in econometrics. The Distributions package has many more distributions.
#
# ### A Remark on the Code
#
# Notice that Distributions package wants `Normal(ฮผ,ฯ)`, where `ฯ` is the standard deviation. However, the notation in the lecture notes is $N(\mu,\sigma^2)$. For instance, $N(0,2)$ from the lectures is coded as `Normal(0,sqrt(2))`.
# +
x = -3:0.1:3
xb = x[x.<=1.645] #pick out x values <= 1.645
pdfx = pdf.(Normal(0,1),x) #calculate the pdf of a N(0,1) variable
pdfxb = pdf.(Normal(0,1),xb)
p1 = plot( x,pdfx, #plot pdf
linecolor = :red,
linewidth = 2,
legend = nothing,
ylims = (0,0.4),
title = "pdf of N(0,1)",
xlabel = "x",
annotation = (1.1,0.3,text("the area covers 95%\n of the probability mass",:left,8)) )
plot!(xb,pdfxb,linecolor=:red,linewidth=2,legend=nothing,fill=(0,:red)) #plot area under pdf
display(p1)
# +
x = 0.0001:0.1:12
pdf2 = pdf.(Chisq(2),x) #pdf of Chisq(2)
pdf5 = pdf.(Chisq(5),x)
p1 = plot( x,[pdf2 pdf5],
linecolor = [:red :blue],
linestyle = [:solid :dash],
label = [L"\chi_{2}^{2}" L"\chi_{5}^{2}"],
title = L"\mathrm{pdf \ of \ } \chi_{n}^{2}",
xlabel = "x" )
display(p1)
# +
x = -3:0.1:3
pdfN = pdf.(Normal(0,1),x)
pdft50 = pdf.(TDist(50),x) #pdf of t-dist with 50 df
p1 = plot( x,[pdfN pdft50],
linecolor = [:red :blue],
linestyle = [:solid :dash],
label = [L"N(0,1)" L"t_{50}"],
ylims = (0,0.4),
title = L"\mathrm{pdf \ of \ N(0,1) \ and \ } t_{50}",
xlabel = "x" )
display(p1)
# -
# ## Cumulative Distribution Function (cdf)
#
# The cdf calculates the probability for the random variable (here denoted $x$) to be below or at a value $z$, for instance, $\textrm{cdf}(z) = \textrm{Pr}(x\leq z)$.
#
# Also, we can calculate $\textrm{Pr}( z \lt x)$ as $1 - \textrm{cdf}(z)$ and $\textrm{Pr}(z_1 < x\leq z_2)$ as $\textrm{cdf}(z_2)-\textrm{cdf}(z_1)$.
#
# ### A Remark on the Code
#
# $1 - \textrm{cdf}(z)$ can be coded as `1-cdf(dist,z)` where dist is, for instance, `Chisq(2)`. Alternatively, we could also use `ccdf(dist,z)`. (The extra `c` stands for the complement.)
# +
printblue("Probability of:\n")
printlnPs("x<=-1.645 when x is N(0,1) ",cdf(Normal(0,1),-1.645))
printlnPs("x<=0 when x is N(0,1) ",cdf(Normal(0,1),0))
printlnPs("2<x<=3 when x is N(0,2) ",cdf(Normal(0,sqrt(2)),3)-cdf(Normal(0,sqrt(2)),2))
printlnPs("2<x<=3 when x is N(1,2) ",cdf(Normal(1,sqrt(2)),3)-cdf(Normal(1,sqrt(2)),2))
printlnPs("\nx>4.61 when x is Chisq(2) ",1-cdf(Chisq(2),4.61)," or ", ccdf(Chisq(2),4.61))
printlnPs("x>9.24 when x is Chisq(5) ",1-cdf(Chisq(5),9.24)," or ", ccdf(Chisq(5),9.24))
# -
# ## Quantiles (percentiles)
#
# ...are just about inverting the cdf. For instance, the 5th percentile is the value $q$ such that cdf($q)=0.05$.
# +
N05 = quantile(Normal(0,1),0.05) #from the Distributions package
Chisq90 = quantile(Chisq(5),0.9)
printblue("\npercentiles:")
printlnPs("5th percentile of a N(0,1) ",N05)
printlnPs("90th percentile of a Chisquare(5)",Chisq90)
# -
# ## Confidence Bands and t-tests
#
# Suppose we have a point estimate equal to the value $b$ and it has a standard deviation of $\sigma$. The next few cells create a 90% confidence band around the point estimate (assuming it is normally distributed) and tests various null hypotheses.
# +
b = 0.5 #an estimate (a random variable)
ฯ = 0.15 #std of the estimate. Do \sigma[Tab] to get ฯ
confB = [(b-1.64*ฯ) (b+1.64*ฯ)] #confidence band of the estimate
printlnPs("90% confidence band around the point estimate:",confB)
println("If the null hypothesis is outside this band, then it is rejected")
# +
tstat1 = (b - 0.4)/ฯ #testing Hโ: coefficient is 0.4
tstat2 = (b - 0.746)/ฯ #testing Hโ: coefficient is 0.746
tstat3 = (b - 1)/ฯ #testing Hโ: coefficient is 1.0
printblue("t-stats for different tests: are they beyond [-1.64,1.64]?\n")
rowNames = ["Hโ: 0.4","Hโ: 0.746","Hโ: 1"] #Do H\_0[TAB] to get Hโ
printmat([tstat1;tstat2;tstat3];colNames=["t-stat"],rowNames) #or rowNames=rowNames
printred("compare with the confidence band")
# -
# ## Load Data from a csv File
# +
x = readdlm("Data/FFmFactorsPs.csv",',',skipstart=1)
#yearmonth, market, small minus big, high minus low
(ym,Rme,RSMB,RHML) = [x[:,i] for i=1:4]
println("Sample period: ",ym[1]," to ",ym[end]) #just numbers, not converted to Dates
# -
# ## Means and Standard Deviations
# +
xbar = mean([Rme RHML],dims=1) #,dims=1 to calculate average along a column
ฯ = std([Rme RHML],dims=1)
T = length(Rme)
printmat([xbar;ฯ],colNames=["Rme","HML"],rowNames=["average","std"])
# +
printblue("std of sample average (assuming iid data):\n")
printmat(ฯ/sqrt(T),colNames=["Rme","HML"],rowNames=["std(average)"])
# -
# ## Skewness, Kurtosis and Bera-Jarque
#
# We here construct the skewness, kurtosis and BJ statistics ourselves. Otherwise, you could use the functions in the `StatsBase.jl` package.
# +
xStd = (Rme .- mean(Rme))./std(Rme)
skewness = mean(xStd.^3)
kurtosis = mean(xStd.^4)
BJ = (T/6)*skewness.^2 + (T/24)*(kurtosis.-3).^2 #Chisq(2)
pvalBJ = 1 .- cdf.(Chisq(2),BJ)
printblue("Testing skewness and kurtosis:\n")
printmat([skewness,kurtosis,BJ],colNames=["Rme"],rowNames=["Skewness","Kurtosis","Bera-Jarque"])
# -
# ## Covariances and Correlations
# +
println("\ncov([Rme RHML]): ")
printmat(cov([Rme RHML]))
println("\ncor([Rme RHML]): ")
printmat(cor([Rme RHML]))
ฯ = cor(Rme,RHML)
tstat = sqrt(T)*ฯ/sqrt(1-ฯ^2)
printlnPs("correlation and its t-stat:",ฯ,tstat)
# -
| Ch01_StatsReview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py35
# language: python
# name: py35
# ---
#define vgg16 model ,but its more elegant
def vgg_16(inputs, scope='vgg_16'):
with tf.variable_scope(scope, 'vgg_16', [inputs]) as sc:
with slim.arg_scope([slim.conv2d, slim.fully_connected, slim.max_pool2d]):
net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
return net
with open('/home/aaron/ๆก้ข/13_2.txt','r') as f:
for line in f.readlines():
print (line)
| CTPN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Axiomatic Probability
# The methods of the previous two sections that define probabilities using relative frequencies or based on properties of fair experiments are helpful to develop some intuition about probability. However, these methods have limitations that restrict their usefulness to many real world problems. These problems were recognized by mathematicians working on probability and motivated these mathematicians to develop an approach to probability that is:
# * not based on a particular application or interpretation,
# * agrees models based on relative frequency and fair probabilities,
# * agrees with our intuition (where appropriate), and
# * are useful to solving real-world problems.
#
# The approach they developed is called *Axiomatic Probability*. Axiomatic means that there is a set of assumptions or rules (called axioms) for probability, but that the set of rules is made as small as possible. This approach may at first seem unnecessarily mathematical, but I believe that the reader will soon see that this approach will help them to develop a fundamentally sound understanding of probability.
# ## Probability Spaces
#
# The first step in developing Axiomatic Probability is to define the core objects that the axioms apply to. Define a *Probability Space* as an ordered collection (tuple) of three objects, and we denote it by
# $$
# (S, \mathcal{F}, P)
# $$
#
# These objects are called the *sample space*, the *event class*, and the *probability measure*. Since there are three objects in a probability space, it is sometimes said that **probability is a triple** or **a probability space is a triple**.
# **Sample Space**
#
# We have already introduce the sample space in {doc}`outcomes-samplespaces-events`. It is a **set** containing all possible outcomes for an experiment.
# **Event Class**
#
# The second object, denoted by a calligraphic F ($\mathcal{F}$), is called the *event class*.
# ````{panels}
# DEFINITION
# ^^^
# event class
# : For a sample space $S$ and a probability measure $P$, the event class, denoted by $\mathcal{F}$ is a collection of all subsets of $S$ to which we will assign probability (i.e., for which $P$ will be defined). The sets in $\mathcal{F}$ are called events.
# ````
#
# We require that the event class be a $\sigma-algebra$ (read โsigma algebraโ) of $S$, which is a concise and mathematically precise way to say that events in $\mathcal{F}$ that combinations of events using (a finite or countably infinite number of) the usual set operations will still be events in $\mathcal{F}.
#
# For many readers of this book, the above explanation will be sufficient to understand what events are in $\mathcal{F}$. If you feel satisfied with this explanation, you may skip ahead to the heading **Event Class for Finite Sample Spaces**. If you want more mathematical depth and rigor, here are the properties that $\mathcal{F}$ must satisfy to be a $\sigma$-algebra on $S$:
#
# 1. $\mathcal{F}$ contains the sample space:<br>
# $S \in \mathcal{F}$
# 1. $\mathcal{F}$ is **closed under complements**:<br>
# If $A \in \mathcal{F}$, then $\overline{A} \in \mathcal{F}$.
# 1. $\mathcal{F}$ is **closed under countable unions**:<br>
# If $A_1, A_2, \ldots$ are a finite or countably infinite number of sets in \mathcal{F}, then
# \begin{equation*}
# \bigcup_i A_i \in \mathcal{F}
# \end{equation*}
#
# Note that DeMorgan's Laws immediately imply a few other properties:
# * The null set $\emptyset$ is in $\mathcal{F}$ by combining properties 1 and 2. $S \in \mathcal{F}$, and so $\overline{S} =\emptyset \in \mathcal{F}$.
# * $\mathcal{F}$ is **closed under countable intersections**. If $A_1, A_2, \ldots$ are a finite or countably infinite number of sets in $\mathcal{F}$, then by property 2, $\overline{A_1}, \overline{A_2} \ldots$ are in $\mathcal{F}$. By property 3,
# \begin{equation*}
# \bigcup_i \overline{A_i} \in \mathcal{F}
# \end{equation*}
# If we apply DeMorgan's Laws to this expression, we have
# \begin{equation*}
# \overline{\bigcap_i A_i} \in \mathcal{F}
# \end{equation*}
# Then by applying property 2 again, we have that
# \begin{equation*}
# {\bigcap_i A_i} \in \mathcal{F}
# \end{equation*}
#
# **Event Class for Finite Sample Spaces**
#
# When $S$ is finite, we almost always use the same event class, which is to take $\mathcal{F}$ to be the *power set* of $S$:
# ````{panels}
# DEFINITION
# ^^^
# power set
# : For a set $S$ with finite cardinality, $|S|=N < \infty$, the power set is the set of all possible subsets. We will use the notation $2^S$ to denote the power set.
# ````
#
#
# Note that the power set includes both the empty set ($\emptyset$) and $S$.
#
# **Example**
# Consider flipping a coin and observing the top face. Then $S=\{H,T\}$ and
#
# $$
# \mathcal{F} = \bigl\{ \emptyset, H, T, \{H,T\} = S \bigr\}
# $$
#
# Note that $|S|=2$ and $|2^S| = 4 = 2^{|S|}$.
#
# **Exercise**
#
# Consider rolling a standard six-sided die. Give the sample space, $S$, and the power set of the sample space, $2^S$. What is the cardinality of $2^S$?
# When $|S|=\infty$, weird things can happen if we try to assign probabilities to every subset of $S$. **JMS: Working here. Need footnote about uncountably infinite** For typical data science applications, we can assume that any event that we want to ask about will be in the event class, and we do not need to explicitly enumerate the event class.
# **Probability Measure**
#
# Until now, we have discussed the probabilities of outcomes. However, this is not the approach taken in probability spaces:
#
# ````{panels}
# DEFINITION
# ^^^
# probability measure
# : The probability measure, $P$, is a real-valued set function that maps every element of the event class to the real line.
# ````
#
# Note that in defining the probability measure, we do not specify the range of values for $P$, because at this point we are only defining the structure of the probability space through the types of elements that make it up.
#
# Although $P$ assigns outcomes to events (as opposed to outcomes), every outcome in $S$ is typically an event in the event class. Thus, $P$ is more general in its operation than we have considered in our previous examples. As explained in {doc}`outcomes-samplespaces-events`, an event occurs if the experiment's outcome is one of the outcomes in that event's set.
# ## Axioms of Probability
#
# As previously mentioned, axioms are a minimal set of rules. There are three Axioms of Probability that are specified in terms of the probability measure:
#
#
# **The Axioms of Probability**
#
# **I.** For every event $E$ in the eventclass $\mathcal{F}$, $ P(E) \ge 0$
# *(the event probabilities are non-negative)*
#
# **II.** $P(S) =1$ *(the probability that some outcome occurs is 1)*
#
# **III.** For all pairs of events $E$ and $F$ in the event class that are disjoint ($E \cap F = \emptyset$),
# $P( E \cup F) = P(E)+P(F)$ *(if two events are disjoint, then the probability that either one of the events occurs is equal to the sum of the event probabilities)*
#
# When dealing with infinite sample spaces, an alternative version of Axiom III should be used:
#
# **III'.** If $A_1, A_2, \ldots$ is a sequence of
# event that are all disjoint ($A_i \cap A_j = \emptyset~ \forall i\ne j$),
# then
#
# $$
# P \left[ \bigcup_{k=1}^{\infty} A_k \right] = \sum_{k=1}^{\infty}
# P\left[ A_k \right].
# $$
#
# <!-- *(Note that these sums and unions are over countably infinite sequences of events.)* -->
# Many students of probability wonder why Axiom I does not specify that $0 \le P(E) \le 1$. The answer is that the second part of that inequality is not needed because it can be proven from the other axioms. Anything that is not required is removed to ensure that the axioms are a minimal set of rules.
# Axiom III is a powerful tool for calculating probabilities. However, it must be used carefully.
#
# **Example**
#
# A fair six-sided die is rolled twice. What is the probability that the top face on the first roll is less than 3? What is the probability that the top face on the second roll is less than 3?
#
# First, let's define some notation for the events of interest:
#
# Let $E_i$ denote the event that the top face on roll $i$ is less than 3
#
# Then
#
# $$
# E_1=\{1_1, 2_1 \},
# $$
#
# where $k_l$ denotes the **outcome** that the top face is $k$ on roll $l$. Similarly,
#
# $$E_2=\{1_2, 2_2 \}.$$
# Note that we can rewrite
#
# $$
# E_i = \{1_i\} \cup \{2_i\},
# $$
#
# where $\cup$ is the union operator. Because outcomes are always disjoint, axiom III can be applied to yield
# \begin{align*}
# P(E_i) &= P\left(\{1_i\} \cup \{2_i\} \right) \\
# &= P\left(\{1_i\} \right) + P \left( \{2_i\} \right) \\
# &= \frac{1}{6} + \frac{1}{6},
# \end{align*}
# where the last line comes from applying the probability of an outcome in a fair experiment. Thus, $P(E_i)=1/3$ for $i=1,2$. Most readers will intuitively have known this answer.
#
# **Example**
#
# Consider the same exact experiment described in the previous example. However, let's ask a slightly different question: what is the probability that either the value on the first die is less than 3 **or** the value on the second die is less than 3. (This could also include the case that both are less than 3.) Mathematically, we write this as $P(E_1 \cup E_2)$ using the events already defined.
#
# Since $E_1$ and $E_2$ correspond to events on completely different dice, it may be tempting to apply Axiom III like:
# \begin{align*}
# P(E_1 \cup E_2) &= P(E_1) + P(E_2) \\
# &= \frac{1}{3} + \frac{1}{3}\\
# &= \frac{2}{3}.
# \end{align*}
# However, it is easy to see that somehow this thinking is not correct. For example, if we defined events $G_i$ to be the event that the value on die $i$ is less than 5, this approach would imply that
# \begin{align*}
# P(G_1 \cup G_2) &= P(G_1) + P(G_2) \\
# &= \frac{2}{3} + \frac{2}{3}\\
# &= \frac{4}{3}.
# \end{align*}
# Hopefully the reader recognizes that this is not an allowed value for a probability! Let's delve in to see what went wrong. We can begin by estimating the true value of $P(E_1 \cup E_2)$ using simulation:
# +
import numpy as np
import numpy.random as npr
num_sims = 100_000
# Generate the dice values for all simulations:
die1 = npr.randint(1, 7, size=num_sims)
die2 = npr.randint(1, 7, size=num_sims)
# Each comparison will generate an array of True/False value
E1occurred = die1 < 3
E2occurred = die2 < 3
# Use NumPy's union operator (|) to return True where either array is True:
Eoccurred = E1occurred | E2occurred
# NumPy's count_nonzero function will count 1 for each True value and 0 for each False value
print("P(E1 or E2) =~", np.count_nonzero(Eoccurred) / num_sims)
# -
# The estimated probability is about 0.56, which is lower than predicted by trying to apply Axiom III. The problem is that Axiom III does not hold in the way that it is used here because $E_1$ and $E_2$ are not disjoint: both can occur at the same time. Let's enumerate every thing that could happen by writing the outcomes of dice 1 and dice 2 as a tuple, where $(j,k)$ means that die 1's outcome was $j$ and die 2's outcome was $k$.
#
# We will use colors to help denote when events belong to a particular event. We start by printing all outcomes with the outcomes in event $E_1$ highlighted in blue:
# +
# Need to see if this is standard in Anaconda
from termcolor import colored
print("Outcomes in E1 are in blue:")
for j in range(1, 7):
for k in range(1, 7):
if j < 3:
print(colored("(" + str(j) + ", " + str(k) + ") ", "blue"), end="")
else:
print("(" + str(j) + ", " + str(k) + ") ", end="")
print()
# -
# We can easily modify this to highlight the events in $E_2$ in green:
print("Outcomes in E2 are in green:")
for j in range(1, 7):
for k in range(1, 7):
if k < 3:
print(colored("(" + str(j) + ", " + str(k) + ") ", "green"), end="")
else:
print("(" + str(j) + ", " + str(k) + ") ", end="")
print()
# Note that we can already see that the set of outcomes in $E_1$ overlap with the set of outcomes in $E_2$. To make that explicit, let's highlight the outcomes that are in both $E_1$ or $E_2$ in red:
print("Outcomes in both E1 and E2 are in red:")
for j in range(1, 7):
for k in range(1, 7):
if j < 3 and k < 3:
print(colored("(" + str(j) + ", " + str(k) + ") ", "red"), end="")
else:
print("(" + str(j) + ", " + str(k) + ") ", end="")
print()
# So, does this mean that we cannot use Axiom III to solve this problem? No. We just have to be more careful. Let's highlight all the outcomes that belong to $E_1 \cup E_2$ with a yellow background. Let's also count these as we go:
# +
print("Outcomes in E1 OR E2 are on a yellow background:")
count = 0
for j in range(1, 7):
for k in range(1, 7):
if j < 3 or k < 3:
print(
colored("(" + str(j) + ", " + str(k) + ") ", on_color="on_yellow"),
end="",
)
count += 1
else:
print("(" + str(j) + ", " + str(k) + ") ", end="")
print()
print()
print("Number of outcomes in E1 OR E2 is", count)
# -
# If an event is written in terms of a set of $K$ **outcomes** $o_0, o_1, \ldots, o_{K-1}$, and the experiment is fair and has $N$ total outcomes, then Axiom III can be applied to calculate the probability as
# \begin{align*}
# P(E) &= P \left(\left\{o_0, o_1, \ldots, o_{K-1} \right\} \right) \\
# &= P \left( o_0 \right) + P \left( o_1 \right) + \ldots + P \left( o_{K-1} \right) \\
# &= \frac{1}{N} + \frac{1}{N} + \ldots + \frac{1}{N} \mbox{ (total of } K \mbox{ terms)} \\
# &= \frac{K}{N}
# \end{align*}
# We believe that this experiment is fair and that any of the 36 total outcomes is equally likely to occur. The form above is general to any event for a fair experiment, and it is convenient to rewrite it in terms of set cardinalities as
#
# $$
# P(E) = \frac{|E|}{|S|}.
# $$
#
# Applying this our example, we can easily calculate the probability we are looking for as
# \begin{align*}
# P\left( E_1 \cup E_2 \right) &= \frac{ \left \vert E_1 \cup E_2 \right \vert } { \left \vert S \right \vert} \\
# &= \frac{20}{36} \\
# &= \frac{5}{9}
# \end{align*}
#
# Note that the calculated value matches our estimate from the simulation:
5 / 9
# The key to making this work was that we had to realize several things:
# * $E_1$ and $E_2$ are not outcomes. They are events, and they can occur at the same time.
# * The outcomes of the experiment are the combination of the outcomes from the individual rolls of the two dice.
# * The composite experiment is still a fair experiment. It is easy to calculate probabilities using Axiom III and the properties of fair experiments once we can determine the number of outcomes in the event of interest.
# However, we can see that the solution method is still lacking in some ways:
# * It only works for fair experiments
# * It requires enumeration of the outcomes in the event -- this may be challenging to do without a computer and may not scale well.
# <!-- * It will not work if the trials are not independent.-->
#
# Some of the difficulties in solving this problem come from not having a larger toolbox; i.e., the axioms provide a very limited set of equations for working with probabilities. In the next section, we explore several corollaries to the axioms and show how these can be used to simplify some problems in probability.
# ## Terminology Review
#
# Use the flashcards below to help you review the terminology introduced in this chapter.
# + tags=["remove-input"]
from jupytercards import display_flashcards
github='https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/04-probability1/flashcards/'
display_flashcards(github+'axiomatic-prob.json')
# -
| 04-probability1/axiomatic-prob.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper, sessionmaker
import requests
import json
from elasticsearch import Elasticsearch
my_index = 'zadolbali'
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
def get_es_stats():
print(requests.get('http://localhost:9200/_cat/health?v').text)
print(requests.get('http://localhost:9200/_cat/nodes?v').text)
print(requests.get('http://localhost:9200/_cat/shards?v').text)
print(requests.get('http://localhost:9200/_cat/indices?v').text)
# +
def create_index(index, settings):
return requests.put('http://localhost:9200/zadolbali', data=json.dumps(settings)).text
def delete_index(index):
return requests.delete('http://localhost:9200/{0}?pretty'.format(index)).text
def setup_index_settings(index, settings):
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
return requests.put('http://localhost:9200/zadolbali/_settings?pretty', headers=headers, data=json.dumps(settings)).text
def setup_index_mapping(index, settings):
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
return requests.put('http://localhost:9200/zadolbali/_mappings/story?pretty', headers=headers, data=json.dumps(settings)).text
def get_index_state(index):
print(requests.get('http://localhost:9200/{0}/_settings?pretty'.format(index)).text)
print(requests.get('http://localhost:9200/{0}/_mapping?pretty'.format(index)).text)
# -
delete_index(my_index)
# +
create_settings = {
'settings' : {
'index' : {
'number_of_shards' : 5,
'number_of_replicas' : 1
}
}
}
create_index(my_index, create_settings)
print(requests.post('http://localhost:9200/zadolbali/_close').text)
# +
# stolen from https://gist.github.com/svartalf/4465752
index_settings = {
'analysis': {
'analyzer': {
'ru': {
'type': 'custom',
'tokenizer': 'standard',
'filter': ['lowercase', 'russian_morphology', 'english_morphology', 'ru_stopwords'],
},
},
'filter': {
'ru_stopwords': {
'type': 'stop',
'stopwords': u'ะฐ,ะฑะตะท,ะฑะพะปะตะต,ะฑั,ะฑัะป,ะฑัะปะฐ,ะฑัะปะธ,ะฑัะปะพ,ะฑััั,ะฒ,ะฒะฐะผ,ะฒะฐั,ะฒะตัั,ะฒะพ,ะฒะพั,ะฒัะต,ะฒัะตะณะพ,ะฒัะตั
,ะฒั,ะณะดะต,ะดะฐ,ะดะฐะถะต,ะดะปั,ะดะพ,ะตะณะพ,ะตะต,ะตัะปะธ,ะตััั,ะตัะต,ะถะต,ะทะฐ,ะทะดะตัั,ะธ,ะธะท,ะธะปะธ,ะธะผ,ะธั
,ะบ,ะบะฐะบ,ะบะพ,ะบะพะณะดะฐ,ะบัะพ,ะปะธ,ะปะธะฑะพ,ะผะฝะต,ะผะพะถะตั,ะผั,ะฝะฐ,ะฝะฐะดะพ,ะฝะฐั,ะฝะต,ะฝะตะณะพ,ะฝะตะต,ะฝะตั,ะฝะธ,ะฝะธั
,ะฝะพ,ะฝั,ะพ,ะพะฑ,ะพะดะฝะฐะบะพ,ะพะฝ,ะพะฝะฐ,ะพะฝะธ,ะพะฝะพ,ะพั,ะพัะตะฝั,ะฟะพ,ะฟะพะด,ะฟัะธ,ั,ัะพ,ัะฐะบ,ัะฐะบะถะต,ัะฐะบะพะน,ัะฐะผ,ัะต,ัะตะผ,ัะพ,ัะพะณะพ,ัะพะถะต,ัะพะน,ัะพะปัะบะพ,ัะพะผ,ัั,ั,ัะถะต,ั
ะพัั,ัะตะณะพ,ัะตะน,ัะตะผ,ััะพ,ััะพะฑั,ััะต,ััั,ััะฐ,ััะธ,ััะพ,ั,a,an,and,are,as,at,be,but,by,for,if,in,into,is,it,no,not,of,on,or,such,that,the,their,then,there,these,they,this,to,was,will,with',
},
'ru_stemming': {
'type': 'snowball',
'language': 'Russian',
}
},
}
}
print(setup_index_settings(my_index, index_settings))
# +
print(requests.post('http://localhost:9200/zadolbali/_open').text)
mapping_settings = {
'properties': {
'id': { 'type': 'integer' },
'title': {
'type': 'text',
'analyzer': 'ru',
"fields": {
"keyword": {
"type": "keyword"
}
}
},
'text': {
'type': 'text',
'analyzer': 'ru',
"fields": {
"keyword": {
"type": "keyword"
},
"length": {
"type": "token_count",
"analyzer": "standard"
}
}
},
'published': {
'type': 'date',
'format': 'yyyyMMdd'
},
'likes': { 'type': 'integer' },
'tags': {
'type': 'keyword'
},
'url': { 'type': 'text' }
}
}
print(setup_index_mapping(my_index, mapping_settings))
# +
class Story(object):
pass
def loadSession():
dbPath = '../corpus/stories.sqlite'
engine = create_engine('sqlite:///%s' % dbPath, echo=True)
bookmarks = Table('stories', MetaData(engine), autoload=True)
mapper(Story, bookmarks)
Session = sessionmaker(bind=engine)
session = Session()
return session
session = loadSession()
# -
stories = session.query(Story).all()
print(len(stories))
print(dir(stories[0]))
# 'hrefs', 'id', 'likes', 'published', 'tags', 'text', 'title', 'url'
def index_data(index):
# for story in stories[:100]:
for story in stories:
body = {
'id': story.id,
'title': story.title,
'text': story.text,
'published': story.published,
'likes': story.likes,
'tags': story.tags.split(' '),
'url': story.url
}
es.index(index=index, doc_type='story', id=story.id, body=body)
index_data(my_index)
get_index_state(my_index)
| elasticsearch/elastic_zadolbali_deploy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Task 4: Personal EDA - Noah
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scripts.functions_libs import *
# %matplotlib inline
# +
# Initial dataframe import
df = process_data('../../data/raw/adult.data')
df.head()
# +
# Initial dataframe visualization
print("Number of rows, columns: {}".format(df.shape))
# +
# Viewing numerical column data basic statistics
pd.options.display.float_format = "{:.1f}".format
df.describe()
# +
# Number of unique values for each column
df.nunique(axis=0)
# +
# Looking at some of the columns unique variables
print("Education uniques: \n{}\n".format(df.Education.unique()))
print("Relationship uniques: \n{}\n".format(df.Relationship.unique()))
print("Occupation uniques: \n{}".format(df.Occupation.unique()))
# +
# Setting theme
sns.set_style("darkgrid", {"axes.facecolor": ".9"})
sns.set_theme(font_scale=1.3)
plt.rc("axes.spines", top=False, right=False)
# -
df.dtypes
# +
# Histogram of age and salary distribution. This shows that the majority of people have <=50K salaries,
# and those that do make >50K are generally older population. Data has outliers and needs more cleaning.
sns.histplot(
df,
x="Age",
hue="Salary",
element="step",
common_norm=False,
)
plt.title("Salary of Different Aged Adults", size=20)
# +
# Grouped bar chart of salaries counts grouped by workclass. This plot's groups are cleaned up and replotted later in the EDA.
sns.catplot(
x="Age",
y="Workclass",
hue="Salary",
kind="bar",
data=df
)
plt.title("Adult Salary by Workclass", size=20)
# +
# Grouped bar chart of the salaries of people in each level of education. Some grouping of variables
# would be useful for 'Education Level', such as a group of all highschool grades.
sns.catplot(
y="Education",
hue="Salary",
kind="count",
palette="pastel",
edgecolor=".6",
data=df
)
plt.title("Salary of Different Education Levels", size=20)
# -
# ## **Task 5 Personal Work**
#
# #### RQ: Does Classification Play a Major Role In Determining The Annual Income Of An Adult Worker?
#
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from scripts.functions_libs import *
# %matplotlib inline
# -
df = process_data('../../data/raw/adult.data')
df.head()
df.Education.unique()
# +
# Grouping some education levels that can be categorized together
find_and_replace(df, 'Education', '11th|9th|7th-8th|5th-6th|10th|1st-4th|12th|Preschool|Replaced', 'Didnt-grad-HS')
find_and_replace(df, 'Education', 'Some-college', 'Bachelors')
# df['Education'] = df['Education'].str.replace('11th|9th|7th-8th|5th-6th|10th|1st-4th|12th|Preschool|Replaced', 'Didnt-Grad-HS', regex=True)
# df['Education'] = df['Education'].str.replace('Some-college','Bachelors',regex=False)
# -
df.Education.unique()
# +
# Grouped bar chart of Education with salary
sns.catplot(
x="Education",
hue="Salary",
kind="count",
data=df
).set_xticklabels(rotation=50)
plt.title("Salary of Different Education Levels", size=20)
# -
df.Workclass.unique()
# +
# Grouping like workclasses for visulaization purposes
find_and_replace(df, 'Workclass', 'State-gov|Federal-gov|Local-gov', 'Government')
find_and_replace(df, 'Workclass', 'Never-worked|Without-pay|Other', '?')
find_and_replace(df, 'Workclass', 'Self-emp-not-inc|Self-emp-inc', 'Self-employed')
find_and_replace(df, 'Workclass', '?', 'Other')
# df['Workclass'] = df['Workclass'].str.replace('State-gov|Federal-gov|Local-gov', 'Government', regex=True)
# df['Workclass'] = df['Workclass'].str.replace('Never-worked|Without-pay|Other', '?', regex=True)
# df['Workclass'] = df['Workclass'].str.replace('Self-emp-not-inc|Self-emp-inc', 'Self-employed', regex=True)
# df['Workclass'] = df['Workclass'].str.replace('?', 'Other', regex=False)
# -
df.Workclass.unique()
sns.catplot(
x="Workclass",
hue="Salary",
kind="count",
data=df,
).set_xticklabels(rotation=50)
plt.title("Adult Salary by Workclass", size=20)
# +
# Salary classification with hours worked per week
sns.boxplot(x='Salary', y='Hours per Week', data=df)
plt.title("Adult Salary Depending On Hours Worked per Week", size=20)
# -
df.Occupation.unique()
find_and_replace(df, 'Occupation', '?', 'Other')
find_and_replace(df, 'Occupation', 'Other-service', 'Other')
df.Occupation.unique()
sns.catplot(
x="Occupation",
hue="Salary",
kind="count",
data=df,
).set_xticklabels(rotation=70)
plt.title("Salary of Different Occupations", size=20)
| analysis/noah/milestone2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: alibidetect
# language: python
# name: alibidetect
# ---
# +
import tensorflow as tf
from functools import partial
from tensorflow.keras.layers import Conv2D, Dense, Flatten, InputLayer, Reshape
from alibi_detect.cd.tensorflow import preprocess_drift
from tensorflow.keras import layers
from alibi_detect.cd import KSDrift
import numpy as np
import methods
import matplotlib.pyplot as plt
import seaborn as sns
from tensorflow import keras
import pandas as pd
from pathlib import Path
from ood_metrics import calc_metrics, plot_roc, plot_pr, plot_barcode
import logging
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
physical_devices = tf.config.list_physical_devices('GPU')
print("Num GPUs Available: ", len(physical_devices))
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.1)
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
# -
from torchvision.utils import make_grid
from torchvision.io import read_image
import torchvision.transforms.functional as F
import torch
# %matplotlib inline
def show(imgs):
if not isinstance(imgs, list):
imgs = [imgs]
fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)
for i, img in enumerate(imgs):
img = img.detach()
img = F.to_pil_image(img)
axs[0, i].imshow(np.asarray(img))
axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
# +
inlier_names = ["cCry", "sCry", "uCry", "hCast", "nhCast", "nsEC", "sEC", "WBC", "RBC", "Artifact", "Dirt", "LD"]
testImages_cCry = methods.getTestRawImages("cCry", (32,32))
testImages_sCry = methods.getTestRawImages("sCry", (32,32))
testImages_uCry = methods.getTestRawImages("uCry", (32,32))
testImages_hCast = methods.getTestRawImages("hCast", (32,32))
testImages_nhCast = methods.getTestRawImages("nhCast", (32,32))
testImages_nsEC = methods.getTestRawImages("nsEC", (32,32))
testImages_sEC = methods.getTestRawImages("sEC", (32,32))
testImages_WBC = methods.getTestRawImages("WBC", (32,32))
testImages_RBC = methods.getTestRawImages("RBC", (32,32))
testImages_Artifact = methods.getTestRawImages("Artifact", (32,32))
testImages_Dirt = methods.getTestRawImages("Dirt", (32,32))
testImages_LD = methods.getTestRawImages("LD", (32,32))
X_inliers = np.concatenate((testImages_cCry, testImages_sCry, testImages_uCry, testImages_hCast, testImages_nhCast, testImages_nsEC,
testImages_sEC, testImages_WBC, testImages_RBC))
unclassified_imgs = methods.getTestRawImages("Unclassified", (32,32))
# -
encoder_net = keras.models.load_model('vae_encoder_net', compile=False)
decoder_net = keras.models.load_model('vae_decoder_net', compile=False)
def getPermuteOutputs(encoder, X):
outputs = []
for x in X:
outputs.append(encoder.predict(x.reshape(1, 32, 32, 1)))
return np.asarray(outputs).reshape(-1, 256)
# +
def test_ksd(cd, imgs_ref, imgs, label):
p_vals = []
distances = []
labels = []
imgs_array = []
for img in imgs:
p_val, dist = cd.feature_score(x_ref=imgs_ref, x=img.reshape(-1,32, 32, 1))
p_vals.append(np.mean(p_val))
distances.append(np.mean(dist))
labels.append(label)
imgs_array.append(img)
d = {"p_vals": p_vals, "distances": distances, "labels": labels, "imgs_array": imgs_array}
df = pd.DataFrame(data=d)
return df
def test_ksd_final(cd, encoder, imgs_ref, perturb = None, y_limit = None):
inlier_scores = []
inlier_labels = []
outlier_scores = []
outlier_labels = []
inlier_path = "/home/erdem/dataset/urine_test_32/inliers"
outlier_path = "/home/erdem/dataset/urine_test_32/outliers"
# Inliers
for img_path in Path(inlier_path).glob("*.png"):
inlier_labels.append(0)
image = plt.imread(img_path)
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True))
output = encoder.predict(image.reshape(1, 32, 32, 1))
_, dist = cd.feature_score(x_ref=imgs_ref, x=output[2].reshape(1, 256))
temp_score = np.amax(np.mean(dist))
inlier_scores.append(temp_score)
# Outliers
for img_path in Path(outlier_path).glob("*.png"):
outlier_labels.append(1)
image = plt.imread(img_path)
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, amount=0.03, clip=True))
output = encoder.predict(image.reshape(1, 32, 32, 1))
_, dist = cd.feature_score(x_ref=imgs_ref, x=output[2].reshape(1, 256))
temp_score = np.amax(np.mean(dist))
outlier_scores.append(temp_score)
d_outliers = {"K-S Distance": outlier_scores, "outlier_labels": outlier_labels, "Index of Image Patches": np.linspace(1, 636, num=636)}
d_inliers = {"K-S Distance": inlier_scores, "inlier_labels": inlier_labels, "Index of Image Patches": np.linspace(1, 636, num=636)}
df1 = pd.DataFrame(data=d_inliers)
df2 = pd.DataFrame(data=d_outliers)
g = sns.scatterplot(data=df1, x="Index of Image Patches", y="K-S Distance")
g = sns.scatterplot(data=df2, x="Index of Image Patches", y="K-S Distance")
g.set(ylim=(0, y_limit))
score_array = inlier_scores+outlier_scores
label_array = inlier_labels+outlier_labels
print(calc_metrics(score_array, label_array))
plot_roc(score_array, label_array)
plot_pr(score_array, label_array)
# plot_barcode(score_array, label_array)
# -
inlier_outputs = getPermuteOutputs(encoder_net, X_inliers)
inlier_outputs.shape
cd = KSDrift(inlier_outputs, p_val=.05, correction = 'fdr', preprocess_x_ref=False)
test_ksd_final(cd, encoder_net, inlier_outputs, perturb = None, y_limit = None)
test_ksd_final(cd, encoder_net, inlier_outputs, perturb = None, y_limit = None)
test_ksd_final(cd, encoder_net, inlier_outputs, perturb = None, y_limit = None)
# +
cl_path = "/home/thomas/tmp/patches_urine_32_scaled/Unclassified"
perturb = None
outlier_labels = []
outlier_scores = []
outlier_path = []
cl = "Unclassified"
for img_path in Path(cl_path).glob("*.png"):
outlier_path.append(img_path)
outlier_labels.append(cl)
image = plt.imread(img_path)
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True))
output = encoder_net.predict(image.reshape(1, 32, 32, 1))
_, dist = cd.feature_score(x_ref=inlier_outputs, x=output.reshape(1, 256))
temp_score = np.amax(np.mean(dist))
outlier_scores.append(temp_score)
d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path}
df3 = pd.DataFrame(data=d)
sns.scatterplot(data=df3, x = "outlier_labels", y="outlier_scores")
# -
sorted_unclassified = df3.sort_values(by=['outlier_scores'])
from torchvision.utils import make_grid
from torchvision.io import read_image
import torchvision.transforms.functional as F
# %matplotlib inline
def show(imgs):
if not isinstance(imgs, list):
imgs = [imgs]
fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)
for i, img in enumerate(imgs):
img = img.detach()
img = F.to_pil_image(img)
axs[0, i].imshow(np.asarray(img))
axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
unclassified_imgs = []
for path in sorted_unclassified["outlier_path"]:
unclassified_imgs.append(read_image(str(path)))
# +
# Random encoder_net
encoding_dim = 256
encoder_net2 = tf.keras.Sequential(
[
layers.InputLayer(input_shape=(32,32, 1)),
layers.Conv2D(64, (4,4), strides=2, padding='same', activation=tf.nn.relu),
layers.Conv2D(64, (4,4), strides=2, padding='same', activation=tf.nn.relu),
layers.Conv2D(64, (4,4), strides=2, padding='same', activation=tf.nn.relu),
layers.Conv2D(64, (4,4), strides=2, padding='same', activation=tf.nn.relu),
layers.Flatten(),
layers.Dense(encoding_dim,)
])
encoder_net2.summary()
# -
test_ksd_final(cd, encoder_net2, inlier_outputs, perturb = None, y_limit = None)
# +
cl_path = "/home/thomas/tmp/patches_urine_32_scaled/Unclassified"
perturb = None
outlier_labels = []
outlier_scores = []
outlier_path = []
cl = "Unclassified"
for img_path in Path(cl_path).glob("*.png"):
outlier_path.append(img_path)
outlier_labels.append(cl)
image = plt.imread(img_path)
if perturb == 'gaussian':
image = th.tensor(random_noise(image, mode='gaussian', mean=0, var=0.01, clip=True)).float()
elif perturb == 's&p':
image = th.tensor(random_noise(image, mode='s&p', salt_vs_pepper=0.5, clip=True))
output = encoder_net2.predict(image.reshape(1, 32, 32, 1))
_, dist = cd.feature_score(x_ref=inlier_outputs, x=output.reshape(1, 256))
temp_score = np.amax(np.mean(dist))
outlier_scores.append(temp_score)
d = {"outlier_scores": outlier_scores, "outlier_labels": outlier_labels, "outlier_path": outlier_path}
df3 = pd.DataFrame(data=d)
sns.scatterplot(data=df3, x = "outlier_labels", y="outlier_scores")
# -
sorted_unclassified = df3.sort_values(by=['outlier_scores'])
from torchvision.utils import make_grid
from torchvision.io import read_image
import torchvision.transforms.functional as F
# %matplotlib inline
def show(imgs):
if not isinstance(imgs, list):
imgs = [imgs]
fix, axs = plt.subplots(ncols=len(imgs), squeeze=False)
for i, img in enumerate(imgs):
img = img.detach()
img = F.to_pil_image(img)
axs[0, i].imshow(np.asarray(img))
axs[0, i].set(xticklabels=[], yticklabels=[], xticks=[], yticks=[])
unclassified_imgs = []
for path in sorted_unclassified["outlier_path"]:
unclassified_imgs.append(read_image(str(path)))
df_unclassified = test_ksd(cd, X_inliers, unclassified_imgs, 0)
sorted_p_vals = df_unclassified.sort_values(by=['p_vals'])
| alibi_ksd_encoder.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ๆฐๅญฆๅฝๆฐ๏ผๅญ็ฌฆไธฒๅๅฏน่ฑก
# ็ปๅฏนๅผ
abs(-2)
# ๆๅคงๅผ
max(1,2,3)
# ๆๅฐๅผ
min(1,2,3)
#็ฑปไผผa**b
pow(2,3)
#่ฟๅไธ5.6ๆ่ฟ็ๆดๆฐ
round(5.6)
# ไฟ็ๅฐๆฐ็นๅ2ไฝๅฐๆฐ็ๆตฎ็นๅผ
round(5.466,2)
# +
import turtle
turtle.write("\u6B22\u8FCE\u03b1\u03b2\u03b3")
turtle.done()
# -
ch = "a"
ord(ch) //่ฟๅๅญ็ฌฆch็ASCLLๅผ
chr(98) //่ฟๅASCLL็ๅญ็ฌฆ
# ่ฝฌไนๅญ็ฌฆ๏ผ\n :ๆข่ก็ฌฆ๏ผ่กจ็คบไธ่ก็ปๆ \b :้ๆ ผ็ฌฆ
#
# \t :ๅถ่กจ็ฌฆ \f:ๆข้กต็ฌฆ๏ผไปไธไธ้กตๆๅฐ \r:ๅ่ฝฆ็ฌฆ
#
# \\:ๅๆ็บฟ \' \":ๅๅผๅท๏ผๅๅผๅท
print("he sand,\"Job is progrom is easy to read\"")
# +
import math
x1, y1 = eval(input("Enter point 1 (latitude and longitude) in degrees: "))
x2, y2 = eval(input("Enter point 2 (latitude and longitude) in degrees: "))
d = 6371.01 * math.acos(math.sin(math.radians(x1)) * math.sin(math.radians(x2)) +
math.cos(math.radians(x1)) * math.cos(math.radians(x2)) *
math.cos(math.radians(y1 - y2)));
print("The distance between the two points is", d, "km")
# -
import math
s = eval(input("bianchang:" ))
Area = (5 * s*s)/(4 * (math.tan(math.pi/5)))
print(Area)
# +
code = eval(input("Enter an ASCII code: "))
# Display result
print("The character for ASCII code", code, "is", chr(code))
# -
| Python3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
import poloniex as plnx
import ta_lib as ta
from datetime import datetime, timedelta
from matplotlib.finance import candlestick2_ohlc
# # Altcoin Signal Trading Simulation System
# The system should be able to:
#
# * Keep track of various alt coin charts.
# * Keep track of various alt coin positions.
# * Generate signals for various alt coin charts.
# * Simulate to get statistics.
| Crypto/Strategy Backtesting/Altcoin Signal Trading Simulation System.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # RDD creation
# #### [Introduction to Spark with Python, by <NAME>](https://github.com/jadianes/spark-py-notebooks)
# In this notebook we will introduce two different ways of getting data into the basic Spark data structure, the **Resilient Distributed Dataset** or **RDD**. An RDD is a distributed collection of elements. All work in Spark is expressed as either creating new RDDs, transforming existing RDDs, or calling actions on RDDs to compute a result. Spark automatically distributes the data contained in RDDs across your cluster and parallelizes the operations you perform on them.
# #### References
# The reference book for these and other Spark related topics is *Learning Spark* by <NAME>, <NAME>, <NAME>, and <NAME>.
# The KDD Cup 1999 competition dataset is described in detail [here](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99).
# ## Getting the data files
# In this notebook we will use the reduced dataset (10 percent) provided for the KDD Cup 1999, containing nearly half million network interactions. The file is provided as a *Gzip* file that we will download locally.
import urllib.request
f = urllib.request.urlretrieve ("http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz", "./kddcup.data_10_percent.gz")
# ## Creating a RDD from a file
from pyspark import SparkContext
sc = SparkContext()
# The most common way of creating an RDD is to load it from a file. Notice that Spark's `textFile` can handle compressed files directly.
data_file = "./kddcup.data_10_percent.gz"
raw_data = sc.textFile(data_file)
# Now we have our data file loaded into the `raw_data` RDD.
# Without getting into Spark *transformations* and *actions*, the most basic thing we can do to check that we got our RDD contents right is to `count()` the number of lines loaded from the file into the RDD.
raw_data.count()
# We can also check the first few entries in our data.
raw_data.take(5)
# In the following notebooks, we will use this raw data to learn about the different Spark transformations and actions.
# ## Creating and RDD using `parallelize`
# Another way of creating an RDD is to parallelize an already existing list.
# +
a = range(100)
data = sc.parallelize(a)
# -
# As we did before, we can `count()` the number of elements in the RDD.
data.count()
# As before, we can access the first few elements on our RDD.
data.take(5)
| nb1-rdd-creation/nb1-rdd-creation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
# +
import pyautogui as pag
import time
from datetime import datetime, timedelta
import win32api,win32con,win32clipboard
import keyboard as kb
import pyperclip
def winup():
'''Funรงรฃo para maximizar janelas do windows sem usar mouse'''
kb.press_and_release('win+up') #executa o comando
def isNum():
'''Funรงรฃo para checar se o num lock estรก ativo ou nรฃo'''
return win32api.GetKeyState(win32con.VK_NUMLOCK) #retorna True se ativo
def wait(onde):
'''Funรงรฃo que espera achar o png na tela pra prosseguir com o cรณdigo'''
if onde == 'save': #o botรฃo de salvar do excel as vezes muda um pouco
path = 'png_wait/save.png'
x = pag.locateOnScreen(path, confidence=0.99) #busca uma vez
while not x: #trava o programa atรฉ achar o png na tela
x = pag.locateOnScreen(path, confidence=0.99) #busca o png
else: #se nรฃo for o save a checagem รฉ 1% mais rรญgida
path = f'png_wait/{str(onde)}'+'.png'
x = pag.locateCenterOnScreen(path) #busca uma vez
while not x: #trava o programa atรฉ achar o png na tela
x = pag.locateCenterOnScreen(path) #busca o png
def seta(d = 'down'):
'''Funรงรฃo pra conseguir usar a seta sem o numlock atrapalhar'''
pag.PAUSE = 0.4
if isNum() == 0: #se numlock desligado
pag.press(d)
pag.press('numlock')
else: #se ligado
pag.press('numlock')
pag.press(d)
pag.press('numlock')
pag.PAUSE = 0.85
def loga():
'''Funรงรฃo para logar no sistema'''
pag.click(1912,1064) #vai pro desktop
pag.click(626,289, clicks = 2) #clica no mรณdulo compras
wait('compras') #espera ver a palavra compras pra digitar o user
pag.write("gabriel") #loga no compras
pag.press("enter")
pag.write("8842")
pag.press("enter")
wait("conf") #espera o sistema abrir pra continuar
pag.click(281,77) #relatorios
def copi(p):
'''Funรงรฃo diminui a taxa de erro do pag.hotkey '''
kb.press('ctrl') #e do keyboard errarem o comando CTRL + C ou CTRL + V
time.sleep(0.2)
kb.press(p) #aqui vai a letra digitada
time.sleep(0.2)
kb.release('ctrl')
kb.release(p) #aqui solta a letra
def f5(tec,copy = False,let = ''):
'''Funรงรฃo que simula atalho do escel pra selecionar celulas'''
pag.press('f5')
kb.write(tec)
pag.press('enter')
if copy:
copi(let)
def fecha():
'''Funรงรฃo para fechar a planilha 101 e o mรณdulo compras '''
pag.moveTo(169,1064,0.5)
pag.moveTo(161,1069,0.5)
pag.click(294,1023)
pag.click(964,590)
pag.click(1885,55,clicks = 2,interval = 1)
def puxe():
'''Funรงรฃo para puxar o conteudo do relatรณrio excel'''
pag.click(340,57)
time.sleep(5)
f5('a:k',True,'c')
def abreGui():
'''Funรงรฃo para abrir planilhas excel caso usuรกrio escolha'''
li = ['AรOUGUE','AUTO SERVIรO','FRIOS','MERCEARIA'] #lista de setores
print(f'\n{"Deseja abrir as pastas?":=^60}\n{" 1 - sim ":=^60}\n{" 2 - nรฃo ":=^60}\n') #1 pra sim 2 pra nรฃo
ps = input() #registra escolha
pag.hotkey('ctrl','shift','capslock') #entra no sistema independente da escolha
if ps == '1':
time.sleep(1)
pag.click(1912,1064) #vai pro desktop
for i in range(0,4):
# pag.moveTo(24,1059) #clica no simbolo do windows
# pag.click(22,1061)
pag.press('win')
time.sleep(1)
txt = f"c:\\agnaldo\\venda {li[i]}.xlsx"
pyperclip.copy(txt) #copia o txt pra area de transferencia
copi('v') #cola o txt
pag.press('enter')
def setor(ddd,y,a,c=0):
plan()
pag.click(169,y)
texto = datetime.now() + timedelta(days=ddd*-1)
texto = texto.strftime('%d')
text = f'c{int(texto)+1}'
f5(text,True,'v')
seta()
if c == 0:
pag.hotkey('ctrl','shift','capslock')
f5(f'm{a}:s{a}',True,'c')
def plan(): #bota o mouse simbolo do excel pra aparecer abas
pag.hotkey('ctrl','shift','capslock')
pag.moveTo(169,1064,0.5)
pag.moveTo(161,1069,0.5)
pag.moveTo(169,1064,1)
def abreplan(m = 0): #abre plan limpa
if m == 2:
pag.click(1918,1056,clicks=2)
pag.click(1560,948,clicks=2)
wait('save')
winup()
time.sleep(1)
pag.click(57,200) #clica celula 0,0
time.sleep(1)
f5('a:k')
pag.press('del')
pag.click(57,200)
copi('v')
pag.click(57,200)
def init(ddd):
global contador
loga()
pag.click(1458,671,clicks=2) #desce barra
pag.click(141,616) #abre rel 604.01
pag.click(1496,127) #filtro
dti = datetime.now() + timedelta(days=ddd*-1) #pega a data que estรก sendo puxada
dt = dti.strftime('%d%m%Y') #pega a data '01/01/2022'
pag.write(dt) #escreve data
pag.press("enter")
pag.write(dt) #escreve data
pag.click(1871,124)
wait('consulta') #espera abrir o relatorios
puxe() #puxa a planilha toda do excel
pag.hotkey('ctrl','shift','capslock') #sai RDP
if not contador:
abreplan(2)
else:
abreplan(0)
contador = 1
pag.hotkey('ctrl','shift','capslock') #entra RDP
fecha() #fecha planilha aberta
pag.hotkey('ctrl','shift','capslock') #sai RDP
f5(f'm1:s1',True,'c') #copia as vends
time.sleep(1)
setor(ddd,930,2) #copia e cola aรงougue
setor(ddd,962,3) #copia e cola frios
setor(ddd,992,4) #copia e cola auto-serviรงo
setor(ddd,y=1027,a=0,c=1) #copia e cola mercearia
pag.PAUSE = 1 #define o tempo entre oa comandos do PY AUTO GUI
contador = 0
dia_atual = datetime.now() #pega dia atual do sistema
for i in range(20,0,-1): #percorre os 20 dias retrocedentes
data = dia_atual - timedelta(days=i)
texto = f"Digite {i:02d} para puxar desde {data.strftime('%d / %m / %y')}"
print(f'{texto}\n') #mostra as opรงรตes de escolha
dd = int(input("Qual dia deseja puxar? ")) #registra qual dia o usuรกrio quer
print(f'\nDia escolhido: {(datetime.now() - timedelta(days=dd)).strftime("%d / %m / %y")}') #mostra o dia escolhido
mes1 = (datetime.now() - timedelta(days=dd)).strftime("%m") #pega
abreGui()
for i in range(dd,0,-1):
dti = datetime.now() + timedelta(days=i*-1) #pega a data que estรก sendo puxada
if mes1 == dti.strftime('%m'): # se o mes mudar durante o programa ele para (pq seria outra aba da planilha)
print(f'init({i})')
init(i)
# +
import pyautogui as pag
import time
pag.hotkey('ctrl','shift','capslock')
time.sleep(3)
print(pag.position())
pag.hotkey('ctrl','shift','capslock')
# -
| pacote download/projetos_jupyter/vendas2022automaticas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CrypTen on AWS Instances
#
# Our previous tutorials have covered the essentials of using CrypTen on our local machines. We also provides a script `aws_launcher.py` in the `scripts` directory that will allow you to compute on encrypted data on multiple AWS instances.
#
# For example, if Alice has a classifier on one AWS instance and Bob has data on another AWS instance, `aws_launcher.py` will allow Alice and Bob to classify the data without revealing their respective private information (just as we did in Tutorial 4).
#
# ## Using the Launcher Script
#
# The steps to follow are:
# <ol>
# <li> First, create multiple AWS instances with public AMI "Deep Learning AMI (Ubuntu) Version 24.0", and record the instance IDs. </li>
# <li> Install PyTorch, CrypTen and dependencies of the program to be run on all AWS instances.</li>
# <li> Run `aws_launcher.py` on your local machine, as we explain below.</li>
# </ol>
# The results are left on the AWS instances. Log messages will be printed on your local machine by launcher script.
#
# ## Sample Run
#
# The following cell shows a sample usage of the `aws_launcher.py` script. Note, however, that the command in the cell will not work as is: please replace the parameters used with appropriate ones with your own AWS instances, usernames, and `.ssh` keys (see documentation in the `aws_launcher.py` script).
# + magic_args="bash" language="script"
# python3 [PATH_TO_CRYPTEN]/CrypTen/scripts/aws_launcher.py \
# --ssh_key_file [SSH_KEY_FILE] --instances=[AWS_INSTANCE1, AWS_INSTANCE2...] \
# --region [AWS_REGION] \
# --ssh_user [AWS_USERNAME] \
# --aux_files=[PATH_TO_CRYPTEN]/CrypTen/examples/mpc_linear_svm/mpc_linear_svm.py [PATH_TO_CRYPTEN]/CrypTen/examples/mpc_linear_svm/launcher.py \
# --features 50 \
# --examples 100 \
# --epochs 50 \
# --lr 0.5 \
# --skip_plaintext
# -
| tutorials/.ipynb_checkpoints/Tutorial_6_CrypTen_on_AWS_instances-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf1.3_python
# language: python
# name: tf1.3_kernel
# ---
# +
# %reload_ext autoreload
# %autoreload 2
import tensorflow as tf
import numpy as np
import os
import sys
#currentpath = os.path.dirname(os.path.realpath(__file__))
project_basedir = '..'#os.path.join(currentpath,'..')
sys.path.append(project_basedir)
from matplotlib import pyplot as plt
import random
import time
from common.utils import Dataset,ProgressBar
from tflearn.data_flow import DataFlow,DataFlowStatus,FeedDictFlow
from tflearn.data_utils import Preloader,ImagePreloader
import scipy
import pandas as pd
import xmltodict
import common
import tflearn
import copy
from config import conf
from cchess import *
from gameplays.game_convert import convert_game,convert_game_value,convert_game_board
import os, shutil
os.environ["CUDA_VISIBLE_DEVICES"] = '1'
from net.net_maintainer import NetMatainer
from net import resnet
# -
# !nvidia-smi | head -n 20
# # a network predict select and move of Chinese chess, with minimal preprocessing
stamp = time.strftime('%Y-%m-%d_%H-%M-%S',time.localtime(time.time()))
#data_dir = os.path.join(conf.history_selfplay_dir,stamp)
data_dir = '../data/history_selfplays/2018-06-22_00-44-48/'
if os.path.exists(data_dir):
print('data_dir already exist: {}'.format(data_dir))
else:
print('creating data_dir: {}'.format(data_dir))
os.mkdir("{}".format(data_dir))
GPU_CORE = [1]
BATCH_SIZE = 512
BEGINING_LR = 0.01
#TESTIMG_WIDTH = 500
model_name = 'update_model'
distribute_dir = conf.distributed_datadir
filelist = os.listdir(data_dir)
#filelist = os.listdir(data_dir)
#filelist = [os.path.join(distribute_dir,i) for i in filelist]
network_dir = conf.distributed_server_weight_dir
for f in filelist:
src = os.path.join(distribute_dir,f)
dst = os.path.join(data_dir,f)
shutil.move(src,dst)
filelist = [os.path.join(data_dir,i) for i in filelist]
# + active=""
# filelist[0].split('.')[-2].split('_')[-1]
# -
#filelist = filelist[:1000]
len(filelist)
labels = common.board.create_uci_labels()
label2ind = dict(zip(labels,list(range(len(labels)))))
# + active=""
# pgn2value = dict(pd.read_csv('./data/resultlist.csv').values[:,1:])
# -
rev_ab = dict(zip('abcdefghi','abcdefghi'[::-1]))
rev_num = dict(zip('0123456789','0123456789'[::-1]))
class ElePreloader(object):
def __init__(self,filelist,batch_size=64):
self.batch_size=batch_size
#content = pd.read_csv(datafile,header=None,index_col=None)
self.filelist = filelist#[i[0] for i in content.get_values()]
self.pos = 0
self.feature_list = {"red":['A', 'B', 'C', 'K', 'N', 'P', 'R']
,"black":['a', 'b', 'c', 'k', 'n', 'p', 'r']}
self.batch_size = batch_size
self.batch_iter = self.iter()
assert(len(self.filelist) > batch_size)
#self.game_iterlist = [None for i in self.filelist]
def iter(self):
retx1,rety1,retx2,rety2 = [],[],[],[]
vals = []
filelist = []
num_filepop = 0
while True:
for i in range(self.batch_size):
filelist = copy.copy(self.filelist)
random.shuffle(filelist)
#if self.game_iterlist[i] == None:
# if len(filelist) == 0:
# filelist = copy.copy(self.filelist)
# random.shuffle(filelist)
# self.game_iterlist[i] = convert_game_value(filelist.pop(),self.feature_list,None)
# num_filepop += 1
#game_iter = self.game_iterlist[i]
#x1,y1,val1 = game_iter.__next__()
for one_file in filelist:
try:
for x1,y1,val1 in convert_game_value(one_file,self.feature_list,None):
x1 = np.transpose(x1,[1,2,0])
x1 = np.expand_dims(x1,axis=0)
#if random.random() < 0.5:
# y1 = [rev_ab[y1[0]],y1[1],rev_ab[y1[2]],y1[3]]
# x1 = x1[:,:,::-1,:]
# #x1 = np.concatenate((x1[:,::-1,:,7:],x1[:,::-1,:,:7]),axis=-1)
retx1.append(x1)
#rety1.append(y1)
oney = np.zeros(len(labels))
oney[label2ind[''.join(y1)]] = 1
rety1.append(oney)
vals.append(val1)
if len(retx1) >= self.batch_size:
yield (np.concatenate(retx1,axis=0),np.asarray(rety1),np.asarray(vals),num_filepop)
retx1,rety1 = [],[]
vals = []
num_filepop = 0
except:
print(one_file)
import traceback
traceback.print_exc()
continue
num_filepop += 1
#print(one_file)
def __getitem__(self, id):
#pass
x1,y1,val1,num_filepop = self.batch_iter.__next__()
return x1,y1,val1,num_filepop
def __len__(self):
return len(self.filelist)
filelist[0]
trainset = ElePreloader(filelist=filelist,batch_size=BATCH_SIZE)
with tf.device("/gpu:{}".format(GPU_CORE[0])):
coord = tf.train.Coordinator()
trainflow = FeedDictFlow({
'data':trainset,
},coord,batch_size=BATCH_SIZE,shuffle=False,continuous=True,num_threads=1)
trainflow.start()
# + active=""
# testset = ElePreloader(datafile='data/test_list.csv',batch_size=BATCH_SIZE)
# with tf.device("/gpu:{}".format(GPU_CORE[0])):
# coord = tf.train.Coordinator()
# testflow = FeedDictFlow({
# 'data':testset,
# },coord,batch_size=BATCH_SIZE,shuffle=True,continuous=True,num_threads=1)
# testflow.start()
# -
sample_x1,sample_y1,sample_value,sample_num = trainflow.next()['data']
print(sample_num,sample_value)
trainset.filelist[4]
filepops = []
for sample_x1,sample_y1,sample_value,num_filepop in trainset.iter():
#xx = x1
filepops.append(num_filepop)
print(len(filepops),num_filepop)
break
# complete_number
# sample_x1,sample_y1,sample_value = testflow.next()['data']
sample_x1.shape,sample_y1.shape,sample_value.shape
labels[np.argmax(sample_y1[0])]
np.sum(sample_x1[0],axis=-1)
# !mkdir models
import os
if not os.path.exists("models/{}".format(model_name)):
os.mkdir("models/{}".format(model_name))
N_BATCH = len(trainset)
#N_BATCH_TEST = 300 * (128 / BATCH_SIZE)
len(trainset)
N_BATCH#,N_BATCH_TEST
latest_netname = NetMatainer(None,network_dir).get_latest()
latest_netname
from net.resnet import get_model
(sess,graph),((X,training),(net_softmax,value_head,train_op_policy,train_op_value,policy_loss,accuracy_select,global_step,value_loss,nextmove,learning_rate,score)) = \
get_model('{}/{}'.format(conf.distributed_server_weight_dir,latest_netname),labels,GPU_CORE=GPU_CORE,FILTERS=128,NUM_RES_LAYERS=7,extra=True)
# +
#with graph.as_default():
# sess.run(tf.global_variables_initializer())
# -
# with graph.as_default():
# train_epoch = 58
# train_batch = 0
# saver = tf.train.Saver(var_list=tf.global_variables())
# saver.restore(sess,"models/{}/model_{}".format(model_name,train_epoch - 1))
train_epoch = 1
train_batch = 0
# +
restore = True
N_EPOCH = 3
DECAY_EPOCH = 20
class ExpVal:
def __init__(self,exp_a=0.97):
self.val = None
self.exp_a = exp_a
def update(self,newval):
if self.val == None:
self.val = newval
else:
self.val = self.exp_a * self.val + (1 - self.exp_a) * newval
def getval(self):
return round(self.val,2)
expacc_move = ExpVal()
exploss = ExpVal()
expsteploss = ExpVal()
begining_learning_rate = 1e-2
pred_image = None
if restore == False:
train_epoch = 1
train_batch = 0
for one_epoch in range(train_epoch,N_EPOCH):
trainset = ElePreloader(filelist=filelist,batch_size=BATCH_SIZE)
train_epoch = one_epoch
pb = ProgressBar(worksum=N_BATCH,info=" epoch {} batch {}".format(train_epoch,train_batch))
pb.startjob()
#for one_batch in range(N_BATCH):
one_batch = 0
for batch_x,batch_y,batch_v,one_finish_sum in trainset.iter():
one_batch += 1
if pb.finishsum > pb.worksum - 100: # 100 buffer
break
#batch_x,batch_y,batch_v = trainflow.next()['data']
batch_v = np.expand_dims(np.nan_to_num(batch_v),1)
# learning rate decay strategy
batch_lr = begining_learning_rate * 2 ** -(one_epoch // DECAY_EPOCH)
with graph.as_default():
_,step_loss,step_acc_move,step_value = sess.run(
[train_op_policy,policy_loss,accuracy_select,global_step],feed_dict={
X:batch_x,nextmove:batch_y,learning_rate:batch_lr,training:True,
})
_,step_value_loss,step_val_predict = sess.run(
[train_op_value,value_loss,value_head],feed_dict={
X:batch_x,learning_rate:batch_lr,training:True,score:batch_v,
})
#batch_v = - batch_v
#batch_x = np.concatenate((batch_x[:,::-1,:,7:],batch_x[:,::-1,:,:7]),axis=-1)
#_,step_value_loss,step_val_predict = sess.run(
# [train_op_value,value_loss,value_head],feed_dict={
# X:batch_x,learning_rate:batch_lr,training:True,score:batch_v,
# })
step_acc_move *= 100
expacc_move.update(step_acc_move)
exploss.update(step_loss)
expsteploss.update(step_value_loss)
pb.info = "EPOCH {} STEP {} LR {} ACC {} LOSS {} value_loss {}".format(
one_epoch,one_batch,batch_lr,expacc_move.getval(),exploss.getval(),expsteploss.getval())
pb.complete(one_finish_sum)
print()
with graph.as_default():
saver = tf.train.Saver(var_list=tf.global_variables())
saver.save(sess,"../data/models/{}/model_{}".format(model_name,one_epoch))
# -
with graph.as_default():
saver = tf.train.Saver(var_list=tf.global_variables())
saver.save(sess,"../data/models/{}_model_{}".format(model_name,one_epoch))
batch_x.shape
"models/{}/model_{}".format(model_name,one_epoch)
# !ls -l 'models/update_model/model_2.data-00000-of-00001'
model_name
for f in ['data-00000-of-00001','meta','index']:
src = "models/{}/model_{}.{}".format(model_name,one_epoch,f)
dst = os.path.join(network_dir,"{}.{}".format(stamp,f))
shutil.copyfile(src,dst)
sorted([i[:-6] for i in os.listdir('data/prepare_weight/') if '.index' in i])[::-1][:2]
import os
new_name, old_name = sorted([i[:-6] for i in os.listdir('data/prepare_weight/') if '.index' in i])[::-1][:2]
new_name, old_name
| ipynbs/model_update.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Customizing Logical Types and Type Inference
#
# The default type system in Woodwork contains many built-in LogicalTypes that work for a wide variety of datasets. For situations in which the built-in LogicalTypes are not sufficient, Woodwork allows you to create custom LogicalTypes.
#
# Woodwork also has a set of standard type inference functions that can help in automatically identifying correct LogicalTypes in the data. Woodwork also allows you to override these existing functions, or add new functions for inferring any custom LogicalTypes that are added.
#
# This guide provides an overview of how to create custom LogicalTypes as well as how to override and add new type inference functions. If you need to learn more about the existing types and tags in Woodwork, refer to the [Understanding Logical Types and Semantic Tags](logical_types_and_semantic_tags.ipynb) guide for more detail. If you need to learn more about how to set and update these types and tags on a DataFrame, refer to the
# [Working with Types and Tags](working_with_types_and_tags.ipynb) guide for more detail.
#
# ## Viewing Built-In Logical Types
#
# To view all of the default LogicalTypes in Woodwork, use the `list_logical_types` function. If the existing types are not sufficient for your needs, you can create and register new LogicalTypes for use with Woodwork initialized DataFrames and Series.
# +
import woodwork as ww
ww.list_logical_types()
# -
# ## Registering a New LogicalType
#
# The first step in registering a new LogicalType is to define the class for the new type. This is done by sub-classing the built-in `LogicalType` class. There are a few class attributes that should be set when defining this new class. Each is reviewed in more detail below.
#
# For this example, you will work through an example for a dataset that contains [UPC Codes](https://en.wikipedia.org/wiki/Universal_Product_Code). First create a new `UPCCode` LogicalType. For this example, consider the UPC Code to be a type of categorical variable.
# +
from woodwork.logical_types import LogicalType
class UPCCode(LogicalType):
"""Represents Logical Types that contain 12-digit UPC Codes."""
primary_dtype = 'category'
backup_dtype = 'string'
standard_tags = {'category', 'upc_code'}
# -
# When defining the `UPCCode` LogicalType class, three class attributes were set. All three of these attributes are optional, and will default to the values defined on the `LogicalType` class if they are not set when defining the new type.
#
# - `primary_dtype`: This value specifies how the data will be stored. If the column of the dataframe is not already of this type, Woodwork will convert the data to this dtype. This should be specified as a string that represents a valid pandas dtype. If not specified, this will default to `'string'`.
# - `backup_dtype`: This is primarily useful when working with Koalas dataframes. `backup_dtype` specifies the dtype to use if Woodwork is unable to convert to the dtype specified by `primary_dtype`. In our example, we set this to `'string'` since Koalas does not currently support the `'category'` dtype.
# - `standard_tags`: This is a set of semantic tags to apply to any column that is set with the specified LogicalType. If not specified, `standard_tags` will default to an empty set.
# - docstring: Adding a docstring for the class is optional, but if specified, this text will be used for adding a description of the type in the list of available types returned by `ww.list_logical_types()`.
# + raw_mimetype="text/restructuredtext" active=""
# .. note::
# Behind the scenes, Woodwork uses the ``category`` and ``numeric`` semantic tags to determine whether a column is categorical or numeric column, respectively. If the new LogicalType you define represents a categorical or numeric type, you should include the appropriate tag in the set of tags specified for ``standard_tags``.
# -
# Now that you have created the new LogicalType, you can register it with the Woodwork type system so you can use it. All modifications to the type system are performed by calling the appropriate method on the `ww.type_system` object.
ww.type_system.add_type(UPCCode, parent='Categorical')
# If you once again list the available LogicalTypes, you will see the new type you created was added to the list, including the values for description, physical_type and standard_tags specified when defining the `UPCCode` LogicalType.
ww.list_logical_types()
# ### Logical Type Relationships
#
# When adding a new type to the type system, you can specify an optional parent LogicalType as done above. When performing type inference a given set of data might match multiple different LogicalTypes. Woodwork uses the parent-child relationship defined when registering a type to determine which type to infer in this case.
#
# When multiple matches are found, Woodwork will return the most specific type match found. By setting the parent type to `Categorical` when registering the `UPCCode` LogicalType, you are telling Woodwork that if a data column matches both `Categorical` and `UPCCode` during inference, the column should be considered as `UPCCode` as this is more specific than `Categorical`. Woodwork always assumes that a child type is a more specific version of the parent type.
# ## Working with Custom LogicalTypes
#
# Next, you will create a small sample DataFrame to demonstrate use of the new custom type. This sample DataFrame includes an id column, a column with valid UPC Codes, and a column that should not be considered UPC Codes because it contains non-numeric values.
import pandas as pd
df = pd.DataFrame({
'id': [0, 1, 2, 3],
'code': ['012345412359', '122345712358', '012345412359', '012345412359'],
'not_upc': ['abcdefghijkl', '122345712358', '012345412359', '022323413459']
})
# Use a with block setting override to update Woodwork's default threshold for differentiating between a `Unknown` and `Categorical` column so that Woodwork will correctly recognize the `code` column as a `Categorical` column. After setting the threshold, initialize Woodwork and verify that Woodwork has identified our column as `Categorical`.
with ww.config.with_options(categorical_threshold=0.5):
df.ww.init()
df.ww
# The reason Woodwork did not identify the `code` column to have a `UPCCode` LogicalType, is that you have not yet defined an inference function to use with this type. The inference function is what tells Woodwork how to match columns to specific LogicalTypes.
#
# Even without the inference function, you can manually tell Woodwork that the `code` column should be of type `UPCCode`. This will set the physical type properly and apply the standard semantic tags you have defined
df.ww.init(logical_types = {'code': 'UPCCode'})
df.ww
# Next, add a new inference function and allow Woodwork to automatically set the correct type for the `code` column.
#
# ## Defining Custom Inference Functions
#
# The first step in adding an inference function for the `UPCCode` LogicalType is to define an appropriate function. Inference functions always accept a single parameter, a `pandas.Series`. The function should return `True` if the series is a match for the LogicalType for which the function is associated, or `False` if the series is not a match.
#
# For the `UPCCode` LogicalType, define a function to check that all of the values in a column are 12 character strings that contain only numbers. Note, this function is for demonstration purposes only and may not catch all cases that need to be considered for properly identifying a UPC Code.
def infer_upc_code(series):
# Make sure series contains only strings:
if not series.apply(type).eq(str).all():
return False
# Check that all items are 12 characters long
if all(series.str.len() == 12):
# Try to convert to a number
try:
series.astype('int')
return True
except:
return False
return False
# After defining the new UPC Code inference function, add it to the Woodwork type system so it can be used when inferring column types.
ww.type_system.update_inference_function('UPCCode', inference_function=infer_upc_code)
# After updating the inference function, you can reinitialize Woodwork on the DataFarme. Notice that Woodwork has correctly identified the `code` column to have a LogicalType of `UPCCode` and has correctly set the physical type and added the standard tags to the semantic tags for that column.
#
# Also note that the `not_upc` column was identified as `Categorical`. Even though this column contains 12-digit strings, some of the values contain letters, and our inference function correctly told Woodwork this was not valid for the `UPCCode` LogicalType.
df.ww.init()
df.ww
# ## Overriding Default Inference Functions
#
# Overriding the default inference functions is done with the `update_inference_function` TypeSystem method. Simply pass in the LogicalType for which you want to override the function, along with the new function to use.
#
# For example you can tell Woodwork to use the new `infer_upc_code` function for the built in `Categorical` LogicalType.
ww.type_system.update_inference_function('Categorical', inference_function=infer_upc_code)
# If you initialize Woodwork on a DataFrame after updating the `Categorical` function, you can see that the `not_upc` column is no longer identified as a `Categorical` column, but is rather set to the default `Unknown` LogicalType. This is because the letters in the first row of the `not_upc` column cause our inference function to return `False` for this column, while the default `Categorical` function will allow non-numeric values to be present. After updating the inference function, this column is no longer considered a match for the `Categorical` type, nor does the column match any other logical types. As a result, the LogicalType is set to `Unknown`, the default type used when no type matches are found.
df.ww.init()
df.ww
# ## Updating LogicalType Relationships
#
# If you need to change the parent for a registered LogicalType, you can do this with the `update_relationship` method. Update the new `UPCCode` LogicalType to be a child of `NaturalLanguage` instead.
ww.type_system.update_relationship('UPCCode', parent='NaturalLanguage')
# The parent for a logical type can also be set to `None` to indicate this is a root-level LogicalType that is not a child of any other existing LogicalType.
ww.type_system.update_relationship('UPCCode', parent=None)
# Setting the proper parent-child relationships between logical types is important. Because Woodwork will return the most specific LogicalType match found during inference, improper inference can occur if the relationships are not set correctly.
#
# As an example, if you initialize Woodwork after setting the `UPCCode` LogicalType to have a parent of `None`, you will now see that the UPC Code column is inferred as `Categorical` instead of `UPCCode`. After setting the parent to `None`, `UPCCode` and `Categorical` are now siblings in the relationship graph instead of having a parent-child relationship as they did previously. When Woodwork finds multiple matches on the same level in the relationship graph, the first match is returned, which in this case is `Categorical`. Without proper parent-child relationships set, Woodwork is unable to determine which LogicalType is most specific.
df.ww.init()
df.ww
# ## Removing a LogicalType
# If a LogicalType is no longer needed, or is unwanted, it can be removed from the type system with the `remove_type` method. When a LogicalType has been removed, a value of `False` will be present in the `is_registered` column for the type. If a LogicalType that has children is removed, all of the children types will have their parent set to the parent of the LogicalType that is being removed, assuming a parent was defined.
#
# Remove the custom `UPCCode` type and confirm it has been removed from the type system by listing the available LogicalTypes. You can confirm that the `UPCCode` type will no longer be used because it will have a value of `False` listed in the `is_registered` column.
ww.type_system.remove_type('UPCCode')
ww.list_logical_types()
# ## Resetting Type System to Defaults
#
# Finally, if you made multiple changes to the default Woodwork type system and would like to reset everything back to the default state, you can use the `reset_defaults` method as shown below. This unregisters any new types you have registered, resets all relationships to their default values and sets all inference functions back to their default functions.
ww.type_system.reset_defaults()
# ## Overriding Default LogicalTypes
#
# There may be times when you would like to override Woodwork's default LogicalTypes. An example might be if you wanted to use the nullable `Int64` dtype for the `Integer` LogicalType instead of the default dtype of `int64`. In this case, you want to stop Woodwork from inferring the default `Integer` LogicalType and have a compatible Logical Type inferred instead. You may solve this issue in one of two ways.
#
# First, you can create an entirely new LogicalType with its own name, `MyInteger`, and register it in the TypeSystem. If you want to infer it in place of the normal `Integer` LogicalType, you would remove `Integer` from the type system, and use `Integer`'s default inference function for `MyInteger`. Doing this will make it such that `MyInteger` will get inferred any place that `Integer` would have previously. Note, that because `Integer` has a parent LogicalType of `IntegerNullable`, you also need to set the parent of `MyInteger` to be `IntegerNullable` when registering with the type system.
# +
from woodwork.logical_types import LogicalType
class MyInteger(LogicalType):
primary_dtype = 'Int64'
standard_tags = {'numeric'}
int_inference_fn = ww.type_system.inference_functions[ww.logical_types.Integer]
ww.type_system.remove_type(ww.logical_types.Integer)
ww.type_system.add_type(MyInteger, int_inference_fn, parent='IntegerNullable')
df.ww.init()
df.ww
# -
# Above, you can see that the `id` column, which was previously inferred as `Integer` is now inferred as `MyInteger` with the `Int64` physical type. In the full list of Logical Types at `ww.list_logical_types()`, `Integer` and `MyInteger` will now both be present, but `Integer`'s `is_registered` will be False while the value for `is_registered` for `MyInteger` will be set to True.
#
# The second option for overriding the default Logical Types allows you to create a new LogicalType with the same name as an existing one. This might be desirable because it will allow Woodwork to interpret the string `'Integer'` as your new LogicalType, allowing previous code that might have selected `'Integer'` to be used without updating references to a new LogicalType like `MyInteger`.
#
# Before adding a LogicalType whose name already exists into the TypeSystem, you must first unregister the default LogicalType.
#
# In order to avoid using the same name space locally between Integer LogicalTypes, it is recommended to reference Woodwork's default LogicalType as `ww.logical_types.Integer`.
# +
ww.type_system.reset_defaults()
class Integer(LogicalType):
primary_dtype = 'Int64'
standard_tags = {'numeric'}
int_inference_fn = ww.type_system.inference_functions[ww.logical_types.Integer]
ww.type_system.remove_type(ww.logical_types.Integer)
ww.type_system.add_type(Integer, int_inference_fn, parent='IntegerNullable')
df.ww.init()
display(df.ww)
ww.type_system.reset_defaults()
# -
# Notice how `id` now gets inferred as an `Integer` Logical Type that has `Int64` as its Physical Type!
| docs/source/guides/custom_types_and_type_inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # December 2017: Advent of Code Solutions
#
# <NAME>
#
# I'm doing the [Advent of Code](https://adventofcode.com) puzzles, just like [last year](https://github.com/norvig/pytudes/blob/master/ipynb/Advent%20of%20Code.ipynb). This time, my terms of engagement are a bit different:
#
# * I won't write a summary of each day's puzzle description. Follow the links in the section headers (e.g. **[Day 1](https://adventofcode.com/2017/day/1)**) to understand what each puzzle is asking.
# * What you see is mostly the algorithm I first came up with first, although sometimes I go back and refactor if I think the original is unclear.
# * I do clean up the code a bit even after I solve the puzzle: adding docstrings, changing variable names, changing input boxes to `assert` statements.
# * I will describe my errors that slowed me down.
# * Some days I start on time and try to code very quickly (although I know that people at the top of the leader board will be much faster than me); other days I end up starting late and don't worry about going quickly.
#
#
#
#
#
# # Day 0: Imports and Utility Functions
#
# I might need these:
# +
# Python 3.x Utility Functions
# %matplotlib inline
import matplotlib.pyplot as plt
import re
import numpy as np
import math
import random
from collections import Counter, defaultdict, namedtuple, deque, abc, OrderedDict
from functools import lru_cache
from statistics import mean, median, mode, stdev, variance
from itertools import (permutations, combinations, chain, cycle, product, islice,
takewhile, zip_longest, count as count_from)
from heapq import heappop, heappush
identity = lambda x: x
letters = 'abcdefghijklmnopqrstuvwxyz'
cache = lru_cache(None)
cat = ''.join
ร = frozenset() # Empty set
inf = float('inf')
BIG = 10 ** 999
################ Functions for Input, Parsing
def Input(day, year=2017):
"Open this day's input file."
return open('data/advent{}/input{}.txt'.format(year, day))
def array(lines):
"Parse an iterable of str lines into a 2-D array. If `lines` is a str, splitlines."
if isinstance(lines, str): lines = lines.splitlines()
return mapt(vector, lines)
def vector(line):
"Parse a str into a tuple of atoms (numbers or str tokens)."
return mapt(atom, line.replace(',', ' ').split())
def integers(text):
"Return a tuple of all integers in a string."
return mapt(int, re.findall(r'\b[-+]?\d+\b', text))
def atom(token):
"Parse a str token into a number, or leave it as a str."
try:
return int(token)
except ValueError:
try:
return float(token)
except ValueError:
return token
################ Functions on Iterables
def first(iterable, default=None): return next(iter(iterable), default)
def first_true(iterable, pred=None, default=None):
"""Returns the first true value in the iterable.
If no true value is found, returns *default*
If *pred* is not None, returns the first item
for which pred(item) is true."""
# first_true([a,b,c], default=x) --> a or b or c or x
# first_true([a,b], fn, x) --> a if fn(a) else b if fn(b) else x
return next(filter(pred, iterable), default)
def nth(iterable, n, default=None):
"Returns the nth item of iterable, or a default value"
return next(islice(iterable, n, None), default)
def upto(iterable, maxval):
"From a monotonically increasing iterable, generate all the values <= maxval."
# Why <= maxval rather than < maxval? In part because that's how Ruby's upto does it.
return takewhile(lambda x: x <= maxval, iterable)
def groupby(iterable, key=identity):
"Return a dict of {key(item): [items...]} grouping all items in iterable by keys."
groups = defaultdict(list)
for item in iterable:
groups[key(item)].append(item)
return groups
def grouper(iterable, n, fillvalue=None):
"""Collect data into fixed-length chunks:
grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx"""
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
def overlapping(iterable, n):
"""Generate all (overlapping) n-element subsequences of iterable.
overlapping('ABCDEFG', 3) --> ABC BCD CDE DEF EFG"""
if isinstance(iterable, abc.Sequence):
yield from (iterable[i:i+n] for i in range(len(iterable) + 1 - n))
else:
result = deque(maxlen=n)
for x in iterable:
result.append(x)
if len(result) == n:
yield tuple(result)
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
return overlapping(iterable, 2)
def sequence(iterable, type=tuple):
"Coerce iterable to sequence: leave alone if already a sequence, else make it `type`."
return iterable if isinstance(iterable, abc.Sequence) else type(iterable)
def join(iterable, sep=''):
"Join the items in iterable, converting each to a string first."
return sep.join(map(str, iterable))
def powerset(iterable):
"Yield all subsets of items."
items = list(iterable)
for r in range(len(items)+1):
for c in combinations(items, r):
yield c
def quantify(iterable, pred=bool):
"Count how many times the predicate is true."
return sum(map(pred, iterable))
def shuffled(iterable):
"Create a new list out of iterable, and shuffle it."
new = list(iterable)
random.shuffle(new)
return new
flatten = chain.from_iterable
class Set(frozenset):
"A frozenset, but with a prettier printer."
def __repr__(self): return '{' + join(sorted(self), ', ') + '}'
def canon(items, typ=None):
"Canonicalize these order-independent items into a hashable canonical form."
typ = typ or (cat if isinstance(items, str) else tuple)
return typ(sorted(items))
def mapt(fn, *args):
"Do a map, and make the results into a tuple."
return tuple(map(fn, *args))
################ Math Functions
def transpose(matrix): return tuple(zip(*matrix))
def isqrt(n):
"Integer square root (rounds down)."
return int(n ** 0.5)
def ints(start, end):
"The integers from start to end, inclusive: range(start, end+1)"
return range(start, end + 1)
def floats(start, end, step=1.0):
"Yields from start to end (inclusive), by increments of step."
m = (1.0 if step >= 0 else -1.0)
while start * m <= end * m:
yield start
start += step
def multiply(numbers):
"Multiply all the numbers together."
result = 1
for n in numbers:
result *= n
return result
import operator as op
operations = {'>': op.gt, '>=': op.ge, '==': op.eq,
'<': op.lt, '<=': op.le, '!=': op.ne,
'+': op.add, '-': op.sub, '*': op.mul,
'/': op.truediv, '**': op.pow}
################ 2-D points implemented using (x, y) tuples
def X(point): x, y = point; return x
def Y(point): x, y = point; return y
origin = (0, 0)
UP, DOWN, LEFT, RIGHT = (0, 1), (0, -1), (-1, 0), (1, 0)
def neighbors4(point):
"The four neighboring squares."
x, y = point
return ( (x, y-1),
(x-1, y), (x+1, y),
(x, y+1))
def neighbors8(point):
"The eight neighboring squares."
x, y = point
return ((x-1, y-1), (x, y-1), (x+1, y-1),
(x-1, y), (x+1, y),
(x-1, y+1), (x, y+1), (x+1, y+1))
def cityblock_distance(p, q=origin):
"Manhatten distance between two points."
return abs(X(p) - X(q)) + abs(Y(p) - Y(q))
def distance(p, q=origin):
"Hypotenuse distance between two points."
return math.hypot(X(p) - X(q), Y(p) - Y(q))
################ Debugging
def trace1(f):
"Print a trace of the input and output of a function on one line."
def traced_f(*args):
result = f(*args)
print('{}({}) = {}'.format(f.__name__, ', '.join(map(str, args)), result))
return result
return traced_f
def grep(pattern, iterable):
"Print lines from iterable that match pattern."
for line in iterable:
if re.search(pattern, line):
print(line)
################ A* and Breadth-First Search (tracking states, not actions)
def always(value): return (lambda *args: value)
def Astar(start, moves_func, h_func, cost_func=always(1)):
"Find a shortest sequence of states from start to a goal state (where h_func(s) == 0)."
frontier = [(h_func(start), start)] # A priority queue, ordered by path length, f = g + h
previous = {start: None} # start state has no previous state; other states will
path_cost = {start: 0} # The cost of the best path to a state.
Path = lambda s: ([] if (s is None) else Path(previous[s]) + [s])
while frontier:
(f, s) = heappop(frontier)
if h_func(s) == 0:
return Path(s)
for s2 in moves_func(s):
g = path_cost[s] + cost_func(s, s2)
if s2 not in path_cost or g < path_cost[s2]:
heappush(frontier, (g + h_func(s2), s2))
path_cost[s2] = g
previous[s2] = s
def bfs(start, moves_func, goals):
"Breadth-first search"
goal_func = (goals if callable(goals) else lambda s: s in goals)
return Astar(start, moves_func, lambda s: (0 if goal_func(s) else 1))
# +
def tests():
# Functions for Input, Parsing
assert array('''1 2 3
4 5 6''') == ((1, 2, 3),
(4, 5, 6))
assert vector('testing 1 2 3.') == ('testing', 1, 2, 3.0)
# Functions on Iterables
assert first('abc') == first(['a', 'b', 'c']) == 'a'
assert first_true([0, None, False, {}, 42, 43]) == 42
assert nth('abc', 1) == nth(iter('abc'), 1) == 'b'
assert cat(upto('abcdef', 'd')) == 'abcd'
assert cat(['do', 'g']) == 'dog'
assert groupby([-3, -2, -1, 1, 2], abs) == {1: [-1, 1], 2: [-2, 2], 3: [-3]}
assert list(grouper(range(8), 3)) == [(0, 1, 2), (3, 4, 5), (6, 7, None)]
assert list(overlapping((0, 1, 2, 3, 4), 3)) == [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
assert list(overlapping('abcdefg', 4)) == ['abcd', 'bcde', 'cdef', 'defg']
assert list(pairwise((0, 1, 2, 3, 4))) == [(0, 1), (1, 2), (2, 3), (3, 4)]
assert sequence('seq') == 'seq'
assert sequence((i**2 for i in range(5))) == (0, 1, 4, 9, 16)
assert join(range(5)) == '01234'
assert join(range(5), ', ') == '0, 1, 2, 3, 4'
assert multiply([1, 2, 3, 4]) == 24
assert transpose(((1, 2, 3), (4, 5, 6))) == ((1, 4), (2, 5), (3, 6))
assert isqrt(9) == 3 == isqrt(10)
assert ints(1, 100) == range(1, 101)
assert identity('anything') == 'anything'
assert set(powerset({1, 2, 3})) == {
(), (1,), (1, 2), (1, 2, 3), (1, 3), (2,), (2, 3), (3,)}
assert quantify(['testing', 1, 2, 3, int, len], callable) == 2 # int and len are callable
assert quantify([0, False, None, '', [], (), {}, 42]) == 1 # Only 42 is truish
assert set(shuffled('abc')) == set('abc')
assert canon('abecedarian') == 'aaabcdeeinr'
assert canon([9, 1, 4]) == canon({1, 4, 9}) == (1, 4, 9)
assert mapt(math.sqrt, [1, 9, 4]) == (1, 3, 2)
# Math
assert transpose([(1, 2, 3), (4, 5, 6)]) == ((1, 4), (2, 5), (3, 6))
assert isqrt(10) == isqrt(9) == 3
assert ints(1, 5) == range(1, 6)
assert list(floats(1, 5)) == [1., 2., 3., 4., 5.]
assert multiply(ints(1, 10)) == math.factorial(10) == 3628800
# 2-D points
P = (3, 4)
assert X(P) == 3 and Y(P) == 4
assert cityblock_distance(P) == cityblock_distance(P, origin) == 7
assert distance(P) == distance(P, origin) == 5
# Search
assert Astar((4, 4), neighbors8, distance) == [(4, 4), (3, 3), (2, 2), (1, 1), (0, 0)]
assert bfs((4, 4), neighbors8, {origin}) == [(4, 4), (3, 3), (2, 2), (1, 1), (0, 0)]
forty2 = always(42)
assert forty2() == forty2('?') == forty2(4, 2) == 42
return 'pass'
tests()
# -
# # [Day 1](https://adventofcode.com/2017/day/1): Inverse Captcha
#
# This was easier than I remember last year's puzzles being:
#
digits = mapt(int, '3294199471327195994824832197564859876682638188889768298894243832665654681412886862234525991553276578641265589959178414218389329361496673991614673626344552179413995562266818138372393213966143124914469397692587251112663217862879233226763533911128893354536353213847122251463857894159819828724827969576432191847787772732881266875469721189331882228146576832921314638221317393256471998598117289632684663355273845983933845721713497811766995367795857965222183668765517454263354111134841334631345111596131682726196574763165187889337599583345634413436165539744188866156771585647718555182529936669683581662398618765391487164715724849894563314426959348119286955144439452731762666568741612153254469131724137699832984728937865956711925592628456617133695259554548719328229938621332325125972547181236812263887375866231118312954369432937359357266467383318326239572877314765121844831126178173988799765218913178825966268816476559792947359956859989228917136267178571776316345292573489873792149646548747995389669692188457724414468727192819919448275922166321158141365237545222633688372891451842434458527698774342111482498999383831492577615154591278719656798277377363284379468757998373193231795767644654155432692988651312845433511879457921638934877557575241394363721667237778962455961493559848522582413748218971212486373232795878362964873855994697149692824917183375545192119453587398199912564474614219929345185468661129966379693813498542474732198176496694746111576925715493967296487258237854152382365579876894391815759815373319159213475555251488754279888245492373595471189191353244684697662848376529881512529221627313527441221459672786923145165989611223372241149929436247374818467481641931872972582295425936998535194423916544367799522276914445231582272368388831834437562752119325286474352863554693373718848649568451797751926315617575295381964426843625282819524747119726872193569785611959896776143539915299968276374712996485367853494734376257511273443736433464496287219615697341973131715166768916149828396454638596713572963686159214116763')
N = len(digits)
N
sum(digits[i]
for i in range(N)
if digits[i] == digits[i - 1])
# **Part Two**:
sum(digits[i]
for i in range(N)
if digits[i] == digits[i - N // 2])
# # [Day 2](https://adventofcode.com/2017/day/2): Corruption Checksum
#
rows2 = array('''790 99 345 1080 32 143 1085 984 553 98 123 97 197 886 125 947
302 463 59 58 55 87 508 54 472 63 469 419 424 331 337 72
899 962 77 1127 62 530 78 880 129 1014 93 148 239 288 357 424
2417 2755 254 3886 5336 3655 5798 3273 5016 178 270 6511 223 5391 1342 2377
68 3002 3307 166 275 1989 1611 364 157 144 3771 1267 3188 3149 156 3454
1088 1261 21 1063 1173 278 1164 207 237 1230 1185 431 232 660 195 1246
49 1100 136 1491 647 1486 112 1278 53 1564 1147 1068 809 1638 138 117
158 3216 1972 2646 3181 785 2937 365 611 1977 1199 2972 201 2432 186 160
244 86 61 38 58 71 243 52 245 264 209 265 308 80 126 129
1317 792 74 111 1721 252 1082 1881 1349 94 891 1458 331 1691 89 1724
3798 202 3140 3468 1486 2073 3872 3190 3481 3760 2876 182 2772 226 3753 188
2272 6876 6759 218 272 4095 4712 6244 4889 2037 234 223 6858 3499 2358 439
792 230 886 824 762 895 99 799 94 110 747 635 91 406 89 157
2074 237 1668 1961 170 2292 2079 1371 1909 221 2039 1022 193 2195 1395 2123
8447 203 1806 6777 278 2850 1232 6369 398 235 212 992 7520 7304 7852 520
3928 107 3406 123 2111 2749 223 125 134 146 3875 1357 508 1534 4002 4417''')
sum(abs(max(row) - min(row)) for row in rows2)
# **Part Two**:
# +
def evendiv(row):
return first(a // b for a in row for b in row if a > b and a // b == a / b)
sum(map(evendiv, rows2))
# -
# This day was also very easy. It was nice that my pre-defined `array` function did the whole job of parsing the input. In Part One, I was slowed down by a typo: I had `"="` instead of `"-"` in `"max(row) - min(row)"`. I was confused by Python's misleading error message, which said `"SyntaxError: keyword can't be an expression"`. Later on, <NAME> explained to me that the message meant that in `abs(max(row)=...)` it thought that `max(row)` was a keyword argument to `abs`, as in `abs(x=-1)`.
#
# In Part Two, note that to check that `a/b` is an exact integer, I used `a // b == a / b`, which I think is more clear than the marginally-faster expression one would typically use here, `a % b == 0`, which requires you to think about two things: division and the modulus operator (is it `a % b` or `b % a`?).
# # [Day 3](https://adventofcode.com/2017/day/3): Spiral Memory
#
# For today the data is just one number:
M = 277678
# This puzzle takes some thinking, not just fast typing. I decided to break the problem into three parts:
# - Generate a spiral (by writing a new function called `spiral`).
# - Find the Nth square on the spiral (with my function `nth`).
# - Find the distance from that square to the center (with my function `cityblock_distance`).
#
# I suspect many people will do all three of these in one function. That's probably the best way to get the answer really quickly, but I'd rather be clear than quick (and I'm anticipating that `spiral` will come in handy in Part Two), so I'll factor out each part, obeying the *single responsibility principle*.
#
# Now I need to make `spiral()` generate the coordinates of squares on an infinite spiral, in order, going out from the center square, `(0, 0)`. After the center square, the spiral goes 1 square right, then 1 square up, then 2 square left, then 2 square down, thus completing one revolution; then it does subsequent revolutions. In general if the previous revolution ended with *s* squares down, then the next revolution consists of *s*+1 squares right, *s*+1 squares up, *s*+2 squares left and *s*+2 down. A small test confirms that this matches the example diagram in the puzzle description (although I had a bug on my first try because I only incremented `s` once per revolution, not twice):
# +
def spiral():
"Yield successive (x, y) coordinates of squares on a spiral."
x = y = s = 0 # (x, y) is the position; s is the side length.
yield (x, y)
while True:
for (dx, dy) in (RIGHT, UP, LEFT, DOWN):
if dy: s += 1 # Increment side length before RIGHT and LEFT
for _ in range(s):
x += dx; y += dy
yield (x, y)
list(islice(spiral(), 10))
# -
# Now we can find the `N`th square. As this is Python, indexes start at 0, whereas the puzzle description starts counting at 1, so I have to subtract 1. Then I can find the distance to the origin:
nth(spiral(), M - 1)
cityblock_distance(_)
# For **Part Two** I can re-use my `spiral` generator, yay! Here's a function to sum the neighboring squares (I can use my `neighbors8` function, yay!):
def spiralsums():
"Yield the values of a spiral where each square has the sum of the 8 neighbors."
value = defaultdict(int)
for p in spiral():
value[p] = sum(value[q] for q in neighbors8(p)) or 1
yield value[p]
list(islice(spiralsums(), 12))
# Looks good, so let's get the answer:
first(x for x in spiralsums() if x > M)
# # [Day 4](https://adventofcode.com/2017/day/4): High-Entropy Passphrases
#
# This is the first time I will have to store an input file and read it with the function `Input`. It should be straightforward, though:
# +
def is_valid(line): return is_unique(line.split())
def is_unique(items): return len(items) == len(set(items))
quantify(Input(4), is_valid)
# -
# **Part Two:**
# +
def is_valid2(line): return is_unique(mapt(canon, line.split()))
quantify(Input(4), is_valid2)
# -
# That was easy, and I started on time, but the leaders were still three times faster than me!
# # [Day 5](https://adventofcode.com/2017/day/5): A Maze of Twisty Trampolines, All Alike
#
# Let's first make sure we can read the data/program okay:
# +
program = mapt(int, Input(5))
program[:10]
# -
# Now I'll make a little interpreter, `run`, which takes a program, loads it into memory,
# and executes the instruction, maintaining a program counter, `pc`, and doing the incrementing/branching as described in the puzzle,
# until the program counter is out of range:
# +
def run(program):
memory = list(program)
pc = steps = 0
while pc in range(len(memory)):
steps += 1
oldpc = pc
pc += memory[pc]
memory[oldpc] += 1
return steps
run(program)
# -
# **Part Two:**
#
# Part Two seems tricky, so I'll include an optional argument, `verbose`, and check if the printout it produces matches the example in the puzzle description:
# +
def run2(program, verbose=False):
memory = list(program)
pc = steps = 0
while pc in range(len(memory)):
steps += 1
oldpc = pc
pc += memory[pc]
memory[oldpc] += (-1 if memory[oldpc] >= 3 else 1)
if verbose: print(steps, pc, memory)
return steps
run2([0, 3, 0, 1, -3], True)
# -
# That looks right, so I can solve the puzzle:
run2(program)
# Thanks to [Clement Sreeves](https://github.com/ClementSreeves) for the suggestion of making a distinction between the `program` and the `memory`. In my first version, `run` would mutate the argument, which was OK for a short exercise, but not best practice for a reliable API.
# # [Day 6](https://adventofcode.com/2017/day/6): Memory Reallocation
# I had to read the puzzle description carefully, but then it is pretty clear what to do. I'll keep a set of previously seen configurations, which will all be tuples. But in the function `spread`, I want to mutate the configuration of banks, so I will convert to a list at the start, then convert back to a tuple at the end.
# +
banks = vector('4 10 4 1 8 4 9 14 5 1 14 15 0 15 3 5')
def realloc(banks):
"How many cycles until we reach a configuration we've seen before?"
seen = {banks}
for cycles in count_from(1):
banks = spread(banks)
if banks in seen:
return cycles
seen.add(banks)
def spread(banks):
"Find the area with the most blocks, and spread them evenly to following areas."
banks = list(banks)
maxi = max(range(len(banks)), key=lambda i: banks[i])
blocks = banks[maxi]
banks[maxi] = 0
for i in range(maxi + 1, maxi + 1 + blocks):
banks[i % len(banks)] += 1
return tuple(banks)
# -
spread((0, 2, 7, 0))
realloc((0, 2, 7, 0))
# These tests look good; let's solve the problem:
realloc(banks)
# **Part Two:** Here I will just replace the `set` of `seen` banks with a `dict` of `{bank: cycle_number}`; everything else is the same, and the final result is the current cycle number minus the cycle number of the previously-seen tuple of banks.
# +
def realloc2(banks):
"When we hit a cycle, what is the length of the cycle?"
seen = {banks: 0}
for cycles in count_from(1):
banks = spread(banks)
if banks in seen:
return cycles - seen[banks]
seen[banks] = cycles
realloc2((0, 2, 7, 0))
# -
realloc2(banks)
# # [Day 7](https://adventofcode.com/2017/day/7): Recursive Circus
# First I'll read the data into two dicts as follows: the input line:
#
# tcmdaji (40) -> wjbdxln, amtqhf
#
# creates:
#
# weight['tcmdaji'] = 40
# above['tcmdaji'] = ['wjbdxln', 'amtqhf']
# +
def towers(lines):
"Return (weight, above) dicts."
weight = {}
above = {}
for line in lines:
name, w, *rest = re.findall(r'\w+', line)
weight[name] = int(w)
above[name] = set(rest)
return weight, above
weight, above = towers(Input(7))
programs = set(above)
# -
# Now the root progam is the one that is not above anything:
programs - set(flatten(above.values()))
# **Part Two:**
#
# A program is *wrong* if it is the bottom of a tower that is a different weight from all its sibling towers:
def wrong(p): return tower_weight(p) not in map(tower_weight, siblings(p))
# Here we define `tower_weight`, `siblings`, and the `below` dict:
# +
def tower_weight(p):
"Total weight for the tower whose root (bottom) is p."
return weight[p] + sum(map(tower_weight, above[p]))
def siblings(p):
"The other programs at the same level as this one."
if p not in below:
return ร # the root has no siblings
else:
return above[below[p]] - {p}
below = {a: b for b in programs for a in above[b]}
# -
set(filter(wrong, programs))
# So these four programs are wrong. Which one should we correct? The one that is wrong, and has no wrong program above it:
# +
def wrongest(programs):
return first(p for p in programs
if wrong(p)
and not any(wrong(p2) for p2 in above[p]))
wrongest(programs)
# -
# Now what should we correct it to? To the weight that makes it the same weight as the sibling towers:
# +
def correct(p):
"Return the weight that would make p's tower's weight the same as its sibling towers."
delta = tower_weight(first(siblings(p))) - tower_weight(p)
return weight[p] + delta
correct(wrongest(programs))
# -
# # [Day 8](https://adventofcode.com/2017/day/8): Memory Reallocation
#
# This one looks easy: a simple interpreter for straight-line code where each instruction has 7 tokens. It is nice that my `array` function parses the whole program.
# +
program8 = array(Input(8))
def run8(program):
"Run the program and return final value of registers."
registers = defaultdict(int)
for (r, inc, delta, _if, r2, cmp, amount) in program:
if operations[cmp](registers[r2], amount):
registers[r] += delta * (+1 if inc == 'inc' else -1)
return registers
max(run8(program8).values())
# -
# **Part Two:**
#
# Here I modify the interpreter to keep track of the highest value of any register at any time.
# +
def run82(program):
registers = defaultdict(int)
highest = 0
for r, inc, delta, _if, r2, cmp, amount in program:
if operations[cmp](registers[r2], amount):
registers[r] += delta * (+1 if inc == 'inc' else -1)
highest = max(highest, registers[r])
return highest
run82(program8)
# -
# # [Day 9](https://adventofcode.com/2017/day/9): Stream Processing
#
# For this problem I could have a single finite-state machine that handles all five magic characters, `'{<!>}'`, but I think it is easier to first clean up the garbage, using regular expressions:
# +
text1 = re.sub(r'!.', '', Input(9).read()) # Delete canceled characters
text2 = re.sub(r'<.*?>', '', text1) # Delete garbage
text2[:70]
# -
# Now I can deal with the nested braces (which can't be handled with regular expressions). The puzzle says "*Each group is assigned a score which is one more than the score of the group that immediately contains it,*" which is the same as saying that a group's score is its nesting level, a quantity that increases with each open-brace character, and decreases with each close-brace:
# +
def total_score(text):
"Total of group scores; each group scores one more than the group it is nested in."
total = 0
level = 0 # Level of nesting
for c in text:
if c == '{':
level += 1
total += level
elif c == '}':
level -= 1
return total
total_score(text2)
# -
# **Part Two:**
#
# At first I thought that the amount of garbage is just the difference in lengths of `text2` and `text3`:
len(text1) - len(text2)
# But this turned out to be wrong; it counts the angle brackets themselves s being deleted, whereas the puzzle is actually asking how many character between the angle brackets are deleted. So that would be:
# +
text3 = re.sub(r'<.*?>', '<>', text1) # Delete garbage inside brackets, but not brackets
len(text1) - len(text3)
# -
# # [Day 10](https://adventofcode.com/2017/day/10): Stream Processing
# I have to do a bunch of reversals of substrings of `stream`. It looks complicated so I will include a `verbose` argument to `knothash` and confirm it works on the example puzzle. I break out the reversal into a separate function, `rev`. The way I handle reversal interacting with wraparound is that I first move all the items before the reversal position to the end of the list, then I do the reversal, then I move them back.
# +
stream = (63,144,180,149,1,255,167,84,125,65,188,0,2,254,229,24)
def knothash(lengths, N=256, verbose=False):
"Do a reversal for each of the numbers in `lengths`."
nums = list(range(N))
pos = skip = 0
for L in lengths:
nums = rev(nums, pos, L)
if verbose: print(nums)
pos = (pos + L + skip) % N
skip += 1
return nums[0] * nums[1]
def rev(nums, pos, L):
"Reverse nums[pos:pos+L], handling wrap-around."
# Move first pos elements to end, reverse first L, move pos elements back
nums = nums[pos:] + nums[:pos]
nums[:L] = reversed(nums[:L])
nums = nums[-pos:] + nums[:-pos]
return nums
# -
# Reverse [0, 1, 2]:
assert rev(list(range(5)), 0, 3) == [2, 1, 0, 3, 4]
# Reverse [4, 0, 1], wrapping around:
assert rev(list(range(5)), 4, 3) == [0, 4, 2, 3, 1]
# Duplicate the example output
assert knothash((3, 4, 1, 5), N=5, verbose=True) == 12
# That's correct, but the first time through I got it wrong because I forgot the `"% N"` on the update of `pos`.
knothash(stream)
# **Part Two**:
#
# Now it gets *really* complicated: string processing, the suffix, hex string output, and dense hashing. But just take them one at a time:
# +
stream2 = '63,144,180,149,1,255,167,84,125,65,188,0,2,254,229,24'
def knothash2(lengthstr, N=256, rounds=64, suffix=(17, 31, 73, 47, 23),
verbose=False):
"Do a reversal for each length; repeat `rounds` times."
nums = list(range(N))
lengths = mapt(ord, lengthstr) + suffix
pos = skip = 0
for round in range(rounds):
for L in lengths:
nums = rev(nums, pos, L)
if verbose: print(nums)
pos = (pos + L + skip) % N
skip += 1
return hexstr(dense_hash(nums))
def hexstr(nums):
"Convert a sequence of (0 to 255) ints into a hex str."
return cat(map('{:02x}'.format, nums))
def dense_hash(nums, blocksize=16):
"XOR each block of nums, return the list of them."
return [XOR(block) for block in grouper(nums, blocksize)]
def XOR(nums):
"Exclusive-or all the numbers together."
result = 0
for n in nums:
result ^= n
return result
assert XOR([65, 27, 9, 1, 4, 3, 40, 50, 91, 7, 6, 0, 2, 5, 68, 22]) == 64
assert hexstr([255, 0, 17]) == 'ff0011'
assert knothash2('') == 'a2582a3a0e66e6e86e3812dcb672a272'
knothash2(stream2)
# -
# I had a bug: originally I used `'{:x}'` as the format instead of `'{:02x}'`; the later correctly formats `0` as `'00'`, not `'0'`.
# # [Day 11](https://adventofcode.com/2017/day/11): Hex Ed
#
# The first thing I did was search [`[hex coordinates]`](https://www.google.com/search?source=hp&ei=Ft4xWoOqKcy4jAOs76a4CQ&q=hex+coordinates), and the #1 result (as I expected) was <NAME>'s "[Hexagonal Grids](https://www.redblobgames.com/grids/hexagons/)" page. I chose his "odd-q vertical layout" to define the six directions as (dx, dy) deltas:
directions6 = dict(n=(0, -1), ne=(1, 0), se=(1, 1), s=(0, 1), sw=(-1, 0), nw=(-1, -1))
# Now I can read the path, follow it, and see where it ends up. If the end point is `(x, y)`, then it will take `max(abs(x), abs(y))` steps to get back to the origin, because each step can increment or decrement either `x` or `y` or both.
# +
path = vector(Input(11).read())
def follow(path):
"Follow each step of the path; return final distance to origin."
x, y = (0, 0)
for (dx, dy) in map(directions6.get, path):
x += dx; y += dy
return max(abs(x), abs(y))
follow(path)
# -
# This one seemed so easy that I didn't bother testing it on the simple examples in the puzzle; all I did was confirm that the answer for my puzzle input was correct.
#
# **Part Two:**
#
# This looks pretty easy; repeat Part One, but keep track of the maximum number of steps we get from the origin at any point in the path:
# +
def follow2(path):
"Follow each step of the path; return max steps to origin."
x = y = maxsteps = 0
for (dx, dy) in map(directions6.get, path):
x += dx; y += dy
maxsteps = max(maxsteps, abs(x), abs(y))
return maxsteps
follow2(path)
# -
# Again, no tests, just the final answer.
#
# # [Day 12](https://adventofcode.com/2017/day/12): Digital Plumber
#
# First I'll parse the data, creating a dict of `{program: direct_group_of_programs}`:
# +
def groups(lines):
"Dict of {i: {directly_connected_to_i}"
return {lhs: {lhs} | set(rhs)
for (lhs, _, *rhs) in array(lines)}
assert groups(Input(12))[0] == {0, 659, 737}
# -
# That looks good. I recognize this as a [Union-Find](https://en.wikipedia.org/wiki/Disjoint-set_data_structure) problem, for which there are efficient algorithms. But for this small example, I don't need efficiency, I need clarity and simplicity. So I'll write `merge` to take a dict and merge together the sets that are connected:
# +
def merge(G):
"Merge all indirectly connected groups together."
for i in G:
for j in list(G[i]):
if G[i] != G[j]:
G[i].update(G[j])
G[j] = G[i]
return G
G = merge(groups(Input(12)))
# -
len(G[0])
# That's the answer for Part One.
#
# **Part Two**
#
# I did almost all the work; I just need to count the number of distinct groups. That's a set of sets, and regular `set`s are not hashable, so I use my `Set` class:
len({Set(G[i]) for i in G})
# # [Day 13](https://adventofcode.com/2017/day/13): Packet Scanners
#
# First thing: The puzzle says the data is *depth: range*, but `range` has a meaning in Python, so I'll use the term *width* instead.
#
# Second thing: I misread the puzzle description and mistakenly thought the scanners were going in a circular route,
# so that they'd be at the top at any time that is 0 mod *width*. That gave the wrong answer and I realized the scanners are actually going back-and-forth, so with a width of size *n*, it takes *n* - 1 steps to get to the bottom, and *n* - 1 steps to get back to the top, so the scanner will be
# at the top at times that are multiples of 2(*n* - 1). For example, with width 3, that would be times 0, 4, 8, ...
# +
def trip_severity(scanners):
"The sum of sevrities for each time the packet is caught."
return sum((d * w if caught(d, w) else 0)
for (d, w) in scanners)
def caught(depth, width):
"Does the scanner at this depth/width catch the packet?"
return depth % (2 * (width - 1)) == 0
example = ((0, 3), (1, 2), (4, 4), (6, 4))
assert trip_severity(example) == 24
# -
scanners = mapt(integers, Input(13))
scanners[:5]
trip_severity(scanners)
# **Part Two**
#
# A packet is safe if no scanner catches it. We now have the possibility of a delay, so I update `caught` to allow for an optional delay, and define `safe_delay`:
# +
def caught(depth, width, delay=0):
"Does the scanner at this depth/width catch the packet with this delay?"
return (depth + delay) % (2 * (width - 1)) == 0
def safe_delay(scanners):
"Find the first delay such that no scanner catches the packet."
safe = lambda delay: not any(caught(d, w, delay) for (d, w) in scanners)
return first(filter(safe, count_from(0)))
safe_delay(example)
# -
safe_delay(scanners)
# # [Day 14](https://adventofcode.com/2017/day/14): Disk Defragmentation
#
# I found this puzzle description confusing: are they talking about what I call `knothash`, or is it `knothash2`? I decided for the latter, which turned out to be right:
key = '<KEY>'
# +
def bits(key, i):
"The bits in the hash of this key with this row number."
hash = knothash2(key + '-' + str(i))
return format(int(hash, base=16), '0128b')
sum(bits(key, i).count('1') for i in range(128))
# -
# **Part Two**
#
# So as not to worry about running off the edge of the grid, I'll surround the grid with `'0'` bits:
def Grid(key, N=128+2):
"Make a grid, with a border around it."
rows = [['0'] + list(bits(key, i)) + ['0'] for i in range(128)]
empty = ['0'] * len(rows[0])
return [empty] + rows + [empty]
# To find a region, start at some `(x, y)` position and [flood fill](https://en.wikipedia.org/wiki/Flood_fill) to neighbors that have the same value (a `'1'` bit).
def flood(grid, x, y, val, R):
"For all cells with value val connected to grid[x][y], give them region number R."
if grid[y][x] == val:
grid[y][x] = R
for x2, y2 in neighbors4((x, y)):
flood(grid, x2, y2, val, R)
def flood_all(grid, val='1'):
"Label all regions with consecutive ints starting at 1."
R = 0 # R is the region number
for y in range(1, len(grid) - 1):
for x in range(1, len(grid) - 1):
if grid[y][x] == val:
R += 1
flood(grid, x, y, val, R)
return R
flood_all(Grid(key))
# # [Day 15](https://adventofcode.com/2017/day/15): Dueling Generators
#
# There are lots of arbitrary integers below: my personalized inputs are `516` and `190`; the other numbers are shared by all puzzle-solvers. I decided to make infinite generators of numbers, using `gen`:
# +
def gen(prev, factor, m=2147483647):
"Generate an infinite sequence of numbers according to the rules."
while True:
prev = (prev * factor) % m
yield prev
def judge(A, B, N=40*10**6, b=16):
"How many of the first N pairs from A and B agree in the last b bits?"
m = 2 ** b
return quantify(next(A) % m == next(B) % m
for _ in range(N))
A = lambda: gen(516, 16807)
B = lambda: gen(190, 48271)
judge(A(), B())
# -
# **Part Two**
#
# A small change: only consider numbers that match the **criteria** of being divisible by 4 or 8, respectively;
# +
def criteria(m, iterable):
"Elements of iterable that are divisible by m"
return (n for n in iterable if n % m == 0)
judge(criteria(4, A()), criteria(8, B()), 5*10**6)
# -
# # [Day 16](https://adventofcode.com/2017/day/16): Permutation Promenade
#
# Let's read the input and check that it looks reasonable:
dance = vector(Input(16).read())
dance[:10]
len(dance)
# I'll define `perform` to perform the dance:
# +
dancers = 'abcdefghijklmnop'
def perform(dance, dancers=dancers):
D = deque(dancers)
def swap(i, j): D[i], D[j] = D[j], D[i]
for move in dance:
op, arg = move[0], move[1:]
if op == 's': D.rotate(int(arg))
elif op == 'x': swap(*integers(arg))
elif op == 'p': swap(D.index(arg[0]), D.index(arg[2]))
return cat(D)
perform(dance)
# -
# That's the right answer.
#
# **Part Two**
#
# My first thought was to define a dance as a permutation: a list of numbers `[11, 1, 9, ...]` which says that the net effect of the dance is that the first dancer (`a`) ends up in position, the second (`b`) stays in position 1, and so on. Applying that permutation once is a lot faster than interpreting all 10,000 moves of the dance, and it is feasible to apply the permutation a billion times. I tried that (code not shown here), but that was a mistake: it took 15 minutes to run, and it got the wrong answer. The problem is that a dance is *not* just a permutation, because a dance can reference dancer *names*, not just positions.
#
# It would take about 10,000 times 20 minutes to perform a billion repetitions of the dance, so that's out. But even though the dance is not a permutation, it might repeat after a short period. Let's check:
seen = {dancers: 0}
d = dancers
for i in range(1, 1000):
d = perform(dance, d)
if d in seen:
print(d, 'is seen in iterations', (seen[d], i))
break
# So we get back to the start position after 56 repetitions of the dance. What happens after a billion repetitions?
1000000000 % 56
# The end position after a billion repetitions is the same as after 48:
# +
def whole(N, dance, dancers=dancers):
"Repeat `perform(dance)` N times."
for i in range(N):
dancers = perform(dance, dancers)
return dancers
whole(48, dance)
# -
# # Wrapping Up: Verification and Timing
#
# Here is a little test harness to verify that I still get the right answers (even if I refactor some of the code):
# +
# %%time
def day(n, compute1, answer1, compute2, answer2):
"Assert that we get the right answers for this day."
assert compute1 == answer1
assert compute2 == answer2
day(1, sum(digits[i] for i in range(N) if digits[i] == digits[i - 1]), 1158,
sum(digits[i] for i in range(N) if digits[i] == digits[i - N // 2]), 1132)
day(2, sum(abs(max(row) - min(row)) for row in rows2), 46402,
sum(map(evendiv, rows2)), 265)
day(3, cityblock_distance(nth(spiral(), M - 1)), 475,
first(x for x in spiralsums() if x > M), 279138)
day(4, quantify(Input(4), is_valid), 337, quantify(Input(4), is_valid2), 231)
day(5, run(program), 364539, run2(program), 27477714)
day(6, realloc(banks), 12841, realloc2(banks), 8038)
day(7, first(programs - set(flatten(above.values()))), 'wiapj',
correct(wrongest(programs)), 1072)
day(8, max(run8(program8).values()), 6828, run82(program8), 7234)
day(9, total_score(text2), 9662, len(text1) - len(text3), 4903)
day(10, knothash(stream), 4480,
knothash2(stream2), 'c500ffe015c83b60fad2e4b7d59dabc4')
day(11, follow(path), 705, follow2(path), 1469)
day(12, len(G[0]), 115, len({Set(G[i]) for i in G}), 221)
day(13, trip_severity(scanners), 1504, safe_delay(scanners), 3823370)
day(14, sum(bits(key, i).count('1') for i in range(128)), 8316,
flood_all(Grid(key)), 1074)
day(15, judge(A(), B()), 597,
judge(criteria(4, A()), criteria(8, B()), 5*10**6), 303)
day(16, perform(dance), 'lbdiomkhgcjanefp',
whole(48, dance), 'ejkflpgnamhdcboi')
# -
# And here is a plot of the time taken to completely solve both parts of each puzzle each day, for me, the first person to finish, and the hundredth person. On days when I started late, I estimate my time and mark it with parens below:
# +
def plot_times(times):
plt.style.use('seaborn-whitegrid')
X = ints(1, len(times[0]) - 2)
for (label, mark, *Y) in times:
plt.plot(X, Y, mark, label=label)
plt.xlabel('Day Number'); plt.ylabel('Minutes to Solve Both')
plt.legend(loc='upper left')
x = None
plot_times([
('Me', 'd:', (4), 6,(20), 5, 12, 30, 33,(10), 21, 40, 13, 12,(30),(41), 13, 64),
('100th', 'v:', 6, 6, 23, 4, 5, 9, 25, 8, 12, 25, 12, 9, 22, 25, 10, 27),
('1st', '^:', 1, 1, 4, 1, 2, 3, 10, 3, 4, 6, 3, 2, 6, 5, 2, 5)])
| ipynb/Advent 2017.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="vPgjW2NO3nxh"
# ## References
#
# My algorithm learning notebook following the live lesson series [**"Data Structures, Algorithms, and Machine Learning Optimization"**](https://learning.oreilly.com/videos/data-structures-algorithms/9780137644889/) by Dr. <NAME>. I adapted some and partially modified or added entirely new code. Notes largely based on and (some of them entirely) from Jon's notebooks and learning materials. The lesson and original notebook source code at:
#
# https://learning.oreilly.com/videos/data-structures-algorithms/9780137644889/
# https://github.com/jonkrohn/ML-foundations/blob/master/notebooks/7-algos-and-data-structures.ipynb
# + [markdown] id="-iQOFLLdv7Nx"
# # 5.3 Hash Functions
#
# - **Hash functions** set-searches in constant time $O(1)$!!
#
# - List-based structures (or unhashed sets) are searched in linear time $O(n)$.
# - At best $O(\text{log }n)$ with binary search of pre-sorted list.
#
# <br/>
#
# ---
#
# <br/>
#
# - Regardless of the size of the set, we can access the searched key-value instsantly.
# - The trick is that ***indexed data structures*** (with ***sequential*** integers) have $O(1)$ time search/retrieval.
# - So we need to convert dictionary values into an index - which is what hash functions do:
# - Input some value.
# - Output a hash value.
# - Used as indices of *hash table*.
#
# + id="vNPVTKW9gkXn" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="18446b52-c73d-49a9-f675-c7fc11bd26dd"
emergency_call = 1189998819991197253 # the easiest-to-remember emergency call, if you know what I mean...;)
# + [markdown] id="1xwET467eiPO"
# A common hash function approach is to use the modulo operator on the last few digits of the value...
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="YAcc_vjJ3hU-" outputId="fa5f70fa-2aa0-4264-f6e8-db29416e73ad"
split_value = [digit for digit in str(emergency_call)]
print(split_value)
# + id="6UyBk96X4pDU" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="68140413-9337-4173-addb-75895d34350b"
end_digits = int(''.join(split_value[-2:])) # final digits typically used b/c they tend to vary more than first ones
end_digits
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="YGUYdqtifTa7" outputId="c622a878-833f-4a7f-bed9-1c6fd3257fdb"
# Divisor (10 in this case) is arbitrary.
# You can choose any number but should be used consistently across all values to be hashed.
hash_value = end_digits % 10
hash_value
# + id="tBhdND_rfTW6" colab={"base_uri": "https://localhost:8080/", "height": 17} outputId="b8e1bb28-b5c0-4da7-ab0f-c4b26752fd3f"
def simple_hash(value):
split_value = [digit for digit in str(value)]
last_2_digits = int(''.join(split_value[-2:]))
return last_2_digits % 10
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="ZDOsgfQ2fTSy" outputId="277093d0-3c20-4cc2-bdb6-f5b651e9c07b"
simple_hash(emergency_call)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="QUKYRF4EgHV-" outputId="bae66470-3ff7-458b-ea49-17f880e683bc"
another_emergency_call = 1189998819991197746
simple_hash(another_emergency_call)
# + colab={"base_uri": "https://localhost:8080/", "height": 520} id="zAFCJL_LhSVY" outputId="2e5c8e18-0499-4d6c-e362-9a7e0470d88b"
hash_table = "/content/here/MyDrive/Data and Algorithms/ALGO05/Hash table.png"
show_img(hash_table, resize=0.75, source="image from Wikipedia", sourceScale=1)
# + [markdown] id="-b0fWquFgTOl"
# These hash values (`7` and `1`) could be used in a sequential, small-integer index, i.e., a *hash table*.
#
# - This is how we get that constant time complexity with our set data structure.
#
# + [markdown] id="IfmTTo-q8tCn"
# # 5.4 Collisions
#
# Major problem with `simple_hash()` function
# - Index can range at most up to 10 (if using a single digit as an index)
# - Ergo, many input values will result in collisions.
# + id="UTAOWVF5gSgu" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="1b9bc7e5-0c29-4041-d67f-6caf613faf3c"
simple_hash(47583)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="HzvREC4a955r" outputId="664d20ad-44b2-4677-d3e8-db643f5924a0"
simple_hash(7866778853)
# + [markdown] id="OB-rWGAB98kK"
# No matter how long the input value may be, the hash will always the be last digit because the `simple_hash()` indexes the input by modulo 10.
# + [markdown] id="nCK32bhd-WxP"
# ## Problems
#
# Major problem with the `simple_hash()` function:
# * The hash table has at most ten indices
# * Ergo, many input values will result in **collisions**, e.g.:
# + [markdown] id="kI_Y1OjN-fWV"
# ## Solution
# (excerpt)
#
# Three common ways to resolve collisions:
# 1. Change the modulus denominator (e.g., `10` --> `11`); this adds procedural (and thus time) complexity to hash algo
# 2. Change the hash function entirely; ditto w.r.t. procedural complexity
# 3. Store a list (or similar) at the index, e.g.:
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="bjlI_Vfx-V8S" outputId="f418ec1a-192e-473b-81a8-781ff79f204c"
hash_table = {}
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="4FCKdDe6_lH-" outputId="ef94bcd0-ebaa-4da0-af30-1069fdc4fd2e"
hash_table[simple_hash(555)] = [555]
hash_table
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="iTqJYRHF_lDX" outputId="a32e7074-7d28-4e1c-c4b8-ceea3cbedccc"
hash_table[simple_hash(125)].append(125)
hash_table
# + [markdown] id="MnIaOVcC_5Jm"
# (Excerpt)
#
# Such a list is called a **bucket**.
#
# Worst case:
# * All of the values hash to the same hash value (e.g., `5`)
# * Thus, all of the values are stored in a single bucket
# * Searching through the bucket has linear O($n$) time complexity
#
# Alternatively, we can increase memory complexity instead of time complexity:
# * Use very large modulus denominator
# * Reduces probability of collisions
# * If use denominator of `1e9`, we have a hash table with a billion buckets
#
# Could also have a second hash function *inside* of the bucket (e.g., if we know we'll have a few very large buckets).
#
# There is no "perfect hash". It depends on the values you're working with. There are many options to consider with various trade-offs.
# + [markdown] id="1qfTdwwtAgCg"
# #### Nested hash
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="lzZofLJi_k-1" outputId="ce50bc49-b2a2-40da-cb30-516e69c71f75"
def outer_hash(value):
return value % 1e9
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="SebGv0SD_k5c" outputId="3ab2adbc-6f9c-41fe-c5bb-08878a73d901"
outer_hash(emergency_call)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="mt2Rgi1f_k10" outputId="419208a8-a7e9-4746-b168-bff226fbf081"
outer_hash(47862199645)
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="Pmf_hYu3A7YL" outputId="b6c85741-e949-4bf7-dac7-88446305ad98"
nested_hash_table = {}
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="BJej9Py4A7UC" outputId="587afea4-da41-4d9b-8970-9bb4222ed1fa"
nested_hash_table[outer_hash(emergency_call)] = [emergency_call]
nested_hash_table
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="tfpXFmy5A7QV" outputId="8e466d5c-7b00-4021-8435-85ce2346fa44"
nested_hash_table[outer_hash(emergency_call)].append(5)
nested_hash_table[outer_hash(emergency_call)].append(12)
nested_hash_table[outer_hash(emergency_call)].append(752)
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="dgnFpH_TA7MV" outputId="4c7cde5b-ab9a-426f-f940-13ee51ff5cc6"
nested_hash_table
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="MfIHuuP0A7Im" outputId="2dee4d7e-65f9-494c-b336-8869626a9d2b"
def inner_hash(value):
return value % 10
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="zt44KR7JA7E0" outputId="f0ef56d3-d126-499c-f7ad-535753320b35"
n = len(nested_hash_table[991197184.0])
nested_list = nested_hash_table[991197184.0]
for i in range(n):
nested_list[i] = {inner_hash(nested_list[i]): nested_list[i]}
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="BNYwlvirA7Ab" outputId="94fdb03e-d908-4d6c-ce8a-a27b73685447"
nested_hash_table
| 02 DSA and ML Optimisation/ALGO 05 Sets and Hashing 2 Hash Functions and Collisions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Qiskit v0.35.0 (ipykernel)
# language: python
# name: python3
# ---
# # Variational Quantum Eigensolver - Ground State Energy for $H2$ Molecule using the RYRZ ansatzยถ
# +
import numpy as np
import matplotlib.pyplot as plt
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile, IBMQ
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
from qiskit.providers.aer import QasmSimulator, StatevectorSimulator
from qiskit.utils import QuantumInstance
# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
# +
# Chemistry Drivers
from qiskit_nature.drivers.second_quantization.pyscfd import PySCFDriver
from qiskit_nature.transformers.second_quantization.electronic import FreezeCoreTransformer
from qiskit.opflow.primitive_ops import Z2Symmetries
# Electroinic structure problem
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
# Qubit converter
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Mappers
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
# Initial state
from qiskit_nature.circuit.library import HartreeFock
# Variational form - circuit
from qiskit.circuit.library import TwoLocal
# Optimizer
from qiskit.algorithms.optimizers import COBYLA, SLSQP, SPSA
# Algorithms and Factories
from qiskit_nature.algorithms import ExcitedStatesEigensolver, NumPyEigensolverFactory
# Eigen Solvers
# NumPy Minimum Eigen Solver
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
# ground state
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
# VQE Solver
from qiskit.algorithms import VQE
# -
# Backend
qasm_sim = QasmSimulator()
state_sim = StatevectorSimulator()
# Drivers
#
# Below we set up a PySCF driver for $H2$ molecule at equilibrium bond length 0.735 Angstrom
def exact_diagonalizer(es_problem, qubit_converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(qubit_converter, solver)
result = calc.solve(es_problem)
return result
def get_mapper(mapper_str: str):
if mapper_str == "jw":
mapper = JordanWignerMapper()
elif mapper_str == "pa":
mapper = ParityMapper()
elif mapper_str == "bk":
mapper = BravyiKitaevMapper()
return mapper
def initial_state_preparation(mapper_str: str = "jw"):
molecule = "H 0.0 0.0 0.0; H 0.0 0.0 0.735"
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
transformer = FreezeCoreTransformer()
qmolecule = transformer.transform(qmolecule)
es_problem = ElectronicStructureProblem(driver)
# generating second_quzntized operators
second_q_ops = es_problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
# return tuple of number of particles if available
num_particles = es_problem.num_particles
# return the number of spin orbitals
num_spin_orbitals = es_problem.num_spin_orbitals
mapper = get_mapper(mapper_str)
qubit_converter = QubitConverter(mapper=mapper, two_qubit_reduction=True)#, z2symmetry_reduction=[1, 1])
# Qubit Hamiltonian
qubit_op = qubit_converter.convert(main_op, num_particles=num_particles)
return (qubit_op, num_particles, num_spin_orbitals, qubit_converter, es_problem)
qubit_op, num_particles, num_spin_orbitals, qubit_converter, es_problem = initial_state_preparation("jw")
# +
init_state = HartreeFock(num_spin_orbitals, num_particles, qubit_converter)
init_state.barrier()
init_state.draw("mpl", initial_state=True)
# +
# Setting up TwoLocal for our ansatz
ansatz_type = "RYRZ"
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ["ry", "rz"]
# Entangling gates
entanglement_blocks = "cx"
# How the qubits are entangled?
entanglement = 'linear'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 1
# Skipoing the final rotation_blocks layer
skip_final_rotation_layer = False
ansatz = TwoLocal(
qubit_op.num_qubits,
rotation_blocks,
entanglement_blocks,
reps=repetitions,
entanglement=entanglement,
skip_final_rotation_layer=skip_final_rotation_layer,
# insert_barriers=True
)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
# -
ansatz.draw(output="mpl", initial_state=True).savefig("ryrz_vqe_h2_ansatz.png", dpi=300)
ansatz.draw(output="mpl", initial_state=True)
ansatz.decompose().draw(output="mpl", initial_state=True).savefig("ryrz_vqe_h2_ansatz_decomposed.png", dpi=300)
ansatz.decompose().draw(output="mpl", initial_state=True)
optimizer = COBYLA(maxiter=10000)
# ### Solver
#
# Exact Eigensolver using NumPyMinimumEigensolver
#
# +
result_exact = exact_diagonalizer(es_problem, qubit_converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact Electronic Energy: {:.4f} Eh\n\n".format(exact_energy))
print("Results:\n\n", result_exact)
# -
# VQE Solver
# +
from IPython.display import display, clear_output
def callback(eval_count, parameters, mean, std):
# overwrites same line when printing
display("Evaluation: {},\tEnergy: {},\tStd: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# we choose a fixed small displacement
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(
ansatz,
optimizer=optimizer,
quantum_instance=state_sim,
callback=callback,
initial_point=initial_point
)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
# +
# Storing results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# Unroller transpile our circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
# if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': qubit_converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# +
# Plotting the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(19.20, 10.80))
# ax.set_facecolor("#293952")
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy (Eh)')
ax.grid()
fig.text(0.7, 0.75, f'VQE Energy: {result.optimal_value:.4f} Eh\nExact Energy: {exact_energy:.4f} Eh\nScore: {score:.0f}')
plt.title(f"Ground State Energy of H2 using RYRZ VQE Ansatz\nOptimizer: {result_dict['optimizer']} \n Mapper: {result_dict['mapping']}\nVariational Form: {result_dict['ansatz']} - RYRZ")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
# fig_title = f"\
# {result_dict['optimizer']}-\
# {result_dict['mapping']}-\
# {result_dict['ansatz']}-\
# Energy({result_dict['energy (Ha)']:.3f})-\
# Score({result_dict['score']:.0f})\
# .png"
fig.savefig("ryrz_vqe_h2_fig.png", dpi=300)
# Displaying and saving the data
import pandas as pd
result_df = pd.DataFrame.from_dict([result_dict])
result_df[['optimizer','ansatz', '# of qubits', 'error (mHa)', 'pass', 'score','# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions']]
# -
| RYRZ/RYRZ_VQE_H2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"is_executing": false}
from Utilities.io import DataLoader
from Utilities.painter import Visualizer
from Models.RRDBNet import RRDBNet# we use RRDB in this demo
# -
# ### Load in the sample images
# + pycharm={"is_executing": false}
import glob
DATA_PATH = 'Samples'
loader = DataLoader()
data = loader.load(glob.glob(DATA_PATH + '/*.jpg'), batchSize=1)
painter = Visualizer()
for downSample, original in data.take(2):
painter.plot(downSample, original)
# -
# ### Load in the pretrained super-resolution model
# + pycharm={"is_executing": false}
# pretrained rrdb network can be found in the Pretrained folder
MODEL_PATH = 'Pretrained/rrdb'
model = RRDBNet(blockNum=10)
model.load_weights(MODEL_PATH)
# -
# ### Run plate enhancement
for downSample, original in data.take(4):
yPred = model.predict(downSample)
painter.plot(downSample, original, yPred)
| Tutorial_1_Enhance_Image.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.7.0
# language: julia
# name: julia-1.7
# ---
using Zygote
# ## Basic condition
# +
i(x) = x
f(x) = 2x
g(x) = 3x
h(x) = 4x
w_vec = [i, h, g, f]
x = 2.0
# +
function forward_fn(w_vec, x, i::Int)
y = w_vec[i](x)
i == size(w_vec)[1] ? y : [y; forward_fn(w_vec,y,i+1)]
end
function reverse_autodiff(w_vec, x_vec, i::Int)
i == 1 ? 1 :
gradient(w_vec[i], x_vec[i-1])[1] *
reverse_autodiff(w_vec, x_vec, i-1)
end
# -
x_vec = forward_fn(w_vec, x, 1)
y_ad = x_vec[end]
dy_ad = reverse_autodiff(w_vec, x_vec, size(w_vec)[1])
println("AutoDiff: x:$x, y(x):$(y_ad), dy(x):$(dy_ad)")
y(x) = f(g(h(x)))
dy(x) = gradient(y,x)[1]
println("Symbolic: x:$x, y(x):$(y(x)), dy(x):$(dy(x))")
# ## Full code
# +
function test()
i(x) = x
f(x) = sin(x)
g(x) = 3x^2
h(x) = exp(x)
x = 2.0
w_vec = [i, h, g, f]
x_vec = forward_fn(w_vec, x, 1)
y_ad = x_vec[end]
dy_ad = reverse_autodiff(w_vec, x_vec, size(w_vec)[1])
println("AutoDiff: x:$x, y(x):$(y_ad), dy(x):$(dy_ad)")
y(x) = f(g(h(x)))
dy(x) = gradient(y,x)[1]
println("Symbolic: x:$x, y(x):$(y(x)), dy(x):$(dy(x))")
end
test()
# -
| diffprog/julia_dp/autodiff_chain_rule.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="kWbwaoVKx4hM"
# This is the Colab to reproduce our reproduction of the text style transfer task in [Beyond Fully Connected Layers with Quaternions: Parametrization of Hypercomplex Multiplications with 1/n Parameters](https://openreview.net/pdf?id=rcQdycl0zyk). We will translate modern English to Shakespearean English, using a PHM Transformer. You can find the detailed explanation in our [Github repo](https://github.com/sinankalkan/CENG501-Spring2021/tree/main/project_BarutcuDemir).
# + [markdown] id="f4k_OivV4RoV"
# Parts of the code are adapted from the Torch source code and [this tutorial](https://pytorch.org/tutorials/beginner/translation_transformer.html). An efficient implementation of Kronecker product by [Bayer Research](https://github.com/bayer-science-for-a-better-life/phc-gnn) was also used.
# + [markdown] id="22lF85SNx3bv"
# # Install & Import Libraries
# + [markdown] id="1TYOWv66tAul"
# You should use a GPU. Although it should be enabled by default, check if GPU is selected as the hardware accelerator by going to Runtime->Change Runtime Type. Make sure you get a T4 or P100. We have not tested the code on other GPU's (they have lower performance).
# + colab={"base_uri": "https://localhost:8080/"} id="jKz_Fz6CuBzh" outputId="f987fa72-6e07-4a9a-85a3-61b459e3da91"
# !nvidia-smi
# + [markdown] id="NDl78wqMtRzZ"
# Install the necessary libraries.
# + colab={"base_uri": "https://localhost:8080/"} id="q9tL0RgKwiAF" outputId="77eb11e6-c942-41de-f730-eb4af7b5151e"
# !pip install torchinfo wandb
# + [markdown] id="8HXEy5V6tVY8"
# We use particular version of Torch because of a CUDA issue. Also some parts of the code such as the Vocabulary are radically different in Torch 1.9 so be sure to use this version.
# + colab={"base_uri": "https://localhost:8080/"} id="bhhT37pBSgeQ" outputId="4438275a-f10e-475b-80da-64e429348300"
# !pip install torch==1.8.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# + [markdown] id="sI8nuePstlic"
# The proper torchtext version to accompany the older torch.
# + colab={"base_uri": "https://localhost:8080/"} id="ynMDU2DBEdk2" outputId="4863a6f9-5db5-4f8a-ceb1-8c57493c95dd"
# !pip install torchtext==0.9.0
# + [markdown] id="SPUxbZf1xtD3"
# Import the necessary libraries, functions and classes
# + id="5nQUQIFE1G2b"
import io
import time
import copy
import math
import wandb
import torch
import warnings
import torchtext
import torch.nn as nn
from torch import Tensor
from torch.nn import Module
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader
from torch.nn.parameter import Parameter, UninitializedParameter
from torch.nn import init
from torch.nn.init import constant_
from torch.nn.init import xavier_normal_
from torch.nn import functional as F
from torch.nn import MultiheadAttention
from torch.nn import ModuleList
from torch.nn.init import xavier_uniform_
from torch.nn import Dropout
from torch.nn import Linear
from torch.nn import LayerNorm
from torch.optim.lr_scheduler import MultiStepLR
from torch.nn import (TransformerEncoder, TransformerDecoder,
TransformerEncoderLayer, TransformerDecoderLayer)
from torch.nn.functional import *
from torchtext.vocab import Vocab
from torchtext.data.utils import get_tokenizer
from torchtext.utils import download_from_url, extract_archive
from torchinfo import summary
from collections import Counter
from typing import Optional, Any
from nltk.tokenize import RegexpTokenizer
from nltk.translate.bleu_score import sentence_bleu
from nltk.translate.bleu_score import SmoothingFunction
#from torch import nn
#from torch.nn.functional import multi_head_attention_forward
# + [markdown] id="_vMfnb5QxcH6"
# Set seed for reproducibility
# + id="x1ez-YKmxY_3" colab={"base_uri": "https://localhost:8080/"} outputId="d6c63661-46f1-489a-d2d4-24bf06853e45"
torch.manual_seed(0)
# + [markdown] id="0qvyw2aRzbtT"
# Select GPU to use.
# + id="C-6nmXdLzZoz"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# + [markdown] id="IGcCu4bMx-rg"
# # Data
# + [markdown] id="Zitjiif1tquH"
# This is optional. You might want to connect your Google Drive and pull your data from there if you are going to repeatedly run this notebook, so that you do not download the data over and over again.
# + colab={"base_uri": "https://localhost:8080/"} id="4UHTPvN-Wd13" outputId="db41a558-fa3a-47f3-e75b-5611539ed98d"
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + [markdown] id="6tbGNyhFqKrH"
# The data is available in [this repo](https://github.com/tlatkowski/st). The line below will download it
# + colab={"base_uri": "https://localhost:8080/"} id="lnWwdOK5k7yL" outputId="e83cfe3e-37a1-4bad-cf42-1e4743a39313"
# !git clone https://github.com/tlatkowski/st
# + [markdown] id="ErayClJcuKBw"
# Extract the .tgz archive from the cloned Github repo.
# + id="o7JnBVezOiqs"
# !tar -xf /content/st/shakespeare.train.tgz
# !tar -xf /content/st/shakespeare.dev.tgz
# !tar -xf /content/st/shakespeare.test.tgz
# + [markdown] id="ZPc3DhoWzBhF"
# Load the data, tokenize it with the Spacy tokenizer and construct a vocabulary. This is word level tokenization and every unique word is assigned a token, along with special tokens.
# + id="Gzo6oMyzt653"
train_filepaths = ['/content/train.modern', '/content/train.original']
val_filepaths = ['/content/dev.modern', '/content/dev.original']
test_filepaths = ['/content/test.modern', '/content/test.original']
en_tokenizer = get_tokenizer('spacy', language='en_core_web_sm')
def build_vocab(filepath, tokenizer):
counter = Counter()
with io.open(filepath, encoding="utf8") as f:
for string_ in f:
counter.update(tokenizer(string_))
return Vocab(counter, max_size=8188, specials=['<unk>', '<pad>', '<bos>', '<eos>'])
modern_vocab = build_vocab('/content/train.modern', en_tokenizer)
original_vocab = build_vocab('/content/train.original', en_tokenizer)
# + [markdown] id="bnnM2gJ30JU9"
# Convert the text data to Torch tensors using the vocabulary obtained at the previous step.
# + id="jv7ZwefVtP8U"
def data_process(filepaths):
raw_de_iter = iter(io.open(filepaths[0], encoding="utf8"))
raw_en_iter = iter(io.open(filepaths[1], encoding="utf8"))
data = []
for (raw_de, raw_en) in zip(raw_de_iter, raw_en_iter):
mo_tensor_ = torch.tensor([modern_vocab[token] for token in en_tokenizer(raw_de.rstrip("\n"))],
dtype=torch.long)
og_tensor_ = torch.tensor([original_vocab[token] for token in en_tokenizer(raw_en.rstrip("\n"))],
dtype=torch.long)
data.append((mo_tensor_, og_tensor_))
return data
train_data = data_process(train_filepaths)
val_data = data_process(val_filepaths)
test_data = data_process(test_filepaths)
PAD_IDX = modern_vocab['<pad>']
BOS_IDX = modern_vocab['<bos>']
EOS_IDX = modern_vocab['<eos>']
# + [markdown] id="zxds3dCF0YIW"
# Set batch size.
# + id="6RtKIEdO0XNr"
BATCH_SIZE = 32
# + [markdown] id="Ornl_Fpk0gJs"
# Construct a Torch dataloader.
# + id="UnLNCst7t655"
def generate_batch(data_batch):
de_batch, en_batch = [], []
for (de_item, en_item) in data_batch:
de_batch.append(torch.cat([torch.tensor([BOS_IDX]), de_item, torch.tensor([EOS_IDX])], dim=0))
en_batch.append(torch.cat([torch.tensor([BOS_IDX]), en_item, torch.tensor([EOS_IDX])], dim=0))
de_batch = pad_sequence(de_batch, padding_value=PAD_IDX)
en_batch = pad_sequence(en_batch, padding_value=PAD_IDX)
return de_batch, en_batch
train_iter = DataLoader(train_data, batch_size=BATCH_SIZE,
shuffle=False, collate_fn=generate_batch)
valid_iter = DataLoader(val_data, batch_size=BATCH_SIZE,
shuffle=False, collate_fn=generate_batch)
test_iter = DataLoader(test_data, batch_size=BATCH_SIZE,
shuffle=False, collate_fn=generate_batch)
# + [markdown] id="nVzA046ayORJ"
# # The PHM Transformer
# + [markdown] id="SGMY7TLQ0uSm"
# This is the PHM Layer, the heart of our code. It is adapted from the Torch linear layer. Please refer to the Github repo for more details. The A and S tensors are the same tensors as they are in the explanation in the repo.
# + id="htvjhR063fn0"
class PHMLayer(nn.Module):
def __init__(self, n, in_features, out_features):
super(PHMLayer, self).__init__()
self.n = n
self.in_features = in_features
self.out_features = out_features
self.bias = Parameter(torch.Tensor(out_features))
self.a = Parameter(torch.nn.init.xavier_uniform_(torch.zeros((n, n, n))))
self.s = Parameter(torch.nn.init.xavier_uniform_(torch.zeros((n, self.out_features//n, self.in_features//n))))
self.weight = torch.zeros((self.out_features, self.in_features))
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def kronecker_product1(self, a, b): #adapted from Bayer Research's implementation
siz1 = torch.Size(torch.tensor(a.shape[-2:]) * torch.tensor(b.shape[-2:]))
res = a.unsqueeze(-1).unsqueeze(-3) * b.unsqueeze(-2).unsqueeze(-4)
siz0 = res.shape[:-4]
out = res.reshape(siz0 + siz1)
return out
def forward(self, input: Tensor) -> Tensor:
self.weight = torch.sum(self.kronecker_product1(self.a, self.s), dim=0)
input = input.type(dtype=self.weight.type())
return F.linear(input, weight=self.weight, bias=self.bias)
def extra_repr(self) -> str:
return 'in_features={}, out_features={}, bias={}'.format(
self.in_features, self.out_features, self.bias is not None)
def reset_parameters(self) -> None:
init.kaiming_uniform_(self.a, a=math.sqrt(5))
init.kaiming_uniform_(self.s, a=math.sqrt(5))
fan_in, _ = init._calculate_fan_in_and_fan_out(self.placeholder)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
# + [markdown] id="HM6Xq1bA1UzS"
# The forward method of PHM multi head attention. It is adapted from Torch's multi_head_attention_forward. The part between '#'s are modified. The A and S tensors are used to create the weight matrix H for out projection.
# + id="alyPyVkXSQoC"
def phm_multi_head_attention_forward(
phm: int,
a: Tensor,
s: Tensor,
query: Tensor,
key: Tensor,
value: Tensor,
embed_dim_to_check: int,
num_heads: int,
in_proj_weight: Tensor,
in_proj_bias: Tensor,
bias_k: Optional[Tensor],
bias_v: Optional[Tensor],
add_zero_attn: bool,
dropout_p: float,
out_proj_weight: Tensor,
out_proj_bias: Tensor,
training: bool = True,
key_padding_mask: Optional[Tensor] = None,
need_weights: bool = True,
attn_mask: Optional[Tensor] = None,
use_separate_proj_weight: bool = False,
q_proj_weight: Optional[Tensor] = None,
k_proj_weight: Optional[Tensor] = None,
v_proj_weight: Optional[Tensor] = None,
static_k: Optional[Tensor] = None,
static_v: Optional[Tensor] = None,
) -> Tuple[Tensor, Optional[Tensor]]:
r"""
Args:
query, key, value: map a query and a set of key-value pairs to an output.
See "Attention Is All You Need" for more details.
embed_dim_to_check: total dimension of the model.
num_heads: parallel attention heads.
in_proj_weight, in_proj_bias: input projection weight and bias.
bias_k, bias_v: bias of the key and value sequences to be added at dim=0.
add_zero_attn: add a new batch of zeros to the key and
value sequences at dim=1.
dropout_p: probability of an element to be zeroed.
out_proj_weight, out_proj_bias: the output projection weight and bias.
training: apply dropout if is ``True``.
key_padding_mask: if provided, specified padding elements in the key will
be ignored by the attention. This is an binary mask. When the value is True,
the corresponding value on the attention layer will be filled with -inf.
need_weights: output attn_output_weights.
attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
the batches while a 3D mask allows to specify a different mask for the entries of each batch.
use_separate_proj_weight: the function accept the proj. weights for query, key,
and value in different forms. If false, in_proj_weight will be used, which is
a combination of q_proj_weight, k_proj_weight, v_proj_weight.
q_proj_weight, k_proj_weight, v_proj_weight, in_proj_bias: input projection weight and bias.
static_k, static_v: static key and value used for attention operators.
Shape:
Inputs:
- query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
the embedding dimension.
- key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
the embedding dimension.
- value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
the embedding dimension.
- key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions
will be unchanged. If a BoolTensor is provided, the positions with the
value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
- attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked
positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
is provided, it will be added to the attention weight.
- static_k: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
- static_v: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
Outputs:
- attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
E is the embedding dimension.
- attn_output_weights: :math:`(N, L, S)` where N is the batch size,
L is the target sequence length, S is the source sequence length.
"""
tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias)
if has_torch_function(tens_ops):
return handle_torch_function(
multi_head_attention_forward,
tens_ops,
query,
key,
value,
embed_dim_to_check,
num_heads,
in_proj_weight,
in_proj_bias,
bias_k,
bias_v,
add_zero_attn,
dropout_p,
out_proj_weight,
out_proj_bias,
training=training,
key_padding_mask=key_padding_mask,
need_weights=need_weights,
attn_mask=attn_mask,
use_separate_proj_weight=use_separate_proj_weight,
q_proj_weight=q_proj_weight,
k_proj_weight=k_proj_weight,
v_proj_weight=v_proj_weight,
static_k=static_k,
static_v=static_v,
)
tgt_len, bsz, embed_dim = query.size()
assert embed_dim == embed_dim_to_check
# allow MHA to have different sizes for the feature dimension
assert key.size(0) == value.size(0) and key.size(1) == value.size(1)
head_dim = embed_dim // num_heads
assert head_dim * num_heads == embed_dim, "embed_dim must be divisible by num_heads"
scaling = float(head_dim) ** -0.5
if not use_separate_proj_weight:
if (query is key or torch.equal(query, key)) and (key is value or torch.equal(key, value)):
# self-attention
q, k, v = linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)
elif key is value or torch.equal(key, value):
# encoder-decoder attention
# This is inline in_proj function with in_proj_weight and in_proj_bias
_b = in_proj_bias
_start = 0
_end = embed_dim
_w = in_proj_weight[_start:_end, :]
if _b is not None:
_b = _b[_start:_end]
q = linear(query, _w, _b)
if key is None:
assert value is None
k = None
v = None
else:
# This is inline in_proj function with in_proj_weight and in_proj_bias
_b = in_proj_bias
_start = embed_dim
_end = None
_w = in_proj_weight[_start:, :]
if _b is not None:
_b = _b[_start:]
k, v = linear(key, _w, _b).chunk(2, dim=-1)
else:
# This is inline in_proj function with in_proj_weight and in_proj_bias
_b = in_proj_bias
_start = 0
_end = embed_dim
_w = in_proj_weight[_start:_end, :]
if _b is not None:
_b = _b[_start:_end]
q = linear(query, _w, _b)
# This is inline in_proj function with in_proj_weight and in_proj_bias
_b = in_proj_bias
_start = embed_dim
_end = embed_dim * 2
_w = in_proj_weight[_start:_end, :]
if _b is not None:
_b = _b[_start:_end]
k = linear(key, _w, _b)
# This is inline in_proj function with in_proj_weight and in_proj_bias
_b = in_proj_bias
_start = embed_dim * 2
_end = None
_w = in_proj_weight[_start:, :]
if _b is not None:
_b = _b[_start:]
v = linear(value, _w, _b)
else:
q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight)
len1, len2 = q_proj_weight_non_opt.size()
assert len1 == embed_dim and len2 == query.size(-1)
k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight)
len1, len2 = k_proj_weight_non_opt.size()
assert len1 == embed_dim and len2 == key.size(-1)
v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight)
len1, len2 = v_proj_weight_non_opt.size()
assert len1 == embed_dim and len2 == value.size(-1)
if in_proj_bias is not None:
q = linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim])
k = linear(key, k_proj_weight_non_opt, in_proj_bias[embed_dim : (embed_dim * 2)])
v = linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2) :])
else:
q = linear(query, q_proj_weight_non_opt, in_proj_bias)
k = linear(key, k_proj_weight_non_opt, in_proj_bias)
v = linear(value, v_proj_weight_non_opt, in_proj_bias)
q = q * scaling
if attn_mask is not None:
assert (
attn_mask.dtype == torch.float32
or attn_mask.dtype == torch.float64
or attn_mask.dtype == torch.float16
or attn_mask.dtype == torch.uint8
or attn_mask.dtype == torch.bool
), "Only float, byte, and bool types are supported for attn_mask, not {}".format(attn_mask.dtype)
if attn_mask.dtype == torch.uint8:
warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
attn_mask = attn_mask.to(torch.bool)
if attn_mask.dim() == 2:
attn_mask = attn_mask.unsqueeze(0)
if list(attn_mask.size()) != [1, query.size(0), key.size(0)]:
raise RuntimeError("The size of the 2D attn_mask is not correct.")
elif attn_mask.dim() == 3:
if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]:
raise RuntimeError("The size of the 3D attn_mask is not correct.")
else:
raise RuntimeError("attn_mask's dimension {} is not supported".format(attn_mask.dim()))
# attn_mask's dim is 3 now.
# convert ByteTensor key_padding_mask to bool
if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:
warnings.warn(
"Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead."
)
key_padding_mask = key_padding_mask.to(torch.bool)
if bias_k is not None and bias_v is not None:
if static_k is None and static_v is None:
k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
if attn_mask is not None:
attn_mask = pad(attn_mask, (0, 1))
if key_padding_mask is not None:
key_padding_mask = pad(key_padding_mask, (0, 1))
else:
assert static_k is None, "bias cannot be added to static key."
assert static_v is None, "bias cannot be added to static value."
else:
assert bias_k is None
assert bias_v is None
q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
if k is not None:
k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
if v is not None:
v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
if static_k is not None:
assert static_k.size(0) == bsz * num_heads
assert static_k.size(2) == head_dim
k = static_k
if static_v is not None:
assert static_v.size(0) == bsz * num_heads
assert static_v.size(2) == head_dim
v = static_v
src_len = k.size(1)
if key_padding_mask is not None:
assert key_padding_mask.size(0) == bsz
assert key_padding_mask.size(1) == src_len
if add_zero_attn:
src_len += 1
k = torch.cat([k, torch.zeros((k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device)], dim=1)
v = torch.cat([v, torch.zeros((v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device)], dim=1)
if attn_mask is not None:
attn_mask = pad(attn_mask, (0, 1))
if key_padding_mask is not None:
key_padding_mask = pad(key_padding_mask, (0, 1))
attn_output_weights = torch.bmm(q, k.transpose(1, 2))
assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len]
if attn_mask is not None:
if attn_mask.dtype == torch.bool:
attn_output_weights.masked_fill_(attn_mask, float("-inf"))
else:
attn_output_weights += attn_mask
if key_padding_mask is not None:
attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
attn_output_weights = attn_output_weights.masked_fill(
key_padding_mask.unsqueeze(1).unsqueeze(2),
float("-inf"),
)
attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len)
attn_output_weights = softmax(attn_output_weights, dim=-1)
attn_output_weights = dropout(attn_output_weights, p=dropout_p, training=training)
attn_output = torch.bmm(attn_output_weights, v)
assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]
attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
######################################################################################
def kronecker_product1(a, b):
siz1 = torch.Size(torch.tensor(a.shape[-2:]) * torch.tensor(b.shape[-2:]))
res = a.unsqueeze(-1).unsqueeze(-3) * b.unsqueeze(-2).unsqueeze(-4)
siz0 = res.shape[:-4]
out = res.reshape(siz0 + siz1)
return out
out_proj_weight2 = torch.sum(kronecker_product1(a, s), dim=0) #the weight matrix H is constructed from a and s tensors
attn_output = linear(attn_output, out_proj_weight2, out_proj_bias)
######################################################################################
if need_weights:
# average attention weights over heads
attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
return attn_output, attn_output_weights.sum(dim=1) / num_heads
else:
return attn_output, None
# + [markdown] id="GeDKC5HCEPr2"
# The weights that are multiplied with the input to create Q, K, V in multiheaded attention are created from A and S. Also a PHM layer is defined to use its weights in the out proj.
# + id="4-puq5PXxr6y"
class PHMMultiheadAttention(Module):
r"""Allows the model to jointly attend to information
from different representation subspaces.
See `Attention Is All You Need <https://arxiv.org/abs/1706.03762>`_
.. math::
\text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O
where :math:`head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)`.
Args:
embed_dim: total dimension of the model.
num_heads: parallel attention heads.
dropout: a Dropout layer on attn_output_weights. Default: 0.0.
bias: add bias as module parameter. Default: True.
add_bias_kv: add bias to the key and value sequences at dim=0.
add_zero_attn: add a new batch of zeros to the key and
value sequences at dim=1.
kdim: total number of features in key. Default: None.
vdim: total number of features in value. Default: None.
batch_first: If ``True``, then the input and output tensors are provided
as (batch, seq, feature). Default: ``False`` (seq, batch, feature).
Note that if :attr:`kdim` and :attr:`vdim` are None, they will be set
to :attr:`embed_dim` such that query, key, and value have the same
number of features.
Examples::
>>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
>>> attn_output, attn_output_weights = multihead_attn(query, key, value)
"""
__constants__ = ['batch_first']
bias_k: Optional[torch.Tensor]
bias_v: Optional[torch.Tensor]
def __init__(self, phm, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False,
kdim=None, vdim=None, batch_first=False, device=None, dtype=None) -> None:
factory_kwargs = {'device': device, 'dtype': dtype}
super(PHMMultiheadAttention, self).__init__()
###########################################################################################################
self.phm = phm # this is n in the explanation
###########################################################################################################
self.embed_dim = embed_dim
self.kdim = kdim if kdim is not None else embed_dim
self.vdim = vdim if vdim is not None else embed_dim
self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
self.num_heads = num_heads
self.dropout = dropout
self.batch_first = batch_first
self.head_dim = embed_dim // num_heads
assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
if self._qkv_same_embed_dim is False:
self.q_proj_weight = Parameter(torch.empty((embed_dim, embed_dim), **factory_kwargs))
self.k_proj_weight = Parameter(torch.empty((embed_dim, self.kdim), **factory_kwargs))
self.v_proj_weight = Parameter(torch.empty((embed_dim, self.vdim), **factory_kwargs))
self.register_parameter('in_proj_weight', None)
else:
###########################################################################################################
self.a = Parameter(torch.nn.init.xavier_uniform_(torch.zeros((phm, phm, phm))))
self.s = Parameter(torch.nn.init.xavier_uniform_(torch.zeros((phm, (3*embed_dim)//phm, embed_dim//phm))))
###########################################################################################################
self.register_parameter('q_proj_weight', None)
self.register_parameter('k_proj_weight', None)
self.register_parameter('v_proj_weight', None)
if bias:
self.in_proj_bias = Parameter(torch.empty(3 * embed_dim, **factory_kwargs))
else:
self.register_parameter('in_proj_bias', None)
###########################################################################################################
self.out_proj = PHMLayer(phm, embed_dim, embed_dim)
###########################################################################################################
if add_bias_kv:
self.bias_k = Parameter(torch.empty((1, 1, embed_dim), **factory_kwargs))
self.bias_v = Parameter(torch.empty((1, 1, embed_dim), **factory_kwargs))
else:
self.bias_k = self.bias_v = None
self.add_zero_attn = add_zero_attn
self._reset_parameters()
def _reset_parameters(self):
if self._qkv_same_embed_dim:
#xavier_uniform_(self.in_proj_weight) #we initialize the weights ourself
pass
else:
xavier_uniform_(self.q_proj_weight)
xavier_uniform_(self.k_proj_weight)
xavier_uniform_(self.v_proj_weight)
if self.in_proj_bias is not None:
constant_(self.in_proj_bias, 0.)
constant_(self.out_proj.bias, 0.)
if self.bias_k is not None:
xavier_normal_(self.bias_k)
if self.bias_v is not None:
xavier_normal_(self.bias_v)
###########################################################################################################
def kronecker_product1(self, a, b):
siz1 = torch.Size(torch.tensor(a.shape[-2:]) * torch.tensor(b.shape[-2:]))
res = a.unsqueeze(-1).unsqueeze(-3) * b.unsqueeze(-2).unsqueeze(-4)
siz0 = res.shape[:-4]
out = res.reshape(siz0 + siz1)
return out
###########################################################################################################
def __setstate__(self, state):
# Support loading old MultiheadAttention checkpoints generated by v1.1.0
if '_qkv_same_embed_dim' not in state:
state['_qkv_same_embed_dim'] = True
super(PHMMultiheadAttention, self).__setstate__(state)
def forward(self, query: Tensor, key: Tensor, value: Tensor, key_padding_mask: Optional[Tensor] = None,
need_weights: bool = True, attn_mask: Optional[Tensor] = None):
r"""
Args:
query, key, value: map a query and a set of key-value pairs to an output.
See "Attention Is All You Need" for more details.
key_padding_mask: if provided, specified padding elements in the key will
be ignored by the attention. When given a binary mask and a value is True,
the corresponding value on the attention layer will be ignored. When given
a byte mask and a value is non-zero, the corresponding value on the attention
layer will be ignored
need_weights: output attn_output_weights.
attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
the batches while a 3D mask allows to specify a different mask for the entries of each batch.
Shapes for inputs:
- query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
the embedding dimension. :math:`(N, L, E)` if ``batch_first`` is ``True``.
- key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
the embedding dimension. :math:`(N, S, E)` if ``batch_first`` is ``True``.
- value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
the embedding dimension. :math:`(N, S, E)` if ``batch_first`` is ``True``.
- key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
If a ByteTensor is provided, the non-zero positions will be ignored while the position
with the zero positions will be unchanged. If a BoolTensor is provided, the positions with the
value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
- attn_mask: if a 2D mask: :math:`(L, S)` where L is the target sequence length, S is the
source sequence length.
If a 3D mask: :math:`(N\cdot\text{num\_heads}, L, S)` where N is the batch size, L is the target sequence
length, S is the source sequence length. ``attn_mask`` ensure that position i is allowed to attend
the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
is not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
is provided, it will be added to the attention weight.
Shapes for outputs:
- attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
E is the embedding dimension. :math:`(N, L, E)` if ``batch_first`` is ``True``.
- attn_output_weights: :math:`(N, L, S)` where N is the batch size,
L is the target sequence length, S is the source sequence length.
"""
self.in_proj_weight = torch.sum(self.kronecker_product1(self.a, self.s), dim=0)
if self.batch_first:
query, key, value = [x.transpose(1, 0) for x in (query, key, value)]
###########################################################################################################
if not self._qkv_same_embed_dim:
attn_output, attn_output_weights = phm_multi_head_attention_forward(
self.phm, self.out_proj.a, self.out_proj.s, query, key, value, self.embed_dim, self.num_heads,
self.in_proj_weight, self.in_proj_bias,
self.bias_k, self.bias_v, self.add_zero_attn,
self.dropout, self.out_proj.weight, self.out_proj.bias,
training=self.training,
key_padding_mask=key_padding_mask, need_weights=need_weights,
attn_mask=attn_mask, use_separate_proj_weight=True,
q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
v_proj_weight=self.v_proj_weight,
)
else:
attn_output, attn_output_weights = phm_multi_head_attention_forward(
self.phm, self.out_proj.a, self.out_proj.s, query, key, value, self.embed_dim, self.num_heads,
self.in_proj_weight, self.in_proj_bias,
self.bias_k, self.bias_v, self.add_zero_attn,
self.dropout, self.out_proj.weight, self.out_proj.bias,
training=self.training,
key_padding_mask=key_padding_mask, need_weights=need_weights,
attn_mask=attn_mask)
###########################################################################################################
if self.batch_first:
return attn_output.transpose(1, 0), attn_output_weights
else:
return attn_output, attn_output_weights
# + [markdown] id="_2YiTvhPEpCd"
# The feed forward network and the attention mechanism is replaced with PHM counterparts
# + id="X50ThC5O109i"
def _get_activation_fn(activation):
if activation == "relu":
return F.relu
elif activation == "gelu":
return F.gelu
raise RuntimeError("activation should be relu/gelu, not {}".format(activation))
class PHMTransformerEncoderLayer(Module):
__constants__ = ['batch_first']
def __init__(self, phm, d_model, nhead, dim_feedforward=2048, dropout=0.1, activation="relu",
layer_norm_eps=1e-5, batch_first=False,
device=None, dtype=None) -> None:
factory_kwargs = {'device': device, 'dtype': dtype}
###########################################################################################################
super(PHMTransformerEncoderLayer, self).__init__()
self.self_attn = PHMMultiheadAttention(phm, d_model, nhead, dropout=dropout)
self.linear1 = PHMLayer(phm, d_model, dim_feedforward)
self.dropout = Dropout(dropout)
self.linear2 = PHMLayer(phm, dim_feedforward, d_model)
###########################################################################################################
self.norm1 = LayerNorm(d_model, eps=layer_norm_eps)
self.norm2 = LayerNorm(d_model, eps=layer_norm_eps)
self.dropout1 = Dropout(dropout)
self.dropout2 = Dropout(dropout)
self.activation = _get_activation_fn(activation)
def __setstate__(self, state):
if 'activation' not in state:
state['activation'] = F.relu
super(PHMTransformerEncoderLayer, self).__setstate__(state)
def forward(self, src: Tensor, src_mask: Optional[Tensor] = None, src_key_padding_mask: Optional[Tensor] = None) -> Tensor:
src2 = self.self_attn(src, src, src, attn_mask=src_mask,
key_padding_mask=src_key_padding_mask)[0]
src = src + self.dropout1(src2)
src = self.norm1(src)
src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
src = src + self.dropout2(src2)
src = self.norm2(src)
return src
# + [markdown] id="t2IdiLv7ExDo"
# The feed forward network and the attention mechanism is replaced with PHM counterparts
# + id="US8QRCVYtVVw"
class PHMTransformerDecoderLayer(Module):
__constants__ = ['batch_first']
def __init__(self, phm, d_model, nhead, dim_feedforward=2048, dropout=0.1, activation="relu",
layer_norm_eps=1e-5, batch_first=False, device=None, dtype=None) -> None:
factory_kwargs = {'device': device, 'dtype': dtype}
###########################################################################################################
super(PHMTransformerDecoderLayer, self).__init__()
self.self_attn = PHMMultiheadAttention(phm, d_model, nhead, dropout=dropout)
self.multihead_attn = PHMMultiheadAttention(phm, d_model, nhead, dropout=dropout)
self.linear1 = PHMLayer(phm, d_model, dim_feedforward)
self.dropout = Dropout(dropout)
self.linear2 = PHMLayer(phm, dim_feedforward, d_model)
###########################################################################################################
self.norm1 = LayerNorm(d_model, eps=layer_norm_eps)
self.norm2 = LayerNorm(d_model, eps=layer_norm_eps)
self.norm3 = LayerNorm(d_model, eps=layer_norm_eps)
self.dropout1 = Dropout(dropout)
self.dropout2 = Dropout(dropout)
self.dropout3 = Dropout(dropout)
self.activation = _get_activation_fn(activation)
def __setstate__(self, state):
if 'activation' not in state:
state['activation'] = F.relu
super(PHMTransformerDecoderLayer, self).__setstate__(state)
def forward(self, tgt: Tensor, memory: Tensor, tgt_mask: Optional[Tensor] = None, memory_mask: Optional[Tensor] = None,
tgt_key_padding_mask: Optional[Tensor] = None, memory_key_padding_mask: Optional[Tensor] = None) -> Tensor:
tgt2 = self.self_attn(tgt, tgt, tgt, attn_mask=tgt_mask,
key_padding_mask=tgt_key_padding_mask)[0]
tgt = tgt + self.dropout1(tgt2)
tgt = self.norm1(tgt)
tgt2 = self.multihead_attn(tgt, memory, memory, attn_mask=memory_mask,
key_padding_mask=memory_key_padding_mask)[0]
tgt = tgt + self.dropout2(tgt2)
tgt = self.norm2(tgt)
tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
tgt = tgt + self.dropout3(tgt2)
tgt = self.norm3(tgt)
return tgt
def _get_clones(module, N):
return ModuleList([copy.deepcopy(module) for i in range(N)])
def _get_activation_fn(activation):
if activation == "relu":
return F.relu
elif activation == "gelu":
return F.gelu
raise RuntimeError("activation should be relu/gelu, not {}".format(activation))
# + [markdown] id="boBnW4x0EzvT"
# The embedding layer, encoder, decoder and the generator are replaced with PHM counterparts.
# + id="X-2di7MD1o23"
class PHMTransformer(Module):
def __init__(self, phm, nhead, num_encoder_layers: int, num_decoder_layers: int,
emb_size: int, src_vocab_size: int, tgt_vocab_size: int,
dim_feedforward:int = 512, dropout:float = 0.1):
###########################################################################################################
super(PHMTransformer, self).__init__()
encoder_layer = PHMTransformerEncoderLayer(phm, d_model=emb_size, nhead=nhead,
dim_feedforward=dim_feedforward)
self.transformer_encoder = TransformerEncoder(encoder_layer, num_layers=num_encoder_layers)
decoder_layer = PHMTransformerDecoderLayer(phm, d_model=emb_size, nhead=nhead,
dim_feedforward=dim_feedforward)
self.transformer_decoder = TransformerDecoder(decoder_layer, num_layers=num_decoder_layers)
self.generator = PHMLayer(phm, emb_size, tgt_vocab_size)
self.src_tok_emb = PHMTokenEmbedding(src_vocab_size, emb_size, phm)
self.tgt_tok_emb = PHMTokenEmbedding(tgt_vocab_size, emb_size, phm)
###########################################################################################################
self.positional_encoding = PositionalEncoding(emb_size, dropout=dropout)
def forward(self, src: Tensor, trg: Tensor, src_mask: Tensor,
tgt_mask: Tensor, src_padding_mask: Tensor,
tgt_padding_mask: Tensor, memory_key_padding_mask: Tensor):
src_emb = self.positional_encoding(self.src_tok_emb(src))
tgt_emb = self.positional_encoding(self.tgt_tok_emb(trg))
memory = self.transformer_encoder(src_emb, src_mask, src_padding_mask)
outs = self.transformer_decoder(tgt_emb, memory, tgt_mask, None,
tgt_padding_mask, memory_key_padding_mask)
return self.generator(outs)
def encode(self, src: Tensor, src_mask: Tensor):
return self.transformer_encoder(self.positional_encoding(
self.src_tok_emb(src)), src_mask)
def decode(self, tgt: Tensor, memory: Tensor, tgt_mask: Tensor):
return self.transformer_decoder(self.positional_encoding(
self.tgt_tok_emb(tgt)), memory,
tgt_mask)
# + [markdown] id="WZu2eJfoE8kr"
# This is the usual transformer
# + id="VCCahYjmt656"
class Seq2SeqTransformer(nn.Module):
def __init__(self, nhead: int, num_encoder_layers: int, num_decoder_layers: int,
emb_size: int, src_vocab_size: int, tgt_vocab_size: int,
dim_feedforward:int = 512, dropout:float = 0.1):
super(Seq2SeqTransformer, self).__init__()
encoder_layer = TransformerEncoderLayer(d_model=emb_size, nhead=nhead,
dim_feedforward=dim_feedforward)
self.transformer_encoder = TransformerEncoder(encoder_layer, num_layers=num_encoder_layers)
decoder_layer = TransformerDecoderLayer(d_model=emb_size, nhead=nhead,
dim_feedforward=dim_feedforward)
self.transformer_decoder = TransformerDecoder(decoder_layer, num_layers=num_decoder_layers)
self.generator = nn.Linear(emb_size, tgt_vocab_size)
self.src_tok_emb = TokenEmbedding(src_vocab_size, emb_size)
self.tgt_tok_emb = TokenEmbedding(tgt_vocab_size, emb_size)
self.positional_encoding = PositionalEncoding(emb_size, dropout=dropout)
def forward(self, src: Tensor, trg: Tensor, src_mask: Tensor,
tgt_mask: Tensor, src_padding_mask: Tensor,
tgt_padding_mask: Tensor, memory_key_padding_mask: Tensor):
src_emb = self.positional_encoding(self.src_tok_emb(src))
tgt_emb = self.positional_encoding(self.tgt_tok_emb(trg))
memory = self.transformer_encoder(src_emb, src_mask, src_padding_mask)
outs = self.transformer_decoder(tgt_emb, memory, tgt_mask, None,
tgt_padding_mask, memory_key_padding_mask)
return self.generator(outs)
def encode(self, src: Tensor, src_mask: Tensor):
return self.transformer_encoder(self.positional_encoding(
self.src_tok_emb(src)), src_mask)
def decode(self, tgt: Tensor, memory: Tensor, tgt_mask: Tensor):
return self.transformer_decoder(self.positional_encoding(
self.tgt_tok_emb(tgt)), memory,
tgt_mask)
# + id="zGp9atlkgnvV"
class PHMEmbedding(Module):
__constants__ = ['num_embeddings', 'embedding_dim', 'padding_idx', 'max_norm',
'norm_type', 'scale_grad_by_freq', 'sparse']
def __init__(self, num_embeddings: int, embedding_dim: int, phm: int, padding_idx: Optional[int] = None,
max_norm: Optional[float] = None, norm_type: float = 2., scale_grad_by_freq: bool = False,
sparse: bool = False, _weight: Optional[Tensor] = None,
device=None, dtype=None):
factory_kwargs = {'device': device, 'dtype': dtype}
super(PHMEmbedding, self).__init__()
self.num_embeddings = num_embeddings
self.embedding_dim = embedding_dim
if padding_idx is not None:
if padding_idx > 0:
assert padding_idx < self.num_embeddings, 'Padding_idx must be within num_embeddings'
elif padding_idx < 0:
assert padding_idx >= -self.num_embeddings, 'Padding_idx must be within num_embeddings'
padding_idx = self.num_embeddings + padding_idx
self.padding_idx = padding_idx
self.max_norm = max_norm
self.norm_type = norm_type
self.scale_grad_by_freq = scale_grad_by_freq
if _weight is None:
##########################################################################################
self.a = Parameter(torch.nn.init.xavier_uniform_(torch.zeros((phm, phm, phm))))
self.s = Parameter(torch.nn.init.xavier_uniform_(torch.zeros((phm, num_embeddings//phm, embedding_dim//phm))))
self.weight = torch.zeros((num_embeddings, embedding_dim))
##########################################################################################
else:
assert list(_weight.shape) == [num_embeddings, embedding_dim], \
'Shape of weight does not match num_embeddings and embedding_dim'
self.weight = Parameter(_weight)
self.sparse = sparse
def reset_parameters(self) -> None:
#init.normal_(self.weight)
#self._fill_padding_idx_with_zero() #not used
pass
def _fill_padding_idx_with_zero(self) -> None:
if self.padding_idx is not None:
with torch.no_grad():
self.weight[self.padding_idx].fill_(0)
##########################################################################################
def kronecker_product1(self, a, b):
siz1 = torch.Size(torch.tensor(a.shape[-2:]) * torch.tensor(b.shape[-2:]))
res = a.unsqueeze(-1).unsqueeze(-3) * b.unsqueeze(-2).unsqueeze(-4)
siz0 = res.shape[:-4]
out = res.reshape(siz0 + siz1)
return out
##########################################################################################
def forward(self, input: Tensor) -> Tensor:
##########################################################################################
self.weight = torch.sum(self.kronecker_product1(self.a, self.s), dim=0)
##########################################################################################
#self._fill_padding_idx_with_zero()
return F.embedding(
input, self.weight, self.padding_idx, self.max_norm,
self.norm_type, self.scale_grad_by_freq, self.sparse)
def extra_repr(self) -> str:
s = '{num_embeddings}, {embedding_dim}'
if self.padding_idx is not None:
s += ', padding_idx={padding_idx}'
if self.max_norm is not None:
s += ', max_norm={max_norm}'
if self.norm_type != 2:
s += ', norm_type={norm_type}'
if self.scale_grad_by_freq is not False:
s += ', scale_grad_by_freq={scale_grad_by_freq}'
if self.sparse is not False:
s += ', sparse=True'
return s.format(**self.__dict__)
# + [markdown] id="uTSESCTsFV0d"
# Token embedding is used in the usual transformer whereas PHM transformer uses PHMTokenEmbedding. Positional encoding is untouched.
# + id="5n_U0CjTt658"
class PositionalEncoding(nn.Module):
def __init__(self, emb_size: int, dropout, maxlen: int = 5000):
super(PositionalEncoding, self).__init__()
den = torch.exp(- torch.arange(0, emb_size, 2) * math.log(10000) / emb_size)
pos = torch.arange(0, maxlen).reshape(maxlen, 1)
pos_embedding = torch.zeros((maxlen, emb_size))
pos_embedding[:, 0::2] = torch.sin(pos * den)
pos_embedding[:, 1::2] = torch.cos(pos * den)
pos_embedding = pos_embedding.unsqueeze(-2)
self.dropout = nn.Dropout(dropout)
self.register_buffer('pos_embedding', pos_embedding)
def forward(self, token_embedding: Tensor):
return self.dropout(token_embedding +
self.pos_embedding[:token_embedding.size(0),:])
class TokenEmbedding(nn.Module):
def __init__(self, vocab_size: int, emb_size):
super(TokenEmbedding, self).__init__()
self.embedding = nn.Embedding(vocab_size, emb_size)
self.emb_size = emb_size
def forward(self, tokens: Tensor):
return self.embedding(tokens.long()) * math.sqrt(self.emb_size)
class PHMTokenEmbedding(nn.Module):
def __init__(self, vocab_size: int, emb_size, phm):
super(PHMTokenEmbedding, self).__init__()
self.embedding = PHMEmbedding(vocab_size, emb_size, phm)
self.emb_size = emb_size
def forward(self, tokens: Tensor):
return self.embedding(tokens.long()) * math.sqrt(self.emb_size)
# + [markdown] id="Kjh1xctrFgIS"
# Attention masks are created so that the encoder and decoder only have access to the relevant tokens at a time step.
# + id="1QhCOolDt66A"
def generate_square_subsequent_mask(sz):
mask = (torch.triu(torch.ones((sz, sz), device=device)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def create_mask(src, tgt):
src_seq_len = src.shape[0]
tgt_seq_len = tgt.shape[0]
tgt_mask = generate_square_subsequent_mask(tgt_seq_len)
src_mask = torch.zeros((src_seq_len, src_seq_len), device=device).type(torch.bool)
src_padding_mask = (src == PAD_IDX).transpose(0, 1)
tgt_padding_mask = (tgt == PAD_IDX).transpose(0, 1)
return src_mask, tgt_mask, src_padding_mask, tgt_padding_mask
# + [markdown] id="3ViY1CROFyz8"
# The usual training eval loops with WandB reporting.
# + id="zLq0dLBCt66C"
def train_epoch(model, train_iter, optimizer):
model.train()
losses = 0
for idx, (src, tgt) in enumerate(train_iter):
src = src.to(device)
tgt = tgt.to(device)
tgt_input = tgt[:-1, :]
src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input)
logits = model(src, tgt_input, src_mask, tgt_mask,
src_padding_mask, tgt_padding_mask, src_padding_mask)
optimizer.zero_grad()
tgt_out = tgt[1:,:]
loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1))
loss.backward()
optimizer.step()
losses += loss.item()
if idx % 50 == 0:
wandb.log({"train_loss": loss.item()})
return losses / len(train_iter)
def evaluate(model, val_iter):
model.eval()
losses = 0
for idx, (src, tgt) in (enumerate(valid_iter)):
src = src.to(device)
tgt = tgt.to(device)
tgt_input = tgt[:-1, :]
src_mask, tgt_mask, src_padding_mask, tgt_padding_mask = create_mask(src, tgt_input)
logits = model(src, tgt_input, src_mask, tgt_mask,
src_padding_mask, tgt_padding_mask, src_padding_mask)
tgt_out = tgt[1:,:]
loss = loss_fn(logits.reshape(-1, logits.shape[-1]), tgt_out.reshape(-1))
losses += loss.item()
return losses / len(val_iter)
# + id="bhbcsaobB1-c"
def bleu(model, src_lines, tgt_lines, src_vcb, tgt_vcb, src_tok):
tokenizer = RegexpTokenizer(r'\w+')
assert len(src_lines) == len(tgt_lines)
n = len(src_lines)
scores = 0
for mo, og in zip(src_lines, tgt_lines):
reference = [tokenizer.tokenize(og[:-3])]
candidate = tokenizer.tokenize(translate(model, mo[:-3], src_vcb, tgt_vcb, en_tokenizer))
try:
scores += sentence_bleu(reference, candidate, smoothing_function=smoothie)
except:
print('except') # the bleu function very rarely gives exceptions (once every 500 lines or so)
pass
return scores/n
# + [markdown] id="WsescJCJydXA"
# # Instantiate Model
# + id="VfVA2zobt66B"
SRC_VOCAB_SIZE = len(modern_vocab)
TGT_VOCAB_SIZE = len(original_vocab)
EMB_SIZE = 512 #embedding size
NHEAD = 8 #number of heads in multihead attention
FFN_HID_DIM = 512 #feed forward network hidden dimension
NUM_ENCODER_LAYERS = 4
NUM_DECODER_LAYERS = 4
N = 2 #the 1/n scaling factor
#this is the phm+ transformer
transformer = PHMTransformer(N, NHEAD, NUM_ENCODER_LAYERS, NUM_DECODER_LAYERS,
EMB_SIZE, SRC_VOCAB_SIZE, TGT_VOCAB_SIZE,
FFN_HID_DIM)
"""
#this is the usual transformer
transformer = Seq2SeqTransformer(NHEAD, NUM_ENCODER_LAYERS, NUM_DECODER_LAYERS,
EMB_SIZE, SRC_VOCAB_SIZE, TGT_VOCAB_SIZE,
FFN_HID_DIM)
"""
#one of the transformers above should be commented out
for p in transformer.parameters():
if p.dim() > 1:
nn.init.xavier_uniform_(p)
transformer = transformer.to(device)
loss_fn = torch.nn.CrossEntropyLoss(ignore_index=PAD_IDX)
optimizer = torch.optim.AdamW(
transformer.parameters(), lr=0.0001, betas=(0.9, 0.98), eps=1e-9, weight_decay=0.05
)
# at every milestone the learning rate is multiplied with gamma to avoid overfitting
scheduler = MultiStepLR(optimizer, milestones=[10, 20, 30], gamma=0.9)
# + colab={"base_uri": "https://localhost:8080/"} id="07yoyaJ-yMfl" outputId="0a70231b-39f3-4c08-b31c-5e62d1c000e6"
print(transformer)
# + id="KVnO79YW6xxm"
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
# + colab={"base_uri": "https://localhost:8080/"} id="z3_RxyywIpXc" outputId="9c4e0dec-4964-467a-b8ae-2774d752a53a"
count_parameters(transformer)
# + [markdown] id="KshZB0MBhD_6"
# Weights and Biases is used to track the experiment. By running this code you can also contribute to our experiment page, or you can track the runs in your own project page
# + colab={"base_uri": "https://localhost:8080/", "height": 620} id="pCZT3IE6ZGx8" outputId="2b66d1f1-00cc-4641-d403-bdcbd2dc24d2"
wandb.init(project='transformer', entity='demegire')
# + id="EMay1qm3ZXJ_"
config = wandb.config
config.learning_rate = optimizer.state_dict()['param_groups'][0]['lr']
config.SRC_VOCAB_SIZE = SRC_VOCAB_SIZE
config.TGT_VOCAB_SIZE = TGT_VOCAB_SIZE
config.EMB_SIZE = EMB_SIZE
config.NHEAD = NHEAD
config.FFN_HID_DIM = FFN_HID_DIM
config.BATCH_SIZE = BATCH_SIZE
config.NUM_ENCODER_LAYERS = NUM_ENCODER_LAYERS
config.NUM_DECODER_LAYERS = NUM_DECODER_LAYERS
config.num_parameters = count_parameters(transformer)
# + [markdown] id="wLA5ReqjHd-h"
# Our beam search is not the most efficient but it works.
# + id="4wjaBUZ9t66E"
def greedy_decode(model, src, src_mask, max_len, start_symbol):
src = src.to(device)
src_mask = src_mask.to(device)
memory = model.encode(src, src_mask)
ys = torch.ones(1, 1).fill_(start_symbol).type(torch.long).to(device)
for i in range(max_len-1):
memory = memory.to(device)
memory_mask = torch.zeros(ys.shape[0], memory.shape[0]).to(device).type(torch.bool)
tgt_mask = (generate_square_subsequent_mask(ys.size(0))
.type(torch.bool)).to(device)
out = model.decode(ys, memory, tgt_mask)
out = out.transpose(0, 1)
prob = model.generator(out[:, -1])
_, next_word = torch.max(prob, dim = 1)
next_word = next_word.item()
ys = torch.cat([ys,
torch.ones(1, 1).type_as(src.data).fill_(next_word)], dim=0)
if next_word == EOS_IDX:
break
return ys
def beam_search(model, src, src_mask, max_len, start_symbol, beam_size=2, alpha=0.7):
with torch.no_grad():
src = src.to(device)
src_mask = src_mask.to(device)
memory = model.encode(src, src_mask)
ys = torch.ones(1, 1).fill_(start_symbol).type(torch.long).to(device)
memory = memory.to(device)
memory_mask = torch.zeros(ys.shape[0], memory.shape[0]).to(device).type(torch.bool)
tgt_mask = (generate_square_subsequent_mask(ys.size(0))
.type(torch.bool)).to(device)
out = model.decode(ys, memory, tgt_mask)
out = out.transpose(0, 1)
probs = model.generator(out[:, -1])
soft_probs = softmax(probs, dim=1)
tops = torch.topk(soft_probs, beam_size, dim = 1)
sentences = []
for i in range(beam_size):
sentences.append((tops.values[0][i].item(),
torch.cat([ys, torch.ones(1, 1).type_as(src.data).fill_(tops.indices[0][i].item())], dim=0)))
while True:
candidate_sentences = []
eos_index = []
for i in range(beam_size):
eos_flag = False
if sentences[i][1][-1].item() != EOS_IDX:
memory = memory.to(device)
memory_mask = torch.zeros(sentences[i][1].shape[0], memory.shape[0]).to(device).type(torch.bool)
tgt_mask = (generate_square_subsequent_mask(sentences[i][1].size(0))
.type(torch.bool)).to(device)
out = model.decode(sentences[i][1], memory, tgt_mask)
out = out.transpose(0, 1)
probs = model.generator(out[:, -1])
soft_probs = softmax(probs, dim=1)
tops = torch.topk(soft_probs, beam_size, dim = 1)
for j in range(beam_size):
candidate_sentences.append((1/(len(torch.cat([sentences[i][1], torch.ones(1, 1).type_as(src.data).fill_(tops.indices[0][j].item())], dim=0))**alpha) * sentences[i][0] * tops.values[0][j].item(),
torch.cat([sentences[i][1], torch.ones(1, 1).type_as(src.data).fill_(tops.indices[0][j].item())], dim=0) ))
else:
eos_index.append(i)
eos_flag = True
if eos_flag:
for idx in eos_index:
candidate_sentences.append(sentences[idx])
candidate_sentences.sort(key=lambda x: x[0], reverse=True)
sentences = candidate_sentences[:beam_size]
candidate_sentences = []
end_flag = True
max_flag = False
for idx in range(beam_size):
if sentences[idx][1][-1].item() != EOS_IDX :
end_flag = False
for idx in range(beam_size):
if len(sentences[idx][1]) == max_len:
end_flag = True
if end_flag or max_flag:
return sorted(sentences,key=lambda x: x[0], reverse=True)[0][1]
def translate(model, src, src_vocab, tgt_vocab, src_tokenizer):
model.eval()
with torch.no_grad():
tokens = [BOS_IDX] + [src_vocab.stoi[tok] for tok in src_tokenizer(src)]+ [EOS_IDX]
num_tokens = len(tokens)
src = (torch.LongTensor(tokens).reshape(num_tokens, 1) )
src_mask = (torch.zeros(num_tokens, num_tokens)).type(torch.bool)
tgt_tokens = beam_search(model, src, src_mask, max_len=num_tokens + 5, start_symbol=BOS_IDX, beam_size=5, alpha = 0.6).flatten()
#tgt_tokens = greedy_decode(model, src, src_mask, max_len=num_tokens + 5, start_symbol=BOS_IDX).flatten() # greedy decode is faster
return " ".join([tgt_vocab.itos[tok] for tok in tgt_tokens]).replace("<bos>", "").replace("<eos>", "")
# + [markdown] id="gp1hOx5IHrTM"
# Be sure to run this line to see if your model works. The model will output nonsense when it is not trained
# + colab={"base_uri": "https://localhost:8080/", "height": 36} id="CpwLDHwbt66E" outputId="b228833d-37b2-414b-cea9-57f1ef0ac3df"
translate(transformer, "I have half a mind to hit you before you speak again", modern_vocab, original_vocab, en_tokenizer)
# + [markdown] id="FTZk0cWWHzcC"
# Smoothing function is explained in our repo.
# + id="TZLle3ri_eh1"
smoothie = SmoothingFunction().method4
# + [markdown] id="nIq2lg90Mpjk"
# # Train
# + [markdown] id="xPJTzQVtMKsO"
# Load the test set
# + id="CYI50I9jBHwu"
with open('test.original') as f:
test_lines_og = f.readlines()
# + id="fAmIKSY_Czhj"
with open('test.modern') as g:
test_lines_mo = g.readlines()
# + [markdown] id="CyYOBSqXMOSE"
# There are 575 steps in every epoch
# + colab={"base_uri": "https://localhost:8080/"} id="SZDHKgui3SAG" outputId="f1659aba-3a80-485f-c402-ff2988090894"
len(train_iter)
# + [markdown] id="FqWA2ayaMUHY"
# The model is trained for 9 epochs, notice range(1, num epochs)
# + id="-n6MO9ej3wlr"
NUM_EPOCHS = 10
# + colab={"base_uri": "https://localhost:8080/"} id="-EUBL4D5t66D" outputId="873ff0d3-a3b4-4917-9119-cfcbd0e4de5e"
wandb.watch(transformer)
for epoch in range(1, NUM_EPOCHS):
start_time = time.time()
train_loss = train_epoch(transformer, train_iter, optimizer)
val_loss = evaluate(transformer, valid_iter)
wandb.log({"val_loss": val_loss})
end_time = time.time()
scheduler.step()
print((f"Epoch: {epoch}, Train loss: {train_loss:.3f}, Val loss: {val_loss:.3f}, "
f"Epoch time = {(end_time - start_time):.3f}s"))
# + colab={"base_uri": "https://localhost:8080/", "height": 510, "referenced_widgets": ["321b68039d3446bcaf4ad4f74be492dc", "609413c75891404dbcb13ecbb67ad69c", "c53dfea99a844785b86c3b990bd1d58a", "4ffed0daf0db4da7900412ee735ee457", "2a9af9b6fc9f42d5b3d67e2c05c6279f", "41b0171d39474d88ac61797ec06b71e5", "9ff0cbfb452543a2a9e33a2a6c3cad7c", "4179f9701091458a86efcfb70b792b0c"]} id="EQDYIHXheSPu" outputId="11ea412f-7b26-4dcb-aecd-e0a06542e265"
wandb.finish()
# + [markdown] id="ZIyaL4vpMthT"
# # Test
# + [markdown] id="PxGUNFoXMd9X"
# Lastly, the model is evaluated on the first 500 lines of the test set, which is approx 1500 lines
# + colab={"base_uri": "https://localhost:8080/"} id="rVgbN1NQ4xzK" outputId="020c19a4-024f-44e3-afd9-ce3edbd2e719"
start_time = time.time()
score = bleu(transformer, test_lines_mo[:500], test_lines_og[:500], modern_vocab, original_vocab, en_tokenizer)
end_time = time.time()
print(score)
print(end_time-start_time)
| PHM_Transformer_Style_Transfer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Breast cancer detection using logistical regression
# ## Importing libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# # Importing dataset (the dataset is just for practice, this one is outdated)
# # The column class of the dataset is 2 for negative and 4 for positive
# # The first column is an ID for the patient so it will not be used
dataset = pd.read_csv("breast_cancer.csv")
X = dataset.iloc[:, 1:-1].values
y = dataset.iloc[:, -1].values
# # Spliting the dataset on training and test set (80/20)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# # Training the model on the train set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
# # Predicting the test set results
y_pred = classifier.predict(X_test)
# # Making a confusion matrix for accuracy testing
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
# # K-fold cross validation for computing accuracy
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
print("Accuracy: {:.2f} %".format(accuracies.mean()*100))
print("Standard Deviation: {:.2f} %".format(accuracies.std()*100))
| Breast Cancer Detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import math
from torch.distributions import Poisson
import matplotlib.pyplot as plt
import numpy as np
from poisson_process import PoissonProcess
intense_pts = torch.linspace(0, 10).unsqueeze(-1)
def intense_func(pts):
return torch.sin(intense_pts[:, 0])
intense_vals = intense_func(intense_pts)
proc = PoissonProcess(intense_pts, intense_vals)
proc.in_dim
sim = proc.simulate()
plt.plot(intense_pts, intense_vals.exp(), label="Intensity Function")
plt.scatter(sim, torch.zeros_like(sim), label="Arrivals", color="orange")
plt.legend()
# ## Now in 2D
# +
n = 50
x_max = 2.
grid = torch.zeros(int(pow(n, 2)), 2)
for i in range(n):
for j in range(n):
grid[i * n + j][0] = float(i) / (n-1)
grid[i * n + j][1] = float(j) / (n-1)
grid = grid * x_max
# -
def intense_func(pts):
# return (torch.sin(pts[:, 0]*0.5) + 0.2*torch.sin(pts[:, 1] * 5)).exp()
return 1.5*torch.sin(pts[:, 1] * 5 + 1.) + torch.sin(pts[:, 0])
intense = intense_func(grid)
xx, yy = np.meshgrid(grid[:n, 1], grid[:n, 1])
int_plot = intense.reshape(xx.shape)
plt.contourf(xx, yy, int_plot)
pois_proc = PoissonProcess(grid, intense)
simmed_pts = pois_proc.simulate()
# +
fig, ax = plt.subplots(1, 2, figsize=(20, 7.5))
im=ax[0].contourf(xx, yy, int_plot, cmap='coolwarm')
cbar=fig.colorbar(im, ax=ax[0])
cbar.ax.tick_params(labelsize=20)
cbar.ax.yaxis.offsetText.set(size=20)
ax[0].set_title("Intensity Function",
fontsize=24)
ax[1].scatter(simmed_pts[:, 1], simmed_pts[:, 0])
ax[1].set_xlim(0, x_max)
ax[1].set_ylim(0, x_max)
# ax[1].autoscale(False)
ax[1].set_title("Simulated",
fontsize=24)
# -
simmed_pts.shape
grd = pois_proc.compute_grad(simmed_pts)
grd
hess = pois_proc.compute_hessian(simmed_pts)
hess.shape
| cox-process/test_sampling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# %load_ext lab_black
# +
from collections import Counter
def load_crabs(filename):
with open(filename) as f:
temp = f.read().strip()
crabs = [int(x) for x in temp.split(",")]
return crabs
def part_one(crabs):
positions = Counter(crabs)
min_position = min(positions.keys())
max_position = max(positions.keys())
fuel_costs = {}
for i in range(min_position, max_position):
cost = sum([abs(pos - i) * count for pos, count in positions.items()])
fuel_costs[i] = cost
return min(fuel_costs.values())
def part_two(crabs):
positions = Counter(crabs)
min_position = min(positions.keys())
max_position = max(positions.keys()) + 1
fuel_costs = {}
for i in range(min_position, max_position):
cost = sum(
[sum(range(abs(pos - i) + 1)) * count for pos, count in positions.items()]
)
fuel_costs[i] = cost
return min(fuel_costs.values())
def main():
crabs = load_crabs("../data/07_input.txt")
print(part_one(crabs))
print(part_two(crabs))
# -
| notebooks/07_The-Treachery-Of-Whales.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%% md\n"}
# # Decision Theory
# ## Introduction
# Decision theory spans a combination of problem solving techniques to find the best decision in complex decisional problems. In this unit, we will use the following terminology:
#
# - **Alternatives:** Decision variables which are controllable and depend on the decision makerยดs decision.
# - **Uncertainty and states of nature:** External variables, are incontrollable and need to be estimated or assumed.
# - **Performances:** Profit or Cost (utility) of the result of a decision
#
# Hence, the objective is to find the alternative with the highest performance for the decision maker, possibly in situations where the performance may depend on external variables which cannot be controlled.
#
# So far, this set up is not that different from the definitions seen in previous units, however, we will be focusing on the following aspects in this unit:
#
# - **Impacts over time** All the important consequences of a problem do not occur at the same instant in time
#
# - **Uncertainty** At the time the decision-maker must select an alternative, the consequences are not known with certainty.
#
# - **Possibility of acquiring information** Often we can acquire additional information to support decision making at a cost. For instance, to collect seismic information to decide whether to drill for oil. Decision theory provides methods to evaluate if it is worth acquiring additional information or not.
#
# - **Dynamic aspects** The problem might not end immediately after an alternative is chosen but might require further analysis (eg further decisions)
#
#
# ## Pay off Matrix
# The pay-off matrix is a tool to represent the performances of the different alternatives against the possible future outcomes of an uncontrolled event, or states of nature.
# The different rows of the matrix represent the alternatives of the decision maker, and the different columns of the matrix, the different possible states of nature. Finally, every cell in the matrix contains the performance of each alternative given the occurence of each state of nature. That is, let us note the decision maker's alternatives as $a_1, a_2, ..., a_m$, and the different states of nature as $s_1, s_2, ..., s_n$. Let us note the performance of alternative $a_i$ when $s_j$ has occured as $u_{ij}$, then the pay-off matrix is:
#
# $\begin{bmatrix}
# u_{11} & u_{12} & ... & u_{1n}\\
# u_{21} & u_{22} & ... & u_{2n}\\
# u_{m1} & u_{m2} & ... & u_{mn}
# \end{bmatrix}$
#
# Note that the pay-off matrix represents all possible outcomes of our decision under uncertainty, given that the states of nature are exhaustive (together they describe all the possible outcomes of the uncontrolled variable) and mutually exclusive (if one occurs, then the rest cannot occur).
# The pay-off matrix can also be represented in tabular form as:
#
# | Decision Alternatives | state 1 | state 2| ... | state n|
# |-----------------------|---------|--------|-----|--------|
# | $a_1$ | $u_{11}$ | $u_{12}$ |... |$u_{1n}$ |
# | $a_2$ | $u_{21}$ | $u_{22}$ |... |$u_{2n}$ |
# | ... | ... | ... |... |... |
# | $a_m$ | $u_{m1}$ | $u_{m2}$ |... |$u_{mn}$ |
#
#
# Let us at this point use one example to be used in the next sections of this presentation. In the example, the decision maker needs to select a financial product among a set of market alternatives: Gold, bonds, stock options, deposit, or hedge fund. Each product provides different benefits or losses depending on the behaviour of the market. The behaviour of the market is not controlled by the decision maker, therefore, it is characterised in this example as a set of states of natures:
#
# - Accumulation Phase: The market is stable and characterised by a slow raise
# - Mark-up Phase: The market has been stable for a while, investors feel secure and the market is characterised by a fast raise.
# - Distribution Phase: Most investors start selling to collect benefits, the market stabilises and there is little or no change.
# - Mark-down Phase: Some investors try to hold positions as there is a large fall of the market
#
# The pay-off matrix shows the expected performance in euros of the investment of each product under each market cycle phase:
#
# | Decision Alternatives | Accumulation | Mark-up| Distribution | Mark-down|
# |-----------------------|---------|--------|-----|--------|
# | Gold | -100 | 100 |200 |0 |
# | Bonds | 250 | 200 |150 |-150 |
# | Stock options | 500 | 250 |100 |-600 |
# | Deposit | 60 | 60 |60 |60 |
# | Hedge fund | 200 | 150 |150 |-150 |
#
# ### Dominance
# Dominance is a property of alternatives that provide a better performance than others for every state of nature. In the example above, it can be seen that the bond provides a higher benefit than the option hedge in any market scenario, therefore the bond option is dominant over the hedge fund option. Dominance allows us to simplify the decision making process, eliminating the dominated alternatives in favour of the dominant alternatives. Taking into account dominance, the pay-off matrix above can be written as:
#
# | Decision Alternatives | Accumulation | Mark-up| Distribution | Mark-down|
# |-----------------------|---------|--------|-----|--------|
# | Gold | -100 | 100 |200 |0 |
# | Bonds | 250 | 200 |150 |-150 |
# | Stock options | 500 | 250 |100 |-600 |
# | Deposit | 60 | 60 |60 |60 |
#
# Any rational decision maker would select bonds before stock options under any market phase and therefore, the latter can be ignored.
#
# ## Decision Rules
# Decision rules are criteria used by a rational decision maker to make systematic decisions, id est, to select the best alternative. This section describes some of the most important criteria in related literature:
#
# ### MinMax
# The MinMax criteria aims to minimise loss in the worst-case scenario. It is therefore a conservative, or pessimistic criteria. Also known as Maximin (minimise the maximum loss), this criteria first finds the minimum value of the performance of each decision variables in every scenario:
#
# $m_i = \min(u_{i1}, u_{i2}, ..., u_{in}) \quad \forall i=[1, 2, ..., m]$
#
# These values represent the worst case scenario in every decision alternative, then the minmax criteria selects the maximum of this values:
#
# $d = \text{\argmax}(m_1, m_2, ..., m_m)$
#
# The function argmax above returns the index of the maximum value rather than the maximum value. Hence, it will return the index of the alternative for which the maximum is value:
#
# In the example above, the minmax criteria yields:
#
# | Decision Alternatives | Accumulation | Mark-up| Distribution | Mark-down| $m_i$ |
# |-----------------------|---------|--------|-----|--------|----|
# | Gold | -100 | 100 |200 |0 |-100|
# | Bonds | 250 | 200 |150 |-150 |-150|
# | Stock options | 500 | 250 |100 |-600 |-600|
# | Deposit | 60 | 60 |60 |60 |60|
#
# $d = \text{\argmax}(-100, -150, -600, 60) = 4$
#
# $v = max(-100, -150, -600, 60) = 60$
#
# The MinMax criteria models the decisions of a conservative rational decision maker and in this example, a conservative investor will invest in deposits.
#
#
# ### MaxMax
# The MaxMax criteria on the other hand aims to maximise profit in the best-case scenario. It is therefore an aggresive, and optimistic criteria. This criteria first finds the maximum value of the performance of each decision variables in every scenario:
#
# $o_i = \max(u_{i1}, u_{i2}, ..., u_{in}) \quad \forall i=[1, 2, ..., m]$
#
# These values represent the best case scenario in every decision alternative, then the MaxMax criteria selects the maximum of this values:
#
# $d = \text{\argmax}(m_1, m_2, ..., m_n)$
#
# In the example above, the MaxMax criteria yields:
#
# | Decision Alternatives | Accumulation | Mark-up| Distribution | Mark-down| $o_i$ |
# |-----------------------|---------|--------|-----|--------|----|
# | Gold | -100 | 100 |200 |0 |200|
# | Bonds | 250 | 200 |150 |-150 |250|
# | Stock options | 500 | 250 |100 |-600 |500|
# | Deposit | 60 | 60 |60 |60 |60|
#
# $d = \text{\argmax}(200, 250, 500, 60) = 3$
#
# $v = \max(200, 250, 500, 60) = 500$
#
# The MaxMax criteria models the decisions of a conservative rational decision maker and in this example, a conservative investor will invest in deposits.
#
# ### Bayesian
# The Bayesian criteria takes into account additional information about the likelihood of each state of nature. It is therefore the criteria applied by an informed decision maker which wants to factor in the decision the likelihood of each alternative. The probabilities of each state or nature as defined as $p(s_j)$. The Expected Monetary Value (EMV) of each alternative is defined as:
#
# $\text{EMV_i} = \sum_ju_{ij}*p(s_j}$
#
# The EMV can be interpreted as the weighted average value of the alternative given the probabilities of occurence of each state of nature.
# The Bayesian criteria selects the alternative with the maximum EMV:
#
# $d = \text{\argmax}(text{EMV_1}, text{EMV_2}, ..., text{EMV_m})$
#
# Let us use again the example above to apply this decision criteria.
#
# | States of nature | Accumulation | Mark-up| Distribution | Mark-down| EMV |
# |-----------------------|---------|--------|-----|--------|----|
# | Probabilities ($p(s_j)$) | 0.2 | 0.3| 0.3 | 0.2| EMV |
# | Gold | -100 | 100 |200 |0 |70|
# | Bonds | 250 | 200 |150 |-150 |125|
# | Stock options | 500 | 250 |100 |-600 |85|
# | Deposit | 60 | 60 |60 |60 |60|
#
# $d = \text{argmax}(70, 125, 85, 60) = 2$
#
# $v = \max(70, 125, 85, 60) = 125$
#
# The Bayesian criteria selects the decision alternative with index 2 (Bonds) and its EMV is 125.
| docs/source/Decision Theory/tutorials/DT Intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: envs
# language: python
# name: cs771
# ---
# ### fast local attention
#
# #### batch size 128, learning rate = 0.001, windows size 5, num_gpus = 6
# +
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.WARN)
import pickle
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
import os
from tensorflow.python.client import device_lib
from collections import Counter
import time
VERY_BIG_NUMBER = 1e30
# +
f = open('../../Glove/word_embedding_glove', 'rb')
word_embedding = pickle.load(f)
f.close()
word_embedding = word_embedding[: len(word_embedding)-1]
f = open('../../Glove/vocab_glove', 'rb')
vocab = pickle.load(f)
f.close()
word2id = dict((w, i) for i,w in enumerate(vocab))
id2word = dict((i, w) for i,w in enumerate(vocab))
unknown_token = "UNKNOWN_TOKEN"
# Model Description
model_name = 'model-aw-lex-local-att-fast-v2-4'
model_dir = '../output/all-word/' + model_name
save_dir = os.path.join(model_dir, "save/")
log_dir = os.path.join(model_dir, "log")
if not os.path.exists(model_dir):
os.mkdir(model_dir)
if not os.path.exists(save_dir):
os.mkdir(save_dir)
if not os.path.exists(log_dir):
os.mkdir(log_dir)
with open('../../../dataset/train_val_data_fine/all_word_lex','rb') as f:
train_data, val_data = pickle.load(f)
# Parameters
mode = 'train'
num_senses = 45
num_pos = 12
batch_size = 128
vocab_size = len(vocab)
unk_vocab_size = 1
word_emb_size = len(word_embedding[0])
max_sent_size = 200
hidden_size = 256
num_filter = 256
window_size = 5
kernel_size = 5
keep_prob = 0.3
l2_lambda = 0.001
init_lr = 0.001
decay_steps = 500
decay_rate = 0.99
clip_norm = 1
clipping = True
moving_avg_deacy = 0.999
num_gpus = 6
width = int(window_size/2)
# -
def average_gradients(tower_grads):
average_grads = []
for grad_and_vars in zip(*tower_grads):
# Note that each grad_and_vars looks like the following:
# ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
grads = []
for g, _ in grad_and_vars:
# Add 0 dimension to the gradients to represent the tower.
expanded_g = tf.expand_dims(g, 0)
# Append on a 'tower' dimension which we will average over below.
grads.append(expanded_g)
# Average over the 'tower' dimension.
grad = tf.concat(grads, 0)
grad = tf.reduce_mean(grad, axis=0)
# Keep in mind that the Variables are redundant because they are shared
# across towers. So .. we will just return the first tower's pointer to
# the Variable.
v = grad_and_vars[0][1]
grad_and_var = (grad, v)
average_grads.append(grad_and_var)
return average_grads
# +
# MODEL
device_num = 0
tower_grads = []
losses = []
predictions = []
predictions_pos = []
x = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="x")
y = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="y")
y_pos = tf.placeholder('int32', [num_gpus, batch_size, max_sent_size], name="y")
x_mask = tf.placeholder('bool', [num_gpus, batch_size, max_sent_size], name='x_mask')
sense_mask = tf.placeholder('bool', [num_gpus, batch_size, max_sent_size], name='sense_mask')
is_train = tf.placeholder('bool', [], name='is_train')
word_emb_mat = tf.placeholder('float', [None, word_emb_size], name='emb_mat')
input_keep_prob = tf.cond(is_train,lambda:keep_prob, lambda:tf.constant(1.0))
pretrain = tf.placeholder('bool', [], name="pretrain")
global_step = tf.Variable(0, trainable=False, name="global_step")
learning_rate = tf.train.exponential_decay(init_lr, global_step, decay_steps, decay_rate, staircase=True)
summaries = []
def global_attention(input_x, input_mask, W_att):
h_masked = tf.boolean_mask(input_x, input_mask)
h_tanh = tf.tanh(h_masked)
u = tf.matmul(h_tanh, W_att)
a = tf.nn.softmax(u)
c = tf.reduce_sum(tf.multiply(h_masked, a), axis=0)
return c
with tf.variable_scope("word_embedding"):
unk_word_emb_mat = tf.get_variable("word_emb_mat", dtype='float', shape=[unk_vocab_size, word_emb_size], initializer=tf.contrib.layers.xavier_initializer(uniform=True, seed=0, dtype=tf.float32))
final_word_emb_mat = tf.concat([word_emb_mat, unk_word_emb_mat], 0)
with tf.variable_scope(tf.get_variable_scope()):
for gpu_idx in range(num_gpus):
if gpu_idx>=3:
device_num = 1
with tf.name_scope("model_{}".format(gpu_idx)) as scope, tf.device('/gpu:%d' % device_num):
if gpu_idx > 0:
tf.get_variable_scope().reuse_variables()
with tf.name_scope("word"):
Wx = tf.nn.embedding_lookup(final_word_emb_mat, x[gpu_idx])
float_x_mask = tf.cast(x_mask[gpu_idx], 'float')
x_len = tf.reduce_sum(tf.cast(x_mask[gpu_idx], 'int32'), axis=1)
with tf.variable_scope("lstm1"):
cell_fw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
cell_bw1 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
d_cell_fw1 = tf.contrib.rnn.DropoutWrapper(cell_fw1, input_keep_prob=input_keep_prob)
d_cell_bw1 = tf.contrib.rnn.DropoutWrapper(cell_bw1, input_keep_prob=input_keep_prob)
(fw_h1, bw_h1), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw1, d_cell_bw1, Wx, sequence_length=x_len, dtype='float', scope='lstm1')
h1 = tf.concat([fw_h1, bw_h1], 2)
with tf.variable_scope("lstm2"):
cell_fw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
cell_bw2 = tf.contrib.rnn.BasicLSTMCell(hidden_size,state_is_tuple=True)
d_cell_fw2 = tf.contrib.rnn.DropoutWrapper(cell_fw2, input_keep_prob=input_keep_prob)
d_cell_bw2 = tf.contrib.rnn.DropoutWrapper(cell_bw2, input_keep_prob=input_keep_prob)
(fw_h2, bw_h2), _ = tf.nn.bidirectional_dynamic_rnn(d_cell_fw2, d_cell_bw2, h1, sequence_length=x_len, dtype='float', scope='lstm2')
h = tf.concat([fw_h2, bw_h2], 2)
with tf.variable_scope("local_attention"):
W_att_local = tf.get_variable("W_att_local", shape=[2*hidden_size, 1], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*10))
flat_h = tf.reshape(h, [batch_size*max_sent_size, tf.shape(h)[2]])
h_tanh = tf.tanh(flat_h)
u_flat = tf.matmul(h_tanh, W_att_local)
u_local = tf.reshape(u_flat, [batch_size, max_sent_size])
final_u = (tf.cast(x_mask[gpu_idx], 'float') -1)*VERY_BIG_NUMBER + u_local
c_local = tf.map_fn(lambda i:tf.reduce_sum(tf.multiply(h[:, tf.maximum(0, i-width-1):tf.minimum(1+width+i, max_sent_size)],
tf.expand_dims(tf.nn.softmax(final_u[:, tf.maximum(0, i-width-1):tf.minimum(1+width+i, max_sent_size)], 1), 2)), axis=1),
tf.range(max_sent_size), dtype=tf.float32)
c_local = tf.transpose(c_local, perm=[1,0,2])
h_final = tf.multiply(c_local, tf.expand_dims(float_x_mask, 2))
flat_h_final = tf.reshape(h_final, [-1, tf.shape(h_final)[2]])
with tf.variable_scope("hidden_layer"):
W = tf.get_variable("W", shape=[2*hidden_size, 2*hidden_size], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*20))
b = tf.get_variable("b", shape=[2*hidden_size], initializer=tf.zeros_initializer())
drop_flat_h_final = tf.nn.dropout(flat_h_final, input_keep_prob)
flat_hl = tf.matmul(drop_flat_h_final, W) + b
with tf.variable_scope("softmax_layer"):
W = tf.get_variable("W", shape=[2*hidden_size, num_senses], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*20))
b = tf.get_variable("b", shape=[num_senses], initializer=tf.zeros_initializer())
drop_flat_hl = tf.nn.dropout(flat_hl, input_keep_prob)
flat_logits_sense = tf.matmul(drop_flat_hl, W) + b
logits = tf.reshape(flat_logits_sense, [batch_size, max_sent_size, num_senses])
predictions.append(tf.argmax(logits, 2))
with tf.variable_scope("softmax_layer_pos"):
W = tf.get_variable("W", shape=[2*hidden_size, num_pos], initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1, seed=gpu_idx*30))
b = tf.get_variable("b", shape=[num_pos], initializer=tf.zeros_initializer())
flat_h1 = tf.reshape(h1, [-1, tf.shape(h1)[2]])
drop_flat_hl = tf.nn.dropout(flat_hl, input_keep_prob)
flat_logits_pos = tf.matmul(drop_flat_hl, W) + b
logits_pos = tf.reshape(flat_logits_pos, [batch_size, max_sent_size, num_pos])
predictions_pos.append(tf.argmax(logits_pos, 2))
float_sense_mask = tf.cast(sense_mask[gpu_idx], 'float')
loss = tf.contrib.seq2seq.sequence_loss(logits, y[gpu_idx], float_sense_mask, name="loss")
loss_pos = tf.contrib.seq2seq.sequence_loss(logits_pos, y_pos[gpu_idx], float_x_mask, name="loss_")
l2_loss = l2_lambda * tf.losses.get_regularization_loss()
total_loss = tf.cond(pretrain, lambda:loss_pos, lambda:loss + loss_pos + l2_loss)
summaries.append(tf.summary.scalar("loss_{}".format(gpu_idx), loss))
summaries.append(tf.summary.scalar("loss_pos_{}".format(gpu_idx), loss_pos))
summaries.append(tf.summary.scalar("total_loss_{}".format(gpu_idx), total_loss))
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_vars = optimizer.compute_gradients(total_loss)
clipped_grads = grads_vars
if(clipping == True):
clipped_grads = [(tf.clip_by_norm(grad, clip_norm), var) for grad, var in clipped_grads]
tower_grads.append(clipped_grads)
losses.append(total_loss)
with tf.device('/gpu:0'):
tower_grads = average_gradients(tower_grads)
losses = tf.add_n(losses)/len(losses)
apply_grad_op = optimizer.apply_gradients(tower_grads, global_step=global_step)
summaries.append(tf.summary.scalar('total_loss', losses))
summaries.append(tf.summary.scalar('learning_rate', learning_rate))
variable_averages = tf.train.ExponentialMovingAverage(moving_avg_deacy, global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())
train_op = tf.group(apply_grad_op, variables_averages_op)
saver = tf.train.Saver(tf.global_variables())
summary = tf.summary.merge(summaries)
# -
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0,1"
# print (device_lib.list_local_devices())
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.allow_soft_placement = True
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer()) # For initializing all the variables
summary_writer = tf.summary.FileWriter(log_dir, sess.graph) # For writing Summaries
# +
save_period = 100
log_period = 100
def model(xx, yy, yy_pos, mask, smask, train_cond=True, pretrain_cond=False):
num_batches = int(len(xx)/(batch_size*num_gpus))
_losses = 0
temp_loss = 0
preds_sense = []
true_sense = []
preds_pos = []
true_pos = []
for j in range(num_batches):
s = j * batch_size * num_gpus
e = (j+1) * batch_size * num_gpus
xx_re = xx[s:e].reshape([num_gpus, batch_size, -1])
yy_re = yy[s:e].reshape([num_gpus, batch_size, -1])
yy_pos_re = yy_pos[s:e].reshape([num_gpus, batch_size, -1])
mask_re = mask[s:e].reshape([num_gpus, batch_size, -1])
smask_re = smask[s:e].reshape([num_gpus, batch_size, -1])
feed_dict = {x:xx_re, y:yy_re, y_pos:yy_pos_re, x_mask:mask_re, sense_mask:smask_re, pretrain:pretrain_cond, is_train:train_cond, input_keep_prob:keep_prob, word_emb_mat:word_embedding}
if(train_cond==True):
_, _loss, step, _summary = sess.run([train_op, losses, global_step, summary], feed_dict)
summary_writer.add_summary(_summary, step)
temp_loss += _loss
if((j+1)%log_period==0):
print("Steps: {}".format(step), "Loss:{0:.4f}".format(temp_loss/log_period), ", Current Loss: {0:.4f}".format(_loss))
temp_loss = 0
if((j+1)%save_period==0):
saver.save(sess, save_path=save_dir)
else:
_loss, pred, pred_pos = sess.run([total_loss, predictions, predictions_pos], feed_dict)
for i in range(num_gpus):
preds_sense.append(pred[i][smask_re[i]])
true_sense.append(yy_re[i][smask_re[i]])
preds_pos.append(pred_pos[i][mask_re[i]])
true_pos.append(yy_pos_re[i][mask_re[i]])
_losses +=_loss
if(train_cond==False):
sense_preds = []
sense_true = []
pos_preds = []
pos_true = []
for preds in preds_sense:
for ps in preds:
sense_preds.append(ps)
for trues in true_sense:
for ts in trues:
sense_true.append(ts)
for preds in preds_pos:
for ps in preds:
pos_preds.append(ps)
for trues in true_pos:
for ts in trues:
pos_true.append(ts)
return _losses/num_batches, sense_preds, sense_true, pos_preds, pos_true
return _losses/num_batches, step
def eval_score(yy, pred, yy_pos, pred_pos):
f1 = f1_score(yy, pred, average='macro')
accu = accuracy_score(yy, pred)
f1_pos = f1_score(yy_pos, pred_pos, average='macro')
accu_pos = accuracy_score(yy_pos, pred_pos)
return f1*100, accu*100, f1_pos*100, accu_pos*100
# +
x_id_train = train_data['x']
mask_train = train_data['x_mask']
sense_mask_train = train_data['sense_mask']
y_train = train_data['y']
y_pos_train = train_data['pos']
x_id_val = val_data['x']
mask_val = val_data['x_mask']
sense_mask_val = val_data['sense_mask']
y_val = val_data['y']
y_pos_val = val_data['pos']
# +
def testing():
start_time = time.time()
val_loss, val_pred, val_true, val_pred_pos, val_true_pos = model(x_id_val, y_val, y_pos_val, mask_val, sense_mask_val, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = eval_score(val_true, val_pred, val_true_pos, val_pred_pos)
time_taken = time.time() - start_time
print("Val: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(val_loss), ", Time: {0:.1f}".format(time_taken))
return f1_, accu_, f1_pos_, accu_pos_
def training(current_epoch, pre_train_cond):
random = np.random.choice(len(y_train), size=(len(y_train)), replace=False)
x_id_train_tmp = x_id_train[random]
y_train_tmp = y_train[random]
mask_train_tmp = mask_train[random]
sense_mask_train_tmp = sense_mask_train[random]
y_pos_train_tmp = y_pos_train[random]
start_time = time.time()
train_loss, step = model(x_id_train_tmp, y_train_tmp, y_pos_train_tmp, mask_train_tmp, sense_mask_train_tmp, pretrain_cond=pre_train_cond)
time_taken = time.time() - start_time
print("Epoch: {}".format(current_epoch+1),", Step: {}".format(step), ", loss: {0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
saver.save(sess, save_path=save_dir)
print("Model Saved")
return [step, train_loss]
# +
loss_collection = []
val_collection = []
num_epochs = 20
val_period = 2
# Pretraining POS Tags
training(0, True)
training(1, True)
testing()
for i in range(num_epochs):
loss_collection.append(training(i, False))
if((i+1)%val_period==0):
val_collection.append(testing())
# +
loss_collection = []
val_collection = []
num_epochs = 20
val_period = 2
for i in range(num_epochs):
loss_collection.append(training(i, False))
if((i+1)%val_period==0):
val_collection.append(testing())
# +
loss_collection = []
val_collection = []
num_epochs = 20
val_period = 2
for i in range(num_epochs):
loss_collection.append(training(i, False))
if((i+1)%val_period==0):
val_collection.append(testing())
# -
testing()
start_time = time.time()
train_loss, train_pred, train_true, train_pred_pos, train_true_pos = model(x_id_train, y_train, y_pos_train, mask_train, sense_mask_train, train_cond=False)
f1_, accu_, f1_pos_, accu_pos_ = etrain_score(train_true, train_pred, train_true_pos, train_pred_pos)
time_taken = time.time() - start_time
print("train: F1 Score:{0:.2f}".format(f1_), "Accuracy:{0:.2f}".format(accu_), " POS: F1 Score:{0:.2f}".format(f1_pos_), "Accuracy:{0:.2f}".format(accu_pos_), "Loss:{0:.4f}".format(train_loss), ", Time: {0:.1f}".format(time_taken))
saver.restore(sess, save_dir)
| one_million/all-word/Model-aw-lex-local_attention-fast-v2-4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Deep AI
# language: python
# name: dl
# ---
# +
import numpy as np
import random
from genome_file import read_genome, write_genome
# +
# def trans_circle(g, p1, p2, rand_op = None):
# if rand_op == None:
# rand_op = random.random()
# g = g[:-1]
# p1 %= len(g)
# p2 %= len(g)
# if p1 == p2:
# return [g[p1:] + g[:p1]]
# if p2 < p1:
# p1, p2 = p2, p1
# if rand_op < 0.5:
# c0 = g[0:p1] + [-x for x in reversed(g[p1:p2])] + g[p2:]
# return [c0 + c0[0:1]]
# else:
# c1 = g[0:p1] + g[p2:]
# c2 = g[p1:p2]
# return [c1 + c1[0:1], c2 + c2[0:1]]
# +
# def trans_linear(g, p1, p2, rand_op = None):
# if rand_op == None:
# rand_op = random.random()
# size = len(g) + 1
# p1 %= size
# p2 %= size
# if p1 == p2:
# return [g]
# if p2 < p1:
# p1, p2 = p2, p1
# if p1 == 0 and p2 == len(g):
# if rand_op < 0.5:
# return [[-x for x in reversed(g)]]
# else:
# return [g + g[0:1]]
# if rand_op < 0.5:
# return [g[0:p1] + [-x for x in reversed(g[p1:p2])] + g[p2:]]
# else:
# return [g[0:p1] + g[p2:], g[p1:p2] + g[p1:p1+1]]
# def trans(g, p1, p2, rand_op = None):
# if g[0] == g[-1] and len(g) >= 2:
# return trans_circle(g, p1, p2, rand_op)
# return trans_linear(g, p1, p2, rand_op)
# +
# def trans_cross(g1, g2, p1, p2, rand_op = None):
# if rand_op == None:
# rand_op = random.random()
# if g1[0] == g1[-1] and g2[0] == g2[-1] and len(g1) >= 2 and len(g2) >= 2:
# # circle and circle
# if rand_op < 0.5:
# res = g1[:p1] + g2[p2:-1] + g2[:p2] + g1[p1:-1]
# else:
# res = g1[:p1] + [-x for x in reversed(g2[:p2])] + \
# [-x for x in reversed(g2[p2:-1])] + g1[p1:-1]
# return [res + res[0:1]]
# if (g1[0] != g1[-1] or len(g1) == 1) and (g2[0] != g2[-1] or len(g2) == 1):
# # linear and linear
# if rand_op < 0.5:
# r1 = g1[:p1] + g2[p2:]
# r2 = g2[:p2] + g1[p1:]
# else:
# r1 = g1[:p1] + [-x for x in reversed(g2[:p2])]
# r2 = [-x for x in reversed(g2[p2:])] + g1[p1:]
# return [x for x in [r1, r2] if x!=[] ]
# # linear and circle
# if g1[0] == g1[-1] and len(g1) >= 2:
# c, l = g1, g2
# cp, lp = p1, p2
# else:
# c, l = g2, g1
# cp, lp = p2, p1
# if rand_op < 0.5:
# return [l[:lp] + c[cp:-1] + c[:cp] + l[lp:]]
# else:
# return [l[:lp] + [-x for x in reversed(c[:cp])] +
# [-x for x in reversed(c[cp:-1])] + l[lp:]]
# +
# def trans_op(g, p1, p2):
# size_list = []
# for gene in g:
# size = len(gene)
# if gene[0] == gene[-1] and size >=2 :
# size -= 2
# size_list.append(size)
# p1 %= sum(size_list) + len(size_list)
# p2 %= sum(size_list) + len(size_list)
# t1, t2 = 0, 0
# if p1 > p2:
# p2, p1 = p1, p2
# for t1 in range(len(size_list)):
# if p1 <= size_list[t1]:
# break
# p1 -= size_list[t1] + 1
# for t2 in range(len(size_list)):
# if p2 <= size_list[t2]:
# break
# p2 -= size_list[t2] + 1
# # print(t1, p1, t2, p2)
# if t1 == t2:
# res = trans(g[t1], p1, p2)
# return g[:t1] + res + g[t1+1:]
# res = trans_cross(g[t1], g[t2], p1, p2)
# return g[:t1] + g[t1 + 1:t2] + g[t2 + 1:] + res
# +
# def is_circle(g):
# if len(g) >= 2 and g[0] == g[-1]:
# return True
# return False
# +
# def rev_trans(g, p1 = None, p2 = None):
# g0_range = len(g[0]) - (2 if is_circle(g[0]) else 0)
# if p1 == None:
# p1 = random.randint(0, g0_range)
# if p2 == None:
# start = 0 if len(g) == 1 else (g0_range + 1)
# end = g0_range if len(g) == 1 else (start + len(g[1]) - (2 if is_circle(g[1]) else 0))
# p2 = random.randint(start, end)
# return trans_op(g, p1, p2)
# +
# tmp = [list(range(1, 2001))]
# print(tmp)
# for i in range(5000000):
# tmp = trans_op(tmp, random.randint(0,3000), random.randint(0,3000))
# untmp = []
# for x in tmp:
# untmp += x
# if np.unique(untmp).size != 2000:
# print('error', tmp)
# if i%50000 == 0:
# print(i, tmp)
# print(tmp)
# -
genes = []
tmp = [list(range(1, 21))]
genes.append(tmp)
for i in range(50):
tmp = trans_op(tmp, random.randint(0,3000), random.randint(0,3000))
genes.append(tmp)
# + tags=[]
for x in genes[:10]:
print(x)
# -
test = read_genome('data/genome.txt')
# + jupyter={"outputs_hidden": true} tags=[]
for x in test:
print(x)
# -
genes == test
genes[3], test[3]
genes_50 = []
g_50 = [list(range(1, 51))]
genes_50.append(g_50)
for i in range(100):
g_50 = trans_op(g_50, random.randint(0,3000), random.randint(0,3000))
genes_50.append(g_50)
# + jupyter={"outputs_hidden": true} tags=[]
for x in genes_50:
print(x)
# -
write_genome(genes_50, fname='genome_50.txt')
def generate_seq(gene):
if len(gene) == 1:
return [[gene[0]], [-gene[0]]]
res = []
for i in range(len(gene)):
g = gene[i]
rest = gene[0:i] + gene[i+1:]
tmp = generate_seq(rest)
res += [[g] + x for x in tmp] + [[-g] + x for x in tmp]
return res
generate_seq([1])
import math
# for n in range(5, 50):
# t = generate_seq(list(range(1,n+1)))
# num = 2**n*math.factorial(n)
# print(n, len(t) == num)
# + jupyter={"outputs_hidden": true} tags=[]
for n in range(1,50):
print(n, 2**n*math.factorial(n))
# -
t1 = generate_seq(list(range(1,6)))
t2 = generate_seq(list(range(1,6)))
n = 5
count = 2**n*math.factorial(n)
for _ in range(10):
num = []
genes_5 = []
g_5 = [list(range(1, n + 1))]
num.append((1,-1))
genes_5.append(g_5)
for i in range(1000000):
g_5 = trans_op(g_5, random.randint(0,3000), random.randint(0,3000))
# if tmp in genes_5:
# continue
# g_5 = tmp
if len(g_5) == 1 and len(g_5[0]) == n:
if g_5 in genes_5:
continue
else:
genes_5.append(g_5)
l = len(genes_5)
if l > num[-1][0]:
num.append((l, i))
if l == count:
break
print(num[-1])
rt_genes = []
tmp = [list(range(1, 21))]
rt_genes.append(tmp)
for i in range(50):
rt_genes.append(rev_trans(rt_genes[-1]))
print(rt_genes)
# + tags=[]
# for n in range(5, 9):
# print(n)
# count = 2**n*math.factorial(n)
# for _ in range(1):
# num = []
# genes_5 = []
# g_5 = [list(range(1, n + 1))]
# num.append((1,-1))
# genes_5.append(g_5)
# for i in range(100000000):
# g_5 = rev_trans(g_5)
# if len(g_5) == 1 and len(g_5[0]) == n and g_5 not in genes_5:
# genes_5.append(g_5)
# l = len(genes_5)
# if l > num[-1][0]:
# num.append((l, i))
# if l == count:
# break
# print(num[-1])
# +
def revs(size, p1, p2):
if p1 > p2:
p1, p2 = p2, p1
res = np.diag(np.repeat(1,size))
res[:, p1:p2] = np.fliplr(res[:, p1:p2]) * -1
return res
def circle(size, p1, p2):
if p1 > p2:
p1, p2 = p2, p1
res = np.diag(np.repeat(1,size))
return res[:, np.r_[0:p1, p2:size]], res[:,np.r_[p1:p2, p1]]
def trans(size, p1, p2, p3, p4):
p1,p2,p3,p4 = sorted([p1,p2,p3,p4])
res = np.diag(np.repeat(1,size))
return res[:, np.r_[0:p1, p3:p4, p2:p3, p1:p2, p4:size]]
def trans_rev(size, p1, p2, p3, p4):
p1, p2, p3, p4 = sorted([p1, p2, p3, p4])
res = np.diag(np.repeat(1,size))
res = res[:, np.r_[0:p1, p3:p4, p2:p3, p1:p2, p4:size]]
end = p1 + p4 - p3 + p3 - p2
res[:, p1:end] = np.fliplr(res[:, p1:end]) * -1
return res
# -
import torch as th
device = th.device('cuda')
n = 200
tmp = np.array([range(1,n + 1)])
import time
n = 5
count = 2**n*math.factorial(n)
for _ in range(10):
num = []
genes_5 = []
g_5 = [list(range(1, n + 1))]
test = th.tensor(g_5, dtype = th.float, device = device)
num.append((1,-1))
genes_5.append(g_5)
for i in range(10000000):
p1,p2,p3,p4 = [random.randint(0,n) for _ in range(4)]
op_list = []
for _ in range(200):
op_type = random.randint(0,2)
if op_type == 0:
trans_mat = trans_rev(n, p1, p2, p3, p4)
elif op_type == 1:
trans_mat = trans(n, p1, p2, p3, p4)
else:
trans_mat = revs(n, p1, p2)
op_list.append(trans_mat)
mat_tensor = th.tensor(op_list, dtype = th.float, device = device)
test = th.matmul(test, mat_tensor)
g_res = test.cpu().numpy().astype(dtype=np.int32).tolist()
for g_5 in g_res:
if g_5 not in genes_5:
genes_5.append(g_5)
l = len(genes_5)
if l > num[-1][0]:
num.append((l, i))
if l >= count:
break
print(time.ctime(), num[-1])
t1 = generate_seq(list(range(1,6)))
from dcj_comp import dcj_dist
from genome_file import encodeAdj
encodeAdj(t1[1])
for i in range(len(t1)):
d = dcj_dist(encodeAdj(t1[0]), encodeAdj(t1[i]))[-1]
if d != 0 or t1[0] == t1[i]:
continue
print(t1[0],'\t', t1[i])
for i in range(len(t1)):
d = dcj_dist(encodeAdj(t1[0]), encodeAdj(t1[i]))[-1]
print(t1[0],'\t', t1[i], '\t', d)
| genome_file_op.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: hki_ml
# kernelspec:
# display_name: HKI ML
# language: python
# name: hki_ml
# ---
# + gather={"logged": 1608121236254}
from pathlib import Path
import shutil
from fastai.vision.all import *
from training import get_document_tiles, GetLabelFromX, SyntheticImageBlock
# + gather={"logged": 1608118254963}
from fastai.torch_core import TensorCategory
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1608121244176}
data_root = Path('../../../data')
tmp_root = Path('/tmp/hki_data2')
if not tmp_root.exists():
print('Copying data to instance SSD')
shutil.copytree(data_root, tmp_root)
signature_root = tmp_root / 'signatures'
document_root = tmp_root / 'documents'
# + gather={"logged": 1608121469384}
db = DataBlock(blocks=[SyntheticImageBlock(svg_directory=signature_root, positive_prob=0.3), CategoryBlock(vocab=[0,1])],
get_items=get_document_tiles,
get_x=ItemGetter(0),
get_y=ItemGetter(1),
item_tfms=[GetLabelFromX()])
dls = db.dataloaders(document_root, bs=80)
dls.show_batch()
# -
learner = cnn_learner(dls, resnet18, metrics=[error_rate, Recall(), Precision(), F1Score()])
learner.lr_find()
cbs = [ShowGraphCallback(), SaveModelCallback(fname='resnet18_based_model')]
learner.fine_tune(30, freeze_epochs=3, base_lr=2e-2, cbs=cbs)
| notebooks/Train tiling model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Constraint Satisfaction Problems Lab
#
# ## Introduction
# Constraint Satisfaction is a technique for solving problems by expressing limits on the values of each variable in the solution with mathematical constraints. We've used constraints before -- constraints in the Sudoku project are enforced implicitly by filtering the legal values for each box, and the planning project represents constraints as arcs connecting nodes in the planning graph -- but in this lab exercise we will use a symbolic math library to explicitly construct binary constraints and then use Backtracking to solve the N-queens problem (which is a generalization [8-queens problem](https://en.wikipedia.org/wiki/Eight_queens_puzzle)). Using symbolic constraints should make it easier to visualize and reason about the constraints (especially for debugging), but comes with a performance penalty.
#
# 
#
# Briefly, the 8-queens problem asks you to place 8 queens on a standard 8x8 chessboard such that none of the queens are in "check" (i.e., no two queens occupy the same row, column, or diagonal). The N-queens problem generalizes the puzzle to to any size square board.
#
# ## I. Lab Overview
# Students should read through the code and the wikipedia page (or other resources) to understand the N-queens problem, then:
#
# 0. Complete the warmup exercises in the [Sympy_Intro notebook](Sympy_Intro.ipynb) to become familiar with they sympy library and symbolic representation for constraints
# 0. Implement the [NQueensCSP class](#II.-Representing-the-N-Queens-Problem) to develop an efficient encoding of the N-queens problem and explicitly generate the constraints bounding the solution
# 0. Write the [search functions](#III.-Backtracking-Search) for recursive backtracking, and use them to solve the N-queens problem
# 0. (Optional) Conduct [additional experiments](#IV.-Experiments-%28Optional%29) with CSPs and various modifications to the search order (minimum remaining values, least constraining value, etc.)
# +
import copy
import timeit
import matplotlib as mpl
import matplotlib.pyplot as plt
from util import constraint, displayBoard
from sympy import *
from IPython.display import display
init_printing()
# %matplotlib inline
# -
# ## II. Representing the N-Queens Problem
# There are many acceptable ways to represent the N-queens problem, but one convenient way is to recognize that one of the constraints (either the row or column constraint) can be enforced implicitly by the encoding. If we represent a solution as an array with N elements, then each position in the array can represent a column of the board, and the value at each position can represent which row the queen is placed on.
#
# In this encoding, we only need a constraint to make sure that no two queens occupy the same row, and one to make sure that no two queens occupy the same diagonal.
#
# ### Define Symbolic Expressions for the Problem Constraints
# Before implementing the board class, we need to construct the symbolic constraints that will be used in the CSP. Declare any symbolic terms required, and then declare two generic constraint generators:
# - `diffRow` - generate constraints that return True if the two arguments do not match
# - `diffDiag` - generate constraints that return True if two arguments are not on the same diagonal (Hint: you can easily test whether queens in two columns are on the same diagonal by testing if the difference in the number of rows and the number of columns match)
#
# Both generators should produce binary constraints (i.e., each should have two free symbols) once they're bound to specific variables in the CSP. For example, Eq((a + b), (b + c)) is not a binary constraint, but Eq((a + b), (b + c)).subs(b, 1) _is_ a binary constraint because one of the terms has been bound to a constant, so there are only two free variables remaining.
# +
# Declare any required symbolic variables
r1, r2 = symbols(['r1', 'r2'])
c1, c2 = symbols(['c1', 'c2'])
# Define diffRow and diffDiag constraints
diffRow = constraint('DiffRow', ~Eq(r1, r2))
diffDiag = constraint('DiffDiag', ~Eq(abs(r1 - r2), abs(c1 - c2)))
# +
# Test diffRow and diffDiag
_x = symbols('x:3')
# generate a diffRow instance for testing
diffRow_test = diffRow.subs({r1: _x[0], r2: _x[1]})
assert(len(diffRow_test.free_symbols) == 2)
assert(diffRow_test.subs({_x[0]: 0, _x[1]: 1}) == True)
assert(diffRow_test.subs({_x[0]: 0, _x[1]: 0}) == False)
assert(diffRow_test.subs({_x[0]: 0}) != False) # partial assignment is not false
print("Passed all diffRow tests.")
# generate a diffDiag instance for testing
diffDiag_test = diffDiag.subs({r1: _x[0], r2: _x[2], c1:0, c2:2})
assert(len(diffDiag_test.free_symbols) == 2)
assert(diffDiag_test.subs({_x[0]: 0, _x[2]: 2}) == False)
assert(diffDiag_test.subs({_x[0]: 0, _x[2]: 0}) == True)
assert(diffDiag_test.subs({_x[0]: 0}) != False) # partial assignment is not false
print("Passed all diffDiag tests.")
# -
# ### The N-Queens CSP Class
# Implement the CSP class as described above, with constraints to make sure each queen is on a different row and different diagonal than every other queen, and a variable for each column defining the row that containing a queen in that column.
class NQueensCSP:
"""CSP representation of the N-queens problem
Parameters
----------
N : Integer
The side length of a square chess board to use for the problem, and
the number of queens that must be placed on the board
"""
def __init__(self, N):
_vars = symbols(f'A0:{N}')
_domain = set(range(N))
self.size = N
self.variables = _vars
self.domains = {v: _domain for v in _vars}
self._constraints = {x: set() for x in _vars}
# add constraints - for each pair of variables xi and xj, create
# a diffRow(xi, xj) and a diffDiag(xi, xj) instance, and add them
# to the self._constraints dictionary keyed to both xi and xj;
# (i.e., add them to both self._constraints[xi] and self._constraints[xj])
for i in range(N):
for j in range(i + 1, N):
diffRowConstraint = diffRow.subs({r1: _vars[i], r2: _vars[j]})
diffDiagConstraint = diffDiag.subs({r1: _vars[i], r2: _vars[j], c1:i, c2:j})
self._constraints[_vars[i]].add(diffRowConstraint)
self._constraints[_vars[i]].add(diffDiagConstraint)
self._constraints[_vars[j]].add(diffRowConstraint)
self._constraints[_vars[j]].add(diffDiagConstraint)
@property
def constraints(self):
"""Read-only list of constraints -- cannot be used for evaluation """
constraints = set()
for _cons in self._constraints.values():
constraints |= _cons
return list(constraints)
def is_complete(self, assignment):
"""An assignment is complete if it is consistent, and all constraints
are satisfied.
Hint: Backtracking search checks consistency of each assignment, so checking
for completeness can be done very efficiently
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
An assignment of values to variables that have previously been checked
for consistency with the CSP constraints
"""
return len(assignment) == self.size
def is_consistent(self, var, value, assignment):
"""Check consistency of a proposed variable assignment
self._constraints[x] returns a set of constraints that involve variable `x`.
An assignment is consistent unless the assignment it causes a constraint to
return False (partial assignments are always consistent).
Parameters
----------
var : sympy.Symbol
One of the symbolic variables in the CSP
value : Numeric
A valid value (i.e., in the domain of) the variable `var` for assignment
assignment : dict(sympy.Symbol: Integer)
A dictionary mapping CSP variables to row assignment of each queen
"""
assignment[var] = value
constraints = list(self._constraints[var])
for constraint in constraints:
for arg in constraint.args:
if arg in assignment.keys():
constraint = constraint.subs({arg: assignment[arg]})
if not constraint:
return False
return True
def inference(self, var, value):
"""Perform logical inference based on proposed variable assignment
Returns an empty dictionary by default; function can be overridden to
check arc-, path-, or k-consistency; returning None signals "failure".
Parameters
----------
var : sympy.Symbol
One of the symbolic variables in the CSP
value : Integer
A valid value (i.e., in the domain of) the variable `var` for assignment
Returns
-------
dict(sympy.Symbol: Integer) or None
A partial set of values mapped to variables in the CSP based on inferred
constraints from previous mappings, or None to indicate failure
"""
# TODO (Optional): Implement this function based on AIMA discussion
return {}
def show(self, assignment):
"""Display a chessboard with queens drawn in the locations specified by an
assignment
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
A dictionary mapping CSP variables to row assignment of each queen
"""
locations = [(i, assignment[j]) for i, j in enumerate(self.variables)
if assignment.get(j, None) is not None]
displayBoard(locations, self.size)
# ## III. Backtracking Search
# Implement the [backtracking search](https://github.com/aimacode/aima-pseudocode/blob/master/md/Backtracking-Search.md) algorithm (required) and helper functions (optional) from the AIMA text.
# +
def select(csp, assignment):
"""Choose an unassigned variable in a constraint satisfaction problem """
# TODO (Optional): Implement a more sophisticated selection routine from AIMA
for var in csp.variables:
if var not in assignment:
return var
return None
def order_values(var, assignment, csp):
"""Select the order of the values in the domain of a variable for checking during search;
the default is lexicographically.
"""
# TODO (Optional): Implement a more sophisticated search ordering routine from AIMA
return csp.domains[var]
def backtracking_search(csp):
"""Helper function used to initiate backtracking search """
return backtrack({}, csp)
def backtrack(assignment, csp):
"""Perform backtracking search for a valid assignment to a CSP
Parameters
----------
assignment : dict(sympy.Symbol: Integer)
An partial set of values mapped to variables in the CSP
csp : CSP
A problem encoded as a CSP. Interface should include csp.variables, csp.domains,
csp.inference(), csp.is_consistent(), and csp.is_complete().
Returns
-------
dict(sympy.Symbol: Integer) or None
A partial set of values mapped to variables in the CSP, or None to indicate failure
"""
if csp.is_complete(assignment):
return assignment
var = select(csp, assignment)
for value in order_values(var, assignment, csp):
if csp.is_consistent(var, value, assignment):
assignment[var] = value
assignment_copy = copy.deepcopy(assignment)
result = backtrack(assignment_copy, csp)
if result is not None:
return result
# -
# ### Solve the CSP
# With backtracking implemented, now you can use it to solve instances of the problem. We've started with the classical 8-queen version, but you can try other sizes as well. Boards larger than 12x12 may take some time to solve because sympy is slow in the way its being used here, and because the selection and value ordering methods haven't been implemented. See if you can implement any of the techniques in the AIMA text to speed up the solver!
# +
start = timeit.default_timer()
num_queens = 12
csp = NQueensCSP(num_queens)
var = csp.variables[0]
print("CSP problems have variables, each variable has a domain, and the problem has a list of constraints.")
print("Showing the variables for the N-Queens CSP:")
display(csp.variables)
print("Showing domain for {}:".format(var))
display(csp.domains[var])
print("And showing the constraints for {}:".format(var))
display(csp._constraints[var])
print("Solving N-Queens CSP...")
assn = backtracking_search(csp)
if assn is not None:
csp.show(assn)
print("Solution found:\n{!s}".format(assn))
else:
print("No solution found.")
end = timeit.default_timer() - start
print(f'N-Queens size {num_queens} solved in {end} seconds')
# -
# ## IV. Experiments (Optional)
# For each optional experiment, discuss the answers to these questions on the forum: Do you expect this change to be more efficient, less efficient, or the same? Why or why not? Is your prediction correct? What metric did you compare (e.g., time, space, nodes visited, etc.)?
#
# - Implement a _bad_ N-queens solver: generate & test candidate solutions one at a time until a valid solution is found. For example, represent the board as an array with $N^2$ elements, and let each element be True if there is a queen in that box, and False if it is empty. Use an $N^2$-bit counter to generate solutions, then write a function to check if each solution is valid. Notice that this solution doesn't require any of the techniques we've applied to other problems -- there is no DFS or backtracking, nor constraint propagation, or even explicitly defined variables.
# - Use more complex constraints -- i.e., generalize the binary constraint RowDiff to an N-ary constraint AllRowsDiff, etc., -- and solve the problem again.
# - Rewrite the CSP class to use forward checking to restrict the domain of each variable as new values are assigned.
# - The sympy library isn't very fast, so this version of the CSP doesn't work well on boards bigger than about 12x12. Write a new representation of the problem class that uses constraint functions (like the Sudoku project) to implicitly track constraint satisfaction through the restricted domain of each variable. How much larger can you solve?
# - Create your own CSP!
| 1_foundations/5_nqueens/constraint_satisfaction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="zzLyWM6YDWXq" outputId="859f654e-60c9-4b39-9f9a-f2de684d7a35"
# %tensorflow_version 2.x
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
# + id="EtvjjZCrAvKQ"
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
from random import choice
from sklearn.metrics import classification_report
import seaborn as sns
import numpy as np
from scipy.signal import cwt
from scipy import signal
from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, Conv1D, MaxPooling1D
from tensorflow.keras.models import Model
import random
from tensorflow.keras.utils import to_categorical
from imblearn.under_sampling import RandomUnderSampler
# + colab={"base_uri": "https://localhost:8080/"} id="EwSw-HAVGLSV" outputId="51c55a80-5433-46ef-82c7-9ce5b5b93373"
from google.colab import drive
drive.mount('/content/drive/')
# + id="wTiduEeBCKwO"
mit_test = pd.read_csv("drive/MyDrive/Datascientest/Data/mitbih_test.csv", header = None)
mit_train = pd.read_csv("drive/MyDrive/Datascientest/Data/mitbih_train.csv", header = None)
# + id="Xiq0IGstW6T4"
X_train_mit = mit_train.iloc[:,:-1]
y_train_mit = mit_train.iloc[:,-1]
X_test_mit = mit_test.iloc[:,:-1]
y_test_mit = mit_test.iloc[:,-1]
# + id="pn-899gRW63-"
ru = RandomUnderSampler(replacement = True)
X_train_mit, y_train_mit = ru.fit_resample(X_train_mit, y_train_mit)
# + id="hco1yQQ7G106" colab={"base_uri": "https://localhost:8080/"} outputId="2f0be085-8db8-4737-f236-f19aee7d1ef4"
from datetime import datetime
now = datetime.now()
X_train = []
y_train = []
for i in range(X_train_mit.shape[0]):
img = cwt(data = X_train_mit.iloc[i,:], wavelet = signal.ricker, widths = np.arange(1, 30))
X_train.append(np.repeat(img[..., np.newaxis], 3, -1))
y_train.append(y_train_mit.iloc[i])
X_train = np.array(X_train)
print(datetime.now() - now)
# + id="K10iKtlfHgxC" colab={"base_uri": "https://localhost:8080/", "height": 138} outputId="c1efda36-fdc7-441f-83cf-80cb30a9c081"
plt.imshow(X_train[156])
# + id="zUC6oQTHmW6Y"
plt.imshow(X_train[156])
# + id="AueeBQlUWbNf"
model = tf.keras.applications.densenet.DenseNet121(
include_top=True, weights='imagenet', input_tensor=Input(shape=(29, 187, 3)),
pooling=None, classes=1000
)
# + colab={"base_uri": "https://localhost:8080/"} id="8GnDHPvyYwj0" outputId="f457d883-5fd3-4733-e961-a6718364c3d1"
model.summary()
# + id="mkOOabe5Yyzp"
for layer in model.layers :
layer.trainable = False
# + id="-QkAxEiFY0hc"
output = Dense(units = 5, activation="softmax", name="final" )(model.layers[-2].output)
model_2 = Model(inputs = model.input, outputs = output)
model_2.compile(loss = "categorical_crossentropy", optimizer = "adam", metrics = ["accuracy"])
# + colab={"base_uri": "https://localhost:8080/"} id="oOUwDDuQZoiS" outputId="a0c1c4fb-67f6-4532-e79a-a394a88821a8"
model_2.fit(X_train, to_categorical(y_train), epochs = 30, batch_size = 32, validation_split = 0.2)
# + id="ks2LJ_8UasyN" colab={"base_uri": "https://localhost:8080/"} outputId="d0e0931e-a2fe-4380-b18e-8cf50e089445"
now = datetime.now()
X_test = []
y_test = []
for i in range(10000):
rand = random.randint(0,mit_test.shape[0])
img = cwt(data = X_test_mit.iloc[rand,:], wavelet = signal.ricker, widths = np.arange(1, 30))
X_test.append(np.repeat(img[..., np.newaxis], 3, -1))
y_test.append(y_test_mit.iloc[rand])
X_test = np.array(X_test)
print(datetime.now() - now)
# + id="cD7hlwXbcSxK"
prediction = model_2.predict(X_test)
# + id="QbyVuCZ3c49y" colab={"base_uri": "https://localhost:8080/", "height": 570} outputId="eafcb0ec-cff7-4d99-ff1c-7758846795c1"
from sklearn.metrics import classification_report
display(pd.crosstab(np.array(y_test), prediction.argmax(1)))
print(classification_report(np.array(y_test), prediction.argmax(1)))
| notebooks/5 - Transfer_Learning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] Collapsed="false" colab_type="text" id="1h9FQJHvG8Vv"
# # Exploring The Dimensions Search Language (DSL) - Deep Dive
#
# This tutorial provides a detailed walkthrough of the most important features of the [Dimensions Search Language](https://docs.dimensions.ai/dsl/).
#
# This tutorial is based on the [Query Syntax](https://docs.dimensions.ai/dsl/language.html) section of the official documentation. So, it can be used as an interactive version of the documentation, as it allows to try out the various DSL queries presented there.
#
# ## What is the Dimensions Search Language?
#
# The DSL aims to capture the type of interaction with Dimensions data
# that users are accustomed to performing graphically via the [web
# application](https://app.dimensions.ai/), and enable web app developers, power users, and others to
# carry out such interactions by writing query statements in a syntax
# loosely inspired by SQL but particularly suited to our specific domain
# and data organization.
#
# **Note:** this notebook uses the Python programming language, however all the **DSL queries are not Python-specific** and can in fact be reused with any other API client.
#
#
# -
import datetime
print("==\nCHANGELOG\nThis notebook was last run on %s\n==" % datetime.date.today().strftime('%b %d, %Y'))
# + [markdown] Collapsed="false" colab_type="text" id="hMaQlB7DG8Vw"
# ## Prerequisites
#
# This notebook assumes you have installed the [Dimcli](https://pypi.org/project/dimcli/) library and are familiar with the ['Getting Started' tutorial](https://api-lab.dimensions.ai/cookbooks/1-getting-started/1-Using-the-Dimcli-library-to-query-the-API.html).
# + Collapsed="false"
# !pip install dimcli --quiet
import dimcli
from dimcli.utils import *
import json
import sys
import pandas as pd
#
print("==\nLogging in..")
# https://digital-science.github.io/dimcli/getting-started.html#authentication
ENDPOINT = "https://app.dimensions.ai"
if 'google.colab' in sys.modules:
import getpass
KEY = getpass.getpass(prompt='API Key: ')
dimcli.login(key=KEY, endpoint=ENDPOINT)
else:
KEY = ""
dimcli.login(key=KEY, endpoint=ENDPOINT)
dsl = dimcli.Dsl()
# + [markdown] Collapsed="false"
#
# ## Sections Index
#
# 1. Basic query structure
# 2. Full-text searching
# 3. Field searching
# 4. Searching for researchers
# 5. Returning results
# 6. Aggregations
# + [markdown] Collapsed="false"
# ## 1. Basic query structure
#
# DSL queries consist of two required components: a `search` phrase that
# indicates the scientific records to be searched, and one or
# more `return` phrases which specify the contents and structure of the
# desired results.
#
# The simplest valid DSL query is of the form `search <source>|return <result>`:
# + Collapsed="false"
# %%dsldf
search grants return grants limit 5
# + [markdown] Collapsed="false"
# ### `search source`
#
# A query must begin with the word `search` followed by a `source` name, i.e. the name of a type of scientific `record`, such as `grants` or `publications`.
#
# **What are the sources available?** See the [data sources](https://docs.dimensions.ai/dsl/data-sources.html) section of the documentation.
#
# Alternatively, we can use the 'schema' API ([describe](https://docs.dimensions.ai/dsl/data-sources.html#metadata-api)) to return this information programmatically:
# + Collapsed="false"
dsl.query("describe schema")
# + [markdown] Collapsed="false"
# A more useful query might also make use of the optional `for` and
# `where` phrases to limit the set of records returned.
# + Collapsed="false"
# %%dsldf
search grants for "lung cancer"
where active_year=2000
return grants limit 5
# + [markdown] Collapsed="false"
# ### `return` result (source or facet)
#
# The most basic `return` phrase consists of the keyword `return` followed
# by the name of a `record` or `facet` to be returned.
#
# This must be the
# name of the `source` used in the `search` phrase, or the name of a
# `facet` of that source.
# + Collapsed="false"
# %%dsldf
search grants for "laryngectomy"
return grants limit 5
# + [markdown] Collapsed="false"
# Eg let's see what are the *facets* available for the *grants* source:
# + Collapsed="false"
fields = dsl.query("describe schema")['sources']['grants']['fields']
[x for x in fields if fields[x]['is_facet']]
# + [markdown] Collapsed="false" toc-hr-collapsed=true toc-nb-collapsed=true
# ## 2. Full-text Searching
#
# Full-text search or keyword search finds all instances of a term
# (keyword) in a document, or group of documents.
#
# Full text search works
# by using search indexes, which can be targeting specific sections of a
# document e.g. its $abstract$, $authors$, $full text$ etc...
# + Collapsed="false"
# %%dsldf
search publications
in full_data for "moon landing"
return publications limit 5
# + [markdown] Collapsed="true"
# ### 2.1 `in [search index]`
#
# This optional phrase consists of the particle `in` followed by a term indicating a `search index`, specifying for example whether the search
# is limited to full text, title and abstract only, or title only.
# + Collapsed="false"
# %%dsldf
search grants
in title_abstract_only for "something"
return grants limit 5
# + [markdown] Collapsed="false"
# Eg let's see what are the *search fields* available for the *grants* source:
# + Collapsed="false"
dsl.query("describe schema")['sources']['grants']['search_fields']
# + Collapsed="false"
# %%dsldf
search grants
in full_data for "graphene AND computer AND iron"
return grants limit 5
# + [markdown] Collapsed="false"
# Special search indexes for persons names permit to perform full text
# searches on publications `authors` or grants `investigators`. Please see the
# *Researchers Search* section below for more information
# on how searches work in this case.
# + Collapsed="false"
# %dsldf search publications in authors for "\"<NAME>\"" return publications limit 5
# + [markdown] Collapsed="false"
# ### 2.2 `for "search term"`
#
# This optional phrase consists of the keyword `for` followed by a
# `search term` `string`, enclosed in double quotes (`"`).
# + [markdown] Collapsed="false"
# Strings in double quotes can contain nested quotes escaped by a
# backslash `\`. This will ensure that the string in nested double quotes
# is searched for as if it was a single phrase, not multiple words.
#
# An example of a phrase: `"\"Machine Learning\""` : results must contain
# `Machine Learning` as a phrase.
# + Collapsed="false"
# %dsldf search publications for "\"Machine Learning\"" return publications limit 5
# + [markdown] Collapsed="false"
# Example of multiple keywords: `"Machine Learning"` : this searches for
# keywords independently.
# + Collapsed="false"
# %dsldf search publications for "Machine Learning" return publications limit 5
# + [markdown] Collapsed="false"
# Note: Special characters, such as any of `^ " : ~ \ [ ] { } ( ) ! | & +` must be escaped by a backslash `\`. Also, please note escaping rules in
# [Python](http://python-reference.readthedocs.io/en/latest/docs/str/escapes.html) (or other languages). For example, when writing a query with escaped quotes, such as `search publications for "\"phrase 1\" AND \"phrase 2\""`, in Python, it is necessary to escape the backslashes as well, so it
# would look like: `'search publications for "\\"phrase 1\\" AND \\"phrase 2\\""'`.
#
# See the [official docs](https://docs.dimensions.ai/dsl/language.html#for-search-term) for more details.
# + [markdown] Collapsed="false"
# ### 2.3 Boolean Operators
#
# Search term can consist of multiple keywords or phrases connected using
# boolean logic operators, e.g. `AND`, `OR` and `NOT`.
# + Collapsed="false"
# %dsldf search publications for "(dose AND concentration)" return publications limit 5
# + [markdown] Collapsed="false"
# When specifying Boolean operators with keywords such as `AND`, `OR` and
# `NOT`, the keywords must appear in all uppercase.
#
# The operators available are shown in the table below.
# .
#
# | Boolean Operator | Alternative Symbol | Description |
# |------------------|--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
# | `AND` | `&&` | Requires both terms on either side of the Boolean operator to be present for a match. |
# | `NOT` | `!` | Requires that the following term not be present. |
# | `OR` | `||` | Requires that either term (or both terms) be present for a match. |
# | | `+` | Requires that the following term be present. |
# | | `-` | Prohibits the following term (that is, matches on fields or documents that do not include that term). The `-` operator is functionally similar to the Boolean operator `!`. |
# + Collapsed="false"
# %dsldf search publications for "(dose OR concentration) AND (-malaria +africa)" return publications limit 5
# + [markdown] Collapsed="false"
# The combination of keywords and boolean operators allow to construct rather sophisticated queries. For example, here's a real-world query used to extract publications related to COVID-19.
# + Collapsed="false"
q_inner = """ "2019-nCoV" OR "COVID-19" OR "SARS-CoV-2" OR "HCoV-2019" OR "hcov" OR "NCOVID-19" OR
"severe acute respiratory syndrome coronavirus 2" OR "severe acute respiratory syndrome corona virus 2"
OR (("coronavirus" OR "corona virus") AND (Wuhan OR China OR novel)) """
# tip: dsl_escape is a dimcli utility function for escaping special characters
q_outer = f"""search publications in full_data for "{dsl_escape(q_inner)}" return publications"""
print(q_outer)
dsl.query(q_outer)
# + [markdown] Collapsed="false"
# ### 2.4 Wildcard Searches
#
# The DSL supports single and multiple character wildcard searches within
# single terms. Wildcard characters can be applied to single terms, but
# not to search phrases.
# + Collapsed="false"
# %dsldf search publications in title_only for "ital? malaria" return publications limit 5
# + Collapsed="false"
# %dsldf search publications in title_only for "it* malaria" return publications limit 5
# + [markdown] Collapsed="false"
# | Wildcard Search Type | Special Character | Example |
# |------------------------------------------------------------------|-------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
# | Single character - matches a single character | `?` | The search string `te?t` would match both `test` and `text`. |
# | Multiple characters - matches zero or more sequential characters | `*` | The wildcard search: `tes*` would match `test`, `testing`, and `tester`. You can also use wildcard characters in the middle of a term. For example: `te*t` would match `test` and `text`. `*est` would match `pest` and `test`. |
# + [markdown] Collapsed="false"
# ### 2.5 Proximity Searches
#
# A proximity search looks for terms that are within a specific distance
# from one another.
#
# To perform a proximity search, add the tilde character `~` and a numeric
# value to the end of a search phrase. For example, to search for a
# `formal` and `model` within 10 words of each other in a document, use
# the search:
# + Collapsed="false"
# %dsldf search publications for "\"formal model\"~10" return publications limit 5
# + Collapsed="false"
# %dsldf search publications for "\"digital humanities\"~5 +ontology" return publications limit 5
# + [markdown] Collapsed="false"
# The distance referred to here is the number of term movements needed to match the specified phrase.
# In the example above, if `formal` and `model` were 10 spaces apart in a
# field, but `formal` appeared before `model`, more than 10 term movements
# would be required to move the terms together and position `formal` to
# the right of `model` with a space in between.
# + [markdown] Collapsed="false" toc-hr-collapsed=true toc-nb-collapsed=true
# ## 3. Field Searching
#
# Field searching allows to use a specific `field` of a `source` as a
# query filter. For example, this can be a Literal field such as the $type$ of a
# publication, its $date$, $mesh terms$, etc.. Or it can be an
# Entity field, such as the $journal title$ for a
# publication, the $country name$ of its author affiliations, etc..
#
# **What are the fields available for each source?** See the [data sources](https://docs.dimensions.ai/dsl/data-sources.html) section of the documentation.
#
# Alternatively, we can use the 'schema' API ([describe](https://docs.dimensions.ai/dsl/data-sources.html#metadata-api)) to return this information programmatically:
# + Collapsed="false"
# %dsldocs publications
# + [markdown] Collapsed="false"
# ### 3.1 `where`
#
# This optional phrase consists of the keyword `where` followed by a
# `filters` phrase consisting of DSL filter expressions, as described
# below.
# + Collapsed="false"
# %dsldf search publications where type = "book" return publications limit 5
# + [markdown] Collapsed="false"
# If a `for` phrase is also used in a filtered query, the
# system will first apply the filters, and then search the resulting
# restricted set of documents for the `search term`.
# + Collapsed="false"
# %dsldf search publications for "malaria" where type = "book" return publications limit 5
# + [markdown] Collapsed="false"
# ### 3.2 `in`
#
# For convenience, the DSL also supports shorthand notation for filters
# where a particular field should be restricted to a specified range or
# list of values (although the same logic may be expressed using complex
# filters as shown below).
#
# Syntax: a **range filter** consists of the `field` name, the keyword `in`, and a
# range of values enclosed in square brackets (`[]`), where the range
# consists of a `low` value, colon `:`, and a `high` value.
# + Collapsed="false"
# %%dsldf
search grants
for "malaria"
where start_year in [ 2010 : 2015 ]
return grants limit 5
# + [markdown] Collapsed="false"
# Syntax: a **list filter** consists of the `field` name, the keyword `in`, and a list
# of one or more `value` s enclosed in square brackets (`[]`), where
# values are separated by commas (`,`):
# + Collapsed="false"
# %%dsldf
search grants
for "malaria"
where research_org_names in [ "UC Berkeley", "UC Davis", "UCLA" ]
return grants limit 5
# + [markdown] Collapsed="false"
# ### 3.3 `count` - filter function
#
# The filter function `count` is supported on some fields in publications (e.g. `researchers` and `research_orgs`).
#
# Use of this filter is shown on the example below:
# + Collapsed="false"
# %%dsldf
search publications
for "malaria"
where count(research_orgs) > 5
return research_orgs limit 5
# + [markdown] Collapsed="false"
# Number of publications with more than 50 researcher.
# + Collapsed="false"
# %%dsldf
search publications
for "malaria"
where count(researchers) > 50
return publications limit 5
# + [markdown] Collapsed="false"
# Number of publications with more than one researcher.
# + Collapsed="false"
# %%dsldf
search publications
where count(researchers) > 1
return funders limit 5
# + [markdown] Collapsed="false"
# International collaborations: number of publications with more than one author and affiliations located in more than one country.
# + Collapsed="false"
# %%dsldf
search publications
where count(researchers) > 1
and count(research_org_countries) > 1
return funders limit 5
# + [markdown] Collapsed="false"
# Domestic collaborations: number of publications with more than one author and more than one affiliation located in exactly one country.
# + Collapsed="false"
# %%dsldf
search publications
where count(researchers) > 1
and count(research_org_countries) = 1
return funders limit 5
# + [markdown] Collapsed="false"
# ### 3.4 Filter Operators
#
# A simple filter expression consists of a `field` name, an in-/equality
# operator `op`, and the desired field `value`.
#
# The `value` must be a
# `string` enclosed in double quotes (`"`) or an integer (e.g. `1234`).
#
# The available operators are:
#
# | `op` | meaning |
# |----------------|------------------------------------------------------------------------------------------|
# | `=` | *is* (or *contains* if the given `field` is multi-value) |
# | `!=` | *is not* |
# | `>` | *is greater than* |
# | `<` | *is less than* |
# | `>=` | *is greater than or equal to* |
# | `<=` | *is less than or equal to* |
# | `~` | *partially matches* (see partial-string-matching below) |
# | `is empty` | *is empty* (see emptiness-filters below) |
# | `is not empty` | *is not empty* (see emptiness-filters below) |
# + [markdown] Collapsed="false"
# A couple of examples
# + Collapsed="false"
# %dsldf search datasets where year > 2010 and year < 2012 return datasets limit 5
# + Collapsed="false"
# %dsldf search patents where assignees != "grid.410484.d" return patents limit 5
# + [markdown] Collapsed="false"
# ### 3.5 Partial string matching with `~`
#
# The `~` operator indicates that the given `field` need only partially,
# instead of exactly, match the given `string` (the `value` used with this
# operator must be a `string`, not an integer).
#
# For example, the filter `where research_orgs.name~"Saarland Uni"` would
# match both the organization named "Saarland University" and the one
# named "Universitรคtsklinikum des Saarlandes", and any other organization
# whose name includes the terms "Saarland" and "Uni" (the order is
# unimportant).
# + Collapsed="false"
# %%dsldf
search patents
where assignee_names ~ "IBM"
return assignees limit 5
# + [markdown] Collapsed="false"
# ### 3.6 Emptiness filters `is empty`
#
# To filter records which contain specific field or to filter those which
# contain an empty field, it is possible to use something like
# `where research_orgs is not empty` or `where issn is empty`.
# + Collapsed="false"
# %%dsldf
search publications
for "iron graphene"
where researchers is empty
and research_orgs is not empty
return publications[id+title+researchers+research_orgs+type] limit 5
# + [markdown] Collapsed="false"
# ## 4. Searching for Researchers
#
# The DSL offers different mechanisms for searching for researchers (e.g.
# publication authors, grant investigators), each of them presenting
# specific advantages.
# + [markdown] Collapsed="false"
# ### 4.1 Exact name searches
#
# Special full-text indices allows to look up a researcher's name and
# surname **exactly as they appear in the source documents** they derive from.
#
# This approach has a broad scope, as it allows to search the full
# collection of Dimensions documents irrespectively of whether a
# researcher was succesfully disambiguated (and hence given a Dimensions
# ID). On the other hand, this approach will only match names as they
# appear in the source document, so different spellings or initials are
# not necessarily returned via a single query.
#
# ```
# search in [authors|investigators|inventors]
# ```
#
# It is possible to look up publications authors using a specific
# `search index` called `authors`.
#
# This method expects case insensitive
# phrases, in format $"<first name> <last name>"$ or reverse order. Note
# that strings in double quotes that contain nested quotes must always be
# escaped by a backslash `\`.
# + Collapsed="false"
# %dsldf search publications in authors for "\"<NAME>\"" return publications limit 5
# + [markdown] Collapsed="false"
# Instead of first name, initials can also be used. These are examples of
# valid research search phrases:
#
# - `\"<NAME>.\"`
# - `\"<NAME>\"`
# - `\"CS Peirce\"`
# - `\"Peirce CS\"`
# - `\"C S Peirce\"`
# - `\"<NAME>\"`
# - `\"C Peirce\"`
# - `\"Peirce C\"`
# - `\"<NAME>\"`
# - `\"<NAME>\"`
#
# **Warning**: In order to produce valid results an author or an investigator search
# query must contain **at least two components or more** (e.g., name and
# surname, either in full or initials).
#
# Investigators search is similar to *authors* search, only it allows to search on `grants` and
# `clinical trials` using a separate search index `investigators`, and on
# `patents` using the index `inventors`.
# + Collapsed="false"
# %%dsldf
search clinical_trials in investigators for "\"<NAME>\""
return clinical_trials limit 5
# + Collapsed="false"
# %%dsldf
search grants in investigators for "\"<NAME>\""
return grants limit 5
# + Collapsed="false"
# %%dsldf
search patents in inventors for "\"<NAME>\""
return patents limit 5
# + [markdown] Collapsed="false"
# ### 4.2 Fuzzy Searches
#
# This type of search is similar to *full-text
# search*, with the difference that it
# allows searching by only a part of a name, e.g. only the 'last name' of
# a person, by using the `where` clause.
#
# **Note** At this moment, this type of search is only available for
# `publications`. Other sources will add this option in the future.
#
# For example:
# + Collapsed="false"
# %%dsldf
search publications where authors = "Hawking"
return publications[id+doi+title+authors] limit 5
# + [markdown] Collapsed="false"
# Generally speaking, using a `where` clause to search authors is less
# precise that using the relevant exact-search syntax.
#
# On the other hand, using a
# `where` clause can be handy if one wants to **combine an author search
# with another full-text search index**.
#
# For example:
# + Collapsed="false"
# %%dsldf
search publications
in title_abstract_only for "dna replication"
where authors = "smith"
return publications limit 5
# + [markdown] Collapsed="false"
# ### 4.3 Using the disambiguated Researchers database
#
# The Dimensions [Researchers](https://docs.dimensions.ai/dsl/datasource-researchers.html) source is a database of
# researchers information algorithmically extracted and disambiguated from
# all of the other content sources (publications, grants, clinical trials
# etc..).
#
# By using the `researchers` source it is possible to match an
# 'aggregated' person object linking together multiple publication
# authors, grant investigators etc.. irrespectively of the form their
# names can take in the original source documents.
#
# However, since database does not contain all authors and investigators information
# available in Dimensions.
#
# E.g. think of authors from older publications,
# or authors with very common names that are difficult to disambiguate, or
# very new authors, who have only one or few publications. In such cases,
# using full-text authors search might be more
# appropriate.
#
# Examples:
# + Collapsed="false"
# %%dsldf
search researchers for "\"<NAME>\""
return researchers[basics+obsolete]
# + [markdown] Collapsed="false"
# NOTE pay attentiont to the `obsolete` field. This indicates the researcher ID status. 0 means that the researcher ID is still **active**, 1 means that the researcher ID is **no longer valid**. This is due to the ongoing process of refinement of Dimensions researchers.
#
# Hence the query above is best written like this:
# + Collapsed="false"
# %%dsldf
search researchers where obsolete=0 for "\"<NAME>\""
return researchers[basics+obsolete]
# + [markdown] Collapsed="false"
# With `Researchers`, one can use other fields as well:
# + Collapsed="false"
# %%dsldf
search researchers
where obsolete=0 and last_name="Shimazaki"
return researchers[basics] limit 5
# + [markdown] Collapsed="false"
# ## 5. Returning results
#
# After the `search` phrase, a query must contain one or more `return`
# phrases, specifying the content and format of the information that
# should be returned.
# + [markdown] Collapsed="false"
#
#
# ### 5.1 Returning Multiple Sources
#
# Multiple results may not be returned in a single `return` phrase.
# + Collapsed="false"
# %%dsldf
search publications
return funders limit 5
return research_orgs limit 5
return year
# + [markdown] Collapsed="false"
#
# ### 5.2 Returning Specific Fields
#
# For control over which information from each given `record` will be
# returned, a `source` or `entity` name in the `results` phrase can be
# optionally followed by a specification of `fields` and `fieldsets` to be
# included in the JSON results for each retrieved record.
#
# The fields specification may be an arbitrary list of `field` names
# enclosed in brackets (`[`, `]`), with field names separated by a plus
# sign (`+`). Minus sign (`-`) can be used to exclude `field` or a
# `fieldset` from the result. Field names thus listed within brackets must
# be "known" to the DSL, and therefore only a subset of fields may be used
# in this syntax (see note below).
# + Collapsed="false"
# %%dsldf
search grants
return grants[grant_number + title + language] limit 5
# + Collapsed="false"
# %%dsldf
search clinical_trials
return clinical_trials [id+ title + acronym + phase] limit 5
# + [markdown] Collapsed="false"
# **Shortcuts: `fieldsets`**
#
# The fields specification may be the name of a pre-defined `fieldset`
# (e.g. `extras`, `basics`). These are shortcuts that can be handy when testing out new queries, for example.
#
# NOTE In general when writing code used in integrations or long-standing extraction scripts it is **best to return specific fields rather that a predefined set**. This has also the advantage of making queries faster by avoiding the extraction of unnecessary data.
#
# + Collapsed="false"
# %%dsldf
search grants
return grants [basics] limit 5
# + Collapsed="false"
# %%dsldf
search publications
return publications [basics+times_cited] limit 5
# + [markdown] Collapsed="false"
# The fields specification may be an (`all`), to indicate that all fields
# available for the given `source` should be returned.
# + Collapsed="false"
# %%dsldf
search publications
return publications [all] limit 5
# + [markdown] Collapsed="false"
# ### 5.3 Returning Facets
#
# In addition to returning source records matching a query, it is possible
# to $facet$ on the entity fields related to a
# particular source and return only those entity values as an aggregrated
# view of the related source data. This operation is similar to a
# $group by$ or $pivot table$.
#
# **Warning** Faceting can return up to a maximum of 1000 results. This is to ensure
# adequate performance with all queries. Furthemore, although the `limit`
# operator is allowed, the `skip` operator cannot be used.
# + Collapsed="false"
# %%dsldf
search publications
for "coronavirus"
return research_orgs limit 5
# + Collapsed="false"
# %%dsldf
search publications
for "coronavirus"
return research_org_countries limit 5
return year limit 5
return category_for limit 5
# + [markdown] Collapsed="false"
# For control over the organization and headers of the JSON query results,
# the `return` keyword in a return phrase may be followed by the keyword
# `in` and then a `group` name for this group of results, where the group
# name is enclosed in double quotes(`"`).
#
# Also, one can define `aliases` that replace the defaul JSON fields names with other ones provided by the user.
#
# See the [official documentation](https://docs.dimensions.ai/dsl/language.html#aliases) for more details about this feature.
# + Collapsed="false"
# %%dsl
search publications
return in "facets" funders
return in "facets" research_orgs
# + [markdown] Collapsed="false" colab_type="text" id="hQSTK25yG8Vz"
# ### 5.4 What the query statistics refer to - sources VS facets
#
# When performing a DSL search, a `_stats` object is return which contains some useful info eg the total number of records available for a search.
# + Collapsed="false" colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 956, "status": "ok", "timestamp": 1574678516635, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBu8LVjIGgontF2Wax51BoL5KFx8esezX3bUmaa0g=s64", "userId": "10309320684375994511"}, "user_tz": 0} id="yn-gpdVQG8Vz" outputId="695b23b2-f439-412d-9741-24cd3515f5d3"
# %%dsldf
search publications
where year in [2013:2018] and research_orgs="grid.258806.1"
return publications limit 5
# + [markdown] Collapsed="false" colab_type="text" id="lBP-VzxJItji"
#
#
# It is important to note though that the **total number always refers to the main source, never the facets** one is searching for.
#
# For example, in this query we return `researchers` linked to publications:
# + Collapsed="false" colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 1011, "status": "ok", "timestamp": 1574678518954, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBu8LVjIGgontF2Wax51BoL5KFx8esezX3bUmaa0g=s64", "userId": "10309320684375994511"}, "user_tz": 0} id="UDa-lDPVG8V1" outputId="0c9717e4-2172-4c0a-ebb7-a12d3c38f3e1"
# %%dsldf
search publications
where year in [2013:2018] and research_orgs="grid.258806.1"
return researchers limit 5
# + [markdown] Collapsed="false" colab_type="text" id="Ihecf2FPTCoM"
# NOTE: facet results can be 1000 at most (due to performance limitations) so if there are more than 1000 it is not possible to know the total number.
# + [markdown] Collapsed="false"
# ### 5.5 Paginating Results
#
# At the end of a `return` phrase, the user can specify the maximum number
# of results to be returned and the number of top records to skip over
# before returning the first result record, for e.g. returning large
# result sets page-by-page (i.e. "paging" results) as described below.
#
# This is done using the keyword `limit` followed by the maximum number of
# results to return, optionally followed by the keyword `skip` and the
# number of results to skip (the offset).
# + Collapsed="false"
# %%dsldf
search publications return publications limit 10
# + [markdown] Collapsed="false"
# If paging information is not provided, the default values
# `limit 20 skip 0` are used, so the two following queries are equivalent:
# + [markdown] Collapsed="false"
# Combining `limit` and `skip` across multiple queries enables paging or
# batching of results; e.g. to retrieve 30 grant records divided into 3
# pages of 10 records each, the following three queries could be used:
#
# ```
# return grants limit 10 => get 1st 10 records for page 1 (skip 0, by default)
# return grants limit 10 skip 10 => get next 10 for page 2; skip the 10 we already have
# return grants limit 10 skip 20 => get another 10 for page 3, for a total of 30
# ```
# + [markdown] Collapsed="false"
# ### 5.6 Sorting Results
#
# A sort order for the results in a given `return` phrase can be specified
# with the keyword `sort by` followed by the name of
# * a `field` (in the
# case that a `source` is being requested)
# * an `indicator (aggregation)` (in the case
# that one or more facets are being requested).
#
# By default, the result set of full text
# queries ($search ... for "full text query"$) is sorted by "relevance".
# Additionally, it is possible to specify the sort order, using `asc` or
# `desc` keywords. By default, descending order is selected.
# + Collapsed="false"
# %%dsldf
search grants
for "nanomaterials"
return grants sort by title desc limit 5
# + Collapsed="false"
# %%dsldf
search grants
for "nanomaterials"
return grants sort by relevance desc limit 5
# + [markdown] Collapsed="false"
# Number of citations per publication
# + Collapsed="false"
# %%dsldf
search publications
return publications [doi + times_cited]
sort by times_cited limit 5
# + [markdown] Collapsed="false"
# Recent citations per publication.
# Note: Recent citation refers to the number of citations accrued in the last two year period. A single value is stored per document and the year window rolls over in July.
# + Collapsed="false"
# %%dsldf
search publications
return publications [doi + recent_citations]
sort by recent_citations limit 5
# + [markdown] Collapsed="false"
# When a facet is being returned, the `indicator` used in the
# `sort` phrase must either be `count` (the default, such that
# `sort by count` is unnecessary), or one of the indicators specified in
# the `aggregate` phrase, i.e. one whose values are being computed in the
# faceting operation.
#
# + Collapsed="false"
# %%dsldf
search publications
for "nanomaterials"
return research_orgs
aggregate altmetric_median, rcr_avg sort by rcr_avg limit 5
# + [markdown] Collapsed="false"
# ### 5.7 Unnesting results
#
# Multi-value entity and JSON fields, such as `researchers`, `authors` or `research_orgs` or any of `category_*` fields may be unnested into top level objects.
#
# This operation makes it easier to do further operations on these objects e.g. counting or processing them further.
#
# This functionality will transform all of the returned multi-value data and turn them into top level keys, such as `researchers.id`, `researchers.first_name`, `researchers.last_name`, while copying other, non-unnested fields, such as `id` or `title` of publication for each of them. Returned results are therefore multiplied by as many researchers and categories each original publication has, so they will likely be more than the overall query limit, as the limit applies on the source objects, not the unnested one. If multiple fields are being unnested, then a cartesian product of all unnested fields is being returned.
#
#
#
# + Collapsed="false"
# %%dsldf
search publications for "Japan AND Buddhism"
where researchers is not empty
return publications[id+year+title+unnest(researchers)] limit 10
# + Collapsed="false"
# %%dsldf
search publications for "Japan AND Buddhism"
return publications[id+year+title+unnest(category_for)] limit 5
# + [markdown] Collapsed="false"
# You can `unnest` as many fields as you want. However the number of results will grow pretty quickly!
# + Collapsed="false"
# %%dsldf
search publications for "Japan AND Buddhism"
return publications[id+year+title+unnest(category_for)+unnest(researchers)+unnest(research_orgs)] limit 5
# + [markdown] Collapsed="false"
# ## 6. Aggregations
#
# In a `return` phrase requesting one or more `facet` results, aggregation
# operations to perform during faceting can be specified after the facet
# name(s) by using the keyword `aggregate` followed by a comma-separated
# list of one or more `indicator` names corresponding to the `source`
# being searched.
# + Collapsed="false"
# %%dsldf
search publications
where year > 2010
return research_orgs
aggregate rcr_avg, altmetric_median limit 5
# + [markdown] Collapsed="false"
# **What are the metrics/aggregations available?** See the data sources documentation for information about available [indicators](https://docs.dimensions.ai/dsl/datasource-publications.html#publications-indicators).
#
# Alternatively, we can use the 'schema' API ([describe](https://docs.dimensions.ai/dsl/data-sources.html#metadata-api)) to return this information programmatically:
# + Collapsed="false"
schema = dsl.query("describe schema")
sources = [x for x in schema['sources']]
# for each source name, extract metrics info
for s in sources:
print("SOURCE:", s)
for m in schema['sources'][s]['metrics']:
print("--", schema['sources'][s]['metrics'][m]['name'], " => ", schema['sources'][s]['metrics'][m]['description'], )
# + [markdown] Collapsed="false"
# **NOTE** In addition to any specified aggregations, `count` is always computed
# and reported when facet results are requested.
# + Collapsed="false"
# %%dsldf
search grants
for "5g network"
return funders
aggregate count, funding sort by funding limit 5
# + [markdown] Collapsed="false"
# Aggregated total number of citations
# + Collapsed="false"
# %%dsldf
search publications
for "ontologies"
return funders
aggregate citations_total
sort by citations_total limit 5
# + [markdown] Collapsed="false"
# Arithmetic mean number of citations
# + Collapsed="false"
# %%dsldf
search publications
return funders
aggregate citations_avg
sort by citations_avg limit 5
# + [markdown] Collapsed="false"
# Geometric mean of FCR
#
# + Collapsed="false"
# %%dsldf
search publications
return funders
aggregate fcr_gavg limit 5
# + [markdown] Collapsed="false"
# Median Altmetric Attention Score
# + Collapsed="false"
# %%dsldf
search publications
return funders aggregate altmetric_median
sort by altmetric_median limit 5
# + [markdown] Collapsed="false"
# ### 6.1 Complex aggregations
#
# The `return` phrase may be followed by a function expression, to return additional calculations, such as per year funding or citations statistics. These functions may take their own arguments, and are calculated using the source data as specified in the `search part` of the query.
#
# At the time of writing, there are two functions available: Publications `citations_per_year` and Grants `funding_per_year`
# + [markdown] Collapsed="false"
# #### Publications `citations_per_year`
#
# Publication citations is the number of times that publications have been cited by other publications in the database. This function returns the number of citations received in each year.
# + Collapsed="false"
# %%dsldf
search publications for "brexit"
return citations_per_year(2010, 2020)
# + [markdown] Collapsed="false"
# #### Grants `funding_per_year`
#
# Returns grant funding per year in the given currency, starting from specified year, ending in specified year (including).
#
# Supported currencies are: CAD,USD,JPY,GBP,CHF,CNY,EUR,NZD,AUD
# + Collapsed="false"
# %%dsldf
search grants for "brexit"
return funding_per_year(2010, 2020, "USD")
# + Collapsed="false"
| docs/cookbooks/1-getting-started/5-Deep-dive-DSL-language.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Classifying news with HuggingFace and PyTorch on Amazon SageMaker
# make sure the Amazon SageMaker SDK is updated
# !pip install "sagemaker" --upgrade
# import a few libraries that will be needed
import sagemaker
from sagemaker.huggingface import HuggingFace
import boto3
import pandas as pd
import os, time, tarfile
# gets role for executing training job and set a few variables
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = "news-hf"
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
# This example uses the AG News dataset cited in the paper [Character-level Convolutional Networks for Text Classification](https://arxiv.org/abs/1509.01626) by <NAME> and [<NAME>](https://twitter.com/ylecun). This dataset is available on the [AWS Open Data Registry](https://registry.opendata.aws/fast-ai-nlp/).
# download and extract our custom dataset
# !wget -nc https://s3.amazonaws.com/fast-ai-nlp/ag_news_csv.tgz
tf = tarfile.open('ag_news_csv.tgz')
tf.extractall()
# !rm -fr ag_news_csv.tgz
# +
# read training data and add a header
train = pd.read_csv('./ag_news_csv/train.csv')
train.columns = ['label', 'title', 'description']
# read testing data and add a header
test = pd.read_csv('./ag_news_csv/test.csv')
test.columns = ['label', 'title', 'description']
# write the files with header
train.to_csv("ag_news_csv/ag-train.csv", index=False)
test.to_csv("ag_news_csv/ag-test.csv", index=False)
# -
# take a look at the training data
train
# upload training and testing data to Amazon S3
inputs_train = sagemaker_session.upload_data("ag_news_csv/ag-train.csv", bucket=bucket, key_prefix='{}/train'.format(prefix))
inputs_test = sagemaker_session.upload_data("ag_news_csv/ag-test.csv", bucket=bucket, key_prefix='{}/test'.format(prefix))
print(inputs_train)
print(inputs_test)
# keep in mind the classes used in this dataset
classes = pd.read_csv('./ag_news_csv/classes.txt', header=None)
classes.columns = ['label']
classes
# ----
# ## BERT large uncased
# https://huggingface.co/bert-large-uncased
# #### Fine-tuning
hyperparameters = {
'model_name_or_path':'bert-large-uncased',
'output_dir':'/opt/ml/model',
'train_file':'/opt/ml/input/data/train/ag-train.csv',
'validation_file':'/opt/ml/input/data/test/ag-test.csv',
'do_train':True,
'do_eval':True,
'num_train_epochs': 1,
'save_total_limit': 1,
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.10.0/examples/pytorch/text-classification
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
# creates Hugging Face estimator
huggingface_estimator_bert = HuggingFace(
entry_point='run_glue.py', # note we are pointing to the processing script in HF repo
source_dir='./examples/pytorch/text-classification',
instance_type='ml.g4dn.16xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters,
disable_profiler=True
)
training_path='s3://{}/{}/train'.format(bucket, prefix)
testing_path='s3://{}/{}/test'.format(bucket, prefix)
# starting the train job
huggingface_estimator_bert.fit({"train": training_path, "test": testing_path}, wait=False)
# +
# check the status of the training job
client = boto3.client("sagemaker")
describe_response = client.describe_training_job(TrainingJobName=huggingface_estimator_bert.latest_training_job.name)
print ('Time - JobStatus - SecondaryStatus')
print('------------------------------')
print (time.strftime("%H:%M", time.localtime()), '-', describe_response['TrainingJobStatus'] + " - " + describe_response['SecondaryStatus'])
# uncomment this for monitoring the job status...
#job_run_status = describe_response['TrainingJobStatus']
#while job_run_status not in ('Failed', 'Completed', 'Stopped'):
# describe_response = client.describe_training_job(TrainingJobName=huggingface_estimator_bert.latest_training_job.name)
# job_run_status = describe_response['TrainingJobStatus']
# print (time.strftime("%H:%M", time.localtime()), '-', describe_response['TrainingJobStatus'] + " - " + describe_response['SecondaryStatus'])
# sleep(30)
# -
# **Important:** Make sure the training job is completed before running the "Inference" section below.
#
# You can verify this by running the previous cell and getting JobStatus = "Completed".
# #### Inference
# +
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model = sagemaker.huggingface.HuggingFaceModel(
env={ 'HF_TASK':'text-classification' },
model_data=huggingface_estimator_bert.model_data,
role=role,
transformers_version="4.6.1",
pytorch_version="1.7.1",
py_version='py36',
)
# -
# create SageMaker Endpoint with the HF model
predictor = huggingface_model.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge"
)
# +
# example request (you always need to define "inputs"). You can try with your own news' titles here...
data = {
#"inputs": "Armed robbery last night in the city."
"inputs": "Great match from Real Madrid tonight."
#"inputs": "Stocks went up 30% after yesterday's market closure."
#"inputs": "There is a new chipset that outperforms current GPUs."
}
response = predictor.predict(data)
print(response, classes['label'][int(response[0]['label'][-1:])])
# -
# let us run a quick performance test
sum_BERT=0
for i in range(1, 1000):
a_time = float(time.time())
result_BERT = predictor.predict(data)
b_time = float(time.time())
sum_BERT = sum_BERT + (b_time - a_time)
#print(b_time - a_time)
avg_BERT = sum_BERT/1000
print('BERT average inference time: {:.3f}'.format(avg_BERT), 'secs,')
# -----
# ## Amazon's BORT
# https://huggingface.co/amazon/bort
# #### Fine-tuning
hyperparameters_bort = {
'model_name_or_path':'amazon/bort',
'output_dir':'/opt/ml/model',
'train_file':'/opt/ml/input/data/train/ag-train.csv',
'validation_file':'/opt/ml/input/data/test/ag-test.csv',
'do_train':True,
'do_eval':True,
'num_train_epochs': 1,
'save_total_limit': 1
# add your remaining hyperparameters
# more info here https://github.com/huggingface/transformers/tree/v4.6.1/examples/pytorch/text-classification
}
# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.6.1'}
# creates Hugging Face estimator
huggingface_estimator_bort = HuggingFace(
entry_point='run_glue.py', # note we are pointing to the processing script in HF repo
source_dir='./examples/pytorch/text-classification',
instance_type='ml.g4dn.12xlarge',
instance_count=1,
role=role,
git_config=git_config,
transformers_version='4.6.1',
pytorch_version='1.7.1',
py_version='py36',
hyperparameters = hyperparameters_bort,
disable_profiler=True
)
training_path='s3://{}/{}/train'.format(bucket, prefix)
testing_path='s3://{}/{}/test'.format(bucket, prefix)
# starting the train job
huggingface_estimator_bort.fit({"train": training_path, "test": testing_path}, wait=False)
# +
# check the status of the training job
client = boto3.client("sagemaker")
describe_response = client.describe_training_job(TrainingJobName=huggingface_estimator_bort.latest_training_job.name)
print ('Time - JobStatus - SecondaryStatus')
print('------------------------------')
print (time.strftime("%H:%M", time.localtime()), '-', describe_response['TrainingJobStatus'] + " - " + describe_response['SecondaryStatus'])
# uncomment this for monitoring the job status...
#job_run_status = describe_response['TrainingJobStatus']
#while job_run_status not in ('Failed', 'Completed', 'Stopped'):
# describe_response = client.describe_training_job(TrainingJobName=huggingface_estimator_bort.latest_training_job.name)
# job_run_status = describe_response['TrainingJobStatus']
# print (time.strftime("%H:%M", time.localtime()), '-', describe_response['TrainingJobStatus'] + " - " + describe_response['SecondaryStatus'])
# sleep(30)
# -
# **Important:** Make sure the training job is completed before running the "Inference" section below.
#
# You can verify this by running the previous cell and getting JobStatus = "Completed".
# #### Inference
# +
from sagemaker.huggingface.model import HuggingFaceModel
# create Hugging Face Model Class
huggingface_model_bort = sagemaker.huggingface.HuggingFaceModel(
env={ 'HF_TASK':'text-classification' },
model_data=huggingface_estimator_bort.model_data,
role=role,
transformers_version="4.6.1",
pytorch_version="1.7.1",
py_version='py36',
)
# -
# create SageMaker Endpoint with the HF model
predictor_bort = huggingface_model_bort.deploy(
initial_instance_count=1,
instance_type="ml.g4dn.xlarge"
)
# +
# example request (you always need to define "inputs"). You can try with your own news' titles here...
data = {
"inputs": "Stocks went up 30% after yesterday's market closure."
#"inputs": "There is a new chipset that outperforms current GPUs."
}
response = predictor_bort.predict(data)
print(response, classes['label'][int(response[0]['label'][-1:])])
# -
# let us run a quick performance test
sum_BORT=0
for i in range(1, 1000):
a_time = float(time.time())
result_BORT = predictor_bort.predict(data)
b_time = float(time.time())
sum_BORT = sum_BORT + (b_time - a_time)
#print(b_time - a_time)
avg_BORT = sum_BORT/1000
print('BORT average inference time: {:.3f}'.format(avg_BORT), 'secs,')
# -----
# #### Clean-up
# +
# uncomment for cleaning-up endpoints
#sess = boto3.Session()
#sess.delete_endpoint(predictor_bert.endpoint)
#sess.delete_endpoint(predictor_bort.endpoint)
| byod-news-sm-sf.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img style="float:top,right" src="Logo.png">
#
# <br><br>
#
# # Welcome to the KinMS introduction
#
# <br><br>
#
# ### Here you will learn how to import and use KinMS to generate mock interferometric data cubes and gain a better understanding of using the functionalities within the package.
#
# ---
#
# Copyright (C) 2016, <NAME>
# E-mail: DavisT -at- cardiff.ac.uk, zabelnj -at- cardiff.ac.uk, dawsonj5 -at- cardiff.ac.uk
#
# ---
#
# This tutorial aims at getting you up and running with KinMS! To start you will need to download the KinMSpy code and have it in your python path.
#
# The simplest way to do this is to call `pip install kinms`
#
# Once you have completed/understood this tutorial you may want to check out the tutorial on galaxy fitting with KinMS!
# ### HOUSEKEEPING
# Firstly, we want to import the KinMS package and instantiate the class so that we can freely use it throughout this example notebook
from kinms import KinMS
# Secondly we're going to need some more basic Python packages as well as the premade colourmap for viewing velocity maps found in $\texttt{sauron-colormap}$
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
from kinms.utils.sauron_colormap import sauron
# ---
#
# ## Example 1.
#
# ### Lets try making a data cube by providing the class with the physical attributes necessary for describing a simple exponential disk.
# First lets start by creating a surface brightness profile which decays radially
scalerad = 10 # arcseconds
radius = np.arange(0, 1000, 0.1) # radius vector in arcseconds
sbprof = np.exp(-radius / scalerad)
# Next, lets make the velocity profile, assuming an arctan form.
vel = (210) * (2/np.pi)*np.arctan(radius) # Scaling the maximum velocity to 210 km/s
# Although not necessary, we may also wish to provide our class with the position angle and inclination angle of our galaxy. We do that here by defining $\theta_\texttt{pos}$ and $\phi_\texttt{inc}$ respectively
pos = 270 # degrees
inc= 45 # degrees
# Now we need to define the properties of the data cube which we would like to return, including the physical dimensions, channel width, and beam size
xsize = 128 # arcsec
ysize = 128 # arcsec
vsize = 700 # km/s
cellsize = 1 # arcsec/pixel
dv = 10 # km/s/channel
beamsize = [4, 4, 0] # arcsec, arcsec, degrees
# Finally, we provide all of the parameters defined above to the class which returns the modelled data cube.
#
# **Note**: If you wish, the user can use the "verbose = True" argument to see useful information and feedback on the input parameters while generating the cube. We show an example of this behaviour below
kin = KinMS(xsize, ysize, vsize, cellsize, dv, beamSize = beamsize, inc = inc, sbProf = sbprof,
sbRad = radius, velProf = vel, posAng = pos, verbose = True)
# You can then generate the model cube using the following:
model=kin.model_cube()
# If you do not want to see the printed information (for example during MCMC fitting routines), it is easy to switch off by either not using the verbose argument or setting it to False explicitly.
kin.verbose=False
# A similar behaviour exists for outputting plots of the generated cube, which can also be toggled on and off. Plots are created by passing the "toplot = True" argument to model_cube. We show this behaviour below
cube = kin.model_cube(toplot=True)
# Next we're going to demonstrate the use of $\texttt{inclouds}$, which allows the user to pass specific cloudlet positions and their associated velocities to $\texttt{KinMS}$. These particles could be generated by some other means (e.g. if you are making mock observations of a simulation), or be the output from some analytic function.
#
# As in the first example, we need to set up our cube parameters
xsize = 128 # arcsec
ysize = 128 # arcsec
vsize = 1400 # km/s
cellsize = 1 # arcsec/pixel
dv = 10 # km/s/channel
beamsize = [4, 4, 0] # arcsec, arcsec, degrees
inc = 35 # degrees
intflux = 30 # Jy km/s
posang = 90 # degrees
# Now we can specify the x,y and z positions of the cloudlets we wish to pass to $\texttt{KinMS}$ as an (n,3) vector. These should be specified in arcseconds around some central location.
inclouds = np.array([[40, 0, 0], [39.5075, 6.25738, 0], [38.0423, 12.3607, 0.00000], [35.6403, 18.1596, 0],
[32.3607, 23.5114, 0], [28.2843, 28.2843, 0], [23.5114, 32.3607, 0], [18.1596, 35.6403, 0],
[12.3607, 38.0423, 0], [6.25737, 39.5075, 0], [0, 40, 0], [-6.25738, 39.5075, 0],
[-12.3607, 38.0423, 0], [-18.1596, 35.6403, 0], [-23.5114, 32.3607, 0],
[-28.2843, 28.2843, 0], [-32.3607, 23.5114, 0], [-35.6403, 18.1596, 0],
[-38.0423, 12.3607, 0], [-39.5075, 6.25738, 0], [-40, 0, 0], [-39.5075, -6.25738, 0],
[-38.0423,-12.3607, 0], [-35.6403, -18.1596, 0], [-32.3607, -23.5114, 0], [-28.2843, -28.2843, 0],
[-23.5114, -32.3607, 0], [-18.1596, -35.6403, 0], [-12.3607,-38.0423, 0], [-6.25738, -39.5075, 0],
[0, -40, 0], [6.25738, -39.5075, 0], [12.3607, -38.0423, 0], [18.1596, -35.6403, 0],
[23.5114, -32.3607, 0], [28.2843, -28.2843, 0], [32.3607,-23.5114, 0], [35.6403, -18.1596, 0],
[38.0423, -12.3607, 0], [39.5075, -6.25737, 0], [15, 15, 0], [-15, 15, 0],
[-19.8504, -2.44189, 0], [-18.0194, -8.67768, 0], [-14.2856, -13.9972, 0],
[-9.04344, -17.8386, 0], [-2.84630, -19.7964, 0], [3.65139, -19.6639, 0],
[9.76353, -17.4549, 0], [14.8447, -13.4028, 0], [18.3583, -7.93546, 0],
[19.9335, -1.63019, 0]])
# Now we have a choice to make. If you are generating mock observations from a hydrodynamic simulation, lets say, then you already have full 3D velocity information, and you will want to supply the line-of-sight velocity for every resolution element. In this case you can pass the velocity information as vLOS_clouds - but you should make sure your input cloudlets have already been projected to the desired inclination.
#
# Alternativly, perhaps you would like to input a circular velocity profile, and have KinMS handle the projection. Here we create a velocity profile with a few radial position anchors and linearly interpolate between them to get a full profile
x = np.arange(0, 100, 0.1)
velfunc = interpolate.interp1d([0, 0.5, 1, 3, 500], [0, 50, 100, 210, 210], kind = 'linear')
vel = velfunc(x)
# Again, lets make a cube with all the specified parameters above
cube = KinMS(xsize, ysize, vsize, cellsize, dv, beamsize, inc, intFlux = intflux, inClouds = inclouds,
velProf = vel, velRad = x, posAng = posang).model_cube(toplot = True)
# ---
#
# ## Example 3.
#
# $\texttt{KinMS}$ can accomodate a variety of departures from simple orderly rotation. In this example we will demonstrate the creation of datacubes containing a galaxy with a non-zero thickness disk with a warp in the position angle across the radius of the disk.
#
# As in the other examples, we need to set up our cube parameters
# +
xsize = 128
ysize = 128
vsize = 1400
cellsize = 1
dv = 10
beamsize = 2
intflux = 30
fcent = 10
scalerad = 20
inc = 60
discthick=1.
# create an exponetial surface brightness profile and an arctan velocity curve
radius = np.arange(0, 100, 0.1)
sbprof = fcent * np.exp(-radius / scalerad)
vel = (210) * (2/np.pi)*np.arctan(radius)
# -
# Next we need to create an array of position angle values
posangfunc = interpolate.interp1d([0, 15, 50, 500], [270, 270, 300, 300], kind='linear')
posang = posangfunc(radius)
# And lastly, we simply run KinMS to generate the final cube
kin = KinMS(xsize, ysize, vsize, cellsize, dv, beamsize, inc, sbProf=sbprof, sbRad=radius, velProf=vel, intFlux=intflux,
posAng=posang,diskThick=discthick)
cube=kin.model_cube(toplot=True)
# ---
#
# ## Final notes
#
# For a more in-depth exploration of the capabilities of $\texttt{KinMS}$, please check out the $\texttt{KinMS}$ testsuite in the GitHub repository!
#
# You may also want to check out the [tutorial on galaxy fitting with KinMS](https://github.com/TimothyADavis/KinMSpy/blob/master/kinms/docs/KinMSpy_tutorial.ipynb)!
| kinms/docs/KinMS_example_notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 <NAME>, <NAME>, <NAME>.
# # Spreading out
# We're back! This is the fourth notebook of _Spreading out: parabolic PDEs,_ Module 4 of the course [**"Practical Numerical Methods with Python"**](https://openedx.seas.gwu.edu/courses/course-v1:MAE+MAE6286+2017/about).
#
# In the [previous notebook](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_03_Heat_Equation_2D_Explicit.ipynb), we solved a 2D problem for the first time, using an explicit scheme. We know explicit schemes have stability constraints that might make them impractical in some cases, due to requiring a very small time step. Implicit schemes are unconditionally stable, offering the advantage of larger time steps; in [notebook 2](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_02_Heat_Equation_1D_Implicit.ipynb), we look at the 1D implicit solution of diffusion. Already, that was quite a lot of work: setting up a matrix of coefficients and a right-hand-side vector, while taking care of the boundary conditions, and then solving the linear system. And now, we want to do implicit schemes in 2Dโare you ready for this challenge?
# ## 2D Heat conduction
# We already studied 2D heat conduction in the previous lesson, but now we want to work out how to build an implicit solution scheme. To refresh your memory, here is the heat equation again:
#
# $$
# \begin{equation}
# \frac{\partial T}{\partial t} = \alpha \left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right)
# \end{equation}
# $$
#
# Our previous solution used a Dirichlet boundary condition on the left and bottom boundaries, with $T(x=0)=T(y=0)=100$, and a Neumann boundary condition with zero flux on the top and right edges, with $q_x=q_y=0$.
#
# $$
# \left( \left.\frac{\partial T}{\partial y}\right|_{y=0.1} = q_y \right) \quad \text{and} \quad \left( \left.\frac{\partial T}{\partial x}\right|_{x=0.1} = q_x \right)
# $$
#
# Figure 1 shows a sketch of the problem set up for our hypothetical computer chip with two hot edges and two insulated edges.
# #### <img src="./figures/2dchip.svg" width="400px"> Figure 1: Simplified microchip problem setup.
# ### Implicit schemes in 2D
# An implicit discretization will evaluate the spatial derivatives at the next time level, $t^{n+1}$, using the unknown values of the solution variable. For the 2D heat equation with central difference in space, that is written as:
#
# $$
# \begin{equation}
# \begin{split}
# & \frac{T^{n+1}_{i,j} - T^n_{i,j}}{\Delta t} = \\
# & \quad \alpha \left( \frac{T^{n+1}_{i+1, j} - 2T^{n+1}_{i,j} + T^{n+1}_{i-1,j}}{\Delta x^2} + \frac{T^{n+1}_{i, j+1} - 2T^{n+1}_{i,j} + T^{n+1}_{i,j-1}}{\Delta y^2} \right) \\
# \end{split}
# \end{equation}
# $$
#
# This equation looks better when we put what we *don't know* on the left and what we *do know* on the right. Make sure to work this out yourself on a piece of paper.
#
# $$
# \begin{equation}
# \begin{split}
# & -\frac{\alpha \Delta t}{\Delta x^2} \left( T^{n+1}_{i-1,j} + T^{n+1}_{i+1,j} \right) + \left( 1 + 2 \frac{\alpha \Delta t}{\Delta x^2} + 2 \frac{\alpha \Delta t}{\Delta y^2} \right) T^{n+1}_{i,j} \\
# & \quad \quad \quad -\frac{\alpha \Delta t}{\Delta y^2} \left( T^{n+1}_{i,j-1} + T^{n+1}_{i,j+1} \right) = T^n_{i,j} \\
# \end{split}
# \end{equation}
# $$
#
# To make this discussion easier, let's assume that the mesh spacing is the same in both directions and $\Delta x=\Delta y = \delta$:
#
# $$
# \begin{equation}
# -T^{n+1}_{i-1,j} - T^{n+1}_{i+1,j} + \left(\frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{i,j} - T^{n+1}_{i,j-1}-T^{n+1}_{i,j+1} = \frac{\delta^2}{\alpha \Delta t}T^n_{i,j}
# \end{equation}
# $$
#
# Just like in the one-dimensional case, $T_{i,j}$ appears in the equation for $T_{i-1,j}$, $T_{i+1,j}$, $T_{i,j+1}$ and $T_{i,j-1}$, and we can form a linear system to advance in time. But, how do we construct the matrix in this case? What are the $(i+1,j)$, $(i-1,j)$, $(i,j+1)$, and $(i,j-1)$ positions in the matrix?
#
# With explicit schemes we don't need to worry about these things. We can lay out the data just as it is in the physical problem. We had an array `T` that was a 2-dimensional matrix. To fetch the temperature in the next node in the $x$ direction $(T_{i+1,j})$ we just did `T[j,i+1]`, and likewise in the $y$ direction $(T_{i,j+1})$ was in `T[j+1,i]`. In implicit schemes, we need to think a bit harder about how the data is mapped to the physical problem.
#
# Also, remember from the [notebook on 1D-implicit schemes](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/04_02_Heat_Equation_1D_Implicit.ipynb) that the linear system had $N-2$ elements? We applied boundary conditions on nodes $i=0$ and $i=N-1$, and they were not modified by the linear system. In 2D, this becomes a bit more complicated.
#
# Let's use Figure 1, representing a set of grid nodes in two dimensions, to guide the discussion.
# #### <img src="./figures/2D_discretization.png"> Figure 2: Layout of matrix elements in 2D problem
# Say we have the 2D domain of size $L_x\times L_y$ discretized in $n_x$ and $n_y$ points. We can divide the nodes into boundary nodes (empty circles) and interior nodes (filled circles).
#
# The boundary nodes, as the name says, are on the boundary. They are the nodes with indices $(i=0,j)$, $(i=n_x-1,j)$, $(i,j=0)$, and $(i,j=n_y-1)$, and boundary conditions are enforced there.
#
# The interior nodes are not on the boundary, and the finite-difference equation acts on them. If we leave the boundary nodes aside for the moment, then the grid will have $(n_x-2)\cdot(n_y-2)$ nodes that need to be updated on each time step. This is the number of unknowns in the linear system. The matrix of coefficients will have $\left( (n_x-2)\cdot(n_y-2) \right)^2$ elements (most of them zero!).
#
# To construct the matrix, we will iterate over the nodes in an x-major order: index $i$ will run faster. The order will be
#
# * $(i=1,j=1)$
# * $(i=2,j=1)$ ...
# * $(i=nx-2,j=1)$
# * $(i=1,j=2)$
# * $(i=2,j=2)$ ...
# * $(i=n_x-2,j=n_y-2)$.
#
# That is the ordering represented by dotted line on Figure 1. Of course, if you prefer to organize the nodes differently, feel free to do so!
#
# Because we chose this ordering, the equation for nodes $(i-1,j)$ and $(i+1,j)$ will be just before and after $(i,j)$, respectively. But what about $(i,j-1)$ and $(i,j+1)$? Even though in the physical problem they are very close, the equations are $n_x-2$ places apart! This can tie your head in knots pretty quickly.
#
# _The only way to truly understand it is to make your own diagrams and annotations on a piece of paper and reconstruct this argument!_
# ### Boundary conditions
# Before we attempt to build the matrix, we need to think about boundary conditions. There is some bookkeeping to be done here, so bear with us for a moment.
#
# Say, for example, that the left and bottom boundaries have Dirichlet boundary conditions, and the top and right boundaries have Neumann boundary conditions.
#
# Let's look at each case:
#
# **Bottom boundary:**
#
# The equation for $j=1$ (interior points adjacent to the bottom boundary) uses values from $j=0$, which are known. Let's put that on the right-hand side of the equation. We get this equation for all points across the $x$-axis that are adjacent to the bottom boundary:
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{i-1,1} - T^{n+1}_{i+1,1} + \left( \frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{i,1} - T^{n+1}_{i,j+1} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{i,1} + T^{n+1}_{i,0} & \\
# \end{split}
# \end{equation}
# $$
#
# **Left boundary:**
#
# Like for the bottom boundary, the equation for $i=1$ (interior points adjacent to the left boundary) uses known values from $i=0$, and we will put that on the right-hand side:
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{2,j} + \left( \frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{1,j} - T^{n+1}_{1,j-1} - T^{n+1}_{1,j+1} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{1,j} + T^{n+1}_{0,j} & \\
# \end{split}
# \end{equation}
# $$
#
# **Right boundary:**
#
# Say the boundary condition is $\left. \frac{\partial T}{\partial x} \right|_{x=L_x} = q_x$. Its finite-difference approximation is
#
# $$
# \begin{equation}
# \frac{T^{n+1}_{n_x-1,j} - T^{n+1}_{n_x-2,j}}{\delta} = q_x
# \end{equation}
# $$
#
# We can write $T^{n+1}_{n_x-1,j} = \delta q_x + T^{n+1}_{n_x-2,j}$ to get the finite difference equation for $i=n_x-2$:
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{n_x-3,j} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{n_x-2,j} - T^{n+1}_{n_x-2,j-1} - T^{n+1}_{n_x-2,j+1} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{n_x-2,j} + \delta q_x & \\
# \end{split}
# \end{equation}
# $$
#
# Not sure about this? Grab pen and paper! _Please_, check this yourself. It will help you understand!
#
# **Top boundary:**
#
# Neumann boundary conditions specify the derivative normal to the boundary: $\left. \frac{\partial T}{\partial y} \right|_{y=L_y} = q_y$. No need to repeat what we did for the right boundary, right? The equation for $j=n_y-2$ is
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{i-1,n_y-2} - T^{n+1}_{i+1,n_y-2} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{i,n_y-2} - T^{n+1}_{i,n_y-3} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{i,n_y-2} + \delta q_y & \\
# \end{split}
# \end{equation}
# $$
#
# So far, we have then 5 possible cases: bottom, left, right, top, and interior points. Does this cover everything? What about corners?
# **Bottom-left corner**
#
# At $T_{1,1}$ there is a Dirichlet boundary condition at $i=0$ and $j=0$. This equation is:
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{2,1} + \left( \frac{\delta^2}{\alpha \Delta t} + 4 \right) T^{n+1}_{1,1} - T^{n+1}_{1,2} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{1,1} + T^{n+1}_{0,1} + T^{n+1}_{1,0} & \\
# \end{split}
# \end{equation}
# $$
#
# **Top-left corner:**
#
# At $T_{1,n_y-2}$ there is a Dirichlet boundary condition at $i=0$ and a Neumann boundary condition at $i=n_y-1$. This equation is:
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{2,n_y-2} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{1,n_y-2} - T^{n+1}_{1,n_y-3} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{1,n_y-2} + T^{n+1}_{0,n_y-2} + \delta q_y & \\
# \end{split}
# \end{equation}
# $$
#
# **Top-right corner**
#
# At $T_{n_x-2,n_y-2}$, there are Neumann boundary conditions at both $i=n_x-1$ and $j=n_y-1$. The finite difference equation is then
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{n_x-3,n_y-2} + \left( \frac{\delta^2}{\alpha \Delta t} + 2 \right) T^{n+1}_{n_x-2,n_y-2} - T^{n+1}_{n_x-2,n_y-3} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{n_x-2,n_y-2} + \delta(q_x + q_y) & \\
# \end{split}
# \end{equation}
# $$
#
# **Bottom-right corner**
#
# To calculate $T_{n_x-2,1}$ we need to consider a Dirichlet boundary condition to the bottom and a Neumann boundary condition to the right. We will get a similar equation to the top-left corner!
#
# $$
# \begin{equation}
# \begin{split}
# -T^{n+1}_{n_x-3,1} + \left( \frac{\delta^2}{\alpha \Delta t} + 3 \right) T^{n+1}_{n_x-2,1} - T^{n+1}_{n_x-2,2} \qquad & \\
# = \frac{\delta^2}{\alpha \Delta t} T^n_{n_x-2,1} + T^{n+1}_{n_x-2,0} + \delta q_x & \\
# \end{split}
# \end{equation}
# $$
#
# Okay, now we are actually ready. We have checked every possible case!
# ### The linear system
# Like in the previous lesson introducing implicit schemes, we will solve a linear system at every time step:
#
# $$
# [A][T^{n+1}_\text{int}] = [b]+[b]_{b.c.}
# $$
#
# The coefficient matrix now takes some more work to figure out and to build in code. There is no substitute for you working this out patiently on paper!
#
# The structure of the matrix can be described as a series of diagonal blocks, and lots of zeros elsewhere. Look at Figure 3, representing the block structure of the coefficient matrix, and refer back to Figure 2, showing the discretization grid in physical space. The first row of interior points, adjacent to the bottom boundary, generates the matrix block labeled $A_1$. The top row of interior points, adjacent to the top boundary generates the matrix block labeled $A_3$. All other interior points in the grid generate similar blocks, labeled $A_2$ on Figure 3.
# #### <img src="./figures/implicit-matrix-blocks.png"> Figure 3: Sketch of coefficient-matrix blocks.
# #### <img src="./figures/matrix-blocks-on-grid.png"> Figure 4: Grid points corresponding to each matrix-block type.
# The matrix block $A_1$ is
#
# <img src="./figures/A_1.svg" width="640px">
#
# The block matrix $A_2$ is
#
# <img src="./figures/A_2.svg" width="640px">
#
# The block matrix $A_3$ is
#
# <img src="./figures/A_3.svg" width="640px">
# Vector $T^{n+1}_\text{int}$ contains the temperature of the interior nodes in the next time step. It is:
#
# $$
# \begin{equation}
# T^{n+1}_\text{int} = \left[
# \begin{array}{c}
# T^{n+1}_{1,1}\\
# T^{n+1}_{2,1} \\
# \vdots \\
# T^{n+1}_{n_x-2,1} \\
# T^{n+1}_{2,1} \\
# \vdots \\
# T^{n+1}_{n_x-2,n_y-2}
# \end{array}
# \right]
# \end{equation}
# $$
#
# Remember the x-major ordering we chose!
# Finally, the right-hand side is
# \begin{equation}
# [b]+[b]_{b.c.} =
# \left[\begin{array}{c}
# \sigma^\prime T^n_{1,1} + T^{n+1}_{0,1} + T^{n+1}_{1,0} \\
# \sigma^\prime T^n_{2,0} + T^{n+1}_{2,0} \\
# \vdots \\
# \sigma^\prime T^n_{n_x-2,1} + T^{n+1}_{n_x-2,0} + \delta q_x \\
# \sigma^\prime T^n_{1,2} + T^{n+1}_{0,2} \\
# \vdots \\
# \sigma^\prime T^n_{n_x-2,n_y-2} + \delta(q_x + q_y)
# \end{array}\right]
# \end{equation}
# where $\sigma^\prime = 1/\sigma = \delta^2/\alpha \Delta t$. The matrix looks very ugly, but it is important you understand it! Think about it. Can you answer:
# * Why a -1 factor appears $n_x-2$ columns after the diagonal? What about $n_x-2$ columns before the diagonal?
# * Why in row $n_x-2$ the position after the diagonal contains a 0?
# * Why in row $n_x-2$ the diagonal is $\sigma^\prime + 3$ rather than $\sigma^\prime + 4$?
# * Why in the last row the diagonal is $\sigma^\prime + 2$ rather than $\sigma^\prime + 4$?
#
# If you can answer those questions, you are in good shape to continue!
# Let's write a function that will generate the matrix and right-hand side for the heat conduction problem in the previous notebook. Remember, we had Dirichlet boundary conditions in the left and bottom, and zero-flux Neumann boundary condition on the top and right $(q_x=q_y=0)$.
#
# Also, we'll import `scipy.linalg.solve` because we need to solve a linear system.
import numpy
from scipy import linalg
def lhs_operator(M, N, sigma):
"""
Assembles and returns the implicit operator
of the system for the 2D diffusion equation.
We use a Dirichlet condition at the left and
bottom boundaries and a Neumann condition
(zero-gradient) at the right and top boundaries.
Parameters
----------
M : integer
Number of interior points in the x direction.
N : integer
Number of interior points in the y direction.
sigma : float
Value of alpha * dt / dx**2.
Returns
-------
A : numpy.ndarray
The implicit operator as a 2D array of floats
of size M*N by M*N.
"""
A = numpy.zeros((M * N, M * N))
for j in range(N):
for i in range(M):
I = j * M + i # row index
# Get index of south, west, east, and north points.
south, west, east, north = I - M, I - 1, I + 1, I + M
# Setup coefficients at corner points.
if i == 0 and j == 0: # bottom-left corner
A[I, I] = 1.0 / sigma + 4.0
A[I, east] = -1.0
A[I, north] = -1.0
elif i == M - 1 and j == 0: # bottom-right corner
A[I, I] = 1.0 / sigma + 3.0
A[I, west] = -1.0
A[I, north] = -1.0
elif i == 0 and j == N - 1: # top-left corner
A[I, I] = 1.0 / sigma + 3.0
A[I, south] = -1.0
A[I, east] = -1.0
elif i == M - 1 and j == N - 1: # top-right corner
A[I, I] = 1.0 / sigma + 2.0
A[I, south] = -1.0
A[I, west] = -1.0
# Setup coefficients at side points (excluding corners).
elif i == 0: # left side
A[I, I] = 1.0 / sigma + 4.0
A[I, south] = -1.0
A[I, east] = -1.0
A[I, north] = -1.0
elif i == M - 1: # right side
A[I, I] = 1.0 / sigma + 3.0
A[I, south] = -1.0
A[I, west] = -1.0
A[I, north] = -1.0
elif j == 0: # bottom side
A[I, I] = 1.0 / sigma + 4.0
A[I, west] = -1.0
A[I, east] = -1.0
A[I, north] = -1.0
elif j == N - 1: # top side
A[I, I] = 1.0 / sigma + 3.0
A[I, south] = -1.0
A[I, west] = -1.0
A[I, east] = -1.0
# Setup coefficients at interior points.
else:
A[I, I] = 1.0 / sigma + 4.0
A[I, south] = -1.0
A[I, west] = -1.0
A[I, east] = -1.0
A[I, north] = -1.0
return A
def rhs_vector(T, M, N, sigma, Tb):
"""
Assembles and returns the right-hand side vector
of the system for the 2D diffusion equation.
We use a Dirichlet condition at the left and
bottom boundaries and a Neumann condition
(zero-gradient) at the right and top boundaries.
Parameters
----------
T : numpy.ndarray
The temperature distribution as a 1D array of floats.
M : integer
Number of interior points in the x direction.
N : integer
Number of interior points in the y direction.
sigma : float
Value of alpha * dt / dx**2.
Tb : float
Boundary value for Dirichlet conditions.
Returns
-------
b : numpy.ndarray
The right-hand side vector as a 1D array of floats
of size M*N.
"""
b = 1.0 / sigma * T
# Add Dirichlet term at points located next
# to the left and bottom boundaries.
for j in range(N):
for i in range(M):
I = j * M + i
if i == 0:
b[I] += Tb
if j == 0:
b[I] += Tb
return b
# The solution of the linear system $(T^{n+1}_\text{int})$ contains the temperatures of the interior points at the next time step in a 1D array. We will also create a function that will take the values of $T^{n+1}_\text{int}$ and put them in a 2D array that resembles the physical domain.
def map_1d_to_2d(T_1d, nx, ny, Tb):
"""
Maps a 1D array of the temperature at the interior points
to a 2D array that includes the boundary values.
Parameters
----------
T_1d : numpy.ndarray
The temperature at the interior points as a 1D array of floats.
nx : integer
Number of points in the x direction of the domain.
ny : integer
Number of points in the y direction of the domain.
Tb : float
Boundary value for Dirichlet conditions.
Returns
-------
T : numpy.ndarray
The temperature distribution in the domain
as a 2D array of size ny by nx.
"""
T = numpy.zeros((ny, nx))
# Get the value at interior points.
T[1:-1, 1:-1] = T_1d.reshape((ny - 2, nx - 2))
# Use Dirichlet condition at left and bottom boundaries.
T[:, 0] = Tb
T[0, :] = Tb
# Use Neumann condition at right and top boundaries.
T[:, -1] = T[:, -2]
T[-1, :] = T[-2, :]
return T
# And to advance in time, we will use
def btcs_implicit_2d(T0, nt, dt, dx, alpha, Tb):
"""
Computes and returns the distribution of the
temperature after a given number of time steps.
The 2D diffusion equation is integrated using
Euler implicit in time and central differencing
in space, with a Dirichlet condition at the left
and bottom boundaries and a Neumann condition
(zero-gradient) at the right and top boundaries.
Parameters
----------
T0 : numpy.ndarray
The initial temperature distribution as a 2D array of floats.
nt : integer
Number of time steps to compute.
dt : float
Time-step size.
dx : float
Grid spacing in the x and y directions.
alpha : float
Thermal diffusivity of the plate.
Tb : float
Boundary value for Dirichlet conditions.
Returns
-------
T : numpy.ndarray
The temperature distribution as a 2D array of floats.
"""
# Get the number of points in each direction.
ny, nx = T0.shape
# Get the number of interior points in each direction.
M, N = nx - 2, ny - 2
# Compute the constant sigma.
sigma = alpha * dt / dx**2
# Create the implicit operator of the system.
A = lhs_operator(M, N, sigma)
# Integrate in time.
T = T0[1:-1, 1:-1].flatten() # interior points as a 1D array
I, J = int(M / 2), int(N / 2) # indices of the center
for n in range(nt):
# Compute the right-hand side of the system.
b = rhs_vector(T, M, N, sigma, Tb)
# Solve the system with scipy.linalg.solve.
T = linalg.solve(A, b)
# Check if the center of the domain has reached T = 70C.
if T[J * M + I] >= 70.0:
break
print('[time step {}] Center at T={:.2f} at t={:.2f} s'
.format(n + 1, T[J * M + I], (n + 1) * dt))
# Returns the temperature in the domain as a 2D array.
return map_1d_to_2d(T, nx, ny, Tb)
# Remember, we want the function to tell us when the center of the plate reaches $70^\circ C$.
# ##### Dig deeper
# For demonstration purposes, these functions are very explicit. But you can see a trend here, right?
#
# Say we start with a matrix with `1/sigma+4` in the main diagonal, and `-1` on the 4 other corresponding diagonals. Now, we have to modify the matrix only where the boundary conditions are affecting. We saw the impact of the Dirichlet and Neumann boundary condition on each position of the matrix, we just need to know in which position to perform those changes.
#
# A function that maps `i` and `j` into `row_number` would be handy, right? How about `row_number = (j-1)*(nx-2)+(i-1)`? By feeding `i` and `j` to that equation, you know exactly where to operate on the matrix. For example, `i=nx-2, j=2`, which is in row `row_number = 2*nx-5`, is next to a Neumann boundary condition: we have to substract one out of the main diagonal (`A[2*nx-5,2*nx-5]-=1`), and put a zero in the next column (`A[2*nx-5,2*nx-4]=0`). This way, the function can become much simpler!
#
# Can you use this information to construct a more general function `lhs_operator`? Can you make it such that the type of boundary condition is an input to the function?
# ## Heat diffusion in 2D
# Let's recast the 2D heat conduction from the previous notebook, and solve it with an implicit scheme.
# +
# Set parameters.
Lx = 0.01 # length of the plate in the x direction
Ly = 0.01 # length of the plate in the y direction
nx = 21 # number of points in the x direction
ny = 21 # number of points in the y direction
dx = Lx / (nx - 1) # grid spacing in the x direction
dy = Ly / (ny - 1) # grid spacing in the y direction
alpha = 1e-4 # thermal diffusivity
# Define the locations along a gridline.
x = numpy.linspace(0.0, Lx, num=nx)
y = numpy.linspace(0.0, Ly, num=ny)
# Compute the initial temperature distribution.
Tb = 100.0 # temperature at the left and bottom boundaries
T0 = 20.0 * numpy.ones((ny, nx))
T0[:, 0] = Tb
T0[0, :] = Tb
# -
# We are ready to go!
# +
# Set the time-step size based on CFL limit.
sigma = 0.25
dt = sigma * min(dx, dy)**2 / alpha # time-step size
nt = 300 # number of time steps to compute
# Compute the temperature along the rod.
T = btcs_implicit_2d(T0, nt, dt, dx, alpha, Tb)
# -
# And plot,
from matplotlib import pyplot
# %matplotlib inline
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
# Plot the filled contour of the temperature.
pyplot.figure(figsize=(8.0, 5.0))
pyplot.xlabel('x [m]')
pyplot.ylabel('y [m]')
levels = numpy.linspace(20.0, 100.0, num=51)
contf = pyplot.contourf(x, y, T, levels=levels)
cbar = pyplot.colorbar(contf)
cbar.set_label('Temperature [C]')
pyplot.axis('scaled', adjustable='box');
# Try this out with different values of `sigma`! You'll see that it will always give a stable solution!
#
# Does this result match the explicit scheme from the previous notebook? Do they take the same amount of time to reach $70^\circ C$ in the center of the plate? Now that we can use higher values of `sigma`, we need fewer time steps for the center of the plate to reach $70^\circ C$! Of course, we need to be careful that `dt` is small enough to resolve the physics correctly.
# ---
# ###### The cell below loads the style of the notebook
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, 'r').read())
| lessons/04_spreadout/04_04_Heat_Equation_2D_Implicit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# trying to mock some of the interactive visuals on fivethirtyeight using plotly
#
# * data: https://github.com/fivethirtyeight/covid-19-polls
# * article: https://projects.fivethirtyeight.com/coronavirus-polls/
import pandas as pd
import numpy as np
import plotly.express as px
df = pd.read_csv('data/covid_concern_polls_adjusted.csv')
df
economy = df[df.subject == "concern-economy"]
economy
economy.describe()
# +
import plotly.graph_objects as go
attributes = ['very_adjusted',
'somewhat_adjusted',
'not_very_adjusted',
'not_at_all_adjusted']
fig = go.Figure()
for attribute in attributes:
# # Plot raw data
# fig.add_trace(go.Scatter(x=economy['enddate'],
# y=economy[attribute],
# mode='markers', name=attribute))
# Add rolling average
smoothed = np.convolve(economy[attribute], np.ones(10)/10)
fig.add_trace(go.Scatter(x=economy['enddate'],
y=smoothed,
mode='lines',
name=f'{attribute} rolling avg'))
fig.show()
# -
fig.update_xaxes(spikemode='across')
fig.update_layout(hovermode='x unified')
fig.show()
| fivethirtyseven/covid-polls-538.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# In this notebook, we demonstrate the potential of combining the deep learning capabilities of PyTorch with Gaussian process models using GPyTorch. In this notebook, we will use deep kernel learning to train a deep neural network with a Gaussian process prediction layer for classification, using the MNIST dataset as a simple example.
#
# For an introduction to DKL see these papers:
# https://arxiv.org/abs/1511.02222
# https://arxiv.org/abs/1611.00336
# +
# Import our GPyTorch library
import gpytorch
# Import some classes we will use from torch
from torch.autograd import Variable
from torch.optim import SGD, Adam
from torch.utils.data import DataLoader
# -
# ## Loading data
#
# First, we must load the standard train and test sets for MNIST. To do this, we use the standard MNIST dataset available through torchvision
# +
# Import datasets to access MNISTS and transforms to format data for learning
from torchvision import transforms, datasets
# Download and load the MNIST dataset to train on
# Compose lets us do multiple transformations. Specically make the data a torch.FloatTensor of shape
# (colors x height x width) in the range [0.0, 1.0] as opposed to an RGB image with shape (height x width x colors)
# then normalize using mean (0.1317) and standard deviation (0.3081) already calculated (not here)
# Transformation documentation here: http://pytorch.org/docs/master/torchvision/transforms.html
train_dataset = datasets.MNIST('/tmp', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
test_dataset = datasets.MNIST('/tmp', train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
# But the data into a DataLoader. We shuffle the training data but not the test data because the order
# training data is presented will affect the outcome unlike the test data
train_loader = DataLoader(train_dataset, batch_size=256, shuffle=True, pin_memory=True)
test_loader = DataLoader(test_dataset, batch_size=256, shuffle=False, pin_memory=True)
# -
# ## Define the feature extractor for our deep kernel
#
# In this cell, we define the deep neural network we will use as the basis for our deep kernel. To keep things simple, we use the classic LeNet architecture.
# +
# Import torch's neural network
# Documentation here: http://pytorch.org/docs/master/nn.html
from torch import nn
# Import torch.nn.functional for various activation/pooling functions
# Documentation here: http://pytorch.org/docs/master/nn.html#torch-nn-functional
from torch.nn import functional as F
# We make a classic LeNet Architecture sans a final prediction layer to 10 outputs. This will serve as a feature
# extractor reducing the dimensionality of our data down to 64. We will pretrain these layers by adding on a
# final classifying 64-->10 layer
# https://medium.com/@siddharthdas_32104/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5
class LeNetFeatureExtractor(nn.Module):
def __init__(self):
super(LeNetFeatureExtractor, self).__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=5, padding=2)
self.norm1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, padding=2)
self.norm2 = nn.BatchNorm2d(32)
self.fc3 = nn.Linear(32 * 7 * 7, 64)
self.norm3 = nn.BatchNorm1d(64)
def forward(self, x):
x = F.max_pool2d(F.relu(self.norm1(self.conv1(x))), 2)
x = F.max_pool2d(F.relu(self.norm2(self.conv2(x))), 2)
x = x.view(-1, 32 * 7 * 7)
x = F.relu(self.norm3(self.fc3(x)))
return x
feature_extractor = LeNetFeatureExtractor().cuda()
# -
# ### Pretrain the feature extractor a bit
#
# We next pretrain the deep feature extractor using a simple linear classifier. While this step is in general not necessary, we include it to demonstrate that GPs can be added on to a neural network as a simple fine-tuning step that adds minimal training overhead.
# +
# Make a final classifier layer that operates on the feature extractor's output
classifier = nn.Linear(64, 10).cuda()
# Make list of parameters to optimize (both the parameters of the feature extractor and classifier)
params = list(feature_extractor.parameters()) + list(classifier.parameters())
# We train the network using stochastic gradient descent
optimizer = SGD(params, lr=0.1, momentum=0.9)
def pretrain(epoch):
# Set feature extract to training model
feature_extractor.train()
train_loss = 0.
# Basic training loop for a DNN
for data, target in train_loader:
data, target = Variable(data.cuda()), Variable(target.cuda())
optimizer.zero_grad()
# Forward data through the feature extractor and soft max
features = feature_extractor(data)
output = F.log_softmax(classifier(features), 1)
# Compute the loss
loss = F.nll_loss(output, target)
# Back propagate and update weights
loss.backward()
optimizer.step()
train_loss += loss.data[0] * len(data)
print('Train Epoch: %d\tLoss: %.6f' % (epoch, train_loss / len(train_dataset)))
def pretest():
# Change feature extract to eval mode
feature_extractor.eval()
test_loss = 0
correct = 0
# Loop over minibatches of test data and compute accuracy
for data, target in test_loader:
data, target = data.cuda(), target.cuda()
data, target = Variable(data, volatile=True), Variable(target)
features = feature_extractor(data)
output = F.log_softmax(classifier(features), 1)
test_loss += F.nll_loss(output, target, size_average=False).data[0] # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
n_epochs = 3
for epoch in range(1, n_epochs + 1):
pretrain(epoch)
pretest()
# -
# ## Define the deep kernel GP
#
# We next define a DKLModel that uses the feature extractor. This is a Gaussian process that applies an additive RBF kernel to the features extracted by the deep neural network. The key thing that is different between this model and models we've seen in other example notebooks is in forward: rather than working directly with x, we first extract features using the deep feature extractor.
#
# The loss used for training is the standard variational lower bound used for training Gaussian processes. Since we use an additive RBF kernel, we can make use of the AdditiveGridInducingVariationalGP model, which efficiently performs inference using SKI in this setting.
# +
# now this is our first exposure to the usefulness of gpytorch
# A gpytorch module is superclass of torch.nn.Module
class DKLModel(gpytorch.Module):
def __init__(self, feature_extractor, n_features=64, grid_bounds=(-10., 10.)):
super(DKLModel, self).__init__()
# We add the feature-extracting network to the class
self.feature_extractor = feature_extractor
# The latent function is what transforms the features into the output
self.latent_functions = LatentFunctions(n_features=n_features, grid_bounds=grid_bounds)
# The grid bounds are the range we expect the features to fall into
self.grid_bounds = grid_bounds
# n_features in the dimension of the vector extracted (64)
self.n_features = n_features
def forward(self, x):
# For the forward method of the Module, first feed the xdata through the
# feature extraction network
features = self.feature_extractor(x)
# Scale to fit inside grid bounds
features = gpytorch.utils.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1])
# The result is hte output of the latent functions
res = self.latent_functions(features.unsqueeze(-1))
return res
# The AdditiveGridInducingVariationalGP trains multiple GPs on the features
# These are mixed together by the likelihoo function to generate the final
# classification output
# Grid bounds specify the allowed values of features
# grid_size is the number of subdivisions along each dimension
class LatentFunctions(gpytorch.models.AdditiveGridInducingVariationalGP):
# n_features is the number of features from feature extractor
# mixing params = False means the result of the GPs will simply be summed instead of mixed
def __init__(self, n_features=64, grid_bounds=(-10., 10.), grid_size=128):
super(LatentFunctions, self).__init__(grid_size=grid_size, grid_bounds=[grid_bounds],
n_components=n_features, mixing_params=False, sum_output=False)
# We will use the very common universal approximator RBF Kernel
cov_module = gpytorch.kernels.RBFKernel()
# Initialize the lengthscale of the kernel
cov_module.initialize(log_lengthscale=0)
self.cov_module = cov_module
self.grid_bounds = grid_bounds
def forward(self, x):
# Zero mean
mean = Variable(x.data.new(len(x)).zero_())
# Covariance using RBF kernel as described in __init__
covar = self.cov_module(x)
# Return as Gaussian
return gpytorch.random_variables.GaussianRandomVariable(mean, covar)
# Intialize the model
model = DKLModel(feature_extractor).cuda()
# Choose that likelihood function to use
# Here we use the softmax likelihood (e^z_i)/SUM_over_i(e^z_i)
# https://en.wikipedia.org/wiki/Softmax_function
likelihood = gpytorch.likelihoods.SoftmaxLikelihood(n_features=model.n_features, n_classes=10).cuda()
# -
# ## Train the DKL model
# In this cell we train the DKL model we defined above.
# +
# Simple DataLoader
train_loader = DataLoader(train_dataset, batch_size=256, shuffle=True, pin_memory=True)
# We use an adam optimizer over both the model and likelihood parameters
# https://arxiv.org/abs/1412.6980
optimizer = Adam([
{'params': model.parameters()},
{'params': likelihood.parameters()}, # SoftmaxLikelihood contains parameters
], lr=0.01)
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.VariationalMarginalLogLikelihood(likelihood, model, n_data=len(train_dataset))
def train(epoch):
model.train()
likelihood.train()
train_loss = 0.
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
loss = -mll(output, target)
loss.backward()
optimizer.step()
print('Train Epoch: %d [%03d/%03d], Loss: %.6f' % (epoch, batch_idx + 1, len(train_loader), loss.data[0]))
def test():
model.eval()
likelihood.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
data, target = data.cuda(), target.cuda()
data, target = Variable(data, volatile=True), Variable(target)
output = likelihood(model(data))
pred = output.argmax()
correct += pred.eq(target.view_as(pred)).data.cpu().sum()
test_loss /= len(test_loader.dataset)
print('Test set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
n_epochs = 10
# While we have theoretically fast algorithms for toeplitz matrix-vector multiplication, the hardware of GPUs
# is so well-designed that naive multiplication on them beats the current implementation of our algorith (despite
# theoretically fast computation). Because of this, we set the use_toeplitz flag to false to minimize runtime
with gpytorch.settings.use_toeplitz(False):
for epoch in range(1, n_epochs + 1):
# %time train(epoch)
test()
# -
| examples/dkl_mnist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import wfdb
import glob
import os
import random
import matplotlib.pyplot as plt
import heartpy
import scipy.signal
import numpy as np
import itertools
import sklearn.model_selection
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv1D, MaxPooling1D, Flatten
def create_segmented_signals(signal, annmap, sample_rate, sec):
seg_len = sec*sample_rate
segments = []
curr_ini = curr_fin = 0
for i, sample in enumerate(annmap):
if sample['ann'] == 'N':
if curr_ini == 0:
if i+1 < len(annmap)-1 and annmap[i+1]['ann'] == 'N':
curr_ini = random.randint(sample['time'], annmap[i+1]['time'])
else:
continue
curr_fin = sample['time']
if curr_fin - curr_ini > seg_len and curr_ini + seg_len <= signal.shape[0]:
segments.append(
{
'data': signal[curr_ini:curr_ini+seg_len,:],
'ann': 'N'
}
)
curr_ini = curr_fin
else:
curr_ini = curr_fin = 0
if sample['time'] > 2*seg_len and sample['time'] < signal.shape[0] - 2*seg_len:
rand_start = sample['time'] - random.randint(seg_len//3, 2*seg_len//3)
segments.append(
{
'data': signal[rand_start:rand_start+seg_len,:],
'ann': sample['ann'],
'time': sample['time']
}
)
return segments
filelist = [filename.split('.')[0] for filename in glob.glob('files/*.dat')]
notes = ['A','F','Q','n','R','B','S','j','+','V']
# Creating the segments variable, a list of dictionaries containing the fields 'data', 'ann', and 'time'
# +
train_test_ratio = 0.3
threshold = 100
test_threshold = int(threshold*train_test_ratio)
train_threshold = threshold - test_threshold
# filter definition
sample_rate = 257
n_samp = 101
filt = scipy.signal.firwin(n_samp, cutoff=5, fs=sample_rate, pass_zero='highpass')
padding = (n_samp//2)
# populating the segments list
for note in notes:
patient_sane_train = []
patient_sane_test = []
patient_ill_train = []
patient_ill_test = []
for file in filelist:
segments = []
record = wfdb.rdrecord(file)
annotations = wfdb.rdann(file, 'atr')
annmap = [{'time':samp, 'ann':symb} for samp, symb in zip(annotations.sample, annotations.symbol) if symb == note or symb == 'N']
# signal transformation pipeline
signal = record.p_signal
for i in range(signal.shape[-1]):
signal[:,i] = np.convolve(signal[:,i], filt)[padding:-padding]
segments += create_segmented_signals(signal, annmap, sample_rate, 2)
del signal
sane_segments = [s['data'] for s in segments if s['ann'] == 'N']
ill_segments = [s['data'] for s in segments if s['ann'] != 'N']
del segments
if len(sane_segments) == 0 or len(ill_segments) == 0:
continue
try:
sane_train, sane_test = sklearn.model_selection.train_test_split(sane_segments, test_size=train_test_ratio)
ill_train, ill_test = sklearn.model_selection.train_test_split(ill_segments, test_size=train_test_ratio)
except:
continue
if len(sane_train) == 0 or len(sane_test) == 0 or len(ill_train) == 0 or len(ill_test) == 0:
continue
while len(sane_train) < train_threshold:
sane_train += sane_train
while len(sane_test) < test_threshold:
sane_test += sane_test
while len(ill_train) < train_threshold:
ill_train += ill_train
while len(ill_test) < test_threshold:
ill_test += ill_test
patient_sane_train += sane_train[:train_threshold]
patient_sane_test += sane_test[:test_threshold]
patient_ill_train += ill_train[:train_threshold]
patient_ill_test += ill_test[:test_threshold]
trainX = np.array(patient_sane_train + patient_ill_train)
trainY = [[1,0]]*len(patient_sane_train) + [[0,1]]*len(patient_ill_train)
testX = patient_sane_test + patient_ill_test
testY = [[1,0]]*len(patient_sane_test) + [[0,1]]*len(patient_ill_test)
with open('mals/mal_'+note, 'wb') as file:
np.savez(file,
trainX=np.array(trainX, dtype=np.float32),
trainY=np.array(trainY, dtype=np.uint8),
testX=np.array(testX, dtype=np.float32),
testY=np.array(testY, dtype=np.uint8)
)
# -
for note in notes:
model = Sequential([
Conv1D(32, kernel_size=5, input_shape=(514, 12)),
MaxPooling1D(),
Activation('relu'),
Conv1D(64, kernel_size=5),
MaxPooling1D(),
Activation('relu'),
Conv1D(128, kernel_size=5),
MaxPooling1D(),
Activation('relu'),
Flatten(),
Dense(20),
Activation('relu'),
Dense(2),
Activation('softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
data = np.load(os.path.join('mals', 'mal_'+note))
try:
model.fit(data['trainX'],
data['trainY'],
epochs=10,
batch_size=32,
validation_data=(data['testX'], data['testY']))
model.save(os.path.join('models', 'model_'+note+'.h5'))
except:
print('ERROR: could not train on '+note)
continue
# +
def load_file(filename, path):
return np.load(os.path.join(path, filename))
def
| notebooks/signal_processing/main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Notebook for coronavirus open citations visualisation
#
# This Python notebook contains all the Python code to retrieve all the data used for creating the [Coronavirus Open Citations Dataset](https://opencitations.github.io/coronavirus/). In particular, we used the Crossref API and the unifying REST API for all the OpenCitations Indexes for getting all the information needed for the visualisation.
# ## Preliminaries
#
# The next code imports all the module needed and set up the basic variables used for retrieving the data.
# +
from requests import get
from requests.exceptions import Timeout, ConnectionError
from json import loads, load, dump
from re import sub
from os.path import exists
from os import makedirs, sep
from csv import reader, writer
import logging
from urllib.parse import quote
headers = {
"User-Agent":
"COVID-19 / OpenCitations "
"(http://opencitations.net; mailto:<EMAIL>)"
}
# the base directory where to store the files with the full data
data_dir = "data"
# the CSV document containing the DOIs of the articles relevant for the analysis
doi_file = data_dir + sep + "dois.csv"
# the JSON document containing the citations of the articles relevant for the analysis
cit_file = data_dir + sep + "citations.json"
# the CSV document containing the DOIs of the articles relevant for the analysis that do not have references deposited in Crossref
doi_no_ref_file = data_dir + sep + "dois_no_ref.csv"
# the JSON document containing the metadata of the articles involved in the relevant citations
met_file = data_dir + sep + "metadata.json"
# the CSV document containing the DOIs of the articles for which Crossref does not return any information
nod_file = data_dir + sep + "metadata_not_found.csv"
# the base directory containing all the material for the visualisation
vis_dir = "docs"
# the directory where to store the files with the partial data used in the visualisation
vis_data_dir = vis_dir + sep + data_dir
# the JSON document containing the citations of the articles relevant for the visualisation
vis_cit_file = vis_data_dir + sep + "citations.json"
# the JSON document containing the metadata of the articles used in the visualisation
vis_met_file = vis_data_dir + sep + "metadata.json"
# -
# In order to debug the following code snippets, it is possible to set the logger to a debug level (`logging.DEBUG`). If debug messages are not needed, specify the level at `logging.INFO`.
# +
# change the following variable to logging.INFO for removing debug, or logging.DEBUG to add debug messages
logging_level = logging.INFO
logging.basicConfig(format='%(levelname)s: %(message)s.')
log = logging.getLogger()
log.setLevel(logging_level)
# -
# ## Getting the data
#
# The following code retrieves the list of relevant articles talking about coronaviruses using the Crossref API. It looks for all the articles which contain the word "coronavirus", "covid19", "sarscov", "ncov2019", and "2019ncov", either in their title or abstract. All these data are stored in the file `doi_file`. If the file already exists on the file system, the process will not run. Thus, to launch again the process, it is needed to remove the file `doi_file` from the file system first.
# +
crossref_query = "https://api.crossref.org/works?query.bibliographic=coronavirus+OR+covid19+OR+sarscov+OR+ncov2019+OR+2019ncov&rows=1000&cursor=*"
dois = set()
cursors = set()
next_cursor = "*"
if not exists(doi_file):
logging.debug("The file with DOIs does not exist: start querying Crossref for retrieving data")
while next_cursor:
if next_cursor not in cursors:
log.debug(f"Current cursor for querying Crossref: '{next_cursor}'")
cursors.add(next_cursor)
crossref_data = get(sub("&cursor=.+$", "&cursor=", crossref_query) + quote(next_cursor),
headers=headers)
if crossref_data.status_code == 200:
crossref_data.encoding = "utf-8"
if crossref_json := loads(crossref_data.text).get("message"):
next_cursor = crossref_json.get("next-cursor")
for item in crossref_json.get("items"):
dois.add(item.get("DOI"))
else:
log.debug(f"Crossref response does not contain a 'message' "
f"item:\n{crossref_data.text}")
else:
log.debug(f"The request to Crossref end up with a non-OK status code "
f"('{crossref_data.status_code}'): stopping the download\n" + crossref_data.text)
next_cursor = None
else:
logging.debug(f"Current cursor '{next_cursor}' already used: stopping the download")
next_cursor = None
if not exists(data_dir):
makedirs(data_dir)
with open(doi_file, "w") as f:
csv_writer = writer(f)
for doi in dois:
csv_writer.writerow((doi, ))
else:
log.debug("The file with DOIs exist: load information directly from there")
with open(doi_file) as f:
csv_reader = reader(f)
for doi, in csv_reader:
dois.add(doi)
log.info(f"Total DOIs available: {len(dois)}")
# -
# The following code retrieves all the citations which involve the DOIs of the articles obtained in the previous step, either as a citing entity or as a cited entity. It uses the unifying REST API for all the OpenCitations Indexes for getting the citation information, thus using all the OpenCitations Indexes currently available, i.e. COCI and CROCI. All the citation data are stored in the file indicated by the variable `cit_file` in a JSON format compatible with the one used by Cytoscape JS - which is the tool used for visualising the data. If the file `cit_file` already exists on the file system, the process will not run. Thus, to launch again the process, it is needed to remove the file `cit_file` from the file system first.
# +
citations = list()
opencitations_query = 'https://opencitations.net/index/api/v1/%s/%s?json=array("; ",citing).array("; ",cited).dict(" => ",citing,source,doi).dict(" => ",cited,source,doi)'
cit_id = 0
def extract_citations(res, cit_id):
result = list()
res.encoding = "utf-8"
for citation in loads(res.text):
cit_id += 1
citation_item = {
"id": str(cit_id),
"source": citation["citing"][0]["doi"],
"target": citation["cited"][0]["doi"]
}
result.append(citation_item)
return result, cit_id
if not exists(cit_file):
log.debug("The file with citations does not exist: start querying OpenCitations "
"for retrieving citation data")
for doi in dois:
logging.debug(f"Process DOI '{doi}'")
reference_data = get(opencitations_query % ("references", doi), headers=headers)
if reference_data.status_code == 200:
all_citations, cit_id = extract_citations(reference_data, cit_id)
for citation in all_citations:
if citation not in citations:
citations.append(citation)
else:
log.warning(f"Status code '{reference_data.status_code}' when requesting references for "
"DOI '{doi}'")
citation_data = get(opencitations_query % ("citations", doi), headers=headers)
if citation_data.status_code == 200:
all_citations, cit_id = extract_citations(citation_data, cit_id)
for citation in all_citations:
if citation not in citations:
citations.append(citation)
else:
log.warning(f"Status code '{citation_data.status_code}' when requesting citations for "
"DOI '{doi}'")
with open(cit_file, "w") as f:
dump(citations, f, ensure_ascii=False, indent=0)
else:
log.debug("The file with citations exist: load information directly from there.")
with open(cit_file) as f:
citations.extend(load(f))
citing_dois = set()
for citation in citations:
citing_dois.add(citation["source"])
articles_with_references = set()
articles_without_references = set()
for doi in dois:
if doi in citing_dois:
articles_with_references.add(doi)
else:
articles_without_references.add(doi)
if not exists(doi_no_ref_file):
with open(doi_no_ref_file, "w") as f:
csv_writer = writer(f)
for doi in articles_without_references:
csv_writer.writerow((doi, ))
log.info(f"Total citations available: {len(citations)}. Number of articles with references "
f"deposited in Crossref and available in the OpenCitations Indexes: "
f"{len(articles_with_references)} out of {len(dois)} total articles retrieved in Crossref")
# -
# Finally, the following code uses the Crossref API again to identify all the basic metadata (i.e. authors, year of publication, title, publication venue, and DOI) of the articles involved in all the citations retrieved in the previous step. Only the metadata of the articles for which Crossref has metadata are stored in the file `met_file`. If the file already exists on the file system, the process will not run. Thus, to launch again the process, it is needed to remove the file `met_file` from the file system first.
# +
crossref_query = "https://api.crossref.org/works/"
def normalise(o):
if o is None:
s = ""
else:
s = str(o)
return sub("\s+", " ", s).strip()
def create_title_from_list(title_list):
cur_title = ""
for title in title_list:
strip_title = title.strip()
if strip_title != "":
if cur_title == "":
cur_title = strip_title
else:
cur_title += " - " + strip_title
return normalise(cur_title.title())
def get_basic_metadata(body):
authors = []
for author in body.get("author", []):
authors.append(normalise(author.get("family", "").title()))
year = ""
if "issued" in body and "date-parts" in body["issued"] and len(body["issued"]["date-parts"]) and \
len(body["issued"]["date-parts"][0]):
year = normalise(body["issued"]["date-parts"][0][0])
title = ""
if "title" in body:
title = create_title_from_list(body.get("title", []))
source_title = ""
if "container-title" in body:
source_title = create_title_from_list(body.get("container-title", []))
return ", ".join(authors), year, title, source_title
dois_in_citations = set()
for citation in citations:
dois_in_citations.add(citation["source"])
dois_in_citations.add(citation["target"])
existing_doi = set()
metadata = []
if exists(met_file):
with open(met_file) as f:
for article in load(f):
existing_doi.add(article["id"])
metadata.append(article)
if exists(nod_file):
with open(nod_file) as f:
csv_reader = reader(f)
for doi, in csv_reader:
existing_doi.add(doi)
dois_not_found = []
for doi in dois_in_citations.difference(existing_doi):
log.debug(f"Requesting Crossref metadata for DOI '{doi}'")
try:
article = get(crossref_query + doi, headers=headers, timeout=30)
if article.status_code == 200:
article.encoding = "utf-8"
if article_json := loads(article.text).get("message"):
author, year, title, source_title = get_basic_metadata(article_json)
metadata.append({
"id": doi,
"author": author,
"year": year,
"title": title,
"source_title": source_title
})
else:
log.warning(f"No article metadata in Crossref for DOI '{doi}'")
dois_not_found.append(doi)
else:
dois_not_found.append(doi)
log.warning(f"Status code '{article.status_code}' when requesting Crossref metadata "
f"for DOI '{doi}'")
except Timeout:
dois_not_found.append(doi)
log.warning(f"Timeout when querying Crossref for DOI '{doi}'")
except ConnectionError:
dois_not_found.append(doi)
log.warning(f"Connection issues when querying Crossref for DOI '{doi}'")
with open(met_file, "w") as f:
dump(metadata, f, ensure_ascii=False, indent=0)
if dois_not_found:
with open(nod_file, "w") as f:
csv_writer = writer(f)
for doi in dois_not_found:
csv_writer.writerow((doi, ))
log.info(f"The total number of articles involved in the citations retrieved "
f"are {len(metadata) + len(dois_not_found)}. "
f"The total number of articles with available metadata is {len(metadata)}, "
f"and there are {len(dois_not_found)} articles with no metadata found")
# -
# All the DOIs of the articles for which Crossref does not provide any metadata are stored in the file `nod_file`. In this case, the metadata for that articles must be completed by hand, and then added to the file `met_file`.
# ## Data for the visualisation
#
# For visualisation purposes, we selected only a partial subset of the citations and the articles retrived in the previous steps. In particular, we considered only:
#
# 1. the citations having both the DOIs of the citing entity and the cited entity included in the file `doi_file`;
# 2. the articles that received, overall, at least ten citations per year since their publication date.
#
# The following code stores the citations and the metadata of the articles used in the visualisation in the files `vis_cit_file` and `vis_met_file` respectively. If the files already exist on the file system, the process will not run. Thus, to launch again the process, it is needed to remove the files `vis_cit_file` and `vis_met_file` from the file system first.
# +
min_num_cits_per_year = 20
# Consider only the citations from and to articles of the selected dataset
filtered_citations = []
number_of_citations = {}
for citation in citations:
if citation["target"] in dois:
filtered_citations.append(citation)
number_of_citations[citation["target"]] = number_of_citations.get(citation["target"], 0) + 1
# Publication years of the articles
pub_year = {}
for article in metadata:
pub_year[article["id"]] = int(article["year"]) if article["year"] else 0
current_year = 2020
only_highly_cited = []
dois_in_selected_citations = set()
for citation in filtered_citations:
if citation["source"] in dois and citation["target"] in dois and \
number_of_citations.get(citation["source"], 0) >= ((current_year - pub_year[citation["source"]] + 1) * min_num_cits_per_year) and \
number_of_citations.get(citation["target"], 0) >= ((current_year - pub_year[citation["target"]] + 1) * min_num_cits_per_year):
dois_in_selected_citations.add(citation["source"])
dois_in_selected_citations.add(citation["target"])
only_highly_cited.append(citation)
log.info(f"Number of citations ({len(only_highly_cited)}) and articles "
f"({len(dois_in_selected_citations)}) selected for visualization purposes")
if not exists(vis_data_dir):
makedirs(vis_data_dir)
if not exists(vis_cit_file):
with open(vis_cit_file, "w") as f:
dump(only_highly_cited, f, ensure_ascii=False, indent=0)
if not exists(vis_met_file):
partial_metadata = []
with open(met_file) as f:
for article in load(f):
if article["id"] in dois_in_selected_citations:
article["count"] = number_of_citations.get(article["id"], 0)
partial_metadata.append(article)
with open(vis_met_file, "w") as f:
dump(partial_metadata, f, ensure_ascii=False, indent=0)
# -
# ## License
#
# ### Code
#
# Copyright 2020, <NAME> (<EMAIL>)
#
# Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#
# ### Notebook text
# Copyright 2020, <NAME> (<EMAIL>)
#
# Attribution 4.0 International (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/legalcode
#
# You are free to:
#
# * Share โ copy and redistribute the material in any medium or format.
# * Adapt โ remix, transform, and build upon the material for any purpose, even commercially.
#
# Under the following terms:
#
# * Attribution โ You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
# * No additional restrictions โ You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
#
# This license is acceptable for Free Cultural Works. The licensor cannot revoke these freedoms as long as you follow the license terms.
#
# Notices:
#
# * You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
# * No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
| data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 0.0. Imports
# +
import pandas as pd
import inflection
import math
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime as dt
import datetime
from IPython.display import Image
# + [markdown] heading_collapsed=true
# ## 0.1. Helper Functions
# -
# ## 0.2. Loading Data
#
# +
df_sales_raw = pd.read_csv('data/train.csv', low_memory=False)
df_store_raw = pd.read_csv('data/store.csv', low_memory=False)
# merge
df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on='Store')
# -
# # 1.0. Passo 01 - DESCRIรรO DOS DADOS
df1 = df_raw.copy()
# ## 1.1. Rename Columns
# +
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
'CompetitionDistance', 'CompetitionOpenSinceMonth',
'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
'Promo2SinceYear', 'PromoInterval']
snakecase = lambda x: inflection.underscore(x)
cols_new = list(map(snakecase, cols_old))
# rename
df1.columns = cols_new
# -
# ## 1.2. Data Dimension
print('Number of Rows: {}'.format(df1.shape[0]))
print('Number of Columns: {}'.format(df1.shape[1]))
# ## 1.3. Data Types
df1.dtypes
df1['date'] = pd.to_datetime(df1['date'])
df1.dtypes
# ## 1.4 Check NA
df1.isna().sum()
# ## 1.5. Fillout NA
# +
# competition_distance
df1['competition_distance'] = df1['competition_distance'].apply(lambda x: 200000 if math.isnan(x) else x)
# competition_open_since_month
df1['competition_open_since_month'] = df1.apply( lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis=1)
# competition_open_since_year
df1['competition_open_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis=1)
# promo2_since_week
df1['promo2_since_week'] = df1.apply( lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis=1)
# promo2_since_year
df1['promo2_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis=1)
# promo_interval
month_map = {1: 'Jan',2: 'Fev',3: 'Mar',4: 'Apr',5: 'Mai',6: 'Jun',7: 'Jul',8: 'Aug',9: 'Sep',10: 'Oct',11: 'Nov',12: 'Dec'}
df1['promo_interval'].fillna(0, inplace = True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval','month_map']].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis = 1)
# -
df1.isna().sum()
# ## 1.6. Change Types
#
df1.dtypes
# +
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype(int)
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype(int)
df1['promo2_since_week'] = df1['promo2_since_week'].astype(int)
df1['promo2_since_year'] = df1['promo2_since_year'].astype(int)
# -
# ## 1.7. Descriptive Statistics
num_attributes = df1.select_dtypes(include = ['int64', 'float64'])
cat_attributes = df1.select_dtypes(include = ['int64', 'float64', 'datetime64[ns]'])
cat_attributes.sample(2)
# ## 1.7.1 Numerical Attributes
# +
# Central Tendency - mean, median
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
# Dispersion - std, min, max, range, range kurtosis
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(min)).T
d3 = pd.DataFrame(num_attributes.apply(max)).T
d4 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
# concatenate
m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index()
m.columns = ['attributes','min','max','range','mean','median','std','skew','kurtosis']
# -
m
sns.distplot(df1['sales'])
# ## 1.7.2 Categorical Attributes
cat_attributes.apply(lambda x: x.unique().shape[0])
# +
aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]
plt.subplot(1,3,1)
sns.boxplot(x='state_holiday', y='sales', data=aux1)
plt.subplot(1,3,2)
sns.boxplot(x='store_type', y='sales', data=aux1)
plt.subplot(1,3,3)
sns.boxplot(x='assortment', y='sales', data=aux1)
# -
# # 2.0. Passo 02 - FEATURE ENGINEERING
df2 = df1.copy()
Image('img/MindMapHypothesis.png')
# ## 2.1 Criaรงรฃo das Hipรณteses
# ### 2.1.1 Hipรณteses Loja
# **1.** Lojas com maior quadro de funcionรกrios deveriam vender mais
#
# **2.** Lojas com maior capacidade de estoque deveriam vender mais
#
# **3.** Lojas com maior porte deveriam vender mais
#
# **4.** Lojas com maior sortimento deveriam vender mais
#
# **5.** Lojas com competidores mais prรณximos deveriam vender menos
#
# **6.** Lojas com competidores ร mais tempo deveriam vender mais
# ### 2.1.2 Hipรณteses Produto
# **1.** Lojas que investem mais em marketing deveriam vender mais
#
# **2.** Lojas que expoem mais os produtos nas vitrines deveriam vender mais
#
# **3.** Lojas que tem preรงos menores deveriam vender mais
#
# **4.** Lojas que tem preรงos menores por mais tempo deveriam vender mais
#
# **5.** Lojas com promoรงรตes mais agressivas deveriam vender mais
#
# **6.** Lojas com promoรงรตes ativas por mais tempo deveriam vender mais
#
# **7.** Lojas com mais dias de promoรงรตes deveriam vender mais
#
# **8.** Lojas com mais promoรงรตes consecutivas deveriam vender mais
# ### 2.1.3 Hipรณteses Tempo (Sazonalidade)
# **1.** Lojas abertas durante o friado de natal deveriam vender mais
#
# **2.** Lojas deveriam vender mais ao longo do ano
#
# **3.** Lojas deveriam vender mais no segundo semestre do ano
#
# **4.** Lojas deveriam vender mais depois do dia 10 de cada mรชs
#
# **5.** Lojas deveriam vender menos aos fins de semana
#
# **6.** Lojas deveriam vender menos durante os feriados escolares
# ## 2.2 Lista Final de Hipรณteses
# **1.** Lojas com maior sortimento deveriam vender mais
#
# **2.** Lojas com competidores mais prรณximos deveriam vender menos
#
# **3.** Lojas com competidores ร mais tempo deveriam vender mais
#
# **4.** Lojas com promoรงรตes ativas por mais tempo deveriam vender mais
#
# **5.** Lojas com mais dias de promoรงรตes deveriam vender mais
#
# **6.** Lojas com mais promoรงรตes consecutivas deveriam vender mais
#
# **7.** Lojas abertas durante o friado de natal deveriam vender mais
#
# **8.** Lojas deveriam vender mais ao longo do ano
#
# **9.** Lojas deveriam vender mais no segundo semestre do ano
#
# **10.** Lojas deveriam vender mais depois do dia 10 de cada mรชs
#
# **11.** Lojas deveriam vender menos aos fins de semana
#
# **12.** Lojas deveriam vender menos durante os feriados escolares
#
# ## 2.3 Feature Engineering
# +
# year
df2['year'] = df2['date'].dt.year
# month
df2['month'] = df2['date'].dt.month
# day
df2['day'] = df2['date'].dt.day
# week of year
df2['week_of_year'] = df2['date'].dt.weekofyear
# year week
df2['year_week'] = df2['date'].dt.strftime('%Y-%W')
# competition since
df2['competition_since'] = df2.apply( lambda x: datetime.datetime( year=x['competition_open_since_year'], month=x['competition_open_since_month'], day=1), axis=1)
df2['competition_time_month'] = ((df2['date']-df2['competition_since'])/30).apply(lambda x: x.days).astype(int)
# promo since
df2['promo_since'] = df2['promo2_since_year'].astype(str) +'-'+ df2['promo2_since_week'].astype(str)
df2['promo_since'] = df2['promo_since'].apply(lambda x: datetime.datetime.strptime(x + '-1', '%Y-%W-%w') - datetime.timedelta(days=7))
df2['promo_time_week'] = ((df2['date'] - df2['promo_since'])/7).apply(lambda x: x.days).astype(int)
# assortment
df2['assortment'] = df2['assortment'].apply( lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended' )
# state holiday
df2['state_holiday'] = df2['state_holiday'].apply( lambda x: 'public_holiday' if x == 'a' else 'easter_holiday' if x == 'b' else 'christmas' if x == 'c' else 'regular_day' )
# -
# # 3.0. Passo 03 - FILTRAGEM DE VARIรVEIS
df3 = df2.copy()
# ## 3.1. Filtragem das Linhas
df3 = df3[(df3['open'] != 0) & (df3['sales'] > 0)]
# ## 3.2. Seleรงรฃo de Colunas
cols_drop = ['customers', 'open', 'promo_interval', 'month_map']
df3 = df3.drop(cols_drop, axis = 1)
df3.columns
| m03_v01_store_sales_prediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/maderix/pytorch-notebooks/blob/main/02_pytorch_cifar10_tutorial.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="yiIlbUjWpV_e"
# # CIFAR10 model
# CIFAR10 is slightly more complicated than MNIST. The major difference is the increase in channels - RGB(3) vs Gray(1). So we might need a deeper net.
# The good thing is that the dataset loading and training procedure will remain similar.
# + colab={"base_uri": "https://localhost:8080/"} id="z8eZ2DekhMqP" outputId="5d41f6c5-80a7-4a12-cefb-d877fbfe1689"
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
random_seed = 1
torch.backends.cudnn.enabled = True
torch.manual_seed(random_seed)
# + id="2_izt8Qbp0UZ"
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
# + id="jm90iJgTp-pt"
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
# + colab={"base_uri": "https://localhost:8080/", "height": 116, "referenced_widgets": ["5af6c0f1f2a1420abb8538f0984a440f", "e268fbc7e60e49d9a629895dc25fb1b9", "3721239db5494490a5c9341d934db0b1", "a7706d2ec9574be9804b797e84e41a9e", "<KEY>", "<KEY>", "516132c55d1c460983da112a85b9d9c7", "1058d802e54b4764a88e5cab817b24d0"]} id="qYbIzfNdffI3" outputId="ff6151da-4405-45df-96bc-d36e9dc2b2f7"
cifar10_trainset = datasets.CIFAR10(root='./data',train=True,download=True, transform=transform)
cifar10_testset = datasets.CIFAR10(root='./data',train=False,download=True, transform=transform)
# + id="87osiFj0gBhU"
train_loader = torch.utils.data.DataLoader(cifar10_trainset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(cifar10_testset, batch_size=32)
# + colab={"base_uri": "https://localhost:8080/", "height": 353} id="v2VSCc2KhD6f" outputId="1e7a62c0-5cb0-4e8b-e75b-17b25e4bcc02"
import numpy as np
import matplotlib.pyplot as plt
images,labels = next(iter(train_loader))
print(np.max(images.numpy()),np.min(images.numpy()),labels)
plt.imshow(torchvision.utils.make_grid(images).numpy().transpose(1,2,0))
# + [markdown] id="E_brYL0mk7SX"
# # Creating the network
# We'll create a simple CNN which takes a 32x32x3 size images and returns a probability distribution of 10 classes. This is very similar to the MNIST task.
# + colab={"base_uri": "https://localhost:8080/"} id="ywMpGKhkiiqd" outputId="f6232874-f868-4e68-dcb5-17cc630958e9"
class CIFARNet(nn.Module):
def __init__(self):
super(CIFARNet,self).__init__()
self.conv1 = nn.Conv2d(3, 32, 3, 1)
self.bn1 = nn.BatchNorm2d(32)
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.bn2 = nn.BatchNorm2d(64)
self.conv3 = nn.Conv2d(64, 128, 3, 1)
self.bn3 = nn.BatchNorm2d(128)
self.conv4 = nn.Conv2d(128, 256, 3, 1)
self.bn4 = nn.BatchNorm2d(256)
self.dropout1 = nn.Dropout2d(0.25)
#self.dropout2 = nn.Dropout(0.5)
#fully connected layers for categorizing the image features
self.fc1 = nn.Linear(256*12*12, 128)
self.fc2 = nn.Linear(128, 10)
#forward function will determine how data passes through our model
def forward(self, x):
x = self.conv1(x)
x = self.bn1(x)
x = F.relu(x)
#second block
x = self.conv2(x)
x = self.bn2(x)
x = F.relu(x)
#Third block
x = self.conv3(x)
x = self.bn3(x)
x = F.relu(x)
#Fourth block
x = self.conv4(x)
x = self.bn4(x)
x = F.relu(x)
#create a maxpooling layer to reduce the compute requirements
x = F.max_pool2d(x , 2)
# pass data through the dropouts
#x = self.dropout1(x)
#flatten the data so that it can pass through fc layers
x = torch.flatten(x,1)
#fully connected/dense block.
x = self.fc1(x)
x = F.relu(x)
x = self.dropout1(x)
x = self.fc2(x)
#we'll apply softmax to the final output to get nice probabilities for each digit
output = F.log_softmax(x,dim=1)
return output
cnet = CIFARNet()
print(cnet)
# + [markdown] id="NMCV6b4VmAY1"
# # Random test
# Let's do a random test to test the forward function of our net
#
# + colab={"base_uri": "https://localhost:8080/"} id="OPwp9tLbl3Lx" outputId="c6b588a5-65db-45c5-cf8c-7cd13d8e52df"
random_data = torch.randn((1, 3, 32, 32))
result = cnet(random_data)
print(result)
# + [markdown] id="Ikb66Zxqn61_"
# # Training pipeline
# Let's define our CIFAR training pipeline, which is quite similar to the MNIST one. Therein lies the power of neural nets, a model can fit multiple datasets. Accuracy may vary according to the complexity of the dataset but tuning may help correct this.
#
# 1. Get model output with current input
# 2. Calculate loss
# 3. Calculate gradients
# 4. Update weights through optimizer
#
# the 'to(device)' function will allow us to place the tensor on GPU if available.
# + id="Ptt6PlQCmXzN"
import tqdm
net = CIFARNet()
learning_rate = 1e-4
epochs = 40
#we use Adam optimizer as it converges faster compared to SGD
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# + colab={"base_uri": "https://localhost:8080/"} id="O0Dqo3pfo7sd" outputId="5c4df880-aac3-41b0-bfcb-63721db609f5"
net = net.to(device)
#zero out the gradients
for epoch in range(epochs):
net.train()
t = tqdm.tqdm(train_loader,leave=True, position=0)
for images,labels in t:
images = images.to(device)
labels = labels.long().to(device)
optimizer.zero_grad()
output = net.forward(images)
train_loss = criterion(output,labels)
train_loss.backward()
optimizer.step()
t.set_description(f'epoch:{epoch+1} : train loss:{train_loss.item():.4f}')
t.refresh()
#evaluate after each epoch
net.eval()
images,labels = next(iter(test_loader))
images = images.to(device)
labels = labels.to(device)
with torch.no_grad():
output = net(images)
valid_loss = criterion(output,labels)
print(f'epoch:{epoch+1} : valid loss:{valid_loss.item():.4f}\n')
# + [markdown] id="9l8e6750sSyE"
# # Test accuracy
# + colab={"base_uri": "https://localhost:8080/"} id="8-SneZLppPnI" outputId="a0e98f9e-6b10-4e56-f656-cca49187aab2"
test_loss = 0
class_correct,class_total = [0]*10, [0] *10
#we'll load the weights from first network
net_eval=CIFARNet()
net_eval.load_state_dict(net.state_dict())
net_eval.eval()
for images,targets in test_loader:
with torch.no_grad():
output = net_eval(images)
loss = criterion(output, targets)
test_loss += loss.item()*images.size(0)
_, pred = torch.max(output, 1)
correct = np.squeeze(pred.eq(targets.data.view_as(pred)))
for i in range(len(targets)):
label = targets.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss/len(test_loader.sampler)
print(f'Test Loss: {test_loss:.6f}')
print(f'Test accuracy: {np.sum(class_correct)/np.sum(class_total)*100} %')
# + id="CLPneuP3snk6"
| 02_pytorch_cifar10_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from rdkit import Chem
from rdkit.Chem import Draw
from rdkit.Chem.Draw import IPythonConsole
import chembl_structure_pipeline
import rdkit
print(rdkit.__version__)
print(chembl_structure_pipeline.__version__)
import gzip
with gzip.open('/home/glandrum/T5/Data/Pubchem/Substance_000000001_000500000.sdf.gz') as inf:
records = []
record = []
for line in inf:
record.append(line)
if line == b'$$$$\n':
records.append(b''.join(record).decode())
record = []
if len(records)>=1000:
break
print(records[0])
chembl_structure_pipeline.check_molblock(records[0])
for i,record in enumerate(records):
res = chembl_structure_pipeline.check_molblock(record)
if res:
print(i,res)
# +
from ipywidgets import interact,fixed
from rdkit.Chem.Draw import rdMolDraw2D
from IPython.display import SVG
@interact(idx=range(0,len(records)),records=fixed(records))
def show_mol(idx,records):
record = records[idx]
print(chembl_structure_pipeline.check_molblock(record))
m = Chem.MolFromMolBlock(record,sanitize=False)
m.UpdatePropertyCache()
Chem.GetSymmSSSR(m)
print(Chem.MolToSmiles(m))
d2d = rdMolDraw2D.MolDraw2DSVG(450,400)
d2d.drawOptions().prepareMolsBeforeDrawing=False
d2d.DrawMolecule(m)
d2d.FinishDrawing()
return SVG(d2d.GetDrawingText())
# -
from ipywidgets import interact,fixed
from rdkit.Chem.Draw import rdMolDraw2D
from IPython.display import SVG
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.info')
@interact(idx=range(0,len(records)),records=fixed(records))
def show_standardized_mol(idx,records):
record = records[idx]
checks = chembl_structure_pipeline.check_molblock(record)
if checks and checks[0][0]>6:
print(f"Failed validation: {checks}")
return None
standard_record = chembl_structure_pipeline.standardize_molblock(record)
standard_parent,exclude = chembl_structure_pipeline.get_parent_molblock(standard_record)
m1 = Chem.MolFromMolBlock(record,sanitize=False)
m1.UpdatePropertyCache(strict=False)
Chem.GetSymmSSSR(m1)
if exclude:
print(f'Excluded: {Chem.MolToSmiles(m1)}')
return m1
m2 = Chem.MolFromMolBlock(standard_record,sanitize=False)
m2.UpdatePropertyCache(strict=False)
Chem.GetSymmSSSR(m2)
m3 = Chem.MolFromMolBlock(standard_parent,sanitize=False)
m3.UpdatePropertyCache(strict=False)
Chem.GetSymmSSSR(m3)
print(Chem.MolToSmiles(m1))
print(Chem.MolToSmiles(m2))
print(Chem.MolToSmiles(m3))
d2d = rdMolDraw2D.MolDraw2DSVG(700,300,350,300)
#d2d.drawOptions().prepareMolsBeforeDrawing=False
d2d.DrawMolecules((m2,m3))
d2d.FinishDrawing()
return SVG(d2d.GetDrawingText())
| ChEMBL Structure Pipeline example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# + [markdown] toc-hr-collapsed=false
# > This course introduces students to basic microeconmetric methods. The objective is to learn how to make and evaluate causal claims. By the end of the course, students should to able to apply each of the methods discussed and critically evaluate research based on them.
#
# I just want to discuss some basic features of the course. We discuss the core references, the tooling for the course, student projects, and illustrate the basics of the potential outcomes model and causal graphs.
#
# #### Causal questions
#
# What is the causal effect of ...
#
# * neighborhood of residence on educational performance, deviance, and youth development
# * school vouchers on learning?
# * of charter schools on learning?
# * worker training on earnings?
# * ...
#
# What causal question brought you here?
# -
# ### Core reference Test
# The whole course is built on the following textbook:
#
# * **<NAME>., & <NAME>. (2007)**. [Counterfactuals and causal inference: Methods and principles for social research](https://www.amazon.com/Counterfactuals-Causal-Inference-Principles-Analytical/dp/1107694167/ref=dp_ob_title_bk). Cambridge, England: *Cambridge University Press*.
#
# This is a rather non-standard textbook in economics. However, I very much enjoy working with it as it provides a coherent conceptual framework for a host of different methods for causal analysis. It then clearly delineates the special cases that allow the application of particular methods. We will follow their lead and structure our thinking around the **counterfactual approach to causal analysis** and its two key ingredients **potential outcome model** and **directed graphs**.
#
# It also is one of the few textbooks that includes extensive simulation studies to convey the economic assumptions required to apply certain estimation strategies.
#
# It is not very technical at all, so will also need to draw on more conventional resources to fill this gap.
#
# * <NAME>. (2001). [*Econometric analysis of cross section and panel data*](https://mitpress.mit.edu/books/econometric-analysis-cross-section-and-panel-data). Cambridge, MA: The MIT Press.
#
# * <NAME>., & <NAME>. (2009). [*Mostly harmless econometrics: An empiricists companion*](https://www.amazon.com/Mostly-Harmless-Econometrics-Empiricists-Companion/dp/0691120358/ref=sr_1_1?keywords=mostly+harmless+econometrics&qid=1553511192&s=gateway&sr=8-1). Princeton, NJ: Princeton University Press.
#
# * <NAME>., and <NAME>. (2019). [*Impact evaluation: Treatment effects and causal analysis*](https://www.cambridge.org/core/books/impact-evaluation/F07A859F06FF131D78DA7FC81939A6DC). Cambridge, England: Cambridge University Press.
#
#
# Focusing on the conceptual framework as much as we do in the class has its cost. We might not get to discuss all the approaches you might be particularly interested in. However, my goal is that all of you can draw on this framework later on to think about your econometric problem in a structured way. This then enables you to choose the right approach for the analysis and study it in more detail on your own.
#
# <img src="material/fig-dunning-kruger.png" width="500">
#
# Combining this counterfactual approach to causal analysis with sufficient domain-expertise will allow you to leave the valley of despair.
# ### Lectures
# We follow the general structure of Winship & Morgan (2007).
#
# * Counterfactuals, potential outcomes and causal graphs
#
# * Estimating causal effects by conditioning on observables
# * regression, matching, ...
#
# * Estimating causal effects by other means
# * instrumental variables, mechanism-based estimation, regression discontinuity design, ...
# ### Tooling
# We will use open-source software and some of the tools building on it extensively throughout the course.
#
# * [Course website](https://ose-data-science.readthedocs.io/en/latest/)
# * [GitHub](https://github.com)
# * [Zulip](https://zulip.com)
# * [Python](https://python.org)
# * [SciPy](https://www.scipy.org/index.html) and [statsmodels](https://www.statsmodels.org/stable/index.html)
# * [Jupyterlan](https://jupyterlab.readthedocs.io/en/stable/)
# * [GitHub Actions](https://github.com/features/actions)
#
# We will briefly discuss each of these components over the next week. By then end of the term, you hopefully have a good sense on how we combine all of them to produce sound empirical research. Transparency and reproducibility are a the absolute minimum of sound data science and all then can be very achieved using the kind of tools of our class.
#
# Compared to other classes on the topic, we will do quite some programming in class. I think I have a good reason to do so. From my own experience in learning and teaching the material, there is nothing better to understand the potential and limitations of the approaches we discuss than to implemented them in a simulation setup where we have full control of the underlying data generating process.
#
# To cite <NAME>: What I cannot create, I cannot understand.
#
# However, it is often problematic that students have a very, very heterogeneous background regarding their prior programming experience and some feel intimidated by the need to not only learn the material we discuss in class but also catch up on the programming. To mitigate this valid concern, we started several accompanying initiatives that will get you up to speed such as additional workshop, help desks, etc. Make sure to join our Q&A channels in Zulip and attend the our [Computing Primer](https://github.com/OpenSourceEconomics/ose-course-primer).
# ### Problem sets
# Thanks to [<NAME>](https://github.com/milakis), [<NAME>](https://github.com/timmens/), and [<NAME>](https://github.com/segsell) we now have four problem sets available on our website.
#
# * Potential outcome model
# * Matching
# * Regression-discontinuity design
# * Generalized Roy model
#
# Just as the whole course, they do not only require you to further digest the material in the course but also require you to do some programming. They are available on our course website and we will discuss them in due course.
# ### Projects
# + [markdown] toc-hr-collapsed=false
# Applying methods from data science and understanding their potential and limitations is only possible when bringing them to bear on one's one research project. So we will work on student projects during the course. More details are available [here](https://ose-data-science.readthedocs.io/en/latest/projects/index.html).
# -
# ### Data sources
# Throughout the course, we will use several data sets that commonly serve as teaching examples. We collected them from several textbooks and are available in a central place in our online repository [here](https://github.com/OpenSourceEconomics/ose-course-data-science/tree/master/datasets).
#
# ### Potential outcome model
# The potential outcome model serves us several purposes:
#
# * help stipulate assumptions
# * evaluate alternative data analysis techniques
# * think carefully about process of causal exposure
#
# #### Basic setup
#
# There are three simple variables:
#
# * $D$, treatment
# * $Y$, observed outcome
# * $Y_1$, outcome in the treatment state
# * $Y_0$, outcome in the no-treatment state
#
# #### Examples
#
# * economics of education
# * health economics
# * industrial organization
# * $...$
# + [markdown] toc-hr-collapsed=false
# #### Exploration
#
# We will use our first dataset to illustrate the basic problems of causal analysis. We will use the original data from the article below:
#
# * <NAME>. (1986). [Evaluating the econometric evaluations of training programs with experimental data](https://www.jstor.org/stable/1806062). *The American Economic Review*, 76(4), 604-620.
#
# He summarizes the basic setup as follows:
#
# > The National Supported Work Demonstration (NSW) was temporary employment program desinged to help disadvantaged workers lacking basic job skills move into the labor market by giving them work experience and counseling in sheltered environment. Unlike other federally sponsored employment programs, the NSW program assigned qualified applications randomly. Those assigned to the treatment group received all the benefits of the NSW program, while those assigned to the control group were left to fend for themselves.
#
# What is the *effect* of the program?
#
# We will have a quick look at a subset of the data to illustrate the **fundamental problem of evaluation**, i.e. we only observe one of the potential outcomes depending on the treatment status but never both.
# +
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# We collected a host of data from two other influential textbooks.
df = pd.read_csv("../../datasets/processed/dehejia_waba/nsw_lalonde.csv")
df.index.set_names("Individual", inplace=True)
# -
df.describe()
# It is important to check for missing values first.
for column in df.columns:
assert not df[column].isna().any()
# Note that this lecture, just as all other lectures, is available on [](https://mybinder.org/v2/gh/HumanCapitalAnalysis/microeconometrics/master?filepath=lectures%2F01_introduction%2Flecture.ipynb) so you can easily continue working on it and take your exploration to another direction.
#
# There are numerous discrete variables in this dataset describing the individual's background. How does their distribution look like?
columns_background = [
"treat",
"age",
"education",
"black",
"hispanic",
"married",
"nodegree",
]
for column in columns_background:
sns.countplot(x=df[column], color="#1f77b4")
plt.show()
# How about the continous earnings variable?
columns_outcome = ["re75", "re78"]
for column in columns_outcome:
earnings = df[column]
# We drop all earnings at zero.
earnings = earnings.loc[earnings > 0]
ax = sns.histplot(earnings)
ax.set_xlim([0, None])
plt.show()
# We work under the assumption that the data is generated by an experiment. Let's make sure by checking the distribution of the background variables by treatment status.
info = ["count", "mean", "std"]
for column in columns_background:
print("\n\n", column.capitalize())
print(df.groupby("treat")[column].describe()[info])
# What is the data that corresponds to $(Y, Y_1, Y_0, D)$?
# +
# We first create True / False
is_treated = df["treat"] == 1
df["Y"] = df["re78"]
df["Y_0"] = df.loc[~is_treated, "re78"]
df["Y_1"] = df.loc[is_treated, "re78"]
df["D"] = np.nan
df.loc[~is_treated, "D"] = 0
df.loc[is_treated, "D"] = 1
df[["Y", "Y_1", "Y_0", "D"]].sample(10)
# -
# Let us get a basic impression on how the distribution of earnings looks like by treatment status.
df.groupby("D")["re78"].describe()
ax = sns.histplot(df.loc[~is_treated, "Y"], label="untreated")
ax = sns.histplot(df.loc[is_treated, "Y"], label="treated")
ax.set_xlim(0, None)
ax.legend()
# We are now ready to reproduce one of the key findings from this article. What is the difference in earnings in 1978 between those that did participate in the program and those that did not?
stat = df.loc[is_treated, "Y"].mean() - df.loc[~is_treated, "Y"].mean()
f"{stat:.2f}"
# Earnings are \$886.30 higher among those that participate in the treatment compared to those that do not. Can we say even more?
# **References**
#
# Here are some further references for the potential outcome model.
#
#
# * <NAME>., and <NAME>. (2007a). [*Econometric evaluation of social programs, part I: Causal effects, structural models and econometric policy evaluation*](https://www.sciencedirect.com/science/article/pii/S1573441207060709). In <NAME>, and <NAME> (Eds.), *Handbook of Econometrics* (Vol. 6B, pp. 4779โ4874). Amsterdam, Netherlands: Elsevier Science.
#
# * <NAME>., and <NAME>. (2015). [*Causal inference for statistics, social, and biomedical sciences: An introduction*](https://www.cambridge.org/core/books/causal-inference-for-statistics-social-and-biomedical-sciences/71126BE90C58F1A431FE9B2DD07938AB). Cambridge, England: Cambridge University Press.
#
# * <NAME>. (2017). [*Observation and experiment: An introduction to causal inference*](https://www.hup.harvard.edu/catalog.php?isbn=9780674975576). Cambridge, MA: Harvard University Press.
#
#
#
# ### Causal graphs
# One unique feature of our core textbook is the heavy use of causal graphs to investigate and assess the validity of different estimation strategies. There are three general strategies to estimate causal effects and their applicability depends on the exact structure of the causal graph.
#
# * condition on variables, i.e. matching and regression-based estimation
#
# * exogenous variation, i.e. instrumental variables estimation
#
# * establish an exhaustive and isolated mechanism, i.e. structural estimation
#
# Here are some examples of what to expect.
#
# <img src="material/fig-causal-graph-1.png" width=500>
#
#
# <img src="material/fig-causal-graph-2.png" width=500>
#
#
# <img src="material/fig-causal-graph-3.png" width=500>
#
#
# The key message for now:
#
# * There is often more than one way to estimate a causal effect with differing demands about knowledge and observability
#
# Pearl (2009) is the seminal reference on the use of graphs to represent general causal representations.
#
# **References**
#
# * **<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>. (2021)**. [The influence of hidden researcher decisions in applied microeconomics](https://onlinelibrary.wiley.com/doi/full/10.1111/ecin.12992), *Economic Impuiry*, 59, 944โ960.
#
#
# * **<NAME>. (2014)**. [Causality](https://www.cambridge.org/core/books/causality/B0046844FAE10CBF274D4ACBDAEB5F5B). Cambridge, England: *Cambridge University Press*.
#
#
# * **<NAME>., and <NAME>. (2018)**. [The book of why: The new science of cause and effect](https://www.amazon.de/Book-Why-Science-Cause-Effect/dp/0141982411). New York, NY: *Basic Books*.
#
#
# * **<NAME>., <NAME>., and <NAME>. (2016)**. [Causal inference in statistics: A primer](https://www.wiley.com/en-us/Causal+Inference+in+Statistics%3A+A+Primer-p-9781119186847). Chichester, UK: *Wiley*.
#
#
# * **<NAME>. (2021)**. [The Art of Statistics: Learning from Data](https://www.amazon.de/-/en/David-Spiegelhalter/dp/0241398630?asin=1541675703&revisionId=&format=4&depth=1). New York: *Hachette Book Group*.
#
# ### Resources
# * **<NAME>. (1986)**. [Evaluating the econometric evaluations of training programs with experimental data](https://www.jstor.org/stable/1806062). *The American Economic Review*, 76(4), 604-620.
| lectures/introduction/notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1 Basic
# +
import multiprocessing
def worker():
"""worker function"""
print('Worker')
if __name__ == '__main__':
jobs = []
for i in range(5):
p = multiprocessing.Process(target=worker)
jobs.append(p)
p.start()
# -
# # 2 Determining the Current Process
# +
import multiprocessing
import time
def worker():
name = multiprocessing.current_process().name
print(name, 'Starting')
time.sleep(2)
print(name, 'Exiting')
def my_service():
name = multiprocessing.current_process().name
print(name, 'Starting')
time.sleep(3)
print(name, 'Exiting')
if __name__ == '__main__':
service = multiprocessing.Process(
name='my_service',
target=my_service,
)
worker_1 = multiprocessing.Process(
name='worker 1',
target=worker,
)
worker_2 = multiprocessing.Process( # default name
target=worker,
)
worker_1.start()
worker_2.start()
service.start()
# -
# # 3 Daemon Process
# +
import multiprocessing
import time
import sys
def daemon():
p = multiprocessing.current_process()
print('Starting:',p.name, p.pid)
sys.stdout.flush()
time.sleep(2)
print('Exiting:', p.name, p.pid)
sys.stdout.flush()
return
def non_daemon():
p = multiprocessing.current_process()
print('Starting:',p.name,p.pid)
sys.stdout.flush()
print('Exiting:',p.name,p.pid)
sys.stdout.flush()
if __name__ == '__main__':
d = multiprocessing.Process(
name='daemon',target=daemon)
d.daemon=True
n = multiprocessing.Process(
name='non-daemon', target=non_daemon)
n.daemon=False
d.start()
time.sleep(1)
n.start()
# -
# # 4 Terminating Process
# +
import multiprocessing
import time
def slow_worker():
print('Starting worker')
time.sleep(0.1)
print('Fininsh worker')
return
if __name__ == '__main__':
p = multiprocessing.Process(target=slow_worker)
print('Before', p, p.is_alive())
p.start()
print('During',p, p.is_alive())
p.terminate()
print('Terminate:',p, p.is_alive())
p.join()
print('Join', p, p.is_alive())
# -
# # 5 SubClassing Process
# +
import multiprocessing
class Worker(multiprocessing.Process):
def run(self):
print('In {}'.format(self.name))
return
if __name__ == '__main__':
jobs = []
for i in range(5):
p = Worker()
jobs.append(p)
p.start()
for j in jobs:
j.join()
# -
| Python-Standard-Library/Cocurrency/MultiProcessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Module 4
#
# ## Video 19: Querying the SDK
# **Python for the Energy Industry**
#
# To access data from a given endpoint, we do a 'search', which returns all data matching a given criteria.
#
# Note the Vessels endpoint documentation can be [found here](https://vortechsa.github.io/python-sdk/endpoints/vessels/).
# +
import vortexasdk as v
query = v.Vessels().search(vessel_classes='vlcc', term='ocean')
print(query)
# -
# The data returned by the query is not interperatable in this format. We can think of it as a list of items, which each correspond to an item (in this case a vessel) matching the search query. We can see how many matching items were found:
len(query)
# So there are 8 matching vessels. The data makes a little more sense if we look at just one of these:
query[0]
# There's a lot of information here! Looking at the keys in this dictionary structure, we can see basic information about the vessel, along with the cargo that it is currently carrying.
#
# We can use a list comprehension to get the values of a particular key for each vessel in the returned data:
[vessel['name'] for vessel in query]
# The query has a function `.to_df` which will convert the data into a pandas DataFrame structure. By default, it will only include some of the more important bits of information. We can also specifically list which colulmns we would like:
query.to_df(columns=['name', 'imo', 'mmsi', 'related_names'])
# ### Exercise
#
#
# Using the Geographies endpoint, do a search for the term 'portsmouth'. How many matching queries are there? Where are they located?
#
# Note the Geographies endpoint documentation can be [found here](https://vortechsa.github.io/python-sdk/endpoints/geographies/).
| docs/examples/academy/19. Querying the SDK.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # Earth Engine Notebook Blog Post Test
# > Testing out using FastPages with Earth Engine code.
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [jupyter, earth engine]
# - image: images/chart-preview.png
# > Important: Copyright 2020 Google LLC. SPDX-License-Identifier: Apache-2.0
# +
# This comment on the first line is suppressed, for some reason...
# A comment before the print statement.
print('1 + 2 = ', 1 + 2)
# Another comment.
# + [markdown] id="LAZiVi13zTE7"
# # Earth Engine Python API Colab Setup
#
# This notebook demonstrates how to setup the Earth Engine Python API in Colab and provides several examples of how to print and visualize Earth Engine processed data.
#
# ## Import API and get credentials
#
# The Earth Engine API is installed by default in Google Colaboratory so requires only importing and authenticating. These steps must be completed for each new Colab session, if you restart your Colab kernel, or if your Colab virtual machine is recycled due to inactivity.
#
# ### Import the API
#
# Run the following cell to import the API into your session.
# + id="65RChERMzQHZ"
import ee
# + [markdown] id="s-dN42MTzg-w"
# ### Authenticate and initialize
#
# Run the `ee.Authenticate` function to authenticate your access to Earth Engine servers and `ee.Initialize` to initialize it. Upon running the following cell you'll be asked to grant Earth Engine access to your Google account. Follow the instructions printed to the cell.
# + id="NMp9Ei9b0XXL"
# Initialize the library.
ee.Initialize()
# + [markdown] id="8I_Fr0L5AFmu"
# ### Test the API
#
# Test the API by printing the elevation of Mount Everest.
# + id="v7pD6pDOAhOW"
# Print the elevation of Mount Everest.
dem = ee.Image('USGS/SRTMGL1_003')
xy = ee.Geometry.Point([86.9250, 27.9881])
elev = dem.sample(xy, 30).first().get('elevation').getInfo()
print('Mount Everest elevation (m):', elev)
# + [markdown] id="fDLAqiNWeD6t"
# ## Map visualization
#
# `ee.Image` objects can be displayed to notebook output cells. The following two
# examples demonstrate displaying a static image and an interactive map.
#
# + [markdown] id="45BfeVygwmKm"
# ### Static image
#
# The `IPython.display` module contains the `Image` function, which can display
# the results of a URL representing an image generated from a call to the Earth
# Engine `getThumbUrl` function. The following cell will display a thumbnail
# of the global elevation model.
# + id="Fp4rdpy0eGjx"
# Import the Image function from the IPython.display module.
from IPython.display import Image
# Display a thumbnail of global elevation.
Image(url = dem.updateMask(dem.gt(0))
.getThumbURL({'min': 0, 'max': 4000, 'dimensions': 512,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))
| _notebooks/2020-11-01-ee-test-1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Submission statistics and merge notebook
# +
import pandas as pd
import numpy as np
import sys
import os
from src import data_prepare
label_map=data_prepare.load_label_map()
post,thread=data_prepare.load_test_data()
# -
module_path = os.path.abspath(os.path.join('..'))
subm_path = os.path.join(module_path,'submissions')
# ### For the sake of diversity I chose best submissions with absolutely different models
best = pd.read_csv(os.path.join(subm_path,'sol21.csv'))
best = pd.Series(best["thread_label_id"])
cnn=pd.read_csv(os.path.join(subm_path,'sol28.csv'))
cnn=pd.Series(cnn["thread_label_id"])
rforest=pd.read_csv(os.path.join(subm_path,'sol32.csv'))
rforest=pd.Series(rforest["thread_label_id"])
svc=pd.read_csv(os.path.join(subm_path,'sol25.csv'))
svc=pd.Series(svc["thread_label_id"])
df=pd.DataFrame({'best':best ,'cnn': cnn,'rforest':rforest,'svc': svc})
df
from sklearn.metrics import confusion_matrix
from matplotlib import pyplot as plt
import seaborn as sns
conf_mat = confusion_matrix(svc, rforest,labels=label_map["type_id"].values)
fig, ax = plt.subplots(figsize=(13,13))
sns.heatmap(conf_mat, annot=True, fmt='d',xticklabels=label_map.index, yticklabels=label_map.index)
plt.ylabel('svc')
plt.xlabel('rforest')
plt.show()
# ### I am going to just get modes of all rows and use them as a prediction, simply picking the most frequent value
total=df.mode(axis=1)
total.rename(columns={0:'subm'},inplace=True)
sub=total['subm']
ans = pd.concat([thread["thread_num"],sub], axis=1, keys=['thread_num', 'thread_label_id'])
ans=ans.set_index("thread_num")
ans=ans.astype(int)
ans.head()
path=os.path.join(module_path,"submissions")
ans.to_csv(os.path.join(path,"submissions_merge.csv"))
# **By the way, this type of 'technique' gave me my second best submission**
| notebooks/Some submission statistics.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++14
// language: C++14
// name: xeus-cling-cpp14
// ---
// # Getting start with Eigen3
//
// This notebooks basicaly reproduces (in a notebook) the [Eigen's Getting Start page](https://eigen.tuxfamily.org/dox/GettingStarted.html) with few modifications. Here, I will not concerned about installing, building and running, since I'm using notebooks to exemplify usage aspects only.
// ## Example 1: a simple first program
// First, we link the proper header files:
#include <iostream>
#include <eigen3/Eigen/Dense>
// Since that Eigen header file defines several types, as starting point I'm only interest in `MatrixXd`, which is a matrix of arbitrary size `X` size), where every entry is a double (hence the `d` in the suffix) . So, let's get what we need:
using Eigen::MatrixXd;
// A first point to note is that the include procedure employed here slightly differs from the official documentation. This is due to the way we build Eigen within a Conda Environment. No need to worry!
// Below, we declare a matrix and assign values to it:
// +
MatrixXd matrix_1(2,2); // a matrix with 2 rows and 2 cols
matrix_1(0, 0) = 3;
matrix_1(1, 0) = 2.5;
matrix_1(0, 1) = -1;
matrix_1(1, 1) = matrix_1(1, 0) + matrix_1(0, 1);
std::cout << matrix_1;
// -
// Initializing looks pretty simple!
// ## Example 2: Matrices and vectors
// Let's consider another example. Herein, for simplicity's sake (and lesser amount of code), I will use the desired namespace:
using namespace Eigen;
// Below, I define a $(3, 3)$-matrix with random entries:
// +
// Initializing
MatrixXd matrix_2 = MatrixXd::Random(3, 3);
// A linear mapping
matrix_2 = (matrix_2 + MatrixXd::Constant(3, 3, 1.2)) * 50;
std::cout << "matrix_2 =" << std::endl << matrix_2;
// -
// The `MatrixXd::Random()` provides a matrix with random entries within $[-1, 1]$ range. In the following, a linear map is performed. The constructor `MatrixXd::Constant(rows, cols, value)` creates a $(\texttt{rows},\texttt{cols})$-matrix with all entries equal to `value`. Thus, in the parenthesis, the resulting matrix has entries with values in the range $[0.2, 2.2]$. Hence, after multiplied by $50$, the range for `matrix_2` is $[10, 110]$.
// Now, I will create a simple 3D-vector and perform the multiplication `matrix_2 * vector_1`, like $Ax = b$ systems:
// +
VectorXd vector_1(3);
vector_1 << 1, 2, 3; // note that this is a way to initialize
std::cout << "matrix_2 * vector_1 =" << std::endl << matrix_2 * vector_1;
// -
// Above, a vector with 3 coefficients were created from a unitialized `VectorXd`. This represents a __column__ vector of arbitrary size, which was stated as 3 in the present case. All Eigen vectors (from `VectorXd`) are column vectors.
// Instead of declaring dynamic matrix and vectors, which will have their size determined in runtime, we could also use equivalent constructors with fixed-size. This way has the advantages to decreases te compiling time and can perform a more rigorous checking. See below the equivalent fixed-size version:
// +
// Initializing
Matrix3d matrix_3 = Matrix3d::Random();
// A linear mapping
matrix_3 = (matrix_3 + Matrix3d::Constant(1.2)) * 50;
Vector3d vector_2(1, 2, 3);
std::cout << "matrix_3 =" << std::endl << matrix_3 << std::endl;
std::cout << "matrix_3 * vector_2 =" << std::endl << matrix_3 * vector_2;
// -
// Note that, for fixed-size matrices and vectors, just replace `X` for `3`. The suffix `d` remains the same, as the entries are double type. The constructors now don't need any information about the size, since it is already incorporated by replaced `X`. Another important point is the vector initialization. This time, I used `vector_2(1, 2, 3)`, but the same used for dynamic vector could be used as well.
// Behind the scenes, the fixed-size matrices and vectors are just C++ `typedef`. The `Matrix3d` used above is just:
//
// ```C++
// typedef Matrix<double, 3, 3> Matrix3d;
// ```
//
// Analogously, we have for vectors:
//
// ```C++
// typedef Matrix<double, 3, 1> Vector3d;
// ```
//
// All Eigen matrices and vectors are essentially objects of the `Matrix` templeta class. There is a more generic class named `Array`, which is quite similar to `Matrix`. I will provide some examples of such class in another notebook.
// ## Final words
// Here, we reviewed the very basic use of Eigen, specially about matrices and vectors. For futher details, you can check the official documentation as well as a (little) longer tutorial provided in other notebook.
| eigen/notebooks/01-getting-start.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # ใใคใบ้ๅฑค็ทๅฝขๅๅธฐ
# http://num.pyro.ai/en/latest/tutorials/bayesian_hierarchical_linear_regression.html
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# -
train = pd.read_csv('https://gist.githubusercontent.com/ucals/'
'2cf9d101992cb1b78c2cdd6e3bac6a4b/raw/'
'43034c39052dcf97d4b894d2ec1bc3f90f3623d9/'
'osic_pulmonary_fibrosis.csv')
train.head()
# ใใผใฟใปใใใฏใไธ้ฃใฎๆฃ่
ใฎใใผในใฉใคใณ่ธ้จCTในใญใฃใณใจ้ข้ฃใใ่จๅบๆ
ๅ ฑ
# +
def chart(patient_id, ax):
data = train[train['Patient'] == patient_id]
x = data['Weeks']
y = data['FVC']
ax.set_title(patient_id)
ax = sns.regplot(x, y, ax=ax, ci=None, line_kws={'color':'red'})
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00010637202177584971671', axes[2])
# -
# #### model
# +
import numpyro
from numpyro.infer import MCMC, NUTS, Predictive
import numpyro.distributions as dist
from jax import random
assert numpyro.__version__.startswith('0.6.0')
# -
# GPUใ ใจMCMC้
ใใฎใงcpuไฝฟใ
numpyro.set_platform("cpu")
def model(PatientID, Weeks, FVC_obs=None):
ฮผ_ฮฑ = numpyro.sample("ฮผ_ฮฑ", dist.Normal(0., 100.))
ฯ_ฮฑ = numpyro.sample("ฯ_ฮฑ", dist.HalfNormal(100.))
ฮผ_ฮฒ = numpyro.sample("ฮผ_ฮฒ", dist.Normal(0., 100.))
ฯ_ฮฒ = numpyro.sample("ฯ_ฮฒ", dist.HalfNormal(100.))
unique_patient_IDs = np.unique(PatientID)
n_patients = len(unique_patient_IDs)
with numpyro.plate("plate_i", n_patients):
ฮฑ = numpyro.sample("ฮฑ", dist.Normal(ฮผ_ฮฑ, ฯ_ฮฑ))
ฮฒ = numpyro.sample("ฮฒ", dist.Normal(ฮผ_ฮฒ, ฯ_ฮฒ))
ฯ = numpyro.sample("ฯ", dist.HalfNormal(100.))
FVC_est = ฮฑ[PatientID] + ฮฒ[PatientID] * Weeks
with numpyro.plate("data", len(PatientID)):
numpyro.sample("obs", dist.Normal(FVC_est, ฯ), obs=FVC_obs)
# #### MCMC
# +
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train['PatientID'] = le.fit_transform(train['Patient'].values)
FVC_obs = train['FVC'].values
Weeks = train['Weeks'].values
PatientID = train['PatientID'].values
# +
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=2000, num_warmup=2000)
rng_key = random.PRNGKey(0)
mcmc.run(rng_key, PatientID, Weeks, FVC_obs=FVC_obs)
posterior_samples = mcmc.get_samples()
# -
# #### param check
# +
import arviz as az
data = az.from_numpyro(mcmc)
az.plot_trace(data, compact=True);
# -
# ๆฃ่
ใใจใซใใผใฝใใฉใคใบใใใใขใซใใกใจใใผใฟใๅญฆ็ฟใงใใ
# #### ใขใใซใซใใฃใฆไบๆธฌใใใFVCไฝไธๆฒ็ทใๅฏ่ฆๅ
#
# ใใผใฟใปใใใฎๆฌ ่ฝใใฆใใใในใฆใฎๅคใไบๆธฌใใ
pred_template = []
for i in range(train['Patient'].nunique()):
df = pd.DataFrame(columns=['PatientID', 'Weeks'])
df['Weeks'] = np.arange(-12, 134)
df['PatientID'] = i
pred_template.append(df)
pred_template = pd.concat(pred_template, ignore_index=True)
PatientID = pred_template['PatientID'].values
Weeks = pred_template['Weeks'].values
predictive = Predictive(model, posterior_samples,
return_sites=['ฯ', 'obs'])
samples_predictive = predictive(random.PRNGKey(0),
PatientID, Weeks, None)
df = pd.DataFrame(columns=['Patient', 'Weeks', 'FVC_pred', 'sigma'])
df['Patient'] = le.inverse_transform(pred_template['PatientID'])
df['Weeks'] = pred_template['Weeks']
df['FVC_pred'] = samples_predictive['obs'].T.mean(axis=1)
df['sigma'] = samples_predictive['obs'].T.std(axis=1)
df['FVC_inf'] = df['FVC_pred'] - df['sigma']
df['FVC_sup'] = df['FVC_pred'] + df['sigma']
df = pd.merge(df, train[['Patient', 'Weeks', 'FVC']],
how='left', on=['Patient', 'Weeks'])
df = df.rename(columns={'FVC': 'FVC_true'})
df.head()
# 3ไบบใฎๆฃ่
ใฎไบๆธฌใ ใ็ขบ่ช
# +
def chart(patient_id, ax):
data = df[df['Patient'] == patient_id]
x = data['Weeks']
ax.set_title(patient_id)
ax.plot(x, data['FVC_true'], 'o')
ax.plot(x, data['FVC_pred'])
ax = sns.regplot(x, data['FVC_true'], ax=ax, ci=None,
line_kws={'color':'red'})
ax.fill_between(x, data["FVC_inf"], data["FVC_sup"],
alpha=0.5, color='#ffcd3c')
ax.set_ylabel('FVC')
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00011637202177653955184', axes[2])
# -
# #### ใขใใซใฎไบๆธฌใฎ็ฒพๅบฆ่ฉไพก
# RMSEใจๅฏพๆฐๅฐคๅบฆใ่จ็ฎ
# +
y = df.dropna()
rmse = ((y['FVC_pred'] - y['FVC_true']) ** 2).mean() ** (1/2)
print(f'RMSE: {rmse:.1f} ml')
sigma_c = y['sigma'].values
sigma_c[sigma_c < 70] = 70
delta = (y['FVC_pred'] - y['FVC_true']).abs()
delta[delta > 1000] = 1000
lll = - np.sqrt(2) * delta / sigma_c - np.log(np.sqrt(2) * sigma_c)
print(f'Laplace Log Likelihood: {lll.mean():.4f}')
# -
| numpyro_RTX3090/work/008_hierarchical_bayes_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import json
import requests
import threading
import time
import os
from seleniumwire import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType
def interceptor(request):
if request.url.startswith('https://us-central1-popxi-f3a4d.cloudfunctions.net/stats?count='):
params = request.params
params['count'] = '5000'
request.params = params
print('Popping...')
def initBrowser(proxy = None):
options = webdriver.ChromeOptions()
options.add_argument('ignore-certificate-errors')
#options.add_argument('headless')
options.add_argument('window-size=1920x1080')
#options.add_argument("disable-gpu")
options.add_argument("--mute-audio")
#options.add_argument("--disable-gpu")
#seleniumwire_options = {
# 'enable_har': True # Capture HAR data, retrieve with driver.har
#}
#driver = webdriver.Chrome('chromedriver', options=options, seleniumwire_options=seleniumwire_options)
if proxy is not None:
chrome_options.add_argument('--proxy-server=%s' % PROXY)
driver = webdriver.Chrome('chromedriver', options=options)
driver.request_interceptor = interceptor
#driver.scopes = [
# '.*www.google.com/*',
# '.*us-central1-popxi-f3a4d.cloudfunctions.net/stats*.*'
#]
driver.get('https://popxi.click/')
driver.execute_script('var event=new KeyboardEvent("keydown",{key:"g",ctrlKey:!0});setInterval(function(){for(i=0;i<1;i++)document.dispatchEvent(event)},200);')
return driver
def getRequests(driver):
get = False
while not get:
for req in driver.requests:
if req.url.startswith('https://us-central1-popxi-f3a4d.cloudfunctions.net/stats?count='):
try:
print('Response: ' + str(req.response.status_code))
except:
print('Response: None')
print('Deleting cookies...')
driver.delete_all_cookies()
del driver.requests
driver.execute_script("window.open('https://popxi.click/');")
driver.switch_to.window(driver.window_handles[0])
driver.close()
driver.switch_to.window(driver.window_handles[0])
print('Deleted.')
time.sleep(2)
driver.execute_script('var event=new KeyboardEvent("keydown",{key:"g",ctrlKey:!0});setInterval(function(){for(i=0;i<1;i++)document.dispatchEvent(event)},200);')
time.sleep(1)
def Run(proxy = None):
print("Starting browser...")
driver = initBrowser(proxy)
print("Browser Started.")
print("Fetching requests...")
getRequests(driver)
Run()
| POPXI.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + run_control={"frozen": false, "read_only": false}
debugging = True
IPTS = 19558
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Description
# + [markdown] run_control={"frozen": false, "read_only": false}
# Steps are:
# - load a stack of images
# - define your sample
#
# => the average counts of the region vs the stack (index, TOF or lambda) will be displayed
# compared to the theory signal of a given set of layers.
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Notebook Initialization
# + run_control={"frozen": false, "read_only": false}
from __code.__all import custom_style
custom_style.style()
# + run_control={"frozen": false, "read_only": false}
# %gui qt
# + run_control={"frozen": false, "marked": true, "read_only": false}
from __code.ui_builder import UiBuilder
o_builder = UiBuilder(ui_name = 'ui_resonance_imaging_experiment_vs_theory.ui')
o_builder = UiBuilder(ui_name = 'ui_resonance_imaging_layers_input.ui')
from __code import file_handler, utilities
from __code.display_counts_of_region_vs_stack_vs_theory import ImageWindow
from __code.display_imaging_resonance_sample_definition import SampleWindow
from NeuNorm.normalization import Normalization
from __code.ipywe import fileselector
import pprint
if debugging:
ipts = IPTS
else:
ipts = utilities.get_ipts()
working_dir = utilities.get_working_dir(ipts=ipts, debugging=debugging)
print("Working dir: {}".format(working_dir))
# + [markdown] cell_style="split" run_control={"frozen": false, "read_only": false}
# # Select Stack Folder
# + format="tab" run_control={"frozen": false, "read_only": false}
input_folder_ui = ipywe.fileselector.FileSelectorPanel(instruction='Select Input Folder',
type='directory',
start_dir=working_dir,
multiple=False)
input_folder_ui.show()
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Load Stack
# + run_control={"frozen": false, "read_only": false}
working_folder = input_folder_ui.selected
o_norm = Normalization()
o_norm.load(folder=working_folder, notebook=True)
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Define Sample
# + run_control={"frozen": false, "read_only": false}
_sample = SampleWindow(parent=None, debugging=debugging)
_sample.show()
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Select Region and display Counts
# + run_control={"frozen": false, "read_only": false}
o_reso = _sample.o_reso
_image = ImageWindow(
stack=(o_norm.data['sample']['data']), working_folder=working_folder, o_reso=o_reso)
_image.show()
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Export
# + [markdown] run_control={"frozen": false, "read_only": false}
# UNDER CONSTRUCTION!
# + run_control={"frozen": false, "read_only": false}
| notebooks/resonance_imaging_experiment_vs_theory.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Homework 03
#
# ### Instructions:
# If you are able to see this successfully, it means you have downloaded this notebook file (<code>**Homework_03.ipynb**</code>) on your local machine, started up your Jupyter Notebook server, and opened the notebook file from the running server.<br />
#
# In the following, you will be prompted with a sequence of coding exercises.
# Each (code) cell contains a single exercise; <u>unless otherwise specified</u>, each exercise is **self-contained**, i.e., there is no dependency with any other exercise contained in the above cells. In other words, you don't need to solve exercises in any particular order and/or execute the notebook's cells in the same order as they appear.<br />
#
# Each exercise is labeled as **Exercise N**, where **N = {1, 2, ...}**. The cell right below the exercise label contains a (natural language) description of what you are expected to do to solve the exercise, along with one or two examples of expected outputs. Possibly, you may be prompted with some suggestions or requirements, which help you answer correctly.<br />
# Below each code question, there is one (or more) code solution cells which you are asked to fill. More specifically, if **Exercise N** contains just one question then you will see a code cell labeled as **Solution N**. Instead, if the exercise contains more than one question, you will see something like **Solution N.x**, where **x = {1, 2, ...}** (i.e., as many code solution cells as the number of questions).<br />
# A typical code solution cell contains the skeleton of a function; the function has a signature (name and input arguments) which has already been defined (**PLEASE DO NOT CHANGE IT!**). You are asked to implement the function according to the decription provided above. To do so, you must replace the <code>**# YOUR CODE HERE**</code> with your own code. <br />
# Once you have done it and you are confident that your solution works as you expect, just run the corresponding test code cell (labeled as **Test N** or **Test N.x** depending on the number of questions within the same **Exercise N**) right below the specific solution code cell.
# <br />
#
# Please, remember that to execute a cell you need to do the following:
# - Be sure the cell is selected (you can verify this by looking at the cell border: if this is green the cell is selected);
# - Go to the *Main Menu* bar, click on *Cell --> Run Cells*. Alternatively, you can use a keyboard shortcut (e.g., <code>**Ctrl + Enter**</code>).
#
# Finally, I **strongly** recommend you not to delete nor modify this notebook in any of its parts. As this is not a read-only file, anyone could make changes to it. Should you do it by mistake, just download this notebook file again from our [Moodle web page](https://elearning.unipd.it/math/course/view.php?id=321).
# + jupyter={"outputs_hidden": true}
"""
DO NOT DELETE THIS CELL
REMEMBER TO RUN THIS BEFORE ANY OF YOUR CODE CELLS BELOW
"""
import numpy as np
EPSILON = .000001 # tiny tolerance for managing subtle differences resulting from floating point operations
# -
# ## Exercise 1
#
# Consider the skeleton of the function below called <code>**create_array**</code>, which takes as input an integer <code>**n**</code> and a tuple <code>**dims**</code>, and returns a numpy array (<code>**ndarray**</code>) containing values in the range <code>**[0,n)**</code> whose shape is equal to <code>**dims**</code>. Note that it must hold that $n = d_1 \cdot d_2 \cdot ... \cdot d_k$ (where $k = |\textit{dims}|$). Otherwise, the function should return a properly formatted string containing an error message, which explains the reason why the numpy array cannot be created.<br />
# For example:<br />
# <code>**create_array(9, (3, 3)) = array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])**</code><br />
# <code>**create_array(16, (2, 3, 4)) = "Unable to create an array of 16 elements having (2 x 3 x 4) shape"**</code><br />
#
# (__SUGGESTIONS__: *Use *<code>**np.reshape()**</code>* to reshape the array according to the specified dimensions. You can safely assume *<code>**dims**</code>* to be a tuple, i.e., no need to check its type.*)
# ## Solution 1
# + jupyter={"outputs_hidden": true}
def create_array(n, dims):
"""
Create, if possible, an ndarray whose elements are in the range [0,n) and whose shape is equal to dims.
Otherwise, return a formatted string containing an error message,
indicating the reason why the array cannot be successfully created.
"""
# YOUR CODE HERE
# -
# ## Test 1
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 1 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(np.array_equal(create_array(1, (1,)), np.array([0])))
assert(np.array_equal(create_array(1, (1, 1)), np.array([[0]])))
assert(np.array_equal(create_array(7, (1, 7)), np.array([[0, 1, 2, 3, 4, 5, 6]])))
assert(create_array(12, (1, 19))== "Unable to create an array of 12 elements having (1 x 19) shape")
assert(np.array_equal(create_array(4, (4,)), np.array([0, 1, 2, 3])))
assert(create_array(42, (41,))== "Unable to create an array of 42 elements having (41) shape")
assert(create_array(10, (1,)) == "Unable to create an array of 10 elements having (1) shape")
assert(create_array(5, ()) == "Unable to create an array of 5 elements having () shape")
assert(create_array(5, (0,)) == "Unable to create an array of 5 elements having (0) shape")
assert(create_array(5, (0,5)) == "Unable to create an array of 5 elements having (0 x 5) shape")
assert(create_array(5, (0,10)) == "Unable to create an array of 5 elements having (0 x 10) shape")
assert(create_array(0, ()) == "Unable to create an array of 0 elements having () shape")
assert(create_array(0, (0,)) == "Unable to create an array of 0 elements having (0) shape")
assert(create_array(0, (0,73)) == "Unable to create an array of 0 elements having (0 x 73) shape")
assert(np.array_equal(create_array(9, (3, 3)), np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])))
assert(create_array(25, (6, 4)) == "Unable to create an array of 25 elements having (6 x 4) shape")
assert(np.array_equal(create_array(8, (2, 2, 2)), np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])))
assert(create_array(16, (2, 3, 4)) == "Unable to create an array of 16 elements having (2 x 3 x 4) shape")
assert(create_array(0, (1, 8)) == "Unable to create an array of 0 elements having (1 x 8) shape")
assert(create_array(-11, (3, 5)) == "Unable to create an array of -11 elements having (3 x 5) shape")
# -
# ## Exercise 2
#
# Consider the skeleton of the function below called <code>**det_core_matrix**</code>, which takes as input an $n$-by-$n$ bidimensional numpy array (i.e., a quadratic matrix) <code>**X**</code>, where $n \geq 2,~n \mod 2 = 0$ (i.e., $n$ is greater than or equal to $2$, and it is **even**). The function should return the determinant of the internal $2$-by-$2$ "**core matrix**", or <code>**None**</code> if the above conditions are not met.<br /> The core matrix can be identified as follows. If <code>**X**</code> is already $2$-by-$2$ then it is **also** the core matrix. If, instead, <code>**X**</code> is $4$-by-$4$ then the core matrix will be the $2$-by-$2$ sub-matrix identified by the elements of the second and third rows (i.e., row at index = 1 and row at index = 2) and second and third columns (i.e., column at index = 1, column at index = 2). More generally, if the dimension of the input matrix is $n$, the starting index of the row (column) of the core matrix can be found by computing $(n-2)/2$. For $n = 2$, this turns out to be 0, which is exactly what we expected, for $n = 4$, this is going to be 1 which, again, is what we found above, for $n = 6$, the starting index is 2, and so on and so forth (see the picture below).<br />
# <center></center>
# Finally, the determinant must be computed analytically (i.e., without using any helper function). Just as a reminder, given the $2$-by-$2$ matrix as below:
# $$ X =
# \begin{bmatrix}
# x_{00} & x_{01} \\
# x_{10} & x_{11}
# \end{bmatrix}
# $$
# The determinant $det(X) = (x_{00}\cdot x_{11}) - (x_{10} \cdot x_{01})$. For example:<br/>
# <code>**det_core_matrix(np.array([[1, 2], [3, 4]]) = (1 x 4) - (3 x 2) = -2 **</code><br />
# <code>**det_core_matrix(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]) = (6 x 11) - (7 x 10) = -4 **</code><br />
# <code>**det_core_matrix(np.array([[1, 2, 3], [4, 5, 6]]) = None **</code>
# ## Solution 2
# + jupyter={"outputs_hidden": true}
def det_core_matrix(X):
"""
Return the determinant of the "core matrix" if the input matrix X is quadratic
and its number of rows (columns) is even, otherwise return None.
"""
# YOUR CODE HERE
# -
# ## Test 2
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 2 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(det_core_matrix(np.array([[1, 2, 3], [4, 5, 6]])) == None)
assert(np.abs(det_core_matrix(np.array([[1, 2], [3, 4]])) - np.linalg.det(np.array([[1, 2], [3, 4]]))) < EPSILON)
assert(np.abs(det_core_matrix(np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]])) -\
np.linalg.det(np.array([[6, 7], [10, 11]]))) < EPSILON)
X = np.array([[ 0.44023786, 0.83127201, 0.63944119, 0.69333524, 0.3650256 ,
0.27843133],
[ 0.50953169, 0.52998992, 0.62946758, 0.3675919 , 0.8418018 ,
0.2680972 ],
[ 0.72636984, 0.60750261, 0.99378191, 0.70026505, 0.96493826,
0.18946902],
[ 0.66880465, 0.55011437, 0.40990365, 0.98575236, 0.48193052,
0.81143302],
[ 0.48492125, 0.37851368, 0.97280827, 0.33088006, 0.50370432,
0.13189437],
[ 0.9273423 , 0.05007188, 0.60490133, 0.40947619, 0.68521798,
0.71775745]])
X_core = np.array([[ 0.99378191, 0.70026505],
[ 0.40990365, 0.98575236]])
assert(np.abs(det_core_matrix(X) - np.linalg.det(X_core)) < EPSILON)
assert(np.abs(det_core_matrix(np.array([[1, 1], [1, 1]])) - np.linalg.det(np.array([[1, 1], [1, 1]]))) < EPSILON)
assert(np.abs(det_core_matrix(np.array([[0, 0], [1, 1]])) - np.linalg.det(np.array([[0, 0], [1, 1]]))) < EPSILON)
# -
# ## Exercise 3
#
# Consider the skeleton of the function below called <code>**standardize_matrix**</code>, which takes as input a bidimensional numpy array (i.e., a matrix) <code>**X**</code> and returns a new matrix which is obtained by standardize the matrix passed as input. Standardizing a matrix $X$ means replacing each element $x_{i,j}$ as follows:<br />
# $$x_{i,j} = \frac{x_{i,j} - \mu(X)}{\sigma(X)}$$
# where $\mu(X)$ is the **mean** and $\sigma(X)$ the **standard deviation**, as computed from **all** the elements of the matrix $X$.<br />
# If the standard deviation of the input matrix is equal to 0, then just return a matrix of the same shape yet containing all 0s.
# For example: <br />
# <code>**standardize_matrix(np.array([[1,2],[3,4]])) = np.array([[-1.34164079, -0.4472136],[0.4472136, 1.34164079]])**</code> [$\mu(X) = 2.5$; $\sigma(X) \approx 1.2$] <br />
# <code>**standardize_matrix(np.array([[5,5,5], [5,5,5], [5,5,5], [5,5,5]])) = np.array([[0,0,0], [0,0,0], [0,0,0], [0,0,0]])**</code> [$\mu(X) = 5$; $\sigma(X) = 0$]<br />
#
# (__SUGGESTIONS__: *Try to solve it using the classical 'iterative' approach. Then, think about how you could make your solution more efficient using vectorization. Remember that you can either use numpy's built-in statistical functions, e.g.,*<code>**np.std(ndarray)**</code>*, or the method associated with each numpy array, e.g., *<code>**ndarray.std()**</code>*.*)
# ## Solution 3
# + jupyter={"outputs_hidden": true}
def standardize_matrix(X):
"""
Standardize the input matrix X using its mean and standard deviation.
If the standard deviation is 0 just return a matrix with the same shape,
yet containing all 0s.
"""
# YOUR CODE HERE
# -
# ## Test 3
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 3 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(np.all(np.abs(standardize_matrix(np.array([[1,2],[3,4]])) - \
np.array([[-1.34164079, -0.4472136], [0.4472136, 1.34164079]])) < EPSILON))
assert(np.all((standardize_matrix(np.array([[5,5,5], [5,5,5], [5,5,5], [5,5,5]])) == \
np.array([[0,0,0], [0,0,0], [0,0,0], [0,0,0]]))))
assert(np.all(np.abs(standardize_matrix(np.array([[ 1.81962596, -0.70078527], \
[ 0.15814445, -1.10416631], \
[-0.7776333 , -1.5993643 ]])) - \
np.array([[ 1.96937124, -0.30024474], \
[ 0.47321659, -0.66348708], \
[-0.36944596, -1.10941005]])) < EPSILON))
assert(np.all((standardize_matrix(np.array([0,0,0])) == \
np.array([0,0,0]))))
# -
# ## Exercise 4
#
# Consider the skeleton of the function below called <code>**log_max_min**</code>, which takes as input a bidimensional numpy array (i.e., a matrix) <code>**X**</code> and returns the (natural) logarithm of the array obtained from subtracting the **column-wise** maximum and minimum arrays of the input matrix. Be careful, however, as the logarithm is only defined for any input which is greater than 0; as such, you have to be sure the resulting array does not contain **any** element which is **less than or equal to 0**. Otherwise, the function should return an array of 1s whose number of elements are equal to the number of columns of the input matrix. <br />
# For example:<br />
# <code>**log_max_min(np.array([[1,3], [5,2], [4,6]])) = np.array([1.38629436, 1.38629436])**</code><br />
# <code>**log_max_min(np.array([[1,3,-2], [16,-5,2], [4,6,11], [-42,10,7]])) = np.array([4.06044301, 2.7080502, 2.56494936])**</code><br />
# <code>**log_max_min(np.array([[0, 0], [0, 0], [0, 0]])) = np.array([1, 1])**</code><br />
# Remember that the column-wise maximum (minimum) of a matrix is the maximum (minimum) element obtained by fixing each column and considering all the elements of that column (i.e., scanning each row of that column...)
#
# (__SUGGESTION__: *You can safely assume the input matrix to be bidimensional. To test if a vector does not contain any 0, you have to keep in mind that boolean tests on numpy arrays return an array of booleans. To check if such a boolean array contains all *<code>**True**</code>* you should use the numpy's built-in function *<code>**np.all()**</code>.)
# ## Solution 4
# + jupyter={"outputs_hidden": true}
def log_max_min(X):
"""
Return the logarithm of the difference between the column-wise maximum and the minimum arrays,
provided that the logarithm is defined on every element of the resulting array
"""
# YOUR CODE HERE
# -
# ## Test 4
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 4 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(np.all(np.abs(log_max_min(np.array([[1,3], [5,2], [4,6]])) -\
np.array([1.38629436, 1.38629436])) < EPSILON))
assert(np.all(np.abs(log_max_min(np.array([[1,3,-2], [16,-5,2], [4,6,11], [-42,10,7]])) -\
np.array([4.06044301, 2.7080502, 2.56494936])) < EPSILON))
assert(np.all(np.abs(log_max_min(np.array([[0, 0], [0, 0]])) -\
np.array([[1, 1]])) < EPSILON))
assert(np.all(np.abs(log_max_min(np.array([[-1, -2], [-3, -4]])) -\
np.array([[0.69314718, 0.69314718]])) < EPSILON))
# -
# ## Exercise 5
#
# Consider the skeleton of the function below called <code>**update_matrix**</code>, which takes as input a bidimensional numpy array (i.e., a matrix) <code>**X**</code> and a number <code>**t**</code>. The function returns the same matrix where a row is all set to 0 if **any** of the elements of that row is less than <code>**t**</code>.<br />
# For example, if your input matrix is as follows:<br />
# <code>**np.array([[0.28955492, -0.45053239], [-0.25879235, 1.67546741], [0.21191212, 1.42856687]])**</code> and <code>**t**</code> = -0.4, the resulting matrix will be:<br />
# <code>**np.array([[0., 0.], [-0.25879235, 1.67546741], [0.21191212, 1.42856687]])**</code>, as the first row of the original matrix is the only one containing an element whose value is less than <code>**t**</code> = -0.4.<br />
# If, for instance, we set <code>**t**</code> = -0.5 then no elements would have been less than such a value, and therefore the matrix will be left as it is.
#
# (__SUGGESTION__: *Use boolean indexing to update the values of the original input matrix...*)
# ## Solution 5
# + jupyter={"outputs_hidden": true}
def update_matrix(X, t):
"""
Set each row of the input matrix to all 0s as long as at least one element of that row is less than t.
"""
# YOUR CODE HERE
# -
# ## Test 5
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 5 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(np.all(update_matrix(np.array([[0.28955492, -0.45053239], [-0.25879235, 1.67546741], [0.21191212, 1.42856687]]),-0.4)[0,:] == np.array([0, 0])))
assert(np.all(update_matrix(np.array([[0.28955492, -0.45053239], [-0.25879235, 1.67546741], [0.21191212, 1.42856687]]),-0.5) == np.array([[0.28955492, -0.45053239], [-0.25879235, 1.67546741], [0.21191212, 1.42856687]])))
assert(np.all(update_matrix(np.array([[0.28955492, -0.45053239], [-0.25879235, 1.67546741], [0.21191212, 1.42856687]]),-0.2)[:2,:] == np.array([[0, 0],[0, 0]])))
# -
# ## Exercise 6
#
# Consider the skeleton of the function below called <code>**find_closest_to_k_in_array**</code>, which takes as input a unidimensional numpy array (i.e., an $m$-by-$1$ vector) <code>**x**</code> and a number <code>**k**</code>, and returns the number which corresponds to the closest value to <code>**k**</code> in <code>**x**</code>. The function should just return <code>**None**</code> if the input array does not contain **at least** 1 element.<br />
# For example:<br />
# <code>**find_closest_to_k_in_array(np.array([1.2, 0.8, 0.48, 18.4, -1.9, 5.7]), 3.6) = 5.7**</code>
# <code>**find_closest_to_k_in_array(np.array([]), 1.3) = None**</code>
#
# (__SUGGESTION__: *Try to solve it using the traditional 'iterative' approach. Then, think about how this could be solved using a vectorized approach. Here, two numpy's functions may help you: *<code>**np.abs()**</code>* which compute the absolute value of the input argument, and *<code>**np.argmin()**</code>* which, given an array as input, returns the corresponding array of the index of the minimum element.*)
# ## Solution 6
# + jupyter={"outputs_hidden": true}
def find_closest_to_k_in_array(x, k):
"""
Return the element of x which is closest to k.
"""
# YOUR CODE HERE
# -
# ## Test 6
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 6 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(find_closest_to_k_in_array(np.array([1.2, 0.8, 0.48, 18.4, -1.9, 5.7]), 3.6) == 5.7)
assert(find_closest_to_k_in_array(np.array([15.6]), 9.3) == 15.6)
assert(find_closest_to_k_in_array(np.array([]), 9.3) == None)
assert(find_closest_to_k_in_array(np.array([-9.1, -0.4, -2.7]), -6.4) == -9.1)
# -
# ## Exercise 7
#
# Consider the skeleton of the function below called <code>**find_closest_to_k_in_matrix**</code>, which takes as input a bidimensional numpy array (i.e., an $m$-by-$n$ matrix) <code>**X**</code> and a number <code>**k**</code> and returns an $n$-by-$1$ vector, which for each row of <code>**X**</code> contains the closest value to <code>**k**</code>.<br />
# Note that this is very similar to **Exercise 5** above. The only thing that changes here is that we are asking to compute the closest value **for each row** of the input matrix.
# For example:<br />
# <code>**find_closest_to_k(np.array([[1.2, 0.8, 0.48], [18.4, -1.9, 5.7], [0.49, 0.51, 0.]]), 0.5) = np.array([0.48, -1.9, 0.49])**</code>
#
# (__SUGGESTION__: *Although you could theoretically make use of the function *<code>**find_closest_to_k_in_array**</code>* to solve this problem, I again strongly encourage you to try to use the same tools suggested above to come up with a vectorized solution. In addition to that, it might be useful to know that numpy arrays *<code>**X**</code>* can be indexed by a list of tuples as follows: *<code>**X[(i_1, i_2,..., i_k),(j_1, j_2, ..., j_k)]**</code>*, whose meaning is to return an array containing the following elements: *<code>**[X[i_1, j_1], X[i_2, j_2], ..., X[i_k, j_k]]**</code>.)
# ## Solution 7
# + jupyter={"outputs_hidden": true}
def find_closest_to_k_in_matrix(X, k):
"""
For each row of X, return the element which is closest to k.
"""
# YOUR CODE HERE
# -
# ## Test 7
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 7 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(np.all(np.abs(find_closest_to_k_in_matrix(np.array([[1.2, 0.8, 0.48], [18.4, -1.9, 5.7], [0.49, 0.51, 0.]]), 0.5) -\
np.array([0.48, -1.9, 0.49])) < EPSILON))
assert(np.all(np.abs(find_closest_to_k_in_matrix(np.array([[3.2, 0.8, 0.48], [18.4, -1.9, 5.7]]), 1.1) -\
np.array([0.8, -1.9])) < EPSILON))
assert(np.all(np.abs(find_closest_to_k_in_matrix(np.array([[3.9, 4.6], [4.5, 3.2]]), 4.1) -\
np.array([3.9, 4.5])) < EPSILON))
# -
# ## Exercise 8
#
# Consider the skeleton of the function below called <code>**array_of_distances**</code>, which takes as input a bidimensional numpy array (i.e., an $m$-by-$n$ matrix) <code>**X**</code> and a unidimensional numpy array (i.e., an $m$-by-$1$ vector) <code>**y**</code> and returns an $n$-by-$1$ vector <code>**d**</code>, which is computed as follows:<br />
# <code>**d[j] = euclidean_distance(X[:,j], y)**</code>, where <code>**X[:,j]**</code> denotes the $j$-th $m$-by-$1$ column vector of the input matrix <code>**X**</code> and <code>**euclidean_distance()**</code> is the Euclidean distance function which is computed as follows:<br />
# <code>**euclidean_distance(a, b) = sqrt((a_1 - b_1)^2 + (a_2 - b_2)^2 + ... + (a_m - b_m)^2)**</code>.<br />
# The function should test whether this operation is allowed by checking the shape of the input matrix and array are "compatible", otherwise it should return <code>**None**</code>.<br />
# For example:<br />
# <code>**array_of_distances(np.array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]), np.array([0, 4, 8])) = np.array([0., 1.73205081, 3.46410162, 5.19615242])**</code><br />
# <code>**array_of_distances(np.array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]), np.array([0, 4])) = None**</code>
#
# (__SUGGESTION__: *Try to solve this using the standard 'iterative' approach. Afterwards, try to use a vectorized solution. In order for the dimensions to be "compatible" it must hold that the number of rows of the input matrix equals to the number of elements of the input array. Moreover, to apply vectorized operation between a 2-D array and a 1-D array, you must ensure that the shape of the 1-D array is $(m, 1)$ instead of just $(m,)$ if the shape of the 2-D array is $(m, n)$. To transform a 1-D array's shape from $(m,)$ to $(m, 1)$ you just need to invoke the *<code>**reshape(m, 1)**</code>* method on the 1-D array.*)
# ## Solution 8
# + jupyter={"outputs_hidden": true}
def array_of_distances(X, y):
"""
Return the column-wise Euclidean distance between each column vector of the given input matrix X
and the input vector y. The shape of X and y must be "compatible", otherwise None should be returned.
"""
# YOUR CODE HERE
# -
# ## Test 8
# + jupyter={"outputs_hidden": true}
"""
Please, run this cell to test your solution to Exercise 8 above, by means of some unit tests.
Be sure all the unit tests below are passed correctly.
"""
assert(np.all(np.abs(array_of_distances(np.array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]), np.array([0, 4, 8])) -\
np.array([0., 1.73205081, 3.46410162, 5.19615242])) < EPSILON))
assert(array_of_distances(np.array([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]), np.array([0, 4])) == None)
assert(np.all(np.abs(array_of_distances(np.array([[0, 1], [4, 5]]), np.array([1, 2])) -\
np.array([2.23606798, 3.])) < EPSILON))
| Homeworks/Homework_03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import matplotlib.pyplot as plt
import pysptools.noise as noise
# +
#from taking n purest pixels
purest_pixels_dict = np.load('./purest_pixels_dict.npy').item()
#from subtracting out signal from background
pure_signal_dict = np.load('./pure_signal_dict.npy').item()
# +
#http://evols.library.manoa.hawaii.edu/bitstream/handle/10524/35872/vol3-Poop-Que(LO).pdf#page=11
#Olivenza: Took average for LL chondrite
#'H' has about 25-30% total Iron (with over half in metallic form =>strongly magnetic)
#L contains 20-25% (with 5-10% in uncombined metal state)
#'LL' contain 19-22% iron (with only 0.3-3% metallic iron)
CLASSIFICATION_PER_SPECIMEN = {'Abee':'EH4', 'Acapulco':'Acapulcoite', 'Allende':'CV3','Brownsfield':'H3.7',
'Estacado':'H6', 'Estacado2':'H6', 'Gibeon':'IronIva', 'Hessle':'H5',
'Holbrook':"L/LL6", 'Homestead':'L5','Homestead2':'L5','Millbilille':'Eucrite-mmict',
'Olivenza':"LL5", 'Peekshill':'H6',
'PutnamCounty':'IronIva', 'Soku':'LL4', 'Steinbach1':'IronIva', 'Steinbach2':'IronIva',
'Sutton':'H5','Toluca1':'IronIAB-sLL', 'Toluca2':'IronIAB-sLL', 'Toluca3':'IronIAB-sLL',
'TolucaBig':'IronIAB-sLL'}
IRON_CATEGORY_PER_SPECIMEN = {'Abee':32.52, 'Acapulco':27.5, 'Allende':23.85,'Brownsfield':'H','Estacado':27.88,
'Estacado2':27.88, 'Gibeon':91.8, 'Hessle':'H', 'Holbrook':"L/LL", 'Homestead':'L',
'Homestead2':'L','Millbilille':'L','Olivenza':"LL", 'Peekshill':'H',
'PutnamCounty':91.57, 'Soku':'LL', 'Steinbach1':'HH', 'Steinbach2':'HH', 'Sutton':'H',
'Toluca1':91, 'Toluca2':91, 'Toluca3':91, 'TolucaBig':91}
IRON_ALL_CATEGORIZED = {'Abee':'H', 'Acapulco':'H', 'Allende':'L','Brownsfield':'H','Estacado':'H',
'Estacado2':'H', 'Gibeon':'HH', 'Hessle':'H', 'Holbrook':"L/LL", 'Homestead':'L',
'Homestead2':'L','Millbilille':'L','Olivenza':"LL", 'Peekshill':'H',
'PutnamCounty':'H', 'Soku':'LL', 'Steinbach1':'HH', 'Steinbach2':'HH', 'Sutton':'H',
'Toluca1':'HH', 'Toluca2':'HH', 'Toluca3':'HH', 'TolucaBig':'HH'}
IRON_SIMPLE_CATEGORIZED = {'Abee':'L', 'Acapulco':'L', 'Allende':'L','Brownsfield':'L','Estacado':'L',
'Estacado2':'L', 'Gibeon':'HH', 'Hessle':'L', 'Holbrook':"L", 'Homestead':'L',
'Homestead2':'L','Millbilille':'L','Olivenza':"L", 'Peekshill':'L',
'PutnamCounty':'L', 'Soku':'L', 'Steinbach1':'HH', 'Steinbach2':'HH', 'Sutton':'',
'Toluca1':'HH', 'Toluca2':'HH', 'Toluca3':'HH', 'TolucaBig':'HH'}
IRON_SIMPLE_NUMERICAL = {'Abee':0, 'Acapulco':0, 'Allende':0,'Brownsfield':0,'Estacado':0,
'Estacado2':0, 'Gibeon':1, 'Hessle':0, 'Holbrook':0, 'Homestead':0,
'Homestead2':0,'Millbilille':0,'Olivenza':0, 'Peekshill':0,
'PutnamCounty':0, 'Soku':0, 'Steinbach1':1, 'Steinbach2':1, 'Sutton':0,
'Toluca1':1, 'Toluca2':1, 'Toluca3':1, 'TolucaBig':1}
IRON_PERCENTAGE_IF_AVAILABLE = {'Abee':32.52, 'Acapulco':27.5, 'Allende':23.85,'Estacado':27.88,
'Estacado2':27.88, 'Gibeon':91.8, 'PutnamCounty':91.57,'Toluca1':91,
'Toluca2':91, 'Toluca3':91, 'TolucaBig':91}
COLORS = {'Abee':'darkslateblue', 'Acapulco':'green', 'Allende':'blue','Brownsfield':'yellow',
'Estacado':'purple', 'Estacado2':'brown', 'Gibeon':'black', 'Hessle':'lime',
'Holbrook':"orange", 'Homestead':'grey','Homestead2':'lightgreen',
'Millbilille':'lightcoral',
'Olivenza':"c", 'Peekshill':'cyan',
'PutnamCounty':'pink', 'Soku':'silver', 'Steinbach1':'maroon', 'Steinbach2':'fuchsia',
'Sutton':'lawngreen','Toluca1':'cyan', 'Toluca2':'ivory', 'Toluca3':'olive',
'TolucaBig':'red'}
# +
#create matrix of all data samples (no classes)
all_data = []
all_data_simplified_classes = []
all_data_standard_classes = []
all_data_simplified_numerical = []
simplified_classes = ['HH','L']
standard_classes = ['HH','H','L','L/LL','LL']
all_data_meteorite_classes = []
for i,sample in enumerate(pure_signal_dict):
for j,row in enumerate(pure_signal_dict[sample]):
#if sample == 'TolucaBig':
# continue
all_data.append(row)
all_data_simplified_classes.append(IRON_SIMPLE_CATEGORIZED[sample])
all_data_standard_classes.append(IRON_ALL_CATEGORIZED[sample])
all_data_simplified_numerical.append(IRON_SIMPLE_NUMERICAL[sample])
all_data_meteorite_classes.append(sample)
print np.shape(all_data)
print np.shape(all_data_simplified_classes)
# +
all_data_mean = []
all_data_mean_classes = []
for sample in pure_signal_dict:
all_data_mean.append(np.mean(pure_signal_dict[sample], axis=0))
all_data_mean_classes.append(sample)
print np.shape(all_data_mean)
# +
import itertools
band = [610,680,730,760,810,860]
def add_many_variables(spectrum):
pairs = list(itertools.combinations(spectrum, 2))
differences = [abs(b-a) for (a,b) in pairs]
differences_squared = [abs(b-a)**2 for (a,b) in pairs]
ratios = [float(b)/float(a) for (a,b) in pairs]
ratios_squared = [(float(b)/float(a))**2 for (a,b) in pairs]
#return spectrum
#return np.concatenate((spectrum,differences))
#return np.concatenate((spectrum,differences))
#return ratios
#based on expected iron changes
#return [spectrum[0],spectrum[1],spectrum[5]]
return spectrum
#return np.concatenate((spectrum,differences,ratios))
#sums
#slopes = [(b-a)/(band[i] for i, (a,b) in enumerate(pairs)]
#NOTE: tried a bunch of stuff. Seemed like just using the spectrum itself worked best, though it would be
#nice in partiuclar to pay attention to the ratio results since this helps get rid of any noise across all channels
# +
mega_feature_array = []
def build_mega_feature_array(original_dataset):
for sample in original_dataset:
mega_feature_array.append(add_many_variables(sample))
build_mega_feature_array(all_data)
print np.shape(all_data)
print np.shape(mega_feature_array)
print mega_feature_array[1]
# -
whitener = noise.Whiten()
whitened = (whitener.apply(np.array([mega_feature_array])))[0]
# +
from sklearn.decomposition import PCA as sklearnPCA
sklearn_pca = sklearnPCA(n_components=2)
print np.shape(mega_feature_array)
data_r = sklearn_pca.fit_transform(mega_feature_array)
print np.shape(data_r)
print sklearn_pca.explained_variance_ratio_
print sklearn_pca.components_
# +
colors = {}
for i, row in enumerate(data_r):
if all_data_standard_classes[i] == 'HH':
plt.scatter(row[0],row[1], color='red')
elif all_data_standard_classes[i] == 'H':
plt.scatter(row[0],row[1], color='green')
elif all_data_standard_classes[i] == 'L':
plt.scatter(row[0],row[1], color='blue')
elif all_data_standard_classes[i] == 'L/LL':
plt.scatter(row[0],row[1], color='yellow')
elif all_data_standard_classes[i] == 'LL':
plt.scatter(row[0],row[1], color='purple')
#plt.xlim(-2,0)
plt.show()
#plt.savefig('../results/MNF_5_cat')
# -
colors = {}
for i, row in enumerate(data_r):
if all_data_standard_classes[i] == 'HH':
plt.scatter(row[0],row[1], color='red')
elif all_data_standard_classes[i] == 'H':
continue
elif all_data_standard_classes[i] == 'L':
plt.scatter(row[0],row[1], color='blue')
elif all_data_standard_classes[i] == 'L/LL':
plt.scatter(row[0],row[1], color='yellow')
elif all_data_standard_classes[i] == 'LL':
plt.scatter(row[0],row[1], color='purple')
#plt.xlim(-2,1)
plt.show()
#plt.savefig('../results/MNF_4_cat')
colors = {}
for i, row in enumerate(data_r):
if all_data_standard_classes[i] == 'H':
continue
elif all_data_standard_classes[i] == 'L':
plt.scatter(row[0],row[1], color='blue')
elif all_data_standard_classes[i] == 'L/LL':
plt.scatter(row[0],row[1], color='blue')
elif all_data_standard_classes[i] == 'LL':
plt.scatter(row[0],row[1], color='blue')
elif all_data_standard_classes[i] == 'HH':
plt.scatter(row[0],row[1], color='red')
#plt.xlim(-3,0)
plt.show()
#plt.savefig('../results/MNF_HHv3Ls')
for i, row in enumerate(data_r):
plt.scatter(row[0],row[1], color=COLORS[all_data_meteorite_classes[i]])
plt.show()
#plt.savefig('../results/PCA_meteorite')
# +
#Next: MNF
#Then: MDA
MNF = noise.MNF()
#data = np.reshape(all_data,(10,21,6))
#print (mega_feature_array[0])
result = MNF.apply(np.array([mega_feature_array]))
#print(result[0])
print np.shape(result[0][0])
print result[0][2]
components = MNF.get_components(2)
print np.shape(components)
#components = np.reshape(components,(210,2))
#print np.shape(components)
for i, crow in enumerate(components):
sample = all_data_meteorite_classes[i]
iron_level = IRON_ALL_CATEGORIZED[sample]
if iron_level == 'HH':
color = 'red'
elif iron_level == 'L' or iron_level == 'LL' or iron_level == 'L/LL':
color = 'blue'
else:
continue
plt.scatter(crow[0],crow[1], color=color)
#plt.show()
#plt.savefig('../results/MNF_iron_2cat')
#MNF.display_components(n_first=1)
# +
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
# Parameters
n_classes = 2
plot_colors = "bry"
plot_step = 0.001
qda = QuadraticDiscriminantAnalysis(store_covariances=True)
X = data_r
y = all_data_simplified_numerical
clf = qda.fit(X,y)
#print qda_result
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),np.arange(y_min, y_max, plot_step))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.show()
# -
| DataAnalysis/.ipynb_checkpoints/PCA, MNF etc.-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
# +
train = pd.read_csv(r"D:\AI_ML_DL\ML\Machine Learning\Case studies\HumanActivityRecognition\HumanActivityRecognition\Data\csv_files\train.csv")
test = pd.read_csv(r"D:\AI_ML_DL\ML\Machine Learning\Case studies\HumanActivityRecognition\HumanActivityRecognition\Data\csv_files\test.csv")
# -
print(train.shape, test.shape)
train.head(2)
x_train = train.drop(['subject','Activity','ActivityName'],axis=1)
y_train = train.Activity
x_test = test.drop(['subject','Activity','ActivityName'],axis=1)
y_test = test.Activity
# #### Logistic Regression
#
from sklearn import linear_model
from sklearn import metrics
from sklearn.model_selection import GridSearchCV
params = {'C':[0.001,0.01,0.1,1,10,100],'penalty':['l1','l2']}
log_reg = linear_model.LogisticRegression()
log_reg_grid = GridSearchCV(log_reg, param_grid=params,cv=3, verbose=1, n_jobs=-1)
# +
results =dict()
log_reg_grid.fit(x_train,y_train)
y_pred = log_reg_grid.predict(x_test)
acc= metrics.accuracy_score(y_test,y_pred)
cm = metrics.confusion_matrix(y_test,y_pred)
results['accuracy'] = acc
results['confusion_matix']=cm
classification_report = metrics.classification_report(y_test,y_pred)
results['classification_report']=classification_report
results['model']=log_reg_grid
print(f"\n cm is {cm}")
print(f"\n results are {results}")
# -
print(f"\n results are \n {classification_report}")
print(f"accuracy is {results['accuracy']}")
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Greens):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.figure(figsize=(7,7))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
labels=['LAYING', 'SITTING','STANDING','WALKING','WALKING_DOWNSTAIRS','WALKING_UPSTAIRS']
plot_confusion_matrix(cm,labels)
# Linear SVC
# +
from sklearn.svm import LinearSVC
params = {'C':[0.125,0.5,1,2,8,16]}
linear_svc = LinearSVC(tol=0.0005)
lin_svc_grid = GridSearchCV(linear_svc,params,n_jobs=-1,verbose=1)
# -
lin_svc_grid.fit(x_train,y_train)
y_pred = lin_svc_grid.predict(x_test)
accuracy = metrics.accuracy_score(y_test,y_pred)
accuracy
a=metrics.classification_report(y_test,y_pred)
print(a)
cm=metrics.confusion_matrix(y_test,y_pred)
cm
plot_confusion_matrix(cm,labels)
# ## Kernel SVM with Grid Search
from sklearn.svm import SVC
params = {'C':[2,4,8,16],'gamma':[0.0025,0.6,1,2]}
rbf_kernel = SVC(kernel='rbf')
rbf_kernel_grid = GridSearchCV(rbf_kernel,params,n_jobs=-1, verbose=1)
rbf_kernel_grid.fit(x_train,y_train)
y_pred = rbf_kernel_grid.predict(x_test)
metrics.accuracy_score(y_test,y_pred)
cm=metrics.confusion_matrix(y_test,y_pred)
plot_confusion_matrix(cm,labels)
print(metrics.classification_report(y_test,y_pred))
rbf_kernel_grid.best_params_
# ## Decision Trees with Grid Search
# +
from sklearn.tree import DecisionTreeClassifier
params = {'max_depth':np.arange(3,10,2)}
dt = DecisionTreeClassifier()
dt_grid = GridSearchCV(dt,params,n_jobs=-1,verbose=1)
dt_grid.fit(x_train,y_train)
y_pred = dt_grid.predict(x_test)
print(f"accuracy for DT classifier is {metrics.accuracy_score(y_pred,y_test)}")
cm= metrics.confusion_matrix(y_test,y_pred)
print(f"classification report for DT classifier is \n {metrics.classification_report(y_pred,y_test)}")
plot_confusion_matrix(cm,labels)
dt_grid.best_params_
# -
# ## Random forest Classifier with Grid Search
from sklearn.ensemble import RandomForestClassifier
params = {'n_estimators':np.arange(10,210,20),'max_depth':np.arange(4,15,3)}
rfc = RandomForestClassifier()
rfc_grid=GridSearchCV(rfc,params,n_jobs=-1, verbose=1)
rfc_grid.fit(x_train,y_train)
y_pred = rfc_grid.predict(x_test)
print(f"accuracy for RFC classifier is {metrics.accuracy_score(y_pred,y_test)}")
cm= metrics.confusion_matrix(y_test,y_pred)
print(f"classification report for RFC classifier is \n {metrics.classification_report(y_pred,y_test)}")
plot_confusion_matrix(cm,labels)
rfc_grid.best_params_
# ## Gradient Boosting Classifier
# +
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
param_grid = {'max_depth': np.arange(5,8,1), \
'n_estimators':np.arange(130,170,10)}
gbdt = GradientBoostingClassifier()
gbdt_grid = GridSearchCV(gbdt, param_grid=param_grid, n_jobs=-1)
gbdt_grid.fit(x_train,y_train)
y_pred = gbdt_grid.predict(x_test)
print(f"accuracy for GBDT classifier is {metrics.accuracy_score(y_pred,y_test)}")
cm= metrics.confusion_matrix(y_test,y_pred)
print(f"classification report for GBDT classifier is \n {metrics.classification_report(y_pred,y_test)}")
plot_confusion_matrix(cm,labels)
gbdt_grid.best_params_
# -
| Classical_ML_Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### time ํจ์
import time
time.time()
time.localtime()
yesterday = time.localtime(time.time()-60*60*24)
yesterday
time.gmtime()
time.strftime('%Y %m %d')
time.strftime('%Y %m %d', yesterday)
time.sleep(1)
# ### datetime ํจ์
from datetime import date, time, datetime
today = date(2016, 7, 14)
today
today.day
today = date.today()
today.day
tm = time(17, 10, 1)
tm.second
now = datetime.now()
now
now.timestamp()
now.day, now.second
now.strftime('%Y-%m-%d %H:%M:%S')
now.isoformat()
| tutorial-2/datetime.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:analysis3-20.01]
# language: python
# name: conda-env-analysis3-20.01-py
# ---
# # Barotropic Streamfunction
#
# An example on how to calculate the mean barotropic streamfunction from model output using the inter-annual forcing (IAF) run of the 1 degree experiment of the ACCESS-OM2 model.
#
# (Method can be easily adopted to other resolutions.)
# +
import cartopy.crs as ccrs
import cosima_cookbook as cc
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import cmocean as cm
from dask.distributed import Client
import matplotlib.path as mpath
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
# Avoid the Runtime errors in true_divide encountered when trying to divide by zero
import warnings
warnings.filterwarnings('ignore', category = RuntimeWarning)
# -
client = Client(n_workers=4)
# Start a session in the default COSIMA database. To search this database, the following commands can be used:
#
# e.g.: search for all the `1deg` experiments
#
# ```python
# exp = cc.querying.get_experiments(session)
# exp[exp['experiment'].str.lower().str.match('1deg')]
# ```
session = cc.database.create_session()
# Here we will load the depth-integrated zonal transport, `tx_trans_int_z`, and the corresponding appropriate grids.
#
# The start time **2198-01-16** selects the last 60-year cycle forcing (these are model years).
# +
tx_trans_int_z = cc.querying.getvar('1deg_jra55v13_iaf_spinup1_B1', 'tx_trans_int_z', session, start_time = '2198-01-16', ncfile = 'ocean_month.nc')
# Grid (used for plotting)
geolon_c = xr.open_dataset('/g/data/ik11/grids/ocean_grid_10.nc').geolon_c
geolat_t = xr.open_dataset('/g/data/ik11/grids/ocean_grid_10.nc').geolat_t
# -
# Here we calculate the mean barotropic stream function by integrating the time-mean zonal transport in latitude starting from the South Pole. Therefore, for every $(x, y)$ point, the barotropic streamfunction $\psi$ is:
#
# $$\psi(x, y) = -\int_{y_0}^y \frac{\overline{T^{x}(x, y')}}{\rho}\, \mathrm{d}y',$$
#
# where $T^{x}(x, y)$ is the depth integrated mass transport, the overline denotes time-mean, $y_0$ is any arbirtrary latitude (in our case, the South Pole), and $\rho$ is the mean deansity of sea water (to convert mass transport to volume transport).
#
# The minus sign above is merely a matter of convenction so that positive values of streamfunction denote eastward flow, the Southern subtropical gyre to corresponds to negative values, etc.
#
# Variable `tx_trans_int_z` contains the instantaneous depth-intergrated zonal transport multiplied with the $\Delta y$ of each cell:
# <center> `tx_trans_int_z`$=T^{x}(x, y)\,\mathrm{d}y$.</center>
#
# Finally, divide by mean $10^6$ to convect units from m$^3$/s to Sv.
ฯ = 1036 # kg/m^3
psi = -tx_trans_int_z.mean('time').cumsum('yt_ocean')/(1e6*ฯ) # divide by 1e6 to convert m^3/s -> Sv
# ## Global map
# Note that the streamfunction as defined above has a free parameter: we can subtract any constant we want. The way we defined $\psi$ above implies that $\psi$ is zeros in the South Pole.
#
# A usual convention is to set the coast of South America to correspond to $ฯ=0$. To do so, we the value on Patagonia and subtract it.
#
#
# The value we need to subtract is found as the minimum value of $\psi$ in the longitude range [69W, 67W] south of 55S, $\psi_{ACC}$, which should be representative of the transport in the Drake Passage.
psi_acc = np.nanmin(psi.sel(xu_ocean = slice(-69, -67), yt_ocean = slice(-80, -55)))
print('Max value of streamfunction south of 55S and within 69W-67W = ', psi_acc, 'Sv')
# Because we want the ACC to correspond to a positive value of the streamfunction (denoting flow to the east), the Southern subtropical gyre to correspond to negative values of $\psi$ (representing anticiclonic flow), etc, we define the global stream function as $\psi_{G} = -(\psi - \psi_{ACC})$
psi_g = psi-psi_acc
psi_g = psi_g.rename('Barotropic Stream function')
psi_g.attrs['long_name'] = 'Barotropic Stream function'
psi_g.attrs['units'] = 'Sv'
# +
# Define the levels for the contourf
lvls = np.arange(-80, 90, 10)
fig = plt.figure(figsize = (12, 8))
ax = fig.add_subplot(111, projection = ccrs.Robinson())
# Add land features and gridlines
ax.add_feature(cfeature.LAND, edgecolor = 'black', facecolor = 'gray', zorder = 2)
ax.gridlines(color='grey', linestyle='--')
# Plot the barotropic stream function
cf = ax.contourf(geolon_c, geolat_t, psi_g, levels = lvls, cmap = cm.cm.balance, extend = 'both',
transform = ccrs.PlateCarree())
# Add a colorbar
cbar = fig.colorbar(cf, ax = ax, orientation = 'vertical', shrink = 0.5)
cbar.set_label('Transport [Sv]')
# -
# ## Southern Ocean map
# ### Complete Southern Ocean
# +
# Select the region in the variables
psi_so = psi_g.sel(yt_ocean = slice(-80, -45))
# Define the levels for the contourf
lvls = np.arange(-40, 190, 10)
fig = plt.figure(figsize = (12, 8))
ax = fig.add_subplot(111, projection = ccrs.SouthPolarStereo())
ax.set_extent([-180, 180, -80, -45], crs=ccrs.PlateCarree())
# Map the plot boundaries to a circle
theta = np.linspace(0, 2*np.pi, 100)
center, radius = [0.5, 0.5], 0.5
verts = np.vstack([np.sin(theta), np.cos(theta)]).T
circle = mpath.Path(verts * radius + center)
ax.set_boundary(circle, transform=ax.transAxes)
# Add land features and gridlines
ax.add_feature(cfeature.NaturalEarthFeature('physical', 'land', '50m', edgecolor='black',
facecolor='lightgray'), zorder = 2)
ax.gridlines(color='grey', linestyle='--')
# Plot the barotropic stream function
cf = ax.contourf(psi_so['xu_ocean'], psi_so['yt_ocean'], psi_so, levels = lvls, cmap = cm.cm.thermal,
extend = 'both', transform = ccrs.PlateCarree())
# Add a colorbar
cbar = fig.colorbar(cf, ax = ax, orientation = 'vertical', shrink = 0.5)
cbar.set_label('Transport [Sv]')
# -
# ### Region of the Southern Ocean
# Select the region in the variables
psi_re = psi_g.sel(yt_ocean = slice(-80, -45), xu_ocean = slice(-60, 60))
# +
# Define the levels for the contourf
lvls = np.arange(-40, 190, 10)
fig = plt.figure(figsize = (12, 8))
ax = fig.add_subplot(111, projection = ccrs.AlbersEqualArea(central_longitude = 0, central_latitude = -62.5, standard_parallels = (-70, -50)))
ax.set_extent([-60, 60, -80, -45], crs=ccrs.PlateCarree())
# Map the plot boundaries to the lat, lon boundaries
vertices = [(lon, -80) for lon in range(-60, 60+1, 1)] + [(lon, -45) for lon in range(60, -60-1, -1)]
boundary = mpath.Path(vertices)
ax.set_boundary(boundary, transform=ccrs.PlateCarree())
# Add land features and gridlines
ax.add_feature(cfeature.NaturalEarthFeature('physical', 'land', '50m', edgecolor='black',
facecolor='lightgray'), zorder = 2)
ax.gridlines(color='grey', linestyle='--')
# Plot the barotropic stream function
cf = ax.contourf(psi_re['xu_ocean'], psi_re['yt_ocean'], psi_re, levels = lvls, cmap = cm.cm.thermal, extend = 'both', transform = ccrs.PlateCarree())
# Add a colorbar
cbar = fig.colorbar(cf, ax = ax, orientation = 'vertical', shrink = 0.5)
cbar.set_label('Transport [Sv]')
# -
# ## Comparison against Sea Level Standard Deviation
# Calculate the sea level standard deviation `sla` as the square root of the difference in between the square of the effective sea level and the square of the surface height.
#
# $$ \textrm{sla} = \sqrt{\textrm{sea_levelsq} - \textrm{eta_t}^2 } $$
# +
sh = cc.querying.getvar('1deg_jra55v13_iaf_spinup1_B1', 'eta_t', session, start_time = '2198-01-16', ncfile = 'ocean_month.nc').mean('time')
sl = cc.querying.getvar('1deg_jra55v13_iaf_spinup1_B1', 'sea_levelsq', session, start_time = '2198-01-16', ncfile = 'ocean_month.nc').mean('time')
sla = (sl - sh**2)**0.5
sla = sla.rename('Sea level STD')
sla.attrs['long_name'] = 'sea level standard deviation'
sla.attrs['units'] = 'm'
# -
# ### Plot in Malvinas region
# Select the region
psi_ag = psi_g.sel(yt_ocean = slice(-60, -30), xu_ocean = slice(-80, -30))
sla_ag = sla.sel(yt_ocean = slice(-60, -30), xt_ocean = slice(-80, -30))
# +
# Define the levels for the contourf
lvls_f = np.arange(0, 0.20, 0.005)
lvls = np.arange(-30, 160, 10)
fig = plt.figure(figsize = (12, 8))
ax = fig.add_subplot(111, projection = ccrs.Mercator())
ax.set_extent([sla_ag['xt_ocean'][0], sla_ag['xt_ocean'][-1], sla_ag['yt_ocean'][0], sla_ag['yt_ocean'][-1]],
crs=ccrs.PlateCarree())
# Add land features and gridlines
ax.add_feature(cfeature.NaturalEarthFeature('physical', 'land', '50m', edgecolor='black',
facecolor='lightgray'), zorder = 2)
gl = ax.gridlines(draw_labels=True, color='grey', linestyle='--')
gl.xlabels_top = gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
# Plot the SLA and barotropic stream function
cf = ax.contourf(sla_ag['xt_ocean'], sla_ag['yt_ocean'], sla_ag, levels = lvls_f, cmap = 'nipy_spectral',
extend = 'both', transform = ccrs.PlateCarree())
c = ax.contour(psi_ag['xu_ocean'], psi_ag['yt_ocean'], psi_ag, colors = 'k', levels = lvls,
transform = ccrs.PlateCarree())
# Add a colorbar and inline labels
cbar = fig.colorbar(cf, ax = ax, orientation = 'vertical', shrink = 0.7)
cbar.set_label('Sea Level Standard Deviation [m]')
ax.clabel(c, inline = 1, fmt = '%1.0f');
| DocumentedExamples/Barotropic_Streamfunction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
info = {
"title": "Stickman",
"author": "<NAME>",
"github_username": "alcarney",
"stylo_version": "0.7.0",
"dimensions": (1920, 1080)
}
# +
from math import pi, sin, cos
from stylo.domain.transform import translate, rotate
from stylo.color import FillColor
from stylo.shape import Circle, Line
from stylo.image import LayeredImage
# +
black = FillColor("000000")
class Stickman:
def __init__(self):
self.head_x = 0
self.head_y = 1
self.head_size = .3
self.pt = 0.02
self.body_angle = pi/2
self.body_length = 1
self.leg_length = 1
self.arm_length = 1
self.left_arm_angle = self.body_angle - pi/8
self.right_arm_angle = self.body_angle + pi/8
self.left_leg_angle = self.body_angle + pi/8
self.right_leg_angle = self.body_angle - pi/8
self._calculate()
def _calculate(self):
self.body_end = self.head_size + self.body_length
self.arm_start = self.head_size + 0*self.body_length
self.neck = {
'x': self.head_x + self.head_size*cos(-self.body_angle),
'y': self.head_y + self.head_size*sin(-self.body_angle)
}
self.body = {
'x': self.head_x + self.body_end*cos(-self.body_angle),
'y': self.head_y + self.body_end*sin(-self.body_angle)
}
self.arm = {
'x': self.head_x + self.arm_start * cos(-self.body_angle),
'y': self.head_y + self.arm_start * sin(-self.body_angle)
}
self.leg_start = self.body_end
self.leg_end = self.body_end + self.leg_length
def _construct_head(self, image):
head = Circle(r=self.head_size, pt=self.pt*0.9) \
>> translate(self.head_x, self.head_y)
image.add_layer(head, black)
def _construct_body(self, image):
body = Line((0, 0), (self.body_length,0), pt=self.pt*1.03) \
>> rotate(self.body_angle) \
>> translate(self.neck['x'], self.neck['y'])
image.add_layer(body, black)
def _construct_legs(self, image):
left_leg = Line((0, 0), (self.leg_length, 0), pt=self.pt) \
>> rotate(self.left_leg_angle) \
>> translate(self.body['x'], self.body['y'])
right_leg = Line((0, 0), (self.leg_length, 0), pt=self.pt) \
>> rotate(self.right_leg_angle) \
>> translate(self.body['x'], self.body['y'])
image.add_layer(left_leg, black)
image.add_layer(right_leg, black)
def _construct_arms(self, image):
left_arm = Line((0, 0), (self.arm_length, 0), pt=self.pt) \
>> rotate(self.left_arm_angle) \
>> translate(self.arm['x'], self.arm['y'])
right_arm = Line((0,0), (self.arm_length, 0), pt=self.pt) \
>> rotate(self.right_arm_angle) \
>> translate(self.arm['x'], self.arm['y'])
image.add_layer(left_arm, black)
image.add_layer(right_arm, black)
def construct(self, image):
self._construct_head(image)
self._construct_body(image)
self._construct_legs(image)
self._construct_arms(image)
pass
stickman = Stickman()
# +
image = LayeredImage(scale=3)
stickman.construct(image)
| notebooks/Stickman.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.041576, "end_time": "2022-02-01T14:39:51.701500", "exception": false, "start_time": "2022-02-01T14:39:50.659924", "status": "completed"} tags=[]
import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error as mse
import matplotlib.pyplot as plt
# + [markdown] papermill={"duration": 0.020794, "end_time": "2022-02-01T14:39:51.743924", "exception": false, "start_time": "2022-02-01T14:39:51.723130", "status": "completed"} tags=[]
# ## Importing Dataset
# + papermill={"duration": 0.078388, "end_time": "2022-02-01T14:39:51.843278", "exception": false, "start_time": "2022-02-01T14:39:51.764890", "status": "completed"} tags=[]
train = pd.read_csv("../input/eda-concrete-strength/Filtered_dataset.csv")
train
# + papermill={"duration": 0.054761, "end_time": "2022-02-01T14:39:51.920467", "exception": false, "start_time": "2022-02-01T14:39:51.865706", "status": "completed"} tags=[]
train.drop(['Unnamed: 0'],axis=1,inplace=True)
train
# + papermill={"duration": 0.032226, "end_time": "2022-02-01T14:39:51.975013", "exception": false, "start_time": "2022-02-01T14:39:51.942787", "status": "completed"} tags=[]
x = train.drop(['Strength'],axis=1)
y = train['Strength']
# + papermill={"duration": 0.033767, "end_time": "2022-02-01T14:39:52.030736", "exception": false, "start_time": "2022-02-01T14:39:51.996969", "status": "completed"} tags=[]
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
x = scaler.fit_transform(x)
# + papermill={"duration": 0.044975, "end_time": "2022-02-01T14:39:52.097728", "exception": false, "start_time": "2022-02-01T14:39:52.052753", "status": "completed"} tags=[]
# splitting the dataset into train and test dataset with 4:1 ratio (80%-20%)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = .2, random_state = 26)
# + [markdown] papermill={"duration": 0.021963, "end_time": "2022-02-01T14:39:52.141933", "exception": false, "start_time": "2022-02-01T14:39:52.119970", "status": "completed"} tags=[]
# ## Training on different algorithms
# + [markdown] papermill={"duration": 0.021732, "end_time": "2022-02-01T14:39:52.186002", "exception": false, "start_time": "2022-02-01T14:39:52.164270", "status": "completed"} tags=[]
# ### Linear Regression
# + papermill={"duration": 0.113316, "end_time": "2022-02-01T14:39:52.321422", "exception": false, "start_time": "2022-02-01T14:39:52.208106", "status": "completed"} tags=[]
from sklearn.linear_model import LinearRegression
# Create instance of model
lreg = LinearRegression()
# Pass training data into model
lreg.fit(x_train, y_train)
# + papermill={"duration": 0.031354, "end_time": "2022-02-01T14:39:52.374981", "exception": false, "start_time": "2022-02-01T14:39:52.343627", "status": "completed"} tags=[]
# Getting prediciton on x_test
y_pred_lreg = lreg.predict(x_test)
# + papermill={"duration": 0.029899, "end_time": "2022-02-01T14:39:52.427154", "exception": false, "start_time": "2022-02-01T14:39:52.397255", "status": "completed"} tags=[]
def rmse(y,y_pred):
return (np.sqrt(mse(y,y_pred)))
# + papermill={"duration": 0.033339, "end_time": "2022-02-01T14:39:52.482693", "exception": false, "start_time": "2022-02-01T14:39:52.449354", "status": "completed"} tags=[]
# Scoring our model
print('Linear Regression')
# Root mean square error of our model
print('--'*50)
linreg_error = rmse(y_test, y_pred_lreg)
print('RMSE = ', linreg_error)
# + [markdown] papermill={"duration": 0.022722, "end_time": "2022-02-01T14:39:52.529186", "exception": false, "start_time": "2022-02-01T14:39:52.506464", "status": "completed"} tags=[]
# ### RBF SUPPORT VECTOR REGRESSOR
# + papermill={"duration": 0.070513, "end_time": "2022-02-01T14:39:52.622697", "exception": false, "start_time": "2022-02-01T14:39:52.552184", "status": "completed"} tags=[]
# %%time
from sklearn.svm import SVR
svr = SVR(kernel = 'rbf')
# Fit the model on training data
svr.fit(x_train, y_train)
# + papermill={"duration": 0.033199, "end_time": "2022-02-01T14:39:52.679731", "exception": false, "start_time": "2022-02-01T14:39:52.646532", "status": "completed"} tags=[]
# Getting the predictions for x_test
y_pred_svr = svr.predict(x_test)
# + papermill={"duration": 0.033784, "end_time": "2022-02-01T14:39:52.737648", "exception": false, "start_time": "2022-02-01T14:39:52.703864", "status": "completed"} tags=[]
print('Support Vector Classifier')
# Root mean square error of our model
print('--'*50)
svr_error = rmse(y_test, y_pred_svr)
print('RMSE = ', svr_error)
# + [markdown] papermill={"duration": 0.023485, "end_time": "2022-02-01T14:39:52.784899", "exception": false, "start_time": "2022-02-01T14:39:52.761414", "status": "completed"} tags=[]
# ### Decision Tree - Regression
# + papermill={"duration": 0.122248, "end_time": "2022-02-01T14:39:52.930666", "exception": false, "start_time": "2022-02-01T14:39:52.808418", "status": "completed"} tags=[]
# %%time
from sklearn.tree import DecisionTreeRegressor
dtr = DecisionTreeRegressor()
# Fit new DT on training data
dtr.fit(x_train, y_train)
# + papermill={"duration": 0.031965, "end_time": "2022-02-01T14:39:52.987256", "exception": false, "start_time": "2022-02-01T14:39:52.955291", "status": "completed"} tags=[]
# Predict DTR
y_pred_dtr = dtr.predict(x_test)
# + papermill={"duration": 0.035737, "end_time": "2022-02-01T14:39:53.047382", "exception": false, "start_time": "2022-02-01T14:39:53.011645", "status": "completed"} tags=[]
print('Decision Tree')
# Root mean square error of our model
print('--'*50)
dtr_error = rmse(y_test, y_pred_dtr)
print('RMSE = ', dtr_error)
# + [markdown] papermill={"duration": 0.024828, "end_time": "2022-02-01T14:39:53.096868", "exception": false, "start_time": "2022-02-01T14:39:53.072040", "status": "completed"} tags=[]
# ### RANDOM FOREST
# + papermill={"duration": 0.998062, "end_time": "2022-02-01T14:39:54.119722", "exception": false, "start_time": "2022-02-01T14:39:53.121660", "status": "completed"} tags=[]
from sklearn.ensemble import RandomForestRegressor
# Create model object
rfr = RandomForestRegressor(n_estimators = 250,n_jobs=-1)
# Fit model to training data
rfr.fit(x_train,y_train)
y_pred_rfr = rfr.predict(x_test)
# + papermill={"duration": 0.036685, "end_time": "2022-02-01T14:39:54.181297", "exception": false, "start_time": "2022-02-01T14:39:54.144612", "status": "completed"} tags=[]
print('Random Forest')
# Root mean square error of our model
print('--'*50)
rfr_error = rmse(y_test, y_pred_rfr)
print('RMSE = ', rfr_error)
# + [markdown] papermill={"duration": 0.024741, "end_time": "2022-02-01T14:39:54.231045", "exception": false, "start_time": "2022-02-01T14:39:54.206304", "status": "completed"} tags=[]
# ### XGBoost Regressor
# + papermill={"duration": 0.608915, "end_time": "2022-02-01T14:39:54.864925", "exception": false, "start_time": "2022-02-01T14:39:54.256010", "status": "completed"} tags=[]
from xgboost import XGBRegressor
# Create model object
xgb = XGBRegressor(n_jobs=-1)
# Fit model to training data
xgb.fit(x_train, y_train)
y_pred_xgb = xgb.predict(x_test)
# + papermill={"duration": 0.036042, "end_time": "2022-02-01T14:39:54.930921", "exception": false, "start_time": "2022-02-01T14:39:54.894879", "status": "completed"} tags=[]
print('XGBoost Classifer')
print('--'*50)
xgb_error = rmse(y_test, y_pred_xgb)
print('RMSE = ', xgb_error)
# + papermill={"duration": 0.042134, "end_time": "2022-02-01T14:39:54.999491", "exception": false, "start_time": "2022-02-01T14:39:54.957357", "status": "completed"} tags=[]
models = pd.DataFrame({
'Model': ['Linear Regression', 'RBF SVC',
'Decision Tree', 'Random Forest','XGBoost Regressor'],
'Score': [linreg_error, svr_error,
dtr_error, rfr_error,xgb_error]})
models.sort_values(by='Score', ascending=True)
| Concrete Strength Calculation/Model/baseline-models-concrete-strength-regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mincloud
# language: python
# name: mincloud
# ---
from bokeh.plotting import figure, output_file, show
x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y0 = [i**2 for i in x]
y1 = [10**i for i in x]
y2 = [10**(i**2) for i in x]
output_file("log_lines.html")
p = figure(
tools="pan,box_zoom,reset,save",
y_axis_type="log", y_range=[0.001, 10**11], title="log axis example",
x_axis_label='sections', y_axis_label='particles'
)
p.line(x, x, legend="y=x")
p.circle(x, x, legend="y=x", fill_color="white", size=8)
p.line(x, y0, legend="y=x^2", line_width=3)
p.line(x, y1, legend="y=10^x", line_color="red")
p.circle(x, y1, legend="y=10^x", fill_color="red", line_color="red", size=6)
p.line(x, y2, legend="y=10^x^2", line_color="orange", line_dash="4 4")
show(p)
| log_lines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# TSG104 - Control troubleshooter
# ===============================
#
# Description
# -----------
#
# Follow these steps to troubleshoot `controller` issues.
#
# Steps
# -----
#
# ### View and analyze the `controller` log files
#
# The following TSG will get the controller logs from the cluster, and
# analyze each entry for known issues and HINT further TSGs to assist.
#
# - [TSG036 - Controller
# logs](../log-analyzers/tsg036-get-controller-logs.ipynb)
#
# The controller health monitor may also report issues from dependent
# services, the logs for these can be viewed here:
#
# - [TSG076 - Elastic Search
# logs](../log-analyzers/tsg076-get-elastic-search-logs.ipynb)
# - [TSG077 - Kibana
# logs](../log-analyzers/tsg077-get-kibana-logs.ipynb)
#
# Related
# -------
#
# - [TSG100 - The Big Data Cluster
# troubleshooter](../troubleshooters/tsg100-troubleshoot-bdc.ipynb)
| Big-Data-Clusters/CU6/Public/content/troubleshooters/tsg104-troubleshoot-control.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # RECUPรRATION DES DONNรES
#
# * [Source externe](#source_externe)
# * [Section 1.1](#section_1_1)
# * [Section 1.2](#Section_1_2)
# * [Section 1.2.1](#section_1_2_1)
# * [Section 1.2.2](#section_1_2_2)
# * [Section 1.2.3](#section_1_2_3)
# * [Chapter 2](#chapter2)
# * [Section 2.1](#section_2_1)
# * [Section 2.2](#section_2_2)
# ### Source externe <a class="anchor" id="source_externe"></a>
#
# #### Section 1.1 <a class="anchor" id="section_1_1"></a>
#
# #### Section 1.2 <a class="anchor" id="section_1_2"></a>
#
# ##### Section 1.2.1 <a class="anchor" id="section_1_2_1"></a>
#
# ##### Section 1.2.2 <a class="anchor" id="section_1_2_2"></a>
#
# ##### Section 1.2.3 <a class="anchor" id="section_1_2_3"></a>
#
# ### Chapter 2 <a class="anchor" id="chapter2"></a>
#
# #### Section 2.1 <a class="anchor" id="section_2_1"></a>
#
# #### Section 2.2 <a class="anchor" id="section_2_2"></a>
# ## Source externe <a class="anchor" id="source_externe"></a>
| Recup_Donnees.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Locate P, Q, S and T waves in ECG
# This example shows how to use NeuroKit to delineate the ECG peaks in Python using NeuroKit. This means detecting and locating all components of the QRS complex, including **P-peaks** and **T-peaks**, as well their **onsets** and **offsets** from an ECG signal.
#
# This example can be referenced by [citing the package](https://github.com/neuropsychology/NeuroKit#citation).
# Load NeuroKit and other useful packages
import neurokit2 as nk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
plt.rcParams['figure.figsize'] = [8, 5] # Bigger images
# In this example, we will use a short segment of ECG signal with sampling rate of 3000 Hz.
# ## Find the R peaks
# Retrieve ECG data from data folder (sampling rate= 1000 Hz)
ecg_signal = nk.data(dataset="ecg_3000hz")['ECG']
# Extract R-peaks locations
_, rpeaks = nk.ecg_peaks(ecg_signal, sampling_rate=3000)
# The [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_peaks>) function will return a dictionary contains the samples at which R-peaks are located.
# Let's visualize the R-peaks location in the signal to make sure it got detected correctly.
# +
# Visualize R-peaks in ECG signal
plot = nk.events_plot(rpeaks['ECG_R_Peaks'], ecg_signal)
# Zooming into the first 5 R-peaks
plot = nk.events_plot(rpeaks['ECG_R_Peaks'][:5], ecg_signal[:20000])
# -
# Visually, the R-peaks seem to have been correctly identified. You can also explore searching for R-peaks using different methods provided by NeuroKit [ecg_peaks()](https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_peaks>).
# ## Locate other waves (P, Q, S, T) and their onset and offset
# In [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_delineate>), NeuroKit implements different methods to segment the QRS complexes. There are the derivative method and the other methods that make use of Wavelet to delineate the complexes.
# ### Peak method
# First, let's take a look at the 'peak' method and its output.
# Delineate the ECG signal
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak")
# +
# Visualize the T-peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'],
waves_peak['ECG_P_Peaks'],
waves_peak['ECG_Q_Peaks'],
waves_peak['ECG_S_Peaks']], ecg_signal)
# Zooming into the first 3 R-peaks, with focus on T_peaks, P-peaks, Q-peaks and S-peaks
plot = nk.events_plot([waves_peak['ECG_T_Peaks'][:3],
waves_peak['ECG_P_Peaks'][:3],
waves_peak['ECG_Q_Peaks'][:3],
waves_peak['ECG_S_Peaks'][:3]], ecg_signal[:12500])
# -
# Visually, the 'peak' method seems to have correctly identified the P-peaks, Q-peaks, S-peaks and T-peaks for this signal, at least, for the first few complexes. Well done, *peak*!
#
# However, it can be quite tiring to be zooming in to each complex and inspect them one by one. To have a better overview of all complexes at once, you can make use of the `show` argument in [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_delineate>) as below.
# Delineate the ECG signal and visualizing all peaks of ECG complexes
_, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='peaks')
# The 'peak' method is doing a glamorous job with identifying all the ECG peaks for this piece of ECG signal.
#
# On top of the above peaks, the peak method also identify the wave boundaries, namely the onset of P-peaks and offset of T-peaks. You can vary the argument `show_type` to specify the information you would like plot.
#
# Let's visualize them below:
# Delineate the ECG signal and visualizing all P-peaks boundaries
signal_peak, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='bounds_P')
# Delineate the ECG signal and visualizing all T-peaks boundaries
signal_peaj, waves_peak = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="peak", show=True, show_type='bounds_T')
# Both the onsets of P-peaks and the offsets of T-peaks appears to have been correctly identified here. This information will be used to delineate cardiac phases in [ecg_phase()](https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_phase>).
#
# Let's next take a look at the continuous wavelet method.
# ### Continuous Wavelet Method (CWT)
# Delineate the ECG signal
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='all')
# By specifying *'all'* in the `show_type` argument, you can plot all delineated information output by the cwt method. However, it could be hard to evaluate the accuracy of the delineated information with everyhing plotted together. Let's tease them apart!
# Visualize P-peaks and T-peaks
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='peaks')
# Visualize T-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_T')
# Visualize P-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_P')
# Visualize R-waves boundaries
signal_cwt, waves_cwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="cwt", show=True, show_type='bounds_R')
# *Unlike the peak method, the continuous wavelet method does not idenfity the Q-peaks and S-peaks. However, it provides more information regarding the boundaries of the waves*
#
# Visually, except a few exception, CWT method is doing a great job. However, the P-waves boundaries are not very clearly identified here.
#
# Last but not least, we will look at the third method in NeuroKit [ecg_delineate()](https://neurokit2.readthedocs.io/en/latest/functions.html#neurokit2.ecg_delineate>) function: the discrete wavelet method.
# ### Discrete Wavelet Method (DWT) - default method
# Delineate the ECG signal
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='all')
# Visualize P-peaks and T-peaks
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='peaks')
# visualize T-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_T')
# Visualize P-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_P')
# Visualize R-wave boundaries
signal_dwt, waves_dwt = nk.ecg_delineate(ecg_signal, rpeaks, sampling_rate=3000, method="dwt", show=True, show_type='bounds_R')
# Visually, from the plots above, the delineated outputs of DWT appear to be more accurate than CWT, especially for the P-peaks and P-wave boundaries.
#
# Overall, for this signal, the peak and DWT methods seem to be superior to the CWT.
| docs_readthedocs/examples/ecg_delineate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Vectorizing
# <ul>
# <li>
# The process that we use to convert text to a form that Python and a machine learning model can understand is called vectorizing
# </li>
# <li>
# Vectorization is used to speed up the Python code without using loop
# </li>
# </ul>
# ___
#
# - ### Count Vectorization
# Creates a document-term matrix where the entry of each cell will be a count of the number of times that word occurred in that document.
import nltk
import re
import string
import pandas as pd
pd.set_option('display.max_colwidth',100)
data=pd.read_csv('SMSSpamCollection.tsv',header=None,sep='\t')
data.columns=['label','body_text']
data.head()
stopwords=nltk.corpus.stopwords.words('english')
ps=nltk.PorterStemmer()
# ### Create function to remove punctuation, tokenize, remove stopwords, and stem
def clean_data(x):
text="".join([word.lower() for word in x if word not in string.punctuation])
token=re.split('\W+',text)
text=[ps.stem(word) for word in token if word not in stopwords]
return text
data['body_text_ptrs']=data['body_text'].apply(lambda x : clean_data(x))
data.head()
# ### Apply CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
count_vect=CountVectorizer(analyzer=clean_data)
X_count=count_vect.fit_transform(data['body_text'])
count_vect
X_count
X_count.shape
count_vect.get_feature_names
print(count_vect.get_feature_names())
# ### Apply CountVectorizer to smaller sample
data_sample=data[0:20]
count_vect_data_sample=CountVectorizer(analyzer=clean_data)
X_count_data_sample=count_vect_data_sample.fit_transform(data_sample['body_text'])
print(X_count_data_sample.shape)
print(count_vect_data_sample.get_feature_names())
# ### Vectorizers output sparse matrices
#
# _**Sparse Matrix**: A matrix in which most entries are 0. In the interest of efficient storage, a sparse matrix will be stored by only storing the locations of the non-zero elements._
X_count_data_sample
df=pd.DataFrame(X_count_data_sample.toarray())
df
count_vect_data_sample
df.columns=count_vect_data_sample.get_feature_names()
df
| Linkedin/NLP/CH03/CountVectorizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
from sklearn.metrics import median_absolute_error
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error,mean_squared_log_error
from itertools import product
import statsmodels.api as sm
import scipy
import statsmodels.tsa.api as smt
import warnings
import plotly.express as px
warnings.filterwarnings('ignore')
# %matplotlib inline
sns.set_theme(style='darkgrid')
plt.rcParams['figure.figsize'] = (10, 15)
# -
# Importing data from csv files using pandas
aord = pd.read_csv('YAHOO-INDEX_AORD.csv',parse_dates=['Date'])
dji = pd.read_csv('YAHOO-INDEX_DJI.csv',parse_dates=['Date'])
gdaxi = pd.read_csv('YAHOO-INDEX_GDAXI.csv',parse_dates=['Date'])
gspc = pd.read_csv('YAHOO-INDEX_GSPC.csv',parse_dates=['Date'])
# +
data = {
"aord" : aord,
"dji" : dji,
"gdaxi" : gdaxi,
"gspc" : gspc
}
columns_to_drop = ['Volume','Date','Adjusted Close']
y_column = 'Adjusted Close'
# -
aord.groupby('Date').mean()['Close']
aord
# +
closingData = pd.DataFrame()
for key,value in data.items():
closingData[key] = value.groupby('Date').mean()['Close']
closingData.fillna(method = 'backfill',inplace = True)
# -
closingData
def exponential_smoothing(series, alpha):
result = [series[0]] # first value is same as series
for n in range(1, len(series)):
result.append(alpha * series[n] + (1 - alpha) * result[n-1])
return result
def plot_exponential_smoothing(series, alphas):
plt.figure(figsize=(17, 8))
for alpha in alphas:
plt.plot(exponential_smoothing(series, alpha), label="Alpha {}".format(alpha),linewidth=1.5)
plt.plot(series.values, "c", label = "Actual")
plt.legend(loc="best")
plt.axis('tight')
plt.title("Exponential Smoothing")
plt.grid(True)
plot_exponential_smoothing(closingData['aord'],[0.3,0.17])
| part222 (3).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.4 ('tf_pt')
# language: python
# name: python3
# ---
import tensorflow as tf
from tensorflow import keras
import numpy as np
import os
# +
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # ํ
์ ํ๋ก์ ์ ๋ณด ์ถ๋ ฅ ์ต์ ํ๊ธฐ
# os.environ["CUDA_VISIBLE_DEVICES"] = "0" # GPU ์ฅ์น ์ง์
# tf.debugging.set_log_device_placement(True) # ์ด๊ฑฐ ์ฐ์ง ๋ง์
ใ
ใด ์ถ๋ ฅ ๋๋ฌ์
if not tf.config.list_physical_devices('GPU'):
print("๊ฐ์ง๋ GPU๊ฐ ์์ต๋๋ค. GPU๊ฐ ์์ผ๋ฉด LSTM๊ณผ CNN์ด ๋งค์ฐ ๋๋ฆด ์ ์์ต๋๋ค.")
# -
# ## 9.
# _์ฐ์ต๋ฌธ์ : ๋ ์ง ๋ฌธ์์ด ํฌ๋งท์ ๋ณํํ๋ ์ธ์ฝ๋-๋์ฝ๋ ๋ชจ๋ธ์ ํ๋ จํ์ธ์(์๋ฅผ ๋ค์ด, โApril 22, 2019โ์์ โ2019-04-22โ๋ก ๋ฐ๊ฟ๋๋ค)._
#
# ๋จผ์ ๋ฐ์ดํฐ์
์ ๋ง๋ค์ด ๋ณด์ฃ . 1000-01-01 ~ 9999-12-31 ์ฌ์ด์ ๋๋คํ ๋ ์ง๋ฅผ ์ฌ์ฉํ๊ฒ ์ต๋๋ค:
# +
from datetime import date
# strftime()์ %B ํฌ๋งท์ ๋ก์ผ์ผ์ ์์กดํ๊ธฐ ๋๋ฌธ์ ์ฌ์ฉํ ์ ์์ต๋๋ค.
MONTHS = ["January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"]
def random_dates(n_dates):
min_date = date(1000, 1, 1).toordinal()
max_date = date(9999, 12, 31).toordinal()
ordinals = np.random.randint(max_date - min_date, size=n_dates) + min_date
dates = [date.fromordinal(ordinal) for ordinal in ordinals]
x = [MONTHS[dt.month - 1] + " " + dt.strftime("%d, %Y") for dt in dates]
y = [dt.isoformat() for dt in dates]
return x, y
# -
# ๋ค์์ ์
๋ ฅ๊ณผ ์ถ๋ ฅ ํ์์ ๋ง์ถ ๋๋คํ ๋ช ๊ฐ์ ๋ ์ง์
๋๋ค:
# +
np.random.seed(42)
n_dates = 3
x_example, y_example = random_dates(n_dates)
print("{:25s}{:25s}".format("Input", "Target"))
print("-" * 50)
for idx in range(n_dates):
print("{:25s}{:25s}".format(x_example[idx], y_example[idx]))
# -
# ์
๋ ฅ์ ๊ฐ๋ฅํ ์ ์ฒด ๋ฌธ์๋ฅผ ๋์ดํด ๋ณด์ฃ :
INPUT_CHARS = "".join(sorted(set("".join(MONTHS) + "0123456789, ")))
INPUT_CHARS
# ๊ทธ๋ฆฌ๊ณ ๋ค์์ ์ถ๋ ฅ์ ๊ฐ๋ฅํ ์ ์ฒด ๋ฌธ์์
๋๋ค:
OUTPUT_CHARS = "0123456789-"
# ์ด์ ์ฐ์ต๋ฌธ์ ์์์ฒ๋ผ ๋ฌธ์์ด์ ๋ฌธ์ ID ๋ฆฌ์คํธ๋ก ๋ฐ๊พธ๋ ํจ์๋ฅผ ์์ฑํด ๋ณด๊ฒ ์ต๋๋ค:
def date_str_to_ids(date_str, chars=INPUT_CHARS):
return [chars.index(c) for c in date_str]
date_str_to_ids(x_example[0], INPUT_CHARS)
date_str_to_ids(y_example[0], OUTPUT_CHARS)
# +
def prepare_date_strs(date_strs, chars=INPUT_CHARS):
X_ids = [date_str_to_ids(dt, chars) for dt in date_strs]
X = tf.ragged.constant(X_ids, ragged_rank=1) # ๋น์ ํ ํ
์
return (X + 1).to_tensor() # using 0 as the padding token ID
def create_dataset(n_dates):
x, y = random_dates(n_dates)
return prepare_date_strs(x, INPUT_CHARS), prepare_date_strs(y, OUTPUT_CHARS)
# +
np.random.seed(42)
X_train, Y_train = create_dataset(10000)
X_valid, Y_valid = create_dataset(2000)
X_test, Y_test = create_dataset(2000)
# -
Y_train[0]
# ### ์ฒซ ๋ฒ์งธ ๋ฒ์ : ๊ธฐ๋ณธ์ ์ธ seq2seq ๋ชจ๋ธ
# ๋จผ์ ๊ฐ์ฅ ๊ฐ๋จํ ๋ชจ๋ธ์ ์๋ํด ๋ณด๊ฒ ์ต๋๋ค: ์
๋ ฅ ์ํ์ค๊ฐ ๋จผ์ (์๋ฒ ๋ฉ ์ธต ๋ค์ ํ๋์ LSTM ์ธต์ผ๋ก ๊ตฌ์ฑ๋) ์ธ์ฝ๋๋ฅผ ํต๊ณผํ์ฌ ๋ฒกํฐ๋ก ์ถ๋ ฅ๋ฉ๋๋ค. ๊ทธ ๋ค์ ์ด ๋ฒกํฐ๊ฐ (ํ๋์ LSTM ์ธต ๋ค์ ๋ฐ์ง ์ธต์ผ๋ก ๊ตฌ์ฑ๋) ๋์ฝ๋๋ก ๋ค์ด๊ฐ ๋ฒกํฐ์ ์ํ์ค๋ฅผ ์ถ๋ ฅํฉ๋๋ค. ๊ฐ ๋ฒกํฐ๋ ๊ฐ๋ฅํ ๋ชจ๋ ์ถ๋ ฅ ๋ฌธ์์ ๋ํ ์ถ์ ํ๋ฅ ์
๋๋ค.
#
# ๋์ฝ๋๋ ์ํ์ค๋ฅผ ์
๋ ฅ์ผ๋ก ๊ธฐ๋ํ๊ธฐ ๋๋ฌธ์ ๊ฐ๋ฅํ ๊ฐ์ฅ ๊ธด ์ถ๋ ฅ ์ํ์ค๋งํผ (์ธ์ฝ๋์ ์ถ๋ ฅ) ๋ฒกํฐ๋ฅผ ๋ฐ๋ณตํฉ๋๋ค.
# +
embedding_size = 32
max_output_length = Y_train.shape[1]
np.random.seed(42)
tf.random.set_seed(42)
encoder = keras.models.Sequential([
keras.layers.Embedding(input_dim=len(INPUT_CHARS) + 1,
output_dim=embedding_size,
input_shape=[None]),
keras.layers.LSTM(128)
])
decoder = keras.models.Sequential([
keras.layers.LSTM(128, return_sequences=True),
keras.layers.Dense(len(OUTPUT_CHARS) + 1, activation="softmax")
])
model = keras.models.Sequential([
encoder,
keras.layers.RepeatVector(max_output_length),
decoder
])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, Y_train, epochs=20,
validation_data=(X_valid, Y_valid))
# -
# ์ข์ ๋ณด์ด๋ค์, 100% ๊ฒ์ฆ ์ ํ๋๋ฅผ ๋ฌ์ฑํ์ต๋๋ค! ์ด ๋ชจ๋ธ์ ์ฌ์ฉํด ์์ธก์ ๋ง๋ค์ด ๋ณด์ฃ . ๋ฌธ์ ID ์ํ์ค๋ฅผ ๋ฌธ์์ด๋ก ๋ฐ๊พธ๋ ํจ์๋ฅผ ์์ฑํ๊ฒ ์ต๋๋ค:
def ids_to_date_strs(ids, chars=OUTPUT_CHARS):
return ["".join([("?" + chars)[index] for index in sequence])
for sequence in ids]
# ์ด์ ๋ชจ๋ธ์ ์ฌ์ฉํด ์ํ ๋ ์ง๋ฅผ ๋ณํํฉ๋๋ค.
X_new = prepare_date_strs(["September 17, 2009", "July 14, 1789"])
#ids = model.predict_classes(X_new)
ids = np.argmax(model.predict(X_new), axis=-1)
for date_str in ids_to_date_strs(ids):
print(date_str)
# ์๋ฒฝํฉ๋๋ค! :)
#
# ํ์ง๋ง (๊ฐ์ฅ ๊ธด ๋ ์ง์ ํด๋นํ๋) ๊ธธ์ด๊ฐ 18์ธ ์
๋ ฅ ๋ฌธ์์ด์์๋ง ๋ชจ๋ธ์ด ํ๋ จ๋์๊ธฐ ๋๋ฌธ์ ์งง์ ์ํ์ค์์๋ ์ ๋์ํ์ง ์์ต๋๋ค:
X_new = prepare_date_strs(["May 02, 2020", "July 14, 1789"])
#ids = model.predict_classes(X_new)
ids = np.argmax(model.predict(X_new), axis=-1)
for date_str in ids_to_date_strs(ids):
print(date_str)
# ์ด๋ฐ! ํจ๋ฉ์ ์ฌ์ฉํด ํ๋ จํ ๋์ ๋์ผํ ๊ธธ์ด์ ์ํ์ค๋ฅผ ์ ๋ฌํด์ผ ํ ๊ฒ ๊ฐ์ต๋๋ค. ์ด๋ฅผ ์ํด ํฌํผ ํจ์๋ฅผ ์์ฑํด ๋ณด์ฃ :
# +
max_input_length = X_train.shape[1]
def prepare_date_strs_padded(date_strs):
X = prepare_date_strs(date_strs)
if X.shape[1] < max_input_length:
X = tf.pad(X, [[0, 0], [0, max_input_length - X.shape[1]]])
return X
def convert_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
#ids = model.predict_classes(X)
ids = np.argmax(model.predict(X), axis=-1)
return ids_to_date_strs(ids)
# -
convert_date_strs(["May 02, 2020", "July 14, 1789"])
# ์ข๋ค์! ๋ฌผ๋ก ๋ ์ฝ๊ฒ ๋ ์ง ๋ณํ ๋๊ตฌ๋ฅผ ๋ง๋ค ์ ์์ต๋๋ค(์๋ฅผ ๋ค๋ฉด, ์ ๊ท์์ด๋ ๋ ๋จ์ํ ๋ฌธ์์ด ์กฐ์). ํ์ง๋ง ์ ๊ฒฝ๋ง์ ์ฌ์ฉํ๋ ๊ฒ์ด ๋ ๋ฉ์ ธ ๋ณด์ด๋ค์. ;-)
#
# ํ์ง๋ง ์ค์ ์ํ์ค-ํฌ-์ํ์ค ๋ฌธ์ ๋ ๋ ์ด๋ ต์ต๋๋ค. ์๋ฒฝํจ์ ์ถ๊ตฌํ๊ธฐ ์ํด ๋ ๊ฐ๋ ฅํ ๋ชจ๋ธ์ ๋ง๋ค์ด ๋ณด๊ฒ ์ต๋๋ค.
# ### ๋ ๋ฒ์งธ ๋ฒ์ : ๋์ฝ๋์์ ์ฌํํธ๋ ํ๊น ์ฃผ์
ํ๊ธฐ(ํฐ์ฒ ํฌ์ฑ(teacher forcing))
# ๋์ฝ๋์์ธ ์ธ์ฝ๋ ์ถ๋ ฅ ๋ฒกํฐ๋ฅผ ๋จ์ํ ๋ฐ๋ณตํ ๊ฒ์ ์ฃผ์
ํ๋ ๋์ ํ ํ์ ์คํ
์ค๋ฅธ์ชฝ์ผ๋ก ์ด๋๋ ํ๊น ์ํ์ค๋ฅผ ์ฃผ์
ํ ์ ์์ต๋๋ค. ์ด๋ ๊ฒ ํ๋ฉด ๊ฐ ํ์ ์คํ
์์ ๋์ฝ๋๋ ์ด์ ํ๊น ๋ฌธ์๊ฐ ๋ฌด์์ธ์ง ์๊ฒ ๋ฉ๋๋ค. ์ด๋ ๋ ๋ณต์กํ ์ํ์ค-ํฌ-์ํ์ค ๋ฌธ์ ๋ฅผ ๋ค๋ฃจ๋๋ฐ ๋์์ด ๋ฉ๋๋ค.
#
# ๊ฐ ํ๊น ์ํ์ค์ ์ฒซ ๋ฒ์งธ ์ถ๋ ฅ ๋ฌธ์๋ ์ด์ ๋ฌธ์๊ฐ ์๊ธฐ ๋๋ฌธ์ ์ํ์ค ์์(start-of-sequence, sos)์ ๋ํ๋ด๋ ์๋ก์ด ํ ํฐ์ด ํ์ํฉ๋๋ค.
#
# ์ถ๋ก ์์๋ ํ๊น์ ์์ง ๋ชปํ๋ฏ๋ก ๋์ฝ๋์๊ฒ ๋ฌด์์ ์ฃผ์
ํด์ผ ํ ๊น์? sos ํ ํฐ์ ์์ํด์ ํ ๋ฒ์ ํ๋์ ๋ฌธ์๋ฅผ ์์ธกํ๊ณ ๋์ฝ๋์๊ฒ ์ง๊ธ๊น์ง ์์ธกํ ๋ชจ๋ ๋ฌธ์๋ฅผ ์ฃผ์
ํ ์ ์์ต๋๋ค(๋์ค์ ์ด ๋
ธํธ๋ถ์์ ๋ ์์ธํ ์์ ๋ณด๊ฒ ์ต๋๋ค).
#
# ํ์ง๋ง ๋์ฝ๋์ LSTM์ด ์คํ
๋ง๋ค ์ด์ ํ๊น์ ์
๋ ฅ์ผ๋ก ๊ธฐ๋ํ๋ค๋ฉด ์ธ์ฝ๋์ ๋ฒกํฐ ์ถ๋ ฅ์ ์ด๋ป๊ฒ ์ ๋ฌํ ๊น์? ํ๊ฐ์ง ๋ฐฉ๋ฒ์ ์ถ๋ ฅ ๋ฒกํฐ๋ฅผ ๋ฌด์ํ๋ ๊ฒ์
๋๋ค. ๊ทธ๋ฆฌ๊ณ ๋์ ์ธ์ฝ๋์ LSTM ์ํ๋ฅผ ๋์ฝ๋์ LSTM์ ์ด๊ธฐ ์ํ๋ก ์ฌ์ฉํฉ๋๋ค(์ด๋ ๊ฒ ํ๋ ค๋ฉด ์ธ์ฝ๋์ LSTM๊ณผ ๋์ฝ๋์ LSTM ์ ๋ ๊ฐ์๊ฐ ๊ฐ์์ผ ํฉ๋๋ค).
#
# ๊ทธ๋ผ (ํ๋ จ, ๊ฒ์ฆ, ํ
์คํธ๋ฅผ ์ํ) ๋์ฝ๋์ ์
๋ ฅ์ ๋ง๋ค์ด ๋ณด์ฃ . sos ํ ํฐ์ ๊ฐ๋ฅํ ์ถ๋ ฅ ๋ฌธ์์ ๋ง์ง๋ง ID + 1์ผ๋ก ๋ํ๋
๋๋ค.
# +
sos_id = len(OUTPUT_CHARS) + 1
def shifted_output_sequences(Y):
sos_tokens = tf.fill(dims=(len(Y), 1), value=sos_id)
return tf.concat([sos_tokens, Y[:, :-1]], axis=1) # axis = 1 or -1 same meaning
X_train_decoder = shifted_output_sequences(Y_train)
X_valid_decoder = shifted_output_sequences(Y_valid)
X_test_decoder = shifted_output_sequences(Y_test)
# -
# ๋์ฝ๋์ ํ๋ จ ์
๋ ฅ์ ํ์ธํด ๋ณด์ฃ :
X_train_decoder
# ์ ๋จธ์. ใ
ใด ์ด๋ ต๋ค, ๋๋ฐ์ฒด 128์ฐจ์์ ์ด๋ป๊ฒ ์ฅ์ด์ค๊ฑธ๊น?
# +
# Keras Multi Layer Model์ ๋ํด์๋ 'Neural Network ๊ฐ๋
๊ณผ MNIST์์ ' ๋จ์์์ ๊ณต๋ถํ์
จ์ ๊ฑฐ๋ผ ์๊ฐ๋ฉ๋๋ค.
# ๋จผ์ Dense Layer๋ฅผ ์ถ๊ฐํ๋ model.add(Dense(12, activation='softmax'))์ ๊ฒฝ์ฐ๋ 12๋ฅผ ์ฐ๋ ์ด์ ๊ฐ ๋ช
ํํฉ๋๋ค.
# activation='softmax'์ ๊ฐ์ด Activationํจ์๋ฅผ softmax๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์ Multinomial Classification์ ์ฌ์ฉํ๋ ๊ฒ์
๋๋ค.
# ์ ํํ ์ด๋ค ์์ ์ธ์ง๋ ๋ชจ๋ฅด๊ฒ ์ง๋ง ๊ฒฐ๊ณผ๊ฐ์ด 12๊ฐ์ง ์ข
๋ฅ ์ค ํ๋๋ฅผ ์ฐพ์์ฃผ๋ ๊ฒ ๊ฐ๋ค์.
# ๊ฐ๋ น MNIST ์๊ธ์จ ์ธ์์ ๊ฒฝ์ฐ๋ model.add(Dense(10, activation='softmax'))์ผ๋ก 0๋ถํฐ 9๊น์ง ์ด 10๊ฐ์ง ์ข
๋ฅ๋ฅผ ์ง์ ํ๊ฒ ๋๊ณ ๋ณดํต ์ด Layer๋ ๊ฒฐ๊ณผ๋ฅผ ๋ํ๋ด๋ฏ๋ก ๋ง์ง๋ง์ ์ถ๊ฐ๋ฉ๋๋ค.
# model.add(LSTM(128,~์ ๊ฒฝ์ฐ๋ 128์ด Layer์ ํฌํจ๋๋ units์ ๊ฐฏ์๋ผ๋ ์ ์ ๊ฐ์ผ๋ ์ ๋์ ์ธ ์ซ์๋ฅผ ์๋ฏธํ์ง๋ ์์ต๋๋ค.
# ์ฆ ๊ฒฝํ์น์ ์ํด์ LSTM(Long Short-Term Memory layer)์ 128๊ฐ์ units(dimensionality of the output space)์ ์คฌ์๋ ๊ฐ์ฅ ์ข์ ๊ฒฐ๊ณผ๋ฅผ ๊ฐ์ ธ์ค๋ฏ๋ก ์ด๋ ๊ฒ ์ด ๊ฒ์ด๊ณ ์ด ๊ฐ์ ๋ชจ๋ธ์ ํ๋ํ๋ฉด์ ๋ณ๊ฒฝํ์ฌ ์ฌ์ฉํ ์ ์์ต๋๋ค.
# ์์ฆ์ ์ต์ ์ ๊ฐ์ ์ฐพ์์ฃผ๊ฑฐ๋ CNN์ GUI ํด๋ผ์ฐ๋์์ ์ฝ๊ฒ ๊ตฌํด์ฃผ๋ teachable Machine๊ณผ ๊ฐ์ ์๋น์ค๊ฐ ๋์์์ง๋ง
# ์ ํต์ ์ผ๋ก Keras Multi Layer Model์ ๋ง๋ค๋๋ ์ฌ๋ฌ ๋ฒ์ ๊ฒฝํ์น ๊ฐ์ ์ฌ์ฉํ๋ฉฐ ์ต์ ์ ๋ชจ๋ธ์ ๋ง๋ค๊ฒ ๋ฉ๋๋ค.
# -
# ์ด์ ๋ชจ๋ธ์ ๋ง๋ญ๋๋ค. ์ด์ ๋ ์ด์ ๊ฐ๋จํ ์ํ์
๋ชจ๋ธ์ด ์๋๋ฏ๋ก ํจ์ํ API๋ฅผ ์ฌ์ฉํ๊ฒ ์ต๋๋ค:
# +
encoder_embedding_size = 32
decoder_embedding_size = 32
lstm_units = 128
np.random.seed(42)
tf.random.set_seed(42)
encoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)
encoder_embedding = keras.layers.Embedding(
input_dim=len(INPUT_CHARS) + 1,
output_dim=encoder_embedding_size)(encoder_input)
_, encoder_state_h, encoder_state_c = keras.layers.LSTM(
lstm_units, return_state=True)(encoder_embedding)
encoder_state = [encoder_state_h, encoder_state_c]
decoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)
decoder_embedding = keras.layers.Embedding(
input_dim=len(OUTPUT_CHARS) + 2,
output_dim=decoder_embedding_size)(decoder_input)
decoder_lstm_output = keras.layers.LSTM(lstm_units, return_sequences=True)(
decoder_embedding, initial_state=encoder_state)
decoder_output = keras.layers.Dense(len(OUTPUT_CHARS) + 1,
activation="softmax")(decoder_lstm_output)
model = keras.models.Model(inputs=[encoder_input, decoder_input],
outputs=[decoder_output])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=10,
validation_data=([X_valid, X_valid_decoder], Y_valid))
# -
# ์ด ๋ชจ๋ธ๋ 100% ๊ฒ์ฆ ์ ํ๋๋ฅผ ๋ฌ์ฑํ์ง๋ง ๋ ๋น ๋ฆ
๋๋ค.
# ์ด ๋ชจ๋ธ์ ์ฌ์ฉํด ๋ช ๊ฐ์ง ์์ธก์ ์ํํด ๋ณด์ฃ . ์ด๋ฒ์๋ ํ ๋ฌธ์์ฉ ์์ธกํด์ผ ํฉ๋๋ค.
# +
sos_id = len(OUTPUT_CHARS) + 1
def predict_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
Y_pred = tf.fill(dims=(len(X), 1), value=sos_id)
for index in range(max_output_length):
pad_size = max_output_length - Y_pred.shape[1]
X_decoder = tf.pad(Y_pred, [[0, 0], [0, pad_size]])
Y_probas_next = model.predict([X, X_decoder])[:, index:index+1]
Y_pred_next = tf.argmax(Y_probas_next, axis=-1, output_type=tf.int32)
Y_pred = tf.concat([Y_pred, Y_pred_next], axis=1)
return ids_to_date_strs(Y_pred[:, 1:])
# -
# predict_date_strs(["July 14, 1789", "May 01, 2020"])
# ์ ๋์ํ๋ค์! :)
# ### ์ธ ๋ฒ์งธ ๋ฒ์ : TF-Addons์ seq2seq ๊ตฌํ ์ฌ์ฉํ๊ธฐ
# ์ ํํ ๋์ผํ ๋ชจ๋ธ์ ๋ง๋ค์ด ๋ณด์ฃ . ํ์ง๋ง TF-Addon์ seq2seq API๋ฅผ ์ฌ์ฉํ๊ฒ ์ต๋๋ค. ์๋ ๊ตฌํ์ ์ด ๋
ธํธ๋ถ์ ์์ ์๋ TFA ์์ ์ ๊ฑฐ์ ๋น์ทํฉ๋๋ค. ๋ค๋ง ๋ชจ๋ธ ์
๋ ฅ์ ์ถ๋ ฅ ์ํ์ค ๊ธธ์ด๋ฅผ ์ง์ ํ์ง ์์ต๋๋ค(ํ์ง๋ง ์ถ๋ ฅ ์ํ์ค์ ๊ธธ์ด๊ฐ ๋งค์ฐ ๋ค๋ฅธ ํ๋ก์ ํธ์์ ํ์ํ๋ค๋ฉด ์ฝ๊ฒ ์ด๋ฅผ ์ถ๊ฐํ ์ ์์ต๋๋ค).
# +
import tensorflow_addons as tfa
np.random.seed(42)
tf.random.set_seed(42)
encoder_embedding_size = 32
decoder_embedding_size = 32
units = 128
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
encoder_embeddings = keras.layers.Embedding(
len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)
decoder_embedding_layer = keras.layers.Embedding(
len(OUTPUT_CHARS) + 2, decoder_embedding_size)
decoder_embeddings = decoder_embedding_layer(decoder_inputs)
encoder = keras.layers.LSTM(units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,
sampler,
output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings,
initial_state=encoder_state)
Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output)
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
outputs=[Y_proba])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=15,
validation_data=([X_valid, X_valid_decoder], Y_valid))
# -
# ์ฌ๊ธฐ์์๋ 100% ๊ฒ์ฆ ์ ํ๋๋ฅผ ๋ฌ์ฑํ์ต๋๋ค! ์ด ๋ชจ๋ธ์ ์ฌ์ฉํ๊ธฐ ์ํด `predict_date_strs()` ํจ์๋ฅผ ๋ค์ ์ฌ์ฉํ๊ฒ ์ต๋๋ค:
predict_date_strs(["July 14, 1789", "May 01, 2020"])
# ํ์ง๋ง ๋ ํจ์จ์ ์ผ๋ก ์ถ๋ก ์ ์ํํ๋ ๋ฐฉ๋ฒ์ด ์์ต๋๋ค. ์ง๊ธ๊น์ง ์ถ๋ก ์์ ์๋ก์ด ๋ฌธ์๋ง๋ค ๋ชจ๋ธ์ ์คํํ์ต๋๋ค. ํ์ง๋ง`TrainingSampler` ๋์ ์ `GreedyEmbeddingSampler`๋ฅผ ์ฌ์ฉํ๋ ์๋ก์ด ๋์ฝ๋๋ฅผ ๋ง๋ค ์ ์์ต๋๋ค.
#
# ํ์ ์คํ
๋ง๋ค `GreedyEmbeddingSampler`๊ฐ ๋์ฝ๋์ ์ถ๋ ฅ์ argmax๋ฅผ ๊ณ์ฐํ๊ณ , ๋์ฝ๋ ์๋ฒ ๋ฉ ์ธต์ ํตํด ํ ํฐ ID๋ฅผ ์ป์ ์ ์์ต๋๋ค. ๊ทธ๋ค์ ๋ค์ ํ์ ์คํ
์ ๋ง๋ค์ด์ง ์๋ฒ ๋ฉ์ ๋์ฝ๋์ LSTM ์
์ ์ฃผ์
ํฉ๋๋ค. ์ด๋ฐ ๋ฐฉ๋ฒ์ ํตํด ๋์ฝ๋๋ฅผ ํ ๋ฒ๋ง ์คํํ์ฌ ์ ์ฒด ์์ธก์ ์ป์ ์ ์์ต๋๋ค.
# +
inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler(
embedding_fn=decoder_embedding_layer)
inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
decoder_cell, inference_sampler, output_layer=output_layer,
maximum_iterations=max_output_length)
batch_size = tf.shape(encoder_inputs)[:1]
start_tokens = tf.fill(dims=batch_size, value=sos_id)
final_outputs, final_state, final_sequence_lengths = inference_decoder(
start_tokens,
initial_state=encoder_state,
start_tokens=start_tokens,
end_token=0)
inference_model = keras.models.Model(inputs=[encoder_inputs],
outputs=[final_outputs.sample_id])
# -
# ๋ช ๊ฐ์ ๋
ธํธ:
# * `GreedyEmbeddingSampler`๋ `start_tokens`(๋์ฝ๋ ์ํ์ค๋ง๋ค sos ID๋ฅผ ๋ด์ ๋ฒกํฐ)์ `end_token`(๋ชจ๋ธ์ด ์ด ํ ํฐ์ ์ถ๋ ฅํ ๋ ๋์ฝ๋๊ฐ ์ํ์ค ๋์ฝ๋ฉ์ ๋ฉ์ถฅ๋๋ค)์ด ํ์ํฉ๋๋ค.
# * `BasicDecoder`๋ฅผ ๋ง๋ค ๋ `maximum_iterations`๋ฅผ ์ค์ ํด์ผ ํฉ๋๋ค. ๊ทธ๋ ์ง ์์ผ๋ฉด ๋ฌดํํ๊ฒ ๋ฐ๋ณตํ ์ ์์ต๋๋ค(์ ์ด๋ ํ๋์ ์ํ์ค์์ ๋ชจ๋ธ์ด `end_token`์ ์ถ๋ ฅํ์ง ์๋๋ค๋ฉด). ์ด๋ ๊ฒ ๋๋ฉด ์ฃผํผํฐ ์ปค๋์ ์ฌ์์ํด์ผ ํฉ๋๋ค.
# * ๋ชจ๋ ๋์ฝ๋ ์
๋ ฅ์ด ์ด์ ํ์ ์คํ
์ ์ถ๋ ฅ์ ๊ธฐ๋ฐ์ผ๋ก ๋์ ์ผ๋ก ์์ฑ๋๊ธฐ ๋๋ฌธ์ ๋์ฝ๋ ์
๋ ฅ์ ๋ ์ด์ ํ์ํ์ง ์์ต๋๋ค.
# * ๋ชจ๋ธ์ ์ถ๋ ฅ์ `final_outputs.rnn_outputs`์ ์ํํธ๋งฅ์ค๊ฐ ์๋๋ผ `final_outputs.sample_id`์
๋๋ค. ๋ก์ง ๊ฐ์ ์ป๊ณ ์ถ๋ค๋ฉด `final_outputs.sample_id`์ `final_outputs.rnn_outputs`์ผ๋ก ๋ฐ๊พธ์ธ์.
model.summary()
# ์ด์ ์ด ๋ชจ๋ธ์ ์ฌ์ฉํ๋ ๊ฐ๋จํ ํจ์๋ฅผ ์์ฑํ์ฌ ๋ ์ง ํฌ๋งท ๋ณํ์ ์ํํ ์ ์์ต๋๋ค:
def fast_predict_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
Y_pred = inference_model.predict(X)
return ids_to_date_strs(Y_pred)
fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
# ์๋๋ฅผ ํ์ธํด ๋ณด์ฃ :
# %timeit predict_date_strs(["July 14, 1789", "May 01, 2020"])
# %timeit fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
# 10๋ฐฐ ์ด์ ๋น ๋ฆ
๋๋ค! ๊ธด ์ํ์ค๋ฅผ ๋ค๋ฃฐ ๋ ์๋๋ ๋ ์ฐจ์ด๊ฐ ๋ ๊ฒ์
๋๋ค.
# ### ๋ค ๋ฒ์งธ ๋ฒ์ : ์ค์ผ์ค ์ํ๋ฌ๋ฅผ ์ฌ์ฉํ๋ TF-Addons์ seq2seq ๊ตฌํ
# **๊ฒฝ๊ณ **: TF ๋ฒ๊ทธ ๋๋ฌธ์ ์ด ๋ฒ์ ์ ํ
์ํ๋ก 2.2 ์ด์์์๋ง ๋์ํฉ๋๋ค.
# ์ด์ ๋ชจ๋ธ์ ํ๋ จํ ๋ ๋งค ํ์ ์คํ
_t_์์ ํ์ ์คํ
_t_-1์ ํ๊น ํ ํฐ์ ๋ชจ๋ธ์๊ฒ ์ ๋ฌํฉ๋๋ค. ํ์ง๋ง ์ถ๋ก ์์๋ ๋ชจ๋ธ์ด ํ์ ์คํ
๋ง๋ค ์ด์ ํ๊น์ ์ป์ ์ ์์ต๋๋ค. ๋์ ์ ์ด์ ์์ธก์ ์ฌ์ฉํฉ๋๋ค. ๋ฐ๋ผ์ ์ด๋ฐ ํ๋ จ๊ณผ ์ถ๋ก ์ฌ์ด์ ์ฐจ์ด๊ฐ ์ค๋ง์ค๋ฌ์ด ์ฑ๋ฅ์ผ๋ก ์ด์ด์ง ์ ์์ต๋๋ค. ์ด๋ฅผ ์ํํ๊ธฐ ์ํด ํ๋ จํ๋ ๋์ ํ๊น์ ์์ธก์ผ๋ก ์ ์ง์ ์ผ๋ก ๋ฐ๊ฟ ์ ์์ต๋๋ค. ์ด๋ ๊ฒ ํ๋ ค๋ฉด `TrainingSampler`๋ฅผ `ScheduledEmbeddingTrainingSampler`๋ฅผ ๋ฐ๊พธ๊ธฐ๋ง ํ๋ฉด ๋ฉ๋๋ค. ๊ทธ๋ฆฌ๊ณ `sampling_probability`(๋์ฝ๋๊ฐ ์ด์ ํ์ ์คํ
์ ํ๊น ๋์ ์ ์ด์ ํ์ ์คํ
์ ์์ธก์ ์ฌ์ฉํ ํ๋ฅ )๋ฅผ ์ ์ง์ ์ผ๋ก ์ฆ๊ฐ์ํค๊ธฐ ์ํด ์ผ๋ผ์ค ์ฝ๋ฐฑ์ ์ฌ์ฉํฉ๋๋ค.
# +
import tensorflow_addons as tfa
np.random.seed(42)
tf.random.set_seed(42)
n_epochs = 20
encoder_embedding_size = 32
decoder_embedding_size = 32
units = 128
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
encoder_embeddings = keras.layers.Embedding(
len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)
decoder_embedding_layer = keras.layers.Embedding(
len(OUTPUT_CHARS) + 2, decoder_embedding_size)
decoder_embeddings = decoder_embedding_layer(decoder_inputs)
#----------------------------------------------------------------------------
encoder = keras.layers.LSTM(units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.ScheduledEmbeddingTrainingSampler(
sampling_probability=0.,
embedding_fn=decoder_embedding_layer)
# sampler๋ฅผ ๋ง๋ค ๋ค์ sampling_probability๋ฅผ ์ง์ ํด์ผ ํฉ๋๋ค.
# (see https://github.com/tensorflow/addons/pull/1714)
sampler.sampling_probability = tf.Variable(0.)
decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,
sampler,
output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings,
initial_state=encoder_state)
Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output)
#----------------------------------------------------------------------------
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
outputs=[Y_proba])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
def update_sampling_probability(epoch, logs):
proba = min(1.0, epoch / (n_epochs - 10))
sampler.sampling_probability.assign(proba)
sampling_probability_cb = keras.callbacks.LambdaCallback(
on_epoch_begin=update_sampling_probability)
history = model.fit([X_train, X_train_decoder], Y_train, epochs=n_epochs,
validation_data=([X_valid, X_valid_decoder], Y_valid),
callbacks=[sampling_probability_cb])
# -
# ๊ฒ์ฆ ์ ํ๋๊ฐ 100%๋ ์๋์ง๋ง ์ถฉ๋ถํ ๊ฐ๊น์ต๋๋ค!
# ์ถ๋ก ์์๋ `GreedyEmbeddingSampler`๋ฅผ ์ฌ์ฉํด ์์์์ ๋์ผํ ์์
์ ์ํํ ์ ์์ต๋๋ค. ํ์ง๋ง ์์ฑ๋๋ฅผ ๋์ด๊ธฐ ์ํด `SampleEmbeddingSampler`๋ฅผ ์ฌ์ฉํ๊ฒ ์ต๋๋ค. ํ ํฐ ID๋ฅผ ์ฐพ๊ธฐ ์ํด ๋ชจ๋ธ ์ถ๋ ฅ์ argmax๋ฅผ ์ ์ฉํ๋ ๋์ ๋ก์ง ์ถ๋ ฅ์์ ๋๋คํ๊ฒ ํ ํฐ ID๋ฅผ ์ํ๋งํ๋ ๊ฒ๋ง ๋ค๋ฅด๊ณ ๊ฑฐ์ ๋์ผํฉ๋๋ค. ํ
์คํธ๋ฅผ ์์ฑํ๋ ์์
์ ์ ์ฉํฉ๋๋ค. `softmax_temperature` ๋งค๊ฐ๋ณ์๋ ์ธ์ต์คํผ์ด์ ๊ฐ์ ํ
์คํธ๋ฅผ ์์ฑํ์ ๋์ ๊ฐ์ ๋ชฉ์ ์ ๊ฐ์ง๋๋ค(์ด ๋งค๊ฐ๋ณ์ ๊ฐ์ด ๋์์๋ก ๋ ๋๋คํ ํ
์คํธ๊ฐ ์์ฑ๋ฉ๋๋ค).
# +
softmax_temperature = tf.Variable(1.)
inference_sampler = tfa.seq2seq.sampler.SampleEmbeddingSampler(
embedding_fn=decoder_embedding_layer,
softmax_temperature=softmax_temperature)
inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
decoder_cell, inference_sampler, output_layer=output_layer,
maximum_iterations=max_output_length)
batch_size = tf.shape(encoder_inputs)[:1]
start_tokens = tf.fill(dims=batch_size, value=sos_id)
final_outputs, final_state, final_sequence_lengths = inference_decoder(
start_tokens,
initial_state=encoder_state,
start_tokens=start_tokens,
end_token=0)
inference_model = keras.models.Model(inputs=[encoder_inputs],
outputs=[final_outputs.sample_id])
# -
def creative_predict_date_strs(date_strs, temperature=1.0):
softmax_temperature.assign(temperature)
X = prepare_date_strs_padded(date_strs)
Y_pred = inference_model.predict(X)
return ids_to_date_strs(Y_pred)
# +
tf.random.set_seed(42)
creative_predict_date_strs(["July 14, 1789", "May 01, 2020"])
# -
# ๊ธฐ๋ณธ ์จ๋์์ ๋ ์ง๊ฐ ๊ด์ฐฎ์ ๊ฒ ๊ฐ๊ตฐ์. ์จ๋๋ฅผ ์กฐ๊ธ ๋ ์ฌ๋ ค ๋ณด์ฃ :
# +
tf.random.set_seed(42)
creative_predict_date_strs(["July 14, 1789", "May 01, 2020"],
temperature=5.)
# -
# ์ด๋ฐ ๋ ์ง๊ฐ ๋๋ฌด ๋๋คํ๋ค์. "์ฐฝ์์ ์ธ" ๋ ์ง๋ผ๊ณ ๋ถ๋ฅด์ฃ .
# ### ๋ค์ฏ ๋ฒ์งธ ๋ฒ์ : TFA seq2seq, ์ผ๋ผ์ค ์๋ธํด๋์ฑ API, ์ดํ
์
๋ฉ์ปค๋์ฆ ์ฌ์ฉํ๊ธฐ
# ์ด ๋ฌธ์ ์ ์ํ์ค๋ ๊ฝค ์งง์ง๋ง ๊ธด ์ํ์ค๋ฅผ ์ฒ๋ฆฌํ๋ ค๋ฉด ์ดํ
์
๋ฉ์ปค๋์ฆ์ ์ฌ์ฉํด์ผ ํ ๊ฒ์
๋๋ค. ์ง์ ์ดํ
์
๋ฉ์ปค๋์ฆ์ ๊ตฌํํ ์ ์์ง๋ง TF-Addons์ ์๋ ๊ตฌํ์ ์ฌ์ฉํ๋ ๊ฒ์ด ๋ ๊ฐ๋จํ๊ณ ํจ์จ์ ์
๋๋ค. ์ผ๋ผ์ค ์๋ธํด๋์ฑ API๋ฅผ ์ฌ์ฉํด์ ๋ง๋ค์ด ๋ณด์ฃ .
#
# **๊ฒฝ๊ณ **: ํ
์ํ๋ก ๋ฒ๊ทธ([์ด์](https://github.com/tensorflow/addons/issues/1153) ์ฐธ์กฐ) ๋๋ฌธ์ ์ฆ์ ์คํ ๋ชจ๋(eager mode)์์ `get_initial_state()` ๋ฉ์๋๊ฐ ์คํจํฉ๋๋ค. ๋ฐ๋ผ์ ์ง๊ธ์ `call()` ๋ฉ์๋์์ `tf.function()`์ ์๋์ผ๋ก ํธ์ถํ๋ (๋ฐ๋ผ์ ๊ทธ๋ํ ๋ชจ๋๋ก ์คํํ๋) ์ผ๋ผ์ค ์๋ธํด๋์ฑ API๋ฅผ ์ฌ์ฉํด์ผ ํฉ๋๋ค.
# ์ด ๊ตฌํ์์๋ ๊ฐ๋จํ๊ฒ ๋ง๋ค๊ธฐ ์ํด ๋ค์ `TrainingSampler`๋ฅผ ์ฌ์ฉํฉ๋๋ค(ํ์ง๋ง `ScheduledEmbeddingTrainingSampler`๋ฅผ ์ฌ์ฉํด ์ฝ๊ฒ ๋ฐ๊ฟ ์ ์์ต๋๋ค). ์ถ๋ก ์๋ `GreedyEmbeddingSampler`๋ฅผ ์ฌ์ฉํฉ๋๋ค:
class DateTranslation(keras.models.Model):
def __init__(self, units=128, encoder_embedding_size=32,
decoder_embedding_size=32, **kwargs):
super().__init__(**kwargs)
self.encoder_embedding = keras.layers.Embedding(
input_dim=len(INPUT_CHARS) + 1,
output_dim=encoder_embedding_size)
self.encoder = keras.layers.LSTM(units,
return_sequences=True,
return_state=True)
self.decoder_embedding = keras.layers.Embedding(
input_dim=len(OUTPUT_CHARS) + 2,
output_dim=decoder_embedding_size)
self.attention = tfa.seq2seq.LuongAttention(units)
decoder_inner_cell = keras.layers.LSTMCell(units)
self.decoder_cell = tfa.seq2seq.AttentionWrapper(
cell=decoder_inner_cell,
attention_mechanism=self.attention)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
self.decoder = tfa.seq2seq.BasicDecoder(
cell=self.decoder_cell,
sampler=tfa.seq2seq.sampler.TrainingSampler(),
output_layer=output_layer)
self.inference_decoder = tfa.seq2seq.BasicDecoder(
cell=self.decoder_cell,
sampler=tfa.seq2seq.sampler.GreedyEmbeddingSampler(
embedding_fn=self.decoder_embedding),
output_layer=output_layer,
maximum_iterations=max_output_length)
def call(self, inputs, training=None):
encoder_input, decoder_input = inputs
encoder_embeddings = self.encoder_embedding(encoder_input)
encoder_outputs, encoder_state_h, encoder_state_c = self.encoder(
encoder_embeddings,
training=training)
encoder_state = [encoder_state_h, encoder_state_c]
self.attention(encoder_outputs,
setup_memory=True)
decoder_embeddings = self.decoder_embedding(decoder_input)
decoder_initial_state = self.decoder_cell.get_initial_state(
decoder_embeddings)
decoder_initial_state = decoder_initial_state.clone(
cell_state=encoder_state)
if training:
decoder_outputs, _, _ = self.decoder(
decoder_embeddings,
initial_state=decoder_initial_state,
training=training)
else:
start_tokens = tf.zeros_like(encoder_input[:, 0]) + sos_id
decoder_outputs, _, _ = self.inference_decoder(
decoder_embeddings,
initial_state=decoder_initial_state,
start_tokens=start_tokens,
end_token=0)
return tf.nn.softmax(decoder_outputs.rnn_output)
# +
# ์ ์ํ ๋น ์์น ์ฝ๋
class DateTranslation_BeamSearch(keras.models.Model):
def __init__(self, units=128, encoder_embedding_size=32,
decoder_embedding_size=32, beam_width=10, **kwargs):
super().__init__(**kwargs)
self.beam_width = beam_width
self.encoder_embedding = keras.layers.Embedding(
input_dim=len(INPUT_CHARS) + 1,
output_dim=encoder_embedding_size)
self.encoder = keras.layers.LSTM(units, return_state=True)
self.decoder_embedding = keras.layers.Embedding(
input_dim=len(OUTPUT_CHARS) + 2,
output_dim=decoder_embedding_size)
self.decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
self.decoder = tfa.seq2seq.BeamSearchDecoder(cell=decoder_cell,
beam_width=beam_width,
sampler=tfa.seq2seq.sampler.TrainingSampler(),
output_layer=output_layer)
def call(self, inputs):
encoder_input, decoder_input = inputs
encoder_embeddings = self.encoder_embedding(encoder_input)
encoder_outputs, encoder_state_h, encoder_state_c = self.encoder(encoder_embeddings)
encoder_state = [encoder_state_h, encoder_state_c]
self.decoder_cell(encoder_outputs, setup_memory=True)
decoder_embeddings = self.decoder_embedding(decoder_input)
decoder_initial_state = tfa.seq2seq.tile_batch(encoder_state, multiplier=self.beam_width)
start_tokens = tf.fill(dims=batch_size, value=sos_id)
decoder_outputs, _, _ = self.decoder(
decoder_embeddings,
initial_state=decoder_initial_state,
start_tokens=start_tokens,
end_token=0)
return tf.nn.softmax(decoder_outputs.rnn_output)
# -
'''# ํด๋น ์ฝ๋๋ ๋น ๊ฒ์ ์ฝ๋ ๊ตฌํ ํ
์คํธ ์ค
import tensorflow_addons as tfa
np.random.seed(42)
tf.random.set_seed(42)
encoder_embedding_size = 32
decoder_embedding_size = 32
units = 128
beam_width = 10
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
encoder_embeddings = keras.layers.Embedding(
len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)
decoder_embedding_layer = keras.layers.Embedding(
len(OUTPUT_CHARS) + 2, decoder_embedding_size)
decoder_embeddings = decoder_embedding_layer(decoder_inputs)
#>>
#---------------------------------------------------------------------------------
encoder = keras.layers.LSTM(units, return_state=True)
#>>
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
decoder = tfa.seq2seq.BeamSearchDecoder(cell=decoder_cell,
beam_width=beam_width,
sampler=sampler,
output_layer=output_layer)
decoder_initial_state = tfa.seq2seq.tile_batch(encoder_state, multiplier=beam_width)
final_outputs, final_state, final_sequence_lengths = decoder(decoder_embeddings,
start_tokens=start_tokens, end_token=0,
initial_state=decoder_initial_state)
Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output)
#---------------------------------------------------------------------------------
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
outputs=[Y_proba])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=15,
validation_data=([X_valid, X_valid_decoder], Y_valid))'''
'''# ํด๋น ์ฝ๋๋ ๋น ๊ฒ์ ์ฝ๋ ๊ตฌํ ํ
์คํธ ์ค
beam_width = 10
decoder = tfa.seq2seq.BeamSearchDecoder(
cell=decoder_cell,
beam_width=beam_width,
output_layer=output_layer)
decoder_initial_state = tfa.seq2seq.tile_batch(encoder_state, multiplier=beam_width)
outputs, _h, _c = decoder(decoder_embeddings,
start_tokens=start_tokens, end_token=0,
initial_state=decoder_initial_state)'''
# +
np.random.seed(42)
tf.random.set_seed(42)
model = DateTranslation()
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=25,
validation_data=([X_valid, X_valid_decoder], Y_valid))
# +
np.random.seed(42)
tf.random.set_seed(42)
model = DateTranslation_BeamSearch()
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=25,
validation_data=([X_valid, X_valid_decoder], Y_valid))
# -
# 100% ๊ฒ์ฆ ์ ํ๋๋ ์๋์ง๋ง ๋งค์ฐ ๊ฐ๊น์ต๋๋ค. ์๋ ดํ๋๋ฐ ์กฐ๊ธ ์ค๋ ๊ฑธ๋ ธ์ง๋ง ๋ฐ๋ณต๋ง๋ค ํ๋ผ๋ฏธํฐ์ ๊ณ์ฐ๋์ด ๋ง์ต๋๋ค. ๊ทธ๋ฆฌ๊ณ ์ค์ผ์ค ์ํ๋ฌ๋ฅผ ์ฌ์ฉํ์ง ์์์ต๋๋ค
#
# ์ด ๋ชจ๋ธ์ ์ฌ์ฉํ๊ธฐ ์ํด ๋ ๋ค๋ฅธ ์์ ํจ์๋ฅผ ๋ง๋ญ๋๋ค:
def fast_predict_date_strs_v2(date_strs):
X = prepare_date_strs_padded(date_strs)
X_decoder = tf.zeros(shape=(len(X), max_output_length), dtype=tf.int32)
Y_probas = model.predict([X, X_decoder])
Y_pred = tf.argmax(Y_probas, axis=-1)
return ids_to_date_strs(Y_pred)
fast_predict_date_strs_v2(["July 14, 1789", "May 01, 2020"])
# TF-Addons์๋ ๋ช ๊ฐ์ง ํฅ๋ฏธ๋ก์ด ๊ธฐ๋ฅ์ด ์์ต๋๋ค:
# * ์ถ๋ก ์ `BasicDecoder` ๋์ `BeamSearchDecoder`๋ฅผ ์ฌ์ฉํ๋ฉด ๊ฐ์ฅ ๋์ ํ๋ฅ ์ ๋ฌธ์๋ฅผ ์ถ๋ ฅํ๋ ๋์ ๋์ฝ๋๊ฐ ๋ช ๊ฐ์ ํ๋ณด ์ค์์ ๊ฐ์ฅ ๊ฐ๋ฅ์ฑ ์๋ ์ํ์ค๋ง ์ ์งํฉ๋๋ค(์์ธํ ๋ด์ฉ์ ์ฑ
16์ฅ์ ์ฐธ๊ณ ํ์ธ์).
# * ์
๋ ฅ์ด๋ ํ๊น ์ํ์ค์ ๊ธธ์ด๊ฐ ๋งค์ฐ ๋ค๋ฅด๋ฉด ๋ง์คํฌ๋ฅผ ์ค์ ํ๊ฑฐ๋ `sequence_length`๋ฅผ ์ง์ ํฉ๋๋ค.
# * `ScheduledEmbeddingTrainingSampler` ๋ณด๋ค ๋ ์ ์ฐํ `ScheduledOutputTrainingSampler`์ ์ฌ์ฉํ์ฌ ํ์ ์คํ
_t_์ ์ถ๋ ฅ์ ํ์ ์คํ
_t_+1์ ์ฃผ์
ํ๋ ๋ฐฉ๋ฒ์ ๊ฒฐ์ ํฉ๋๋ค. ๊ธฐ๋ณธ์ ์ผ๋ก argmax๋ก ID๋ฅผ ์ฐพ์ง ์๊ณ ์๋ฒ ๋ฉ ์ธต์ ํต๊ณผ์์ผ ์ถ๋ ฅ์ ์
์ ๋ฐ๋ก ์ฃผ์
ํฉ๋๋ค. ๋๋ `next_inputs_fn` ํจ์๋ฅผ ์ง์ ํ์ฌ ์
์ถ๋ ฅ์ ๋ค์ ์คํ
์ ์
๋ ฅ์ผ๋ก ๋ณํํ ์ ์์ต๋๋ค.
| mytest/DeepLearn&NeuralNetwork/16_prac_2_err copy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (Data Science)
# language: python
# name: python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0
# ---
# # Invoke SageMaker Autopilot Model from Athena
# Machine Learning (ML) with Amazon Athena (Preview) lets you use Athena to write SQL statements that run Machine Learning (ML) inference using Amazon SageMaker. This feature simplifies access to ML models for data analysis, eliminating the need to use complex programming methods to run inference.
#
# To use ML with Athena (Preview), you define an ML with Athena (Preview) function with the `USING FUNCTION` clause. The function points to the Amazon SageMaker model endpoint that you want to use and specifies the variable names and data types to pass to the model. Subsequent clauses in the query reference the function to pass values to the model. The model runs inference based on the values that the query passes and then returns inference results.
# <img src="img/athena_model.png" width="50%" align="left">
# # Pre-Requisite
#
# ## *Please note that ML with Athena is in Preview and will only work in the following regions that support Preview Functionality:*
#
# ## *us-east-1, us-west-2, ap-south-1, eu-west-1*
#
# ### Check if you current regions supports AthenaML Preview
# +
import boto3
import sagemaker
import pandas as pd
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name="sagemaker", region_name=region)
# -
if region in ["eu-west-1", "ap-south-1", "us-east-1", "us-west-2"]:
print(" [OK] AthenaML IS SUPPORTED IN {}".format(region))
print(" [OK] Please proceed with this notebook.")
else:
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print(" [ERROR] AthenaML IS *NOT* SUPPORTED IN {} !!".format(region))
print(" [INFO] This is OK. SKIP this notebook and move ahead with the workshop.")
print(" [INFO] This notebook is not required for the rest of this workshop.")
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
# # Pre-Requisite
#
# ## _Please wait for the Autopilot Model to deploy!! Otherwise, this notebook won't work properly._
# %store -r autopilot_endpoint_name
try:
autopilot_endpoint_name
print("[OK]")
except NameError:
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print("[ERROR] There is no Autopilot Model Endpoint deployed.")
print("[INFO] This is OK. Just skip this notebook and move ahead with the next notebook.")
print("[INFO] This notebook is not required for the rest of this workshop.")
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print(autopilot_endpoint_name)
try:
resp = sm.describe_endpoint(EndpointName=autopilot_endpoint_name)
status = resp["EndpointStatus"]
if status == "InService":
print("[OK] Your Autopilot Model Endpoint is in status: {}".format(status))
elif status == "Creating":
print("[INFO] Your Autopilot Model Endpoint is in status: {}".format(status))
print("[INFO] Waiting for the endpoint to be InService. Please be patient. This might take a few minutes.")
sm.get_waiter("endpoint_in_service").wait(EndpointName=autopilot_endpoint_name)
else:
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print("[ERROR] Your Autopilot Model is in status: {}".format(status))
print("[INFO] This is OK. Just skip this notebook and move ahead with the next notebook.")
print("[INFO] This notebook is not required for the rest of this workshop.")
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
except:
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print("[ERROR] There is no Autopilot Model Endpoint deployed.")
print("[INFO] This is OK. Just skip this notebook and move ahead with the next notebook.")
print("[INFO] This notebook is not required for the rest of this workshop.")
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
# ## Import PyAthena
from pyathena import connect
# # Create an Athena Table with Sample Reviews
# ## Check for Athena TSV Table
# %store -r ingest_create_athena_table_tsv_passed
try:
ingest_create_athena_table_tsv_passed
except NameError:
print("++++++++++++++++++++++++++++++++++++++++++++++")
print("[ERROR] YOU HAVE TO RUN ALL NOTEBOOKS IN THE `INGEST` SECTION.")
print("++++++++++++++++++++++++++++++++++++++++++++++")
print(ingest_create_athena_table_tsv_passed)
if not ingest_create_athena_table_tsv_passed:
print("++++++++++++++++++++++++++++++++++++++++++++++")
print("[ERROR] YOU HAVE TO RUN ALL NOTEBOOKS IN THE `INGEST` SECTION.")
print("++++++++++++++++++++++++++++++++++++++++++++++")
else:
print("[OK]")
s3_staging_dir = "s3://{}/athena/staging".format(bucket)
tsv_prefix = "amazon-reviews-pds/tsv"
database_name = "dsoaws"
table_name_tsv = "amazon_reviews_tsv"
table_name = "product_reviews"
# +
statement = """
CREATE TABLE IF NOT EXISTS {}.{} AS
SELECT review_id, review_body
FROM {}.{}
""".format(
database_name, table_name, database_name, table_name_tsv
)
print(statement)
# +
import pandas as pd
if region in ["eu-west-1", "ap-south-1", "us-east-1", "us-west-2"]:
conn = connect(region_name=region, s3_staging_dir=s3_staging_dir)
pd.read_sql(statement, conn)
print("[OK]")
else:
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
print(" [ERROR] AthenaML IS *NOT* SUPPORTED IN {} !!".format(region))
print(" [INFO] This is OK. SKIP this notebook and move ahead with the workshop.")
print(" [INFO] This notebook is not required for the rest of this workshop.")
print("+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++")
# -
if region in ["eu-west-1", "ap-south-1", "us-east-1", "us-west-2"]:
statement = "SELECT * FROM {}.{} LIMIT 10".format(database_name, table_name)
conn = connect(region_name=region, s3_staging_dir=s3_staging_dir)
df_table = pd.read_sql(statement, conn)
print(df_table)
# ## Add the Required `AmazonAthenaPreviewFunctionality` Work Group to Use This Preview Feature
# +
from botocore.exceptions import ClientError
client = boto3.client("athena")
if region in ["eu-west-1", "ap-south-1", "us-east-1", "us-west-2"]:
try:
response = client.create_work_group(Name="AmazonAthenaPreviewFunctionality")
print(response)
except ClientError as e:
if e.response["Error"]["Code"] == "InvalidRequestException":
print("[OK] Workgroup already exists.")
else:
print("[ERROR] {}".format(e))
# -
# # Create SQL Query
# The `USING FUNCTION` clause specifies an ML with Athena (Preview) function or multiple functions that can be referenced by a subsequent `SELECT` statement in the query. You define the function name, variable names, and data types for the variables and return values.
# +
statement = """
USING FUNCTION predict_star_rating(review_body VARCHAR)
RETURNS VARCHAR TYPE
SAGEMAKER_INVOKE_ENDPOINT WITH (sagemaker_endpoint = '{}'
)
SELECT review_id, review_body, predict_star_rating(REPLACE(review_body, ',', ' ')) AS predicted_star_rating
FROM {}.{} LIMIT 10
""".format(
autopilot_endpoint_name, database_name, table_name
)
print(statement)
# -
# # Query the Autopilot Endpoint using Data from the Athena Table
if region in ["eu-west-1", "ap-south-1", "us-east-1", "us-west-2"]:
conn = connect(region_name=region, s3_staging_dir=s3_staging_dir, work_group="AmazonAthenaPreviewFunctionality")
df = pd.read_sql(statement, conn)
print(df)
# # Delete Endpoint
# +
sm = boto3.client("sagemaker")
if autopilot_endpoint_name:
sm.delete_endpoint(EndpointName=autopilot_endpoint_name)
# + language="html"
#
# <p><b>Shutting down your kernel for this notebook to release resources.</b></p>
# <button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
#
# <script>
# try {
# els = document.getElementsByClassName("sm-command-button");
# els[0].click();
# }
# catch(err) {
# // NoOp
# }
# </script>
# + language="javascript"
#
# try {
# Jupyter.notebook.save_checkpoint();
# Jupyter.notebook.session.delete();
# }
# catch(err) {
# // NoOp
# }
| 09_deploy/01_Invoke_SageMaker_Autopilot_Model_From_Athena.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # bytedesign
#
# The example is adapted from the python turtle bytedesign.
#
# It takes a while to finish drawing.
from jp_turtle import bytedesign
designer = bytedesign.Designer()
designer.speed(0)
designer.hideturtle()
designer.draw_limit, designer.draw_count
designer.screen.auto_flush = False
designer.screen.element.allow_auto_redraw(False)
try:
designer.design(designer.position(), 1)
finally:
designer.screen.flush()
designer.screen.auto_flush = True
designer.screen.element.allow_auto_redraw(True)
| notebooks/bytedesign.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# ๊ทธ๋ํ, ์ํ ๊ธฐ๋ฅ ์ถ๊ฐ
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# ๊ธฐํธ ์ฐ์ฐ ๊ธฐ๋ฅ ์ถ๊ฐ
# Add symbolic operation capability
import sympy as sy
# -
# # ์์ํ์คํ ๋จ์์ง์ง๋ณด์ ๋ฐ๋ ฅ<br>Reaction forces of a simple supported beam under a general load
#
# ๋ค์๊ณผ ๊ฐ์ ๋ณด์ ๋ฐ๋ ฅ์ ๊ตฌํด ๋ณด์.<br>
# Let's try to find the reaction forces of the following beam.
#
#
# ๋ณด์ ๊ธธ์ด:<br>
# Length of the beam:
#
#
# +
L = sy.symbols('L[m]', real=True, nonnegative=True)
# -
L
# ์๋จ ๋จ์ ์ง์ง์ธ ๊ฒฝ์ฐ๋ x๋ฐฉํฅ ํ๋, y๋ฐฉํฅ ๋๊ฐ์ ๋ฐ๋ ฅ์ ๊ฐ์ ํ ์ ์๋ค.<br>
# Simple supports at both ends would have three reaction forces: one in $x$ and two in $y$ directions.
#
#
# +
R_Ax, R_Ay, R_By = sy.symbols('R_{Ax}[N] R_{Ay}[N] R_{By}[N]', real=True)
# +
R_Ax
# +
R_Ay
# +
R_By
# -
# $R_{Ax}$ ๋ $-\infty$ ๋ฐฉํฅ, $R_{Ay}$ ์ $R_{By}$ ๋ $+\infty$ ๋ฐฉํฅ์ด ์์ ๋ฐฉํฅ์ผ๋ก ๊ฐ์ ํ์.<br>
# Let's assume $R_{Ax}$ is positive in $-\infty$ direction. Also $R_{Ay}$ and $R_{By}$ would be positive in $+\infty$ direction.
#
# ํ์ค ๋ฒกํฐ์ ์ฑ๋ถ:<br>
# Components of the load vector:
#
#
# +
F_x, F_y = sy.symbols('F_{x}[N] F_{y}[N]', real=True)
# +
F_x
# +
F_y
# -
# $F_{x}$ ๋ $+\infty$ ๋ฐฉํฅ, $F_{y}$ ๋ $-\infty$ ๋ฐฉํฅ์ด ์์ ๋ฐฉํฅ์ผ๋ก ๊ฐ์ ํ์.<br>
# Let's assume $F_{x}$ and $F_{y}$ are positive in $+\infty$ and $-\infty$ directions, respectively.
#
#
# ๋ฐ์นจ์ A๋ก๋ถํฐ ํ์ค์ ์๋ ์์น ๋ฒกํฐ์ ์ฑ๋ถ:<br>
# Components of the location vector of load relative to support A:
#
#
# +
P_x, P_y = sy.symbols('P_{x}[m] P_{y}[m]', real=True)
# +
P_x
# +
P_y
# -
# $x$ ๋ฐฉํฅ ํ์ ํํ<br>Force equilibrium in $x$ direction
#
#
# +
x_eq = sy.Eq(R_Ax, F_x)
# +
x_eq
# -
# $y$ ๋ฐฉํฅ ํ์ ํํ<br>Force equilibrium in $y$ direction
#
#
# +
y_eq = sy.Eq(R_Ay+R_By, F_y)
# +
y_eq
# -
# A ์ ์ค์ฌ์ ๋ชจ๋ฉํธ ํํ<br>
# Moment equilibrium at A
#
#
# +
A_eq = sy.Eq(-P_y * F_x + P_x * F_y +R_By * L )
# +
A_eq
# -
# ์ฐ๋ฆฝํ์ฌ ๋ฐ๋ ฅ์ ๊ดํ์ฌ ํ๋ฉด ๋ค์๊ณผ ๊ฐ๋ค.<br>
# Solving the system of the equations about the reaction forces would give the followings.
#
#
# +
sol = sy.solve([x_eq, y_eq, A_eq], [R_Ax, R_Ay, R_By])
# +
sol[R_Ax]
# +
sol[R_Ay]
# +
sol[R_By]
# -
# ## Final Bell<br>๋ง์ง๋ง ์ข
#
#
# +
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
# -
| 45_sympy/20_Beam_Reaction_Force_General.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="jCIIG4eCyMrP" executionInfo={"status": "ok", "timestamp": 1633253458642, "user_tz": -180, "elapsed": 959, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
# + id="0NkLKSbfyMrX" executionInfo={"status": "ok", "timestamp": 1633253458643, "user_tz": -180, "elapsed": 9, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}}
# ะะตะฝะตัะธััะตะผ ัะฝะธะบะฐะปัะฝัะน seed
my_code = "Dimasik"
seed_limit = 2 ** 32
my_seed = int.from_bytes(my_code.encode(), "little") % seed_limit
np.random.seed(my_seed)
# + id="lUgiKH3ujA_O" executionInfo={"status": "ok", "timestamp": 1633253458644, "user_tz": -180, "elapsed": 8, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}}
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="VAABYAScyMrc" executionInfo={"status": "ok", "timestamp": 1633253459496, "user_tz": -180, "elapsed": 858, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}} outputId="292bd521-592b-4e3e-babb-6da34876fb43"
# ะคะพัะผะธััะตะผ ัะปััะฐะนะฝัั ะฝะพัะผะฐะปัะฝะพ ัะฐัะฟัะตะดะตะปะตะฝะฝัั ะฒัะฑะพัะบั sample
N = 10000
sample = np.random.normal(0, 1, N)
plt.hist(sample, bins=100)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="Si5tvEFQh8po" executionInfo={"status": "ok", "timestamp": 1633253461907, "user_tz": -180, "elapsed": 2419, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}} outputId="d18fc626-ef52-46fd-d7af-0222dec6f7c1"
# ะคะพัะผะธััะตะผ ะผะฐััะธะฒ ัะตะปะตะฒัั
ะผะตัะพะบะฐ ะบะปะฐััะพะฒ: 0 - ะตัะปะธ ะทะฝะฐัะตะฝะธะต ะฒ sample ะผะตะฝััะต t ะธ 1 - ะตัะปะธ ะฑะพะปััะต
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
plt.hist(target_labels, bins=100)
plt.show()
# + id="lpiBPPw1yMr_" executionInfo={"status": "ok", "timestamp": 1633253461908, "user_tz": -180, "elapsed": 9, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}}
# ะัะฟะพะปัะทัั ะดะฐะฝะฝัะต ะทะฐะณะพัะพะฒะบะธ (ะธะปะธ, ะฟัะธ ะถะตะปะฐะฝะธะธ, ะฝะต ะธัะฟะพะปัะทัั),
# ัะตะฐะปะธะทัะนัะต ััะฝะบัะธะธ ะดะปั ัะฐัััะตัะฐ accuracy, precision, recall ะธ F1
def confusion_matrix(target_labels, model_labels) :
tp = 0
tn = 0
fp = 0
fn = 0
for i in range(len(target_labels)) :
if target_labels[i] == 1 and model_labels[i] == 1 :
tp += 1
if target_labels[i] == 0 and model_labels[i] == 0 :
tn += 1
if target_labels[i] == 0 and model_labels[i] == 1 :
fp += 1
if target_labels[i] == 1 and model_labels[i] == 0 :
fn += 1
return tp, tn, fp, fn
def accuracy (target_labels, model_labels) :
tp, tn, fp, fn = confusion_matrix(target_labels, model_labels)
return ((tp+tn)/(tp+fp+tn+fn))
def precision (target_labels, model_labels) :
tp, tn, fp, fn = confusion_matrix(target_labels, model_labels)
return (tp/(tp+fp))
def recall (target_labels, model_labels) :
tp, tn, fp, fn = confusion_matrix(target_labels, model_labels)
return (tp/(tp+fn))
def F1 (target_labels, model_labels) :
tp, tn, fp, fn = confusion_matrix(target_labels, model_labels)
return 2*((precision (target_labels, model_labels)*recall (target_labels, model_labels))/(precision (target_labels, model_labels)+recall (target_labels, model_labels)))
# + colab={"base_uri": "https://localhost:8080/"} id="o-aTJeozh8pr" executionInfo={"status": "ok", "timestamp": 1633253462182, "user_tz": -180, "elapsed": 281, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}} outputId="82951f56-6e81-43b2-812c-80ecb0afc8bb"
# ะะตัะฒัะน ัะบัะฟะตัะธะผะตะฝั: t = 0, ะผะพะดะตะปั ั ะฒะตัะพััะฝะพัััั 50% ะฒะพะทะฒัะฐัะฐะตั 0 ะธ 1
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.random.randint(2, size=N)
print(accuracy (target_labels, model_labels))
print(precision (target_labels, model_labels))
print(recall (target_labels, model_labels))
print(F1 (target_labels, model_labels))
# ะ ะฐัััะธัะฐะนัะต ะธ ะฒัะฒะตะดะธัะต ะทะฝะฐัะตะฝะธั ะผะตััะธะบ accuracy, precision, recall ะธ F1.
# + colab={"base_uri": "https://localhost:8080/"} id="zPc6KJBGh8ps" executionInfo={"status": "ok", "timestamp": 1633253462183, "user_tz": -180, "elapsed": 49, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}} outputId="a237ba99-f508-40d5-a2c9-5bc7747bcade"
# ะัะพัะพะน ัะบัะฟะตัะธะผะตะฝั: t = 0, ะผะพะดะตะปั ั ะฒะตัะพััะฝะพัััั 25% ะฒะพะทะฒัะฐัะฐะตั 0 ะธ ั 75% - 1
t = 0
target_labels = np.array([0 if i < t else 1 for i in sample])
labels = np.random.randint(4, size=N)
model_labels = np.array([0 if i == 0 else 1 for i in labels])
np.random.shuffle(model_labels)
print(accuracy (target_labels, model_labels))
print(precision (target_labels, model_labels))
print(recall (target_labels, model_labels))
print(F1 (target_labels, model_labels))
# ะ ะฐัััะธัะฐะนัะต ะธ ะฒัะฒะตะดะธัะต ะทะฝะฐัะตะฝะธั ะผะตััะธะบ accuracy, precision, recall ะธ F1.
# + id="O5HU4aGMh8pt" executionInfo={"status": "ok", "timestamp": 1633253462184, "user_tz": -180, "elapsed": 40, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}}
# ะัะพะฐะฝะฐะปะธะทะธััะนัะต, ะบะฐะบะธะต ะธะท ะผะตััะธะบ ะฟัะธะผะตะฝะธะผั ะฒ ะฟะตัะฒะพะผ ะธ ะฒัะพัะพะผ ัะบัะฟะตัะธะผะตะฝัะฐั
.
# + colab={"base_uri": "https://localhost:8080/"} id="ldHtGWEMh8pv" executionInfo={"status": "ok", "timestamp": 1633253462185, "user_tz": -180, "elapsed": 40, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}} outputId="309d565b-6672-49c6-b247-503446daa199"
# ะขัะตัะธะน ัะบัะฟะตัะธะผะตะฝั: t = 2, ะผะพะดะตะปั ั ะฒะตัะพััะฝะพัััั 50% ะฒะพะทะฒัะฐัะฐะตั 0 ะธ 1
t = 2
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.random.randint(2, size=N)
print(accuracy (target_labels, model_labels))
print(precision (target_labels, model_labels))
print(recall (target_labels, model_labels))
print(F1 (target_labels, model_labels))
# ะ ะฐัััะธัะฐะนัะต ะธ ะฒัะฒะตะดะธัะต ะทะฝะฐัะตะฝะธั ะผะตััะธะบ accuracy, precision, recall ะธ F1.
# + colab={"base_uri": "https://localhost:8080/"} id="vdwJaPM0h8pw" executionInfo={"status": "ok", "timestamp": 1633253462185, "user_tz": -180, "elapsed": 33, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}} outputId="4cbbc3c8-1b4a-42e3-b281-8e0460d56b66"
# ะงะตัะฒััััะน ัะบัะฟะตัะธะผะตะฝั: t = 2, ะผะพะดะตะปั ั ะฒะตัะพััะฝะพัััั 100% ะฒะพะทะฒัะฐัะฐะตั 0
t = 2
target_labels = np.array([0 if i < t else 1 for i in sample])
model_labels = np.zeros(N)
print(accuracy (target_labels, model_labels))
#print(precision (target_labels, model_labels)) ะดะตะปะตะฝะธะต ะฝะฐ ะฝะพะปั
print(recall (target_labels, model_labels))
#print(F1 (target_labels, model_labels)) ะดะตะปะตะฝะธะต ะฝะฐ ะฝะพะปั
# ะ ะฐัััะธัะฐะนัะต ะธ ะฒัะฒะตะดะธัะต ะทะฝะฐัะตะฝะธั ะผะตััะธะบ accuracy, precision, recall ะธ F1.
# + id="QMcr_nofh8py" executionInfo={"status": "ok", "timestamp": 1633253462186, "user_tz": -180, "elapsed": 25, "user": {"displayName": "<NAME>.", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "01554857910732208880"}}
# ะัะพะฐะฝะฐะปะธะทะธััะนัะต, ะบะฐะบะธะต ะธะท ะผะตััะธะบ ะฟัะธะผะตะฝะธะผั ะฒ ััะตััะตะผ ะธ ัะตัะฒัััะพะผ ัะบัะฟะตัะธะผะตะฝัะฐั
.
| 2021 ะัะตะฝะฝะธะน ัะตะผะตััั/ะัะฐะบัะธัะตัะบะพะต ะทะฐะดะฐะฝะธะต 3/ะะ -3 ะะฝัะพะฝะพะฒ ะ. ะะกะข-19-1.ipynb |