| Unnamed: 0 | text_prompt | code_prompt |
|---|---|---|
5,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cahn-Hilliard Example
This example demonstrates how to use PyMKS to solve the Cahn-Hilliard equation. The first section provides some background information about the Cahn-Hilliard equation as well as details about calibrating and validating the MKS model. The example demonstrates how to generate sample data, calibrate the influence coefficients and then pick an appropriate number of local states when state space is continuous. The MKS model and a spectral solution of the Cahn-Hilliard equation are compared on a larger test microstructure over multiple time steps.
Cahn-Hilliard Equation
The Cahn-Hilliard equation is used to simulate microstructure evolution during spinodal decomposition and has the following form,
$$ \dot{\phi} = \nabla^2 \left( \phi^3 - \phi \right) - \gamma \nabla^4 \phi $$
where $\phi$ is a conserved order parameter and $\sqrt{\gamma}$ represents the width of the interface. In this example, the Cahn-Hilliard equation is solved using a semi-implicit spectral scheme with periodic boundary conditions; see Chang and Rutenberg for more details.
Step1: Modeling with MKS
In this example the MKS equation will be used to predict microstructure at the next time step using
$$p[s, 1] = \sum_{l=0}^{L-1} \sum_{r=0}^{S-1} \alpha[l, r, 1] \, m[l, s - r, 0] + ...$$
where $p[s, n + 1]$ is the concentration field at location $s$ and time $n + 1$, $r$ is the convolution dummy variable and $l$ indexes the local states. $\alpha[l, r, n]$ are the influence coefficients and $m[l, r, 0]$ is the microstructure function given to the model. $S$ is the total discretized volume and $L$ is the total number of local states n_states chosen for the model.
The model will march forward in time by recursively discretizing $p[s, n]$ and substituting it back in as $m[l, s - r, n]$.
Calibration Datasets
Unlike the elastostatic examples, the microstructure (concentration field) for this simulation doesn't have discrete phases. The microstructure is a continuous field that can have a range of values which can change over time, therefore the first order influence coefficients cannot be calibrated with delta microstructures. Instead a large number of simulations with random initial conditions are used to calibrate the first order influence coefficients using linear regression.
The function make_cahn_hilliard from pymks.datasets provides an interface to generate calibration datasets for the influence coefficients. To use make_cahn_hilliard, we need to set the number of samples we want to use to calibrate the influence coefficients using n_samples, the size of the simulation domain using size and the time step using dt.
Step2: The function make_cahn_hilliard generates n_samples random microstructures, X, and the associated updated microstructures, y, after one time step. The following cell plots one of these microstructures along with its update.
Step3: Calibrate Influence Coefficients
As mentioned above, the microstructures (concentration fields) do not have discrete phases. This leaves the number of local states in local state space as a free hyperparameter. Previous work has shown that the accuracy of the MKS model increases as the number of local states increases (see Fast et al.), but the incremental gain in accuracy shrinks as more local states are added. Some work needs to be done in order to find a practical number of local states to use.
Optimizing the Number of Local States
Let's split the calibration dataset into training and testing datasets. The function train_test_split from the machine-learning Python module sklearn provides a convenient interface to do this. 80% of the dataset will be used for training and the remaining 20% will be used for testing by setting test_size equal to 0.2. The state of the random number generator used to make the split can be set using random_state.
Step4: We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 20. Each of these models will then predict the evolution of the concentration fields. Mean squared error will be used to compare the results against the testing dataset and evaluate how the MKS model's performance changes as we change the number of local states.
First we need to import the class MKSLocalizationModel from pymks.
Step5: Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use Scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this parameter does not overfit the data.
Step6: As expected, the accuracy of the MKS model monotonically increases as we increase n_states, but the improvement becomes insignificant once n_states grows beyond single digits.
In order to save on computation costs, let's calibrate the influence coefficients with n_states equal to 6, but note that the value can be increased if we need slightly more accuracy.
Step7: Here are the first 4 influence coefficients.
Step8: Predict Microstructure Evolution
With the calibrated influence coefficients, we are ready to predict the evolution of a concentration field. In order to do this, we need to have the Cahn-Hilliard simulation and the MKS model start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hilliard simulation we need an instance of the class CahnHilliardSimulation.
Step9: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model.
Step10: Let's take a look at the concentration fields.
Step11: The MKS model was able to capture the microstructure evolution with 6 local states.
Resizing the Coefficients to use on Larger Systems
Now let's try to predict a larger simulation by resizing the coefficients and providing a larger initial concentration field.
Step12: Once again we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS model.
Step13: Let's take a look at the results. | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
Explanation: Cahn-Hilliard Example
This example demonstrates how to use PyMKS to solve the Cahn-Hilliard equation. The first section provides some background information about the Cahn-Hilliard equation as well as details about calibrating and validating the MKS model. The example demonstrates how to generate sample data, calibrate the influence coefficients and then pick an appropriate number of local states when state space is continuous. The MKS model and a spectral solution of the Cahn-Hilliard equation are compared on a larger test microstructure over multiple time steps.
Cahn-Hilliard Equation
The Cahn-Hilliard equation is used to simulate microstructure evolution during spinodal decomposition and has the following form,
$$ \dot{\phi} = \nabla^2 \left( \phi^3 - \phi \right) - \gamma \nabla^4 \phi $$
where $\phi$ is a conserved order parameter and $\sqrt{\gamma}$ represents the width of the interface. In this example, the Cahn-Hilliard equation is solved using a semi-implicit spectral scheme with periodic boundary conditions; see Chang and Rutenberg for more details.
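As a rough illustration of what a semi-implicit spectral update can look like, here is a minimal sketch (the unit grid spacing, gamma and dt below are assumptions, and this is not necessarily the exact discretisation used by PyMKS): the stiff $\gamma \nabla^4 \phi$ term is treated implicitly in Fourier space while the nonlinear term stays explicit.

import numpy as np

def ch_spectral_step(phi, dt=1e-2, gamma=1.0):
    # wavenumbers for a periodic, unit-spaced square grid (assumption)
    k = 2 * np.pi * np.fft.fftfreq(phi.shape[0])
    ksq = np.add.outer(k ** 2, k ** 2)
    # nonlinear chemical potential term is explicit; the gamma * k^4 term is implicit
    F_phi = np.fft.fft2(phi)
    F_nl = np.fft.fft2(phi ** 3 - phi)
    F_new = (F_phi - dt * ksq * F_nl) / (1.0 + dt * gamma * ksq ** 2)
    return np.fft.ifft2(F_new).real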
End of explanation
import pymks
from pymks.datasets import make_cahn_hilliard
n = 41
n_samples = 400
dt = 1e-2
np.random.seed(99)
X, y = make_cahn_hilliard(n_samples=n_samples, size=(n, n), dt=dt)
Explanation: Modeling with MKS
In this example the MKS equation will be used to predict microstructure at the next time step using
$$p[s, 1] = \sum_{l=0}^{L-1} \sum_{r=0}^{S-1} \alpha[l, r, 1] \, m[l, s - r, 0] + ...$$
where $p[s, n + 1]$ is the concentration field at location $s$ and time $n + 1$, $r$ is the convolution dummy variable and $l$ indexes the local states. $\alpha[l, r, n]$ are the influence coefficients and $m[l, r, 0]$ is the microstructure function given to the model. $S$ is the total discretized volume and $L$ is the total number of local states n_states chosen for the model.
The model will march forward in time by recursively discretizing $p[s, n]$ and substituting it back in as $m[l, s - r, n]$.
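To make the structure of this sum concrete, here is a minimal, purely illustrative sketch (the array layout, shapes and names are assumptions, not the PyMKS internals) of evaluating the response as a sum over local states of periodic convolutions between the influence coefficients and the microstructure function, using FFTs:

import numpy as np

def mks_response(m, alpha):
    # m, alpha assumed to have shape (n_x, n_y, n_states); periodic convolution via FFT
    M = np.fft.fftn(m, axes=(0, 1))
    A = np.fft.fftn(alpha, axes=(0, 1))
    # multiply in Fourier space, sum over the local-state axis, transform back
    return np.fft.ifftn((A * M).sum(axis=-1), axes=(0, 1)).real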
Calibration Datasets
Unlike the elastostatic examples, the microstructure (concentration field) for this simulation doesn't have discrete phases. The microstructure is a continuous field that can have a range of values which can change over time, therefore the first order influence coefficients cannot be calibrated with delta microstructures. Instead a large number of simulations with random initial conditions are used to calibrate the first order influence coefficients using linear regression.
The function make_cahn_hilliard from pymks.datasets provides an interface to generate calibration datasets for the influence coefficients. To use make_cahn_hilliard, we need to set the number of samples we want to use to calibrate the influence coefficients using n_samples, the size of the simulation domain using size and the time step using dt.
End of explanation
from pymks.tools import draw_concentrations
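# The explanation below says this cell plots one microstructure and its update, but the
# call is missing from this dump; a plausible reconstruction (labels are illustrative)
# mirrors how draw_concentrations is used later in this notebook.
draw_concentrations((X[0], y[0]), labels=('Input concentration', 'Updated concentration'))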
Explanation: The function make_cahn_hilliard generates n_samples random microstructures, X, and the associated updated microstructures, y, after one time step. The following cell plots one of these microstructures along with its update.
End of explanation
import sklearn
from sklearn.cross_validation import train_test_split
split_shape = (X.shape[0],) + (np.product(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(split_shape), y.reshape(split_shape),
                                                     test_size=0.2, random_state=3)
Explanation: Calibrate Influence Coefficients
As mentioned above, the microstructures (concentration fields) do not have discrete phases. This leaves the number of local states in local state space as a free hyperparameter. Previous work has shown that the accuracy of the MKS model increases as the number of local states increases (see Fast et al.), but the incremental gain in accuracy shrinks as more local states are added. Some work needs to be done in order to find a practical number of local states to use.
Optimizing the Number of Local States
Let's split the calibration dataset into training and testing datasets. The function train_test_split from the machine-learning Python module sklearn provides a convenient interface to do this. 80% of the dataset will be used for training and the remaining 20% will be used for testing by setting test_size equal to 0.2. The state of the random number generator used to make the split can be set using random_state.
End of explanation
from pymks import MKSLocalizationModel
from pymks.bases import PrimitiveBasis
Explanation: We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 20. Each of these models will then predict the evolution of the concentration fields. Mean squared error will be used to compare the results against the testing dataset and evaluate how the MKS model's performance changes as we change the number of local states.
First we need to import the class MKSLocalizationModel from pymks.
End of explanation
from sklearn.grid_search import GridSearchCV
parameters_to_tune = {'n_states': np.arange(2, 11)}
prim_basis = PrimitiveBasis(2, [-1, 1])
model = MKSLocalizationModel(prim_basis)
gs = GridSearchCV(model, parameters_to_tune, cv=5, fit_params={'size': (n, n)})
gs.fit(X_train, y_train)
print(gs.best_estimator_)
print(gs.score(X_test, y_test))
from pymks.tools import draw_gridscores
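# draw_gridscores is imported above but never called in this dump; a plausible usage
# (the exact signature is an assumption) would plot the grid-search scores against n_states:
draw_gridscores(gs.grid_scores_, 'n_states')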
Explanation: Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use Scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this parameter does not overfit the data.
End of explanation
model = MKSLocalizationModel(basis=PrimitiveBasis(6, [-1, 1]))
model.fit(X, y)
Explanation: As expected, the accuracy of the MKS model monotonically increases as we increase n_states, but the improvement becomes insignificant once n_states grows beyond single digits.
In order to save on computation costs, let's calibrate the influence coefficients with n_states equal to 6, but note that the value can be increased if we need slightly more accuracy.
End of explanation
from pymks.tools import draw_coeff
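# The explanation below refers to plotting the first 4 influence coefficients, but the
# call is missing from this dump; a plausible reconstruction (treating model.coeff as the
# coefficient array is an assumption about this PyMKS version's API):
draw_coeff(model.coeff[..., :4])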
Explanation: Here are the first 4 influence coefficients.
End of explanation
from pymks.datasets.cahn_hilliard_simulation import CahnHilliardSimulation
np.random.seed(191)
phi0 = np.random.normal(0, 1e-9, (1, n, n))
ch_sim = CahnHilliardSimulation(dt=dt)
phi_sim = phi0.copy()
phi_pred = phi0.copy()
Explanation: Predict Microstructure Evolution
With the calibrated influence coefficients, we are ready to predict the evolution of a concentration field. In order to do this, we need to have the Cahn-Hilliard simulation and the MKS model start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hilliard simulation we need an instance of the class CahnHilliardSimulation.
End of explanation
time_steps = 10
for ii in range(time_steps):
    ch_sim.run(phi_sim)
    phi_sim = ch_sim.response
    phi_pred = model.predict(phi_pred)
Explanation: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model.
End of explanation
from pymks.tools import draw_concentrations_compare
draw_concentrations((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
Explanation: Let's take a look at the concentration fields.
End of explanation
m = 3 * n
model.resize_coeff((m, m))
phi0 = np.random.normal(0, 1e-9, (1, m, m))
phi_sim = phi0.copy()
phi_pred = phi0.copy()
Explanation: The MKS model was able to capture the microstructure evolution with 6 local states.
Resizing the Coefficients to use on Larger Systems
Now let's try to predict a larger simulation by resizing the coefficients and providing a larger initial concentration field.
End of explanation
for ii in range(1000):
    ch_sim.run(phi_sim)
    phi_sim = ch_sim.response
    phi_pred = model.predict(phi_pred)
Explanation: Once again we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS model.
End of explanation
from pymks.tools import draw_concentrations_compare
draw_concentrations_compare((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
Explanation: Let's take a look at the results.
End of explanation |
5,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute IonPopSolver Results
Here, we'll run the IonPopSolver code using some of the EBTEL results in order to account for non-equilibrium ionization in our results. We'll use the multiprocessing module in Python to speed things up.
Step1: First, import the EBTEL results.
Step2: Next, reshape the EBTEL results into something readable by the IonPopSolver code and save them to a file. Set some parameters for reading the data structure.
Step3: Now, write the input files.
Step4: We need to modify the XML input file for the IonPopSolver code to make sure it points to the right atomic database (see install instructions in IonPopSolver). We'll also set the cutoff ion fraction to $1\times10^{-6}$ to speed up the calculation.
Step5: Now, we'll run the code in parallel with the subprocess module. First, define the worker function that will run in parallel.
Step6: Now, we need to assemble our list of inputs.
Finally, map the worker and inputs to the appropriate processors and run the code. | Python Code:
import os
import pickle
import multiprocessing
import subprocess
import xml.etree.ElementTree as ET
import numpy as np
Explanation: Compute IonPopSolver Results
Here, we'll run the IonPopSolver code using some of the EBTEL results in order to account for non-equilibrium ionization in our results. We'll use the multiprocessing module in Python to speed things up.
End of explanation
with open(__depends__[0],'rb') as f:
    varying_tau_results = pickle.load(f)
Explanation: First, import the EBTEL results.
End of explanation
tau_indices = [0,-1]
prefixes = ['tau20','tau500']
parameter_sets = {'single':('t','T','n'),'electron':('te','Tee','ne'),'ion':('ti','Tie','ni')}
Explanation: Next, reshape the EBTEL results into something readable by the IonPopSolver code and save them to a file. Set some parameters for reading the data structure.
End of explanation
inputs = []
for i,pre in zip(tau_indices,prefixes):
    for key in parameter_sets:
        inputs.append(os.path.join('../results/','_tmp_%s.%s.ips.txt'%(pre,key)))
        np.savetxt(os.path.join('../results/','_tmp_%s.%s.ips.txt'%(pre,key)),
                   np.transpose([varying_tau_results[i][parameter_sets[key][0]],
                                 varying_tau_results[i][parameter_sets[key][1]],
                                 varying_tau_results[i][parameter_sets[key][2]]]),
                   header=str(len(varying_tau_results[i][parameter_sets[key][0]])),comments='',fmt='%f\t%e\t%e')
Explanation: Now, write the input files.
End of explanation
xml_tree = ET.parse(os.path.join(os.environ['EXP_DIR'],'IonPopSolver/test/radiation.example.cfg.xml'))
root = xml_tree.getroot()
node1 = root.find('atomicDB')
node1.text = os.path.join(os.environ['EXP_DIR'],'apolloDB') + '/'
node2 = root.find('cutoff_ion_fraction')
node2.text = '1e-6'
xml_tree.write(os.path.join(os.environ['EXP_DIR'],'IonPopSolver/test/radiation.local.cfg.xml'))
Explanation: We need to modify the XML input file for the IonPopSolver code to make sure it points to the right atomic database (see install instructions in IonPopSolver). We'll also set the cutoff ion fraction to $1\times10^{-6}$ to speed up the calculation.
End of explanation
def worker((input_file,output_file)):
print("Running IonPopSolver for input %s"%(input_file))
executable = os.path.join(os.environ['EXP_DIR'],'IonPopSolver/bin/IonPopSolver.run')
static_args = ["-Z","26","-f","9","-t","27","-r",
os.path.join(os.environ['EXP_DIR'],'IonPopSolver/test/radiation.local.cfg.xml')]
var_args = ["-I",os.path.abspath(input_file),"-O",os.path.abspath(output_file)]
subprocess.call([executable]+static_args+var_args)
print("Finished IonPopSolver for input %s"%(input_file))
Explanation: Now, we'll run the code in parallel with the subprocess module. First, define the worker function that will run in parallel.
End of explanation
p = multiprocessing.Pool()
p.map(worker,zip(sorted(inputs),__dest__))
Explanation: Now, we need to assemble our list of inputs.
Finally, map the worker and inputs to the appropriate processors and run the code.
End of explanation |
5,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:35
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
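# Example (hypothetical; cardinality 1.N means several values may be recorded):
# DOC.set_value("Albedo")
# DOC.set_value("Freshwater")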
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
5,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's scrape a practice table
The latest Mountain Goats album is called Goths. (It's good!) I made a simple HTML table with the track listing -- let's scrape it into a CSV.
Import the modules we'll need
Step1: Read in the file, see what we're working with
We'll use the read() method to get the contents of the file.
Step2: Parse the table with BeautifulSoup
Right now, Python isn't interpreting our table as data -- it's just a string. We need to use BeautifulSoup to parse that string into data objects that Python can understand. Once the string is parsed, we'll be working with a "tree" of data that we can navigate.
Step3: Decide how to target the table
BeautifulSoup has several methods for targeting elements -- by position on the page, by attribute, etc. Right now we just want to find the correct table.
Step4: Looping over the table rows
Let's print a list of track numbers and song titles. Look at the structure of the table -- a table has rows represented by the tag tr, and within each row there are cells represented by td tags. The find_all() method returns a list. And we know how to iterate over lists
Step5: Write data to file
Let's put it all together and open a file to write the data to. | Python Code:
from bs4 import BeautifulSoup
import csv
Explanation: Let's scrape a practice table
The latest Mountain Goats album is called Goths. (It's good!) I made a simple HTML table with the track listing -- let's scrape it into a CSV.
Import the modules we'll need
End of explanation
# in a with block, open the HTML file
with open('mountain-goats.html', 'r') as html_file:
# .read() in the contents of a file -- it'll be a string
html_code = html_file.read()
# print the string to see what's there
print(html_code)
Explanation: Read in the file, see what we're working with
We'll use the read() method to get the contents of the file.
End of explanation
with open('mountain-goats.html', 'r') as html_file:
html_code = html_file.read()
# use the type() function to see what kind of object `html_code` is
print(type(html_code))
# feed the file's contents (the string of HTML) to BeautifulSoup
# will complain if you don't specify the parser
soup = BeautifulSoup(html_code, 'html.parser')
# use the type() function to see what kind of object `soup` is
print(type(soup))
Explanation: Parse the table with BeautifulSoup
Right now, Python isn't interpreting our table as data -- it's just a string. We need to use BeautifulSoup to parse that string into data objects that Python can understand. Once the string is parsed, we'll be working with a "tree" of data that we can navigate.
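As a quick, hypothetical illustration of that tree navigation (the tag chain below is an assumption about the page, not part of the original lesson), you can chain lookups once the HTML is parsed:
first_row = soup.find('table').find('tr')   # first row of the first table in the tree
print(first_row.find_all('td'))             # the cells inside that row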
End of explanation
with open('mountain-goats.html', 'r') as html_file:
html_code = html_file.read()
soup = BeautifulSoup(html_code, 'html.parser')
# by position on the page
# find_all returns a list of matching elements, and we want the second ([1]) one
# song_table = soup.find_all('table')[1]
# by class name
# => with `find`, you can pass a dictionary of element attributes to match on
# song_table = soup.find('table', {'class': 'song-table'})
# by ID
# song_table = soup.find('table', {'id': 'my-cool-table'})
# by style
song_table = soup.find('table', {'style': 'width: 95%;'})
print(song_table)
Explanation: Decide how to target the table
BeautifulSoup has several methods for targeting elements -- by position on the page, by attribute, etc. Right now we just want to find the correct table.
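If you prefer CSS-style selectors, select() offers an alternative to find()/find_all(); a minimal sketch:
tables = soup.select('table')        # like find_all('table'), returns a list
song_table = tables[-1]              # e.g. grab the last table on the page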
End of explanation
with open('mountain-goats.html', 'r') as html_file:
html_code = html_file.read()
soup = BeautifulSoup(html_code, 'html.parser')
song_table = soup.find('table', {'style': 'width: 95%;'})
# find the rows in the table
# slice to skip the header row
song_rows = song_table.find_all('tr')[1:]
# loop over the rows
for row in song_rows:
# get the table cells in the row
song = row.find_all('td')
# assign them to variables
track, title, duration, artist, album = song
# use the .string attribute to get the text in the cell
print(track.string, title.string)
Explanation: Looping over the table rows
Let's print a list of track numbers and song titles. Look at the structure of the table -- a table has rows represented by the tag tr, and within each row there are cells represented by td tags. The find_all() method returns a list. And we know how to iterate over lists: with a for loop. Let's do that.
End of explanation
with open('mountain-goats.html', 'r') as html_file, open('mountain-goats.csv', 'w') as outfile:
html_code = html_file.read()
soup = BeautifulSoup(html_code, 'html.parser')
song_table = soup.find('table', {'style': 'width: 95%;'})
song_rows = song_table.find_all('tr')[1:]
# set up a writer object
writer = csv.DictWriter(outfile, fieldnames=['track', 'title', 'duration', 'artist', 'album'])
writer.writeheader()
for row in song_rows:
# get the table cells in the row
song = row.find_all('td')
# assign them to variables
track, title, duration, artist, album = song
# write out the dictionary to file
writer.writerow({
'track': track.string,
'title': title.string,
'duration': duration.string,
'artist': artist.string,
'album': album.string
})
Explanation: Write data to file
Let's put it all together and open a file to write the data to.
End of explanation |
5,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NLP training example
In this example, we'll train an NLP model for sentiment analysis of tweets using spaCy.
First we download spaCy language libraries.
Step1: And import the boilerplate code.
Step2: Data prep
Download the dataset from S3.
Step3: Clean and load data using our library.
Step4: Train the model
We'll use a pre-trained model from spaCy and fine-tune it on our new dataset.
Step5: Update the model with the current data using our library.
Step6: Now we save the model back into S3 to a well known location (make sure it's a location you can write to!) so that we can fetch it later. | Python Code:
!python -m spacy download en_core_web_sm
Explanation: NLP training example
In this example, we'll train an NLP model for sentiment analysis of tweets using spaCy.
First we download spaCy language libraries.
End of explanation
from __future__ import unicode_literals, print_function
import boto3
import json
import numpy as np
import pandas as pd
import spacy
Explanation: And import the boilerplate code.
End of explanation
S3_BUCKET = "verta-strata"
S3_KEY = "english-tweets.csv"
FILENAME = S3_KEY
boto3.client('s3').download_file(S3_BUCKET, S3_KEY, FILENAME)
Explanation: Data prep
Download the dataset from S3.
End of explanation
import utils
data = pd.read_csv(FILENAME).sample(frac=1).reset_index(drop=True)
utils.clean_data(data)
data.head()
Explanation: Clean and load data using our library.
End of explanation
nlp = spacy.load('en_core_web_sm')
Explanation: Train the model
We'll use a pre-trained model from spaCy and fine-tune it on our new dataset.
End of explanation
import training
training.train(nlp, data, n_iter=20)
Explanation: Update the model with the current data using our library.
End of explanation
filename = "/tmp/model.spacy"
with open(filename, 'wb') as f:
f.write(nlp.to_bytes())
boto3.client('s3').upload_file(filename, S3_BUCKET, "models/01/model.spacy")
filename = "/tmp/model_metadata.json"
with open(filename, 'w') as f:
f.write(json.dumps(nlp.meta))
boto3.client('s3').upload_file(filename, S3_BUCKET, "models/01/model_metadata.json")
Explanation: Now we save the model back into S3 to a well known location (make sure it's a location you can write to!) so that we can fetch it later.
End of explanation |
5,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probabilistic Bayesian Neural Networks
Author
Step1: Create training and evaluation datasets
Here, we load the wine_quality dataset using tfds.load(), and we convert
the target feature to float. Then, we shuffle the dataset and split it into
training and test sets. We take the first train_size examples as the train
split, and the rest as the test split.
Step2: Compile, train, and evaluate the model
Step3: Create model inputs
Step4: Experiment 1
Step5: Let's split the wine dataset into training and test sets, with 85% and 15% of
the examples, respectively.
Step6: Now let's train the baseline model. We use the MeanSquaredError
as the loss function.
Step7: We take a sample from the test set and use the model to obtain predictions for them.
Note that since the baseline model is deterministic, we get a single
point estimate prediction for each test example, with no information about the
uncertainty of the model nor the prediction.
Step8: Experiment 2
Step9: We use the tfp.layers.DenseVariational layer instead of the standard
keras.layers.Dense layer in the neural network model.
Step10: The epistemic uncertainty can be reduced as we increase the size of the
training data. That is, the more data the BNN model sees, the more it is certain
about its estimates for the weights (distribution parameters).
Let's test this behaviour by training the BNN model on a small subset of
the training set, and then on the full training set, to compare the output variances.
Train BNN with a small training subset.
Step11: Since we have trained a BNN model, the model produces a different output each time
we call it with the same input, since each time a new set of weights are sampled
from the distributions to construct the network and produce an output.
The less certain the model weights are, the more variability (wider range) we will
see in the outputs of the same inputs.
Step12: Train BNN with the whole training set.
Step13: Notice that the model trained with the full training dataset shows smaller range
(uncertainty) in the prediction values for the same inputs, compared to the model
trained with a subset of the training dataset.
Experiment 3
Step14: Since the output of the model is a distribution, rather than a point estimate,
we use the negative loglikelihood
as our loss function to compute how likely to see the true data (targets) from the
estimated distribution produced by the model.
Step15: Now let's produce an output from the model given the test examples.
The output is now a distribution, and we can use its mean and variance
to compute the confidence intervals (CI) of the prediction. | Python Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
Explanation: Probabilistic Bayesian Neural Networks
Author: Khalid Salama<br>
Date created: 2021/01/15<br>
Last modified: 2021/01/15<br>
Description: Building probabilistic Bayesian neural network models with TensorFlow Probability.
Introduction
Taking a probabilistic approach to deep learning allows us to account for uncertainty,
so that models can assign lower levels of confidence to incorrect predictions.
Sources of uncertainty can be found in the data, due to measurement error or
noise in the labels, or the model, due to insufficient data availability for
the model to learn effectively.
This example demonstrates how to build basic probabilistic Bayesian neural networks
to account for these two types of uncertainty.
We use TensorFlow Probability library,
which is compatible with Keras API.
This example requires TensorFlow 2.3 or higher.
You can install Tensorflow Probability using the following command:
python
pip install tensorflow-probability
The dataset
We use the Wine Quality
dataset, which is available in the TensorFlow Datasets.
We use the red wine subset, which contains 4,898 examples.
The dataset has 11 numerical physicochemical features of the wine, and the task
is to predict the wine quality, which is a score between 0 and 10.
In this example, we treat this as a regression task.
You can install TensorFlow Datasets using the following command:
python
pip install tensorflow-datasets
Setup
End of explanation
def get_train_and_test_splits(train_size, batch_size=1):
# We prefetch with a buffer the same size as the dataset because the dataset
# is very small and fits into memory.
dataset = (
tfds.load(name="wine_quality", as_supervised=True, split="train")
.map(lambda x, y: (x, tf.cast(y, tf.float32)))
.prefetch(buffer_size=dataset_size)
.cache()
)
# We shuffle with a buffer the same size as the dataset.
train_dataset = (
dataset.take(train_size).shuffle(buffer_size=train_size).batch(batch_size)
)
test_dataset = dataset.skip(train_size).batch(batch_size)
return train_dataset, test_dataset
Explanation: Create training and evaluation datasets
Here, we load the wine_quality dataset using tfds.load(), and we convert
the target feature to float. Then, we shuffle the dataset and split it into
training and test sets. We take the first train_size examples as the train
split, and the rest as the test split.
End of explanation
hidden_units = [8, 8]
learning_rate = 0.001
def run_experiment(model, loss, train_dataset, test_dataset):
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=learning_rate),
loss=loss,
metrics=[keras.metrics.RootMeanSquaredError()],
)
print("Start training the model...")
model.fit(train_dataset, epochs=num_epochs, validation_data=test_dataset)
print("Model training finished.")
_, rmse = model.evaluate(train_dataset, verbose=0)
print(f"Train RMSE: {round(rmse, 3)}")
print("Evaluating model performance...")
_, rmse = model.evaluate(test_dataset, verbose=0)
print(f"Test RMSE: {round(rmse, 3)}")
Explanation: Compile, train, and evaluate the model
End of explanation
FEATURE_NAMES = [
"fixed acidity",
"volatile acidity",
"citric acid",
"residual sugar",
"chlorides",
"free sulfur dioxide",
"total sulfur dioxide",
"density",
"pH",
"sulphates",
"alcohol",
]
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(1,), dtype=tf.float32
)
return inputs
Explanation: Create model inputs
End of explanation
def create_baseline_model():
inputs = create_model_inputs()
input_values = [value for _, value in sorted(inputs.items())]
features = keras.layers.concatenate(input_values)
features = layers.BatchNormalization()(features)
# Create hidden layers with deterministic weights using the Dense layer.
for units in hidden_units:
features = layers.Dense(units, activation="sigmoid")(features)
# The output is deterministic: a single point estimate.
outputs = layers.Dense(units=1)(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
Explanation: Experiment 1: standard neural network
We create a standard deterministic neural network model as a baseline.
End of explanation
dataset_size = 4898
batch_size = 256
train_size = int(dataset_size * 0.85)
train_dataset, test_dataset = get_train_and_test_splits(train_size, batch_size)
Explanation: Let's split the wine dataset into training and test sets, with 85% and 15% of
the examples, respectively.
End of explanation
num_epochs = 100
mse_loss = keras.losses.MeanSquaredError()
baseline_model = create_baseline_model()
run_experiment(baseline_model, mse_loss, train_dataset, test_dataset)
Explanation: Now let's train the baseline model. We use the MeanSquaredError
as the loss function.
End of explanation
sample = 10
examples, targets = list(test_dataset.unbatch().shuffle(batch_size * 10).batch(sample))[
0
]
predicted = baseline_model(examples).numpy()
for idx in range(sample):
print(f"Predicted: {round(float(predicted[idx][0]), 1)} - Actual: {targets[idx]}")
Explanation: We take a sample from the test set and use the model to obtain predictions for them.
Note that since the baseline model is deterministic, we get a single
point estimate prediction for each test example, with no information about the
uncertainty of the model nor the prediction.
End of explanation
# Define the prior weight distribution as Normal of mean=0 and stddev=1.
# Note that, in this example, the prior distribution is not trainable,
# as we fix its parameters.
def prior(kernel_size, bias_size, dtype=None):
n = kernel_size + bias_size
prior_model = keras.Sequential(
[
tfp.layers.DistributionLambda(
lambda t: tfp.distributions.MultivariateNormalDiag(
loc=tf.zeros(n), scale_diag=tf.ones(n)
)
)
]
)
return prior_model
# Define variational posterior weight distribution as multivariate Gaussian.
# Note that the learnable parameters for this distribution are the means,
# variances, and covariances.
def posterior(kernel_size, bias_size, dtype=None):
n = kernel_size + bias_size
posterior_model = keras.Sequential(
[
tfp.layers.VariableLayer(
tfp.layers.MultivariateNormalTriL.params_size(n), dtype=dtype
),
tfp.layers.MultivariateNormalTriL(n),
]
)
return posterior_model
Explanation: Experiment 2: Bayesian neural network (BNN)
The object of the Bayesian approach for modeling neural networks is to capture
the epistemic uncertainty, which is uncertainty about the model fitness,
due to limited training data.
The idea is that, instead of learning specific weight (and bias) values in the
neural network, the Bayesian approach learns weight distributions
- from which we can sample to produce an output for a given input -
to encode weight uncertainty.
Thus, we need to define prior and the posterior distributions of these weights,
and the training process is to learn the parameters of these distributions.
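As a rough sketch in my own notation (not from the original text), the training objective being minimized is of the form
$$ \mathcal{L} = \mathrm{KL}\left[\, q(w) \,\|\, p(w) \,\right] \;-\; \mathbb{E}_{q(w)}\left[\, \log p(\mathcal{D} \mid w) \,\right] $$
where $p(w)$ is the prior over the weights, $q(w)$ is the learned (variational) posterior, and $\mathcal{D}$ is the training data. When the likelihood term is accumulated per example over mini-batches, the KL term has to be scaled down by the dataset size, which is why the DenseVariational layers below are given kl_weight=1 / train_size.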
End of explanation
def create_bnn_model(train_size):
inputs = create_model_inputs()
features = keras.layers.concatenate(list(inputs.values()))
features = layers.BatchNormalization()(features)
# Create hidden layers with weight uncertainty using the DenseVariational layer.
for units in hidden_units:
features = tfp.layers.DenseVariational(
units=units,
make_prior_fn=prior,
make_posterior_fn=posterior,
kl_weight=1 / train_size,
activation="sigmoid",
)(features)
# The output is deterministic: a single point estimate.
outputs = layers.Dense(units=1)(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
Explanation: We use the tfp.layers.DenseVariational layer instead of the standard
keras.layers.Dense layer in the neural network model.
End of explanation
num_epochs = 500
train_sample_size = int(train_size * 0.3)
small_train_dataset = train_dataset.unbatch().take(train_sample_size).batch(batch_size)
bnn_model_small = create_bnn_model(train_sample_size)
run_experiment(bnn_model_small, mse_loss, small_train_dataset, test_dataset)
Explanation: The epistemic uncertainty can be reduced as we increase the size of the
training data. That is, the more data the BNN model sees, the more it is certain
about its estimates for the weights (distribution parameters).
Let's test this behaviour by training the BNN model on a small subset of
the training set, and then on the full training set, to compare the output variances.
Train BNN with a small training subset.
End of explanation
def compute_predictions(model, iterations=100):
predicted = []
for _ in range(iterations):
predicted.append(model(examples).numpy())
predicted = np.concatenate(predicted, axis=1)
prediction_mean = np.mean(predicted, axis=1).tolist()
prediction_min = np.min(predicted, axis=1).tolist()
prediction_max = np.max(predicted, axis=1).tolist()
prediction_range = (np.max(predicted, axis=1) - np.min(predicted, axis=1)).tolist()
for idx in range(sample):
print(
f"Predictions mean: {round(prediction_mean[idx], 2)}, "
f"min: {round(prediction_min[idx], 2)}, "
f"max: {round(prediction_max[idx], 2)}, "
f"range: {round(prediction_range[idx], 2)} - "
f"Actual: {targets[idx]}"
)
compute_predictions(bnn_model_small)
Explanation: Since we have trained a BNN model, the model produces a different output each time
we call it with the same input, since each time a new set of weights are sampled
from the distributions to construct the network and produce an output.
The less certain the model weights are, the more variability (wider range) we will
see in the outputs of the same inputs.
End of explanation
num_epochs = 500
bnn_model_full = create_bnn_model(train_size)
run_experiment(bnn_model_full, mse_loss, train_dataset, test_dataset)
compute_predictions(bnn_model_full)
Explanation: Train BNN with the whole training set.
End of explanation
def create_probablistic_bnn_model(train_size):
inputs = create_model_inputs()
features = keras.layers.concatenate(list(inputs.values()))
features = layers.BatchNormalization()(features)
# Create hidden layers with weight uncertainty using the DenseVariational layer.
for units in hidden_units:
features = tfp.layers.DenseVariational(
units=units,
make_prior_fn=prior,
make_posterior_fn=posterior,
kl_weight=1 / train_size,
activation="sigmoid",
)(features)
# Create a probabilistic output (Normal distribution), and use the `Dense` layer
# to produce the parameters of the distribution.
# We set units=2 to learn both the mean and the variance of the Normal distribution.
distribution_params = layers.Dense(units=2)(features)
outputs = tfp.layers.IndependentNormal(1)(distribution_params)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
Explanation: Notice that the model trained with the full training dataset shows smaller range
(uncertainty) in the prediction values for the same inputs, compared to the model
trained with a subset of the training dataset.
Experiment 3: probabilistic Bayesian neural network
So far, the output of the standard and the Bayesian NN models that we built is
deterministic, that is, produces a point estimate as a prediction for a given example.
We can create a probabilistic NN by letting the model output a distribution.
In this case, the model captures the aleatoric uncertainty as well,
which is due to irreducible noise in the data, or to the stochastic nature of the
process generating the data.
In this example, we model the output as an IndependentNormal distribution,
with learnable mean and variance parameters. If the task were classification,
we would have used IndependentBernoulli with binary classes, and OneHotCategorical
with multiple classes, to model the distribution of the model output.
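For reference, a hypothetical classification head could look like the sketch below (num_classes and the surrounding names are assumptions, not part of this regression example):
# binary classification:
# distribution_params = layers.Dense(units=1)(features)
# outputs = tfp.layers.IndependentBernoulli(1)(distribution_params)
# multi-class classification:
# distribution_params = layers.Dense(units=num_classes)(features)
# outputs = tfp.layers.OneHotCategorical(num_classes)(distribution_params)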
End of explanation
def negative_loglikelihood(targets, estimated_distribution):
return -estimated_distribution.log_prob(targets)
num_epochs = 1000
prob_bnn_model = create_probablistic_bnn_model(train_size)
run_experiment(prob_bnn_model, negative_loglikelihood, train_dataset, test_dataset)
Explanation: Since the output of the model is a distribution, rather than a point estimate,
we use the negative loglikelihood
as our loss function, to compute how likely it is to see the true data (targets) under the
estimated distribution produced by the model.
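Concretely, for the Normal output used here (standard result, stated for reference), the per-example loss returned by -estimated_distribution.log_prob(targets) is
$$ -\log p(y \mid \mu, \sigma) = \tfrac{1}{2}\log(2\pi\sigma^2) + \frac{(y-\mu)^2}{2\sigma^2} $$
so the model is encouraged to report a larger variance $\sigma^2$ wherever its mean prediction $\mu$ is unreliable.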
End of explanation
prediction_distribution = prob_bnn_model(examples)
prediction_mean = prediction_distribution.mean().numpy().tolist()
prediction_stdv = prediction_distribution.stddev().numpy()
# The 95% CI is computed as mean ± (1.96 * stdv)
upper = (prediction_mean + (1.96 * prediction_stdv)).tolist()
lower = (prediction_mean - (1.96 * prediction_stdv)).tolist()
prediction_stdv = prediction_stdv.tolist()
for idx in range(sample):
print(
f"Prediction mean: {round(prediction_mean[idx][0], 2)}, "
f"stddev: {round(prediction_stdv[idx][0], 2)}, "
f"95% CI: [{round(upper[idx][0], 2)} - {round(lower[idx][0], 2)}]"
f" - Actual: {targets[idx]}"
)
Explanation: Now let's produce an output from the model given the test examples.
The output is now a distribution, and we can use its mean and variance
to compute the confidence intervals (CI) of the prediction.
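(For reference: the factor 1.96 used in the code above is the usual two-sided 95% quantile of the standard normal, i.e. $P(|Z| \le 1.96) \approx 0.95$, so mean ± 1.96·stddev gives an approximate 95% interval under the predicted Normal.)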
End of explanation |
5,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 1
Imports
Step2: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
Explanation: Integration Exercise 1
Imports
End of explanation
def trapz(f, a, b, N):
"""Integrate the function f(x) over the range [a,b] with N points."""
#I worked with James A
# YOUR CODE HERE
#raise NotImplementedError()
h = (b-a)/N
k = np.arange(1,N)
return h*(0.5*f(a)+0.5*f(b)+ (f(a+k*h)).sum())
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
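Putting this together (a standard result, stated here for reference), the composite trapezoidal approximation implemented above is
$$ I(a,b) \approx h\left[\tfrac{1}{2}f(a) + \tfrac{1}{2}f(b) + \sum_{k=1}^{N-1} f(a + kh)\right] $$
which is exactly what the vectorized return line of trapz computes.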
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
integrate.quad(f,0,1), integrate.quad(g,0,np.pi)
assert True # leave this cell to grade the previous one
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation |
5,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook describes how to use the provided tools to interface with the data. It goes over the process of installing the tools, retrieving the data, and opening the data within a notebook.
<br>
<br>
Importing the tools and data
Video Tutorial on how to import tools into Jupyter
<br>
<br>
If you have worked with python notebooks before, you are probably familiar with the more basic included libraries. While you may not use all of their functionality for every activity, they are a very useful option to have available. We can bring them in with the standard import command.
Step1: Now we need the particle physics specific libraries. If you installed the libraries from the command shell properly as shown in the local setup tutorial, then most of the work needed to use these should already be done. From here it should be as simple as using the import command like for the included libraries.
Step2: With all of the tools imported, we now need our data files. Included in pps_tools are several functions for this purpose. Currently all of the files for the activities are located in this Google Drive folder. The function you will need to download files from this Drive is download_drive_file, for which the argument is simply the file name. You can also use the function download_file_from_google_drive if you want to save the file you download under a different name, or if for some reason you need to download a file from Google Drive that is not included, in which case you will need to put the file's id tag instead of the filename.
<br>
In the case that you want to download a file from a different location online, you would need to use the download_file function, for which the argument is the url of the file you need. (This tool will not work for Google Drive files).
<br>
Once you download a file, it should permenantly reside in the directory you downloaded it to, so you only have to do it once.
Step3: With the tools imported and files downloaded, you should now have everything you need to start interfacing with the data!
<br>
<br>
Interfacing with the data
Video overview on data interfacing
<br>
Reading data and navigating lists
The h5hep tools we used to unpack the data files puts the data all into lists that are accessible with dictionaries. You can view all the of the dictionary entries the data is tagged with using the following command (this is entirely optional, and simply gives you an idea of what kind of data the file may have contained).
Step4: To organize the data in a way that makes it easy to find what you need, you will need to use the hep tools we imported. This can be done in several ways.
<br>
Simple way
The first way uses less commands, but gives you less control over how much data you are using. This command organizes ALL of the data, so if you are using a large data file, this can take a long time. However, it is usually the best choice if you want to use all the data.
<br>
<br>
NOTE
Step5: This returns a list called collisions which has all of the collision events as entries. Each event is in turn its own list whose entries are the different types of particles involved in that collision. These are also lists, containing each individual particle of that particular type as entries, which are also lists of the four-momentum and other characteristics of each particle.
<br>
This can be a bit complicated until you learn to work with it, so we'll try a visualization as an example
Step6: You might notice that each individual event is callable from all collisions by is entry number, as are the individual particles from within their lists of particle types. However, the particle types themselves are only callable from the event list by their names. The characteristics of each particle are also only callable from their lists by the name of the characteristic. The exact dictionary entry needed to call them can be referenced by printing event.keys as above.
Because get_collisions puts ALL of the data in a list, to do you analysis, you can simply call everything you need from this one list. For instance, if we wanted to find the energies of all the jets in the entire list of collisions, we could do so using loops
Step7: More involved way
Alternatively, you can use the following series of commands to organize the data. It is a little more involved, but gives you more control over the data. For instance, it gives you the ability to only use some of the events rather than all of them, which also would decrease some of the computing time.
Step8: The above commands do not actually make the data directly usable, we need one more step for that, which is the get_collision function. This function is different from the get_collisions function used in the simpler method in that it only pulls out the information of a single event rather than all of them. This means that to get information from multiple events, you will need to use this command in a loop, for which you can define a range that determines what events you actualy want to use.
<br>
<br>
NOTE
Step9: Other than that get_collision only gets the information from one event rather than all of them, it essentially organizes the information in the same way that get_collisions does. You can interact with this data the same way you would for any individual event from the big list of events that get_collisions would give you.
For instance, to find the energies of all jets in the events we were looking at like we did for the simpler method, it would look very similar. However, you will notice that because we already have to loop over each event to use the get_collision function, we can simply nest the rest of our code within this loop. | Python Code:
import numpy as np
import matplotlib.pylab as plt
%matplotlib notebook
Explanation: This notebook describes how to use the provided tools to interface with the data. It goes over the process of installing the tools, retrieving the data, and opening the data within a notebook.
<br>
<br>
Importing the tools and data
Video Tutorial on how to import tools into Jupyter
<br>
<br>
If you have worked with python notebooks before, you are probably familiar with the more basic included libraries. While you may not use all of their functionality for every activity, they are a very useful option to have available. We can bring them in with the standard import command.
End of explanation
import h5hep
import pps_tools as pps
Explanation: Now we need the particle physics specific libraries. If you installed the libraries from the command shell properly as shown in the local setup tutorial, then most of the work needed to use these should already be done. From here it should be as simple as using the import command like for the included libraries.
End of explanation
filename = 'dimuons_1000_collisions.hdf5'
pps.download_drive_file(filename)
### Other examples: ###
#pps.download_file_from_google_drive('dimuons_1000_collisions.hdf5','data/file.hdf5')
#pps.download_file_from_google_drive('<google drive file id>','data/file.hdf5')
#pps.download_file('https://github.com/particle-physics-playground/playground/blob/master/data/dimuons_1000_collisions.hdf5')
Explanation: With all of the tools imported, we now need our data files. Included in pps_tools are several functions for this purpose. Currently all of the files for the activities are located in this Google Drive folder. The function you will need to download files from this Drive is download_drive_file, for which the argument is simply the file name. You can also use the function download_file_from_google_drive if you want to save the file you download under a different name, or if for some reason you need to download a file from Google Drive that is not included, in which case you will need to put the file's id tag instead of the filename.
<br>
In the case that you want to download a file from a different location online, you would need to use the download_file function, for which the argument is the url of the file you need. (This tool will not work for Google Drive files).
<br>
Once you download a file, it should permanently reside in the directory you downloaded it to, so you only have to do it once.
End of explanation
# Print the keys to see what is in the dictionary OPTIONAL
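# (note: `event` here stands for a single collision dictionary, e.g. one entry of the
#  `collisions` list built in the next cells -- run this after you have such an object)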
for key in event.keys():
print(key)
Explanation: With the tools imported and files downloaded, you should now have everything you need to start interfacing with the data!
<br>
<br>
Interfacing with the data
Video overview on data interfacing
<br>
Reading data and navigating lists
The h5hep tools we used to unpack the data files put the data into lists that are accessible with dictionaries. You can view all of the dictionary entries the data is tagged with using the following command (this is entirely optional, and simply gives you an idea of what kind of data the file may have contained).
End of explanation
infile = '../data/dimuons_1000_collisions.hdf5'
collisions = pps.get_collisions(infile,experiment='CMS',verbose=False)
print(len(collisions), " collisions") # This line is optional, and simply tells you how many events are in the file.
Explanation: To organize the data in a way that makes it easy to find what you need, you will need to use the hep tools we imported. This can be done in several ways.
<br>
Simple way
The first way uses less commands, but gives you less control over how much data you are using. This command organizes ALL of the data, so if you are using a large data file, this can take a long time. However, it is usually the best choice if you want to use all the data.
<br>
<br>
NOTE: This command will need to change depending on which experiment your activity is aligned with. For instance, the top quark activity uses CMS tools, so for the experiment argument in get_collisions, we put 'CMS'. The other possible arguments would be 'CLEO' and 'BaBar'.
End of explanation
second_collision = collisions[1] # the second event (list indexes start at 0)
print("Second event: ",second_collision)
all_muons = second_collision['muons'] # all of the muons in the second event
print("All muons: ",all_muons)
first_muon = all_muons[0] # the first muon in the second event
print("First muon: ",first_muon)
muon_energy = first_muon['e'] # the energy of the first muon
print("First muon's energy: ",muon_energy)
Explanation: This returns a list called collisions which has all of the collision events as entries. Each event is in turn its own list whose entries are the different types of particles involved in that collision. These are also lists, containing each individual particle of that particular type as entries, which are also lists of the four-momentum and other characteristics of each particle.
<br>
This can be a bit complicated until you learn to work with it, so we'll try a visualization as an example:
End of explanation
energies = []
for collision in collisions: # loops over all the events in the file
jets = collision['jets'] # gets the list of all jets in the event
for jet in jets: # loops over each jet in the current event
e = jet['e'] # gets the energy of the jet
energies.append(e) # puts the energy in a list
Explanation: You might notice that each individual event is callable from all collisions by its entry number, as are the individual particles from within their lists of particle types. However, the particle types themselves are only callable from the event list by their names. The characteristics of each particle are also only callable from their lists by the name of the characteristic. The exact dictionary entry needed to call them can be referenced by printing event.keys as above.
Because get_collisions puts ALL of the data in a list, to do your analysis, you can simply call everything you need from this one list. For instance, if we wanted to find the energies of all the jets in the entire list of collisions, we could do so using loops:
End of explanation
infile = '../data/dimuons_1000_collisions.hdf5'
alldata = pps.get_all_data(infile,verbose=False)
nentries = pps.get_number_of_entries(alldata)
print("# entries: ",nentries) # This optional line tells you how many events are in the file
Explanation: More involved way
Alternatively, you can use the following series of commands to organize the data. It is a little more involved, but gives you more control over the data. For instance, it gives you the ability to only use some of the events rather than all of them, which also would decrease some of the computing time.
End of explanation
for entry in range(nentries): # This range will loop over ALL of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
for entry in range(0,int(nentries/2)): # This range will loop over the first half of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
for entry in range(int(nentries/2),nentries): # This range will loop over the second half of the events
collision = pps.get_collision(alldata,entry_number=entry,experiment='CMS')
Explanation: The above commands do not actually make the data directly usable; we need one more step for that, which is the get_collision function. This function is different from the get_collisions function used in the simpler method in that it only pulls out the information of a single event rather than all of them. This means that to get information from multiple events, you will need to use this command in a loop, for which you can define a range that determines what events you actually want to use.
<br>
<br>
NOTE: Depending on which activity you are doing, you will have to change the experiment argument to 'CMS', 'CLEO', or 'BaBar'. You will also need to change the entry_number argument to be the same variable you call in the loop.
End of explanation
energies = []
for event in range(0,int(nentries/3)): # Loops over first 3rd of all events
collision = pps.get_collision(alldata,entry_number=event,experiment='CMS') # organizes the data so you can interface with it
jets = collision['jets'] # gets the list of all jets in the current event
for jet in jets: # loops over all jets in the event
e = jet['e'] # gets the energy of the jet
energies.append(e) # adds the energy to a list
Explanation: Other than that get_collision only gets the information from one event rather than all of them, it essentially organizes the information in the same way that get_collisions does. You can interact with this data the same way you would for any individual event from the big list of events that get_collisions would give you.
For instance, to find the energies of all jets in the events we were looking at like we did for the simpler method, it would look very similar. However, you will notice that because we already have to loop over each event to use the get_collision function, we can simply nest the rest of our code within this loop.
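A natural follow-up (not part of the original notebook, and the axis label/units are assumptions that depend on the file) is to histogram the collected energies with the matplotlib tools imported at the top:
plt.figure()
plt.hist(energies, bins=50)
plt.xlabel('jet energy')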
End of explanation |
5,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Basic CFNCluster Setup</h1>
<h3 align="center">Author
Step1: 1. Install CFNCluster
Notice
Step2: 2. Upgrade CFNCluster
Step3: 3. Configure CFNCluster
To configure CFNCluster settings, you need to import the package CFNCluster. The below functions tell you how to insert AWS access keys, configure instance types, spot price and S3 resource.
Step4: After you finish configuration, you can call the below function to double check if your settings are correct.
Before you create a new cluster, you can check the current running clusters to avoid to use the different cluster name by call the below function.
Step5: To create a new cluster, you need to set a cluster name and then call the below function. After the creation is complete, you will see the output information about your cluser IP address.
Step6: 4. Manage cluster
To manage your new created cluster, you need to import ConnectionManager. The ConnectionManager can create the connection to the master node, execute commands on the master node, transfer files to the master. To create a connection to the master node, you need to set the hostname, username and your private key file. The hostname IP address (MasterPublicIP) can be found when your cluster creation is complete. The private key file should be the same when you configure CFNCluster.
Step7: To install GATK
<font color='red'>Notice
Step8: After the job is done, you can call the below function to close the connection.
Step9: To delete the cluster, you just need to set the cluster name and call the below function. | Python Code:
import os
import sys
sys.path.append(os.getcwd().replace("notebooks/awsCluster", "src/awsCluster"))
## Input the AWS account access keys
aws_access_key_id = "/**aws_access_key_id**/"
aws_secret_access_key = "/**aws_secret_access_key**/"
## CFNCluster name
your_cluster_name = "cluster_name"
## The private key pair for accessing cluster.
private_key = "/path/to/aws_private_key.pem"
## If delete cfncluster after job is done.
delete_cfncluster = False
Explanation: <h1 align="center">Basic CFNCluster Setup</h1>
<h3 align="center">Author: Guorong Xu (g1xu@ucsd.edu) </h3>
<h3 align="center">2016-4-22</h3>
The notebook is an example that tells you how to call the API to install and configure the CFNCluster package, create a cluster, and connect to the master node. Currently we only support Linux and Mac OS platforms.
<font color='red'>Notice:</font> First step is to fill in the AWS account access keys and then follow the instructions to install CFNCluster package and create a cluster.
End of explanation
from cfnCluster import CFNClusterManager
CFNClusterManager.install_cfn_cluster()
Explanation: 1. Install CFNCluster
Notice: The CFNCluster package can be only installed on Linux box which supports pip installation.
End of explanation
from cfnCluster import CFNClusterManager
CFNClusterManager.upgrade_cfn_cluster()
Explanation: 2. Upgrade CFNCluster
End of explanation
from cfnCluster import CFNClusterManager
## Configure cfncluster settings
CFNClusterManager.insert_access_keys(aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
CFNClusterManager.config_key_name(private_key)
CFNClusterManager.config_instance_types(master_instance_type="m3.large", compute_instance_type="m4.xlarge")
CFNClusterManager.config_initial_cluster_size(initial_cluster_size="0")
CFNClusterManager.config_spot_price(spot_price="0.7")
CFNClusterManager.config_volume_size(volume_size="300")
CFNClusterManager.config_ebs_snapshot_id(ebs_snapshot_id="snap-058a7554")
CFNClusterManager.config_aws_region_name(aws_region_name="us-west-2")
CFNClusterManager.config_post_install(post_install="s3://path/to/postinstall.sh")
CFNClusterManager.config_vpc_subnet_id(master_subnet_id="subnet-00000000", vpc_id="vpc-00000000")
CFNClusterManager.config_s3_resource(s3_read_resource="s3://bucket_name/", s3_read_write_resource="s3://bucket_name/")
Explanation: 3. Configure CFNCluster
To configure CFNCluster settings, you need to import the package CFNCluster. The below functions tell you how to insert AWS access keys, configure instance types, spot price and S3 resource.
End of explanation
CFNClusterManager.view_cfncluster_config()
CFNClusterManager.list_cfn_cluster()
Explanation: After you finish configuration, you can call the below function to double check if your settings are correct.
Before you create a new cluster, you can check the currently running clusters, to avoid reusing an existing cluster name, by calling the below function.
End of explanation
master_ip_address = CFNClusterManager.create_cfn_cluster(cluster_name=your_cluster_name)
Explanation: To create a new cluster, you need to set a cluster name and then call the below function. After the creation is complete, you will see the output information about your cluster IP address.
End of explanation
from cfnCluster import ConnectionManager
ssh_client = ConnectionManager.connect_master(hostname=master_ip_address,
username="ec2-user",
private_key_file=private_key)
Explanation: 4. Manage cluster
To manage your newly created cluster, you need to import ConnectionManager. The ConnectionManager can create the connection to the master node, execute commands on the master node, and transfer files to the master. To create a connection to the master node, you need to set the hostname, username and your private key file. The hostname IP address (MasterPublicIP) can be found when your cluster creation is complete. The private key file should be the same one you used when configuring CFNCluster.
End of explanation
from cfnCluster import ConnectionManager
ConnectionManager.copy_gatk(ssh_client, "/path/to/local/GenomeAnalysisTK.jar")
Explanation: To install GATK
<font color='red'>Notice:</font> You need to download the GenomeAnalysisTK.jar from the official website to your local machine, then upload to the cluster.
The GATK official website: https://software.broadinstitute.org/gatk/download/
End of explanation
ConnectionManager.close_connection(ssh_client)
Explanation: After the job is done, you can call the below function to close the connection.
End of explanation
from cfnCluster import CFNClusterManager
if delete_cfncluster == True:
CFNClusterManager.delete_cfn_cluster(cluster_name=your_cluster_name)
Explanation: To delete the cluster, you just need to set the cluster name and call the below function.
End of explanation |
5,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use implemented algorithms
Overview
The project has the following structure
Step1: We will check if the library is imported correctly by computing a few parameters of empty graph.
Step2: Reading graph from a file
Now we will show how to read graph from a file and compute requested parameters.
Step3: Computing requested parameters
We assume that the provided graphs are simple and thus multi edges in the files are ignored.
Since one of the graphs has 1000 vertices and 3989 edges, computation of diameter will take some time.
Step4: Now we will present the results in a table. Notice that package ipy_table is required. | Python Code:
import sys, os
# sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath('../src/'))))
sys.path.append('../src/')
import graph
reload(graph)
Explanation: How to use implemented algorithms
Overview
The project has the following structure:
doc contains documentation of the project
src contains all the source code
algorithms contains different algorithms
test contains modules for testing with unittest
test_data contains test data
tools different tools (scripts) used for building and testing
Notebooks contains ipython notebooks which demonstrate usage
Importing Graph module
Firstly, we will import the Graph module. We are in the folder Notebooks, so we need to append the path.
End of explanation
emptyG = graph.Graph()
diam = emptyG.diameter()
gcc = emptyG.global_clustering_coefficient()
print("The diameter of empty graph: %d" % diam)
print("The global cluster coeff of empty graph is: %f" % gcc)
Explanation: We will check if the library is imported correctly by computing a few parameters of an empty graph.
End of explanation
path = '../test_data/'
txt = ".txt"
filenames = ['zachary_connected', 'graph_1000n_4000m', 'graph_100n_1000m']
graphs = [] # store all three graph objects in a list
for i, g_name in enumerate(filenames):
g = graph.Graph({})
g.read_from_file(filename=path+g_name+txt)
graphs.append(g)
results = []
params = ["vertices", "edges", "density", "diameter", "clustering coef"]
print("%d, %d" % (graphs[0].number_of_vertices(), graphs[0].number_of_edges()))
print("%d, %d" % (graphs[1].number_of_vertices(), graphs[1].number_of_edges()))
print("%d, %d" % (graphs[2].number_of_vertices(), graphs[2].number_of_edges()))
Explanation: Reading graph from a file
Now we will show how to read graph from a file and compute requested parameters.
End of explanation
# compute parameters
for i, G in enumerate(graphs):
temp_results = [G.number_of_vertices(), G.number_of_edges(),
G.density(), G.diameter(),
G.global_clustering_coefficient()
]
results.append(temp_results)
Explanation: Computing requested parameters
We assume that the provided graphs are simple and thus multi edges in the files are ignored.
Since one of the graphs has 1000 vertices and 3989 edges, computation of diameter will take some time.
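For reference, the standard definitions for an undirected simple graph with n vertices and m edges (the library's exact conventions may differ slightly) are: density $= 2m / \big(n(n-1)\big)$, the diameter is the longest shortest-path distance between any pair of vertices, and the global clustering coefficient is $3 \times$ (number of triangles) / (number of connected triples).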
End of explanation
from ipy_table import *
dictList = []
data_str = "dataset"
# convert the dictionary to a list
for i in range(len(results)+1):
if i == 0:
dictList.append([data_str] + [p for p in params])
else:
dictList.append([filenames[i-1]] + results[i-1])
# create table with make_table
make_table(dictList)
set_column_style(0, width='100', bold=True, color='hsla(225, 80%, 94%, 1)')
set_column_style(1, width='100')
# render the table
render()
Explanation: Now we will present the results in a table. Notice that package ipy_table is required.
End of explanation |
5,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'ukesm1-0-ll', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: UKESM1-0-LL
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
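For illustration only, a filled-in call could look like the commented line below; the name and email are placeholders rather than the document's actual authors:
# DOC.set_author("Jane Doe", "jane.doe@example.org")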
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition, then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
5,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Weight clustering in Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Train a tf.keras model for MNIST without clustering
Step3: Evaluate the baseline model and save it for later usage
Step4: Fine-tune the pre-trained model with clustering
Apply the cluster_weights() API to a whole pre-trained model to demonstrate its effectiveness in reducing the model size after applying zip while keeping decent accuracy. For how best to balance the accuracy and compression rate for your use case, please refer to the per layer example in the comprehensive guide.
Define the model and apply the clustering API
Before you pass the model to the clustering API, make sure it is trained and shows some acceptable accuracy.
Step5: Fine-tune the model and evaluate the accuracy against baseline
Fine-tune the model with clustering for 1 epoch.
Step6: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
Step7: Create 6x smaller models from clustering
Both strip_clustering and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.
First, create a compressible model for TensorFlow. Here, strip_clustering removes all variables (e.g. tf.Variable for storing the cluster centroids and the indices) that clustering only needs during training, which would otherwise add to model size during inference.
Step8: Then, create compressible models for TFLite. You can convert the clustered model to a format that's runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
Step9: Define a helper function to actually compress the models via gzip and measure the zipped size.
Step10: Compare and see that the models are 6x smaller from clustering
Step11: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
You can apply post-training quantization to the clustered model for additional benefits.
Step12: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
Step13: You evaluate the model, which has been clustered and quantized, and then see the accuracy from TensorFlow persists to the TFLite backend. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
from tensorflow import keras
import numpy as np
import tempfile
import zipfile
import os
Explanation: Weight clustering in Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/clustering/clustering_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
Welcome to the end-to-end example for weight clustering, part of the TensorFlow Model Optimization Toolkit.
Other pages
For an introduction to what weight clustering is and to determine if you should use it (including what's supported), see the overview page.
To quickly find the APIs you need for your use case (beyond fully clustering a model with 16 clusters), see the comprehensive guide.
Contents
In the tutorial, you will:
Train a tf.keras model for the MNIST dataset from scratch.
Fine-tune the model by applying the weight clustering API and see the accuracy.
Create 6x smaller TF and TFLite models from clustering.
Create an 8x smaller TFLite model from combining weight clustering and post-training quantization.
See the persistence of accuracy from TF to TFLite.
Setup
You can run this Jupyter Notebook in your local virtualenv or colab. For details of setting up dependencies, please refer to the installation guide.
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
Explanation: Train a tf.keras model for MNIST without clustering
End of explanation
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Explanation: Evaluate the baseline model and save it for later usage
End of explanation
import tensorflow_model_optimization as tfmot
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 16,
'cluster_centroids_init': CentroidInitialization.LINEAR
}
# Cluster a whole model
clustered_model = cluster_weights(model, **clustering_params)
# Use smaller learning rate for fine-tuning clustered model
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
clustered_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
clustered_model.summary()
Explanation: Fine-tune the pre-trained model with clustering
Apply the cluster_weights() API to a whole pre-trained model to demonstrate its effectiveness in reducing the model size after applying zip while keeping decent accuracy. For how best to balance the accuracy and compression rate for your use case, please refer to the per layer example in the comprehensive guide.
Define the model and apply the clustering API
Before you pass the model to the clustering API, make sure it is trained and shows some acceptable accuracy.
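This notebook clusters the whole model; as a sketch of the per-layer alternative mentioned above (the restriction to Dense layers is illustrative, and the comprehensive guide remains the authoritative reference), one can clone the model and wrap only selected layers:
def apply_clustering_to_dense(layer):
    # Illustrative: cluster only Dense layers, leave every other layer unchanged.
    if isinstance(layer, tf.keras.layers.Dense):
        return cluster_weights(layer, **clustering_params)
    return layer
selectively_clustered_model = tf.keras.models.clone_model(
    model,
    clone_function=apply_clustering_to_dense,
)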
End of explanation
# Fine-tune model
clustered_model.fit(
train_images,
train_labels,
batch_size=500,
epochs=1,
validation_split=0.1)
Explanation: Fine-tune the model and evaluate the accuracy against baseline
Fine-tune the model with clustering for 1 epoch.
End of explanation
_, clustered_model_accuracy = clustered_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Clustered test accuracy:', clustered_model_accuracy)
Explanation: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
End of explanation
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
_, clustered_keras_file = tempfile.mkstemp('.h5')
print('Saving clustered model to: ', clustered_keras_file)
tf.keras.models.save_model(final_model, clustered_keras_file,
include_optimizer=False)
Explanation: Create 6x smaller models from clustering
Both strip_clustering and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.
First, create a compressible model for TensorFlow. Here, strip_clustering removes all variables (e.g. tf.Variable for storing the cluster centroids and the indices) that clustering only needs during training, which would otherwise add to model size during inference.
End of explanation
clustered_tflite_file = '/tmp/clustered_mnist.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
tflite_clustered_model = converter.convert()
with open(clustered_tflite_file, 'wb') as f:
f.write(tflite_clustered_model)
print('Saved clustered TFLite model to:', clustered_tflite_file)
Explanation: Then, create compressible models for TFLite. You can convert the clustered model to a format that's runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
End of explanation
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
Explanation: Define a helper function to actually compress the models via gzip and measure the zipped size.
End of explanation
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered Keras model: %.2f bytes" % (get_gzipped_model_size(clustered_keras_file)))
print("Size of gzipped clustered TFlite model: %.2f bytes" % (get_gzipped_model_size(clustered_tflite_file)))
Explanation: Compare and see that the models are 6x smaller from clustering
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
_, quantized_and_clustered_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_clustered_tflite_file, 'wb') as f:
f.write(tflite_quant_model)
print('Saved quantized and clustered TFLite model to:', quantized_and_clustered_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_clustered_tflite_file)))
Explanation: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
You can apply post-training quantization to the clustered model for additional benefits.
End of explanation
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
Explanation: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()
test_accuracy = eval_model(interpreter)
print('Clustered and quantized TFLite test_accuracy:', test_accuracy)
print('Clustered TF test accuracy:', clustered_model_accuracy)
Explanation: You evaluate the model, which has been clustered and quantized, and then see the accuracy from TensorFlow persists to the TFLite backend.
End of explanation |
5,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regridding input data to higher resolution
The initial resolution of the input file sets the highest resolution that the Badlands model can use. If one starts with a given resolution and wants to work with a higher one, the input file must be regridded to match at least the requested resolution.
Step1: 1. Load python class and set required resolution
Step2: 2. Regrid DEM file
Step3: 3. Regrid Rain file
Step4: 4. Regrid Tectonic files
Here you can choose between vertical-only displacement files and 3D ones.
In cases where you have several files, you might create a loop to regrid them automatically!
Vertical only file
Step5: 3D displacements file | Python Code:
import sys
print(sys.version)
print(sys.executable)
%matplotlib inline
# Import badlands grid generation toolbox
import pybadlands_companion.resizeInput as resize
Explanation: Regridding input data to higher resolution
The initial resolution of the input file sets the highest resolution that the Badlands model can use. If one starts with a given resolution and wants to work with a higher one, the input file must be regridded to match at least the requested resolution.
End of explanation
#help(resize.resizeInput.__init__)
newRes = resize.resizeInput(requestedSpacing = 40)
Explanation: 1. Load python class and set required resolution
End of explanation
#help(newRes.regridDEM)
newRes.regridDEM(inDEM='mountain/data/nodes.csv',outDEM='mountain/data/newnodes.csv')
Explanation: 2. Regrid DEM file
End of explanation
#help(newRes.regridRain)
newRes.regridRain(inRain='data/rain.csv',outRain='newrain.csv')
Explanation: 3. Regrid Rain file
End of explanation
#help(newRes.regridTecto)
newRes.regridTecto(inTec='data/disp.csv', outTec='newdisp.csv')
Explanation: 4. Regrid Tectonic files
Here you can choose between vertical-only displacement files and 3D ones.
In cases where you have several files, you might create a loop to regrid them automatically!
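For example, a loop over several vertical-displacement files could look like the sketch below (the file names and their number are hypothetical):
tec_files = ['data/disp1.csv', 'data/disp2.csv', 'data/disp3.csv']   # hypothetical inputs
for step, tec_file in enumerate(tec_files):
    newRes.regridTecto(inTec=tec_file, outTec='newdisp%d.csv' % step)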
Vertical only file
End of explanation
#help(newRes.regridDisp)
newRes.regridDisp(inDisp='data/disp.csv', outDisp='newdisp.csv')
Explanation: 3D displacements file
End of explanation |
5,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xrange vs range looping
For long for loops where the iteration variable is not needed, use
Step1: This will loop 10 times; the underscore is a throwaway name signalling that the iteration value is deliberately unused. Also, xrange returns an iterator-like object, whereas range returns a full list, which can take a lot of memory for large loops.
Automating variable names
To assign a variable name and value in a loop fashion, use vars()[variable name as a string] = variable value. Such as
Step2: You can see the variables in memory with
Step3: Binary numbers and Python operators
A good review of Python operators can be found here
Step4: Ensuring two binary numbers are the same length
Step5: For bitwise or
Step6: bitwise or is |, xor is ^, and is &, complement (flip 0's to 1's and 1's to 0's) is ~, binary shift left (shift the bits left by the given number of places, padding with zeros on the right) is <<, and shift right is >>
Convert the resulting binary number to base 10
Step7: Building a 'stack' in Python | Python Code:
for _ in xrange(10):
print "Do something"
Explanation: xrange vs range looping
For long for loops where the iteration variable is not needed, use:
End of explanation
for i in range(1,10):
vars()['x'+str(i)] = i
Explanation: This will loop 10 times; the underscore is a throwaway name signalling that the iteration value is deliberately unused. Also, xrange returns an iterator-like object, whereas range returns a full list, which can take a lot of memory for large loops.
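A quick way to see the difference in Python 2 (purely illustrative):
import sys
print type(range(1000000))     # <type 'list'> - fully materialised in memory
print type(xrange(1000000))    # <type 'xrange'> - lazy, constant-size object
print sys.getsizeof(range(1000000)) > sys.getsizeof(xrange(1000000))   # True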
Automating variable names
To assign a variable name and value in a loop fashion, use vars()[variable name as a string] = variable value. Such as:
End of explanation
print repr(dir())
print repr(x1)
print repr(x5)
Explanation: You can see the variables in memory with:
End of explanation
bin(21)[2:]
Explanation: Binary numbers and Python operators
A good review of Python operators can be found here: http://www.programiz.com/python-programming/operators
The wiki reviewing bitwise operations here: https://en.wikipedia.org/wiki/Bitwise_operation
OR
http://www.math.grin.edu/~rebelsky/Courses/152/97F/Readings/student-binary
Note that binary numbers follow:
Place values, most significant digit first: 2^4 | 2^3 | 2^2 | 2^1 | 2^0
10 -> 2+0 = 2
111 -> 4+2+1 = 7
10101 -> 16+0+4+0+1 = 21
11110 -> 16+8+4+2+0 = 30
Convert numbers from base 10 to binary with bin()
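For instance, checking the last row of the table above:
print bin(30)        # '0b11110'
print 0b11110 == 30  # True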
End of explanation
a = 123
b = 234
a, b = bin(a)[2:], bin(b)[2:]
print "Before evening their lengths:\n{}\n{}".format(a,b)
diff = len(a)-len(b)
if diff > 0:
b = '0' * diff + b
elif diff < 0:
a = '0' * abs(diff) + a
print "After evening their lengths:\n{}\n{}".format(a,b)
Explanation: Ensuring two binary numbers are the same length
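As an aside, the same zero-padding can be done with the built-in str.zfill:
width = max(len(a), len(b))
a, b = a.zfill(width), b.zfill(width)   # pad with leading zeros to a common width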
End of explanation
s = ''
for i in range(len(a)):
s += str(int(a[i]) | int(b[i]))
print "{}\n{}\n{}\n{}".format(a, b, '-'*len(a), s)
Explanation: For bitwise or:
End of explanation
sum(map(lambda x: 2**x[0] if int(x[1]) else 0, enumerate(reversed(s))))
Explanation: bitwise or is |, xor is ^, and is &, complement (flip 0's to 1's and 1's to 0's) is ~, binary shift left (shift the bits left by the given number of places, padding with zeros on the right) is <<, and shift right is >>
Convert the resulting binary number to base 10:
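A few quick illustrations of the operators listed above, plus the built-in alternative to the manual conversion (the operand values are arbitrary examples):
x, y = 0b1100, 0b1010       # 12 and 10
print bin(x & y)            # '0b1000'   (and)
print bin(x ^ y)            # '0b110'    (xor)
print bin(x << 2)           # '0b110000' (shift left two places)
print bin(x >> 2)           # '0b11'     (shift right)
print ~x                    # -13        (complement, two's-complement form)
print int(s, 2)             # same value as the sum/map expression above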
End of explanation
class Stack:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def peek(self):
return self.items[len(self.items)-1]
def size(self):
return len(self.items)
s=Stack()
print repr(s.isEmpty())+'\n'
s.push(4)
s.push('dog')
print repr(s.peek())+'\n'
s.push(True)
print repr(s.size())+'\n'
print repr(s.isEmpty())+'\n'
s.push(8.4)
print repr(s.pop())+'\n'
print repr(s.pop())+'\n'
print repr(s.size())+'\n'
Explanation: Building a 'stack' in Python
End of explanation |
5,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chopsticks!
A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
An investigation for determining the optimum length of chopsticks.
Link to Abstract and Paper
the abstract below was adapted from the link
Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
Download the data set for the adults, then answer the following questions based on the abstract and the data set.
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
1. What is the independent variable in the experiment?
You can either double click on this cell to add your answer in this cell, or use the plus sign in the toolbar (Insert cell below) to add your answer in a new cell.
Chopstick length
2. What is the dependent variable in the experiment?
Food pinching efficiency
3. How is the dependent variable operationally defined?
The number of peanuts picked and placed in a cup (PPPC)
4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. (For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.)
Age of participant
Sex of college students
One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
Step1: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
Step2: This number is helpful, but it doesn't tell us which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below.
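A sketch of what that per-length breakdown might look like with pandas (the column names are assumptions based on the data set description and may differ in the actual file):
# Assumed column names - check dataFrame.columns against the real CSV.
print(dataFrame['Food.Pinching.Efficiency'].mean())                               # overall average
print(dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean())   # average per length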
Step3: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
240mm | Python Code:
import pandas as pd
# pandas is a software library for data manipulation and analysis
# We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd.
# hit shift + enter to run this cell or block of code
path = r'/Users/scott/googledrive/udacity/data_science/project_0/chopstick-effectiveness.csv'
# Change the path to the location where the chopstick-effectiveness.csv file is located on your computer.
# If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer.
dataFrame = pd.read_csv(path)
dataFrame
Explanation: Chopsticks!
A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
An investigation for determining the optimum length of chopsticks.
Link to Abstract and Paper
the abstract below was adapted from the link
Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
Download the data set for the adults, then answer the following questions based on the abstract and the data set.
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
1. What is the independent variable in the experiment?
You can either double click on this cell to add your answer in this cell, or use the plus sign in the toolbar (Insert cell below) to add your answer in a new cell.
Chopstick length
2. What is the dependent variable in the experiment?
Food pinching efficiency
3. How is the dependent variable operationally defined?
The number of peanuts picked up and placed in a cup (PPPC)
4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. (For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.)
Age of participant
Sex of college students
One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
End of explanation
dataFrame['Food.Pinching.Efficiency'].mean()
Explanation: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
End of explanation
meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index()
meansByChopstickLength
# reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5.
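# A quick programmatic check of question 5 below: idxmax() picks the row with the
# highest mean efficiency, so we do not have to read the table by eye.
best = meansByChopstickLength.loc[meansByChopstickLength['Food.Pinching.Efficiency'].idxmax()]
print("Best performing length: {} mm".format(best['Chopstick.Length']))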
Explanation: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below.
End of explanation
# Causes plots to display within the notebook rather than in a new window
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency'])
plt.xlabel("Length in mm")
plt.ylabel("Efficiency in PPPC")
plt.title("Average Food Pinching Efficiency by Chopstick Length")
plt.show()
Explanation: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
240mm
End of explanation |
5,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Webscraping
SOC 590
Step1: open US News Rankings for Sociology webpage
view page source to see html
Step3: create a function to extract page data from US News
Step4: make empty lists for US News Rankings and errors
Step5: go through multiple web pages of US News Rankings
Step6: view possible urls that returned errors
Step7: view list of appended dataframes
concatenate dataframes into one .csv
Step8: make folders for raw_data and clean_data
Step9: read in raw .csv of US News Rankings with Pandas
Step10: make new columns for school location and school name
clean rank and score columns with regular expressions
Step11: fill in the school name and school location columns with regular expressions
split the school column to create two new columns
Step12: drop observations that are empty rows (NaN observations on school name column)
Step13: save clean .csv to file
Step14: Review
In this tutorial, we learned how to examine the html structure of a webpage and use a function based on the Beautiful Soup module to parse tables from multiple webpages into a .csv. After creating a .csv of all webpage tables, we analyzed the .csv using the Pandas module. Lastly, we created visualizations with the Altair library | Python Code:
import os
import urllib.request  # the request submodule must be imported explicitly for urllib.request.urlopen
import webbrowser
import pandas as pd
from bs4 import BeautifulSoup
Explanation: Python for Webscraping
SOC 590: Big Data and Population Processes
17th October 2016
Tutorial 2: Webscraping with a function
Outline
Import modules
Examine html structure of a webpage
Use Beautiful Soup within a function
Analyze .csv of webpages as a Pandas DataFrame
Process data with regular expressions
Visualize data with Altair
import relevant modules
standard library modules:
os
urllib
webbrowser
open source modules:
pandas
Beautiful Soup
End of explanation
url = 'http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-humanities-schools/sociology-rankings/page+1'
webbrowser.open_new_tab(url)
Explanation: open US News Rankings for Sociology webpage
view page source to see html
End of explanation
def extract_page_data(table_rows):
    """Extract and return the desired information from the td elements within
    the table rows.
    """
    # create the empty list to store the school data
# <tr> tag: defines a row in a table
# <td> tag: defines a cell in a table
rows = []
for row in soup.findAll('tr'):
rows.append([val.text for val in row.find_all('td')])
return rows[1:]
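# Note: as written, the helper ignores its table_rows argument and iterates over the
# module-level soup object created in the scraping loop below, returning every table
# row except the header row.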
Explanation: create a function to extract page data from US News
End of explanation
us_news_rankings = []
errors_list = []
Explanation: make empty lists for US News Rankings and errors
End of explanation
url_template = 'http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-humanities-schools/sociology-rankings/page+{page_number}'
# for each page from 1 to (and including) 4
for page_number in range(1, 5):
# Use try/except block to catch and inspect any urls that cause an error
try:
# get the webpage url
url = url_template.format(page_number=page_number)
# get the html
html = urllib.request.urlopen(url)
# create the BeautifulSoup object
soup = BeautifulSoup(html, "lxml")
# get the column headers
headers = [header.text for header in soup.findAll('th')]
        # start extracting rows
table_rows = soup.select('td')[1:]
school_data = extract_page_data(table_rows)
# create the dataframe for the current page
        school_df = pd.DataFrame(school_data, columns=headers)
        # the second assignment discards the scraped headers (they likely do not match
        # the td cells one-to-one); column names are assigned manually further down
        school_df = pd.DataFrame(school_data)
# append the current dataframe to the list of dataframes
us_news_rankings.append(school_df)
except Exception as e:
# Store the url and the error it causes in a list
error =[url, e]
# then append it to the list of errors
errors_list.append(error)
Explanation: go through multiple web pages of US News Rankings
End of explanation
print(len(errors_list))
errors_list
Explanation: view possible urls that returned errors
End of explanation
us_news_rankings
us_news_df_raw = pd.concat(us_news_rankings, axis=0)
column_headers = ["rank", "school", "score"]
us_news_df_raw.columns = column_headers
us_news_df_raw.head(10)
Explanation: view list of appended dataframes
concatenate dataframes into one .csv
End of explanation
if not os.path.exists('../data/raw_data'):
os.makedirs('../data/raw_data')
if not os.path.exists('../data/clean_data'):
os.makedirs('../data/clean_data')
# Write out the raw rankings data to the raw_data folder in the data folder
us_news_df_raw.to_csv("../data/raw_data/us_news_rankings_RAW.csv", index=False)
Explanation: make folders for raw_data and clean_data
End of explanation
us_news_df_raw = pd.read_csv("../data/raw_data/us_news_rankings_RAW.csv")
us_news_df_raw = us_news_df_raw[0:len(us_news_df_raw)]
us_news_df_raw.head()
Explanation: read in raw .csv of US News Rankings with Pandas
End of explanation
us_news_df_raw["school_location"] = "NaN"
us_news_df_raw["school_name"] = "NaN"
us_news_df_raw["rank"] = us_news_df_raw.loc[:,('rank')].replace(r"\D", "", regex = True)
us_news_df_raw["score"] = us_news_df_raw.loc[:,('score')].str.extract("(\d.\d)", expand=False)
us_news_df_raw.head(10)
Explanation: make new columns for school location and school name
clean rank and score columns with regular expressions
End of explanation
for i in range(0,len(us_news_df_raw)+1):
try:
us_news_df_raw["school_name"][i] = us_news_df_raw["school"].str.split("\n\n")[i][0]
us_news_df_raw["school_location"][i] = us_news_df_raw["school"].str.split("\n\n")[i][1]
except:
us_news_df_raw["school_name"][i] = "NaN"
us_news_df_raw["school_location"][i] = "NaN"
us_news_df_raw["school_name"] = us_news_df_raw.loc[:,('school_name')].replace(r"\n", "", regex = True)
us_news_df_raw["school_location"] = us_news_df_raw.loc[:,('school_location')].replace("\n", "", regex = True)
cols = ["rank", "school_name", "school_location", "score"]
us_news_df_raw = us_news_df_raw[cols]
us_news_df_raw.head()
Explanation: fill in the school name and school location columns with regular expressions
split the school column to create two new columns
End of explanation
us_news_df_clean = us_news_df_raw[us_news_df_raw['school_name']!="NaN"]
us_news_df_clean.head()
Explanation: drop observations that are empty rows (NaN observations on school name column)
End of explanation
us_news_df_clean.to_csv("../data/clean_data/us_news_rankings_clean.csv")
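# The next lines (not covered in the write-up above) geocode each school's location
# string with geopy/Nominatim and unpack the results into latitude and longitude
# columns, which the Altair chart below uses to plot the schools on a rough map.
# Nominatim rate limits can make this step slow or occasionally fail.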
from geopy.geocoders import Nominatim
geolocator = Nominatim()
locations = us_news_df_clean['school_location'].apply(lambda x: geolocator.geocode(x)) # equiv to df.sum(0)
address,coordinates = zip(*locations)
latitude,longitude = zip(*coordinates)
us_news_df_clean.loc[:,'latitude'] = latitude
us_news_df_clean.loc[:,('longitude')] = longitude
us_news_df_clean = us_news_df_clean.apply(pd.to_numeric, errors="ignore")
us_news_df_clean.head()
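# Bin the peer-assessment scores into five quantile-based groups (0-4); the quintile
# column is only used as a categorical colour scale in the chart below.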
us_news_df_clean['quintile'] = pd.qcut(us_news_df_clean['score'], 5, labels=False)
from altair import Chart, X, Y, Axis, Scale
Chart(us_news_df_clean).mark_circle(
size=100,
opacity=0.6
).encode(
x=X('longitude:Q', axis=Axis(title=' ')),
y=Y('latitude:Q', axis=Axis(title=' ')),
#scale=Scale(domain=(-60, 80))),
color='quintile:N',
).configure_cell(
width=800,
height=350
).configure_axis(
grid=False,
axisWidth=0,
tickWidth=0,
labels=False,
)
Explanation: save clean .csv to file
End of explanation
import sys
import altair
import bs4
print("System and module version information: \n")
print('Python version:', sys.version_info)
print('urllib.request version:', urllib.request.__version__)
print('pandas version:', pd.__version__)
print('altair version:',altair.__version__)
print('Beautiful Soup version:', bs4.__version__)
Explanation: Review
In this tutorial, we learned how to examine the html structure of a webpage and use a function based on the Beautiful Soup module to parse tables from multiple webpages into a .csv. After creating a .csv of all webpage tables, we analyzed the .csv using the Pandas module. Lastly, we created visualizations with the Altair library
End of explanation |
5,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Primeiramente, é necessária a leitura dos 3 arquivos, inserindo as informações em um vetor
Step1: Depois, devemos retirar cada quebra de linha no final de cada linha, ou seja, os '\n'.
Step2: A seguir, retiramos os dois últimos caracteres sobrando apenas o nosso comentário. Depois, passamos ele para lowercase.
Step4: Foi decidida a abordagem por poly SCV
Step5: Resultado - Poly
Step6: Com maior refinamento de dados
Step7: GradientBoostingClassifier
Step8: Gaussiano | Python Code:
import codecs
with codecs.open("imdb_labelled.txt", "r", "utf-8") as arquivo:
vetor = []
for linha in arquivo:
vetor.append(linha)
with codecs.open("amazon_cells_labelled.txt", "r", "utf-8") as arquivo:
for linha in arquivo:
vetor.append(linha)
with codecs.open("yelp_labelled.txt", "r", "utf-8") as arquivo:
for linha in arquivo:
vetor.append(linha)
Explanation: Primeiramente, é necessária a leitura dos 3 arquivos, inserindo as informações em um vetor:
End of explanation
vetor = [ x[:-1] for x in vetor ]
vetor = ([s.replace('&', '').replace(' - ', '').replace('.', '').replace(',', '').replace('!', '').
replace('+', '')for s in vetor])
Explanation: Depois, devemos retirar cada quebra de linha no final de cada linha, ou seja, os '\n'.
End of explanation
TextosQuebrados = [ x[:-4] for x in vetor ]
TextosQuebrados = map(lambda X:X.lower(),TextosQuebrados)
#TextosQuebrados = [x.split(' ') for x in TextosQuebrados]
TextosQuebrados = [nltk.tokenize.word_tokenize(frase) for frase in TextosQuebrados]
import nltk
stopwords = nltk.corpus.stopwords.words('english')
stemmer = nltk.stem.RSLPStemmer()
dicionario = set()
for comentarios in TextosQuebrados:
validas = [stemmer.stem(palavra) for palavra in comentarios if palavra not in stopwords and len(palavra) > 0]
dicionario.update(validas)
totalDePalavras = len(dicionario)
tuplas = zip(dicionario, xrange(totalDePalavras))
tradutor = {palavra:indice for palavra,indice in tuplas}
def vetorizar_texto(texto, tradutor, stemmer):
vetor = [0] * len(tradutor)
for palavra in texto:
if len(palavra) > 0:
raiz = stemmer.stem(palavra)
if raiz in tradutor:
posicao = tradutor[raiz]
vetor[posicao] += 1
return vetor
vetoresDeTexto = [vetorizar_texto(texto, tradutor,stemmer) for texto in TextosQuebrados]
X = vetoresDeTexto
Y = [ x[-1:] for x in vetor ]
porcentagem_de_treino = 0.8
tamanho_do_treino = porcentagem_de_treino * len(Y)
tamanho_de_validacao = len(Y) - tamanho_do_treino
treino_dados = X[0:int(tamanho_do_treino)]
treino_marcacoes = Y[0:int(tamanho_do_treino)]
validacao_dados = X[int(tamanho_do_treino):]
validacao_marcacoes = Y[int(tamanho_do_treino):]
fim_de_teste = tamanho_do_treino + tamanho_de_validacao
teste_dados = X[int(tamanho_do_treino):int(fim_de_teste)]
teste_marcacoes = Y[int(tamanho_do_treino):int(fim_de_teste)]
Explanation: A seguir, retiramos os dois últimos caracteres sobrando apenas o nosso comentário. Depois, passamos ele para lowercase.
End of explanation
from sklearn import svm
from sklearn.model_selection import cross_val_score
k = 10
# Implement poly SVC
poly_svc = svm.SVC(kernel='linear')
accuracy_poly_svc = cross_val_score(poly_svc, treino_dados, treino_marcacoes, cv=k, scoring='accuracy')
print('poly_svc: ', accuracy_poly_svc.mean())
Explanation: Foi decidida a abordagem por poly SCV
End of explanation
def fit_and_predict(modelo, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes):
modelo.fit(treino_dados, treino_marcacoes)
resultado = modelo.predict(teste_dados)
acertos = (resultado == teste_marcacoes)
total_de_acertos = sum(acertos)
total_de_elementos = len(teste_dados)
taxa_de_acerto = float(total_de_acertos) / float(total_de_elementos)
print(taxa_de_acerto)
return taxa_de_acerto
resultados = {}
from sklearn.naive_bayes import MultinomialNB
modeloMultinomial = MultinomialNB()
resultadoMultinomial = fit_and_predict(modeloMultinomial, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes)
resultados[resultadoMultinomial] = modeloMultinomial
Explanation: Resultado - Poly:
Os 3: Após 10 minutos rodando, foi decidido parar o teste
IMdB: 0.51750234411626805
Amazon: 0.51125019534302241
Yelp: 0.56500429754649173
Resultado - Linear:
Os 3: 0.7745982496802607 (5 minutos)
IMdB: 0.72168288013752147
Amazon: 0.78869745272698855
Yelp: 0.77492342553523996
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
classificador = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0).fit(treino_dados, treino_marcacoes)
resultado = fit_and_predict(classificador, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes)
Explanation: Com maior refinamento de dados:
MultinomialNB:
Todos: 0.808652246256
Adaboost:
Todos:0.527454242928
End of explanation
from sklearn.naive_bayes import GaussianNB
classificador = GaussianNB()
resultado = fit_and_predict(classificador, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes)
Explanation: GradientBoostingClassifier:
Todos: 0.77870216306156403
End of explanation
from sklearn.naive_bayes import BernoulliNB
classificador = BernoulliNB()
resultado = fit_and_predict(classificador, treino_dados, treino_marcacoes, teste_dados, teste_marcacoes)
Explanation: Gaussiano:
0.665557404326
End of explanation |
5,717 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How to batch convert sentence lengths to masks in PyTorch? | Problem:
import numpy as np
import pandas as pd
import torch
lens = load_data()
max_len = max(lens)
mask = torch.arange(max_len).expand(len(lens), max_len) > (max_len - lens.unsqueeze(1) - 1)
mask = mask.type(torch.LongTensor) |
5,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ETL with PySpark SQL
Step1: Importing and creating SparkSession
Step2: Setting filesystem and files
Load all CSV's files from HiggsTwitter dataset (http
Step3: Convert CSV's dataframes to Apache Parquet files
Step4: Load the parquet files into new dataframes
Step5: Working with dataframes
Step6: Spark SQL using DataFrames API
Step7: Spark SQL using SQL language
Create temporary views of all tables so we can use SQL statements
Step8: Performance testing
GZIP Compressed CSV file vs Parquet file
Step9: Cached DF vs not cached DF
This time we will cache the 2 previous dataframes (socialDF and socialDFpq) and see how much faster it is.
Step10: Note | Python Code:
import os
import sys
os.environ["SPARK_HOME"] = "/Users/projects/.pyenv/versions/3.7.10/envs/tatapower/lib/python3.7/site-packages/pyspark"
# os.environ["HADOOP_HOME"] = ""
# os.environ["PYSPARK_PYTHON"] = "/opt/cloudera/parcels/Anaconda/bin/python"
# os.environ["JAVA_HOME"] = "/usr/java/jdk1.8.0_161/jre"
# os.environ["PYLIB"] = os.environ["SPARK_HOME"] + "/python/lib"
# sys.path.insert(0, os.environ["PYLIB"] +"/py4j-0.10.6-src.zip")
# sys.path.insert(0, os.environ["PYLIB"] +"/pyspark.zip")
Explanation: ETL with PySpark SQL
End of explanation
from pyspark import SparkFiles
from pyspark.sql import SparkSession
from pyspark.sql.window import Window
from pyspark.sql.types import *
from pyspark.sql.functions import *
# Create a SparkSession.
spark = SparkSession.builder\
.master("local[*]")\
.appName("ETL")\
.config("spark.executor.logs.rolling.time.interval", "daily")\
.getOrCreate()
Explanation: Importing and creating SparkSession
End of explanation
# Social Network edgelist
# First, we set the filename
file = "HiggsTwitter/higgs-social_network.edgelist.gz"
# Second, Set the Schema where first column is follower and second is followed, both of types integer.
schema = StructType([StructField("follower", IntegerType()), StructField("followed", IntegerType())])
# Create the DataFrame
socialDF = spark.read.csv(path=file, sep=" ", schema=schema)
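# Optional sanity check: socialDF.printSchema() and socialDF.show(5) confirm the
# edgelist was parsed into the two integer columns defined above.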
#Retweet Network
# First, we set the filename
file = "HiggsTwitter/higgs-retweet_network.edgelist.gz"
# Second, Set the Schema where first column is tweeter, second is tweeted, third is occur and all are of type integer.
# One possible completion of this blank, mirroring the socialDF example above and the
# column names given in the comment:
schema = StructType([StructField("tweeter", IntegerType()), StructField("tweeted", IntegerType()), StructField("occur", IntegerType())])
# Create the DataFrame
retweetDF = spark.read.csv(path=file, sep=" ", schema=schema)
# Reply Network
# First, we set the filename
file = "HiggsTwitter/higgs-reply_network.edgelist.gz"
# Second, Set the Schema where first column is replier, second is replied, third is occur and all are of type integer.
schema = StructType([StructField("replier", IntegerType()), StructField("replied", IntegerType()), StructField("occur", IntegerType())])
# Create the DataFrame
replyDF = spark.read.csv(path=file, sep=" ", schema=schema)
# Mention Network
# First, we set the filename
file = "HiggsTwitter/higgs-mention_network.edgelist.gz"
# Second, Set the Schema where first column is mentioner, second is mentioned, third is occur and all are of type integer.
schema = StructType([StructField("mentioner", IntegerType()), StructField("mentioned", IntegerType()), StructField("occur", IntegerType())])
# Create the DataFrame
mentionDF = spark.read.csv(path=file, sep=" ", schema=schema)
# Activity Time
# First, we set the filename
file = "HiggsTwitter/higgs-activity_time.txt.gz"
# Second, Set the Schema where
# * first column is userA (integer)
# * second is userB (integer)
# * third is timestamp (integer)
# * fourth is interaction (string): Interaction can be: RT (retweet), MT (mention) or RE (reply)
schema = StructType([StructField("userA", IntegerType()), StructField("userB", IntegerType()), StructField("timestamp", IntegerType()), StructField("interaction", StringType())])
activityDF = spark.read.csv(path=file, sep=" ", schema=schema)
Explanation: Setting filesystem and files
Load all CSV files from the HiggsTwitter dataset (http://snap.stanford.edu/data/higgs-twitter.html)
Read all five gzip-compressed files into Spark DataFrames.
End of explanation
# Save all the five files to parquet format
# One possible completion; the output paths below are illustrative, not prescribed by the original
socialDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/social_network")
retweetDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/retweet_network")
replyDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/reply_network")
mentionDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/mention_network")
activityDF.write.mode("overwrite").parquet("HiggsTwitter/parquet/activity_time")
Explanation: Convert the CSV DataFrames to Apache Parquet files
End of explanation
# Read all the five files from parquet format
socialDFpq = spark.read.parquet("HiggsTwitter/parquet/social_network")  # paths must match the write step above
retweetDFpq = spark.read.parquet("HiggsTwitter/parquet/retweet_network")
replyDFpq = spark.read.parquet("HiggsTwitter/parquet/reply_network")
mentionDFpq = spark.read.parquet("HiggsTwitter/parquet/mention_network")
activityDFpq = spark.read.parquet("HiggsTwitter/parquet/activity_time")
Explanation: Load the parquet files into new dataframes
End of explanation
# Display the schema of the dataframes
socialDFpq.printSchema()
socialDFpq.dtypes  # a second way to inspect the inferred schema
# Show the top 5 rows of each dataframe
socialDFpq.show(5)
retweetDFpq.show(5)
replyDFpq.show(5)
mentionDFpq.show(5)
activityDFpq.show(5)
Explanation: Working with dataframes
End of explanation
# Users who have most followers
socialDFpq.groupBy("followed").agg(count("follower").alias("followers")).orderBy(desc("followers")).show(5)
# Users who have most mentions
mentionDFpq.groupBy("mentioned").agg(count("mentioner").alias("mentions")).orderBy(desc("mentions")).show(5)
# Of the top 5 followed users, how many mentions has each one?
# top_f contains "top 5 users who have most followers"
top_f = socialDFpq.groupBy("followed").agg(count("follower").alias("followers")).orderBy(desc("followers")).limit(5)
# one possible completion: join the top-5 with the mention network and count mentions per user
top_f.join(mentionDFpq, top_f.followed == mentionDFpq.mentioned, "left").groupBy("followed").agg(count("mentioner").alias("mentions")).orderBy(desc("mentions")).show()
Explanation: Spark SQL using DataFrames API
End of explanation
# Create temporary views so we can use SQL statements
# One possible completion of this exercise cell (the view names are illustrative):
socialDFpq.createOrReplaceTempView("social")
retweetDFpq.createOrReplaceTempView("retweet")
replyDFpq.createOrReplaceTempView("reply")
mentionDFpq.createOrReplaceTempView("mention")
activityDFpq.createOrReplaceTempView("activity")
# List all the tables in spark memory
spark.catalog.listTables()
# Users who have most followers using SQL
spark.sql("SELECT followed, COUNT(follower) AS followers FROM social GROUP BY followed ORDER BY followers DESC LIMIT 5").show()
# Users who have most mentions using SQL
spark.sql("SELECT mentioned, COUNT(mentioner) AS mentions FROM mention GROUP BY mentioned ORDER BY mentions DESC LIMIT 5").show()
# Of the top 5 followed users, how many mentions has each one? (using SQL)
spark.sql("SELECT s.followed, COUNT(m.mentioner) AS mentions FROM (SELECT followed, COUNT(follower) AS followers FROM social GROUP BY followed ORDER BY followers DESC LIMIT 5) s LEFT JOIN mention m ON m.mentioned = s.followed GROUP BY s.followed ORDER BY mentions DESC").show()
Explanation: Spark SQL using SQL language
Create temporary views of all tables so we can use SQL statements
End of explanation
%%time
# GZIP Compressed CSV
socialDF.groupBy("followed").agg(count("follower").alias("followers")).orderBy(desc("followers")).show(5)
%%time
# Parquet file
socialDFpq.groupBy("followed").agg(count("followed").alias("followers")).orderBy(desc("followers")).show(5)
Explanation: Performance testing
GZIP Compressed CSV file vs Parquet file
End of explanation
# cache dataframes
socialDF.cache()
socialDFpq.cache()
# remove from cache
#socialDF.unpersist()
#socialDFpq.unpersist()
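# To confirm that caching took effect, the storage level can be inspected, e.g.
# socialDF.storageLevel (a suggested check, separate from the timing comparison below).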
Explanation: Cached DF vs not cached DF
This time we will cache the 2 previous dataframes (socialDF and socialDFpq) and see how much faster it is.
End of explanation
%%time
# GZIP Compressed CSV (dataframe cached)
socialDF.groupBy("followed").agg(count("followed").alias("followers")).orderBy(desc("followers")).show(5)
%%time
# Parquet file (dataframe cached)
socialDFpq.groupBy("followed").agg(count("followed").alias("followers")).orderBy(desc("followers")).show(5)
Explanation: Note: The first run over a newly cached dataframe can be slower, but subsequent runs should be faster.
End of explanation |
5,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup command line client
The following is code to get the client and set it up.
Step1: Describes the tests needed to validate the PutFile functionality.
The commands presented are just examples of what the actual command to test with could look like.
Basic put
Put a file to all pillars
Run put command on a file which is not present in the collection.
Step2: Check this by running
Step3: The response should be
Count
Step4: Verify that it is now present at all pillars, and that it has the same checksum at all pillars
Step5: Put a file with a different file-id to all pillars
Run put command on a file and use the file-id parameter (-i)
Step6: Verify that it is now present at all pillars, and that it has the same checksum at all pillars
Step7: Put a file by using an URL
Use the URL and Checksum to put a file, already on the webdav server.
Step8: Verify that it is now present at al pillars, with the requested checksum
Step9: Idempotent test
Attempt to put a file with an fileID that is already present in the collection suppling the same checksum
The putFile request should succeed.
Step10: Put with returned checksums
Put a file including a request of a salted checksum calculated on the pillar, which should be returned.
Step11: Use the request-checksum-algorithm and request-checksum-salt arguments (-R and -S)
Step12: Note, this will return the stored MD5 hashes, if the file is already put'et
Step13: Verify that the checksumpillar does not reply with a checksum.
Step14: Verify that the checksumpillar and all the other pillars have the file
Step15: Put a file including a request for checksum, but using both a salt and a non-salt algorithm.
Use the request-checksum-algorithm and request-checksum-salt arguments (-R and -S)
Step16: Verify that all the data pillars return the same checksum, and that it is identical to the previous test (where '-R HMAC_SHA1')
Put a file including a request for checksum, but using a salt algorithm but not giving a salt.
Use the request-checksum-algorithm and request-checksum-salt arguments (-R and no -S)
Step17: Verify that the data pillars all deliver the same checksum, which must be different from the checksum in the previous two tests.
Error scenarios
Attempt to put a file with an fileID that is already present in the collection (ensure that the file has a different checksum from the already archived)
The putFile request should fail informing the user that a file with the given file ID already exists in the collection.
Step18: Attempt to put a file which does not exist
Client should fail immediately
Step19: Attempt to put a file to a non-existing collection
Client should fail immediately
Step20: Attempt to put a file to a non-existing pillar
Client should fail immediately | Python Code:
%env CLIENT bitrepository-client-1.9-RC1
!wget -Nq "https://sbforge.org/download/attachments/25395346/${CLIENT}.zip"
!unzip -quo ${CLIENT}.zip
%alias bitmag ${CLIENT}/bin/bitmag.sh %l
#Some imports we will need later
import random
import string
Explanation: Setup command line client
The following is code to get the client and set it up.
End of explanation
TESTFILE1='README.md'
Explanation: Describes the tests needed to validate the PutFile functionality.
The commands presented are just examples of what the actual command to test with could look like.
Basic put
Put a file to all pillars
Run put command on a file which is not present in the collection.
End of explanation
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE1}
Explanation: Check this by running
End of explanation
hash=!cat {TESTFILE1} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "sbtape2"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "reference2"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "checksum2"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "sbdisk1"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "kbpillar2"
%bitmag put-file -c integrationtest2 -f {TESTFILE1}
Explanation: The response should be
Count: Size: FileID:
Indicating that no pillar had the file.
If the file is found, delete it with the following command
See https://www.safaribooksonline.com/blog/2014/02/12/using-shell-commands-effectively-ipython/ for how this works
End of explanation
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE1}
%bitmag get-checksums -c integrationtest2 -i {TESTFILE1}
Explanation: Verify that it is now present at all pillars, and that it has the same checksum at all pillars
End of explanation
TESTFILE2=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
%bitmag put-file -c integrationtest2 -f {TESTFILE1} -i {TESTFILE2}
Explanation: Put a file with a different file-id to all pillars
Run put command on a file and use the file-id parameter (-i)
End of explanation
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE2}
%bitmag get-checksums -c integrationtest2 -i {TESTFILE2}
Explanation: Verify that it is now present at all pillars, and that it has the same checksum at all pillars
End of explanation
TESTFILE3=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
URL='http://sandkasse-01.kb.dk/dav/test.txt'
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag put-file -c integrationtest2 -u {URL} -i {TESTFILE3} -C {hash.s}
Explanation: Put a file by using an URL
Use the URL and Checksum to put a file, already on the webdav server.
End of explanation
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE3}
%bitmag get-checksums -c integrationtest2 -i {TESTFILE3}
Explanation: Verify that it is now present at all pillars, with the requested checksum
End of explanation
%bitmag put-file -c integrationtest2 -f {TESTFILE1}
Explanation: Idempotent test
Attempt to put a file with an fileID that is already present in the collection suppling the same checksum
The putFile request should succeed.
End of explanation
import hmac
import hashlib
import urllib.request
def getSaltedChecksum(url,salt,algorithm):
saltBytes = bytes.fromhex(salt)
digester = hmac.new(saltBytes,None,algorithm)
with urllib.request.urlopen(url) as from_fh:
while True:
            chunk = from_fh.read(64 * 1024)  # read in fixed-size chunks so large files are not held in memory at once
if not chunk:
break
digester.update(chunk)
return digester.hexdigest().lower()
saltedChecksum=getSaltedChecksum(url=URL,salt='abcd',algorithm=hashlib.sha1)
print(saltedChecksum)
Explanation: Put with returned checksums
Put a file including a request for a salted checksum calculated on the pillar, which should be returned.
End of explanation
TESTFILE4=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
Explanation: Use the request-checksum-algorithm and request-checksum-salt arguments (-R and -S):
End of explanation
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag put-file -c integrationtest2 -u {URL} -i {TESTFILE4} -C {hash.s} -S 'abcd' -R HMAC_SHA1
Explanation: Note, this will return the stored MD5 hashes, if the file is already put'et
End of explanation
%bitmag get-checksums -c integrationtest2 -i {TESTFILE4} -R HMAC_SHA1 -S 'abcd'
Explanation: Verify that the checksumpillar does not reply with a checksum.
End of explanation
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE4}
Explanation: Verify that the checksumpillar and all the other pillars have the file
End of explanation
TESTFILE5=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag put-file -c integrationtest2 -u {URL} -i {TESTFILE5} -C {hash.s} -R SHA1 -S 'abcd'
Explanation: Put a file including a request for checksum, but using both a salt and a non-salt algorithm.
Use the request-checksum-algorithm and request-checksum-salt arguments (-R and -S):
End of explanation
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
hash=!curl -s {URL} | sha1sum - | cut -d' ' -f1
print("sha1: {}".format(hash))
%bitmag get-checksums -c integrationtest2 -i {TESTFILE5}
Explanation: Verify that all the data pillars return the same checksum, and that it is identical to the previous test (where '-R HMAC_SHA1')
Put a file including a request for checksum, but using a salt algorithm but not giving a salt.
Use the request-checksum-algorithm and request-checksum-salt arguments (-R and no -S):
End of explanation
%bitmag put-file -c integrationtest2 -f .gitignore -i {TESTFILE1}
Explanation: Verify that the data pillars all deliver the same checksum, which must be different from the checksum in the previous two tests.
Error scenarios
Attempt to put a file with a fileID that is already present in the collection (ensure that the file has a different checksum from the one already archived)
The putFile request should fail informing the user that a file with the given file ID already exists in the collection.
End of explanation
%bitmag put-file -c integrationtest2 -f ThisFileDoesNotExist
Explanation: Attempt to put a file which does not exist
Client should fail immediately
End of explanation
%bitmag put-file -c integrationtest3 -i {TESTFILE1}
Explanation: Attempt to put a file to a non-existing collection
Client should fail immediately
End of explanation
%bitmag put-file -c integrationtest1 -i {TESTFILE1} -p non-existing-pillar
Explanation: Attempt to put a file to a non-existing pillar
Client should fail immediately
End of explanation |
5,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning
Step1: Scientific Computing in NumPy
This appendix offers a quick tour of the NumPy library for working with multi-dimensional arrays in Python. NumPy (short for Numerical Python) was created by Travis Oliphant in 2005, by merging Numarray into Numeric. Since then, the open source NumPy library has evolved into an essential library for scientific computing in Python and has become a building block of many other scientific libraries, such as SciPy, scikit-learn, pandas, and others.
What makes NumPy so particularly attractive to the scientific community is that it provides a convenient Python interface for working with multi-dimensional array data structures efficiently; the NumPy array data structure is also called ndarray, which is short for n-dimensional array.
In addition to being mostly implemented in C and using Python as glue, the main reason why NumPy is so efficient for numerical computations is that NumPy arrays use contiguous blocks of memory that can be efficiently cached by the CPU. In contrast, Python lists are arrays of pointers to objects in random locations in memory, which cannot be easily cached and come with a more expensive memory-look-up. However, the computational efficiency and low-memory footprint come at a cost
Step2: By default, NumPy infers the type of the array upon construction. Since we passed Python integers to the array, the ndarray object ary1d should be of type int64 on a 64-bit machine, which we can confirm by accessing the dtype attribute
Step3: If we want to construct NumPy arrays of different types, we can pass an argument to the dtype parameter of the array function, for example np.int32 to create 32-bit arrays. For a full list of supported data types, please refer to the official NumPy documentation. Once an array has been constructed, we can downcast or recast its type via the astype method as shown in the following example
Step4: In the following sections we will cover many more aspects of NumPy arrays; however, to conclude this basic introduction to the ndarray object, let us take a look at some of its handy attributes.
For instance, the itemsize attribute returns the size of a single array element in bytes
Step5: The code snippet above returned 8, which means that each element in the array (remember that ndarrays are homogeneous) takes up 8 bytes in memory. This result makes sense since the array ary2d has type int64 (64-bit integer), which we determined earlier, and 8 bits equals 1 byte. (Note that 'int64' is just a shorthand for np.int64.)
To return the number of elements in an array, we can use the size attribute, as shown below
Step6: And the number of dimensions of our array (Intuitively, you may think of dimensions as the rank of a tensor) can be obtained via the ndim attribute
Step7: If we are interested in the number of elements along each array dimension (in the context of NumPy arrays, we may also refer to them as axes), we can access the shape attribute as shown below
Step8: The shape is always a tuple; in the code example above, the two-dimensional ary object has two rows and three columns, (2, 3), if we think of it as a matrix representation.
Similarly, the shape of the one-dimensional array only contains a single value
Step9: Instead of passing lists or tuples to the array function, we can also provide a single float or integer, which will construct a zero-dimensional array (for instance, a representation of a scalar)
Step10: Array Construction Routines
In the previous section, we used the array function to construct NumPy arrays from Python objects that are sequences or nested sequences -- lists, tuples, nested lists, iterables, and so forth. While array is often our go-to function for creating ndarray objects, NumPy implements a variety of functions for constructing arrays that may come in handy in different contexts. In this section, we will take a quick peek at those that we use most commonly -- you can find a more comprehensive list in the official documentation.
The array function works with most iterables in Python, including lists, tuples, and range objects; however, array does not support generator expression. If we want parse generators directly, however, we can use the fromiter function as demonstrated below
Step11: Next, we will take at two functions that let us create ndarrays of consisting of either ones and zeros by only specifying the elements along each axes (here
Step12: Creating arrays of ones or zeros can also be useful as placeholder arrays, in cases where we do not want to use the initial values for computations but want to fill it with other values right away. If we do not need the initial values (for instance, '0.' or '1.'), there is also numpy.empty, which follows the same syntax as numpy.ones and np.zeros. However, instead of filling the array with a particular value, the empty function creates the array with non-sensical values from memory. We can think of zeros as a function that creates the array via empty and then sets all its values to 0. -- in practice, a difference in speed is not noticeable, though.
NumPy also comes with functions to create identity matrices and diagonal matrices as ndarrays that can be useful in the context of linear algebra -- a topic that we will explore later in this appendix.
Step13: Lastly, I want to mention two very useful functions for creating sequences of numbers within a specified range, namely, arange and linspace. NumPy's arange function follows the same syntax as Python's range objects
Step14: Notice that arange also performs type inference similar to the array function. If we only provide a single function argument, the range object treats this number as the endpoint of the interval and starts at 0
Step15: Similar to Python's range, a third argument can be provided to define the step (the default step size is 1). For example, we can obtain an array of all uneven values between one and ten as follows
Step16: The linspace function is especially useful if we want to create a particular number of evenly spaced values in a specified half-open interval
Step17: Array Indexing
In this section, we will go over the basics of retrieving NumPy array elements via different indexing methods. Simple NumPy indexing and slicing works similar to Python lists, which we will demonstrate in the following code snippet, where we retrieve the first element of a one-dimensional array
Step18: Also, the same Python semantics apply to slicing operations. The following example shows how to fetch the first two elements in ary
Step19: If we work with arrays that have more than one dimension or axis, we separate our indexing or slicing operations by commas as shown in the series of examples below
Step20:
Step21: Array Math and Universal Functions
In the previous sections, you learned how to create NumPy arrays and how to access different elements in an array. It is about time that we introduce one of the core features of NumPy that makes working with ndarray so efficient and convenient
Step22: This for-loop approach is very verbose, and we could achieve the same goal more elegantly using list comprehensions
Step23: We can accomplish the same using NumPy's ufunc for element-wise scalar addition as shown below
Step24: The ufuncs for basic arithmetic operations are add, subtract, divide, multiply, and exp (exponential). However, NumPy uses operator overloading so that we can use mathematical operators (+, -, /, *, and **) directly
Step25: Above, we have seen examples of binary ufuncs, which are ufuncs that take two arguments as an input. In addition, NumPy implements several useful unary ufuncs, such as log (natural logarithm), log10 (base-10 logarithm), and sqrt (square root).
Often, we want to compute the sum or product of array element along a given axis. For this purpose, we can use a ufunc's reduce operation. By default, reduce applies an operation along the first axis (axis=0). In the case of a two-dimensional array, we can think of the first axis as the rows of a matrix. Thus, adding up elements along rows yields the column sums of that matrix as shown below
Step26: To compute the row sums of the array above, we can specify axis=1
Step27: While it can be more intuitive to use reduce as a more general operation, NumPy also provides shorthands for specific operations such as product and sum. For example, sum(axis=0) is equivalent to add.reduce
Step28: As a word of caution, keep in mind that product and sum both compute the product or sum of the entire array if we do not specify an axis
Step29: Other useful unary ufuncs are
Step30: In contrast to what we are used from linear algebra, we can also add arrays of different shapes. In the example above, we will add a one-dimensional to a two-dimensional array, where NumPy creates an implicit multidimensional grid from the one-dimensional array ary1
Step31: Keep in mind though that the number of elements along the explicit axes and the implicit grid have to match so that NumPy can perform a sensical operation. For instance, the following example should raise a ValueError, because NumPy attempts to add the elements from the first axis of the left array (2 elements) to the first axis of the right array (3 elements)
Step32: So, if we want to add the 2-element array to the columns in ary3, the 2-element array must have two elements along its first axis as well
Step33: Advanced Indexing -- Memory Views and Copies
In the previous sections, we have used basic indexing and slicing routines. It is important to note that basic integer-based indexing and slicing create so-called views of NumPy arrays in memory. Working with views can be highly desirable since it avoids making unnecessary copies of arrays to save memory resources. To illustrate the concept of memory views, let us walk through a simple example where we access the first row in an array, assign it to a variable, and modify that variable
Step34: As we can see in the example above, changing the value of first_row also affected the original array. The reason for this is that ary[0] created a view of the first row in ary, and its elements were then incremented by 99. The same concept applies to slicing operations
Step35: If we are working with NumPy arrays, it is always important to be aware that slicing creates views -- sometimes it is desirable since it can speed up our code by avoiding to create unnecessary copies in memory. However, in certain scenarios we want force a copy of an array; we can do this via the copy method as shown below
Step36: One way to check if two arrays might share memory is to use the may_share_memory function from NumPy. However, be aware that it uses a heuristic and can return false negatives or false positives. The code snippets shows an example of may_share_memory applied to the view (first_row) and copy (second_row) of the array elements from ary
Step37: In addition to basic single-integer indexing and slicing operations, NumPy supports advanced indexing routines called fancy indexing. Via fancy indexing, we can use tuple or list objects of non-contiguous integer indices to return desired array elements. Since fancy indexing can be performed with non-contiguous sequences, it cannot return a view -- a contiguous slice from memory. Thus, fancy indexing always returns a copy of an array -- it is important to keep that in mind. The following code snippets show some fancy indexing examples
Step38: Finally, we can also use Boolean masks for indexing -- that is, arrays of True and False values. Consider the following example, where we return all values in the array that are greater than 3
Step39: Using these masks, we can select elements given our desired criteria
Step40: We can also chain different selection criteria using the logical and operator '&' or the logical or operator '|'. The example below demonstrates how we can select array elements that are greater than 3 and divisible by 2
Step41: Note that indexing using Boolean arrays is also considered "fancy indexing" and thus returns a copy of the array.
Random Number Generators
In machine learning and deep learning, we often have to generate arrays of random numbers -- for example, the initial values of our model parameters before optimization. NumPy has a random subpackage to create random numbers and samples from a variety of distributions conveniently. Again, I encourage you to browse through the more comprehensive numpy.random documentation for a more comprehensive list of functions for random sampling.
To provide a brief overview of the pseudo-random number generators that we will use most commonly, let's start with drawing a random sample from a uniform distribution
Step42: In the code snippet above, we first seeded NumPy's random number generator. Then, we drew three random samples from a uniform distribution via random.rand in the half-open interval [0, 1). I highly recommend the seeding step in practical applications as well as in research projects, since it ensures that our results are reproducible. If we run our code sequentially -- for example, if we execute a Python script -- it should be sufficient to seed the random number generator only once at the beginning to enforce reproducible outcomes between different runs. However, it is often useful to create separate RandomState objects for various parts of our code, so that we can test methods of functions reliably in unit tests. Working with multiple, separate RandomState objects can also be useful if we run our code in non-sequential order -- for example if we are experimenting with our code in interactive sessions or Jupyter Notebook environments.
The example below shows how we can use a RandomState object to create the same results that we obtained via np.random.rand in the previous code snippet
Step43: Another useful function that we will often use in practice is randn, which returns a random sample of floats from a standard normal distribution $N(\mu, \sigma^2)$, where the mean, ($\mu$) is zero and unit variance ($\sigma = 1$). The example below creates a two-dimensional array of such z-scores
Step44: NumPy's random functions rand and randn take an arbitrary number of integer arguments, where each integer argument specifies the number of elements along a given axis -- the z_scores array should now refer to an array of 100 rows and two columns. Let us now visualize how our random sample looks like using matplotlib
Step45: If we want to draw a random sample from a non-standard normal distribution, we can simply add a scalar value to the array elements to shift the mean of the sample, and we can multiply the sample by a scalar to change its standard deviation. The following code snippet will change the properties of our random sample as if it has been drawn from a Normal distribution $N(5, 4)$
Step46: Note that in the example above, we multiplied the z-scores by a standard deviation of 2 -- the standard deviation of a sample is the square root of the variance $\sigma^2$. Also, notice that all elements in the array were updated when we multiplied it by a scalar or added a scalar. In the next section, we will discuss NumPy's capabilities regarding such element-wise operations in more detail.
Reshaping Arrays
In practice, we often run into situations where existing arrays do not have the right shape to perform certain computations. As you might remember from the beginning of this appendix, the size of NumPy arrays is fixed. Fortunately, this does not mean that we have to create new arrays and copy values from the old array to the new one if we want arrays of different shapes -- the size is fixed, but the shape is not. NumPy provides a reshape methods that allow us to obtain a view of an array with a different shape.
For example, we can reshape a one-dimensional array into a two-dimensional one using reshape as follows
Step47: While we need to specify the desired elements along each axis, we need to make sure that the reshaped array has the same number of elements as the original one. However, we do not need to specify the number elements in each axis; NumPy is smart enough to figure out how many elements to put along an axis if only one axis is unspecified (by using the placeholder -1)
Step48: We can, of course, also use reshape to flatten an array
Step49: Note that NumPy also has a shorthand for that called ravel
Step50: A function related to ravel is flatten. In contrast to ravel, flatten returns a copy, though
Step51: Sometimes, we are interested in merging different arrays. Unfortunately, there is no efficient way to do this without creating a new array, since NumPy arrays have a fixed size. While combining arrays should be avoided if possible -- for reasons of computational efficiency -- it is sometimes necessary. To combine two or more array objects, we can use NumPy's concatenate function as shown in the following examples
Step52: Linear Algebra with NumPy Arrays
Most of the operations in machine learning and deep learning are based on concepts from linear algebra. In this section, we will take a look how to perform basic linear algebra operations using NumPy arrays.
I want to mention that there is also a special matrix type in NumPy. NumPy matrix objects are analogous to NumPy arrays but are restricted to two dimensions. Also, matrices define certain operations differently than arrays; for instance, the * operator performs matrix multiplication instead of element-wise multiplication. However, NumPy matrix is less popular in the science community compared to the more general array data structure. Since we are also going to work with arrays that have more than two dimensions (for example, when we are working with convolutional neural networks), we will not use NumPy matrix data structures in this book.
Intuitively, we can think of one-dimensional NumPy arrays as data structures that represent row vectors
Step53: Similarly, we can use two-dimensional arrays to create column vectors
Step54: Instead of reshaping a one-dimensional array into a two-dimensional one, we can simply add a new axis as shown below
Step55: Note that in this context, np.newaxis behaves like None
Step56: All three approaches listed above, using reshape(-1, 1), np.newaxis, or None yield the same results -- all three approaches create views not copies of the row_vector array.
As we remember from the Linear Algebra appendix, we can think of a column vector as a matrix consisting only of one column. To perform matrix multiplication between matrices, we learned that number of columns of the left matrix must match the number of rows of the matrix to the right. In NumPy, we can perform matrix multiplication via the matmul function
Step57: However, if we are working with matrices and vectors, NumPy can be quite forgiving if the dimensions of matrices and one-dimensional arrays do not match exactly -- thanks to broadcasting. The following example yields the same result as the matrix-column vector multiplication, except that it returns a one-dimensional array instead of a two-dimensional one
Step58: Similarly, we can compute the dot-product between two vectors (here
Step59: NumPy has a special dot function that behaves similar to matmul on pairs of one- or two-dimensional arrays -- its underlying implementation is different though, and one or the other can be slightly faster on specific machines and versions of BLAS
Step60: Similar to the examples above we can use matmul or dot to multiply two matrices (here
Step61:
Step62: While transpose can be annoyingly verbose for implementing linear algebra operations -- think of PEP8's 80 character per line recommendation -- NumPy has a shorthand for that | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -p numpy
Explanation: Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by Sebastian Raschka. All code examples are released under the MIT license. If you find this content useful, please consider supporting the work by buying a copy of the book.
Other code examples and content are available on GitHub. The PDF and ebook versions of the book are available through Leanpub.
Appendix F - Introduction to NumPy
Scientific Computing in NumPy
N-dimensional Arrays
Array Construction Routines
Array Indexing
Array Math and Universal Functions
Broadcasting
Advanced Indexing -- Memory Views and Copies
Random Number Generators
Reshaping Arrays
Linear Algebra with NumPy Arrays
Conclusion
End of explanation
import numpy as np
lst = [[1, 2, 3], [4, 5, 6]]
ary1d = np.array(lst)
ary1d
Explanation: Scientific Computing in NumPy
This appendix offers a quick tour of the NumPy library for working with multi-dimensional arrays in Python. NumPy (short for Numerical Python) was created by Travis Oliphant in 2005, by merging Numarray into Numeric. Since then, the open source NumPy library has evolved into an essential library for scientific computing in Python and has become a building block of many other scientific libraries, such as SciPy, scikit-learn, pandas, and others.
What makes NumPy so particularly attractive to the scientific community is that it provides a convenient Python interface for working with multi-dimensional array data structures efficiently; the NumPy array data structure is also called ndarray, which is short for n-dimensional array.
In addition to being mostly implemented in C and using Python as glue, the main reason why NumPy is so efficient for numerical computations is that NumPy arrays use contiguous blocks of memory that can be efficiently cached by the CPU. In contrast, Python lists are arrays of pointers to objects in random locations in memory, which cannot be easily cached and come with a more expensive memory-look-up. However, the computational efficiency and low-memory footprint come at a cost: NumPy arrays have a fixed size and are homogenous, which means that all elements must have the same type. Homogenous ndarray objects have the advantage that NumPy can carry out operations using efficient C loops and avoid expensive type checks and other overheads of the Python API. While adding and removing elements from the end of a Python list is very efficient, altering the size of a NumPy array is very expensive since it requires to create a new array and carry over the contents of the old array that we want to expand or shrink.
Besides being more efficient for numerical computations than native Python code, NumPy can also be more elegant and readable due to vectorized operations and broadcasting, which are features that we will explore in this appendix. While this appendix should be sufficient to follow the code examples in this book if you are new to NumPy, there are many advanced NumPy topics that are beyond the scope of this book. If you are interested in a more in-depth coverage of NumPy, I selected a few resources that could be useful to you:
Rougier, N.P., 2016. From Python to Numpy.
Oliphant, T.E., 2015. A Guide to NumPy: 2nd Edition. USA: Travis Oliphant, independent publishing.
Varoquaux, G., Gouillart, E., Vahtras, O., Haenel, V., Rougier, N.P., Gommers, R., Pedregosa, F., Jędrzejewski-Szmek, Z., Virtanen, P., Combelles, C. and Pinte, D., 2015. Scipy Lecture Notes.
The official NumPy documentation
N-dimensional Arrays
NumPy is built around ndarray objects, which are high-performance multi-dimensional array data structures. Intuitively, we can think of a one-dimensional NumPy array as a data structure to represent a vector of elements -- you may think of it as a fixed-size Python list where all elements share the same type. Similarly, we can think of a two-dimensional array as a data structure to represent a matrix or a Python list of lists. While NumPy arrays can have up to 32 dimensions if NumPy was compiled without alterations to the source code, we will focus on lower-dimensional arrays for the purpose of illustration in this introduction.
Now, let us get started with NumPy by calling the array function to create a two-dimensional NumPy array, consisting of two rows and three columns, from a list of lists:
End of explanation
ary1d.dtype
Explanation: By default, NumPy infers the type of the array upon construction. Since we passed Python integers to the array, the ndarray object ary1d should be of type int64 on a 64-bit machine, which we can confirm by accessing the dtype attribute:
End of explanation
float32_ary = ary1d.astype(np.float32)
float32_ary
float32_ary.dtype
Explanation: If we want to construct NumPy arrays of different types, we can pass an argument to the dtype parameter of the array function, for example np.int32 to create 32-bit arrays. For a full list of supported data types, please refer to the official NumPy documentation. Once an array has been constructed, we can downcast or recast its type via the astype method as shown in the following example:
End of explanation
ary2d = np.array([[1, 2, 3],
[4, 5, 6]], dtype='int64')
ary2d.itemsize
Explanation: In the following sections we will cover many more aspects of NumPy arrays; however, to conclude this basic introduction to the ndarray object, let us take a look at some of its handy attributes.
For instance, the itemsize attribute returns the size of a single array element in bytes:
End of explanation
ary2d.size
Explanation: The code snippet above returned 8, which means that each element in the array (remember that ndarrays are homogeneous) takes up 8 bytes in memory. This result makes sense since the array ary2d has type int64 (64-bit integer), which we determined earlier, and 8 bits equals 1 byte. (Note that 'int64' is just a shorthand for np.int64.)
To return the number of elements in an array, we can use the size attribute, as shown below:
End of explanation
ary2d.ndim
Explanation: And the number of dimensions of our array (Intuitively, you may think of dimensions as the rank of a tensor) can be obtained via the ndim attribute:
End of explanation
ary2d.shape
Explanation: If we are interested in the number of elements along each array dimension (in the context of NumPy arrays, we may also refer to them as axes), we can access the shape attribute as shown below:
End of explanation
np.array([1, 2, 3]).shape
Explanation: The shape is always a tuple; in the code example above, the two-dimensional ary2d object has two rows and three columns, (2, 3), if we think of it as a matrix representation.
Similarly, the shape of the one-dimensional array only contains a single value:
End of explanation
scalar = np.array(5)
scalar
scalar.ndim
scalar.shape
Explanation: Instead of passing lists or tuples to the array function, we can also provide a single float or integer, which will construct a zero-dimensional array (for instance, a representation of a scalar):
End of explanation
def generator():
for i in range(10):
if i % 2:
yield i
gen = generator()
np.fromiter(gen, dtype=int)
# using 'comprehensions' the following
# generator expression is equivalent to
# the `generator` above
generator_expression = (i for i in range(10) if i % 2)
np.fromiter(generator_expression, dtype=int)
Explanation: Array Construction Routines
In the previous section, we used the array function to construct NumPy arrays from Python objects that are sequences or nested sequences -- lists, tuples, nested lists, iterables, and so forth. While array is often our go-to function for creating ndarray objects, NumPy implements a variety of functions for constructing arrays that may come in handy in different contexts. In this section, we will take a quick peek at those that we use most commonly -- you can find a more comprehensive list in the official documentation.
The array function works with most iterables in Python, including lists, tuples, and range objects; however, array does not support generator expressions. If we want to parse generators directly, we can use the fromiter function as demonstrated below:
End of explanation
np.ones((3, 3))
np.zeros((3, 3))
Explanation: Next, we will take a look at two functions that let us create ndarrays consisting of either ones or zeros by only specifying the number of elements along each axis (here: three rows and three columns):
End of explanation
np.eye(3)
np.diag((3, 3, 3))
Explanation: Creating arrays of ones or zeros can also be useful as placeholder arrays, in cases where we do not want to use the initial values for computations but want to fill them with other values right away. If we do not need the initial values (for instance, '0.' or '1.'), there is also numpy.empty, which follows the same syntax as numpy.ones and np.zeros. However, instead of filling the array with a particular value, the empty function creates the array with arbitrary, uninitialized values from memory. We can think of zeros as a function that creates the array via empty and then sets all its values to 0. -- in practice, a difference in speed is not noticeable, though.
NumPy also comes with functions to create identity matrices and diagonal matrices as ndarrays that can be useful in the context of linear algebra -- a topic that we will explore later in this appendix.
End of explanation
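As a quick aside, np.diag also works in the other direction: applied to an existing two-dimensional array, it extracts the diagonal instead of constructing a diagonal matrix. A minimal sketch:
import numpy as np
ary = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
np.diag(ary)  # extracts the main diagonal -> array([1, 5, 9])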
np.arange(4., 10.)
Explanation: Lastly, I want to mention two very useful functions for creating sequences of numbers within a specified range, namely, arange and linspace. NumPy's arange function follows the same syntax as Python's range objects: If two arguments are provided, the first argument represents the start value and the second value defines the stop value of a half-open interval:
End of explanation
np.arange(5)
Explanation: Notice that arange also performs type inference similar to the array function. If we only provide a single function argument, arange treats this number as the endpoint of the half-open interval and starts at 0:
End of explanation
np.arange(1., 11., 2)
Explanation: Similar to Python's range, a third argument can be provided to define the step (the default step size is 1). For example, we can obtain an array of all uneven values between one and ten as follows:
End of explanation
np.linspace(0., 1., num=5)
Explanation: The linspace function is especially useful if we want to create a particular number of evenly spaced values in a specified interval; in contrast to arange, the endpoint is included by default:
End of explanation
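In case we prefer arange-style half-open behavior, linspace also accepts an endpoint argument; a small sketch:
import numpy as np
np.linspace(0., 1., num=5, endpoint=False)  # array([0. , 0.2, 0.4, 0.6, 0.8])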
ary = np.array([1, 2, 3])
ary[0]
Explanation: Array Indexing
In this section, we will go over the basics of retrieving NumPy array elements via different indexing methods. Simple NumPy indexing and slicing works similar to Python lists, which we will demonstrate in the following code snippet, where we retrieve the first element of a one-dimensional array:
End of explanation
ary[:2] # equivalent to ary[0:2]
Explanation: Also, the same Python semantics apply to slicing operations. The following example shows how to fetch the first two elements in ary:
End of explanation
ary = np.array([[1, 2, 3],
[4, 5, 6]])
ary[0, 0] # upper left
ary[-1, -1] # lower right
ary[0, 1] # first row, second column
Explanation: If we work with arrays that have more than one dimension or axis, we separate our indexing or slicing operations by commas as shown in the series of examples below:
End of explanation
ary[0] # entire first row
ary[:, 0] # entire first column
ary[:, :2] # first two columns
ary[0, 0]
Explanation:
End of explanation
lst = [[1, 2, 3], [4, 5, 6]]
for row_idx, row_val in enumerate(lst):
for col_idx, col_val in enumerate(row_val):
lst[row_idx][col_idx] += 1
lst
Explanation: Array Math and Universal Functions
In the previous sections, you learned how to create NumPy arrays and how to access different elements in an array. It is about time that we introduce one of the core features of NumPy that makes working with ndarray so efficient and convenient: vectorization. While we typically use for-loops if we want to perform arithmetic operations on sequence-like objects, NumPy provides vectorized wrappers for performing element-wise operations implicitly via so-called ufuncs -- short for universal functions.
As of this writing, there are more than 60 ufuncs available in NumPy; ufuncs are implemented in compiled C code and are very fast and efficient compared to vanilla Python. In this section, we will take a look at the most commonly used ufuncs, and I recommend checking out the official documentation for a complete list.
To provide an example of a simple ufunc for element-wise addition, consider the following example, where we add a scalar (here: 1) to each element in a nested Python list:
End of explanation
lst = [[1, 2, 3], [4, 5, 6]]
[[cell + 1 for cell in row] for row in lst]
Explanation: This for-loop approach is very verbose, and we could achieve the same goal more elegantly using list comprehensions:
End of explanation
ary = np.array([[1, 2, 3], [4, 5, 6]])
ary = np.add(ary, 1)
ary
Explanation: We can accomplish the same using NumPy's ufunc for element-wise scalar addition as shown below:
End of explanation
ary + 1
ary**2
Explanation: The ufuncs for basic arithmetic operations are add, subtract, divide, multiply, and exp (exponential). However, NumPy uses operator overloading so that we can use mathematical operators (+, -, /, *, and **) directly:
End of explanation
ary = np.array([[1, 2, 3],
[4, 5, 6]])
np.add.reduce(ary) # column sums
Explanation: Above, we have seen examples of binary ufuncs, which are ufuncs that take two arguments as an input. In addition, NumPy implements several useful unary ufuncs, such as log (natural logarithm), log10 (base-10 logarithm), and sqrt (square root).
Often, we want to compute the sum or product of array element along a given axis. For this purpose, we can use a ufunc's reduce operation. By default, reduce applies an operation along the first axis (axis=0). In the case of a two-dimensional array, we can think of the first axis as the rows of a matrix. Thus, adding up elements along rows yields the column sums of that matrix as shown below:
End of explanation
np.add.reduce(ary, axis=1) # row sums
Explanation: To compute the row sums of the array above, we can specify axis=1:
End of explanation
ary.sum(axis=0) # column sums
Explanation: While reduce is the more general operation, NumPy also provides shorthands for common reductions such as product and sum. For example, sum(axis=0) is equivalent to add.reduce:
End of explanation
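Closely related to reduce is accumulate, which keeps all intermediate results of the reduction; cumsum is the corresponding shorthand for add.accumulate. A brief sketch:
import numpy as np
ary = np.array([[1, 2, 3],
                [4, 5, 6]])
np.add.accumulate(ary, axis=0)  # running column sums: [[1, 2, 3], [5, 7, 9]]
ary.cumsum(axis=0)              # same result via the cumsum shorthand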
ary.sum()
Explanation: As a word of caution, keep in mind that product and sum both compute the product or sum of the entire array if we do not specify an axis:
End of explanation
ary1 = np.array([1, 2, 3])
ary2 = np.array([4, 5, 6])
ary1 + ary2
Explanation: Other useful functions (not all of them are ufuncs in the strict sense, but all of them operate on arrays element- or axis-wise) are:
mean (computes arithmetic average)
std (computes the standard deviation)
var (computes variance)
np.sort (sorts an array)
np.argsort (returns indices that would sort an array)
np.min (returns the minimum value of an array)
np.max (returns the maximum value of an array)
np.argmin (returns the index of the minimum value)
np.argmax (returns the index of the maximum value)
array_equal (checks if two arrays have the same shape and elements)
Broadcasting
A topic we glanced over in the previous section is broadcasting. Broadcasting allows us to perform vectorized operations between two arrays even if their dimensions do not match by creating implicit multidimensional grids. You already learned about ufuncs in the previous section where we performed element-wise addition between a scalar and a multidimensional array, which is just one example of broadcasting.
Naturally, we can also perform element-wise operations between arrays of equal dimensions:
End of explanation
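To make the broadcasting rule a bit more tangible, here is a small sketch in which a column vector and a row vector are implicitly stretched to a common (3, 3) grid:
import numpy as np
col = np.array([[0], [10], [20]])   # shape (3, 1)
row = np.array([1, 2, 3])           # shape (3,), treated like (1, 3)
col + row                           # broadcasts to shape (3, 3)
# array([[ 1,  2,  3],
#        [11, 12, 13],
#        [21, 22, 23]])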
ary3 = np.array([[4, 5, 6],
[7, 8, 9]])
ary3 + ary1 # similarly, ary1 + ary3
Explanation: In contrast to what we are used to from linear algebra, we can also add arrays of different shapes. In this example, we add a one-dimensional array to a two-dimensional array, where NumPy creates an implicit multidimensional grid from the one-dimensional array ary1:
End of explanation
try:
ary3 + np.array([1, 2])
except ValueError as e:
print('ValueError:', e)
Explanation: Keep in mind though that the number of elements along the explicit axes and the implicit grid have to match so that NumPy can perform a sensible operation. For instance, the following example raises a ValueError, because NumPy tries to line up the 3-element rows of the left array with the 2-element array on the right, which is incompatible:
End of explanation
ary3 + np.array([[1], [2]])
np.array([[1], [2]]) + ary3
Explanation: So, if we want to add the 2-element array to the columns in ary3, the 2-element array must have two elements along its first axis as well:
End of explanation
ary = np.array([[1, 2, 3],
[4, 5, 6]])
first_row = ary[0]
first_row += 99
ary
Explanation: Advanced Indexing -- Memory Views and Copies
In the previous sections, we have used basic indexing and slicing routines. It is important to note that basic integer-based indexing and slicing create so-called views of NumPy arrays in memory. Working with views can be highly desirable since it avoids making unnecessary copies of arrays to save memory resources. To illustrate the concept of memory views, let us walk through a simple example where we access the first row in an array, assign it to a variable, and modify that variable:
End of explanation
ary = np.array([[1, 2, 3],
[4, 5, 6]])
first_row = ary[:1]
first_row += 99
ary
ary = np.array([[1, 2, 3],
[4, 5, 6]])
center_col = ary[:, 1]
center_col += 99
ary
Explanation: As we can see in the example above, changing the value of first_row also affected the original array. The reason for this is that ary[0] created a view of the first row in ary, and its elements were then incremented by 99. The same concept applies to slicing operations:
End of explanation
second_row = ary[1].copy()
second_row += 99
ary
Explanation: If we are working with NumPy arrays, it is always important to be aware that slicing creates views -- sometimes this is desirable since it can speed up our code by avoiding the creation of unnecessary copies in memory. However, in certain scenarios we want to force a copy of an array; we can do this via the copy method as shown below:
End of explanation
ary = np.array([[1, 2, 3],
[4, 5, 6]])
first_row = ary[:1]
first_row += 99
ary
np.may_share_memory(first_row, ary)
second_row = ary[1].copy()
second_row += 99
ary
np.may_share_memory(second_row, ary)
Explanation: One way to check if two arrays might share memory is to use the may_share_memory function from NumPy. However, be aware that it uses a heuristic and can return false negatives or false positives. The code snippet shows an example of may_share_memory applied to the view (first_row) and copy (second_row) of the array elements from ary:
End of explanation
ary = np.array([[1, 2, 3],
[4, 5, 6]])
ary[:, [0, 2]] # first and last column
this_is_a_copy = ary[:, [0, 2]]
this_is_a_copy += 99
ary
ary[:, [2, 0]] # last and first column (reversed order)
Explanation: In addition to basic single-integer indexing and slicing operations, NumPy supports advanced indexing routines called fancy indexing. Via fancy indexing, we can use tuple or list objects of non-contiguous integer indices to return desired array elements. Since fancy indexing can be performed with non-contiguous sequences, it cannot return a view -- a contiguous slice from memory. Thus, fancy indexing always returns a copy of an array -- it is important to keep that in mind. The following code snippets show some fancy indexing examples:
End of explanation
ary = np.array([[1, 2, 3],
[4, 5, 6]])
greater3_mask = ary > 3
greater3_mask
Explanation: Finally, we can also use Boolean masks for indexing -- that is, arrays of True and False values. Consider the following example, where we return all values in the array that are greater than 3:
End of explanation
ary[greater3_mask]
Explanation: Using these masks, we can select elements given our desired criteria:
End of explanation
ary[(ary > 3) & (ary % 2 == 0)]
Explanation: We can also chain different selection criteria using the logical and operator '&' or the logical or operator '|'. The example below demonstrates how we can select array elements that are greater than 3 and divisible by 2:
End of explanation
np.random.seed(123)
np.random.rand(3)
Explanation: Note that indexing using Boolean arrays is also considered "fancy indexing" and thus returns a copy of the array.
Random Number Generators
In machine learning and deep learning, we often have to generate arrays of random numbers -- for example, the initial values of our model parameters before optimization. NumPy has a random subpackage to create random numbers and samples from a variety of distributions conveniently. Again, I encourage you to browse through the numpy.random documentation for a more comprehensive list of functions for random sampling.
To provide a brief overview of the pseudo-random number generators that we will use most commonly, let's start with drawing a random sample from a uniform distribution:
End of explanation
rng1 = np.random.RandomState(seed=123)
rng1.rand(3)
Explanation: In the code snippet above, we first seeded NumPy's random number generator. Then, we drew three random samples from a uniform distribution via random.rand in the half-open interval [0, 1). I highly recommend the seeding step in practical applications as well as in research projects, since it ensures that our results are reproducible. If we run our code sequentially -- for example, if we execute a Python script -- it should be sufficient to seed the random number generator only once at the beginning to enforce reproducible outcomes between different runs. However, it is often useful to create separate RandomState objects for various parts of our code, so that we can test methods or functions reliably in unit tests. Working with multiple, separate RandomState objects can also be useful if we run our code in non-sequential order -- for example, if we are experimenting with our code in interactive sessions or Jupyter Notebook environments.
The example below shows how we can use a RandomState object to create the same results that we obtained via np.random.rand in the previous code snippet:
End of explanation
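A minimal sketch of such separate generator objects -- two RandomState instances seeded independently produce their own reproducible streams without interfering with each other:
import numpy as np
rng_a = np.random.RandomState(seed=1)
rng_b = np.random.RandomState(seed=2)
rng_a.rand(2)  # stream A
rng_b.rand(2)  # stream B, unaffected by how often stream A is used
rng_a.rand(2)  # continues stream A deterministically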
rng2 = np.random.RandomState(seed=123)
z_scores = rng2.randn(100, 2)
Explanation: Another useful function that we will often use in practice is randn, which returns a random sample of floats from a standard normal distribution $N(\mu, \sigma^2)$ with zero mean ($\mu = 0$) and unit variance ($\sigma^2 = 1$). The example below creates a two-dimensional array of such z-scores:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(z_scores[:, 0], z_scores[:, 1])
#plt.savefig('images/numpy-intro/random_1.png', dpi=600)
plt.show()
Explanation: NumPy's random functions rand and randn take an arbitrary number of integer arguments, where each integer argument specifies the number of elements along a given axis -- the z_scores array should now refer to an array of 100 rows and two columns. Let us now visualize what our random sample looks like using matplotlib:
End of explanation
rng3 = np.random.RandomState(seed=123)
scores = 2. * rng3.randn(100, 2) + 5.
plt.scatter(scores[:, 0], scores[:, 1])
#plt.savefig('images/numpy-intro/random_2.png', dpi=600)
plt.show()
Explanation: If we want to draw a random sample from a non-standard normal distribution, we can simply add a scalar value to the array elements to shift the mean of the sample, and we can multiply the sample by a scalar to change its standard deviation. The following code snippet will change the properties of our random sample as if it had been drawn from a normal distribution $N(5, 4)$, that is, with mean 5 and variance 4:
End of explanation
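As a rough sanity check of this scaling, the sample statistics should come out close to the chosen mean and standard deviation (only approximately, since we draw a finite sample of 100 points):
import numpy as np
rng = np.random.RandomState(seed=123)
scores = 2. * rng.randn(100, 2) + 5.
scores.mean(axis=0)  # close to [5., 5.]
scores.std(axis=0)   # close to [2., 2.]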
ary1d = np.array([1, 2, 3, 4, 5, 6])
ary2d_view = ary1d.reshape(2, 3)
ary2d_view
np.may_share_memory(ary2d_view, ary1d)
Explanation: Note that in the example above, we multiplied the z-scores by a standard deviation of 2 -- the standard deviation of a sample is the square root of its variance $\sigma^2$. Also, notice that all elements in the array were updated when we multiplied it by a scalar or added a scalar; these are exactly the element-wise ufunc and broadcasting operations we discussed earlier.
Reshaping Arrays
In practice, we often run into situations where existing arrays do not have the right shape to perform certain computations. As you might remember from the beginning of this appendix, the size of NumPy arrays is fixed. Fortunately, this does not mean that we have to create new arrays and copy values from the old array to the new one if we want arrays of different shapes -- the size is fixed, but the shape is not. NumPy provides a reshape method that allows us to obtain a view of an array with a different shape.
For example, we can reshape a one-dimensional array into a two-dimensional one using reshape as follows:
End of explanation
ary1d.reshape(2, -1)
ary1d.reshape(-1, 2)
Explanation: While we generally need to specify the desired number of elements along each axis, we must make sure that the reshaped array has the same total number of elements as the original one. However, we do not need to specify the number of elements for every axis; NumPy is smart enough to figure out how many elements to put along an axis if only one axis is left unspecified (by using the placeholder -1):
End of explanation
ary2d = np.array([[1, 2, 3],
[4, 5, 6]])
ary2d.reshape(-1)
Explanation: We can, of course, also use reshape to flatten an array:
End of explanation
ary2d.ravel()
Explanation: Note that NumPy also has a shorthand for that called ravel:
End of explanation
np.may_share_memory(ary2d.flatten(), ary2d)
np.may_share_memory(ary2d.ravel(), ary2d)
Explanation: A function related to ravel is flatten. In contrast to ravel, flatten returns a copy, though:
End of explanation
ary = np.array([1, 2, 3])
# stack along the first axis
np.concatenate((ary, ary))
ary = np.array([[1, 2, 3]])
# stack along the first axis (here: rows)
np.concatenate((ary, ary), axis=0)
# stack along the second axis (here: column)
np.concatenate((ary, ary), axis=1)
Explanation: Sometimes, we are interested in merging different arrays. Unfortunately, there is no efficient way to do this without creating a new array, since NumPy arrays have a fixed size. While combining arrays should be avoided if possible -- for reasons of computational efficiency -- it is sometimes necessary. To combine two or more array objects, we can use NumPy's concatenate function as shown in the following examples:
End of explanation
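For the common special cases of row- and column-wise concatenation, NumPy also offers the vstack and hstack convenience functions; a quick sketch:
import numpy as np
ary = np.array([[1, 2, 3]])
np.vstack((ary, ary))  # same as concatenate(..., axis=0)
np.hstack((ary, ary))  # same as concatenate(..., axis=1) for 2D arrays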
row_vector = np.array([1, 2, 3])
row_vector
Explanation: Linear Algebra with NumPy Arrays
Most of the operations in machine learning and deep learning are based on concepts from linear algebra. In this section, we will take a look how to perform basic linear algebra operations using NumPy arrays.
I want to mention that there is also a special matrix type in NumPy. NumPy matrix objects are analogous to NumPy arrays but are restricted to two dimensions. Also, matrices define certain operations differently than arrays; for instance, the * operator performs matrix multiplication instead of element-wise multiplication. However, NumPy matrix is less popular in the science community compared to the more general array data structure. Since we are also going to work with arrays that have more than two dimensions (for example, when we are working with convolutional neural networks), we will not use NumPy matrix data structures in this book.
Intuitively, we can think of one-dimensional NumPy arrays as data structures that represent row vectors:
End of explanation
column_vector = np.array([[1, 2, 3]]).reshape(-1, 1)
column_vector
Explanation: Similarly, we can use two-dimensional arrays to create column vectors:
End of explanation
row_vector[:, np.newaxis]
Explanation: Instead of reshaping a one-dimensional array into a two-dimensional one, we can simply add a new axis as shown below:
End of explanation
row_vector[:, None]
Explanation: Note that in this context, np.newaxis behaves like None:
End of explanation
matrix = np.array([[1, 2, 3],
[4, 5, 6]])
np.matmul(matrix, column_vector)
Explanation: All three approaches listed above, using reshape(-1, 1), np.newaxis, or None yield the same results -- all three approaches create views not copies of the row_vector array.
As we remember from the Linear Algebra appendix, we can think of a column vector as a matrix consisting of only one column. To perform matrix multiplication between matrices, we learned that the number of columns of the left matrix must match the number of rows of the matrix to the right. In NumPy, we can perform matrix multiplication via the matmul function:
End of explanation
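Since Python 3.5, the @ operator (PEP 465) provides an infix shorthand for matmul; a small sketch:
import numpy as np
matrix = np.array([[1, 2, 3],
                   [4, 5, 6]])
column_vector = np.array([[1], [2], [3]])
matrix @ column_vector  # equivalent to np.matmul(matrix, column_vector)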
np.matmul(matrix, row_vector)
Explanation: However, if we are working with matrices and vectors, NumPy can be quite forgiving if the dimensions of matrices and one-dimensional arrays do not match exactly -- thanks to broadcasting. The following example yields the same result as the matrix-column vector multiplication, except that it returns a one-dimensional array instead of a two-dimensional one:
End of explanation
np.matmul(row_vector, row_vector)
Explanation: Similarly, we can compute the dot-product between two vectors (here: the dot product of the vector with itself, which equals its squared Euclidean norm):
End of explanation
np.dot(row_vector, row_vector)
np.dot(matrix, row_vector)
np.dot(matrix, column_vector)
Explanation: NumPy has a special dot function that behaves similar to matmul on pairs of one- or two-dimensional arrays -- its underlying implementation is different though, and one or the other can be slightly faster on specific machines and versions of BLAS:
End of explanation
matrix = np.array([[1, 2, 3],
[4, 5, 6]])
matrix.transpose()
Explanation: Similar to the examples above we can use matmul or dot to multiply two matrices (here: two-dimensional arrays). In this context, NumPy arrays have a handy transpose method to transpose matrices if necessary:
End of explanation
np.matmul(matrix, matrix.transpose())
Explanation:
End of explanation
matrix.T
Explanation: While transpose can be annoyingly verbose for implementing linear algebra operations -- think of PEP8's 80 character per line recommendation -- NumPy has a shorthand for that: T:
End of explanation |
5,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Principal Component Analysis in Shogun
By Abhijeet Kislay (GitHub ID
Step1: Some Formal Background (Skip if you just want code examples)
PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.
In machine learning problems data is often high dimensional - images, bag-of-word descriptions etc. In such cases we cannot expect the training data to densely populate the space, meaning that there will be large parts in which little is known about the data. Hence it is expected that only a small number of directions are relevant for describing the data to a reasonable accuracy.
The data vectors may be very high dimensional, they will therefore typically lie closer to a much lower dimensional 'manifold'.
Here we concentrate on linear dimensional reduction techniques. In this approach a high dimensional datapoint $\mathbf{x}$ is 'projected down' to a lower dimensional vector $\mathbf{y}$ by
Step2: Step 2
Step3: Step 3
Step4: Step 5
Step5: In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.
It turns out that the eigenvector with the $highest$ eigenvalue is the $principle$ $component$ of the data set.
Form the matrix $\mathbf{E}=[\mathbf{e}^1,...,\mathbf{e}^M].$
Here $\text{M}$ represents the target dimension of our final projection
Step6: Step 6
Step7: Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (from next example). It has been done manually here to show the exhaustive nature of Principal Component Analysis.
Step 7
Step8: The new data is plotted below
Step9: PCA on a 3d data.
Step1
Step10: Step 2
Step11: Step 3 & Step 4
Step12: Steps 5
Step13: Step 7
Step15: PCA Performance
Uptill now, we were using the EigenValue Decomposition method to compute the transformation matrix$\text{(N>D)}$ but for the next example $\text{(N<D)}$ we will be using Singular Value Decomposition.
Practical Example
Step16: Lets have a look on the data
Step17: Represent every image $I_i$ as a vector $\Gamma_i$
Step18: Step 2
Step19: Step 3 & Step 4
Step20: These 20 eigenfaces are not sufficient for a good image reconstruction. Having more eigenvectors gives us the most flexibility in the number of faces we can reconstruct. Though we are adding vectors with low variance, they are in directions of change nonetheless, and an external image that is not in our database could in fact need these eigenvectors to get even relatively close to it. But at the same time we must also keep in mind that adding excessive eigenvectors results in addition of little or no variance, slowing down the process.
Clearly a tradeoff is required.
We here set for M=100.
Step 5
Step21: Step 7
Step22: Recognition part.
In our face recognition process using the Eigenfaces approach, in order to recognize an unseen image, we proceed with the same preprocessing steps as applied to the training images.
Test images are represented in terms of eigenface coefficients by projecting them into face space$\text{(eigenspace)}$ calculated during training. Test sample is recognized by measuring the similarity distance between the test sample and all samples in the training. The similarity measure is a metric of distance calculated between two vectors. Traditional Eigenface approach utilizes $\text{Euclidean distance}$.
Step23: Here we have to project our training image as well as the test image on the PCA subspace.
The Eigenfaces method then performs face recognition by
Step24: Shogun's way of doing things | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all shogun classes
from shogun import *
import shogun as sg
Explanation: Principal Component Analysis in Shogun
By Abhijeet Kislay (GitHub ID: <a href='https://github.com/kislayabhi'>kislayabhi</a>)
This notebook is about finding Principal Components (<a href="http://en.wikipedia.org/wiki/Principal_component_analysis">PCA</a>) of data (<a href="http://en.wikipedia.org/wiki/Unsupervised_learning">unsupervised</a>) in Shogun. Its <a href="http://en.wikipedia.org/wiki/Dimensionality_reduction">dimensional reduction</a> capabilities are further utilised to show its application in <a href="http://en.wikipedia.org/wiki/Data_compression">data compression</a>, image processing and <a href="http://en.wikipedia.org/wiki/Facial_recognition_system">face recognition</a>.
End of explanation
#number of data points.
n=100
#generate a random 2d line(y1 = mx1 + c)
m = random.randint(1,10)
c = random.randint(1,10)
x1 = random.random_integers(-20,20,n)
y1=m*x1+c
#generate the noise.
noise=random.random_sample([n]) * random.random_integers(-35,35,n)
#make the noise orthogonal to the line y=mx+c and add it.
x=x1 + noise*m/sqrt(1+square(m))
y=y1 + noise/sqrt(1+square(m))
twoD_obsmatrix=array([x,y])
#to visualise the data we must plot it.
rcParams['figure.figsize'] = 7, 7
figure,axis=subplots(1,1)
xlim(-50,50)
ylim(-50,50)
axis.plot(twoD_obsmatrix[0,:],twoD_obsmatrix[1,:],'o',color='green',markersize=6)
#the line from which we generated the data is plotted in red
axis.plot(x1[:],y1[:],linewidth=0.3,color='red')
title('One-Dimensional sub-space with noise')
xlabel("x axis")
_=ylabel("y axis")
Explanation: Some Formal Background (Skip if you just want code examples)
PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.
In machine learning problems data is often high dimensional - images, bag-of-word descriptions etc. In such cases we cannot expect the training data to densely populate the space, meaning that there will be large parts in which little is known about the data. Hence it is expected that only a small number of directions are relevant for describing the data to a reasonable accuracy.
The data vectors may be very high dimensional, they will therefore typically lie closer to a much lower dimensional 'manifold'.
Here we concentrate on linear dimensional reduction techniques. In this approach a high dimensional datapoint $\mathbf{x}$ is 'projected down' to a lower dimensional vector $\mathbf{y}$ by:
$$\mathbf{y}=\mathbf{F}\mathbf{x}+\text{const}.$$
where the matrix $\mathbf{F}\in\mathbb{R}^{\text{M}\times \text{D}}$, with $\text{M}<\text{D}$. Here $\text{M}=\dim(\mathbf{y})$ and $\text{D}=\dim(\mathbf{x})$.
From the above scenario, we assume that
The number of principal components to use is $\text{M}$.
The dimension of each data point is $\text{D}$.
The number of data points is $\text{N}$.
We express the approximation for datapoint $\mathbf{x}^n$ as:$$\mathbf{x}^n \approx \mathbf{c} + \sum\limits_{i=1}^{\text{M}}y_i^n \mathbf{b}^i \equiv \tilde{\mathbf{x}}^n.$$
* Here the vector $\mathbf{c}$ is a constant and defines a point in the lower dimensional space.
* The $\mathbf{b}^i$ define vectors in the lower dimensional space (also known as 'principal component coefficients' or 'loadings').
* The $y_i^n$ are the low dimensional co-ordinates of the data.
Our motive is to find the reconstruction $\tilde{\mathbf{x}}^n$ given the lower dimensional representation $\mathbf{y}^n$(which has components $y_i^n,i = 1,...,\text{M})$. For a data space of dimension $\dim(\mathbf{x})=\text{D}$, we hope to accurately describe the data using only a small number $(\text{M}\ll \text{D})$ of coordinates of $\mathbf{y}$.
To determine the best lower dimensional representation it is convenient to use the square distance error between $\mathbf{x}$ and its reconstruction $\tilde{\mathbf{x}}$:$$\text{E}(\mathbf{B},\mathbf{Y},\mathbf{c})=\sum\limits_{n=1}^{\text{N}}\sum\limits_{i=1}^{\text{D}}[x_i^n - \tilde{x}_i^n]^2.$$
* Here the basis vectors are defined as $\mathbf{B} = [\mathbf{b}^1,...,\mathbf{b}^\text{M}]$ (defining $[\text{B}]_{i,j} = b_i^j$).
* Corresponding low dimensional coordinates are defined as $\mathbf{Y} = [\mathbf{y}^1,...,\mathbf{y}^\text{N}].$
* Also, $x_i^n$ and $\tilde{x}_i^n$ represents the coordinates of the data points for the original and the reconstructed data respectively.
* The bias $\mathbf{c}$ is given by the mean of the data $\sum_n\mathbf{x}^n/\text{N}$.
Therefore, for simplification purposes we centre our data, so as to set $\mathbf{c}$ to zero. Now we concentrate on finding the optimal basis $\mathbf{B}$( which has the components $\mathbf{b}^i, i=1,...,\text{M} $).
Deriving the optimal linear reconstruction
To find the best basis vectors $\mathbf{B}$ and corresponding low dimensional coordinates $\mathbf{Y}$, we may minimize the sum of squared differences between each vector $\mathbf{x}$ and its reconstruction $\tilde{\mathbf{x}}$:
$\text{E}(\mathbf{B},\mathbf{Y}) = \sum\limits_{n=1}^{\text{N}}\sum\limits_{i=1}^{\text{D}}\left[x_i^n - \sum\limits_{j=1}^{\text{M}}y_j^nb_i^j\right]^2 = \text{trace} \left( (\mathbf{X}-\mathbf{B}\mathbf{Y})^T(\mathbf{X}-\mathbf{B}\mathbf{Y}) \right)$
where $\mathbf{X} = [\mathbf{x}^1,...,\mathbf{x}^\text{N}].$
Considering the above equation under the orthonormality constraint $\mathbf{B}^T\mathbf{B} = \mathbf{I}$ (i.e the basis vectors are mutually orthogonal and of unit length), we differentiate it w.r.t $y_k^n$. The squared error $\text{E}(\mathbf{B},\mathbf{Y})$ therefore has zero derivative when:
$y_k^n = \sum_i b_i^kx_i^n$
By substituting this solution in the above equation, the objective becomes
$\text{E}(\mathbf{B}) = (\text{N}-1)\left[\text{trace}(\mathbf{S}) - \text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right)\right],$
where $\mathbf{S}$ is the sample covariance matrix of the data.
To minimise equation under the constraint $\mathbf{B}^T\mathbf{B} = \mathbf{I}$, we use a set of Lagrange Multipliers $\mathbf{L}$, so that the objective is to minimize:
$-\text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right)+\text{trace}\left(\mathbf{L}\left(\mathbf{B}^T\mathbf{B} - \mathbf{I}\right)\right).$
Since the constraint is symmetric, we can assume that $\mathbf{L}$ is also symmetric. Differentiating with respect to $\mathbf{B}$ and equating to zero we obtain that at the optimum
$\mathbf{S}\mathbf{B} = \mathbf{B}\mathbf{L}$.
This is a form of eigen-equation so that a solution is given by taking $\mathbf{L}$ to be diagonal and $\mathbf{B}$ as the matrix whose columns are the corresponding eigenvectors of $\mathbf{S}$. In this case,
$\text{trace}\left(\mathbf{S}\mathbf{B}\mathbf{B}^T\right) =\text{trace}(\mathbf{L}),$
which is the sum of the eigenvalues corresponding to the eigenvectors forming $\mathbf{B}$. Since we wish to minimise $\text{E}(\mathbf{B})$, we take the eigenvectors with the largest corresponding eigenvalues.
Whilst the solution to this eigen-problem is unique, this only serves to define the solution subspace since one may rotate and scale $\mathbf{B}$ and $\mathbf{Y}$ such that the value of the squared loss is exactly the same. The justification for choosing the non-rotated eigen solution is given by the additional requirement that the principal components corresponds to directions of maximal variance.
Maximum variance criterion
We aim to find that single direction $\mathbf{b}$ such that, when the data is projected onto this direction, the variance of this projection is maximal amongst all possible such projections.
The projection of a datapoint onto a direction $\mathbf{b}$ is $\mathbf{b}^T\mathbf{x}^n$ for a unit length vector $\mathbf{b}$. Hence the sum of squared projections is: $$\sum\limits_{n}\left(\mathbf{b}^T\mathbf{x}^n\right)^2 = \mathbf{b}^T\left[\sum\limits_{n}\mathbf{x}^n(\mathbf{x}^n)^T\right]\mathbf{b} = (\text{N}-1)\mathbf{b}^T\mathbf{S}\mathbf{b} = \lambda(\text{N} - 1)$$
which ignoring constants, is simply the negative of the equation for a single retained eigenvector $\mathbf{b}$(with $\mathbf{S}\mathbf{b} = \lambda\mathbf{b}$). Hence the optimal single $\text{b}$ which maximises the projection variance is given by the eigenvector corresponding to the largest eigenvalues of $\mathbf{S}.$ The second largest eigenvector corresponds to the next orthogonal optimal direction and so on. This explains why, despite the squared loss equation being invariant with respect to arbitrary rotation of the basis vectors, the ones given by the eigen-decomposition have the additional property that they correspond to directions of maximal variance. These maximal variance directions found by PCA are called the $\text{principal} $ $\text{directions}.$
There are two eigenvalue methods through which shogun can perform PCA namely
* Eigenvalue Decomposition Method.
* Singular Value Decomposition.
EVD vs SVD
The EVD viewpoint requires that one compute the eigenvalues and eigenvectors of the covariance matrix, which is the product of $\mathbf{X}\mathbf{X}^\text{T}$, where $\mathbf{X}$ is the data matrix. Since the covariance matrix is symmetric, the matrix is diagonalizable, and the eigenvectors can be normalized such that they are orthonormal:
$\mathbf{S}=\frac{1}{\text{N}-1}\mathbf{X}\mathbf{X}^\text{T},$
where the $\text{D}\times\text{N}$ matrix $\mathbf{X}$ contains all the data vectors: $\mathbf{X}=[\mathbf{x}^1,...,\mathbf{x}^\text{N}].$
Writing the $\text{D}\times\text{N}$ matrix of eigenvectors as $\mathbf{E}$ and the eigenvalues as an $\text{N}\times\text{N}$ diagonal matrix $\mathbf{\Lambda}$, the eigen-decomposition of the covariance $\mathbf{S}$ is
$\mathbf{X}\mathbf{X}^\text{T}\mathbf{E}=\mathbf{E}\mathbf{\Lambda}\Longrightarrow\mathbf{X}^\text{T}\mathbf{X}\mathbf{X}^\text{T}\mathbf{E}=\mathbf{X}^\text{T}\mathbf{E}\mathbf{\Lambda}\Longrightarrow\mathbf{X}^\text{T}\mathbf{X}\tilde{\mathbf{E}}=\tilde{\mathbf{E}}\mathbf{\Lambda},$
where we defined $\tilde{\mathbf{E}}=\mathbf{X}^\text{T}\mathbf{E}$. The final expression above represents the eigenvector equation for $\mathbf{X}^\text{T}\mathbf{X}.$ This is a matrix of dimensions $\text{N}\times\text{N}$ so that calculating the eigen-decomposition takes $\mathcal{O}(\text{N}^3)$ operations, compared with $\mathcal{O}(\text{D}^3)$ operations in the original high-dimensional space. We then can therefore calculate the eigenvectors $\tilde{\mathbf{E}}$ and eigenvalues $\mathbf{\Lambda}$ of this matrix more easily. Once found, we use the fact that the eigenvalues of $\mathbf{S}$ are given by the diagonal entries of $\mathbf{\Lambda}$ and the eigenvectors by
$\mathbf{E}=\mathbf{X}\tilde{\mathbf{E}}\mathbf{\Lambda}^{-1}$
On the other hand, applying SVD to the data matrix $\mathbf{X}$ follows like:
$\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}$
where $\mathbf{U}^\text{T}\mathbf{U}=\mathbf{I}_\text{D}$ and $\mathbf{V}^\text{T}\mathbf{V}=\mathbf{I}_\text{N}$ and $\mathbf{\Sigma}$ is a diagonal matrix of the (positive) singular values. We assume that the decomposition has ordered the singular values so that the upper left diagonal element of $\mathbf{\Sigma}$ contains the largest singular value.
Attempting to construct the covariance matrix $(\mathbf{X}\mathbf{X}^\text{T})$from this decomposition gives:
$\mathbf{X}\mathbf{X}^\text{T} = \left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)\left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)^\text{T}$
$\mathbf{X}\mathbf{X}^\text{T} = \left(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^\text{T}\right)\left(\mathbf{V}\mathbf{\Sigma}\mathbf{U}^\text{T}\right)$
and since $\mathbf{V}$ is an orthogonal matrix $\left(\mathbf{V}^\text{T}\mathbf{V}=\mathbf{I}\right),$
$\mathbf{X}\mathbf{X}^\text{T}=\left(\mathbf{U}\mathbf{\Sigma}^\mathbf{2}\mathbf{U}^\text{T}\right)$
Since it is in the form of an eigen-decomposition, the PCA solution given by performing the SVD decomposition of $\mathbf{X}$, for which the eigenvectors are then given by $\mathbf{U}$, and corresponding eigenvalues by the square of the singular values.
CPCA Class Reference (Shogun)
The CPCA class of Shogun inherits from the CPreprocessor class. Preprocessors are transformation functions that don't change the domain of the input features. Specifically, CPCA performs principal component analysis on the input vectors and keeps only the specified number of eigenvectors. On preprocessing, the stored covariance matrix is used to project vectors into eigenspace.
The performance of PCA depends on the algorithm chosen, which in turn depends on the situation at hand.
Our PCA preprocessor class provides 3 method options to compute the transformation matrix:
$\text{PCA(EVD)}$ sets $\text{PCAmethod == EVD}$ : Eigen Value Decomposition of Covariance Matrix $(\mathbf{XX^T}).$
The covariance matrix $\mathbf{XX^T}$ is first formed internally and then
its eigenvectors and eigenvalues are computed using QR decomposition of the matrix.
The time complexity of this method is $\mathcal{O}(D^3)$ and should be used when $\text{N > D.}$
$\text{PCA(SVD)}$ sets $\text{PCAmethod == SVD}$ : Singular Value Decomposition of feature matrix $\mathbf{X}$.
The transpose of feature matrix, $\mathbf{X^T}$, is decomposed using SVD. $\mathbf{X^T = UDV^T}.$
The matrix V in this decomposition contains the required eigenvectors and
the diagonal entries of the diagonal matrix D correspond to the non-negative
eigenvalues.The time complexity of this method is $\mathcal{O}(DN^2)$ and should be used when $\text{N < D.}$
$\text{PCA(AUTO)}$ sets $\text{PCAmethod == AUTO}$ : This mode automagically chooses one of the above modes for the user based on whether $\text{N>D}$ (chooses $\text{EVD}$) or $\text{N<D}$ (chooses $\text{SVD}$)
PCA on 2D data
Step 1: Get some data
We will generate the toy data by adding orthogonal noise to a set of points lying on an arbitrary 2d line. We expect PCA to recover this line, which is a one-dimensional linear sub-space.
End of explanation
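The EVD/SVD relationship sketched above can be verified in a few lines of plain NumPy (this is only an illustration on a toy matrix, independent of Shogun's preprocessor): the eigenvectors of the scatter matrix and the left singular vectors of the centered data matrix span the same principal directions.
import numpy as np
rng = np.random.RandomState(0)
X = rng.randn(3, 10)                      # D=3 dimensions, N=10 samples
Xc = X - X.mean(axis=1, keepdims=True)    # center the data
evals, evecs = np.linalg.eigh(Xc @ Xc.T)  # EVD of the (unnormalized) scatter matrix
U, s, Vt = np.linalg.svd(Xc)              # SVD of the centered data matrix
# eigenvalues of Xc Xc^T equal the squared singular values (up to ordering)
np.allclose(np.sort(evals), np.sort(s**2))          # True
# the leading eigenvector and the first left singular vector agree up to sign
np.allclose(np.abs(evecs[:, -1]), np.abs(U[:, 0]))  # True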
#convert the observation matrix into dense feature matrix.
train_features = features(twoD_obsmatrix)
#PCA(EVD) is choosen since N=100 and D=2 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = sg.transformer('PCA', method='EVD')
#since we are projecting down the 2d data, the target dim is 1. But here the exhaustive method is detailed by
#setting the target dimension to 2 to visualize both the eigen vectors.
#However, in future examples we will get rid of this step by implementing it directly.
preprocessor.put('target_dim', 2)
#Centralise the data by subtracting its mean from it.
preprocessor.fit(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get('mean_vector')
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
Explanation: Step 2: Subtract the mean.
For PCA to work properly, we must subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension. So, all the $x$ values have $\bar{x}$ subtracted, and all the $y$ values have $\bar{y}$ subtracted from them, where:$$\bar{\mathbf{x}} = \frac{\sum\limits_{i=1}^{n}x_i}{n}$$ $\bar{\mathbf{x}}$ denotes the mean of the $x_i^{'s}$
Shogun's way of doing things :
Preprocessor PCA performs principial component analysis on input feature vectors/matrices. It provides an interface to set the target dimension by $\text{put('target_dim', target_dim) method}.$ When the $\text{init()}$ method in $\text{PCA}$ is called with proper
feature matrix $\text{X}$ (with say $\text{N}$ number of vectors and $\text{D}$ feature dimension), a transformation matrix is computed and stored internally. It inherently also centralizes the data by subtracting the mean from it.
End of explanation
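For illustration only (plain NumPy, not the Shogun API), centering an observation matrix manually amounts to subtracting the per-dimension mean:
import numpy as np
X = np.array([[1., 2., 3., 4.],
              [10., 20., 30., 40.]])      # 2 dimensions, 4 observations
mean_vec = X.mean(axis=1, keepdims=True)  # mean of each dimension
X_centered = X - mean_vec                 # every dimension now has zero mean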
#Get the eigenvectors(We will get two of these since we set the target to 2).
E = preprocessor.get('transformation_matrix')
#Get all the eigenvalues returned by PCA.
eig_value=preprocessor.get('eigenvalues_vector')
e1 = E[:,0]
e2 = E[:,1]
eig_value1 = eig_value[0]
eig_value2 = eig_value[1]
Explanation: Step 3: Calculate the covariance matrix
To understand the relationship between 2 dimension we define $\text{covariance}$. It is a measure to find out how much the dimensions vary from the mean $with$ $respect$ $to$ $each$ $other.$$$cov(X,Y)=\frac{\sum\limits_{i=1}^{n}(X_i-\bar{X})(Y_i-\bar{Y})}{n-1}$$
A useful way to get all the possible covariance values between all the different dimensions is to calculate them all and put them in a matrix.
Example: For a 3d dataset with usual dimensions of $x,y$ and $z$, the covariance matrix has 3 rows and 3 columns, and the values are this:
$$\mathbf{S} = \begin{pmatrix}cov(x,x)&cov(x,y)&cov(x,z)\\cov(y,x)&cov(y,y)&cov(y,z)\\cov(z,x)&cov(z,y)&cov(z,z)\end{pmatrix}$$
Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix
Find the eigenvectors $e^1,....e^M$ of the covariance matrix $\mathbf{S}$.
Shogun's way of doing things :
Step 3 and Step 4 are directly implemented by the PCA preprocessor of the Shogun toolbox. The transformation matrix is essentially a $\text{D}$$\times$$\text{M}$ matrix, the columns of which correspond to the eigenvectors of the covariance matrix $(\text{X}\text{X}^\text{T})$ having the top $\text{M}$ eigenvalues.
End of explanation
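As a quick stand-alone illustration of this step in plain NumPy (a sketch, not the Shogun workflow), the covariance matrix of an observation matrix with dimensions in rows can be cross-checked against np.cov:
import numpy as np
rng = np.random.RandomState(0)
X = rng.randn(2, 100)                      # 2 dimensions, 100 observations
Xc = X - X.mean(axis=1, keepdims=True)
S_manual = (Xc @ Xc.T) / (X.shape[1] - 1)  # covariance entries computed manually
np.allclose(S_manual, np.cov(X))           # True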
#find out the M eigenvectors corresponding to top M number of eigenvalues and store it in E
#Here M=1
#slope of e1 & e2
m1=e1[1]/e1[0]
m2=e2[1]/e2[0]
#generate the two lines
x1=range(-50,50)
x2=x1
y1=multiply(m1,x1)
y2=multiply(m2,x2)
#plot the data along with those two eigenvectors
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x1[:], y1[:], linewidth=0.7, color='black')
axis.plot(x2[:], y2[:], linewidth=0.7, color='blue')
p1 = Rectangle((0, 0), 1, 1, fc="black")
p2 = Rectangle((0, 0), 1, 1, fc="blue")
legend([p1,p2],["1st eigenvector","2nd eigenvector"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Eigenvectors selection')
xlabel("x axis")
_=ylabel("y axis")
Explanation: Step 5: Choosing components and forming a feature vector.
Lets visualize the eigenvectors and decide upon which to choose as the $principle$ $component$ of the data set.
End of explanation
#The eigenvector corresponding to higher eigenvalue(i.e eig_value2) is choosen (i.e e2).
#E is the feature vector.
E=e2
Explanation: In the above figure, the blue line is a good fit of the data. It shows the most significant relationship between the data dimensions.
It turns out that the eigenvector with the $highest$ eigenvalue is the $principal$ $component$ of the data set.
Form the matrix $\mathbf{E}=[\mathbf{e}^1,...,\mathbf{e}^M].$
Here $\text{M}$ represents the target dimension of our final projection
End of explanation
#transform all 2-dimensional feature matrices to target-dimensional approximations.
yn=preprocessor.transform(train_features).get('feature_matrix')
#Since, here we are manually trying to find the eigenvector corresponding to the top eigenvalue.
#The 2nd row of yn is choosen as it corresponds to the required eigenvector e2.
yn1=yn[1,:]
Explanation: Step 6: Projecting the data to its Principal Components.
This is the final step in PCA. Once we have chosen the components (eigenvectors) that we wish to keep in our data and formed a feature vector, we simply take the vector and multiply it on the left of the original dataset.
The lower dimensional representation of each data point $\mathbf{x}^n$ is given by
$\mathbf{y}^n=\mathbf{E}^T(\mathbf{x}^n-\mathbf{m})$
Here the $\mathbf{E}^T$ is the matrix with the eigenvectors in rows, with the most significant eigenvector at the top. The mean adjusted data, with data items in each column, with each row holding a seperate dimension is multiplied to it.
Shogun's way of doing things :
Step 6 can be performed by shogun's PCA preprocessor as follows:
The transformation matrix that we got after $\text{init()}$ is used to transform all $\text{D-dim}$ feature matrices (with $\text{D}$ feature dimensions) supplied, via $\text{apply_to_feature_matrix methods}$.This transformation outputs the $\text{M-Dim}$ approximation of all these input vectors and matrices (where $\text{M}$ $\leq$ $\text{min(D,N)}$).
End of explanation
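To make the projection formula concrete, here is a hand-rolled NumPy sketch of $\mathbf{y}^n=\mathbf{E}^T(\mathbf{x}^n-\mathbf{m})$; it is only an illustration and agrees with the preprocessor output at most up to ordering and sign of the eigenvectors:
import numpy as np
X = np.random.RandomState(1).randn(2, 50)  # toy 2-D data, 50 points
m = X.mean(axis=1, keepdims=True)
evals, E = np.linalg.eigh(np.cov(X))       # columns of E are eigenvectors
E = E[:, ::-1]                             # reorder by decreasing eigenvalue
Y = E.T @ (X - m)                          # y^n = E^T (x^n - m) for every point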
x_new=(yn1 * E[0]) + tile(mean_x,[n,1]).T[0]
y_new=(yn1 * E[1]) + tile(mean_y,[n,1]).T[0]
Explanation: Step 5 and Step 6 can be applied directly with Shogun's PCA preprocessor (from next example). It has been done manually here to show the exhaustive nature of Principal Component Analysis.
Step 7: Form the approximate reconstruction of the original data $\mathbf{x}^n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\tilde{\mathbf{x}}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
End of explanation
figure, axis = subplots(1,1)
xlim(-50, 50)
ylim(-50, 50)
axis.plot(x[:], y[:],'o',color='green', markersize=5, label="green")
axis.plot(x_new, y_new, 'o', color='blue', markersize=5, label="red")
title('PCA Projection of 2D data into 1D subspace')
xlabel("x axis")
ylabel("y axis")
#add some legend for information
p1 = Rectangle((0, 0), 1, 1, fc="r")
p2 = Rectangle((0, 0), 1, 1, fc="g")
p3 = Rectangle((0, 0), 1, 1, fc="b")
legend([p1,p2,p3],["normal projection","2d data","1d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
#plot the projections in red:
for i in range(n):
axis.plot([x[i],x_new[i]],[y[i],y_new[i]] , color='red')
Explanation: The new data is plotted below
End of explanation
rcParams['figure.figsize'] = 8,8
#number of points
n=100
#generate the data
a=random.randint(1,20)
b=random.randint(1,20)
c=random.randint(1,20)
d=random.randint(1,20)
x1=random.random_integers(-20,20,n)
y1=random.random_integers(-20,20,n)
z1=-(a*x1+b*y1+d)/c
#generate the noise
noise=random.random_sample([n])*random.random_integers(-30,30,n)
#the normal unit vector is [a,b,c]/magnitude
magnitude=sqrt(square(a)+square(b)+square(c))
normal_vec=array([a,b,c]/magnitude)
#add the noise orthogonally
x=x1+noise*normal_vec[0]
y=y1+noise*normal_vec[1]
z=z1+noise*normal_vec[2]
threeD_obsmatrix=array([x,y,z])
#to visualize the data, we must plot it.
from mpl_toolkits.mplot3d import Axes3D
fig = pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
#plot the noisy data generated by distorting a plane
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p2],["3d data"],loc='center left', bbox_to_anchor=(1, 0.5))
title('Two dimensional subspace with noise')
xx, yy = meshgrid(range(-30,30), range(-30,30))
zz=-(a * xx + b * yy + d) / c
Explanation: PCA on 3D data.
Step1: Get some data
We generate points from a plane and then add random noise orthogonal to it. The general equation of a plane is: $$\text{a}\mathbf{x}+\text{b}\mathbf{y}+\text{c}\mathbf{z}+\text{d}=0$$
End of explanation
#convert the observation matrix into dense feature matrix.
train_features = features(threeD_obsmatrix)
#PCA(EVD) is choosen since N=100 and D=3 (N>D).
#However we can also use PCA(AUTO) as it will automagically choose the appropriate method.
preprocessor = sg.transformer('PCA', method='EVD')
#If we set the target dimension to 2, Shogun would automagically preserve the required 2 eigenvectors(out of 3) according to their
#eigenvalues.
preprocessor.put('target_dim', 2)
preprocessor.fit(train_features)
#get the mean for the respective dimensions.
mean_datapoints=preprocessor.get('mean_vector')
mean_x=mean_datapoints[0]
mean_y=mean_datapoints[1]
mean_z=mean_datapoints[2]
Explanation: Step 2: Subtract the mean.
End of explanation
#get the required eigenvectors corresponding to top 2 eigenvalues.
E = preprocessor.get('transformation_matrix')
Explanation: Step 3 & Step 4: Calculate the eigenvectors of the covariance matrix
End of explanation
#This can be performed by shogun's PCA preprocessor as follows:
yn=preprocessor.transform(train_features).get('feature_matrix')
Explanation: Steps 5: Choosing components and forming a feature vector.
Since we performed PCA for a target $\dim = 2$ for the $3 \dim$ data, we are directly given
the two required eigenvectors in $\mathbf{E}$
E is automagically filled by setting target dimension = M. This is different from the 2d data example where we implemented this step manually.
Step 6: Projecting the data to its Principal Components.
End of explanation
new_data=dot(E,yn)
x_new=new_data[0,:]+tile(mean_x,[n,1]).T[0]
y_new=new_data[1,:]+tile(mean_y,[n,1]).T[0]
z_new=new_data[2,:]+tile(mean_z,[n,1]).T[0]
#all the above points lie on the same plane. To make it more clear we will plot the projection also.
fig=pyplot.figure()
ax=fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z,marker='o', color='g')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
ax.set_zlabel('z label')
legend([p1,p2,p3],["normal projection","3d data","2d projection"],loc='center left', bbox_to_anchor=(1, 0.5))
title('PCA Projection of 3D data into 2D subspace')
for i in range(100):
ax.scatter(x_new[i], y_new[i], z_new[i],marker='o', color='b')
ax.plot([x[i],x_new[i]],[y[i],y_new[i]],[z[i],z_new[i]],color='r')
Explanation: Step 7: Form the approximate reconstruction of the original data $\mathbf{x}^n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\tilde{\mathbf{x}}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
End of explanation
rcParams['figure.figsize'] = 10, 10
import os
def get_imlist(path):
Returns a list of filenames for all .pgm images in a directory
return [os.path.join(path,f) for f in os.listdir(path) if f.endswith('.pgm')]
#set path of the training images
path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/training/')
#set no. of rows that the images will be resized.
k1=100
#set no. of columns that the images will be resized.
k2=100
filenames = get_imlist(path_train)
filenames = array(filenames)
#n is total number of images that has to be analysed.
n=len(filenames)
Explanation: PCA Performance
Until now, we were using the EigenValue Decomposition method to compute the transformation matrix $\text{(N>D)}$, but for the next example $\text{(N<D)}$ we will be using Singular Value Decomposition.
Practical Example : Eigenfaces
The problem with the image representation we are given is its high dimensionality. Two-dimensional $\text{p} \times \text{q}$ grayscale images span a $\text{m=pq}$ dimensional vector space, so an image with $\text{100}\times\text{100}$ pixels lies in a $\text{10,000}$ dimensional image space already.
The question is, are all dimensions really useful for us?
$\text{Eigenfaces}$ are based on the dimensional reduction approach of $\text{Principal Component Analysis(PCA)}$. The basic idea is to treat each image as a vector in a high dimensional space. Then, $\text{PCA}$ is applied to the set of images to produce a new reduced subspace that captures most of the variability between the input images. The $\text{Principal Component Vectors}$ (eigenvectors of the sample covariance matrix) are called the $\text{Eigenfaces}$. Every input image can be represented as a linear combination of these eigenfaces by projecting the image onto the new eigenfaces space. Thus, we can perform the identification process by matching in this reduced space. An input image is transformed into the $\text{eigenspace,}$ and the nearest face is identified using a $\text{Nearest Neighbour approach.}$
Step 1: Get some data.
Here data means those Images which will be used for training purposes.
End of explanation
# we will be using this often to visualize the images out there.
def showfig(image):
imgplot=imshow(image, cmap='gray')
imgplot.axes.get_xaxis().set_visible(False)
imgplot.axes.get_yaxis().set_visible(False)
from PIL import Image
from scipy import misc
# to get a hang of the data, lets see some part of the dataset images.
fig = pyplot.figure()
title('The Training Dataset')
for i in range(49):
fig.add_subplot(7,7,i+1)
train_img=array(Image.open(filenames[i]).convert('L'))
train_img=misc.imresize(train_img, [k1,k2])
showfig(train_img)
Explanation: Let's have a look at the data:
End of explanation
#To form the observation matrix obs_matrix.
#read the 1st image.
train_img = array(Image.open(filenames[0]).convert('L'))
#resize it to k1 rows and k2 columns
train_img=misc.imresize(train_img, [k1,k2])
#since features accepts only data of float64 datatype, we do a type conversion
train_img=array(train_img, dtype='double')
#flatten it to make it a row vector.
train_img=train_img.flatten()
# repeat the above for all images and stack all those vectors together in a matrix
for i in range(1,n):
temp=array(Image.open(filenames[i]).convert('L'))
temp=misc.imresize(temp, [k1,k2])
temp=array(temp, dtype='double')
temp=temp.flatten()
train_img=vstack([train_img,temp])
#form the observation matrix
obs_matrix=train_img.T
Explanation: Represent every image $I_i$ as a vector $\Gamma_i$
End of explanation
train_features = features(obs_matrix)
preprocessor= sg.transformer('PCA', method='AUTO')
preprocessor.put('target_dim', 100)
preprocessor.fit(train_features)
mean=preprocessor.get('mean_vector')
Explanation: Step 2: Subtract the mean
It is very important that the face images $I_1,I_2,...,I_M$ are $centered$ and of the $same$ size
We observe here that the no. of $\dim$ for each image is far greater than no. of training images. This calls for the use of $\text{SVD}$.
Setting the $\text{PCA}$ in the $\text{AUTO}$ mode does this automagically according to the situation.
End of explanation
#get the required eigenvectors corresponding to top 100 eigenvalues
E = preprocessor.get('transformation_matrix')
#lets see how these eigenfaces/eigenvectors look like:
fig1 = pyplot.figure()
title('Top 20 Eigenfaces')
for i in range(20):
a = fig1.add_subplot(5,4,i+1)
eigen_faces=E[:,i].reshape([k1,k2])
showfig(eigen_faces)
Explanation: Step 3 & Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix.
End of explanation
#we perform the required dot product.
yn=preprocessor.transform(train_features).get('feature_matrix')
Explanation: These 20 eigenfaces are not sufficient for a good image reconstruction. Having more eigenvectors gives us the most flexibility in the number of faces we can reconstruct. Though we are adding vectors with low variance, they are in directions of change nonetheless, and an external image that is not in our database could in fact need these eigenvectors to get even relatively close to it. But at the same time we must also keep in mind that adding excessive eigenvectors results in addition of little or no variance, slowing down the process.
Clearly a tradeoff is required.
We here set for M=100.
Step 5: Choosing components and forming a feature vector.
Since we set target $\dim = 100$ for this $n \dim$ data, we are directly given the $100$ required eigenvectors in $\mathbf{E}$
E is automagically filled. This is different from the 2d data example where we implemented this step manually.
Step 6: Projecting the data to its Principal Components.
The lower dimensional representation of each data point $\mathbf{x}^n$ is given by $$\mathbf{y}^n=\mathbf{E}^T(\mathbf{x}^n-\mathbf{m})$$
End of explanation
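A quick sketch (not part of the original notebook) to check the stated projection formula against Shogun's output; it assumes obs_matrix, E, mean and yn from the preceding cells.
import numpy as np
# y^n = E^T (x^n - m), applied to every column of the observation matrix
yn_manual = np.dot(E.T, obs_matrix - mean.reshape(-1, 1))
# should print True if the transform is the plain projection described above
print(np.allclose(yn_manual, yn))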
re=tile(mean,[n,1]).T[0] + dot(E,yn)
#lets plot the reconstructed images.
fig2 = pyplot.figure()
title('Reconstructed Images from 100 eigenfaces')
for i in range(1,50):
re1 = re[:,i].reshape([k1,k2])
fig2.add_subplot(7,7,i)
showfig(re1)
Explanation: Step 7: Form the approximate reconstruction of the original image $I_n$
The approximate reconstruction of the original datapoint $\mathbf{x}^n$ is given by : $\mathbf{x}^n\approx\text{m}+\mathbf{E}\mathbf{y}^n$
End of explanation
#set path of the training images
path_train=os.path.join(SHOGUN_DATA_DIR, 'att_dataset/testing/')
test_files=get_imlist(path_train)
test_img=array(Image.open(test_files[0]).convert('L'))
rcParams.update({'figure.figsize': (3, 3)})
#we plot the test image , for which we have to identify a good match from the training images we already have
fig = pyplot.figure()
title('The Test Image')
showfig(test_img)
#We flatten out our test image just the way we have done for the other images
test_img=misc.imresize(test_img, [k1,k2])
test_img=array(test_img, dtype='double')
test_img=test_img.flatten()
#We centralise the test image by subtracting the mean from it.
test_f=test_img-mean
Explanation: Recognition part.
In our face recognition process using the Eigenfaces approach, in order to recognize an unseen image, we proceed with the same preprocessing steps as applied to the training images.
Test images are represented in terms of eigenface coefficients by projecting them into face space$\text{(eigenspace)}$ calculated during training. Test sample is recognized by measuring the similarity distance between the test sample and all samples in the training. The similarity measure is a metric of distance calculated between two vectors. Traditional Eigenface approach utilizes $\text{Euclidean distance}$.
End of explanation
#We have already projected our training images into pca subspace as yn.
train_proj = yn
#Projecting our test image into pca subspace
test_proj = dot(E.T, test_f)
Explanation: Here we have to project our training image as well as the test image on the PCA subspace.
The Eigenfaces method then performs face recognition by:
1. Projecting all training samples into the PCA subspace.
2. Projecting the query image into the PCA subspace.
3. Finding the nearest neighbour between the projected training images and the projected query image.
End of explanation
#To get Euclidean Distance as the distance measure use EuclideanDistance.
workfeat = features(mat(train_proj))
testfeat = features(mat(test_proj).T)
RaRb = sg.distance('EuclideanDistance')
RaRb.init(testfeat, workfeat)
#The distance between one test image w.r.t all the training is stacked in matrix d.
d=empty([n,1])
for i in range(n):
d[i]= RaRb.distance(0,i)
#The one having the minimum distance is found out
min_distance_index = d.argmin()
iden=array(Image.open(filenames[min_distance_index]))
title('Identified Image')
showfig(iden)
Explanation: Shogun's way of doing things:
Shogun uses CEuclideanDistance class to compute the familiar Euclidean distance for real valued features. It computes the square root of the sum of squared disparity between the corresponding feature dimensions of two data points.
$d(\mathbf{x},\mathbf{x'}) = \sqrt{\sum\limits_{i=0}^{n} |\mathbf{x}_i - \mathbf{x'}_i|^2}$
End of explanation |
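As a plain-numpy cross-check of the Shogun distance computation above (a sketch, not part of the original; it assumes train_proj and test_proj from the earlier cells, with one training sample per column):
import numpy as np
# Euclidean distance from the projected test image to every projected training image
d_np = np.sqrt(((train_proj - test_proj[:, None]) ** 2).sum(axis=0))
# should pick the same training image as min_distance_index found via Shogun
print(d_np.argmin(), min_distance_index)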
5,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Subprocess WindowsError 5
The scenario: there is an app directory containing a main program, main.py. The main program calls updater.py (which lives outside the app directory) to upgrade the app directory.
Step1: updater.py code
After it starts, it renames the app directory to app_old.
Step2: build.py code
Package main.py into an exe and copy it to the same directory as main.py.
Package updater.py into an exe and copy it to the same directory as updater.py.
Note | Python Code:
# encoding: utf-8
import logging
import os
import subprocess
import sys
CUR_DIR = os.path.dirname(os.path.abspath(sys.argv[0]))
logging.basicConfig(filename=os.path.join(CUR_DIR, "app.log"),
filemode="w",level=logging.INFO,
format='%(asctime)s [%(levelname)s]- %(message)s')
UPDATER = os.path.abspath(os.path.join(CUR_DIR, "..", "updater.exe"))
def main():
subprocess.Popen([UPDATER])
sys.exit()
if __name__ == "__main__":
try:
main()
except Exception as e:
pass
Explanation: Subprocess WindowsError 5
The scenario: there is an app directory containing a main program, main.py. The main program calls updater.py (outside the app directory) to upgrade the app directory:
- download the new package
- rename app to app_old
- copy the new package in, making it the new app directory
- delete app_old
Note: main.py and updater.py are packaged into exe files with pyinstaller.
However, an error occurs when os.rename is called to rename the directory:
WindowsError: [Error 5]
Reproducing the problem
app/main.py code
Main program code. On startup it opens a log file and then launches the updater.exe program.
End of explanation
# -*- encoding: utf-8 -*-
import logging
import os
import sys
import zipfile
CUR_DIR = os.path.dirname(os.path.abspath(sys.argv[0]))
CACHE_DIR = os.path.join(CUR_DIR, "cache")
NEW_PACKAGE = os.path.join(CUR_DIR, "app.zip")
logging.basicConfig(filename=os.path.join(CUR_DIR, "updater.log"), filemode="w", level=logging.INFO,
format='%(asctime)s [%(levelname)s]- %(message)s')
def update_files():
logging.info(u"删除缓存目录:%s", CACHE_DIR)
with zipfile.ZipFile(NEW_PACKAGE) as zip:
zip.extractall(CACHE_DIR)
app_cache_dir = os.path.join(CUR_DIR, "cache", "app")
app_dir = os.path.join(CUR_DIR, "app")
app_old_dir = os.path.join(CUR_DIR, "old_app")
logging.info(u"替换目录%s 为 %s", app_dir, app_old_dir)
os.rename(app_dir, app_old_dir)
if __name__ == "__main__":
try:
update_files()
except Exception as e:
logging.exception(e)
Explanation: updater.py code
After it starts, it renames the app directory to app_old.
End of explanation
import sys
import os
import shutil
import subprocess
CUR_DIR = os.path.abspath(os.path.dirname(__file__))
APP_DIR = os.path.join(CUR_DIR, "app")
pyinstaller_exe = os.path.join(os.path.dirname(sys.executable) ,"Scripts", "pyinstaller.exe")
try:
shutil.rmtree(os.path.join(CUR_DIR, "build"))
shutil.rmtree(os.path.join(CUR_DIR, "dist"))
except:
pass
subprocess.check_call("{pyinstaller} -w -F --onefile --clean {script}".format(
pyinstaller=pyinstaller_exe,
script=os.path.join(CUR_DIR, "updater.py")
), shell=True, cwd=CUR_DIR)
shutil.copy(os.path.join(CUR_DIR, "dist", "updater.exe"),
CUR_DIR)
try:
shutil.rmtree(os.path.join(APP_DIR, "build"))
shutil.rmtree(os.path.join(APP_DIR, "dist"))
except:
pass
subprocess.check_call("{pyinstaller} -w -F --onefile --clean {script}".format(
pyinstaller=pyinstaller_exe,
script=os.path.join(APP_DIR, "main.py")
), shell=True, cwd=APP_DIR)
shutil.copy(os.path.join(APP_DIR, "dist", "main.exe"),
APP_DIR)
Explanation: build.py code
Package main.py into an exe and copy it to the same directory as main.py.
Package updater.py into an exe and copy it to the same directory as updater.py.
Note: pyinstaller must be installed.
End of explanation |
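A likely cause of the Error 5, offered as an inference rather than something stated above: main.py opens app.log inside the app directory, and on Windows with Python 2 that open file handle is inherited by the spawned updater.exe, so the child still holds a file open inside app when os.rename runs. A minimal, hypothetical variant of main() showing two common workarounds (it assumes UPDATER as defined in main.py above):
import logging
import subprocess
import sys
def main():
    # release the log file opened by logging.basicConfig before spawning the updater ...
    logging.shutdown()
    # ... and stop the child from inheriting any remaining open handles;
    # close_fds=True is valid here because stdin/stdout/stderr are not redirected
    subprocess.Popen([UPDATER], close_fds=True)
    sys.exit()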
5,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov Chains
author
Step1: Markov chains have log probability, fit, summarize, and from summaries methods implemented. They do not have classification capabilities by themselves, but when combined with a Naive Bayes classifier can be used to do discrimination between multiple models (see the Naive Bayes tutorial notebook).
Lets see the log probability of some data.
Step2: We can fit the model to sequences which we pass in, and as expected, get better performance on sequences which we train on. | Python Code:
%matplotlib inline
import time
import pandas
import random
import numpy
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import itertools
from pomegranate import *
random.seed(0)
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
from pomegranate import *
%pylab inline
d1 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
d2 = ConditionalProbabilityTable([['A', 'A', 0.10],
['A', 'C', 0.50],
['A', 'G', 0.30],
['A', 'T', 0.10],
['C', 'A', 0.10],
['C', 'C', 0.40],
['C', 'T', 0.40],
['C', 'G', 0.10],
['G', 'A', 0.05],
['G', 'C', 0.45],
['G', 'G', 0.45],
['G', 'T', 0.05],
['T', 'A', 0.20],
['T', 'C', 0.30],
['T', 'G', 0.30],
['T', 'T', 0.20]], [d1])
clf = MarkovChain([d1, d2])
Explanation: Markov Chains
author: Jacob Schreiber <br>
contact: jmschreiber91@gmail.com
Markov Chains are a simple model based on conditional probability, where a sequence is modelled as the product of conditional probabilities. A n-th order Markov chain looks back n emissions to base its conditional probability on. For example, a 3rd order Markov chain models $P(X_{t} | X_{t-1}, X_{t-2}, X_{t-3})$.
However, a full Markov model also has to handle the start of a sequence, where the first n-1 observations have fewer than n-1 predecessors to condition on. The first observation can't be modelled using $P(X_{t} | X_{t-1}, X_{t-2}, X_{t-3})$ at all, but can be modelled by $P(X_{t})$. The second observation has to be modelled by $P(X_{t} | X_{t-1})$. This means that these distributions have to be passed into the Markov chain as well.
We can initialize a Markov chain easily enough by passing in a list of the distributions.
End of explanation
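As a worked check of the factorisation above (a sketch, assuming the d1 and d2 tables defined in the previous cell): the chain's log probability of a short sequence is the first-symbol log probability under d1 plus the conditional log probabilities under d2.
import numpy as np
# log P('C') + log P('A'|'C') + log P('G'|'A') taken straight from the tables above
manual = np.log(0.40) + np.log(0.10) + np.log(0.30)
print(manual)                            # roughly -4.42
print(clf.log_probability(list('CAG')))  # should agree with the hand-computed value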
clf.log_probability( list('CAGCATCAGT') )
clf.log_probability( list('C') )
clf.log_probability( list('CACATCACGACTAATGATAAT') )
Explanation: Markov chains have log probability, fit, summarize, and from summaries methods implemented. They do not have classification capabilities by themselves, but when combined with a Naive Bayes classifier can be used to do discrimination between multiple models (see the Naive Bayes tutorial notebook).
Lets see the log probability of some data.
End of explanation
clf.fit(list(map(list,('CAGCATCAGT', 'C', 'ATATAGAGATAAGCT', 'GCGCAAGT', 'GCATTGC', 'CACATCACGACTAATGATAAT'))))
print(clf.log_probability( list('CAGCATCAGT') ) )
print(clf.log_probability( list('C') ))
print(clf.log_probability( list('CACATCACGACTAATGATAAT') ))
print(clf.distributions[0])
print(clf.distributions[1])
Explanation: We can fit the model to sequences which we pass in, and as expected, get better performance on sequences which we train on.
End of explanation |
5,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TODO
Step1: Create a master bias frame
Step2: Create the master flat frame
Step3: Process object frames
Bias, flat corrections
Step4: Now that those intermediate frames are written, we need to extract the 1D spectra
Step5: TODO
Step6: Fit a Voigt profile at row = 800 to determine PSF
Step7: Now fix PSF parameters, only need to fit amplitudes of PSF and background
Step9: Everything below here is crazy (wrong) aperture spectrum shit
Step10: Maximum likelihood estimate of source counts, background counts
For observed counts $C$ in the source aperture, and observed counts $B$ in the background aperture, $s$ is the true source counts, and $b$ is the background density, and there are $N_S$ pixels in the source aperture, $N_B$ pixels in the background aperture
Step11: TODO
Step12:
Step13: | Python Code:
# Standard library
from os.path import join
import sys
if '/Users/adrian/projects/longslit/' not in sys.path:
sys.path.append('/Users/adrian/projects/longslit/')
# Third-party
from astropy.constants import c
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.io import fits
import astropy.modeling as mod
plt.style.use('apw-notebook')
%matplotlib inline
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.optimize import leastsq
import ccdproc
from ccdproc import CCDData, ImageFileCollection
from longslit.utils import gaussian_constant, gaussian_polynomial
ccd_props = dict(gain=2.7*u.electron/u.adu, # from: http://mdm.kpno.noao.edu/index/MDM_CCDs.html
readnoise=7.9*u.electron)
# ic = ImageFileCollection('/Users/adrian/projects/gaia-wide-binaries/data/mdm-spring-2017/n1/',
# filenames=['n1.00{:02d}.fit'.format(i) for i in range(1,23+1)])
ic = ImageFileCollection('/Users/adrian/projects/gaia-wide-binaries/data/mdm-spring-2017/n4/',
filenames=['n4.00{:02d}.fit'.format(i) for i in list(range(1,21))+[89]])
# ic = ImageFileCollection('/Users/adrian/projects/gaia-wide-binaries/data/mdm-spring-2017/n3/',
# filenames=['n3.0{:03d}.fit'.format(i) for i in list(range(1,21))+[122]])
ic.summary['object']
Explanation: TODO:
Define pixel masks for all objects to remove nearby sources
Data model for 1D extracted files:
HDU0: reproduce header from original file
HDU1: Table with: pix, source_flux, source_ivar, centroid, bg_flux, bg_ivar
header should contain PSF fit parameters
Try reducing a single object
End of explanation
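A minimal sketch of the proposed 1D output data model (hypothetical; the column names follow the TODO list above and the PSF header keywords are placeholders, not an agreed convention):
import numpy as np
from astropy.io import fits
n_pix = 1650  # assumed number of rows along the trace
cols = [fits.Column(name=name, format='D', array=np.zeros(n_pix))
        for name in ('source_flux', 'source_ivar', 'centroid', 'bg_flux', 'bg_ivar')]
cols.insert(0, fits.Column(name='pix', format='J', array=np.arange(n_pix)))
tbl = fits.BinTableHDU.from_columns(cols)
tbl.header['PSF_STDG'] = (1.0, 'Gaussian sigma of the PSF fit [pix]')
tbl.header['PSF_FWHL'] = (1.0, 'Lorentzian FWHM of the PSF fit [pix]')
hdul = fits.HDUList([fits.PrimaryHDU(), tbl])  # HDU0 would carry the original frame header
# hdul.writeto('extracted_1d.fits', overwrite=True)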
# get list of overscan-subtracted bias frames as 2D arrays
bias_list = []
for hdu, fname in ic.hdus(return_fname=True, imagetyp='BIAS'):
ccd = CCDData.read(join(ic.location, fname), unit='adu')
ccd = ccdproc.gain_correct(ccd, gain=ccd_props['gain'])
ccd = ccdproc.subtract_overscan(ccd, overscan=ccd[:,300:])
ccd = ccdproc.trim_image(ccd, fits_section="[1:300,:]")
bias_list.append(ccd)
# combine all bias frames into a master bias frame
master_bias = ccdproc.combine(bias_list, method='average', clip_extrema=True,
nlow=1, nhigh=1, error=True)
plt.hist(master_bias.data.ravel(), bins='auto');
plt.yscale('log')
plt.xlabel('master bias pixel values')
plt.figure(figsize=(15,5))
plt.imshow(master_bias.data.T, cmap='plasma')
Explanation: Create a master bias frame
End of explanation
# create a list of flat frames
flat_list = []
for hdu, fname in ic.hdus(return_fname=True, imagetyp='FLAT'):
ccd = CCDData.read(join(ic.location, fname), unit='adu')
ccd = ccdproc.gain_correct(ccd, gain=ccd_props['gain'])
ccd = ccdproc.ccd_process(ccd, oscan='[300:364,:]', trim="[1:300,:]",
master_bias=master_bias)
flat_list.append(ccd)
# combine into a single master flat - use 3*sigma sigma-clipping
master_flat = ccdproc.combine(flat_list, method='average', sigma_clip=True,
low_thresh=3, high_thresh=3)
plt.hist(master_flat.data.ravel(), bins='auto');
plt.yscale('log')
plt.xlabel('master flat pixel values')
plt.figure(figsize=(15,5))
plt.imshow(master_flat.data.T, cmap='plasma')
Explanation: Create the master flat frame
End of explanation
im_list = []
for hdu, fname in ic.hdus(return_fname=True, imagetyp='OBJECT'):
print(hdu.header['OBJECT'])
ccd = CCDData.read(join(ic.location, fname), unit='adu')
hdr = ccd.header
# make a copy of the object
nccd = ccd.copy()
# apply the overscan correction
poly_model = mod.models.Polynomial1D(2)
nccd = ccdproc.subtract_overscan(nccd, fits_section='[300:364,:]',
model=poly_model)
# apply the trim correction
nccd = ccdproc.trim_image(nccd, fits_section='[1:300,:]')
# create the error frame
nccd = ccdproc.create_deviation(nccd, gain=ccd_props['gain'],
readnoise=ccd_props['readnoise'])
# now gain correct
nccd = ccdproc.gain_correct(nccd, gain=ccd_props['gain'])
# correct for master bias frame
# - this does some crazy shit at the blue end, but we can live with it
nccd = ccdproc.subtract_bias(nccd, master_bias)
# correct for master flat frame
nccd = ccdproc.flat_correct(nccd, master_flat)
# comsic ray cleaning - this updates the uncertainty array as well
nccd = ccdproc.cosmicray_lacosmic(nccd, sigclip=8.)
# new_fname = 'p'+fname
# ccd.write(new_fname, overwrite=True)
# im_list.append(new_fname)
print(hdu.header['EXPTIME'])
break
fig,axes = plt.subplots(3,1,figsize=(18,10),sharex=True,sharey=True)
axes[0].imshow(nccd.data.T, cmap='plasma')
axes[1].imshow(nccd.uncertainty.array.T, cmap='plasma')
axes[2].imshow(nccd.mask.T, cmap='plasma')
Explanation: Process object frames
Bias, flat corrections
End of explanation
n_trace_nodes = 32
def fit_gaussian(pix, flux, flux_ivar, n_coeff=1, p0=None, return_cov=False):
if p0 is None:
p0 = [flux.max(), pix[flux.argmax()], 1.] + [0.]*(n_coeff-1) + [flux.min()]
def errfunc(p):
return (gaussian_polynomial(pix, *p) - flux) * np.sqrt(flux_ivar)
res = leastsq(errfunc, x0=p0, full_output=True)
p_opt = res[0]
cov_p = res[1]
ier = res[-1]
if ier > 4 or ier < 1:
raise RuntimeError("Failed to fit Gaussian to spectrum trace row.")
if return_cov:
return p_opt, cov_p
else:
return p_opt
n_rows,n_cols = nccd.data.shape
row_idx = np.linspace(0, n_rows-1, n_trace_nodes).astype(int)
pix = np.arange(n_cols, dtype=float)
trace_amps = np.zeros_like(row_idx).astype(float)
trace_centroids = np.zeros_like(row_idx).astype(float)
trace_stddevs = np.zeros_like(row_idx).astype(float)
for i,j in enumerate(row_idx):
flux = nccd.data[j]
flux_err = nccd.uncertainty.array[j]
flux_ivar = 1/flux_err**2.
flux_ivar[~np.isfinite(flux_ivar)] = 0.
p_opt = fit_gaussian(pix, flux, flux_ivar)
trace_amps[i] = p_opt[0]
trace_centroids[i] = p_opt[1]
trace_stddevs[i] = p_opt[2]
trace_func = InterpolatedUnivariateSpline(row_idx, trace_centroids)
trace_amp_func = InterpolatedUnivariateSpline(row_idx, trace_amps)
trace_stddev = np.median(trace_stddevs)
plt.hist(trace_stddevs, bins=np.linspace(0, 5., 16));
Explanation: Now that those intermediate frames are written, we need to extract the 1D spectra
End of explanation
plt.plot(row_idx, trace_centroids)
_grid = np.linspace(row_idx.min(), row_idx.max(), 1024)
plt.plot(_grid, trace_func(_grid), marker='', alpha=0.5)
plt.axvline(50)
plt.axvline(1650)
plt.xlabel('row index [pix]')
plt.ylabel('trace centroid [pix]')
Explanation: TODO: some outlier rejection
End of explanation
from scipy.special import wofz
from scipy.stats import scoreatpercentile
def voigt(x, amp, x_0, stddev, fwhm):
_x = x-x_0
z = (_x + 1j*fwhm/2.) / (np.sqrt(2.)*stddev)
return amp * wofz(z).real / (np.sqrt(2.*np.pi)*stddev)
def psf_model(p, x):
amp, x_0, std_G, fwhm_L, C = p
return voigt(x, amp, x_0, std_G, fwhm_L) + C
def psf_chi(p, pix, flux, flux_ivar):
return (psf_model(p, pix) - flux) * np.sqrt(flux_ivar)
i = 800 # MAGIC NUMBER
flux = nccd.data[i]
flux_err = nccd.uncertainty.array[i]
pix = np.arange(len(flux))
flux_ivar = 1/flux_err**2.
flux_ivar[~np.isfinite(flux_ivar)] = 0.
p0 = [flux.max(), pix[np.argmax(flux)], 1., 1., scoreatpercentile(flux[flux>0], 16.)]
fig,axes = plt.subplots(2, 1, figsize=(10,6), sharex=True, sharey=True)
# data on both panels
for i in range(2):
axes[i].plot(pix, flux, marker='', drawstyle='steps-mid')
axes[i].errorbar(pix, flux, flux_err, marker='', linestyle='none', ecolor='#777777', zorder=-10)
# plot initial guess
grid = np.linspace(pix.min(), pix.max(), 1024)
axes[0].plot(grid, psf_model(p0, grid), marker='', alpha=1., zorder=10)
axes[0].set_yscale('log')
axes[0].set_title("Initial")
# fit parameters
p_opt,ier = leastsq(psf_chi, x0=p0, args=(pix, flux, flux_ivar))
print(ier)
# plot fit parameters
axes[1].plot(grid, psf_model(p_opt, grid), marker='', alpha=1., zorder=10)
axes[1].set_title("Fit")
fig.tight_layout()
psf_p = dict()
psf_p['std_G'] = p_opt[2]
psf_p['fwhm_L'] = p_opt[3]
Explanation: Fit a Voigt profile at row = 800 to determine PSF
End of explanation
def row_model(p, psf_p, x):
amp, x_0, C = p
return voigt(x, amp, x_0, stddev=psf_p['std_G'], fwhm=psf_p['fwhm_L']) + C
def row_chi(p, pix, flux, flux_ivar, psf_p):
return (row_model(p, psf_p, pix) - flux) * np.sqrt(flux_ivar)
# This does PSF extraction
trace_1d = np.zeros(n_rows).astype(float)
flux_1d = np.zeros(n_rows).astype(float)
flux_1d_err = np.zeros(n_rows).astype(float)
sky_flux_1d = np.zeros(n_rows).astype(float)
sky_flux_1d_err = np.zeros(n_rows).astype(float)
for i in range(nccd.data.shape[0]):
flux = nccd.data[i]
flux_err = nccd.uncertainty.array[i]
flux_ivar = 1/flux_err**2.
flux_ivar[~np.isfinite(flux_ivar)] = 0.
p0 = [flux.max(), pix[np.argmax(flux)], scoreatpercentile(flux[flux>0], 16.)]
p_opt,p_cov,*_,mesg,ier = leastsq(row_chi, x0=p0, full_output=True,
args=(pix, flux, flux_ivar, psf_p))
if ier < 1 or ier > 4 or p_cov is None:
flux_1d[i] = np.nan
sky_flux_1d[i] = np.nan
print("Fit failed for {}".format(i))
continue
if test:
_grid = np.linspace(sub_pix.min(), sub_pix.max(), 1024)
model_flux = p_to_model(p_opt, shifts)(_grid)
# ----------------------------------
plt.figure(figsize=(8,6))
plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='')
plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75)
plt.axhline(sky_opt[0])
plt.plot(_grid, model_flux, marker='')
plt.yscale('log')
flux_1d[i] = p_opt[0]
trace_1d[i] = p_opt[1]
sky_flux_1d[i] = p_opt[2]
# TODO: ignores centroiding covariances...
flux_1d_err[i] = np.sqrt(p_cov[0,0])
sky_flux_1d_err[i] = np.sqrt(p_cov[2,2])
# clean up the 1d spectra
flux_1d_ivar = 1/flux_1d_err**2
sky_flux_1d_ivar = 1/sky_flux_1d_err**2
pix_1d = np.arange(len(flux_1d))
mask_1d = (pix_1d < 50) | (pix_1d > 1600)
flux_1d[mask_1d] = 0.
flux_1d_ivar[mask_1d] = 0.
sky_flux_1d[mask_1d] = 0.
sky_flux_1d_ivar[mask_1d] = 0.
fig,all_axes = plt.subplots(2, 2, figsize=(12,12), sharex='row')
axes = all_axes[0]
axes[0].plot(flux_1d, marker='', drawstyle='steps-mid')
axes[0].errorbar(np.arange(n_rows), flux_1d, 1/np.sqrt(flux_1d_ivar), linestyle='none',
marker='', ecolor='#666666', alpha=1., zorder=-10)
axes[0].set_ylim(1e3, np.nanmax(flux_1d))
axes[0].set_yscale('log')
axes[0].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1)
axes[1].plot(sky_flux_1d, marker='', drawstyle='steps-mid')
axes[1].errorbar(np.arange(n_rows), sky_flux_1d, 1/np.sqrt(sky_flux_1d_ivar), linestyle='none',
marker='', ecolor='#666666', alpha=1., zorder=-10)
axes[1].set_ylim(1e-1, np.nanmax(sky_flux_1d))
axes[1].set_yscale('log')
axes[1].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1)
# Zoom in around Halpha
axes = all_axes[1]
slc = slice(halpha_idx-32, halpha_idx+32+1)
axes[0].plot(np.arange(n_rows)[slc], flux_1d[slc], marker='', drawstyle='steps-mid')
axes[0].errorbar(np.arange(n_rows)[slc], flux_1d[slc], flux_1d_err[slc], linestyle='none',
marker='', ecolor='#666666', alpha=1., zorder=-10)
axes[1].plot(np.arange(n_rows)[slc], sky_flux_1d[slc], marker='', drawstyle='steps-mid')
axes[1].errorbar(np.arange(n_rows)[slc], sky_flux_1d[slc], sky_flux_1d_err[slc], linestyle='none',
marker='', ecolor='#666666', alpha=1., zorder=-10)
fig.tight_layout()
plt.figure(figsize=(12,5))
plt.plot(sky_flux_1d, marker='', drawstyle='steps-mid')
plt.errorbar(np.arange(n_rows), sky_flux_1d, 1/np.sqrt(sky_flux_1d_ivar), linestyle='none',
marker='', ecolor='#666666', alpha=1., zorder=-10)
# plt.ylim(1e3, np.nanmax(flux_1d))
plt.yscale('log')
plt.xlim(1000, 1300)
# plt.ylim(1e4, 4e4)
Explanation: Now fix PSF parameters, only need to fit amplitudes of PSF and background
End of explanation
n_rows,n_cols = nccd.data.shape
pix = np.arange(n_cols, dtype=float)
sqrt_2pi = np.sqrt(2*np.pi)
def gaussian1d(x, amp, mu, var):
return amp/(sqrt_2pi*np.sqrt(var)) * np.exp(-0.5 * (np.array(x) - mu)**2/var)
def model(pix, sky_p, p0, other_p, shifts=None):
parameters, p, are:
sky
mean0, log_amp0, log_var0
log_amp_m*, log_var_m* (at mean0 - N pix)
log_amp_p*, log_var_p* (at mean0 + N pix)
n_other = len(other_p)
if n_other // 2 != n_other / 2:
raise ValueError("Need even number of other_p")
if shifts is None:
shifts = np.arange(-n_other/2, n_other/2+1)
shifts = np.delete(shifts, np.where(shifts == 0)[0])
assert len(other_p) == len(other_p)
# central model
mean0, log_amp0, log_var0 = np.array(p0).astype(float)
model_flux = gaussian1d(pix, np.exp(log_amp0), mean0, np.exp(log_var0))
for shift,pars in zip(shifts, other_p):
model_flux += gaussian1d(pix, np.exp(pars[0]), mean0 + shift, np.exp(pars[1]))
# l1 = Lorentz1D(x_0=mean0, amplitude=np.exp(sky_p[1]), fwhm=np.sqrt(np.exp(sky_p[2])))
# sky = sky_p[0] + l1(pix)
g = np.exp(sky_p[2])
sky = sky_p[0] + np.exp(sky_p[1]) * (np.pi*g * (1 + (pix-mean0)**2/g**2))**-1
return model_flux + sky
def unpack_p(p):
sky = p[0:3]
p0 = p[3:6]
other_p = np.array(p[6:]).reshape(-1, 2)
return sky, p0, other_p
def p_to_model(p, shifts):
sky, p0, other_p = unpack_p(p)
for ln_amp in [p0[1]]+other_p[:,0].tolist():
if ln_amp > 15:
return lambda x: np.inf
for ln_var in [p0[2]]+other_p[:,1].tolist():
if ln_var < -1. or ln_var > 8:
return lambda x: np.inf
return lambda x: model(x, sky, p0, other_p, shifts)
def ln_likelihood(p, pix, flux, flux_ivar, shifts):
_model = p_to_model(p, shifts)
model_flux = _model(pix)
if np.any(np.isinf(model_flux)):
return np.inf
return np.sqrt(flux_ivar) * (flux - model_flux)
fig,ax = plt.subplots(1,1,figsize=(10,8))
for i in np.linspace(500, 1200, 16).astype(int):
flux = nccd.data[i]
pix = np.arange(len(flux))
plt.plot(pix-trace_func(i), flux / trace_amp_func(i), marker='', drawstyle='steps-mid', alpha=0.25)
plt.xlim(-50, 50)
plt.yscale('log')
flux = nccd.data[800]
plt.plot(pix, flux, marker='', drawstyle='steps-mid', alpha=0.25)
plt.yscale('log')
plt.xlim(flux.argmax()-25, flux.argmax()+25)
# This does PSF extraction, slightly wrong
# hyper-parameters of fit
# shifts = [-6, -4, -2, 2, 4., 6.]
shifts = np.linspace(-10, 10, 4).tolist()
flux_1d = np.zeros(n_rows).astype(float)
flux_1d_err = np.zeros(n_rows).astype(float)
sky_flux_1d = np.zeros(n_rows).astype(float)
sky_flux_1d_err = np.zeros(n_rows).astype(float)
test = False
# test = True
# for i in range(n_rows):
# for i in [620, 700, 780, 860, 920, 1000]:
for i in range(620, 1000):
# line_ctr0 = trace_func(i) # TODO: could use this to initialize?
flux = nccd.data[i,j1:j2]
flux_err = nccd.uncertainty.array[i,j1:j2]
flux_ivar = 1/flux_err**2.
flux_ivar[~np.isfinite(flux_ivar)] = 0.
# initial guess for least-squares
sky = [np.min(flux), 1E-2, 2.]
p0 = [pix[np.argmax(flux)], np.log(flux.max()), np.log(1.)]
other_p = [[np.log(flux.max()/100.), np.log(1.)]]*len(shifts)
_shifts = shifts + [0.,0.]
other_p += [[np.log(flux.max()/100.), np.log(4.)]]*2
# sanity check
ll = ln_likelihood(sky + p0 + other_p, sub_pix, flux, flux_ivar, shifts=_shifts).sum()
assert np.isfinite(ll)
p_opt,p_cov,*_,mesg,ier = leastsq(ln_likelihood, x0=sky + p0 + np.ravel(other_p).tolist(), full_output=True,
args=(pix, flux, flux_ivar, shifts))
if ier < 1 or ier > 4:
flux_1d[i] = np.nan
sky_flux_1d[i] = np.nan
print("Fit failed for {}".format(i))
continue
# raise ValueError("Fit failed for {}".format(i))
sky_opt, p0_opt, other_p_opt = unpack_p(p_opt)
amps = np.exp(other_p_opt[:,0])
stds = np.exp(other_p_opt[:,1])
if test:
_grid = np.linspace(sub_pix.min(), sub_pix.max(), 1024)
model_flux = p_to_model(p_opt, shifts)(_grid)
# ----------------------------------
plt.figure(figsize=(8,6))
plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='')
plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75)
plt.axhline(sky_opt[0])
plt.plot(_grid, model_flux, marker='')
plt.yscale('log')
flux_1d[i] = np.exp(p0_opt[1]) + amps.sum()
sky_flux_1d[i] = sky_opt[0]
fig,all_axes = plt.subplots(2, 2, figsize=(12,12), sharex='row')
axes = all_axes[0]
axes[0].plot(flux_1d, marker='', drawstyle='steps-mid')
# axes[0].errorbar(np.arange(n_rows), flux_1d, flux_1d_err, linestyle='none',
# marker='', ecolor='#666666', alpha=1., zorder=-10)
# axes[0].set_ylim(1e2, np.nanmax(flux_1d))
axes[0].set_ylim(np.nanmin(flux_1d[flux_1d!=0.]), np.nanmax(flux_1d))
# axes[0].set_yscale('log')
axes[0].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1)
axes[1].plot(sky_flux_1d, marker='', drawstyle='steps-mid')
# axes[1].errorbar(np.arange(n_rows), sky_flux_1d, sky_flux_1d_err, linestyle='none',
# marker='', ecolor='#666666', alpha=1., zorder=-10)
axes[1].set_ylim(-5, np.nanmax(sky_flux_1d))
axes[1].axvline(halpha_idx, zorder=-10, color='r', alpha=0.1)
# Zoom in around Halpha
axes = all_axes[1]
slc = slice(halpha_idx-32, halpha_idx+32+1)
axes[0].plot(np.arange(n_rows)[slc], flux_1d[slc], marker='', drawstyle='steps-mid')
# axes[0].errorbar(np.arange(n_rows)[slc], flux_1d[slc], flux_1d_err[slc], linestyle='none',
# marker='', ecolor='#666666', alpha=1., zorder=-10)
axes[1].plot(np.arange(n_rows)[slc], sky_flux_1d[slc], marker='', drawstyle='steps-mid')
# axes[1].errorbar(np.arange(n_rows)[slc], sky_flux_1d[slc], sky_flux_1d_err[slc], linestyle='none',
# marker='', ecolor='#666666', alpha=1., zorder=-10)
fig.tight_layout()
Explanation: Everything below here is crazy (wrong) aperture spectrum shit
End of explanation
# aperture_width = 8 # pixels
aperture_width = int(np.ceil(3*trace_stddev))
sky_offset = 8 # pixels
sky_width = 16 # pixels
# # estimated from ds9: counts in 1 pixel of spectrum, background counts in 2 pixels
# C = 11000.
# B = 800.
# N_S = 1
# N_B = 2
# s_hat = C - B*N_S/N_B
# b_hat = B / (N_B-N_S)
# s_var = C + B*(N_S/N_B)**2
# b_var = B / N_B
# This does aperture extraction, but wrong!
flux_1d = np.zeros(n_rows).astype(float)
flux_1d_err = np.zeros(n_rows).astype(float)
sky_flux_1d = np.zeros(n_rows).astype(float)
sky_flux_1d_err = np.zeros(n_rows).astype(float)
source_mask = np.zeros_like(nccd.data).astype(bool)
sky_mask = np.zeros_like(nccd.data).astype(bool)
# # HACK: uniform aperture down the CCD
# _derp = trace_func(np.arange(n_rows))
# j1 = int(np.floor(_derp.min()))
# j2 = int(np.ceil(_derp.max()))
for i in range(n_rows):
line_ctr = trace_func(i)
# source aperture bool mask
j1 = int(np.floor(line_ctr-aperture_width/2))
j2 = int(np.ceil(line_ctr+aperture_width/2)+1)
source_mask[i,j1:j2] = 1
# sky aperture bool masks
s1 = int(np.floor(j1 - sky_offset - sky_width))
s2 = int(np.ceil(j1 - sky_offset + 1))
sky_mask[i,s1:s2] = 1
s1 = int(np.floor(j2 + sky_offset))
s2 = int(np.ceil(j2 + sky_offset + sky_width + 1))
sky_mask[i,s1:s2] = 1
source_flux = nccd.data[i,source_mask[i]]
source_flux_ivar = 1 / nccd.uncertainty.array[i,source_mask[i]]**2
source_flux_ivar[~np.isfinite(source_flux_ivar)] = 0.
sky_flux = nccd.data[i,sky_mask[i]]
sky_flux_ivar = 1 / nccd.uncertainty.array[i,sky_mask[i]]**2
sky_flux_ivar[~np.isfinite(sky_flux_ivar)] = 0.
C = np.sum(source_flux_ivar*source_flux) / np.sum(source_flux_ivar)
B = np.sum(sky_flux_ivar*sky_flux) / np.sum(sky_flux_ivar)
N_S = source_mask[i].sum()
N_B = sky_mask[i].sum()
s_hat = C - B * N_S / N_B
b_hat = B / (N_B - N_S)
s_var = C + B * (N_S / N_B)**2
b_var = B / N_B
flux_1d[i] = s_hat
flux_1d_err[i] = np.sqrt(s_var)
sky_flux_1d[i] = b_hat
sky_flux_1d_err[i] = np.sqrt(b_var)
# approximate
halpha_idx = 686
Explanation: Maximum likelihood estimate of source counts, background counts
For observed counts $C$ in the source aperture, and observed counts $B$ in the background aperture, $s$ is the true source counts, and $b$ is the background density, and there are $N_S$ pixels in the source aperture, $N_B$ pixels in the background aperture:
$$
\hat{s} = C - B\,\frac{N_S}{N_B} \
\hat{b} = \frac{B}{N_B - N_S} \
$$
$$
\sigma^2_s = C + B\,\frac{N_S^2}{N_B^2} \
\sigma^2_b = \frac{B}{N_B}
$$
https://arxiv.org/pdf/1410.2564.pdf
TODO: include uncertainties in pixel counts from read noise?
End of explanation
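As a quick numerical check of these estimators (a sketch plugging in the rough ds9 numbers quoted in the commented-out block above: C=11000, B=800, N_S=1, N_B=2):
C, B = 11000., 800.
N_S, N_B = 1., 2.
s_hat = C - B * N_S / N_B                    # 10600.0
b_hat = B / (N_B - N_S)                      # 800.0
s_err = np.sqrt(C + B * (N_S / N_B) ** 2)    # ~105.8
b_err = np.sqrt(B / N_B)                     # 20.0
print(s_hat, b_hat, s_err, b_err)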
x = np.linspace(-10.2, 11.1, 21)
y = gaussian_polynomial(x, 100., 0.2, 1., 0.4, 15.) \
+ gaussian_polynomial(x, 10., 0.3, 2., 0.)
# y = gaussian_polynomial(x, 100., 0.2, 1., 0.) \
# + gaussian_polynomial(x, 100., 2., 2., 0.)
y_err = np.full_like(y, 1.)
y_ivar = 1/y_err**2
# y_ivar[8] = 0.
y = np.random.normal(y, y_err)
plt.errorbar(x, y, y_err)
g1 = mod.models.Gaussian1D(amplitude=y.max())
g2 = mod.models.Gaussian1D(amplitude=y.max()/10.)
g1.amplitude.bounds = (0, 1E10)
g2.amplitude.bounds = (0, 1E10)
# g3 = mod.models.Gaussian1D()
p1 = mod.models.Polynomial1D(3)
full_model = g1+g2+p1
fitter = mod.fitting.LevMarLSQFitter()
fit_model = fitter(full_model, x, y, weights=1/y_err)
plt.errorbar(x, y, y_err, linestyle='none', marker='o')
_grid = np.linspace(x.min(), x.max(), 256)
plt.plot(_grid, fit_model(_grid), marker='')
fit_model
fit_model.amplitude_0
fit_model.amplitude_1
Explanation: TODO: discontinuities are from shifting aperture -- figure out a way to have a constant aperture?
Testing line fit
End of explanation
aperture_width = 250
i = 1200
line_ctr = trace_func(i)
# j1 = int(np.floor(line_ctr-aperture_width/2))
# j2 = int(np.ceil(line_ctr+aperture_width/2)+1)
j1 = 0
j2 = 300
print(j1, j2)
_grid = np.linspace(sub_pix.min(), sub_pix.max(), 1024)
model_flux = p_to_model(p_opt, shifts)(_grid)
# ----------------------------------
plt.figure(figsize=(8,6))
plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='')
plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75)
plt.axhline(sky_opt[0])
plt.plot(_grid, model_flux, marker='')
plt.yscale('log')
Explanation:
End of explanation
from astropy.modeling.functional_models import Lorentz1D, Gaussian1D, Voigt1D, Const1D
from astropy.modeling.fitting import LevMarLSQFitter
def _pars_to_model(p):
mean = p[0]
log_amp_V, log_fwhm_L = p[1:3]
log_amp_G1, log_var1, log_amp_G2, log_var2 = p[3:-1]
C = p[-1]
# l = Voigt1D(x_0=mean, amplitude_L=np.exp(log_amp_V),
# fwhm_L=np.exp(log_fwhm_L),
# fwhm_G=np.exp(log_fwhm_G)) \
# + Gaussian1D(amplitude=np.exp(log_amp_G1), mean=mean, stddev=np.sqrt(np.exp(log_var1))) \
# + Gaussian1D(amplitude=np.exp(log_amp_G2), mean=mean, stddev=np.sqrt(np.exp(log_var2))) \
# + Const1D(amplitude=C)
l = Lorentz1D(x_0=mean, amplitude=np.exp(log_amp_V), fwhm=np.exp(log_fwhm_L)) \
+ Gaussian1D(amplitude=np.exp(log_amp_G1), mean=mean, stddev=np.sqrt(np.exp(log_var1))) \
+ Gaussian1D(amplitude=np.exp(log_amp_G2), mean=mean, stddev=np.sqrt(np.exp(log_var2))) \
+ Const1D(amplitude=C)
return l
def test(p, pix, flux, flux_ivar):
l = _pars_to_model(p)
return (flux - l(pix)) * np.sqrt(flux_ivar)
derp,ier = leastsq(test, x0=[np.argmax(flux),
np.log(np.max(flux)), 1.,
np.log(np.max(flux)/10.), 5.5,
np.log(np.max(flux)/25.), 7.5,
np.min(flux)],
args=(sub_pix, flux, flux_ivar))
l_fit = _pars_to_model(derp)
plt.figure(figsize=(8,6))
plt.plot(sub_pix, flux, drawstyle='steps-mid', marker='')
plt.errorbar(sub_pix, flux, 1/np.sqrt(flux_ivar), marker='', linestyle='none', ecolor='#666666', alpha=0.75)
plt.plot(_grid, l_fit(_grid), marker='')
plt.yscale('log')
plt.figure(figsize=(8,6))
plt.plot(sub_pix, (flux-l_fit(sub_pix))/(1/np.sqrt(flux_ivar)), drawstyle='steps-mid', marker='')
# plt.errorbar(sub_pix, (flux-l_fit(sub_pix))/(, 1/np.sqrt(flux_ivar),
# marker='', linestyle='none', ecolor='#666666', alpha=0.75)
Explanation:
End of explanation |
5,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:42
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component, in seconds?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component, in seconds?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD) without an explicit ITD, i.e. a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
5,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallelization
Another nice thing about Python is, how easily you can parallelize your code. Here comes one example for multiprocessing.
map
An often used function in Python is map. It mapps a given function to every item in a list.
Step1: parallel map
Such a map function can be parallelized over all cores of your machine very easily. This of course is most useful when the function to call needs heavy computations. Like this one
Step2: Unfortunately, the parallelization doesn't work in IPython notebooks. So you need to run the following code from it's own file | Python Code:
def f(x):
return x**2
l = range(8)
s = map(f, l)
print 'input: ', l
print 'output:', s
Explanation: Parallelization
Another nice thing about Python is how easily you can parallelize your code. Here is one example using multiprocessing.
map
An often used function in Python is map. It maps a given function to every item in a list.
End of explanation
import time
def g(x):
time.sleep(1) # simulate heavy computations
return x**2
Explanation: parallel map
Such a map function can be parallelized over all cores of your machine very easily. This of course is most useful when the function to call needs heavy computations. Like this one:
End of explanation
import joblib
import time
mem = joblib.Memory(cachedir='C:\Windows\Temp')
@mem.cache
def my_cached_function(x):
time.sleep(5)
return x**2
print my_cached_function(2)
print my_cached_function(2)
print my_cached_function(2)
print my_cached_function(2)
Explanation: Unfortunately, the parallelization doesn't work in IPython notebooks. So you need to run the following code from its own file:
```python
import multiprocessing
if __name__ == '__main__':
# a list of 'problems' to solve in parallel
problems = range(8)
# starting a multiprocessing pool and measure time
pool = multiprocessing.Pool()
time_start = time.time()
result = pool.map(g, problems)
time_stop = time.time()
# print result
execution_time = time_stop - time_start
print 'executed %d problems in %d seconds' % (len(problems), execution_time)
```
MapReduce
A simple map may seem like a quite limited approach to parallelization at first. However, every problem that can be re-formulated as a combination of a map and a subsequent reduce function can be parallelized easily. In fact, this scheme is known as MapReduce and used for large-scale parallelization on the cloud-computing clusters from Google, Amazon and co.
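As a toy sketch of that pattern (reusing g, pool and problems from the snippet above), the reduce step simply combines the mapped results:
```python
from functools import reduce
mapped = pool.map(g, problems)               # parallel 'map' step
total = reduce(lambda a, b: a + b, mapped)   # serial 'reduce' step
```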
Other approaches
There are many other approaches and libraries to parallelization as well. For instance there is joblib parallel and IPython parallel. The latter is pretty powerful and even lets you execute your jobs on different machines via SSH.
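A joblib version of the same parallel map looks roughly like this (a sketch; it assumes joblib is installed and reuses the g function from above):
```python
from joblib import Parallel, delayed
result = Parallel(n_jobs=-1)(delayed(g)(x) for x in range(8))
```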
Caching
Another thing that can make your code much faster is caching, especially when you run scientific experiments repeatedly.
Note: If a library (in this case joblib) is not installed, you can install it in the CIP cluster via conda install joblib from the command prompt. On your own system, libraries are usually installed via pip install <library_name>.
End of explanation |
5,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
Iterative vs fragment-based mapping
Advantages of iterative mapping
Advantages of fragment-based mapping
Mapping
Iterative mapping
Fragment-based mapping
Iterative vs fragment-based mapping
Iterative mapping first proposed by <a name="ref-1"/>(Imakaev et al., 2012), allows to map usually a high number of reads. However other methodologies, less "brute-force" can be used to take into account the chimeric nature of Hi-C reads.
A simple alternative is to allow split mapping, just as with RNA-seq data.
Another way consists in pre-truncating <a name="ref-1"/>(Ay and Noble, 2015) reads that contain a ligation site and map only the longest part of the read <a name="ref-2"/>(Wingett et al., 2015).
Finally, an intermediate approach, fragment-based, consists in mapping full length reads first, and than splitting unmapped reads at the ligation sites <a name="ref-1"/>(Serra, Ba`{u, Filion and Marti-Renom, 2016).
Advantages of iterative mapping
It's the only solution when no restriction enzyme has been used (i.e. micro-C)
Can be faster when few windows (2 or 3) are used
Advantages of fragment-based mapping
Generally faster
Safer
Step1: The full mapping function can be used to perform either iterative or fragment-based mapping, or a combination of both.
Iterative mapping
Here an example of use as iterative mapping
Step2: And for the second side of the read
Step3: Fragment-based mapping
With fragment based mapping it would be | Python Code:
from pytadbit.mapping.full_mapper import full_mapping
Explanation: Table of Contents
Iterative vs fragment-based mapping
Advantages of iterative mapping
Advantages of fragment-based mapping
Mapping
Iterative mapping
Fragment-based mapping
Iterative vs fragment-based mapping
Iterative mapping, first proposed by <a name="ref-1"/>(Imakaev et al., 2012), usually allows a large number of reads to be mapped. However, other, less "brute-force" methodologies can be used to take into account the chimeric nature of Hi-C reads.
A simple alternative is to allow split mapping, just as with RNA-seq data.
Another way consists in pre-truncating <a name="ref-1"/>(Ay and Noble, 2015) reads that contain a ligation site and map only the longest part of the read <a name="ref-2"/>(Wingett et al., 2015).
Finally, an intermediate approach, fragment-based, consists in mapping full-length reads first, and then splitting unmapped reads at the ligation sites <a name="ref-1"/>(Serra, Baù, Filion and Marti-Renom, 2016).
Advantages of iterative mapping
It's the only solution when no restriction enzyme has been used (i.e. micro-C)
Can be faster when few windows (2 or 3) are used
Advantages of fragment-based mapping
Generally faster
Safer: mapped reads are generally larger than 25-30 nt (the largest window used in iterative mapping). Fewer reads are mapped, but the difference is usually cancelled or reversed when looking for "valid-pairs".
Note: We use GEM <a name="ref-1"/>(Marco-Sola, Sammeth, Guigó and Ribeca, 2012); performance is very similar to Bowtie2, perhaps a bit better.
For now TADbit is only compatible with GEM.
Mapping
End of explanation
r_enz = 'MboI'
! mkdir -p results/iterativ/$r_enz
! mkdir -p results/iterativ/$r_enz/01_mapping
# for the first side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz),
r_enz=r_enz, frag_map=False, clean=True, nthreads=20,
windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)),
temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz))
Explanation: The full mapping function can be used to perform either iterative or fragment-based mapping, or a combination of both.
Iterative mapping
Here is an example of use as iterative mapping:
End of explanation
# for the second side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz),
r_enz=r_enz, frag_map=False, clean=True, nthreads=20,
windows=((1,25),(1,30),(1,35),(1,40),(1,45),(1,50),(1,55),(1,60),(1,65),(1,70),(1,75)),
temp_dir='results/iterativ/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz))
Explanation: And for the second side of the read:
End of explanation
! mkdir -p results/fragment/$r_enz
! mkdir -p results/fragment/$r_enz/01_mapping
# for the first side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_1.fastq' % (r_enz),
r_enz=r_enz, frag_map=True, clean=True, nthreads=20,
temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r1_tmp/'.format(r_enz))
# for the second side of the reads
full_mapping(gem_index_path='/media/storage/db/reference_genome/Homo_sapiens/hg38/hg38.gem',
out_map_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2/'.format(r_enz),
fastq_path='/media/storage/FASTQs/K562_%s_2.fastq' % (r_enz),
r_enz=r_enz, frag_map=True, clean=True, nthreads=20,
temp_dir='results/fragment/{0}/01_mapping/mapped_{0}_r2_tmp/'.format(r_enz))
Explanation: Fragment-based mapping
With fragment-based mapping it would be:
End of explanation |
5,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="top"></a>
<div style="width
Step1: <a href="#top">Top</a>
<hr style="height
Step2: We can also find the list of datasets within a time range
Step3: Exercise
Starting from http
Step4: Solution
Step5: <a href="#top">Top</a>
<hr style="height
Step6: We can ask Siphon to download the file locally
Step7: Or better yet, get a file-like object that lets us read from the file as if it were local
Step8: This is handy if you have Python code to read a particular format.
It's also possible to get access to the file through services that provide netCDF4-like access, but for the remote file. This access allows downloading information only for variables of interest, or for (index-based) subsets of that data
Step9: By default this uses CDMRemote (if available), but it's also possible to ask for OPeNDAP (using netCDF4-python). | Python Code:
from datetime import datetime, timedelta
from siphon.catalog import TDSCatalog
date = datetime.utcnow() - timedelta(days=1)
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/nexrad/level3/'
f'N0Q/LRX/{date:%Y%m%d}/catalog.xml')
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Siphon Overview</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="https://unidata.github.io/siphon/latest/_static/siphon_150x150.png" alt="TDS" style="height: 200px;"></div>
Overview:
Teaching: 10 minutes
Exercises: 10 minutes
Questions
What is a THREDDS Data Server (TDS)?
How can I use Siphon to access a TDS?
Objectives
<a href="#threddsintro">Use siphon to access a THREDDS catalog</a>
<a href="#filtering">Find data within the catalog that we wish to access</a>
<a href="#dataaccess">Use siphon to perform remote data access</a>
<a name="threddsintro"></a>
1. What is THREDDS?
Server for providing remote access to datasets
Variety of services for accesing data:
HTTP Download
Web Mapping/Coverage Service (WMS/WCS)
OPeNDAP
NetCDF Subset Service
CDMRemote
Provides a more uniform way to access different types/formats of data
THREDDS Demo
http://thredds.ucar.edu
THREDDS Catalogs
XML descriptions of data and metadata
Access methods
Easily handled with siphon.catalog.TDSCatalog
End of explanation
request_time = date.replace(hour=18, minute=30, second=0, microsecond=0)
ds = cat.datasets.filter_time_nearest(request_time)
ds
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="filtering"></a>
2. Filtering data
We could manually figure out what dataset we're looking for and generate that name (or index). Siphon provides some helpers to simplify this process, provided the names of the dataset follow a pattern with the timestamp in the name:
End of explanation
datasets = cat.datasets.filter_time_range(request_time, request_time + timedelta(hours=1))
print(datasets)
Explanation: We can also find the list of datasets within a time range:
End of explanation
# YOUR CODE GOES HERE
Explanation: Exercise
Starting from http://thredds.ucar.edu/thredds/catalog/satellite/SFC-T/SUPER-NATIONAL_1km/catalog.html, find the composites for the previous day.
Grab the URL and create a TDSCatalog instance.
Using Siphon, find the data available in the catalog between 12Z and 18Z on the previous day.
End of explanation
# %load solutions/datasets.py
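# One possible sketch of a solution (this is NOT the actual contents of
# solutions/datasets.py; the catalog URL comes from the exercise text above,
# and the variable names cat_sat/sat_datasets are illustrative only):
cat_sat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/satellite/'
                     'SFC-T/SUPER-NATIONAL_1km/catalog.xml')
sat_datasets = cat_sat.datasets.filter_time_range(
    date.replace(hour=12, minute=0, second=0, microsecond=0),
    date.replace(hour=18, minute=0, second=0, microsecond=0))
print(sat_datasets)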
Explanation: Solution
End of explanation
ds = datasets[0]
Explanation: <a href="#top">Top</a>
<hr style="height:2px;">
<a name="dataaccess"></a>
3. Accessing data
Accessing catalogs is only part of the story; Siphon is much more useful if you're trying to access/download datasets.
For instance, using our data that we just retrieved:
End of explanation
ds.download()
import os; os.listdir()
Explanation: We can ask Siphon to download the file locally:
End of explanation
fobj = ds.remote_open()
data = fobj.read()
print(len(data))
Explanation: Or better yet, get a file-like object that lets us read from the file as if it were local:
End of explanation
nc = ds.remote_access()
Explanation: This is handy if you have Python code to read a particular format.
It's also possible to get access to the file through services that provide netCDF4-like access, but for the remote file. This access allows downloading information only for variables of interest, or for (index-based) subsets of that data:
End of explanation
print(list(nc.variables))
Explanation: By default this uses CDMRemote (if available), but it's also possible to ask for OPeNDAP (using netCDF4-python).
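For example, asking for OPeNDAP explicitly would look something like this (a sketch; it assumes the netCDF4 package is installed):
```python
nc_dap = ds.remote_access(service='OPENDAP')
```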
End of explanation |
5,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo integration
Monte Carlo is the simplest of all collocation methods.
It consist of the following steps
Step1: Then we generate samples from the three schemes
Step2: From the three plots above it is easy to see both how the Sobol sequence have
more structure, and the antithetic variate have observable symmetries.
Evaluating model solver
Like in the case of problem formulation again,
evaluation is straight forward
Step3: Error analysis
Having a good estimate on the statistical properties allows us to asses the
properties of the uncertainty in the model. However, it does not allow us to
assess the accuracy of the methods used. To do that we need to compare the
statistical metrics with their analytical counterparts. To do so, we use the
reference analytical solution and error function as defined in problem
formulation.
Step4: Here we see that for our little problem, all new schemes outperforms
classical random samples with Sobol on top, followed by Halton and antithetic
variate.
For the error in variance estimation we have | Python Code:
from problem_formulation import joint
joint
Explanation: Monte Carlo integration
Monte Carlo is the simplest of all collocation methods.
It consists of the following steps:
Generate (pseudo-)random samples $Q_1, ..., Q_N$.
Evaluate model solver $U_1=u(Q_1), ..., U_N=u(Q_N)$ for each sample.
Use empirical metrics to assess statistics on the evaluations.
This was the approach introduced in problem
formulation, and we shall go through it again,
but in a bit more detail, and leveraging some of the features introduced in
the section quasi_random_samples.
Generating samples
The samples that shall be used in Monte Carlo must be assumed to behave as if
drawn from the probability distribution of the model parameters one wants to
model. In the case of problem formulation, the
samples drawn were random. Here we shall replace the random samples with
variance reduced samples from the following three schemes:
Sobol
Antithetic variate
Halton
We start by generating samples from the distribution of interest:
End of explanation
sobol_samples = joint.sample(10000, rule="sobol")
antithetic_samples = joint.sample(10000, antithetic=True, seed=1234)
halton_samples = joint.sample(10000, rule="halton")
from matplotlib import pyplot
pyplot.rc("figure", figsize=[16, 4])
pyplot.subplot(131)
pyplot.scatter(*sobol_samples[:, :1000])
pyplot.title("sobol")
pyplot.subplot(132)
pyplot.scatter(*antithetic_samples[:, :1000])
pyplot.title("antithetic variates")
pyplot.subplot(133)
pyplot.scatter(*halton_samples[:, :1000])
pyplot.title("halton")
pyplot.show()
Explanation: Then we generate samples from the three schemes:
End of explanation
from problem_formulation import model_solver, coordinates
import numpy
sobol_evals = numpy.array([
model_solver(sample) for sample in sobol_samples.T])
antithetic_evals = numpy.array([
model_solver(sample) for sample in antithetic_samples.T])
halton_evals = numpy.array([
model_solver(sample) for sample in halton_samples.T])
pyplot.subplot(131)
pyplot.plot(coordinates, sobol_evals[:100].T, alpha=0.3)
pyplot.title("sobol")
pyplot.subplot(132)
pyplot.plot(coordinates, antithetic_evals[:100].T, alpha=0.3)
pyplot.title("antithetic variate")
pyplot.subplot(133)
pyplot.plot(coordinates, halton_evals[:100].T, alpha=0.3)
pyplot.title("halton")
pyplot.show()
Explanation: From the three plots above it is easy to see both how the Sobol sequence has
more structure, and how the antithetic variates have observable symmetries.
Evaluating model solver
Again, as in the case of problem formulation,
evaluation is straightforward:
End of explanation
from problem_formulation import error_in_mean, indices, eps_mean
eps_sobol_mean = [error_in_mean(
numpy.mean(sobol_evals[:idx], 0)) for idx in indices]
eps_antithetic_mean = [error_in_mean(
numpy.mean(antithetic_evals[:idx], 0)) for idx in indices]
eps_halton_mean = [error_in_mean(
numpy.mean(halton_evals[:idx], 0)) for idx in indices]
pyplot.rc("figure", figsize=[6, 4])
pyplot.semilogy(indices, eps_mean, "r", label="random")
pyplot.semilogy(indices, eps_sobol_mean, "-", label="sobol")
pyplot.semilogy(indices, eps_antithetic_mean, ":", label="antithetic")
pyplot.semilogy(indices, eps_halton_mean, "--", label="halton")
pyplot.legend()
pyplot.show()
Explanation: Error analysis
Having a good estimate of the statistical properties allows us to assess the
properties of the uncertainty in the model. However, it does not allow us to
assess the accuracy of the methods used. To do that we need to compare the
statistical metrics with their analytical counterparts. To do so, we use the
reference analytical solution and error function as defined in problem
formulation.
End of explanation
from problem_formulation import error_in_variance, eps_variance
eps_halton_variance = [error_in_variance(
numpy.var(halton_evals[:idx], 0)) for idx in indices]
eps_sobol_variance = [error_in_variance(
numpy.var(sobol_evals[:idx], 0)) for idx in indices]
eps_antithetic_variance = [error_in_variance(
numpy.var(antithetic_evals[:idx], 0)) for idx in indices]
pyplot.semilogy(indices, eps_variance, "r", label="random")
pyplot.semilogy(indices, eps_sobol_variance, "-", label="sobol")
pyplot.semilogy(indices, eps_antithetic_variance, ":", label="antithetic")
pyplot.semilogy(indices, eps_halton_variance, "--", label="halton")
pyplot.legend()
pyplot.show()
Explanation: Here we see that for our little problem, all new schemes outperform
classical random samples with Sobol on top, followed by Halton and antithetic
variate.
For the error in variance estimation we have:
End of explanation |
5,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What questions would you have about this data?
Step1: How would you encode categorical data such a carrier, day of week and origin airport as numerical features?
Step2: Features
Step3: Unique Carrier One-Hot Feature
Step4: Day Of Week One-Hot Feature
Step5: Time of Day Bucket Feature | Python Code:
df.shape
len(set(df.Origin))
df.FlightDate.min()
df.FlightDate.max()
df.DepTime.count()
df.DepTime.dropna().describe()
needed_columns = ['Year',
'Quarter',
'Month',
'DayofMonth',
'DayOfWeek',
'FlightDate',
'UniqueCarrier',
'Origin',
'OriginCityName',
'Dest',
'DestCityName',
'CRSDepTime',
'DepTime',
'DepDelay',
'DepDelayMinutes',
'DepDel15',
'DepartureDelayGroups',
'DepTimeBlk',
'CRSArrTime',
'ArrTime',
'ArrDelay',
'ArrDelayMinutes',
'ArrDel15',
'ArrTimeBlk',
'Cancelled',
'Diverted',
'CRSElapsedTime',
'ActualElapsedTime',
'Distance',
'DistanceGroup',
]
# Percentage of flights. 1=Monday, 2=Tuesday, etc.
df[['DayOfWeek', 'DepDel15']].groupby('DayOfWeek').mean()
df[['UniqueCarrier', 'DepDelayMinutes']].groupby('UniqueCarrier').count()
# Mean minutes of delay by carrier
df[['UniqueCarrier', 'DepDelayMinutes']].groupby('UniqueCarrier').mean()
CARRIERS = {
'AA': 'American',
'AS': 'Alaska',
'B6': 'Jet Blue',
'DL': 'Delta',
'EV': 'Express Jet',
'F9': 'Frontier',
'HA': 'Hawaiian',
'NK': 'Spirit',
'OO': 'SkyWest',
'UA': 'United',
'VX': 'Virgin',
'WN': 'Southwest'
}
df[['Origin', 'DepDel15']].groupby('Origin').mean()
Explanation: What questions would you have about this data?
End of explanation
# Percent of flights arriving within 15 minute of time by origin
mean_dep_delay15 = df[['Origin', 'DepDel15']].groupby('Origin').mean()
len(set(df.Origin))
for x in df.Origin:
break
import numpy as np
Explanation: How would you encode categorical data such as carrier, day of week and origin airport as numerical features?
End of explanation
match = [x==y for (x,y) in zip((df.DepDelayMinutes >= 15), df.DepDel15)]
quantiles = [0] + list(np.percentile(mean_dep_delay15, [20,40,60,80])) + [1.1]
quantiles
origin_groups = []
for (low, high) in list(zip(quantiles, quantiles[1:])):
origin_groups.append(
set(mean_dep_delay15[(mean_dep_delay15 >= low) & (mean_dep_delay15 < high)].dropna().index)
)
[len(x) for x in origin_groups]
for i, group in enumerate(origin_groups):
df['OriginGroup%s' % i] = [int(o in group) for o in df.Origin]
Explanation: Features:
Origin group
UniqueCarrier one hot encoding
Day of week one hot encoding
Time of day bucket
Objective function:
DepDelay15 (Whether or not the flight will be delayed by 15 minutes or more
Origin Groups
End of explanation
unique_carriers = list(set(df.UniqueCarrier))
for carrier in unique_carriers:
df['Carrier%s' % carrier] = [int(x == carrier) for x in df.UniqueCarrier]
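# Note (sketch, not used below): pandas can build the same kind of indicator
# columns directly, e.g. pd.get_dummies(df.UniqueCarrier, prefix='Carrier'),
# although the resulting column names would be 'Carrier_AA' rather than the
# 'CarrierAA' style created here.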
Explanation: Unique Carrier One-Hot Feature
End of explanation
days_of_week = sorted(list(set(df.DayOfWeek)))
for dow in days_of_week:
df['DayOfWeek%s' % dow] = [int(x == dow) for x in df.DayOfWeek]
Explanation: Day Of Week One-Hot Feature
End of explanation
import matplotlib.pyplot as plt
clean_df = df[['DepTime', 'DepDelayMinutes']].dropna()
plt.scatter(x=clean_df.DepTime.iloc[:5000], y = clean_df.DepDelayMinutes.iloc[:5000])
thresholds = [-1, 400, 800, 1200, 1600, 2000, 2401]
buckets = []
for i, (min_time, max_time) in enumerate(list(zip(thresholds, thresholds[1:]))):
df["DepTimeBucket%s" % i] = ((df.DepTime >= min_time) & (df.DepTime < max_time)).astype(int)
features = (['OriginGroup%s' % i for i in range(5)] +
['Carrier%s' % carrier for carrier in unique_carriers] +
['DayOfWeek%s' % dow for dow in days_of_week] +
['DepTimeBucket%s' % i for i in range(6)]
)
from sklearn.linear_model import LogisticRegression, LinearRegression
model = LogisticRegression()
clean_df = df[features + ['DepDelayMinutes']].dropna()
clean_df['Delayed'] = [int(x >= 15) for x in clean_df.DepDelayMinutes]
train_size = int(len(clean_df) * 0.7)
model.fit(clean_df[features].iloc[:train_size], clean_df.Delayed.iloc[:train_size])
predictions = model.predict(clean_df[features].iloc[train_size:])
actuals = clean_df.Delayed.iloc[train_size:]
origin_groups
from sklearn.metrics import roc_curve, auc
predict_probs = [tpl[1] for tpl in model.predict_proba(clean_df[features].iloc[train_size:])]
# Compute micro-average ROC curve and ROC area
fpr, tpr, _ = roc_curve(actuals, predict_probs)
import matplotlib.pyplot as plt
#Plot of a ROC curve for a specific class
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
auc(fpr, tpr)
model.intercept_
coefs = dict(list(zip(features, model.coef_[0])))
coefs
Explanation: Time of Day Bucket Feature:
End of explanation |
5,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstrate linear inverse model for the heat budget
Horizontal heat transports are non-linear terms if one assume that both temperatures and velocities have to be optimized. In order to keep the model as simple as possible, we hypothesized that only velocities require optimization. In other word, we assumed that between temperatures and circulation, the former is the best known. Doing so, heat transport terms are linear with regard to the optimization procedure.
Inversion procedure used to optimized parameters of the model
The procedure presented here is for a linear model, the reader is referred to \citet{tarantola-1982} and \citet{mercier-1986} for further details on a non-linear formulation.
Let $\mathbf{X}={X^1,...,X^M}$ refers to the finite set of $M$ parameters needed to describe the system such as velocity, fluxes or tracer concentrations. A physical model will impose $N$ constraints on the possible values of $\mathbf{X}$ which can take the functional form
Step1: Define problem variables
State vector $\mathbf{X}$ and its a priori errors $\mathbf{E}_0$
Volume fluxes are in Sverdrup, positive eastward & northward
Step2: Model matrix $\mathbf{A}$, jacobian and constraint errors $\mathbf{E}_c$
Step3: Solve the inverse problem
Step4: Compare a priori with optimised state | Python Code:
import numpy as np
Explanation: Demonstrate linear inverse model for the heat budget
Horizontal heat transports are non-linear terms if one assume that both temperatures and velocities have to be optimized. In order to keep the model as simple as possible, we hypothesized that only velocities require optimization. In other word, we assumed that between temperatures and circulation, the former is the best known. Doing so, heat transport terms are linear with regard to the optimization procedure.
Inversion procedure used to optimized parameters of the model
The procedure presented here is for a linear model, the reader is referred to \citet{tarantola-1982} and \citet{mercier-1986} for further details on a non-linear formulation.
Let $\mathbf{X}={X^1,...,X^M}$ refers to the finite set of $M$ parameters needed to describe the system such as velocity, fluxes or tracer concentrations. A physical model will impose $N$ constraints on the possible values of $\mathbf{X}$ which can take the functional form:
\begin{eqnarray}
f^1(X^1,...,X^M) &=& 0 \
f^2(X^1,...,X^M) &=& 0 \
...&& \
f^N(X^1,...,X^M) &=& 0
\end{eqnarray}
Let $\mathbf{X_0}$ be an \apri state of information of the model parameters $\mathbf{X}$ and $\mathbf{E_0}$ the associated error covariance matrix. We refer to the information after inversion as the \apost or optimized state. The constraints take values $f(\mathbf{X})$ at $\mathbf{X}$ and their error covariance matrix is denoted as $\mathbf{E_c}$. The optimization procedure minimizes the following function:
\begin{equation}
(\mathbf{X-X_0})^T \cdot \mathbf{E_0}^{-1} \cdot (\mathbf{X-X_0})
+ f(\mathbf{X})^T \cdot \mathbf{E_c}^{-1} \cdot f(\mathbf{X}) \label{eq:costfunction}
\end{equation}
where the subscript $T$ indicates a transpose operator. The first term is the squared distance between the \apri and \apost estimates of the parameters while the second term is the constraints residual weighted by their errors.
The best estimate $\mathbf{X^}$ and its error covariance matrix $\mathbf{E^}$ are given uniquely by:
\begin{eqnarray}
\mathbf{X^} &=& \mathbf{X_0} - \mathbf{Q} \cdot f(\mathbf{X_0}) \label{eq:Xstar} \
\mathbf{E^} &=& \mathbf{E_0} - \mathbf{Q} \cdot \mathbf{F} \cdot \mathbf{E_0} \label{eq:Estar}
\end{eqnarray}
where $\mathbf{F}$ is the model matrix of partial derivatives (model jacobian)
\begin{eqnarray}
\mathbf{F}^{ij} = \frac{\partial f^i}{\partial \mathbf{X}^j}
\end{eqnarray}
and the matrix $\mathbf{Q}$ is given by:
\begin{eqnarray}
\mathbf{Q} &=& \mathbf{E_0} \cdot \mathbf{F}^T \cdot \left( \mathbf{F} \cdot \mathbf{E_0} \cdot \mathbf{F}^T + \mathbf{E_c} \right)^{-1} \label{eq:Q}
\end{eqnarray}
Our model
The volume conservation equation is given by the domain convergence of volume fluxes:
\begin{eqnarray}
\nabla U^i + \nabla U_e &=& r\
U^i &=& U^i_g + U^i_c
\end{eqnarray}
where the $U^i$ are geostrophic volume fluxes ($m^3/s$) through each faces of the domain, $U_e$ Ekman volume fluxes and $r$ the residuals. The total geostrophic terms are decomposed into an observation-based estime $U^i_g$ and a correction term $U^i_c$ so that the first model constraint is:
\begin{eqnarray}
\nabla U^i_c &=& r - \nabla U_e - \nabla U^i_g
\end{eqnarray}
Then for the heat budget, the domain heat content rate of change is:
\begin{eqnarray}
\partial_t OHC + \nabla U^i \theta^i + \nabla U_e \theta_s &=& \frac{Q_{net}}{\rho_0 C_p}
\end{eqnarray}
so that in terms of volume flux constraints it can be re-written as:
\begin{eqnarray}
\nabla U^i_c \theta^i &=& \frac{Q_{net}}{\rho_0 C_p} - \nabla U_e \theta_s - \partial_t OHC - \nabla U^i_g \theta^i\
\end{eqnarray}
The model is linear and simple: $f(\mathbf{X}) = \mathbf{A}\mathbf{X}$ in matrix form. When constraint should be satisfied:
\begin{eqnarray}
\mathbf{A} \mathbf{X} &=& 0
\end{eqnarray}
with the (2,4) model matrix $\mathbf{A}$ :
\begin{eqnarray}
\mathbf{A} &=
\begin{pmatrix}
1 & -1 & -1 & 1 \
\theta^w & -\theta^e & -\theta^n & \theta^s
\end{pmatrix}
\end{eqnarray}
and the (4,1) state vector $\mathbf{X}$:
\begin{eqnarray}
\mathbf{X} &=
\begin{pmatrix}
U_c^w \ U_c^e \ U_c^s \ U_c^n
\end{pmatrix}
\end{eqnarray}
where volume fluxes are positive eastward or northward.
The model jacobian $\mathbf{F}$ is in this trivial linear case simply given by $\mathbf{A}$.
The a priori constraint residual is thus given by:
\begin{eqnarray}
\mathbf{R} &=
\begin{pmatrix}
r - \nabla U_e - \nabla U^i_g \
\frac{Q_{net}}{\rho_0 C_p} - \nabla U_e \theta_s - \partial_t OHC - \nabla U^i_g \theta^i
\end{pmatrix}
\end{eqnarray}
So that the a priori (and unsatisfied) model state constraints are:
\begin{eqnarray}
\mathbf{A} \mathbf{X_0} &=& \mathbf{R}
\end{eqnarray}
Once optimised, we'll get $U^i_c$ to put this down to 0:
\begin{eqnarray}
\mathbf{A} \mathbf{X^} &=& 0
\end{eqnarray}
where $\mathbf{X^*}$ was given in the 1st section.
Matlab code
The point is to convert to Python this piece of Matlab code from my toolbox, tia_nl.m:
%-- SOLVE A LINEAR MODEL
disp('TIA_NL MESSAGE: THIS MODEL IS LINEAR')
jac = jacob(Jeval,X0,varargin{:});
Q = E0 * jac' * inv(jac * E0 * jac' + Ec);
xs = x0 - Q*f(C,X0,varargin{:}); % Optimized parameters
es = diag(E0 - Q * jac * E0); % Errors
Xstar = X0;
for ix = 1 : size(Xstar,1)
Xstar(ix,3) = {xs(ix)};
Xstar(ix,4) = {sqrt(es(ix))};
end%for ix
Xstar_lin = Xstar;
k = 1;
% EVALUTE COST FUNCTION TERMS:
cost(1,1) = (xs-x0)' * inv(E0) * (xs-x0);
cost(1,2) = f(Meval,Xstar,varargin{:})' * inv(Ec) * f(Meval,Xstar,varargin{:});
End of explanation
X = np.zeros((4,1))
# X[:,0] = [43,14,-13,18] # West, East, South, North
X[:,0] = [1,1,-1,1] # West, East, South, North
print("State vector has shape:", X.shape, "and values:") # must be 4,1
print(X)
# Set errors to a % relative value:
s = 20/100. # 20% error on all volume fluxes
E = np.zeros((4,4))
for i in range(0,4):
E[i,i] = (s*np.abs(X[i,0]))**2 # Take the square because E is the error co-variance matrix and it scales as X^2
print("State vector co-variance error matrix has shape:", E.shape, "and values:") # must be 4,4
print(E)
Explanation: Define problem variables
State vector $\mathbf{X}$ and its a priori errors $\mathbf{E}_0$
Volume fluxes are in Sverdrup, positive eastward & northward
End of explanation
# Mean face temperatures:
Temp = np.zeros((4,1))
Temp[:,0] = [16.1,13.5,16.4,9.] # West, East, South, North
print("Temperature vector has shape:", Temp.shape, "and values:") # must be 4,1
print(Temp)
# Model matrix:
A = np.zeros((2,4))
A[0,:] = [1,-1,-1,1] # Volume constraint (Sv)
A[1,:] = np.diag(Temp*np.array(([1,-1,-1,1]))) # Heat constraint (Sv*degC)
print("Model matrix A has shape:", A.shape, "and values:") # must be 2,4
print(A)
# A priori constraint residuals:
print("A priori constraint residuals:\n", A.dot(X))
# Constraint errors:
Ec = np.zeros((2,2))
Ec[0,0] = 1**2 # Volume constraint residual squared error in [Sv]^2
Ec[1,1] = 10**2 # Heat constraint residual squared error in [Sv*degC]^2
print("Constraints error co-variance matrix has shape:", Ec.shape, "and values:") # must be 2,2
print(Ec)
Explanation: Model matrix $\mathbf{A}$, jacobian and constraint errors $\mathbf{E}_c$
End of explanation
# Put solver into a function
def tia_lin(X0,E0,Ec,A):
# Solve:
J = A # The model jacobian is A in the linear case
Q = (E0.dot(J.T)).dot( np.linalg.inv((J.dot(E0)).dot(J.T) + Ec) )
# Final state:
Xs = X0 - Q.dot(A.dot(X0))
Es = E0 - (Q.dot(J)).dot(E0)
return Xs,Es
# Solve it:
X0 = X
E0 = E
Xs, Es = tia_lin(X0,E0,Ec,A)
Explanation: Solve the inverse problem
End of explanation
# State vectors:
print("Initial state vector:\n",X0.T)
print("Optimised state vector:\n",Xs.T)
# State vectors error:
print("Initial state vector errors:\n",np.sqrt(np.diag(E0)))
print("Optimised state vector errors:\n",np.sqrt(np.diag(Es)))
# Constraint residuals:
print("A priori constraint residuals:\n", A.dot(X0))
print("Optimised constraint residuals:\n",A.dot(Xs))
Explanation: Compare a priori with optimised state
End of explanation |
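For completeness, the Matlab snippet above also evaluated the two cost-function terms, which the Python version does not. Here is a hedged sketch of the equivalent, reusing X0, E0, Ec, A and Xs defined above (the variable names cost_state and cost_constraints are mine):
# A posteriori cost terms, mirroring cost(1,1) and cost(1,2) in the Matlab snippet
dx = Xs - X0                                                      # correction to the first guess
cost_state = float(dx.T.dot(np.linalg.inv(E0)).dot(dx))          # (xs - x0)' * inv(E0) * (xs - x0)
res = A.dot(Xs)                                                   # constraint residuals after optimisation
cost_constraints = float(res.T.dot(np.linalg.inv(Ec)).dot(res))   # f(X*)' * inv(Ec) * f(X*)
print("Cost terms (state, constraints):", cost_state, cost_constraints)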
5,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring QVEC
I want to spend some time now looking closer at QVEC's output, namely the correlations and the alignment matrix. The second main point of the original paper is that the alignments allow you to interpret individual dimensions of embeddings.
Step1: Linguistic features
Step2: Learnt embeddings
Step4: QVEC model
Step5: Exploration
What dimensions and features are aligned?
The dataframe below is as follows
Step7: What is QVEC doing?
QVEC is looking at 41 correlation coefficients (or as many linguistic features as there are) and finding the maximum. Here, I show the relevant scatterplot for the highest correlation.
A consistent observation is that the distributions of the linguistic features are strongly peaked at 0. That is, almost all words have 0 for most features. Sometimes, there is some mass at 1. This suggests to me that the linguistic features being used are not appropriate.
Step8: What do the learnt embeddings look like?
In sum
Step9: Graphical test of normality
I'm plotting a QQ plot and a probability plot side by side.
Step10: KS test
The preliminary results suggest that some dimensions are not normally distributed.
The KS test is clear, but I have some uncertainty about how to use it in scipy. In particular, do I give it the std or var of the distribution being tested?
Step11: Shapiro-Wilk test
Step12: Lilliefors test
Step13: Location & spread of each dimension
Step14: What does each learnt embedding look like?
In sum
Step15: Location & spread of each word embedding
Step16: What do the features look like?
The features are strongly bimodal. The usual summary statistics of mean, median and std are not appropriate for bimodal distributions.
If you were to randomly select a word, on average its feature representation would have 1.3% for an animal noun.
Step17: What proportion of words in the vocab have a non-zero value for each feature?
On average across all 41 features, 6% of words have a nonzero entry for features. The highest proportion is 21%.
Step18: How can I actually measure an association between linguistic features and learnt dimensions?
Knowing that the dimensions of the learnt embeddings are normally distributed and that the features are strongly bimodal, what is the best way to measure their correlation? It's clear that Pearson's $r$ and Spearman's $\rho$ are not appropriate because of the high number of ties.
I see two broad approaches
Step19: The following dimensions and features were aligned previously
Step23: Treat linguistic features as binary
In sum
Step24: This dimension-feature pair was aligned in the original method using correlation between raw values. I see good separation between the distributions, which is consistent with a relationship between the variables. The t test result strongly suggests the population means are different (but I can see that from the plot).
Step25: This dimension-feature pair is weakly negatively correlated using the original method ($r=-0.06$). Consistent with that, the distributions overlap a lot. However, a t test gives a small p value, suggesting the population means are different.
Step26: This is the distribution of p values from a t test for all dimension-feature pairs. I think it shows the inappropriateness of a t test more than anything else.
Step27: How sparse is the feature matrix?
93% of the entries in the feature matrix are zero.
Step28: How correlated are the features and learnt dimensions?
This plot says that the correlations are normally distributed around 0.
Step29: The following summary shows that the $(41*300)$ correlations are centered at 0 with std 0.05. The largest is 0.32, and the smallest is -0.30. These don't seem like very high numbers. The 75th percentile is 0.03. By looking back at the histogram above, it's obvious that these dimensions are not highly correlated with these linguistic features.
Another important point from this is that the distribution of correlations is symmetric. So using the max, rather than the absolute maximum, seems arbitrary. It doesn't allow for reversed dimensions.
Step30: What do the maximum correlations look like?
These are all positive. But are they different enough from 0? What test can I use here?
Step31: In the heatmap below, nothing really sticks out. It does not look like these features are captured well by these dimensions.
Step32: Which features are not the most correlated with any dimension?
'verb.weather' doesn't seem like a good feature, but the others do. So leaving them out isn't great.
Step35: Top K words
NB | Python Code:
%matplotlib inline
import os
import csv
from itertools import product
import pandas as pd
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
data_path = '../../data'
tmp_path = '../../tmp'
Explanation: Exploring QVEC
I want to spend some time now looking closer at QVEC's output, namely the correlations and the alignment matrix. The second main point of the original paper is that the alignments allow you to interpret individual dimensions of embeddings.
End of explanation
feature_path = os.path.join(data_path, 'evaluation/semcor/tsvetkov_semcor.csv')
subset = pd.read_csv(feature_path, index_col=0)
subset.columns = [c.replace('semcor.', '') for c in subset.columns]
subset.set_index('words', inplace=True)
subset = subset.T
Explanation: Linguistic features
End of explanation
size = 300
fname = 'embeddings/glove.6B.{}d.txt'.format(size)
embedding_path = os.path.join(data_path, fname)
embeddings = pd.read_csv(embedding_path, sep=' ', header=None, index_col=0, quoting=csv.QUOTE_NONE).T
Explanation: Learnt embeddings
End of explanation
def qvec(features, embeddings):
Returns correlations between columns of `features` and `embeddings`.
The aligned feature is the one with the highest correlation.
The qvec score is the sum of correlations of aligned features.
common_words = embeddings.columns.intersection(features.columns)
S = features[common_words]
X = embeddings[common_words]
correlations = pd.DataFrame({i:X.corrwith(S.iloc[i], axis=1) for i in range(len(S))})
correlations.columns = S.index
return correlations
correlations = qvec(subset, embeddings)
V = len(embeddings.columns.intersection(subset.columns))
correlations.head()
Explanation: QVEC model
End of explanation
alignments = pd.DataFrame(correlations.idxmax(axis=1))
alignments.columns = ['feature']
alignments['max_corr'] = correlations.max(axis=1)
alignments.sort_values(by='max_corr', ascending=False).head(10)
Explanation: Exploration
What dimensions and features are aligned?
The dataframe below is as follows: index is the dimension of the learnt embedding, 'feature' is the name of the linguistic feature aligned with that dimension, and 'max_corr' is the correlation between the dimension and feature. The sum of the 'max_corr' column is the qvec score.
39 dimensions pick out 'noun.person', 37 'noun.artifact', 19 'noun.body', 15 'verb.change'.
End of explanation
common_words = embeddings.columns.intersection(subset.columns)
S = subset[common_words]
X = embeddings[common_words]
def plot(i, j, X=X, S=S):
Plot ith dimension of embeddings against feature j.
x = X.loc[i]
s = S.loc[j]
sns.jointplot(x, s);
plot(300,'noun.person')
Explanation: What is QVEC doing?
QVEC is looking at 41 correlation coefficients (or as many linguistic features as there are) and finding the maximum. Here, I show the relevant scatterplot for the highest correlation.
A consistent observation is that the distributions of the linguistic features are strongly peaked at 0. That is, almost all words have 0 for most features. Sometimes, there is some mass at 1. This suggests to me that the linguistic features being used are not appropriate.
End of explanation
sns.distplot(X.loc[89]);
Explanation: What do the learnt embeddings look like?
In sum: each dimension looks pretty normal, but the formal tests I'm using suggest otherwise. Most are centered at 0 with std around 0.4.
From the marginal distribution plots above, it looks like each dimension is normally distributed. I don't know if that's purposively done during training or if it just turns out that way.
End of explanation
fig, axs = plt.subplots(1,2)
vector = X.loc[1]
sm.qqplot(vector, ax=axs[0]);
stats.probplot(vector, plot=axs[1]);
Explanation: Graphical test of normality
I'm plotting a QQ plot and a probability plot side by side.
End of explanation
def do_kstest(i):
vector = X.loc[i]
ybar = vector.mean()
s = vector.std()
result = stats.kstest(vector, cdf='norm', args=(ybar, s))
return result.pvalue
p_values = [do_kstest(i) for i in X.index]
sns.distplot(p_values);
Explanation: KS test
The preliminary results suggest that some dimensions are not normally distributed.
The KS test is clear, but I have some uncertainty about how to use it in scipy. In particular, do I give it the std or var of the distribution being tested?
End of explanation
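To answer the std-vs-var question: scipy passes args straight to the named distribution, and for 'norm' those are loc and scale, where scale is the standard deviation (not the variance), so args=(ybar, s) above should be right. A small hedged check on synthetic data (the sample here is made up purely for illustration):
# kstest against N(mean, std): scale must be the standard deviation, not the variance
sample = np.random.normal(loc=5.0, scale=2.0, size=5000)
print(stats.kstest(sample, 'norm', args=(sample.mean(), sample.std())))  # large p-value expected
print(stats.kstest(sample, 'norm', args=(sample.mean(), sample.var())))  # wrong scale -> tiny p-value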
def do_shapirotest(i):
vector = X.loc[i]
result = stats.shapiro(vector)
return result[1]
p_values = [do_shapirotest(i) for i in X.index]
sns.distplot(p_values);
Explanation: Shapiro-Wilk test
End of explanation
def do_lillieforstest(i):
vector = X.loc[i]
result = sm.stats.lilliefors(vector)
return result[1]
p_values = [do_lillieforstest(i) for i in X.index]
sns.distplot(p_values);
Explanation: Lilliefors test
End of explanation
fig, axs = plt.subplots(1,2)
sns.distplot(X.mean(axis=1), ax=axs[0]);
sns.distplot(X.std(axis=1), ax=axs[1]);
Explanation: Location & spread of each dimension
End of explanation
sns.distplot(X['bird']);
Explanation: What does each learnt embedding look like?
In sum: Centered at 0 with std 0.4, but less clearly normal.
How to answer this effectively?
End of explanation
fig, axs = plt.subplots(1,2)
sns.distplot(X.mean(), ax=axs[0]);
sns.distplot(X.std(), ax=axs[1]);
Explanation: Location & spread of each word embedding
End of explanation
S.mean(axis=1).sort_values(ascending=False).head()
fig, axs = plt.subplots(ncols=4, figsize=(10, 4), sharey=True)
sns.distplot(S.loc['noun.artifact'], ax=axs[0], kde=False);
sns.distplot(S.loc['noun.person'], ax=axs[1], kde=False);
sns.distplot(S.loc['noun.act'], ax=axs[2], kde=False);
sns.distplot(S.loc['noun.communication'], ax=axs[3], kde=False);
Explanation: What do the features look like?
The features are strongly bimodal. The usual summary statistics of mean, median and std are not appropriate for bimodal distributions.
If you were to randomly select a word, on average its feature representation would have 1.3% for an animal noun.
End of explanation
proportions = S.astype(bool).sum(axis=1) / len(S.columns)
print(proportions.sort_values(ascending=False).head())
proportions.describe()
Explanation: What proportion of words in the vocab have a non-zero value for each feature?
On average across all 41 features, 6% of words have a nonzero entry for features. The highest proportion is 21%.
End of explanation
S_no_zeroes = S[(S != 0)]
S_no_zeroes.head()
tmp = qvec(S_no_zeroes, embeddings)
tmp.head()
alignments = pd.DataFrame(tmp.idxmax(axis=1))
alignments.columns = ['feature']
alignments['max_corr'] = tmp.max(axis=1)
alignments.sort_values(by='max_corr', ascending=False).head()
Explanation: How can I actually measure an association between linguistic features and learnt dimensions?
Knowing that the dimensions of the learnt embeddings are normally distributed and that the features are strongly bimodal, what is the best way to measure their correlation? It's clear that Pearson's $r$ and Spearman's $\rho$ are not appropriate because of the high number of ties.
I see two broad approaches:
Remove all 0's and use Pearson's or Spearman's.
Treat the feature as binary and compare means.
In sum: Neither is very insightful. I need to use different (less sparse) features.
Remove 0's
In sum: Removing 0's seems to help, but is not principled. It picks out one extremely rare feature. The fact that removing 0's helps tells me the presence of such rare features is a problem.
I changed the 0's to missing values and then used the usual QVEC code from above. I checked the source of corrwith and it looks like it ignores missing values, which is what I want.
The weird thing is that one feature, 'noun.motive', is the most highly correlated feature for 66 of the 300 dimensions. Most of the most highly correlated features are 'noun.motive'. Previously, it didn't appear at all. There are only 11 nonzero entries for it.
End of explanation
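For the binary-vs-continuous case there is also the point-biserial correlation; here is a hedged sketch using scipy's pointbiserialr on one dimension-feature pair (255 and 'noun.person' are just an illustrative choice taken from the alignments above):
# Point-biserial correlation between a binary feature indicator and one embedding dimension
has_feature = S.loc['noun.person'].astype(bool).astype(int)   # 1 if the word has the feature, else 0
r_pb, p_pb = stats.pointbiserialr(has_feature, X.loc[255])
print('point-biserial r = {:.3f}, p = {:.3g}'.format(r_pb, p_pb))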
plot(122, 'noun.person', S=S_no_zeroes);
plot(255, 'noun.person', S=S_no_zeroes);
Explanation: The following dimensions and features were aligned previously:
- 122 noun.person
- 255 noun.person
- 91 noun.artifact
- 54 noun.person
- 245 noun.person
End of explanation
def do_ttest(i, feature, X=X, S=S):
Do two sample t test for difference of means between the ith dimension
of words with feature and those without.
dim = X.loc[i]
have = S.loc[feature].astype(bool)
result = stats.ttest_ind(dim[have], dim[~have])
return result[1]
def do_2kstest(i, feature, X=X, S=S):
Returns p value from 2 sided KS test that the ith dimension from words with
feature and those from words without feature come from the same distribution.
dim = X.loc[i]
have = S.loc[feature].astype(bool)
result = stats.ks_2samp(dim[have], dim[~have])
return result[1]
def plot_by_presence(i, feature, X=X, S=S):
Plot distribution of the ith dimension of X for those that have
feature and those that don't.
dim = X.loc[i]
have = S.loc[feature].astype(bool)
has_label = feature
has_not_label = 'no {}'.format(feature)
sns.distplot(dim[have], label=has_label);
sns.distplot(dim[~have], label=has_not_label);
t_test = do_ttest(i, feature, X, S)
ks_test = do_2kstest(i, feature, X, S)
plt.legend();
plt.title('t: {}\nks: {}'.format(t_test, ks_test))
Explanation: Treat linguistic features as binary
In sum: I binarize S, so the linguistic features are now presence/absence. For each dimension-feature pair, you can look at the distribution of the dimension for words with and without that feature. Dimension-features that are aligned using the original method show separation. But quantifying the separation across all dimension-feature pairs is problematic, using either t-test or KS test. You get seemingly significant results for unaligned pairs. I have not done any multiple test corrections. This approach does not seem as promising as changing the features to less sparse ones.*
One suggestion here is to compare the means directly. Below, I plot the distribution of a dimension split by words that have that feature and those that don't. For dimensions and features that were identified above as aligned, this plot shows some good separation. For non-aligned dimension-feature pairs, there is no separation. However, I perform a two-tailed t-test for a difference of means between the two. I get "significant" results even when there is no visible difference. Thus, I cannot blindly trust the t test results here. To show this, I perform all $(41 \times 300)$ t tests and plot the p values. The plot suggests most pairs are significantly different, in line with my eyeball checks previously.
End of explanation
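Since no multiple-test correction is applied anywhere below, here is a hedged sketch of a Benjamini-Hochberg FDR correction over the family of t-test p values (the list comprehension mirrors the one in the next cell; the variable names pvals and reject are mine):
# Hedged sketch: Benjamini-Hochberg FDR correction for the family of dimension-feature t tests
from statsmodels.stats.multitest import multipletests
pvals = [do_ttest(i, f) for (i, f) in product(X.index, S.index)]
reject, pvals_corrected, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
print('pairs still significant after FDR correction:', reject.sum(), 'of', len(pvals))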
plot_by_presence(255, 'noun.person')
Explanation: This dimension-feature pair was aligned in the original method using correlation between raw values. I see good separation between the distributions, which is consistent with a relationship between the variables. The t test result strongly suggests the population means are different (but I can see that from the plot).
End of explanation
plot_by_presence(1, 'noun.body')
Explanation: This dimension-feature pair is weakly negatively correlated using the original method ($r=-0.06$). Consistent with that, the distributions overlap a lot. However, a t test gives a small p value, suggesting the population means are different.
End of explanation
tmp = [do_ttest(i, f) for (i, f) in product(X.index, S.index)]
sns.distplot(tmp);
tmp = [do_2kstest(i, f) for (i, f) in product(X.index, S.index)]
sns.distplot(tmp);
Explanation: This is the distribution of p values from a t test for all dimension-feature pairs. I think it shows the inappropriateness of a t test more than anything else.
End of explanation
(subset.size - np.count_nonzero(subset.values)) / subset.size
Explanation: How sparse is the feature matrix?
93% of the entries in the feature matrix are zero.
End of explanation
sns.distplot(correlations.values.flatten());
Explanation: How correlated are the features and learnt dimensions?
This plot says that the correlations are normally distributed around 0.
End of explanation
pd.Series(correlations.values.flatten()).describe()
Explanation: The following summary shows that the $(41*300)$ correlations are centered at 0 with std 0.05. The largest is 0.32, and the smallest is -0.30. These don't seem like very high numbers. The 75th percentile is 0.03. By looking back at the histogram above, it's obvious that these dimensions are not highly correlated with these linguistic features.
Another important point from this is that the distribution of correlations is symmetric. So using the max, rather than the absolute maximum, seems arbitrary. It doesn't allow for reversed dimensions.
End of explanation
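Following that observation, a hedged sketch of an alignment based on the absolute correlation instead of the signed maximum, so that reversed dimensions can also be matched (abs_alignments is my name, not used elsewhere in this notebook):
# Align each dimension with the feature of largest |correlation|, keeping the signed value for inspection
abs_alignments = pd.DataFrame({'feature': correlations.abs().idxmax(axis=1)})
abs_alignments['corr'] = [correlations.loc[i, f] for i, f in abs_alignments['feature'].items()]
print(abs_alignments.reindex(abs_alignments['corr'].abs().sort_values(ascending=False).index).head())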
sns.distplot(correlations.max(axis=1));
Explanation: What do the maximum correlations look like?
These are all positive. But are they different enough from 0? What test can I use here?
End of explanation
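One option for the question above is a permutation test: shuffle the word order of a feature, recompute the correlation, and see how often the shuffled value is at least as large as the observed one. A hedged sketch for a single dimension-feature pair (the pair and the names obs, null, p_perm are mine):
# Permutation test for one dimension-feature correlation (dimension 255 vs 'noun.person')
rng = np.random.RandomState(0)
obs = X.loc[255].corr(S.loc['noun.person'])
null = [X.loc[255].corr(pd.Series(rng.permutation(S.loc['noun.person'].values), index=S.columns))
        for _ in range(1000)]
p_perm = (np.sum(np.abs(null) >= abs(obs)) + 1.) / (len(null) + 1.)
print('observed r = {:.3f}, permutation p = {:.3f}'.format(obs, p_perm))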
sns.heatmap(correlations);
Explanation: In the heatmap below, nothing really sticks out. It does not look like these features are captured well by these dimensions.
End of explanation
subset.index.difference(alignments['feature'])
Explanation: Which features are not the most correlated with any dimension?
'verb.weather' doesn't seem like a good feature, but the others do. So leaving them out isn't great.
End of explanation
def highest_value(i, k=20, X=X):
Return the top `k` words with highest values for ith dimension in X.
dim = X.loc[i]
return dim.nlargest(n=k).index
k = 10
largest = pd.DataFrame([highest_value(i, k) for i in alignments.index], index=alignments.index)
top_k = pd.merge(alignments, largest, left_index=True, right_index=True)
top_k.sort_values(by='max_corr', ascending=False).head()
def get_dims(feature, df=top_k):
Return the dimensions aligned with `feature` in `df`.
return df[df['feature']==feature].sort_values(by='max_corr', ascending=False)
get_dims('noun.time').head()
Explanation: Top K words
NB: This stinks with the whole embedding matrix, but looks promising with smaller vocab.
In the paper they give the top K words for a dimension. The code prints, for each dimension, the dimension number, the aligned linguistic feature, the correlation between the two previous things, and the top k words associated with the dimension. I understand the last bit to mean "the k words with the highest value in the dimension".
Importantly, it matters whether you look for the top K words in the whole embedding matrix, or the reduced vocab in the matrix X (the one you have linguistic features for). You get much better results when you use X. Ideally, the method would generalize to the larger vocab. Clearly, X will have words with much higher frequency. This may give more sensible (stable?) results.
How can I assess whether these top K words are "correct" or not?
- Are the top k words of the right POS?
- Look at the smallest values associated with each dimension.
End of explanation |
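To follow up on the last point, a hedged sketch that mirrors highest_value but looks at the smallest values of a dimension (the function name lowest_value is mine):
# Words with the lowest values on a dimension, as a sanity check on the top-k lists
def lowest_value(i, k=10, X=X):
    return X.loc[i].nsmallest(n=k).index
print(lowest_value(255, k=10))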
5,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regressão Linear
Este notebook mostra uma implementação básica de Regressão Linear e o uso da biblioteca MLlib do PySpark para a tarefa de regressão na base de dados Million Song Dataset do repositório UCI Machine Learning Repository. Nosso objetivo é predizer o ano de uma música através dos seus atributos de áudio.
Neste notebook
Step2: (1b) Usando LabeledPoint
Na MLlib, bases de dados rotuladas devem ser armazenadas usando o objeto LabeledPoint. Escreva a função parsePoint que recebe como entrada uma amostra de dados, transforma os dados usandoo comando unicode.split, e retorna um LabeledPoint.
Aplique essa função na variável samplePoints da célula anterior e imprima os atributos e rótulo utilizando os atributos LabeledPoint.features e LabeledPoint.label. Finalmente, calcule o número de atributos nessa base de dados.
Step4: Visualização 1
Step5: (1c) Deslocando os rótulos
Para melhor visualizar as soluções obtidas, calcular o erro de predição e visualizar a relação dos atributos com os rótulos, costuma-se deslocar os rótulos para iniciarem em zero.
Dessa forma vamos verificar qual é a faixa de valores dos rótulos e, em seguida, subtrair os rótulos pelo menor valor encontrado. Em alguns casos também pode ser interessante normalizar tais valores dividindo pelo valor máximo dos rótulos.
Step6: (1d) Conjuntos de treino, validação e teste
Como próximo passo, vamos dividir nossa base de dados em conjunto de treino, validação e teste conforme discutido em sala de aula. Use o método randomSplit method com os pesos (weights) e a semente aleatória (seed) especificados na célula abaixo parar criar a divisão das bases. Em seguida, utilizando o método cache() faça o pré-armazenamento da base processada.
Esse comando faz o processamento da base através das transformações e armazena em um novo RDD que pode ficar armazenado em memória, se couber, ou em um arquivo temporário.
Step7: Part 2
Step10: (2b) Erro quadrático médio
Para comparar a performance em problemas de regressão, geralmente é utilizado o Erro Quadrático Médio (RMSE). Implemente uma função que calcula o RMSE a partir de um RDD de tuplas (rótulo, predição).
Step11: (2c) RMSE do baseline para os conjuntos de treino, validação e teste
Vamos calcular o RMSE para nossa baseline. Primeiro crie uma RDD de (rótulo, predição) para cada conjunto, e então chame a função calcRMSE.
Step12: Visualização 2
Step14: Parte 3
Step16: (3b) Use os pesos para fazer a predição
Agora, implemente a função getLabeledPredictions que recebe como parâmetro o conjunto de pesos e um LabeledPoint e retorna uma tupla (rótulo, predição). Lembre-se que podemos predizer um rótulo calculando o produto interno dos pesos com os atributos.
Step18: (3c) Gradiente descendente
Finalmente, implemente o algoritmo gradiente descendente para regressão linear e teste a função em um exemplo.
Step19: (3d) Treinando o modelo na base de dados
Agora iremos treinar o modelo de regressão linear na nossa base de dados de treino e calcular o RMSE na base de validação. Lembrem-se que não devemos utilizar a base de teste até que o melhor parâmetro do modelo seja escolhido.
Para essa tarefa vamos utilizar as funções linregGradientDescent, getLabeledPrediction e calcRMSE já implementadas.
Step20: Visualização 3
Step21: Part 4
Step22: (4b) Predição
Agora use o método LinearRegressionModel.predict() para fazer a predição de um objeto. Passe o atributo features de um LabeledPoint comp parâmetro.
Step23: (4c) Avaliar RMSE
Agora avalie o desempenho desse modelo no teste de validação. Use o método predict() para criar o RDD labelsAndPreds RDD, e então use a função calcRMSE() da Parte (2b) para calcular o RMSE.
Step24: (4d) Grid search
Já estamos superando o baseline em pelo menos dois anos na média, vamos ver se encontramos um conjunto de parâmetros melhor. Faça um grid search para encontrar um bom parâmetro de regularização. Tente valores para regParam dentro do conjunto 1e-10, 1e-5, e 1.
Step25: Visualização 5
Step26: (4e) Grid Search para o valor de alfa e número de iterações
Agora, vamos verificar diferentes valores para alfa e número de iterações para perceber o impacto desses parâmetros em nosso modelo. Especificamente tente os valores 1e-5 e 10 para alpha e os valores 500 e 5 para número de iterações. Avalie todos os modelos no conjunto de valdação. Reparem que com um valor baixo de alpha, o algoritmo necessita de muito mais iterações para convergir ao ótimo, enquanto um valor muito alto para alpha, pode fazer com que o algoritmo não encontre uma solução.
Step28: Parte 5
Step29: (5b) Construindo um novo modelo
Agora construa um novo modelo usando esses novos atributos. Repare que idealmente, com novos atributos, você deve realizar um novo Grid Search para determinar os novos parâmetros ótimos, uma vez que os parâmetros do modelo anterior não necessariamente funcionarão aqui.
Para este exercício, os parâmetros já foram otimizados.
Step30: (5c) Avaliando o modelo de interação
Finalmente, temos que o melhor modelo para o conjunto de validação foi o modelo de interação. Na prática esse seria o modelo escolhido para aplicar nos modelos não-rotulados. Vamos ver como essa escolha se sairia utilizand a base de teste nesse modelo e no baseline. | Python Code:
sc = SparkContext.getOrCreate()
# carregar base de dados
from test_helper import Test
import os.path
baseDir = os.path.join('Data')
inputPath = os.path.join('millionsong.txt')
fileName = os.path.join(baseDir, inputPath)
numPartitions = 2
rawData = sc.textFile(fileName, numPartitions)
# EXERCICIO
numPoints = rawData.count()
print numPoints
samplePoints = rawData.take(5)
print samplePoints
# TEST Load and check the data (1a)
Test.assertEquals(numPoints, 6724, 'incorrect value for numPoints')
Test.assertEquals(len(samplePoints), 5, 'incorrect length for samplePoints')
Explanation: Linear Regression
This notebook shows a basic implementation of Linear Regression and the use of PySpark's MLlib library for the regression task on the Million Song Dataset from the UCI Machine Learning Repository. Our goal is to predict the year of a song from its audio features.
In this notebook:
Part 1: Reading and parsing the dataset
Visualization 1: Features
Visualization 2: Shifting the target variable
Part 2: Building a baseline predictor
Visualization 3: Predicted vs. actual values
Part 3: Training and evaluating a linear regression model
Visualization 4: Training error
Part 4: Training with MLlib and tuning the hyperparameters
Visualization 5: Predictions of the best model
Visualization 6: Hyperparameter heat map
Part 5: Adding feature interactions
Part 6: Applying the model to the San Francisco Crime dataset
For reference, see the relevant PySpark methods in Spark's Python API and the NumPy methods in the NumPy Reference
Part 1: Reading and parsing the dataset
(1a) Checking the available data
The data we will use is stored in a text file. As a first step we will turn the text data into an RDD and check its formatting. Change the second cell to check how many samples exist in this dataset using the count method.
Note that the label of this dataset is the first field, representing the year.
End of explanation
from pyspark.mllib.regression import LabeledPoint
import numpy as np
# Here is a sample raw data point:
# '2001.0,0.884,0.610,0.600,0.474,0.247,0.357,0.344,0.33,0.600,0.425,0.60,0.419'
# In this raw data point, 2001.0 is the label, and the remaining values are features
# EXERCICIO
def parsePoint(line):
Converts a comma separated unicode string into a `LabeledPoint`.
Args:
line (unicode): Comma separated unicode string where the first element is the label and the
remaining elements are features.
Returns:
LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and
features.
Point = line.split(",")
return LabeledPoint(Point[0], Point[1:])
parsedSamplePoints = map(parsePoint,samplePoints)
firstPointFeatures = parsedSamplePoints[0].features
firstPointLabel = parsedSamplePoints[0].label
print firstPointFeatures, firstPointLabel
d = len(firstPointFeatures)
print d
# TEST Using LabeledPoint (1b)
Test.assertTrue(isinstance(firstPointLabel, float), 'label must be a float')
expectedX0 = [0.8841,0.6105,0.6005,0.4747,0.2472,0.3573,0.3441,0.3396,0.6009,0.4257,0.6049,0.4192]
Test.assertTrue(np.allclose(expectedX0, firstPointFeatures, 1e-4, 1e-4),
'incorrect features for firstPointFeatures')
Test.assertTrue(np.allclose(2001.0, firstPointLabel), 'incorrect label for firstPointLabel')
Test.assertTrue(d == 12, 'incorrect number of features')
Explanation: (1b) Using LabeledPoint
In MLlib, labeled datasets must be stored using the LabeledPoint object. Write the parsePoint function that takes a data sample as input, transforms the data using the unicode.split command, and returns a LabeledPoint.
Apply this function to the samplePoints variable from the previous cell and print the features and label using the LabeledPoint.features and LabeledPoint.label attributes. Finally, compute the number of features in this dataset.
End of explanation
#insert a graphic inline
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
sampleMorePoints = rawData.take(50)
parsedSampleMorePoints = map(parsePoint, sampleMorePoints)
dataValues = map(lambda lp: lp.features.toArray(), parsedSampleMorePoints)
#print dataValues
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
Template for generating the plot layout.
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot
fig, ax = preparePlot(np.arange(.5, 11, 1), np.arange(.5, 49, 1), figsize=(8,7), hideLabels=True,
gridColor='#eeeeee', gridWidth=1.1)
image = plt.imshow(dataValues,interpolation='nearest', aspect='auto', cmap=cm.Greys)
for x, y, s in zip(np.arange(-.125, 12, 1), np.repeat(-.75, 12), [str(x) for x in range(12)]):
plt.text(x, y, s, color='#999999', size='10')
plt.text(4.7, -3, 'Feature', color='#999999', size='11'), ax.set_ylabel('Observation')
pass
Explanation: Visualization 1: Features
The next cell shows one way to visualize the features with a heat map. The map shows the first 50 objects with their features represented in shades of gray, white representing the value 0 and black the value 1.
This kind of visualization helps to notice how the feature values vary.
End of explanation
# EXERCICIO
parsedDataInit = rawData.map(lambda x: parsePoint(x))
onlyLabels = parsedDataInit.map(lambda x: x.label)
minYear = onlyLabels.min()
maxYear = onlyLabels.max()
print maxYear, minYear
# TEST Find the range (1c)
Test.assertEquals(len(parsedDataInit.take(1)[0].features), 12,
'unexpected number of features in sample point')
sumFeatTwo = parsedDataInit.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedDataInit has unexpected values')
yearRange = maxYear - minYear
Test.assertTrue(yearRange == 89, 'incorrect range for minYear to maxYear')
# Debug
parsedDataInit.take(1)
# EXERCICIO
parsedData = parsedDataInit.map(lambda x: LabeledPoint(x.label - minYear, x.features))
# Should be a LabeledPoint
print type(parsedData.take(1)[0])
# View the first point
print '\n{0}'.format(parsedData.take(1))
# TEST Shift labels (1d)
oldSampleFeatures = parsedDataInit.take(1)[0].features
newSampleFeatures = parsedData.take(1)[0].features
Test.assertTrue(np.allclose(oldSampleFeatures, newSampleFeatures),
'new features do not match old features')
sumFeatTwo = parsedData.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedData has unexpected values')
minYearNew = parsedData.map(lambda lp: lp.label).min()
maxYearNew = parsedData.map(lambda lp: lp.label).max()
Test.assertTrue(minYearNew == 0, 'incorrect min year in shifted data')
Test.assertTrue(maxYearNew == 89, 'incorrect max year in shifted data')
Explanation: (1c) Shifting the labels
To better visualize the obtained solutions, compute the prediction error and inspect how the features relate to the labels, it is common to shift the labels so that they start at zero.
So we will check the range of the label values and then subtract the smallest value from every label. In some cases it may also be useful to normalize these values by dividing by the maximum label, as sketched below.
End of explanation
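A hedged sketch of that optional normalization step (not used in the rest of the notebook; parsedDataNorm and maxShifted are my names, while parsedData, maxYear and minYear come from the cell above):
# Optional: rescale the shifted labels to [0, 1] by dividing by the maximum shifted label
maxShifted = maxYear - minYear
parsedDataNorm = parsedData.map(lambda lp: LabeledPoint(lp.label / float(maxShifted), lp.features))
print parsedDataNorm.take(1)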
# EXERCICIO
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = parsedData.randomSplit(weights, seed)
parsedTrainData.cache()
parsedValData.cache()
parsedTestData.cache()
nTrain = parsedTrainData.count()
nVal = parsedValData.count()
nTest = parsedTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print parsedData.count()
# TEST Training, validation, and test sets (1e)
Test.assertEquals(parsedTrainData.getNumPartitions(), numPartitions,
'parsedTrainData has wrong number of partitions')
Test.assertEquals(parsedValData.getNumPartitions(), numPartitions,
'parsedValData has wrong number of partitions')
Test.assertEquals(parsedTestData.getNumPartitions(), numPartitions,
'parsedTestData has wrong number of partitions')
Test.assertEquals(len(parsedTrainData.take(1)[0].features), 12,
'parsedTrainData has wrong number of features')
sumFeatTwo = (parsedTrainData
.map(lambda lp: lp.features[2])
.sum())
sumFeatThree = (parsedValData
.map(lambda lp: lp.features[3])
.reduce(lambda x, y: x + y))
sumFeatFour = (parsedTestData
.map(lambda lp: lp.features[4])
.reduce(lambda x, y: x + y))
Test.assertTrue(np.allclose([sumFeatTwo, sumFeatThree, sumFeatFour],
2526.87757656, 297.340394298, 184.235876654),
'parsed Train, Val, Test data has unexpected values')
Test.assertTrue(nTrain + nVal + nTest == 6724, 'unexpected Train, Val, Test data set size')
Test.assertEquals(nTrain, 5371, 'unexpected value for nTrain')
Test.assertEquals(nVal, 682, 'unexpected value for nVal')
Test.assertEquals(nTest, 671, 'unexpected value for nTest')
Explanation: (1d) Training, validation and test sets
As the next step, we will split our dataset into training, validation and test sets as discussed in class. Use the randomSplit method with the weights and the random seed specified in the cell below to create the split. Then use the cache() method to pre-cache the processed dataset.
This command runs the transformations over the dataset and stores the result in a new RDD that can stay in memory, if it fits, or in a temporary file.
End of explanation
# EXERCICIO
averageTrainYear = (parsedTrainData
.map(lambda x: x.label)
.mean()
)
print averageTrainYear
# TEST Average label (2a)
Test.assertTrue(np.allclose(averageTrainYear, 53.9316700801),
'incorrect value for averageTrainYear')
Explanation: Part 2: Building the baseline model
(2a) Average label
The baseline is useful to check that our regression model is working at all. It should be a very simple model that any algorithm can beat.
A very common baseline is to make the same prediction regardless of the observed data, using the average label of the training set. Compute the mean of the shifted labels of the training set; we will use this value later to compare prediction errors. Use an appropriate method for this task, see the RDD API.
End of explanation
# EXERCICIO
def squaredError(label, prediction):
Calculates the the squared error for a single prediction.
Args:
label (float): The correct value for this observation.
prediction (float): The predicted value for this observation.
Returns:
float: The difference between the `label` and `prediction` squared.
return np.square(label - prediction)
def calcRMSE(labelsAndPreds):
Calculates the root mean squared error for an `RDD` of (label, prediction) tuples.
Args:
labelsAndPred (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples.
Returns:
float: The square root of the mean of the squared errors.
return np.sqrt(labelsAndPreds.map(lambda (x,y): squaredError(x,y)).mean())
labelsAndPreds = sc.parallelize([(3., 1.), (1., 2.), (2., 2.)])
# RMSE = sqrt[((3-1)^2 + (1-2)^2 + (2-2)^2) / 3] = 1.291
exampleRMSE = calcRMSE(labelsAndPreds)
print exampleRMSE
# TEST Root mean squared error (2b)
Test.assertTrue(np.allclose(squaredError(3, 1), 4.), 'incorrect definition of squaredError')
Test.assertTrue(np.allclose(exampleRMSE, 1.29099444874), 'incorrect value for exampleRMSE')
Explanation: (2b) Root mean squared error
To compare performance in regression problems, the Root Mean Squared Error (RMSE) is commonly used. Implement a function that computes the RMSE from an RDD of (label, prediction) tuples.
End of explanation
#Debug
parsedTrainData.take(1)
# EXERCICIO -> (rótulo, predição)
labelsAndPredsTrain = parsedTrainData.map(lambda x:(x.label, averageTrainYear))
rmseTrainBase = calcRMSE(labelsAndPredsTrain)
labelsAndPredsVal = parsedValData.map(lambda x:(x.label, averageTrainYear))
rmseValBase = calcRMSE(labelsAndPredsVal)
labelsAndPredsTest = parsedTestData.map(lambda x:(x.label, averageTrainYear))
rmseTestBase = calcRMSE(labelsAndPredsTest)
print 'Baseline Train RMSE = {0:.3f}'.format(rmseTrainBase)
print 'Baseline Validation RMSE = {0:.3f}'.format(rmseValBase)
print 'Baseline Test RMSE = {0:.3f}'.format(rmseTestBase)
# TEST Training, validation and test RMSE (2c)
Test.assertTrue(np.allclose([rmseTrainBase, rmseValBase, rmseTestBase],
[21.305869, 21.586452, 22.136957]), 'incorrect RMSE value')
Explanation: (2c) Baseline RMSE for the training, validation and test sets
Let's compute the RMSE of our baseline. First create an RDD of (label, prediction) tuples for each set, and then call the calcRMSE function.
End of explanation
from matplotlib.colors import ListedColormap, Normalize
from matplotlib.cm import get_cmap
cmap = get_cmap('YlOrRd')
norm = Normalize()
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, lp.label))
.map(lambda (l, p): squaredError(l, p))
.collect())
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 100, 20), np.arange(0, 100, 20))
plt.scatter(actual, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.5)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
pass
predictions = np.asarray(parsedValData
.map(lambda lp: averageTrainYear)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, averageTrainYear))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(53.0, 55.0, 0.5), np.arange(0, 100, 20))
ax.set_xlim(53, 55)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.3)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
Explanation: Visualization 2: Predicted vs. actual
Let's visualize the predictions on the validation set. The scatter plots below place each point with the X coordinate equal to the value predicted by the model and the Y coordinate equal to the true label.
The first plot shows the ideal situation, a model that gets every label right. The second shows the performance of the baseline model. The colors of the points encode the squared error of each prediction: the closer to orange, the larger the error.
End of explanation
from pyspark.mllib.linalg import DenseVector
# EXERCICIO
def gradientSummand(weights, lp):
Calculates the gradient summand for a given weight and `LabeledPoint`.
Note:
`DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably
within this function. For example, they both implement the `dot` method.
Args:
weights (DenseVector): An array of model weights (betas).
lp (LabeledPoint): The `LabeledPoint` for a single observation.
Returns:
DenseVector: An array of values the same length as `weights`. The gradient summand.
return (weights.dot(lp.features) - lp.label) * lp.features
exampleW = DenseVector([1, 1, 1])
exampleLP = LabeledPoint(2.0, [3, 1, 4])
summandOne = gradientSummand(exampleW, exampleLP)
print summandOne
exampleW = DenseVector([.24, 1.2, -1.4])
exampleLP = LabeledPoint(3.0, [-1.4, 4.2, 2.1])
summandTwo = gradientSummand(exampleW, exampleLP)
print summandTwo
# TEST Gradient summand (3a)
Test.assertTrue(np.allclose(summandOne, [18., 6., 24.]), 'incorrect value for summandOne')
Test.assertTrue(np.allclose(summandTwo, [1.7304,-5.1912,-2.5956]), 'incorrect value for summandTwo')
Explanation: Part 3: Training and evaluating the linear regression model
(3a) Gradient of the error
We will implement linear regression via gradient descent.
Recall that the linear regression weight update is: $$ \scriptsize \mathbf{w}_{i+1} = \mathbf{w}_i - \alpha_i \sum_j (\mathbf{w}_i^\top\mathbf{x}_j - y_j) \mathbf{x}_j \,.$$ where $ \scriptsize i $ is the iteration of the algorithm and $ \scriptsize j $ is the observation currently being used.
First, implement a function that computes this gradient summand for a given observation: $ \scriptsize (\mathbf{w}^\top \mathbf{x} - y) \mathbf{x} \, ,$ and test the function on two examples. Use the DenseVector dot method to represent the feature list (it behaves much like np.array()).
End of explanation
# EXERCICIO
def getLabeledPrediction(weights, observation):
Calculates predictions and returns a (label, prediction) tuple.
Note:
The labels should remain unchanged as we'll use this information to calculate prediction
error later.
Args:
weights (np.ndarray): An array with one weight for each features in `trainData`.
observation (LabeledPoint): A `LabeledPoint` that contain the correct label and the
features for the data point.
Returns:
tuple: A (label, prediction) tuple.
return ( observation.label, weights.dot(observation.features) )
weights = np.array([1.0, 1.5])
predictionExample = sc.parallelize([LabeledPoint(2, np.array([1.0, .5])),
LabeledPoint(1.5, np.array([.5, .5]))])
labelsAndPredsExample = predictionExample.map(lambda lp: getLabeledPrediction(weights, lp))
print labelsAndPredsExample.collect()
# TEST Use weights to make predictions (3b)
Test.assertEquals(labelsAndPredsExample.collect(), [(2.0, 1.75), (1.5, 1.25)],
'incorrect definition for getLabeledPredictions')
Explanation: (3b) Use the weights to make predictions
Now implement the getLabeledPrediction function that takes the weight vector and a LabeledPoint as parameters and returns a (label, prediction) tuple. Remember that we can predict a label by computing the dot product of the weights with the features.
End of explanation
# EXERCICIO
def linregGradientDescent(trainData, numIters):
Calculates the weights and error for a linear regression model trained with gradient descent.
Note:
`DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably
within this function. For example, they both implement the `dot` method.
Args:
trainData (RDD of LabeledPoint): The labeled data for use in training the model.
numIters (int): The number of iterations of gradient descent to perform.
Returns:
(np.ndarray, np.ndarray): A tuple of (weights, training errors). Weights will be the
final weights (one weight per feature) for the model, and training errors will contain
an error (RMSE) for each iteration of the algorithm.
# The length of the training data
n = trainData.count()
# The number of features in the training data
d = len(trainData.take(1)[0].features)
w = np.zeros(d)
alpha = 1.0
# We will compute and store the training error after each iteration
errorTrain = np.zeros(numIters)
for i in range(numIters):
# Use getLabeledPrediction from (3b) with trainData to obtain an RDD of (label, prediction)
# tuples. Note that the weights all equal 0 for the first iteration, so the predictions will
# have large errors to start.
labelsAndPredsTrain = trainData.map(lambda x: getLabeledPrediction(w, x))
errorTrain[i] = calcRMSE(labelsAndPredsTrain)
# Calculate the `gradient`. Make use of the `gradientSummand` function you wrote in (3a).
# Note that `gradient` sould be a `DenseVector` of length `d`.
gradient = trainData.map(lambda x: gradientSummand(w, x)).sum()
# Update the weights
alpha_i = alpha / (n * np.sqrt(i+1))
w -= alpha_i*gradient
return w, errorTrain
# create a toy dataset with n = 10, d = 3, and then run 5 iterations of gradient descent
# note: the resulting model will not be useful; the goal here is to verify that
# linregGradientDescent is working properly
exampleN = 10
exampleD = 3
exampleData = (sc
.parallelize(parsedTrainData.take(exampleN))
.map(lambda lp: LabeledPoint(lp.label, lp.features[0:exampleD])))
print exampleData.take(2)
exampleNumIters = 5
exampleWeights, exampleErrorTrain = linregGradientDescent(exampleData, exampleNumIters)
print exampleWeights
# TEST Gradient descent (3c)
expectedOutput = [48.88110449, 36.01144093, 30.25350092]
Test.assertTrue(np.allclose(exampleWeights, expectedOutput), 'value of exampleWeights is incorrect')
expectedError = [79.72013547, 30.27835699, 9.27842641, 9.20967856, 9.19446483]
Test.assertTrue(np.allclose(exampleErrorTrain, expectedError),
'value of exampleErrorTrain is incorrect')
Explanation: (3c) Gradient descent
Finally, implement the gradient descent algorithm for linear regression and test the function on a small example.
End of explanation
# EXERCICIO
numIters = 50
weightsLR0, errorTrainLR0 = linregGradientDescent(parsedTrainData, numIters)
labelsAndPreds = parsedValData.map(lambda x: getLabeledPrediction(weightsLR0, x))
rmseValLR0 = calcRMSE(labelsAndPreds)
print 'Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}'.format(rmseValBase,
rmseValLR0)
# TEST Train the model (3d)
expectedOutput = [22.64535883, 20.064699, -0.05341901, 8.2931319, 5.79155768, -4.51008084,
15.23075467, 3.8465554, 9.91992022, 5.97465933, 11.36849033, 3.86452361]
Test.assertTrue(np.allclose(weightsLR0, expectedOutput), 'incorrect value for weightsLR0')
Explanation: (3d) Training the model on the dataset
Now we will train the linear regression model on our training set and compute the RMSE on the validation set. Remember that we must not touch the test set until the best model parameters have been chosen.
For this task we will use the linregGradientDescent, getLabeledPrediction and calcRMSE functions already implemented.
End of explanation
norm = Normalize()
clrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1))
ax.set_ylim(2, 6)
plt.scatter(range(0, numIters), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xlabel('Iteration'), ax.set_ylabel(r'$\log_e(errorTrainLR0)$')
pass
norm = Normalize()
clrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1))
ax.set_ylim(17.8, 21.2)
plt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xticklabels(map(str, range(6, 66, 10)))
ax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error')
pass
Explanation: Visualization 3: Training error
Let's check the behaviour of the algorithm across iterations. To do so we plot a chart in which the x axis is the iteration and the y axis the log of the RMSE. The first plot shows the first 50 iterations while the second shows the last 44 iterations. Note that the error drops quickly at first, after which gradient descent only makes small adjustments.
End of explanation
from pyspark.mllib.regression import LinearRegressionWithSGD
# Values to use when training the linear regression model
numIters = 500 # iterations
alpha = 1.0 # step
miniBatchFrac = 1.0 # miniBatchFraction
reg = 1e-1 # regParam
regType = 'l2' # regType
useIntercept = True # intercept
# EXERCICIO
firstModel = LinearRegressionWithSGD.train(parsedTrainData, iterations = numIters, step = alpha, miniBatchFraction = 1.0,
regParam=reg,regType=regType, intercept=useIntercept)
# weightsLR1 stores the model weights; interceptLR1 stores the model intercept
weightsLR1 = firstModel.weights
interceptLR1 = firstModel.intercept
print weightsLR1, interceptLR1
# TEST LinearRegressionWithSGD (4a)
expectedIntercept = 13.3335907631
expectedWeights = [16.682292427, 14.7439059559, -0.0935105608897, 6.22080088829, 4.01454261926, -3.30214858535,
11.0403027232, 2.67190962854, 7.18925791279, 4.46093254586, 8.14950409475, 2.75135810882]
Test.assertTrue(np.allclose(interceptLR1, expectedIntercept), 'incorrect value for interceptLR1')
Test.assertTrue(np.allclose(weightsLR1, expectedWeights), 'incorrect value for weightsLR1')
Explanation: Part 4: Training with MLlib and Grid Search
(4a) LinearRegressionWithSGD
Our first attempt already beats the baseline, but let's see if we can do better by adding an intercept term and making a few other adjustments to the algorithm. MLlib's LinearRegressionWithSGD implements the same algorithm as part (3b), but more efficiently for the distributed setting and with several additional features.
First use the LinearRegressionWithSGD function to train a model with L2 (Ridge) regularization and with an intercept. This method returns a LinearRegressionModel.
Then use the weights and intercept attributes to print the fitted model.
End of explanation
# EXERCICIO
samplePoint = parsedTrainData.take(1)[0]
samplePrediction = firstModel.predict(samplePoint.features)
print samplePrediction
# TEST Predict (4b)
Test.assertTrue(np.allclose(samplePrediction, 56.8013380112),
'incorrect value for samplePrediction')
Explanation: (4b) Prediction
Now use the LinearRegressionModel.predict() method to make a prediction for one object. Pass the features attribute of a LabeledPoint as the parameter.
End of explanation
# EXERCICIO
labelsAndPreds = parsedValData.map(lambda x: (x.label, firstModel.predict(x.features)))
rmseValLR1 = calcRMSE(labelsAndPreds)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}' +
'\n\tLR1 = {2:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1)
# TEST Evaluate RMSE (4c)
Test.assertTrue(np.allclose(rmseValLR1, 19.691247), 'incorrect value for rmseValLR1')
Explanation: (4c) Evaluate the RMSE
Now evaluate the performance of this model on the validation set. Use the predict() method to create the labelsAndPreds RDD, and then use the calcRMSE() function from Part (2b) to compute the RMSE.
End of explanation
# EXERCICIO
bestRMSE = rmseValLR1
bestRegParam = reg
bestModel = firstModel
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
for reg in [1e-10, 1e-5, 1]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda x: (x.label, model.predict(x.features)))
rmseValGrid = calcRMSE(labelsAndPreds)
print rmseValGrid
if rmseValGrid < bestRMSE:
bestRMSE = rmseValGrid
bestRegParam = reg
bestModel = model
rmseValLRGrid = bestRMSE
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n' +
'\tLRGrid = {3:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1, rmseValLRGrid)
# TEST Grid search (4d)
Test.assertTrue(np.allclose(17.017170, rmseValLRGrid), 'incorrect value for rmseValLRGrid')
Explanation: (4d) Grid search
We are already beating the baseline by at least two years on average; let's see whether we can find a better set of parameters. Run a grid search to find a good regularization parameter. Try values for regParam in the set 1e-10, 1e-5, and 1.
End of explanation
predictions = np.asarray(parsedValData
.map(lambda lp: bestModel.predict(lp.features))
.collect())
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, bestModel.predict(lp.features)))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
ax.set_xlim(15, 82), ax.set_ylim(-5, 105)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=.5)
ax.set_xlabel('Predicted'), ax.set_ylabel(r'Actual')
pass
Explanation: Visualization 5: Predictions of the best model
Now let's build a plot to check the performance of the best model. Note in this plot how much the number of darker points has dropped compared with the baseline.
End of explanation
# EXERCICIO
reg = bestRegParam
modelRMSEs = []
for alpha in [1e-5, 10]:
for numIters in [500, 5]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
rmseVal = calcRMSE(labelsAndPreds)
print 'alpha = {0:.0e}, numIters = {1}, RMSE = {2:.3f}'.format(alpha, numIters, rmseVal)
modelRMSEs.append(rmseVal)
# TEST Vary alpha and the number of iterations (4e)
expectedResults = sorted([56.969705, 56.892949, 355124752.221221])
Test.assertTrue(np.allclose(sorted(modelRMSEs)[:3], expectedResults), 'incorrect value for modelRMSEs')
Explanation: (4e) Grid search over alpha and the number of iterations
Now let's try different values of alpha and of the number of iterations to see how these parameters affect the model. Specifically, try the values 1e-5 and 10 for alpha and the values 500 and 5 for the number of iterations. Evaluate all the models on the validation set. Note that with a very small alpha the algorithm needs many more iterations to converge to the optimum, while a very large alpha can keep the algorithm from finding a solution at all.
End of explanation
# EXERCICIO
import itertools
def twoWayInteractions(lp):
"""Creates a new `LabeledPoint` that includes two-way interactions.
Note:
For features [x, y] the two-way interactions would be [x^2, x*y, y*x, y^2] and these
would be appended to the original [x, y] feature list.
Args:
lp (LabeledPoint): The label and features for this observation.
Returns:
LabeledPoint: The new `LabeledPoint` should have the same label as `lp`. Its features
should include the features from `lp` followed by the two-way interaction features.
"""
newfeats = <COMPLETAR>
return LabeledPoint(lp.label, <COMPLETAR>)
#return lp
print twoWayInteractions(LabeledPoint(0.0, [2, 3]))
# Transform the existing train, validation, and test sets to include two-way interactions.
trainDataInteract = parsedTrainData.map(twoWayInteractions)
valDataInteract = parsedValData.map(twoWayInteractions)
testDataInteract = parsedTestData.map(twoWayInteractions)
# TEST Add two-way interactions (5a)
twoWayExample = twoWayInteractions(LabeledPoint(0.0, [2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayExample.features),
sorted([2.0, 3.0, 4.0, 6.0, 6.0, 9.0])),
'incorrect features generatedBy twoWayInteractions')
twoWayPoint = twoWayInteractions(LabeledPoint(1.0, [1, 2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayPoint.features),
sorted([1.0,2.0,3.0,1.0,2.0,3.0,2.0,4.0,6.0,3.0,6.0,9.0])),
'incorrect features generated by twoWayInteractions')
Test.assertEquals(twoWayPoint.label, 1.0, 'incorrect label generated by twoWayInteractions')
Test.assertTrue(np.allclose(sum(trainDataInteract.take(1)[0].features), 40.821870576035529),
'incorrect features in trainDataInteract')
Test.assertTrue(np.allclose(sum(valDataInteract.take(1)[0].features), 45.457719932695696),
'incorrect features in valDataInteract')
Test.assertTrue(np.allclose(sum(testDataInteract.take(1)[0].features), 35.109111632783168),
'incorrect features in testDataInteract')
Explanation: Part 5: Adding non-linear features
(5a) Two-way interactions
As mentioned in class, linear regression models capture the linear relationships between the features and the label. In many cases, however, the relationship between them is non-linear.
One way to deal with this is to create additional features with non-linear characteristics, for example the quadratic expansion of the original features. Write a function twoWayInteractions that receives a LabeledPoint and generates a new LabeledPoint containing the old features plus the two-way interactions between them. Note that an observation with 3 features will have nine ( $ \scriptsize 3^2 $ ) two-way interactions.
To make this easier, use the itertools.product method to generate a tuple for each possible interaction. Also use np.hstack to concatenate two vectors.
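One possible implementation following these hints is sketched below; the function name twoWayInteractionsSketch is only illustrative, and it assumes LabeledPoint, np and itertools are already imported as elsewhere in this notebook. For LabeledPoint(0.0, [2, 3]) it produces the feature list [2, 3, 4, 6, 6, 9], which matches the test cell above.
def twoWayInteractionsSketch(lp):
    # product of every ordered pair of features (both x*y and y*x appear)
    newFeats = [x * y for x, y in itertools.product(lp.features, repeat=2)]
    # original features followed by the interaction terms
    return LabeledPoint(lp.label, np.hstack((lp.features, newFeats)))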
End of explanation
# EXERCICIO
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
reg = 1e-10
modelInteract = LinearRegressionWithSGD.train(trainDataInteract, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPredsInteract = valDataInteract.<COMPLETAR>
rmseValInteract = calcRMSE(labelsAndPredsInteract)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n\tLRGrid = ' +
'{3:.3f}\n\tLRInteract = {4:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1,
rmseValLRGrid, rmseValInteract)
# TEST Build interaction model (5b)
Test.assertTrue(np.allclose(rmseValInteract, 15.6894664683), 'incorrect value for rmseValInteract')
Explanation: (5b) Building a new model
Now build a new model using these new features. Note that ideally, with new features, you should run a new grid search to determine new optimal parameters, since the parameters of the previous model will not necessarily work here.
For this exercise, the parameters have already been optimized.
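For reference, the evaluation step left as <COMPLETAR> in the cell above can be sketched as below (the same pattern also applies to part (5c), using testDataInteract); the exact graded solution may differ:
labelsAndPredsInteract = valDataInteract.map(
    lambda lp: (lp.label, modelInteract.predict(lp.features)))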
End of explanation
# EXERCICIO
labelsAndPredsTest = testDataInteract.<COMPLETAR>
rmseTestInteract = calcRMSE(labelsAndPredsTest)
print ('Test RMSE:\n\tBaseline = {0:.3f}\n\tLRInteract = {1:.3f}'
.format(rmseTestBase, rmseTestInteract))
# TEST Evaluate interaction model on test data (5c)
Test.assertTrue(np.allclose(rmseTestInteract, 16.3272040537),
'incorrect value for rmseTestInteract')
Explanation: (5c) Evaluating the interaction model
Finally, the best model on the validation set was the interaction model. In practice this is the model we would choose to apply to unlabeled data. Let's see how this choice would fare by evaluating this model and the baseline on the test set.
End of explanation |
5,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment
Step1: Load and check data
Step2: ## Analysis
Experiment Details
Step3: Does improved weight pruning outperforms regular SET | Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
import matplotlib.pyplot as plt
from matplotlib import rcParams
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(style="whitegrid")
sns.set_palette("colorblind")
Explanation: Experiment:
Evaluate pruning by magnitude weighted by coactivations (more thorough evaluation), compare it to baseline (SET).
Motivation.
Check if results are consistently above baseline.
Conclusion
No significant difference between both models
No support for early stopping
End of explanation
base = 'MLPHeb-non-binary-coacts-2019-10-04'
exps = [
os.path.join(base, exp) for exp in [
'mlp-heb',
'mlp-SET',
'mlp-WeightedMag',
'mlp-sparse',
]
]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)['hebbian_prune_perc']
# replace missing hebbian/weight prune percentages with 0.0
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
Explanation: Load and check data
End of explanation
# Did any trials fail?
df[df["epochs"]<30]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
Explanation: ## Analysis
Experiment Details
End of explanation
agg(['model'])
agg(['on_perc'])
def model_name(row):
if row['model'] == 'DSNNWeightedMag':
return 'DSNN-WM'
elif row['model'] == 'DSNNMixedHeb':
if row['hebbian_prune_perc'] == 0.3:
return 'SET'
elif row['weight_prune_perc'] == 0.3:
return 'DSNN-Heb'
elif row['model'] == 'SparseModel':
return 'Static'
assert False, "This should cover all cases. Got {} h - {} w - {}".format(row['model'], row['hebbian_prune_perc'], row['weight_prune_perc'])
def test(val):
print(type(val))
print(val)
print()
df['model2'] = df.apply(model_name, axis=1)
fltr = df['model2'] != 'Sparse'
agg(['on_perc', 'model2'], filter=fltr)
# translate model names
rcParams['figure.figsize'] = 16, 8
# d = {
# 'DSNNWeightedMag': 'DSNN',
# 'DSNNMixedHeb': 'SET',
# 'SparseModel': 'Static',
# }
# df_plot = df.copy()
# df_plot['model'] = df_plot['model'].apply(lambda x, i: model_name(x, i))
# sns.scatterplot(data=df_plot, x='on_perc', y='val_acc_max', hue='model')
sns.lineplot(data=df, x='on_perc', y='val_acc_max', hue='model2')
rcParams['figure.figsize'] = 16, 8
filter = df['model'] != 'Static'
sns.lineplot(data=df[filter], x='on_perc', y='val_acc_max_epoch', hue='model2')
sns.lineplot(data=df, x='on_perc', y='val_acc_last', hue='model2')
Explanation: Does improved weight pruning outperform regular SET?
End of explanation |
5,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate Training sets
Based on "Reproducible Experiments" notebook
Step1: Initiate experiment with this input file
Step2: Before we start to draw random realisations of the model, we should first store the base state of the model for later reference. This is simply possibel with the freeze() method which stores the current state of the model as the "base-state"
Step3: We now intialise the random generator. We can directly assign a random seed to simplify reproducibility (note that this is not essential, as it would be for the definition in a script function
Step4: The next step is to define probability distributions to the relevant event parameters. Let's first look at the different events
Step5: Next, we define the probability distributions for the uncertain input parameters
Step6: This example shows how the base module for reproducible experiments with kinematics can be used. For further specification, child classes of Experiment can be defined, and we show examples of this type of extension in the next sections.
Adjustments to generate training set
First step
Step7: Idea
Step8: All in one function
Generate images for normal faults
Step9: Generate training set for normal faults
Step10: Generate reverse faults
And now
Step11: Generate simple layer structure
No need for noddy, in this simple case - just adapt a numpy array | Python Code:
%matplotlib inline
# here the usual imports. If any of the imports fails,
# make sure that pynoddy is installed
# properly, ideally with 'python setup.py develop'
# or 'python setup.py install'
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.experiment
reload(pynoddy.experiment)
rcParams.update({'font.size': 15})
# From notebook 4/ Traning Set example 1:
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 3,
'layer_names' : ['layer 1', 'layer 2', 'layer 3'],
'layer_thickness' : [1500, 500, 1500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (4000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
Explanation: Generate Training sets
Based on "Reproducible Experiments" notebook
End of explanation
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.change_cube_size(100)
ue.plot_section('y')
Explanation: Initiate experiment with this input file:
End of explanation
ue.freeze()
Explanation: Before we start to draw random realisations of the model, we should first store the base state of the model for later reference. This is simply possibel with the freeze() method which stores the current state of the model as the "base-state":
End of explanation
ue.set_random_seed(12345)
Explanation: We now intialise the random generator. We can directly assign a random seed to simplify reproducibility (note that this is not essential, as it would be for the definition in a script function: the random state is preserved within the model and could be retrieved at a later stage, as well!):
End of explanation
ue.info(events_only = True)
ev2 = ue.events[2]
ev2.properties
Explanation: The next step is to define probability distributions to the relevant event parameters. Let's first look at the different events:
End of explanation
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 300.0,
'type': 'normal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
resolution = 100
ue.change_cube_size(resolution)
tmp = ue.get_section('y')
prob_2 = np.zeros_like(tmp.block[:,:,:])
n_draws = 100
for i in range(n_draws):
ue.random_draw()
tmp = ue.get_section('y', resolution = resolution)
prob_2 += (tmp.block[:,:,:] == 2)
# Normalise
prob_2 = prob_2 / float(n_draws)
fig = plt.figure(figsize = (12,8))
ax = fig.add_subplot(111)
ax.imshow(prob_2.transpose()[:,0,:],
origin = 'lower left',
interpolation = 'none')
plt.title("Estimated probability of unit 4")
plt.xlabel("x (E-W)")
plt.ylabel("z")
Explanation: Next, we define the probability distributions for the uncertain input parameters:
End of explanation
ue.random_draw()
s1 = ue.get_section('y')
s1.block.shape
s1.block[np.where(s1.block == 3)] = 1
s1.plot_section('y', cmap='Greys')
Explanation: This example shows how the base module for reproducible experiments with kinematics can be used. For further specification, child classes of Experiment can be defined, and we show examples of this type of extension in the next sections.
Adjustments to generate training set
First step: generate more layers and randomly select layers to visualise:
End of explanation
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 8
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
ue.plot_section('y')
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_features: number of "features" -> gray values in image
n_features = 5
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n] = np.random.choice(vals)
f1.shape
plt.imshow(f1.T, origin='lower', cmap='Greys', interpolation='nearest')
# blur image
from scipy import ndimage
f2 = ndimage.filters.gaussian_filter(f1, 1, mode='nearest')
plt.imshow(f2.T, origin='lower', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
plt.imshow(f2.T, origin='lower', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
Explanation: Idea: generate many layers, then randomly extract a couple of these and also assign different density/ color values:
End of explanation
# back to before: re-initialise model:
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 18
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : 500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': 100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
Explanation: All in one function
Generate images for normal faults
End of explanation
n_train = 10000
F_train = np.empty((n_train, 28*28))
ue.change_cube_size(100)
for i in range(n_train):
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_features: number of "features" -> gray values in image
n_features = 4
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n+1] = np.random.choice(vals)
f1 = f1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
F_train[i] = f2.flatten().T
plt.imshow(f2, origin='lower', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
import pickle
pickle.dump(F_train, open("f_train_normal.pkl", 'w'))
Explanation: Generate training set for normal faults:
End of explanation
# back to before: re-initialise model:
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
n_layers = 18
strati_options['num_layers'] = n_layers
strati_options['layer_names'] = []
strati_options['layer_thickness'] = []
for n in range(n_layers):
strati_options['layer_names'].append("layer %d" % n)
strati_options['layer_thickness'].append(5000./n_layers)
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (1000, 0, 5000),
'dip_dir' : 90.,
'dip' : 60,
'slip' : -500}
nm.add_event('fault', fault_options)
history = 'normal_fault.his'
output_name = 'normal_fault_out'
nm.write_history(history)
reload(pynoddy.history)
reload(pynoddy.experiment)
from pynoddy.experiment import monte_carlo
ue = pynoddy.experiment.Experiment(history)
ue.freeze()
ue.set_random_seed(12345)
ue.set_extent(2800, 100, 2800)
ue.change_cube_size(50)
param_stats = [{'event' : 2,
'parameter': 'Slip',
'stdev': -100.0,
'type': 'lognormal'},
{'event' : 2,
'parameter': 'Dip',
'stdev': 10.0,
'type': 'normal'},
# {'event' : 2,
# 'parameter': 'Y',
# 'stdev': 150.0,
# 'type': 'normal'},
{'event' : 2,
'parameter': 'X',
'stdev': 150.0,
'type': 'normal'},]
ue.set_parameter_statistics(param_stats)
n_train = 10000
F_train_rev = np.empty((n_train, 28*28))
ue.change_cube_size(100)
for i in range(n_train):
# randomly select layers:
ue.random_draw()
s1 = ue.get_section('y')
# create "feature" model:
f1 = s1.block.copy()
# randomly select layers:
f1 = np.squeeze(f1)
# n_features: number of "features" -> gray values in image
n_features = 4
vals = np.random.randint(0,255,size=n_features)
for n in range(n_layers):
f1[f1 == n+1] = np.random.choice(vals)
f1 = f1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
# randomly swap image
if np.random.randint(2) == 1:
f2 = f2[::-1,:]
F_train_rev[i] = f2.flatten().T
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train_rev[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
pickle.dump(F_train_rev, open("f_train_reverse.pkl", 'w'))
Explanation: Generate reverse faults
And now: the same for reverse faults:
End of explanation
l1 = np.empty_like(s1.block[:,0,:])
n_layers = 18
for i in range(l1.shape[0]):
l1[:,i] = i
l1_ori = np.floor(l1*n_layers/l1.shape[0])
F_train_line = np.empty((n_train, 28*28))
for i in range(n_train):
n_features = 4
vals = np.random.randint(0,255,size=n_features)
l1 = l1_ori.copy()
for n in range(n_layers):
l1[l1 == n+1] = np.random.choice(vals)
f1 = l1.T
f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')
# scale image
f2 = f2 - np.min(f2)
if np.max(f2) != 0:
f2 = f2/np.max(f2)*255
F_train_line[i] = f2.flatten().T
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))
ax = ax.flatten()
for i in range(10):
img = F_train_line[i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
# plt.savefig('./figures/mnist_all.png', dpi=300)
plt.show()
pickle.dump(F_train_line, open("f_train_line.pkl", 'w'))
Explanation: Generate simple layer structure
No need for noddy, in this simple case - just adapt a numpy array:
End of explanation |
5,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
9. Morphology — Lab exercises
XFST / foma
XFST provides two formalisms for creating FSA / FST for morphology and related fields
Step1: 2. subprocess
The subprocess module provides full access to the command line. The basic method of usage is to create a Popen object and call its methods
Step2: It is also possible to send input to a program started by Popen
Step3: From Python 3.6, Popen supports the encoding parameter, which alleviates the need for encode/decode.
There are also functions that cover the basic cases
Step4: 3. Jupyter!
Jupyter also has a shorthand for executing commands
Step9: Morphology
Take a few minutes to make yourself familiar with the code below. It defines the functions we use to communicate with foma via the command line.
Step11: Warm-up
First a few warm-up exercises. This will teach you how to use the functions defined above and give you a general idea of how a lexical transducer looks like. We shall cover a tiny subset of the English verb morphology.
Task W1.
A lexc grammar consists of LEXICONs, which corresponds to continuation classes. One lexicon, Root must always be present. Let's add the two words pack and talk to it. We shall build the grammar in a Python string and use the compile_lexc() function to compile it to binary format, and draw_net() to display the resulting automaton.
Step13: There are several points to observe here
Step14: Now, we can test what words the automaton can recognize in two ways
Step16: Uh-oh. Something's wrong
Step17: Experiment again with apply_up and apply_down. How do they behave differently?
See how the output of the print words command changed. It is also useful to print just the upper or lower tape with print upper-words and print lower-words.
Step20: Lexc Intuition
While the ideas behind lexc are very logical, one might need some time to wrap their heads around it. In this notebook, I try to give some advice on how to "think lexc". Do not hesitate to check it out if the tasks below seem to hard. I also provide the solution to task H1 in there, though you are encouraged to come up with your own.
Hungarian Adjectives
In this exercise, we shall model a subset of the Hungarian nominal paradigm
Step22: Task H2.
What we have now is a simple (lexical) FSA. In this task, we modify it to have a proper lexical FST that can parse (apply_up) surface forms to morphological features and vice versa (apply_down).
Run the words through HFST
Step25: Task H2b*.
Copy the apply functions and create a hfst_apply version of each, which calls hfst instead of foma.. Note that hfst-lookup does not support generation. You will probably need communicate() to implement function.
Important
Step27: Task H4.
The previous solution works, but implementing one distinction (a/e) required us to double the number of lexicons; this clearly doesn't scale. Here, we introduce a more flexible solution
Step29: Task H5.
We round up the exercise by adding adjective comparison. Incorporate the following rules into your grammar | Python Code:
import os
# Note that the actual output of `ls` is not printed!
print('Exit code:', os.system('ls -a'))
files = os.listdir('.')
print('Should have printed:\n\n{}'.format('\n'.join(files if len(files) <= 3 else files[:3] + ['...'])))
Explanation: 9. Morphology — Lab exercises
XFST / foma
XFST provides two formalisms for creating FSA / FST for morphology and related fields:
- regular expressions: similar to Python's (e.g. {reg}?*({expr}) $\equiv$ reg.*(expr)?)
- lexc: a much simpler formalism for lexicographers
In this lab, we shall learn the latter via the open-source reimplementation of XFST: foma. We shall also acquaint ourselves with the Hungarian HFST morphology. We are not going into details of how foma works; for that, see the
- https://fomafst.github.io/
- https://github.com/mhulden/foma/
- the XFST book (Kenneth R. Beesley and Lauri Karttunen: Finite State Morphology)
But first...
Command-line access from Python
In some cases, we need to interface with command-line applications from our script. There are two ways to do this in Python, and an additional method in Jupyter.
1. os.system()
The os.system(cmd) call executes cmd, sends its output to the stdout of the interpreter, and returns the exit code of the process. As such, there is no way to capture the output in the script, so this method is only useful if we are interested solely in the exit code.
End of explanation
import subprocess
p = subprocess.Popen(['ls', '-a'], # manual cmd split; see next example
stdout=subprocess.PIPE) # we need the output
ret = p.communicate()
print('Exit code: {}\nOutput:\n\n{}'.format(p.returncode, ret[0].decode('utf-8')))
Explanation: 2. subprocess
The subprocess module provides full access to the command line. The basic method of usage is to create a Popen object and call its methods:
End of explanation
p = subprocess.Popen('cat -', shell=True, # automatic cmd split -> ['cat', '-']
stdin=subprocess.PIPE, # we shall use stdin
stdout=subprocess.PIPE)
ret = p.communicate('hello\nbello'.encode('utf-8'))
print(ret[0].decode('utf-8'))
Explanation: It is also possible to send input to a program started by Popen:
End of explanation
# From Python 3.5
ret = subprocess.run('ls -a', shell=True, stdout=subprocess.PIPE)
print('run():\n{}'.format(
ret.stdout.decode('utf-8')))
# Even easier
print('check_output()\n{}'.format(
subprocess.check_output('ls -a', shell=True).decode('utf-8')))
Explanation: From Python 3.6, Popen supports the encoding parameter, which alleviates the need for encode/decode.
There are also functions that cover the basic cases:
End of explanation
directory = '.'
s = !ls -a {directory}
print(s)
Explanation: 3. Jupyter!
Jupyter also has a shorthand for executing commands: !. It is a bit more convenient, as it does string encoding behind the scenes and parses the output into a list. However, it is not available in regular Python scripts.
End of explanation
# Utility functions
from functools import partial
import os
import subprocess
import tempfile
from IPython.display import display, Image
def execute_commands(*cmds, fancy=True):
"""Starts foma and executes the specified commands.
Might not work if there are too many...
"""
if fancy:
print('Executing commands...\n=====================\n')
args = ' '.join('-e "{}"'.format(cmd) for cmd in cmds)
output = subprocess.check_output('foma {} -s'.format(args),
stderr=subprocess.STDOUT,
shell=True).decode('utf-8')
print(output)
if fancy:
print('=====================\n')
def compile_lexc(lexc_string, fst_file):
"""Compiles a string describing a lexc lexicon with foma. The FST
is written to fst_file.
"""
with tempfile.NamedTemporaryFile(mode='wt', encoding='utf-8', delete=False) as outf:
outf.write(lexc_string)
try:
execute_commands('read lexc {}'.format(outf.name),
'save stack {}'.format(fst_file), fancy=False)
#!foma -e "read lexc {outf.name}" -e "save stack {fst_file}" -s
finally:
os.remove(outf.name)
def apply(fst_file, words, up=True):
"""Applies the FST in fst_file on the supplied words. The default direction
is up.
"""
if isinstance(words, list):
words = '\n'.join(map(str, words))
elif not isinstance(words, str):
raise ValueError('words must be a str or list')
header = 'Applying {} {}...'.format(fst_file, 'up' if up else 'down')
print('{}\n{}\n'.format(header, '=' * len(header)))
invert = '-i' if not up else ''
result = subprocess.check_output('flookup {} {}'.format(invert, fst_file),
stderr=subprocess.STDOUT, shell=True,
input=words.encode('utf-8'))
print(result.decode('utf-8')[:-1]) # Skip last newline
print('=' * len(header), '\n')
apply_up = partial(apply, up=True)
apply_down = partial(apply, up=False)
def draw_net(fst_file, inline=True):
"""Displays a compiled network inline or in a separate window.
The package imagemagick must be installed for this function to work.
"""
!foma -e "load stack {fst_file}" -e "print dot >{fst_file}.dot" -s
if inline:
png_data = subprocess.check_output(
'cat {}.dot | dot -Tpng'.format(fst_file), shell=True)
display(Image(data=png_data, format='png'))
else:
!cat {fst_file}.dot | dot -Tpng | display
!rm {fst_file}.dot
Explanation: Morphology
Take a few minutes to make yourself familiar with the code below. It defines the functions we use to communicate with foma via the command line.
End of explanation
grammar = """
LEXICON Root
pack # ;
talk # ;
walk # ;
"""
compile_lexc(grammar, 'warm_up.fst')
draw_net('warm_up.fst')
Explanation: Warm-up
First a few warm-up exercises. This will teach you how to use the functions defined above and give you a general idea of what a lexical transducer looks like. We shall cover a tiny subset of English verb morphology.
Task W1.
A lexc grammar consists of LEXICONs, which correspond to continuation classes. One lexicon, Root, must always be present. Let's add the words pack, talk and walk to it. We shall build the grammar in a Python string, use the compile_lexc() function to compile it to binary format, and draw_net() to display the resulting automaton.
End of explanation
grammar = """
LEXICON Root
! see how the continuation changes to the new LEXICON
! BTW this is a comment
pack Infl ;
talk Infl ;
walk Infl ;

LEXICON Infl
! add the endings here, without the hyphens
"""
compile_lexc(grammar, 'warm_up.fst')
draw_net('warm_up.fst')
Explanation: There are several points to observe here:
- the format of a word (morpheme) definition line is: morpheme next_lexicon ;
- the next_lexicon can be the word end mark #
- word definitions must end with a semicolon (;); LEXICON lines must not
- the basic unit in the FSA is the character, not the whole word
- the FSA is determinized and minimized to save space: see how the states (3) and (5) and the arc -k-> are re-used
Task W2.
Let's add some inflection to the grammar. These are all regular verbs, so they all can receive -s, -ed, and -ing to form the third person singular, past and gerund forms, respectively. Add these to the second lexicon, and
compile the network again.
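One possible completion is sketched below; the endings come straight from the task text, and the bare-stem ending is deliberately still missing, which is exactly what the next step investigates:
grammar = """
LEXICON Root
pack Infl ;
talk Infl ;
walk Infl ;

LEXICON Infl
s # ;
ed # ;
ing # ;
"""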
End of explanation
apply_up('warm_up.fst', ['walked', 'talking', 'packs', 'walk'])
execute_commands('load stack warm_up.fst', 'print words')
Explanation: Now, we can test what words the automaton can recognize in two ways:
- call the apply_up or apply_down functions with the word form
- use the print words foma command
End of explanation
grammar =
compile_lexc(grammar, 'warm_up.fst')
draw_net('warm_up.fst')
Explanation: Uh-oh. Something's wrong: the automaton didn't recognize walk. What happened?
The explanation is very simple: now all words in Root continue to Infl, which requires one of the inflectional endings. See how state (6) ceased to be an accepting state.
The solution: replicate the code from above, but also add the "zero morpheme" ending # ; to Infl! Make sure that state (6) is accepting again and that the recognized words now include the basic form.
Task W3.
Here we change our automaton to a transducer that lemmatizes words it receives on its bottom tape. Transduction in lexc is denoted by the colon (:). Again, copy your grammar below, but replace the contents of LEXICON Infl with
# ;
0:s # ;
0:ed # ;
0:ing # ;
Note that
- only a single colon is allowed on a line
- everything left of it is "up", right is "down"
- the $\varepsilon$ (empty character) is denoted by 0
- deletion happens on the top, "output" side
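Putting the zero-morpheme fix and the transduction together, one possible grammar is (a sketch, not the only solution):
grammar = """
LEXICON Root
pack Infl ;
talk Infl ;
walk Infl ;

LEXICON Infl
# ;
0:s # ;
0:ed # ;
0:ing # ;
"""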
End of explanation
# apply_up('warm_up.fst', ['walked', 'talking', 'packs', 'walk'])
# execute_commands('load stack warm_up.fst', 'print words')
Explanation: Experiment again with apply_up and apply_down. How do they behave differently?
See how the output of the print words command changed. It is also useful to print just the upper or lower tape with print upper-words and print lower-words.
End of explanation
adjectives_1 =
csendes ! quiet
egészséges ! healthy
idős ! old
kék ! blue
mély ! deep
öntelt ! conceited
szeles ! windy
terhes ! pregnant; arduous
zsémbes ! shrewish
grammar =
compile_lexc(grammar, 'h1.fst')
Explanation: Lexc Intuition
While the ideas behind lexc are very logical, one might need some time to wrap their head around them. In this notebook, I try to give some advice on how to "think lexc". Do not hesitate to check it out if the tasks below seem too hard. I also provide the solution to task H1 in there, though you are encouraged to come up with your own.
Hungarian Adjectives
In this exercise, we shall model a subset of the Hungarian nominal paradigm:
- regular adjectives
- vowel harmony
- plurals
- the accusative case
- comparative and superlative forms
The goal is to replicate the output of the Hungarian HFST morphology. We shall learn the following techniques:
- defining lexical automata and transducers with lexc
- multi-character symbols
- flag diacritics
Task H1.
We start small with a tiny lexical FSA.
- define a LEXICON for the adjectives in the code cell below
- add continuation classes to handle:
- the plural form (-ek)
- accusative (-et)
A little help for the latter two: in Hungarian, adjectives (and numerals) are inflected the same way as nouns; this is called the nominal paradigm. A simplified schematic would be
Root (Plur)? (Case)?
Plural is marked by -k, and accusative by -t. However, if the previous morpheme ends with a consonant (as is the case here), a link vowel is inserted before the k or t. Which vowel gets inserted is decided by complicated vowel harmony rules. The adjectives below all contain front vowels only, so the link vowel is e.
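One possible solution is sketched below; the lexicon names Plur and Case are only illustrative, and you could equally build the grammar string from the adjectives_1 list above:
grammar = """
LEXICON Root
csendes Plur ;
egészséges Plur ;
idős Plur ;
kék Plur ;
mély Plur ;
öntelt Plur ;
szeles Plur ;
terhes Plur ;
zsémbes Plur ;

LEXICON Plur
Case ;
ek Case ;

LEXICON Case
# ;
et # ;
"""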
End of explanation
grammar =
compile_lexc(grammar, 'h2.fst')
# apply_up('h2.fst', [])
Explanation: Task H2.
What we have now is a simple (lexical) FSA. In this task, we modify it to have a proper lexical FST that can parse (apply_up) surface forms to morphological features and vice versa (apply_down).
Run the words through HFST:
Start a new docker bash shell by running docker exec -it <container name or id> bash
Start HFST by typing hfst-lookup --cascade=composition /nlp/hfst/hu.hfstol into the shell
Type the words in our FSA (don't forget plural / accusative, e.g. nagyok, finomat) into hfst-lookup one-by-one. See what features appear on the upper side (limit yourself to the correct parse, i.e. the one with [/Adj]).
Add the same features to our lexc grammar:
remember that you want to keep the surface form in the upper side as well, so e.g. [/Pl]:ek won't do. You must
either repeat it twice: ek[/Pl]:ek
or use two lexicons e.g. Plur and PlurTag, and have ek in the first and [/Pl]:0 in the second
all tags, such as [/Pl] must be defined in the Multichar_Symbols header:
```
Multichar Symbols Symb1 Symb2 ...
LEXICON Root
...
``
Play around withapply_upandapply_down. Make sure you covered all tags in the HFST output. (Note: HFST tags color names as[/Adj|col]`. You don't need to make this distinction in this exercise.)
End of explanation
adjectives_2 =
abszurd ! absurd
bájos ! charming
finom ! delicious
gyanús ! suspicious
okos ! clever
piros ! red
száraz ! dry
zord ! grim
grammar =
compile_lexc(grammar, 'h3.fst')
# apply_up('h3.fst', [])
Explanation: Task H2b*.
Copy the apply functions and create an hfst_apply version of each, which calls hfst instead of foma. Note that hfst-lookup does not support generation. You will probably need communicate() to implement this function.
Important: do not start this exercise until you finish all foma-related ones!
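A possible shape for such a wrapper, sketched with subprocess.Popen and communicate() as suggested above (it assumes hfst-lookup is installed and on the PATH, and only covers analysis, i.e. the apply_up direction):
def hfst_apply_up(fst_file, words):
    """Analyses words with hfst-lookup (sketch: analysis only, no generation)."""
    if isinstance(words, list):
        words = '\n'.join(map(str, words))
    p = subprocess.Popen(['hfst-lookup', fst_file],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(words.encode('utf-8'))
    print(out.decode('utf-8'))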
Task H3.
In the next few exercises, we are going to delve deeper into vowel harmony and techniques to handle it. For now, add the adjectives below to the grammar. In these words, back vowels dominate, so the link vowel for plural and accusative is a. Create LEXICON structures that mirror what you have for the front adjectives to handle the new words.
End of explanation
grammar =
compile_lexc(grammar, 'h4.fst')
# apply_up('h4.fst', [])
Explanation: Task H4.
The previous solution works, but implementing one distinction (a/e) required us to double the number of lexicons; this clearly doesn't scale. Here, we introduce a more flexible solution: flag diacritics.
Flag diacritics are (multichar!) symbols with a few special properties:
- they have the form @COMMAND.FEATURE_NAME.FEATURE_VALUE@, where command is
- P: set
- R: require
- D: disallow (the opposite of R)
- C: clear (removes the flag)
- U: unification (first P, then R)
- they must appear on both tapes (upper and lower) to have any effect (e.g. @P.FEAT.VALUE@:0 won't work, but @P.FEAT.VALUE@xxx will)
- even so, flag diacritics never appear in the final upper / lower strings -- they can be considered an "implementation detail"
- flag diacritics incur some performance overhead, but decrease the size of the FSTs
Add flag diacritics to your grammar. You will want to keep the two adjective types in separate lexicons, e.g.
LEXICON Root
@U.HARM.FRONT@ AdjFront ;
@U.HARM.BACK@ AdjBack ;
However, the two plural / accusative lexicons can be merged, like so:
LEXICON Plur
@U.HARM.FRONT@ek PlurTag ;
@U.HARM.BACK@ak PlurTag ;
Compile your grammar to see that the network became smaller. Check and see if the new FST accepts the same language as the old one.
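A sketch of how the pieces fit together is shown below; only two stems per class are listed and the accusative simply mirrors the plural, so treat it as illustrative rather than a full solution:
grammar = """
Multichar_Symbols @U.HARM.FRONT@ @U.HARM.BACK@

LEXICON Root
@U.HARM.FRONT@ AdjFront ;
@U.HARM.BACK@ AdjBack ;

LEXICON AdjFront
csendes Plur ;
kék Plur ;

LEXICON AdjBack
piros Plur ;
okos Plur ;

LEXICON Plur
Case ;
@U.HARM.FRONT@ek Case ;
@U.HARM.BACK@ak Case ;

LEXICON Case
# ;
@U.HARM.FRONT@et # ;
@U.HARM.BACK@at # ;
"""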
End of explanation
grammar =
compile_lexc(grammar, 'h5.fst')
# apply_up('h5.fst', [])
Explanation: Task H5.
We round up the exercise by adding adjective comparison. Incorporate the following rules into your grammar:
- Comparative forms are marked by -bb, which takes a link vowel similarly to plural
- The superlative form is marked by the leg- prefix and -bb, i.e. a circumfix
- The exaggerated form is the same as the superlative, with any number of leges coming before leg
The full simplified paradigm thus becomes:
((leges)* leg)? Root (-bb)? (Plur)? (Case)?
Again, the circumfix is best handled with flag diacritics. However, the U command probably won't work because its main use is for agreement. Try to implement an if-else structure with the other commands!
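One way to encode the circumfix with the P and D commands is sketched below; vowel harmony and the morphological tags from the earlier tasks are left out for brevity, and names such as SUP, Prefix and Comp are illustrative assumptions:
grammar = """
Multichar_Symbols @P.SUP.ON@ @D.SUP.ON@

LEXICON Root
Prefix ;

LEXICON Prefix
Stems ;                  ! no superlative prefix
Leges ;                  ! (leges)* leg- path

LEXICON Leges
leges Leges ;            ! any number of leges
@P.SUP.ON@leg Stems ;    ! leg- sets the SUP flag

LEXICON Stems
csendes Comp ;
piros Comp ;

LEXICON Comp
@D.SUP.ON@ Plur ;        ! -bb may only be skipped if leg- was not used
ebb Plur ;
abb Plur ;

LEXICON Plur
Case ;
ek Case ;
ak Case ;

LEXICON Case
# ;
et # ;
at # ;
"""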
End of explanation |
5,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
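As a quick, purely illustrative numpy sketch of these two ideas (mnist.load_data(one_hot=True) already returns the data in this form, so this is not needed in the notebook):
label = 4
one_hot = np.eye(10)[label]    # -> [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
image = np.zeros((28, 28))     # a dummy 28x28 "image"
flat = image.reshape(784)      # flattened to a 784-element vector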
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None,784])
net = tflearn.fully_connected(net,128,activation='ReLU') # hidden 1
net = tflearn.fully_connected(net,32,activation='ReLU') # hidden 2
net = tflearn.fully_connected(net,10,activation='Softmax') # output layer
# configure the training:
net = tflearn.regression(net,optimizer='sgd',learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation |
5,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project 22
Step1: Importing of the dataset
Method definition for reading one of the available datasets
Step2: Checking for missing data
In the following lines, we check for missing values as these can falsify our data extraction
Step3: Feature Engineering
The goal is to extract features from the preprocessed numpy array
But before we have to do a preprocessing step
1. Step is to sequence the data in windows with 52 instances
Sidenote
Step4: Feature Extraction
2. Step we want to extract two feature types for each window (6 different features for each window - x-, y-, z- axis)
Step5: Classification
The next step is the classification of the labels with the extracted features. Therefore the team used 2 different classifiers (Random Forest - as the main classifier build on the idea of the research paper by Pierluigi et al. {Pierluigi Casale, Oriol Pujol, and Petia Radeva. Human activity recognition from accelerometer data using a wearable device. Pattern Recognition and Image Analysis, pages 289–296, 2011} - and Support Vector machines as a comparison to the main classifier
Step6: Code execution | Python Code:
# General Imports for more than one file
import pandas as pd
import numpy as np
# For reading the CSV
from pandas import read_csv
# Imports for the classification
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_validate
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn import svm
# Plotting imports (needed by the d3Plot helper below)
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
Explanation: Project 22: Activity Recognition
Authors: Alessandro Pomes, Simon Schmoll
Objectives: Classification of 7 activities which are tracked with a Single Chest-Mounted Accelerometer
What is done in the Notebook: The data is imported, processed and classified
As we followed a modular approach firstly the functions are defined which are later called for execution
Importing the libraries
End of explanation
# Specifying engine = python because c engine can not handle 'sep'
# @params: dataNum
# @output: dataset (Dataframe)
def read(data_num):
names_attributes = ['sequentialNumber', 'xAcceleration', 'yAcceleration', 'zAcceleration', 'label']
dataset = pd.read_csv('data/%d.csv' % (data_num), sep=',', header=None, engine='python', names=names_attributes)
# Comment in for printing out the array data and the size of the array
# print(array_data)
# print(np.size(array_data, 0))
return dataset
Explanation: Importing of the dataset
Method definition for reading one of the available datasets
End of explanation
#First we transform the Dataframe into an numpy array
#Specifying engine = python because c engine can not handle 'sep'
# @params: dataNum
# @output: dataset (Dataframe)
def read(data_num):
names_attributes = ['sequentialNumber', 'xAcceleration', 'yAcceleration', 'zAcceleration', 'label']
dataset = read_csv('data/%d.csv' % (data_num), sep=',', header=None, engine='python', names=names_attributes)
# Comment in for printing out the array data and the size of the array
# print(array_data)
# print(np.size(array_data, 0))
return dataset
# This function checks how balanced the data is
# @Input: data_array with labels
# @Output: list with different counts for the labels
def count_labels(data_array):
label_data = data_array[:,[4]]
unique, counts = np.unique(label_data, return_counts=True)
ret = dict(zip(unique, counts))
return ret
#3d plot Graph
#Here we create a simple function to plot the data
#of our original dataset. The 7 different colours correspond to our 7 labels
#@input:database in a matrix Array
#@output:plot of the point in a 3d structure
def d3Plot(dataset):
def column(matrix, i):
return [row[i] for row in matrix]
ax = plt.axes(projection='3d')
# Data for three-dimensional scattered points
zdata= column(dataset,3)
ydata= column(dataset,2)
xdata= column(dataset,1)
lable= column(dataset,4)
colors = ['orange','green','blue','purple','yellow','black','orange','white']
ax.scatter3D(xdata, ydata, zdata, c=lable, cmap=matplotlib.colors.ListedColormap(colors))
return plt.show()
Explanation: Checking for missing data
In the following lines, we check for missing values as these can falsify our data extraction
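A minimal check could look like the sketch below; the dataset number 1 and the use of the read() helper are just examples:
sample_df = read(1)
print(sample_df.isnull().sum())         # NaN count per column
print(sample_df.isnull().values.any())  # True if any value is missing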
End of explanation
# For Feature Extraction we use a technique called window overlapping (Pierluigi Casale, Oriol Pujol, and Petia Radeva. Human activity recognition from accelerometer
# data using a wearable device. Pattern Recognition and Image Analysis, pages 289–296, 2011). It has an overlap of 50% between the different
# time series. As a time window 1 second is used --> corresponds to 52 samplings (52 Hz frequency)
# Then we start with the sequencing
# Slicing needs to be done as follows:
# - it is not possible that 2 activities are grouped in one sequence (would falsify the outcome of the mean value)
# - therefore only labels with the same value are grouped into one sequence
# @params: array_data is list of array that contains the grouped data
# @output: data_list which contains numpy arrays with the respective windows
def grouping(array_data):
start = int(0)
end = int(52)
data_list = []
length = np.size(array_data, 0)
while start < length-52:
if(array_data[start][4] != array_data[end-1][4]): # this control sequence is necessary to ensure that not two of the same
while(array_data[start][4] != array_data[end-1][4]): # labels are in one window
end = end -1
newArray = array_data[slice(start, end)]
start = end
end = end + 52
else:
newArray = array_data[slice(start, end)]
start = start + 26
end = end + 26
data_list.append(newArray)
if(end-52 > length - 1):
end = length-1
# Comment in to show the size and length of the data_list array
# print(np.size(data_list))
# print(len(data_list))
return data_list
# This is an additional function which could be called to print a data list to a text file (e.g to examine it)
# Comment in for printing the data to a text file
# def sysout_to_text(dataList):
# file = open("tempFile", "w")
# for item in dataList:
# file.write("%s\n" % item)
# file.close()
Explanation: Feature Engineering
The goal is to extract features from the preprocessed numpy array
But before we have to do a preprocessing step
1. Step is to sequence the data in windows with 52 instances
Sidenote: it is of high importance to not mix two labels into the same window
End of explanation
#Feature Extraction
# For Feature Extraction we use a technique called window overlapping (Pierluigi Casale, Oriol Pujol, and Petia Radeva. Human activity recognition from accelerometer
# data using a wearable device. Pattern Recognition and Image Analysis, pages 289–296, 2011). It has an overlap of 50% between the different
# time series. As a time window 1 second is used --> corresponds to 52 samplings (52 Hz frequency)
# Then we start with the sequencing
# Slicing needs to be done as follows:
# - it is not possible that 2 activities are grouped in one sequence (would falsify the outcome of the mean value)
# - therefore only labels with the same value are grouped into one sequence
# @params: array_data is list of array that contains the grouped data
# @output: data_list which contains numpy arrays with the respective windows
def grouping(array_data):
start = int(0)
end = int(52)
data_list = []
length = np.size(array_data, 0)
while start < length-52:
if(array_data[start][4] != array_data[end-1][4]): # this control sequence is necessary to ensure that not two of the same
while(array_data[start][4] != array_data[end-1][4]): # labels are in one window
end = end -1
newArray = array_data[slice(start, end)]
start = end
end = end + 52
else:
newArray = array_data[slice(start, end)]
start = start + 26
end = end + 26
data_list.append(newArray)
if(end-52 > length - 1):
end = length-1
# Comment in to show the size and length of the data_list array
# print(np.size(data_list))
# print(len(data_list))
return data_list
# This is an additional function which could be called to print a data list to a text file (e.g to examine it)
# Comment in for printing the data to a text file
# def sysout_to_text(dataList):
# file = open("tempFile", "w")
# for item in dataList:
# file.write("%s\n" % item)
# file.close()
#Now we need to get the mean value and standard deviation of all windows
#@Params: grouped data_List containing the window arrays
#@Output: mean value of x, y, z, standard deviation of the coordinates, target array
def extract_features(data_list):
total_average_values = []
total_label = []
for row in data_list:
acceleration = np.nanmean(row, 0)
standard_deviation = np.std(row, 0)
temp_features = [acceleration[1], acceleration[2], acceleration[3], standard_deviation[1], standard_deviation[2], standard_deviation[3]]
label_array = [row[0][4]]
total_average_values.append(temp_features)
total_label.append(label_array)
# print(total_average_values)
# print(total_label)
#print(total_average_values)
feature = np.vstack(total_average_values)
target = np.vstack(total_label)
# comment in to print out lists
#print(feature)
#print(target)
return feature, target
Explanation: Feature Extraction
2. Next, we extract two feature types (mean and standard deviation) for each window, giving 6 features per window (one of each type for the x-, y- and z-axis).
End of explanation
# Function to classify without cross-validation:
# Here we predict the labels with Random Forest and SVM using a simple split of the dataset.
# The dataset is split into two parts (training and test set) and the following metrics are reported:
# 1) F1 score
# 2) Accuracy
# 3) Confusion matrix
def classify(x_features, y_features):
X_train, X_test, y_train, y_test = train_test_split(x_features, y_features.ravel(), test_size=0.2, random_state=0)
X_train.shape, y_train.shape
X_test.shape, y_test.shape
forest= RandomForestClassifier(n_estimators=100, random_state=0)
clf = svm.SVC(kernel='linear', C=1)
model = forest.fit(X_train,y_train )
modelSv=clf.fit(X_train,y_train )
predicted_labels = model.predict(X_test)
predicted_labelsSv = modelSv.predict(X_test)
# Compute the F1 score, also known as balanced F-score or F-measure
# The F1 score can be interpreted as a weighted average of the precision and recall,
# where an F1 score reaches its best value at 1 and worst score at 0.
# The relative contribution of precision and recall to the F1 score are equal.
# The formula for the F1 score is:
# F1 = 2 * (precision * recall) / (precision + recall)
# In the multi-class and multi-label case, this is the weighted average of the F1 score of each class.
print(" F1 score Random forest: %f" % f1_score(y_test, predicted_labels, average='macro'))
print(" F1 score with SVM: %f" % f1_score(y_test, predicted_labelsSv, average='macro'))
print("Accuracy Random forest: %f" % metrics.accuracy_score(y_test, predicted_labels))
print("Accuracy with SV: %f" % metrics.accuracy_score(y_test, predicted_labelsSv))
#By definition a confusion matrix C is such that C_{i, j}
# is equal to the number of observations known to be in group i but predicted to be in group j.
print("Confusion Matrix Random forest: ")
print(confusion_matrix(y_test, predicted_labels, labels=[1, 2, 3, 4, 5, 6, 7]))
print("Confusion Matrix with SVM: ")
print(confusion_matrix(y_test, predicted_labelsSv, labels=[1, 2, 3, 4, 5, 6, 7]))
return
# Function to classify with cross-validation, reporting:
# 1) Accuracy
# 2) F1 score
def CrossValidation(x_features, y_features, kfold):
scoring = ['accuracy', 'f1_micro']
forest = RandomForestClassifier(n_estimators=100, random_state=0)
clf = svm.SVC(kernel='linear', C=1)
scoresSv = cross_validate(clf, x_features, y_features.ravel(), scoring=scoring, cv=kfold, return_train_score=False)
scores = cross_validate(forest, x_features, y_features.ravel(), scoring=scoring, cv=kfold, return_train_score=False)
print("Accuracy Random Forest: %0.2f (+/- %0.2f)" % (scores['test_accuracy'].mean(), scores['test_accuracy'].std() * 2))
print("F1 Score Random Forest: %0.2f (+/- %0.2f)" % (scores['test_f1_micro'].mean(), scores['test_f1_micro'].std() * 2))
print("Accuracy SVM: %0.2f (+/- %0.2f)" % (scoresSv['test_accuracy'].mean() ,scoresSv['test_accuracy'].std() * 2))
print("F1 Score SVM: %0.2f (+/- %0.2f)" % (scoresSv['test_f1_micro'].mean() ,scoresSv['test_f1_micro'].std() * 2))
print("Scores for the test folds (Random Forest)", scores['test_accuracy'])
print("Scores for the test folds (Support Vector Machine)", scoresSv['test_accuracy'])
return
Explanation: Classification
The next step is the classification of the labels with the extracted features. For this, two different classifiers are used: Random Forest - the main classifier, built on the idea of the research paper by Pierluigi et al. {Pierluigi Casale, Oriol Pujol, and Petia Radeva. Human activity recognition from accelerometer data using a wearable device. Pattern Recognition and Image Analysis, pages 289–296, 2011} - and Support Vector Machines as a comparison to the main classifier.
End of explanation
# Importing data
dataframe = read(1)
#print(dataframe)
# Delete incorrect Data
cleaned_data = zeroDet(dataframe, 0)
#Check how balanced data is
counts = count_labels(cleaned_data)
print("Instances of every label, starting by one to seven",counts)
#Feature Extraction
grouped_data = grouping(cleaned_data)
features = extract_features(grouped_data)
# We call the functions to classify our data, first with a simple
# train/test split of the dataset, and then
# using k-fold cross-validation, each time with two different models: 1) Random Forest classifier, 2) Support Vector Machine
x_features, y_features = features
classify(x_features, y_features)
CrossValidation(x_features, y_features, 5)
Explanation: Code execution
End of explanation |
5,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Milestone 2 - this version has all of the input handling completed, with each part individually tested.
2-Dimensional Frame Analysis - Version 04
This program performs an elastic analysis of 2-dimensional structural frames. It has the following features
Step1: Test Frame
Nodes
Step2: Supports
Step3: Members
Step4: Releases
Step5: Properties
If the SST module is loadable, member properties may be specified by giving steel shape designations
(such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and
$I_x$ directly (it only tries to lookup the properties if these two are not provided).
Step6: Node Loads
Step7: Member Loads
Step8: Load Combinations
Step9: Load Iterators
Step10: Support Constraints
Step11: Accumulated Cell Data | Python Code:
from __future__ import print_function
import salib as sl
sl.import_notebooks()
from Tables import Table
from Nodes import Node
from Members import Member
from LoadSets import LoadSet, LoadCombination
from NodeLoads import makeNodeLoad
from MemberLoads import makeMemberLoad
from collections import OrderedDict, defaultdict
import numpy as np
class Object(object):
pass
class Frame2D(object):
def __init__(self,dsname=None):
self.dsname = dsname
self.rawdata = Object()
self.nodes = OrderedDict()
self.members = OrderedDict()
self.nodeloads = LoadSet()
self.memberloads = LoadSet()
self.loadcombinations = LoadCombination()
#self.dofdesc = []
#self.nodeloads = defaultdict(list)
#self.membloads = defaultdict(list)
self.ndof = 0
self.nfree = 0
self.ncons = 0
self.R = None
self.D = None
self.PDF = None # P-Delta forces
COLUMNS_xxx = [] # list of column names for table 'xxx'
def get_table(self,tablename,extrasok=False,optional=False):
columns = getattr(self,'COLUMNS_'+tablename)
t = Table(tablename,columns=columns,optional=optional)
t.read(optional=optional)
reqdl= columns
reqd = set(reqdl)
prov = set(t.columns)
if reqd-prov:
raise Exception('Columns missing {} for table "{}". Required columns are: {}'\
.format(list(reqd-prov),tablename,reqdl))
if not extrasok:
if prov-reqd:
raise Exception('Extra columns {} for table "{}". Required columns are: {}'\
.format(list(prov-reqd),tablename,reqdl))
return t
Explanation: Milestone 2 - this version has all of the input handling completed, with each part individually tested.
2-Dimensional Frame Analysis - Version 04
This program performs an elastic analysis of 2-dimensional structural frames. It has the following features:
1. Input is provided by a set of CSV files (and cell-magics exist so you can specify the CSV data
in a notebook cell). See the example below for an, er, example.
1. Handles concentrated forces on nodes, and concentrated forces, concentrated moments, and linearly varying distributed loads applied transversely anywhere along the member (i.e., there is as yet no way to handle longitudinal
load components).
1. It handles fixed, pinned, roller supports and member end moment releases (internal pins). The former are
handled by assigning free or fixed global degrees of freedom, and the latter are handled by adjusting the
member stiffness matrix.
1. It has the ability to handle named sets of loads with factored combinations of these.
1. The DOF #'s are assigned by the program, with the fixed DOF #'s assigned after the non-fixed. The equilibrium
equation is then partitioned for solution. Among other advantages, this means that support settlement could be
easily added (there is no UI for that, yet).
1. A non-linear analysis can be performed using the P-Delta method (fake shears are computed at column ends due to the vertical load acting through horizontal displacement differences, and these shears are applied as extra loads
to the nodes).
1. A full non-linear (2nd order) elastic analysis will soon be available by forming the equilibrium equations
on the deformed structure. This is very easy to add, but it hasn't been done yet. Shouldn't be too long.
1. There is very little to no documentation below, but that will improve, slowly.
End of explanation
%%Table nodes
NODEID,X,Y,Z
A,0,0,5000
B,0,4000,5000
C,8000,4000,5000
D,8000,0,5000
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_nodes = ('NODEID','X','Y')
def install_nodes(self):
node_table = self.get_table('nodes')
for ix,r in node_table.data.iterrows():
if r.NODEID in self.nodes:
raise Exception('Multiply defined node: {}'.format(r.NODEID))
n = Node(r.NODEID,r.X,r.Y)
self.nodes[n.id] = n
self.rawdata.nodes = node_table
def get_node(self,id):
try:
return self.nodes[id]
except KeyError:
raise Exception('Node not defined: {}'.format(id))
##test:
f = Frame2D()
##test:
f.install_nodes()
##test:
f.nodes
##test:
f.get_node('C')
Explanation: Test Frame
Nodes
End of explanation
%%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FX,FY
def isnan(x):
if x is None:
return True
try:
return np.isnan(x)
except TypeError:
return False
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_supports = ('NODEID','C0','C1','C2')
def install_supports(self):
table = self.get_table('supports')
for ix,row in table.data.iterrows():
node = self.get_node(row.NODEID)
for c in [row.C0,row.C1,row.C2]:
if not isnan(c):
node.add_constraint(c)
self.rawdata.supports = table
##test:
f.install_supports()
vars(f.get_node('D'))
Explanation: Supports
End of explanation
%%Table members
MEMBERID,NODEJ,NODEK
AB,A,B
BC,B,C
DC,D,C
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_members = ('MEMBERID','NODEJ','NODEK')
def install_members(self):
table = self.get_table('members')
for ix,m in table.data.iterrows():
if m.MEMBERID in self.members:
raise Exception('Multiply defined member: {}'.format(m.MEMBERID))
memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK))
self.members[memb.id] = memb
self.rawdata.members = table
def get_member(self,id):
try:
return self.members[id]
except KeyError:
raise Exception('Member not defined: {}'.format(id))
##test:
f.install_members()
f.members
##test:
m = f.get_member('BC')
m.id, m.L, m.dcx, m.dcy
Explanation: Members
End of explanation
%%Table releases
MEMBERID,RELEASE
AB,MZK
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_releases = ('MEMBERID','RELEASE')
def install_releases(self):
table = self.get_table('releases',optional=True)
for ix,r in table.data.iterrows():
memb = self.get_member(r.MEMBERID)
memb.add_release(r.RELEASE)
self.rawdata.releases = table
##test:
f.install_releases()
##test:
vars(f.get_member('AB'))
Explanation: Releases
End of explanation
try:
from sst import SST
__SST = SST()
get_section = __SST.section
except ImportError:
def get_section(dsg,fields):
raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg))
##return [1.] * len(fields.split(',')) # in case you want to do it that way
%%Table properties
MEMBERID,SIZE,IX,A
BC,W460x106,,
AB,W310x97,,
DC,,
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_properties = ('MEMBERID','SIZE','IX','A')
def install_properties(self):
table = self.get_table('properties')
table = self.fill_properties(table)
for ix,row in table.data.iterrows():
memb = self.get_member(row.MEMBERID)
memb.size = row.SIZE
memb.Ix = row.IX
memb.A = row.A
self.rawdata.properties = table
def fill_properties(self,table):
data = table.data
for ix,row in data.iterrows():
if type(row.SIZE) in [type(''),type(u'')]:
if isnan(row.IX) or isnan(row.A):
Ix,A = get_section(row.SIZE,'Ix,A')
if isnan(row.IX):
data.loc[ix,'IX'] = Ix
if isnan(row.A):
data.loc[ix,'A'] = A
elif isnan(row.SIZE):
data.loc[ix,'SIZE'] = ''
table.data = data.fillna(method='ffill')
return table
##test:
f.install_properties()
##test:
vars(f.get_member('DC'))
Explanation: Properties
If the SST module is loadable, member properties may be specified by giving steel shape designations
(such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and
$I_x$ directly (it only tries to lookup the properties if these two are not provided).
End of explanation
%%Table node_loads
LOAD,NODEID,DIRN,F
Wind,B,FX,-200000.
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F')
def install_node_loads(self):
table = self.get_table('node_loads')
dirns = ['FX','FY','FZ']
for ix,row in table.data.iterrows():
n = self.get_node(row.NODEID)
if row.DIRN not in dirns:
raise ValueError("Invalid node load direction: {} for load {}, node {}; must be one of '{}'"
.format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns)))
l = makeNodeLoad({row.DIRN:row.F})
self.nodeloads.append(row.LOAD,n,l)
self.rawdata.node_loads = table
##test:
f.install_node_loads()
##test:
for o,l,fact in f.nodeloads.iterloads('Wind'):
print(o,l,fact,l*fact)
Explanation: Node Loads
End of explanation
%%Table member_loads
LOAD,MEMBERID,TYPE,W1,W2,A,B,C
Live,BC,UDL,-50,,,,
Live,BC,PL,-200000,,5000
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C')
def install_member_loads(self):
table = self.get_table('member_loads')
for ix,row in table.data.iterrows():
m = self.get_member(row.MEMBERID)
l = makeMemberLoad(m.L,row)
self.memberloads.append(row.LOAD,m,l)
self.rawdata.member_loads = table
##test:
f.install_member_loads()
##test:
for o,l,fact in f.memberloads.iterloads('Live'):
print(o.id,l,fact,l.fefs()*fact)
Explanation: Member Loads
End of explanation
%%Table load_combinations
COMBO,LOAD,FACTOR
One,Live,1.5
One,Wind,1.75
@sl.extend(Frame2D)
class Frame2D:
COLUMNS_load_combinations = ('COMBO','LOAD','FACTOR')
def install_load_combinations(self):
table = self.get_table('load_combinations')
for ix,row in table.data.iterrows():
self.loadcombinations.append(row.COMBO,row.LOAD,row.FACTOR)
self.rawdata.load_combinations = table
##test:
f.install_load_combinations()
##test:
for o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads):
print(o.id,l,fact)
for o,l,fact in f.loadcombinations.iterloads('One',f.memberloads):
print(o.id,l,fact,l.fefs()*fact)
Explanation: Load Combinations
End of explanation
@sl.extend(Frame2D)
class Frame2D:
def iter_nodeloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.nodeloads):
yield o,l,f
def iter_memberloads(self,comboname):
for o,l,f in self.loadcombinations.iterloads(comboname,self.memberloads):
yield o,l,f
##test:
for o,l,fact in f.iter_nodeloads('One'):
print(o.id,l,fact)
for o,l,fact in f.iter_memberloads('One'):
print(o.id,l,fact)
Explanation: Load Iterators
End of explanation
%%Table supports
NODEID,C0,C1,C2
A,FX,FY,MZ
D,FX,FY
Explanation: Support Constraints
End of explanation
##test:
Table.CELLDATA
Explanation: Accumulated Cell Data
End of explanation |
5,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Video Codec Unit (VCU) Demo Example
Step1: Run the Demo
Step2: Video
Step3: Audio
Step4: Advanced options | Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
Explanation: Video Codec Unit (VCU) Demo Example: CAMERA->ENCODE ->FILE
Introduction
Video Codec Unit (VCU) in ZynqMP SOC is capable of encoding and decoding AVC/HEVC compressed video streams in real time.
This notebook example shows a video/audio (AV) recording use case: capturing raw video and (optionally) audio, encoding it using the VCU, and then storing the content in a file. The stored file is the recorded compressed stream.
Implementation Details
<img src="pictures/block-diagram-camera-encode-file.png" align="center" alt="Drawing" style="width: 400px; height: 200px"/>
Board Setup
Connect Ethernet cable.
Connect serial cable to monitor logs on serial console.
Connect a USB camera (preferably a Logitech HD camera, C920) to the board.
If the board is connected to a private network, then export proxy settings in the /home/root/.bashrc file as below:
create/open the bashrc file using "vi ~/.bashrc"
Insert the lines below into the bashrc file
export http_proxy="< private network proxy address >"
export https_proxy="< private network proxy address >"
Save and close bashrc file.
Determine audio input device names based on requirements. Please refer to the Determine Audio Device Names section.
Determine Audio Device Names
The audio device name of audio source(Input device) and playback device(output device) need to be determined using arecord and aplay utilities installed on platform.
Audio Input
ALSA sound device names for capture devices
- Run below command to get ALSA sound device names for capture devices
root@zcu106-zynqmp:~#arecord -l
It shows the list of audio capture hardware devices, for example:
- card 1: C920 [HD Pro Webcam C920], device 0: USB Audio [USB Audio]
- Subdevices: 1/1
- Subdevice #0: subdevice #0
Here the card number of the capture device is 1 and the device id is 0. Hence "hw:1,0" should be passed as the audio input device.
Pulse sound device names for capture devices
- Run below command to get PULSE sound device names for capture devices
root@zcu106-zynqmp:~#pactl list short sources
It shows the list of audio capture sources, for example:
- 0 alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo ...
Here "alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo" is the name of the audio capture device. Hence it can be passed as the audio input device.
USB Camera Capabilities
Resolutions for this example need to be set based on the USB camera capabilities.
- Capabilities can be found by executing the below command on the board
root@zcu106-zynqmp:~#"v4l2-ctl -d < dev-id > --list-formats-ext".
< dev-id >:- It can be found using dmesg logs. Mostly it would be like "/dev/video0"
If v4l-utils is not installed in the pre-built image, it needs to be installed using dnf, or the PetaLinux image needs to be rebuilt including v4l-utils.
End of explanation
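# Optional helper (a sketch, not part of the original demo): derive the ALSA "hw:card,device"
# string for the webcam microphone programmatically instead of reading `arecord -l` by hand.
import re
import subprocess
_out = subprocess.run(['arecord', '-l'], stdout=subprocess.PIPE, universal_newlines=True).stdout
_m = re.search(r'card (\d+):.*?device (\d+):', _out)
if _m:
    print('audio input device: hw:{},{}'.format(_m.group(1), _m.group(2)))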
from ipywidgets import interact
import ipywidgets as widgets
from common import common_vcu_demo_camera_encode_file
import os
from ipywidgets import HBox, VBox, Text, Layout
Explanation: Run the Demo
End of explanation
video_capture_device=widgets.Text(value='',
placeholder='"/dev/video1"',
description='Camera Dev Id:',
style={'description_width': 'initial'},
#layout=Layout(width='35%', height='30px'),
disabled=False)
video_capture_device
codec_type=widgets.RadioButtons(
options=['avc', 'hevc'],
description='Codec Type:',
disabled=False)
sink_name=widgets.RadioButtons(
options=['none', 'fakevideosink'],
description='Video Sink:',
disabled=False)
video_size=widgets.RadioButtons(
options=['640x480', '1280x720', '1920x1080', '3840x2160'],
description='Resolution:',
description_tooltip='To select the values, please refer USB Camera Capabilities section',
disabled=False)
HBox([codec_type, video_size, sink_name])
Explanation: Video
End of explanation
device_id=Text(value='',
placeholder='(optional) "hw:1"',
description='Input Dev:',
description_tooltip='To select the values, please refer Determine Audio Device Names section',
disabled=False)
device_id
audio_sink={'none':['none'], 'aac':['auto','alsasink','pulsesink'],'vorbis':['auto','alsasink','pulsesink']}
audio_src={'none':['none'], 'aac':['auto','alsasrc','pulseaudiosrc'],'vorbis':['auto','alsasrc','pulseaudiosrc']}
#val=sorted(audio_sink, key = lambda k: (-len(audio_sink[k]), k))
def print_audio_sink(AudioSink):
pass
def print_audio_src(AudioSrc):
pass
def select_audio_sink(AudioCodec):
audio_sinkW.options = audio_sink[AudioCodec]
audio_srcW.options = audio_src[AudioCodec]
audio_codecW = widgets.RadioButtons(options=sorted(audio_sink.keys(), key=lambda k: len(audio_sink[k])), description='Audio Codec:')
init = audio_codecW.value
audio_sinkW = widgets.RadioButtons(options=audio_sink[init], description='Audio Sink:')
audio_srcW = widgets.RadioButtons(options=audio_src[init], description='Audio Src:')
#j = widgets.interactive(print_audio_sink, AudioSink=audio_sinkW)
k = widgets.interactive(print_audio_src, AudioSrc=audio_srcW)
i = widgets.interactive(select_audio_sink, AudioCodec=audio_codecW)
HBox([i, k])
Explanation: Audio
End of explanation
frame_rate=widgets.Text(value='',
placeholder='(optional) 15, 30, 60',
description='Frame Rate:',
disabled=False)
bit_rate=widgets.Text(value='',
placeholder='(optional) 1000, 20000',
description='Bit Rate(Kbps):',
style={'description_width': 'initial'},
disabled=False)
gop_length=widgets.Text(value='',
placeholder='(optional) 30, 60',
description='Gop Length',
disabled=False)
display(HBox([bit_rate, frame_rate, gop_length]))
no_of_frames=Text(value='',
placeholder='(optional) 1000, 2000',
description=r'<p>Frame Nos:</p>',
#layout=Layout(width='25%', height='30px'),
disabled=False)
output_path=widgets.Text(value='',
placeholder='(optional) /mnt/sata/op.ts',
description='Output Path:',
disabled=False)
entropy_buffers=widgets.Dropdown(
options=['2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'],
value='5',
description='Entropy Buffers Nos:',
style={'description_width': 'initial'},
disabled=False,)
#entropy_buffers
#output_path
#gop_length
HBox([entropy_buffers, no_of_frames, output_path])
#entropy_buffers
show_fps=widgets.Checkbox(
value=False,
description='show-fps',
#style={'description_width': 'initial'},
disabled=False)
compressed_mode=widgets.Checkbox(
value=False,
description='compressed-mode',
disabled=False)
HBox([compressed_mode, show_fps])
from IPython.display import clear_output
from IPython.display import Javascript
def run_all(ev):
display(Javascript('IPython.notebook.execute_cells_below()'))
def clear_op(event):
clear_output(wait=True)
return
button1 = widgets.Button(
description='Clear Output',
style= {'button_color':'lightgreen'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '300px'}
)
button2 = widgets.Button(
description='',
style= {'button_color':'white'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '83px'}
)
button1.on_click(run_all)
button1.on_click(clear_op)
def start_demo(event):
#clear_output(wait=True)
arg = [];
arg = common_vcu_demo_camera_encode_file.cmd_line_args_generator(device_id.value, video_capture_device.value, video_size.value, codec_type.value, audio_codecW.value, frame_rate.value, output_path.value, no_of_frames.value, bit_rate.value, entropy_buffers.value, show_fps.value, audio_srcW.value, compressed_mode.value, gop_length.value, sink_name.value);
#!sh vcu-demo-camera-encode-decode-display.sh $arg > logs.txt 2>&1
!sh vcu-demo-camera-encode-file.sh $arg
return
button = widgets.Button(
description='click to start camera-encode-file demo',
style= {'button_color':'lightgreen'},
#style= {'button_color':'lightgreen', 'description_width': 'initial'},
layout={'width': '300px'}
)
button.on_click(start_demo)
HBox([button, button2, button1])
Explanation: Advanced options:
End of explanation |
5,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator; it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real =
inputs_z =
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
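# One possible way to complete the exercise above (a sketch, not necessarily the author's
# solution): two TF1.x placeholders whose widths are real_dim and z_dim.
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    return inputs_real, inputs_z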
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope # finish this
# Hidden layer
h1 =
# Leaky ReLU
h1 =
# Logits and tanh output
logits =
out =
return out, logits
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Along with the $tanh$ output, we also need to return the logits for use in calculating the loss with tf.nn.sigmoid_cross_entropy_with_logits.
Exercise: Implement the generator network in the function below. You'll need to return both the logits and the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
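# A possible completion of the generator exercise (sketch): one dense hidden layer,
# a leaky ReLU built with tf.maximum, and a tanh output, as described above.
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('generator', reuse=reuse):
        h1 = tf.layers.dense(z, n_units, activation=None)  # hidden layer
        h1 = tf.maximum(alpha * h1, h1)                     # leaky ReLU
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)
        return out, logits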
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope # finish this
# Hidden layer
h1 =
# Leaky ReLU
h1 =
logits =
out =
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
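# A possible completion of the discriminator exercise (sketch), mirroring the generator
# but ending in a sigmoid output over a single logit.
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('discriminator', reuse=reuse):
        h1 = tf.layers.dense(x, n_units, activation=None)  # hidden layer
        h1 = tf.maximum(alpha * h1, h1)                     # leaky ReLU
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
        return out, logits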
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z =
# Generator network here
g_model, g_logits =
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real =
d_model_fake, d_logits_fake =
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
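# One way to wire the network up (sketch), following the explanation above; the fake-data
# discriminator reuses the real-data discriminator's variables via reuse=True.
tf.reset_default_graph()
input_real, input_z = model_inputs(input_size, z_size)
g_model, g_logits = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)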
# Calculate losses
d_loss_real =
d_loss_fake =
d_loss =
g_loss =
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
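# A possible completion of the losses exercise (sketch), using the label-smoothing trick
# described above for the real-image labels.
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))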
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars =
g_vars =
d_vars =
d_train_opt =
g_train_opt =
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
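# A possible completion of the optimizers exercise (sketch): split the trainable variables
# by name prefix and give each Adam optimizer only its own variable list.
learning_rate = 0.002
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)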
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples, _ = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples, _ = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
5,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's see what the column names that end in 'ID' are. Those are probably primary keys and foreign keys.
Step1: First, let's set the index to what I think is the primary key | Python Code:
for col in probe_spec.columns:
if col.endswith('ID'):
print col
Explanation: Let's see what the column names that end in 'ID' are. Those are probably primary keys and foreign keys.
End of explanation
probe_spec.set_index('DesignID',inplace=True)
probe_spec.head()
design_type = pd.read_csv('NiPOD-DesignType.csv')
for col in design_type.columns:
if col.endswith('ID'):
print col
design_type.head()
probe_spec = probe_spec.merge(design_type, on='DesignTypeID')
manufacture = pd.read_csv('NiPOD-Manufacture.csv')
for col in manufacture.columns:
if col.endswith('ID'):
print col
manufacture.head()
probe_spec = probe_spec.merge(manufacture, on='ManufactureID')
package = pd.read_csv('NiPOD-ProbePackage.csv')
for col in package.columns:
if col.endswith('ID'):
print col
package.head()
probe_spec = probe_spec.merge(package, on='PackageID')
probe_type = pd.read_csv('NiPOD-ProbeType.csv')
for col in probe_type.columns:
if col.endswith('ID'):
print col
probe_type.head()
probe_spec = probe_spec.merge(probe_type, on='ProbeTypeID')
probe_spec.head()
keep = ['DesignName',
'FirstChannelYSpacing',
'NumChannel',
'NumShank',
'NumSitePerShank',
'OtherParameters',
'PackageID',
'ShankHeight',
'ShankSpace',
'ShankStartingXLocation',
'ShankStartingYLocation',
'ShankWidth',
'SiteArea',
'TetrodeOffsetLeft',
'TetrodeOffsetRight',
'TetrodeOffsetUp',
'TrueShankLength',
'TrueSiteSpacing',
'DesignType',
'PackageName',
'ProbeType']
probe_spec = probe_spec[keep]
probe_spec.head()
probe_spec.to_csv('NiPOD-ProbeSpec-denormalized.csv',
encoding='utf-8',
index=False)
Explanation: First, let's set the index to what I think is the primary key
End of explanation |
5,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Short intro to the SCT library of AutoGraph
Work in progress, use with care and expect changes.
The pyct module packages the source code transformation APIs used by AutoGraph.
This tutorial is just a preview - there is no PIP package yet, and the API has not been finalized, although most of those shown here are quite stable.
Run in Colab
Requires tf-nightly
Step1: Writing a custom code generator
transformer.CodeGenerator is an AST visitor that outputs a string. This makes it useful in the final stage of translating Python to another language.
Here's a toy C++ code generator written using a transformer.CodeGenerator, which is just a fancy subclass of ast.NodeVisitor
Step2: Let's try it on a simple function
Step3: First, parse the Python code and annotate the AST. This is easily done with standard libraries, but parser.parse_entity makes it a single call. It returns a gast AST, so you don't have to worry about Python version
Step4: There are a couple of context objects that most transformer objects like CodeGenerator use.
Of note here is EntityInfo.namespace, which contains the runtime values for all the global and closure names that the function has access to. Inside a transformer object, this is available under self.ctx.info.namespace.
For example, if a function uses NumPy, its namespace will typically include 'np'.
Step5: Finally, it's just a matter of running the generator
Step6: Helpful static analysis passes
The static_analysis module contains various helper passes for dataflow analyis.
All these passes annotate the AST. These annotations can be extracted using anno.getanno. Most of them rely on the qual_names annotations, which just simplify the way more complex identifiers like a.b.c are accessed.
The most useful is the activity analysis which just inventories symbols read, modified, etc.
Step7: Another useful utility is the control flow graph builder.
Of course, a CFG that fully accounts for all effects is impractical to build in a late-bound language like Python without creating an almost fully-connected graph. However, one can be reasonably built if we ignore the potential for functions to raise arbitrary exceptions.
Step8: Other useful analyses include liveness analysis. Note that these make simplifying assumptions, because in general the CFG of a Python program is a graph that's almost complete. The only robust assumption is that execution can't jump backwards.
Step9: Writing a custom Python transpiler
transpiler.FunctionTranspiler is a generic class for a Python source-to-source compiler. It operates on Python ASTs. Subclasses override its transform_ast method.
Unlike the transformer module, which have an AST as input/output, the transpiler APIs accept and return actual Python objects, handling the tasks associated with parsing, unparsing and loading of code.
Here's a transpiler that does nothing
Step10: The main method is transform_function, which as its name suggests, operates on functions.
Step11: Adding new variables to the transformed code
The transformed function has the same global and local variables as the original function. You can of course generate local imports to add any new references into the generated code, but an easier method is to use the extra_locals arg of transform_function | Python Code:
!pip install tf-nightly
Explanation: Short intro to the SCT library of AutoGraph
Work in progress, use with care and expect changes.
The pyct module packages the source code transformation APIs used by AutoGraph.
This tutorial is just a preview - there is no PIP package yet, and the API has not been finalized, although most of those shown here are quite stable.
Run in Colab
Requires tf-nightly:
End of explanation
import gast
from tensorflow.python.autograph.pyct import transformer
class BasicCppCodegen(transformer.CodeGenerator):
def visit_Name(self, node):
self.emit(node.id)
def visit_arguments(self, node):
self.visit(node.args[0])
for arg in node.args[1:]:
self.emit(', ')
self.visit(arg)
def visit_FunctionDef(self, node):
self.emit('void {}'.format(node.name))
self.emit('(')
self.visit(node.args)
self.emit(') {\n')
self.visit_block(node.body)
self.emit('\n}')
def visit_Call(self, node):
self.emit(node.func.id)
self.emit('(')
self.visit(node.args[0])
for arg in node.args[1:]:
self.emit(', ')
self.visit(arg)
self.emit(');')
Explanation: Writing a custom code generator
transformer.CodeGenerator is an AST visitor that outputs a string. This makes it useful in the final stage of translating Python to another language.
Here's a toy C++ code generator written using a transformer.CodeGenerator, which is just a fancy subclass of ast.NodeVisitor:
End of explanation
def f(x, y):
print(x, y)
Explanation: Let's try it on a simple function:
End of explanation
from tensorflow.python.autograph.pyct import parser
node, source = parser.parse_entity(f, ())
Explanation: First, parse the Python code and annotate the AST. This is easily done with standard libraries, but parser.parse_entity makes it a single call. It returns a gast AST, so you don't have to worry about Python version:
End of explanation
from tensorflow.python.autograph.pyct import inspect_utils
f_info = transformer.EntityInfo(
name='f',
source_code=source,
source_file=None,
future_features=(),
namespace=inspect_utils.getnamespace(f))
ctx = transformer.Context(f_info, None, None)
Explanation: There are a couple of context objects that most transformer objects like CodeGenerator use.
Of note here is EntityInfo.namespace, which contains the runtime values for all the global and closure names that the function has access to. Inside a transformer object, this is available under self.ctx.info.namespace.
For example, if a function uses NumPy, its namespace will typically include 'np'.
End of explanation
codegen = BasicCppCodegen(ctx)
codegen.visit(node)
print(codegen.code_buffer)
Explanation: Finally, it's just a matter of running the generator:
End of explanation
def get_node_and_ctx(f):
node, source = parser.parse_entity(f, ())
f_info = transformer.EntityInfo(
name='f',
source_code=source,
source_file=None,
future_features=(),
namespace=None)
ctx = transformer.Context(f_info, None, None)
return node, ctx
from tensorflow.python.autograph.pyct import anno
from tensorflow.python.autograph.pyct import qual_names
from tensorflow.python.autograph.pyct.static_analysis import annos
from tensorflow.python.autograph.pyct.static_analysis import activity
def f(a):
b = a + 1
return b
node, ctx = get_node_and_ctx(f)
node = qual_names.resolve(node)
node = activity.resolve(node, ctx)
fn_scope = anno.getanno(node, annos.NodeAnno.BODY_SCOPE) # Note: tag will be changed soon.
print('read:', fn_scope.read)
print('modified:', fn_scope.modified)
Explanation: Helpful static analysis passes
The static_analysis module contains various helper passes for dataflow analyis.
All these passes annotate the AST. These annotations can be extracted using anno.getanno. Most of them rely on the qual_names annotations, which just simplify the way more complex identifiers like a.b.c are accessed.
The most useful is the activity analysis which just inventories symbols read, modified, etc.:
End of explanation
from tensorflow.python.autograph.pyct import cfg
def f(a):
if a > 0:
return a
b = -a
node, ctx = get_node_and_ctx(f)
node = qual_names.resolve(node)
cfgs = cfg.build(node)
cfgs[node]
Explanation: Another useful utility is the control flow graph builder.
Of course, a CFG that fully accounts for all effects is impractical to build in a late-bound language like Python without creating an almost fully-connected graph. However, one can be reasonably built if we ignore the potential for functions to raise arbitrary exceptions.
End of explanation
from tensorflow.python.autograph.pyct import anno
from tensorflow.python.autograph.pyct import cfg
from tensorflow.python.autograph.pyct import qual_names
from tensorflow.python.autograph.pyct.static_analysis import annos
from tensorflow.python.autograph.pyct.static_analysis import liveness
def f(a):
b = a + 1
return b
node, ctx = get_node_and_ctx(f)
node = qual_names.resolve(node)
cfgs = cfg.build(node)
node = activity.resolve(node, ctx)
node = liveness.resolve(node, ctx, cfgs)
print('live into `b = a + 1`:', anno.getanno(node.body[0], anno.Static.LIVE_VARS_IN))
print('live into `return b`:', anno.getanno(node.body[1], anno.Static.LIVE_VARS_IN))
Explanation: Other useful analyses include liveness analysis. Note that these make simplifying assumptions, because in general the CFG of a Python program is a graph that's almost complete. The only robust assumption is that execution can't jump backwards.
End of explanation
from tensorflow.python.autograph.pyct import transpiler
class NoopTranspiler(transpiler.FunctionTranspiler):
def transform_ast(self, ast, transformer_context):
return ast
tr = NoopTranspiler()
Explanation: Writing a custom Python transpiler
transpiler.FunctionTranspiler is a generic class for a Python source-to-source compiler. It operates on Python ASTs. Subclasses override its transform_ast method.
Unlike the transformer module, which has an AST as input/output, the transpiler APIs accept and return actual Python objects, handling the tasks associated with parsing, unparsing and loading of code.
Here's a transpiler that does nothing:
End of explanation
def f(x, y):
return x + y
new_f, _, _ = tr.transform_function(f, None, None, {})
print(new_f(1, 1))
Explanation: The main method is transform_function, which, as its name suggests, operates on functions.
End of explanation
from tensorflow.python.autograph.pyct import parser
class HelloTranspiler(transpiler.FunctionTranspiler):
def transform_ast(self, ast, transformer_context):
print_code = parser.parse('print("Hello", name)')
ast.body = [print_code] + ast.body
return ast
def f(x, y):
pass
extra_locals = {'name': 'you'}
new_f, _, _ = HelloTranspiler().transform_function(f, None, None, extra_locals)
_ = new_f(1, 1)
import inspect
print(inspect.getsource(new_f))
Explanation: Adding new variables to the transformed code
The transformed function has the same global and local variables as the original function. You can of course generate local imports to add any new references into the generated code, but an easier method is to use the extra_locals arg of transform_function:
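As a small illustrative variation on the same call, with a different binding for the injected name:
new_f2, _, _ = HelloTranspiler().transform_function(f, None, None, {'name': 'world'})
_ = new_f2(1, 1)   # expected to print: Hello world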
End of explanation |
5,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerical differential equations
In the simplest methods for solving differential equations numerically, one goes back to the definition of the differential operator
Step1: Now let us see if we can calculate these derivatives numerically.
Step2: That worked pretty well, but we can do even better by using central differences for estimating the deriavtive.
Step3: The error is order dx for left/right derivatives, but only dx^2 for central derivatives.
Initial Value Problems
We now turn to simple ODE's. Consider the damped harmonic oscillator equation
x''(t) = - k x(t) - d x'(t)
which we can write as
x'(t) = v(t)
v'(t) = -k x(t) - d v(t).
We want to solve this with initial conditions x(0) = x0, v(0) = v0.
Step4: Exercise
(Skip this exercise if you feel confident it's easy)
Change the above code to solve
x''(t) = - k sin(x(t))
If time permits
Step5: Imagine that we had calculated y. It would be some array
Step6: How would we calculate the vector y''? In particular, can we find a matrix L such that y'' = L y?
Double derivatives can be estimated by finite differences as (see https
Step7: All looks good, expect the endpoints. This was expected. This is why we have boundary conditions.
We can write our equation y''(x) = y(x) as y''(x) - y(x) = 0 and by introducing the double derivative operator L and the identity operator I, we could also write it as
(L - I) y = 0
The idenitity operator for vectors is of course just the idenitity matrix
Step8: Where A is defining the full problem.
The problem is still not well-defined as we need to somehow implement our boundary conditions.
Let's do it, and explain afterwards
Step9: The first row extracts y[0] and the equation then sets it equal to y0(=1) and the last row extracts y[-1] and sets it equal to y1(=10).
All other lines sets double derivatives equal to zero.
Solving the above linear system of equations is easy.
Numpy linear algebra module can do it for us
Step10: It is great for testing one's code that we can compare to analytical solutions, but of course the strength of numerical methods is that we can solve equations that we do not have analytical solutions for.
Exercise
Solve
y''(x) = y'(x) + sin(x)*exp(x)
y(-pi) = 0
y(pi) = 1
The analytical solution is given below.
Step11: Extra exercise
What about Neumann boundary conditions? Try writing a solver for
y''(x) = y(x) + sin(x)*exp(x)
y(-pi) = 1
y'(pi) = 0
As a check, the correct solution has y(0) = -0.956331.
Partial differential equations
Just as PDE's are harder to solve than ODE's, they are also much harder to implement numerically.
Most courses tend to avoid these and say they "generalise from ODE's", which I would argue is far from true.
This being a maths department, PDE solving might be relevant, so we will cover this area, but focus on simple methods.
3D functions
We start by showing how to plot
u(x,y) = exp(x) sin(y)
Step12: In the above np.meshgrid has made the 2D arrays X,Y
Step13: Example problem
Let us solve the steady state heat (Laplace) equation with known temperature boundary conditions
Laplacian u(x, y) = 0
u(x, 0) = 1
u(0, y) = 1 - y
u(x, 1) = 0
u(1, y) = 0
We start by defining our domain
Step14: We wish to solve for U, but currently U, like X and Y, would be a matrix. Linear systems tend to work on vectors, so we should formulate our system such that we can solve for a vector.
For this purpose we define functions that take us from vector to matrix and vice versa.
Step15: So we define 1D arrays of our coordinates
Step16: The discrete Laplacian looks like this
Step17: As mentioned, this can be done much more efficiently (and without using loops).
But this requires that one understands how the reorganisation to vectors works.
Anyway, just to show how one could do it
Step18: Now we implement the boundary conditions
Step19: And lastly, we solve | Python Code:
import numpy as np
import matplotlib.pyplot as plt
dx = 0.3
x = np.arange(0, 10, dx) # returns [0, dx, 2dx, 3dx, 4dx, 5dx, ...]
print(x)
f1 = np.sin(x)
f2 = x**2/100
f3 = np.log(1+x)-1
fs = [f1, f2, f3]
for i in range(3): plt.plot(x, fs[i])
df1 = np.cos(x)
df2 = x/50
df3 = 1/(1+x)
dfs = [df1, df2, df3]
Explanation: Numerical differential equations
In the simplest methods for solving differential equations numerically, one goes back to the definition of the differential operator: lim dx->0 (f(t + dx) - f(t))/dx and simply takes dx small but finite, which approximates the true derivative.
We start by understanding this.
Numerical differentiation
We start by defining a few functions for which we know their analytical derivative.
End of explanation
def derivative(f, dx):
return (f[1:] - f[:-1])/dx # returned function is one value shorter than input f
ndfs = [derivative(f, dx) for f in fs]
for i in range(3): plt.plot(x[:-1], ndfs[i], lw=3, alpha=0.5)
for i in range(3): plt.plot(x, dfs[i], '--k')
plt.show()
Explanation: Now let us see if we can calculate these derivatives numerically.
End of explanation
def central_derivative(f, dx):
return (f[2:] - f[:-2])/(2*dx) # returned function is two values shorter than input f
ndfs = [central_derivative(f, dx) for f in fs]
for i in range(3): plt.plot(x[1:-1], ndfs[i], lw=3, alpha=0.5)
for i in range(3): plt.plot(x, dfs[i], '--k')
plt.show()
Explanation: That worked pretty well, but we can do even better by using central differences for estimating the derivative.
End of explanation
def analytical_solution(t, k, d, x0, v0):
d += 0j # Exploiting complex numbers for one general solution
return np.real((np.exp(-((d*t)/2))*(np.sqrt(d**2-4*k)*x0*np.cosh(1/2*np.sqrt(d**2-4*k)*t)+(2*v0+d* \
x0)*np.sinh(1/2*np.sqrt(d**2-4*k)*t)))/np.sqrt(d**2-4*k))
dt = 0.02
t = np.arange(0,25,dt)
k = 5; d = 0.3; x0 = 0; v0 = 1
x = np.zeros_like(t)
v = np.zeros_like(t)
x[0] = x0; v[0] = v0
for i in range(len(t)-1): # Step through time with step size dt
v[i + 1] = v[i] + (-k * x[i] - d * v[i]) * dt
x[i + 1] = x[i] + v[i+1] * dt # v[i+1] makes a big stability difference!
plt.plot(t, x, lw=3, alpha=0.5)
plt.plot(t, analytical_solution(t, k, d, x0, v0), '--k')
plt.show()
Explanation: The error is order dx for left/right derivatives, but only dx^2 for central derivatives.
Initial Value Problems
We now turn to simple ODE's. Consider the damped harmonic oscillator equation
x''(t) = - k x(t) - d x'(t)
which we can write as
x'(t) = v(t)
v'(t) = -k x(t) - d v(t).
We want to solve this with initial conditions x(0) = x0, v(0) = v0.
End of explanation
dx = 0.01
x = np.arange(0, 1, dx)
n = len(x)
y0 = 1
y1 = 10
Explanation: Exercise
(Skip this exercise if you feel confident it's easy)
Change the above code to solve
x''(t) = - k sin(x(t))
If time permits: look up the documentation for scipy.integrate.odeint and use that to solve the above equation. This will be much more efficient. But do other exercises first.
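A rough sketch of that odeint route for the damped oscillator above (one possible setup using the k, d, x0, v0 and t already defined; not a full solution to the exercise):
from scipy.integrate import odeint
def rhs(state, t, k, d):
    x, v = state                      # unpack position and velocity
    return [v, -k * x - d * v]        # x' = v, v' = -k x - d v
sol = odeint(rhs, [x0, v0], t, args=(k, d))
plt.plot(t, sol[:, 0])                # position over time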
Boundary Value Problems
Now we want to solve the equation
y''(x) = y(x)
y(0) = y0
y(1) = y1
We can no longer iterate forwards from a given start point as we could before, where both x(0) and x'(0) were known.
There are many methods to solve systems such as the above (shooting methods, relaxation methods, etc.), but we will use the simplest direct method here. Again we base it on finite differences.
End of explanation
y = x**4 # not correct
Explanation: Imagine that we had calculated y. It would be some array:
End of explanation
L = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) - 2 * np.diag(np.ones(n))
print(L)
L = L/dx**2
ypp = np.dot(L, y) # matrix product L y
plt.plot(x, ypp, lw=3, alpha=0.5)
plt.plot(x, 4*3*x**2, '--k')
plt.axis([0, 1, -3, 15])
plt.show()
Explanation: How would we calculate the vector y''? In particular, can we find a matrix L such that y'' = L y?
Double derivatives can be estimated by finite differences as (see https://en.wikipedia.org/wiki/Finite_difference_coefficient)
y''(x) = (y(x + dx) + y(x - dx) - 2 y(x))/dx^2
Let's build a matrix based on that:
End of explanation
I = np.eye(n)
A = L - I
Explanation: All looks good, except the endpoints. This was expected. This is why we have boundary conditions.
We can write our equation y''(x) = y(x) as y''(x) - y(x) = 0 and by introducing the double derivative operator L and the identity operator I, we could also write it as
(L - I) y = 0
The identity operator for vectors is of course just the identity matrix
End of explanation
A[0, :] = 0
A[0, 0] = 1
A[-1, :] = 0
A[-1, -1] = 1
b = np.zeros(n)
b[0] = y0
b[-1] = y1
np.set_printoptions(suppress=True)
print(np.round(A))
print('* y == ')
print(b)
Explanation: Here A defines the full problem.
The problem is still not well-defined as we need to somehow implement our boundary conditions.
Let's do it, and explain afterwards:
End of explanation
from numpy.linalg import solve
y = solve(A, b)
analytical = (np.exp(-x)*(np.exp(1)*(np.exp(1)*y0-y1)+np.exp(2*x)*(-y0+np.exp(1)*y1)))/(-1+np.exp(2))
plt.plot(x, y, lw=3, alpha=0.5)
plt.plot(x, analytical, '--k')
plt.show()
Explanation: The first row extracts y[0] and the equation then sets it equal to y0(=1) and the last row extracts y[-1] and sets it equal to y1(=10).
All other rows set the double derivative equal to zero.
Solving the above linear system of equations is easy.
NumPy's linear algebra module can do it for us:
End of explanation
dx = 0.01
x = np.arange(-np.pi, np.pi, dx)
y = 1/4*np.exp(-np.pi)/np.sinh(np.pi)*(-2+np.exp(x)*(1+2*np.exp(np.pi)+np.cos(x)+ \
np.sin(x)-np.exp(2*(np.pi))*(1+np.cos(x)+np.sin(x))))
plt.plot(x, y, '--k')
plt.axis([-np.pi, np.pi, -6, 1.5])
plt.show()
Explanation: It is great for testing one's code that we can compare to analytical solutions, but of course the strength of numerical methods is that we can solve equations that we do not have analytical solutions for.
Exercise
Solve
y''(x) = y'(x) + sin(x)*exp(x)
y(-pi) = 0
y(pi) = 1
The analytical solution is given below.
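One possible hint for handling the extra y'(x) term (an illustrative central-difference choice; assembling the system and the boundary rows is left to the exercise):
D = (np.diag(np.ones(len(x) - 1), 1) - np.diag(np.ones(len(x) - 1), -1))/(2*dx)  # first-derivative operator on this grid
# One would then rebuild the second-derivative matrix L for this grid, assemble A = L - D,
# set b = np.sin(x)*np.exp(x) in the interior, and overwrite the first and last rows with the boundary conditions as before.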
End of explanation
dx = 0.01
x = np.arange(0, 7, dx)
y = np.arange(-3, 3, dx)
X, Y = np.meshgrid(x, y)
U = np.exp(X) * np.sin(Y)
plt.imshow(U, extent=(min(x), max(x), max(y), min(y)))
plt.xlabel('x')
plt.ylabel('y')
plt.colorbar()
plt.show()
Explanation: Extra exercise
What about Neumann boundary conditions? Try writing a solver for
y''(x) = y(x) + sin(x)*exp(x)
y(-pi) = 1
y'(pi) = 0
As a check, the correct solution has y(0) = -0.956331.
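One hedged hint for the Neumann condition y'(pi) = 0, using a first-order one-sided difference (an illustrative choice, not the only one):
# In the boundary-value solver, replace the last row so it approximates y'(x_end) instead of y(x_end)
A[-1, :] = 0
A[-1, -1] = 1/dx
A[-1, -2] = -1/dx
b[-1] = 0   # enforces y'(pi) = 0 to first order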
Partial differential equations
Just as PDE's are harder to solve than ODE's, they are also much harder to implement numerically.
Most courses tend to avoid these and say they "generalise from ODE's", which I would argue is far from true.
This being a maths department, PDE solving might be relevant, so we will cover this area, but focus on simple methods.
3D functions
We start by showing how to plot
u(x,y) = exp(x) sin(y)
End of explanation
print(X.shape, Y.shape, U.shape)
print(X)
Explanation: In the above np.meshgrid has made the 2D arrays X,Y:
End of explanation
dx = 0.02
x = np.arange(0, 1 + dx, dx)
m = len(x)
print(x[0], x[-1])
X, Y = np.meshgrid(x, x)
shape = X.shape
Explanation: Example problem
Let us solve the steady state heat (Laplace) equation with known temperature boundary conditions
Laplacian u(x, y) = 0
u(x, 0) = 1
u(0, y) = 1 - y
u(x, 1) = 0
u(1, y) = 0
We start by defining our domain:
End of explanation
def to_vector(mat):
return np.ravel(mat)
def to_matrix(vec):
return np.reshape(vec, shape)
print(X.shape, '=>', to_vector(X).shape)
print((X == to_matrix(to_vector(X))).all())
print((Y == to_matrix(to_vector(Y))).all())
Explanation: We wish to solve for U, but currently U, like X and Y, would be a matrix. Linear systems tend to work on vectors, so we should formulate our system such that we can solve for a vector.
For this purpose we define functions that take us from vector to matrix and vice versa.
End of explanation
x = to_vector(X)
y = to_vector(Y)
n = len(x)
Explanation: So we define 1D arrays of our coordinates:
End of explanation
L = np.zeros((n, n))
for i in range(n):
L[i,i] = -4
j = np.argmin( (x[i] + dx - x)**2 + (y[i] - y)**2 ) # Find index j in vectors for point (x[i]+dx, y[i])
if i!=j: L[i,j] = 1 # If i==j, we are at the boundary of the domain
j = np.argmin( (x[i] - dx - x)**2 + (y[i] - y)**2 ) # Find index j in vectors for point (x[i]-dx, y[i])
if i!=j: L[i,j] = 1
j = np.argmin( (x[i] - x)**2 + (y[i] + dx - y)**2 ) # Find index j in vectors for point (x[i], y[i]+dx)
if i!=j: L[i,j] = 1
j = np.argmin( (x[i] - x)**2 + (y[i] - dx - y)**2 ) # Find index j in vectors for point (x[i], y[i]-dx)
if i!=j: L[i,j] = 1
print(L)
L = L/dx**2
Explanation: The discrete Laplacian looks like this:
0 1 0
1 -4 1
0 1 0
divided by dx^2
There are smart ways to make this, but it depends on how you form the vectors.
We should not dwell too long on the details of how we've done it here (ravel, reshape), but instead we will build the Laplacian in a silly way using a loop. This way will work no matter how scrambled the coordinates are in the vector.
End of explanation
L_quick = -4 * np.eye(n) + np.diag(np.ones(n-m), m) + np.diag(np.ones(n-m), -m)
a = np.ones(n-1); a[(m-1)::m] = 0
L_quick += np.diag(a,1) + np.diag(a,-1)
L_quick = L_quick/dx**2
print( (L == L_quick).all() )
Explanation: As mentioned, this can be done much more efficiently (and without using loops).
But this requires that one understands how the reorganisation to vectors works.
Anyway, just to show how one could do it:
End of explanation
b = np.zeros(n)
for i in range(n):
if (x[i]==0 or x[i]==1 or y[i]==0 or y[i]==1): # For any boundary point
L[i, :] = 0
L[i, i] = 1
# BC points that are not equal to zero:
if x[i] == 0:
b[i] = 1 - y[i]
elif y[i] == 0:
b[i] = 1
Explanation: Now we implement the boundary conditions:
End of explanation
from scipy.linalg import solve
u = solve(L, b)
U = to_matrix(u)
plt.imshow(U, extent=(min(x), max(x), max(y), min(y)))
plt.xlabel('x')
plt.ylabel('y')
plt.colorbar()
plt.title('Temperature distribution of plate')
plt.show()
Explanation: And lastly, we solve:
End of explanation |
5,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic Pricing Game
Notebook for testing your algorithms
You can use this notebook to run your buyer/seller algorithms and compare them to the naive ones provided.
In the below code, make any necessary changes as indicated by the TODO comments.
Step1: If everything works you can test your algorithms below | Python Code:
import sys
import os
import matplotlib.pyplot as plt
import numpy.random as rn
import numpy as np
%matplotlib inline
# TODO: write the path to the root directory of the simulation game code below.
# It should have a README.md file under it and 'simulation_game', 'simulation_algos', 'test directories' under it.
path_to_game_code = "<TODO-CHANGE-THIS>/DynamicPricingGame"
sys.path.append(path_to_game_code)
os.environ['PYTHONPATH'] = path_to_game_code
from simulation_game.simulation import simulate
# TODO: you need to change the name of the file and the classes!
from simulation_algos.teamname import TeamNameBuyer, TeamNameSeller
from simulation_game.buyer import MyopicBuyer
from simulation_game.seller import DummySeller
Explanation: Dynamic Pricing Game
Notebook for testing your algorithms
You can use this notebook to run your buyer/seller algorithms and compare them to the naive ones provided.
In the below code, make any necessary changes as indicated by the TODO comments.
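An optional, illustrative sanity check to run once the TODO path has been filled in:
print(os.path.isdir(path_to_game_code))   # should print True when path_to_game_code points at the repository root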
End of explanation
# Set these parameters how you want for testing
price_scale = 3
mc_trials = 100
x_0 = 3
horizon = 4
# Set the seed so that you get the same random number sequences in every run (makes it easier to debug and test)
rn.seed(123)
# Create your buyer and seller objects
your_buyer = TeamNameBuyer()
your_seller = TeamNameSeller()
# Algorithms to compare your ones against. These are the simple ones that we provide for you
simple_buyer = MyopicBuyer("MyopicBuyer")
simple_seller = DummySeller("DummySeller")
# 2 teams are formed. One team is you and the other is the simple A.I.
teams = [(your_buyer, your_seller), (simple_buyer, simple_seller)]
# Simulate and print the mean revenue and consumer surplus for the two teams (where the order is the same as how
# the teams were defined)
mean_revenue, mean_cs = simulate(teams, horizon, x_0, mc_trials, price_scale)
print(mean_revenue, mean_cs)
Explanation: If everything works you can test your algorithms below:
End of explanation |
5,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Built-in linear stability analysis
Step1: To perform linear stability analysis, we simply call pyqg's built-in method stability_analysis
Step2: The eigenvalues are stored in omg, and the eigenctors in evec. For plotting purposes, we use fftshift to reorder the entries
Step3: It is also useful to analyze the fasted-growing mode
Step4: By default, the stability analysis above is performed without bottom friction, but the stability method also supports bottom friction
Step5: Plotting growth rates
Step6: Plotting the wavestructure of the most unstable modes | Python Code:
import numpy as np
from numpy import pi
import matplotlib.pyplot as plt
%matplotlib inline
import pyqg
m = pyqg.LayeredModel(nx=256, nz = 2, U = [.01, -.01], V = [0., 0.], H = [1., 1.],
L=2*pi,beta=1.5, rd=1./20., rek=0.05, f=1.,delta=1.)
Explanation: Built-in linear stability analysis
End of explanation
evals,evecs = m.stability_analysis()
Explanation: To perform linear stability analysis, we simply call pyqg's built-in method stability_analysis:
End of explanation
evals = np.fft.fftshift(evals.imag,axes=(0,))
k,l = m.k*m.radii[1], np.fft.fftshift(m.l,axes=(0,))*m.radii[1]
Explanation: The eigenvalues are stored in evals, and the eigenvectors in evecs. For plotting purposes, we use fftshift to reorder the entries
End of explanation
argmax = evals[m.Ny/2,:].argmax()
evec = np.fft.fftshift(evecs,axes=(1))[:,m.Ny/2,argmax]
kmax = k[m.Ny/2,argmax]
x = np.linspace(0,4.*pi/kmax,100)
mag, phase = np.abs(evec), np.arctan2(evec.imag,evec.real)
Explanation: It is also useful to analyze the fastest-growing mode:
End of explanation
evals_fric, evecs_fric = m.stability_analysis(bottom_friction=True)
evals_fric = np.fft.fftshift(evals_fric.imag,axes=(0,))
argmax = evals_fric[m.Ny/2,:].argmax()
evec_fric = np.fft.fftshift(evecs_fric,axes=(1))[:,m.Ny/2,argmax]
kmax_fric = k[m.Ny/2,argmax]
mag_fric, phase_fric = np.abs(evec_fric), np.arctan2(evec_fric.imag,evec_fric.real)
Explanation: By default, the stability analysis above is performed without bottom friction, but the stability method also supports bottom friction:
End of explanation
plt.figure(figsize=(14,4))
plt.subplot(121)
plt.contour(k,l,evals,colors='k')
plt.pcolormesh(k,l,evals,cmap='Blues')
plt.colorbar()
plt.xlim(0,2.); plt.ylim(-2.,2.)
plt.clim([0.,.1])
plt.xlabel(r'$k \, L_d$'); plt.ylabel(r'$l \, L_d$')
plt.title('without bottom friction')
plt.subplot(122)
plt.contour(k,l,evals_fric,colors='k')
plt.pcolormesh(k,l,evals_fric,cmap='Blues')
plt.colorbar()
plt.xlim(0,2.); plt.ylim(-2.,2.)
plt.clim([0.,.1])
plt.xlabel(r'$k \, L_d$'); plt.ylabel(r'$l \, L_d$')
plt.title('with bottom friction')
plt.figure(figsize=(8,4))
plt.plot(k[m.Ny/2,:],evals[m.Ny/2,:],'b',label='without bottom friction')
plt.plot(k[m.Ny/2,:],evals_fric[m.Ny/2,:],'b--',label='with bottom friction')
plt.xlim(0.,2.)
plt.legend()
plt.xlabel(r'$k\,L_d$')
plt.ylabel(r'Growth rate')
Explanation: Plotting growth rates
End of explanation
plt.figure(figsize=(12,5))
plt.plot(x,mag[0]*np.cos(kmax*x + phase[0]),'b',label='Layer 1')
plt.plot(x,mag[1]*np.cos(kmax*x + phase[1]),'g',label='Layer 2')
plt.plot(x,mag_fric[0]*np.cos(kmax_fric*x + phase_fric[0]),'b--')
plt.plot(x,mag_fric[1]*np.cos(kmax_fric*x + phase_fric[1]),'g--')
plt.legend(loc=8)
plt.xlabel(r'$x/L_d$'); plt.ylabel(r'$y/L_d$')
Explanation: Plotting the wavestructure of the most unstable modes
End of explanation |
5,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Hipster Effect
Step2: This gives us a nice way to move from our preference $x_i$ to a probability of switching styles. Here $\beta$ is inversely related to noise. For large $\beta$, the noise is small and we basically map $x > 0$ to a 100% probability of switching, and $x<0$ to a 0% probability of switching. As $\beta$ gets smaller, the probabilities get less and less distinct.
The Code
Let's see this model in action. We'll start by defining a class which implements everything we've gone through above
Step3: Now we'll create a function which will return an instance of the HipsterStep class with the appropriate settings
Step4: Exploring this data
Now that we've defined the simulation, we can start exploring this data. I'll quickly demonstrate how to advance simulation time and get the results.
First we initialize the model with a certain fraction of hipsters
Step5: To run the simulation a number of steps we execute sim.step(Nsteps) giving us a matrix of identities for each invidual at each timestep.
Step6: Now we can simply go right ahead and visualize this data using an Image Element type, defining the dimensions and bounds of the space.
Step7: Now that you know how to run the simulation and access the data have a go at exploring the effects of different parameters on the population dynamics or apply some custom analyses to this data. Here are two quick examples of what you can do | Python Code:
import numpy as np
import holoviews as hv
hv.notebook_extension(bokeh=True, width=90)
%%output backend='matplotlib'
%%opts NdOverlay [aspect=1.5 figure_size=200 legend_position='top_left']
x = np.linspace(-1, 1, 1000)
curves = hv.NdOverlay(key_dimensions=['$\\beta$'])
for beta in [0.1, 0.5, 1, 5]:
curves[beta] = hv.Curve(zip(x, 0.5 * (1 + np.tanh(beta * x))), kdims=['$x$'],
vdims=['$\\phi(x;\\beta)$'])
curves
Explanation: The Hipster Effect: An IPython Interactive Exploration
This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed. It has been adapted to use HoloViews by Philipp Rudiger.
This week I started seeing references all over the internet to this paper: The Hipster Effect: When Anticonformists All Look The Same. It essentially describes a simple mathematical model which models conformity and non-conformity among a mutually interacting population, and finds some interesting results: namely, conformity among a population of self-conscious non-conformists is similar to a phase transition in a time-delayed thermodynamic system. In other words, with enough hipsters around responding to delayed fashion trends, a plethora of facial hair and fixed gear bikes is a natural result.
Also naturally, upon reading the paper I wanted to try to reproduce the work. The paper solves the problem analytically for a continuous system and shows the precise values of certain phase transitions within the long-term limit of the postulated system. Though such theoretical derivations are useful, I often find it more intuitive to simulate systems like this in a more approximate manner to gain hands-on understanding.
Mathematically Modeling Hipsters
We'll start by defining the problem, and going through the notation suggested in the paper. We'll consider a group of $N$ people, and define the following quantities:
$\epsilon_i$ : this value is either $+1$ or $-1$. $+1$ means person $i$ is a hipster, while $-1$ means they're a conformist.
$s_i(t)$ : this is also either $+1$ or $-1$. This indicates person $i$'s choice of style at time $t$. For example, $+1$ might indicate a bushy beard, while $-1$ indicates clean-shaven.
$J_{ij}$ : The influence matrix. This is a value greater than zero which indicates how much person $j$ influences person $i$.
$\tau_{ij}$ : The delay matrix. This is an integer telling us the length of delay for the style of person $j$ to affect the style of person $i$.
The idea of the model is this: on any given day, person $i$ looks at the world around him or her, and sees some previous day's version of everyone else. This information is $s_j(t - \tau_{ij})$.
The amount that person $j$ influences person $i$ is given by the influence matrix, $J_{ij}$, and after putting all the information together, we see that person $i$'s mean impression of the world's style is
$$
m_i(t) = \frac{1}{N} \sum_j J_{ij} \cdot s_j(t - \tau_{ij})
$$
Given the problem setup, we can quickly check whether this impression matches their own current style:
if $m_i(t) \cdot s_i(t) > 0$, then person $i$ matches those around them
if $m_i(t) \cdot s_i(t) < 0$, then person $i$ looks different than those around them
A hipster who notices that their style matches that of the world around them will risk giving up all their hipster cred if they don't change quickly; a conformist will have the opposite reaction. Because $\epsilon_i$ = $+1$ for a hipster and $-1$ for a conformist, we can encode this observation in a single value which tells us which way the person will lean that day:
$$
x_i(t) = -\epsilon_i m_i(t) s_i(t)
$$
Simple! If $x_i(t) > 0$, then person $i$ will more likely switch their style that day, and if $x_i(t) < 0$, person $i$ will more likely maintain the same style as the previous day. So we have a formula for how to update each person's style based on their preferences, their influences, and the world around them.
But the world is a noisy place. Each person might have other things going on that day, so instead of using this value directly, we can turn it into a probabilistic statement. Consider the function
$$
\phi(x;\beta) = \frac{1 + \tanh(\beta \cdot x)}{2}
$$
We can plot this function quickly:
End of explanation
class HipsterStep(object):
Class to implement hipster evolution
Parameters
----------
initial_style : length-N array
values > 0 indicate one style, while values <= 0 indicate the other.
is_hipster : length-N array
True or False, indicating whether each person is a hipster
influence_matrix : N x N array
Array of non-negative values. influence_matrix[i, j] indicates
how much influence person j has on person i
delay_matrix : N x N array
Array of positive integers. delay_matrix[i, j] indicates the
number of days delay between person j's influence on person i.
def __init__(self, initial_style, is_hipster,
influence_matrix, delay_matrix,
beta=1, rseed=None):
self.initial_style = initial_style
self.is_hipster = is_hipster
self.influence_matrix = influence_matrix
self.delay_matrix = delay_matrix
self.rng = np.random.RandomState(rseed)
self.beta = beta
# make s array consisting of -1 and 1
self.s = -1 + 2 * (np.atleast_2d(initial_style) > 0)
N = self.s.shape[1]
# make eps array consisting of -1 and 1
self.eps = -1 + 2 * (np.asarray(is_hipster) > 0)
# create influence_matrix and delay_matrix
self.J = np.asarray(influence_matrix, dtype=float)
self.tau = np.asarray(delay_matrix, dtype=int)
# validate all the inputs
assert self.s.ndim == 2
assert self.s.shape[1] == N
assert self.eps.shape == (N,)
assert self.J.shape == (N, N)
assert np.all(self.J >= 0)
assert np.all(self.tau > 0)
@staticmethod
def phi(x, beta):
return 0.5 * (1 + np.tanh(beta * x))
def step_once(self):
N = self.s.shape[1]
# iref[i, j] gives the index for the j^th individual's
# time-delayed influence on the i^th individual
iref = np.maximum(0, self.s.shape[0] - self.tau)
# sref[i, j] gives the previous state of the j^th individual
# which affects the current state of the i^th individual
sref = self.s[iref, np.arange(N)]
# m[i] is the mean of weighted influences of other individuals
m = (self.J * sref).sum(1) / self.J.sum(1)
# From m, we use the sigmoid function to compute a transition probability
transition_prob = self.phi(-self.eps * m * self.s[-1], beta=self.beta)
# Now choose steps stochastically based on this probability
new_s = np.where(transition_prob > self.rng.rand(N), -1, 1) * self.s[-1]
# Add this to the results, and return
self.s = np.vstack([self.s, new_s])
return self.s
def step(self, N):
for i in range(N):
self.step_once()
return self.s
Explanation: This gives us a nice way to move from our preference $x_i$ to a probability of switching styles. Here $\beta$ is inversely related to noise. For large $\beta$, the noise is small and we basically map $x > 0$ to a 100% probability of switching, and $x<0$ to a 0% probability of switching. As $\beta$ gets smaller, the probabilities get less and less distinct.
The Code
Let's see this model in action. We'll start by defining a class which implements everything we've gone through above:
End of explanation
def get_sim(Npeople=500, hipster_frac=0.8, initial_state_frac=0.5, delay=20, log10_beta=0.5, rseed=42):
rng = np.random.RandomState(rseed)
initial_state = (rng.rand(1, Npeople) > initial_state_frac)
is_hipster = (rng.rand(Npeople) > hipster_frac)
influence_matrix = abs(rng.randn(Npeople, Npeople))
influence_matrix.flat[::Npeople + 1] = 0
delay_matrix = 1 + rng.poisson(delay, size=(Npeople, Npeople))
return HipsterStep(initial_state, is_hipster, influence_matrix, delay_matrix=delay_matrix,
beta=10 ** log10_beta, rseed=rseed)
Explanation: Now we'll create a function which will return an instance of the HipsterStep class with the appropriate settings:
End of explanation
sim = get_sim(hipster_frac=0.8)
Explanation: Exploring this data
Now that we've defined the simulation, we can start exploring this data. I'll quickly demonstrate how to advance simulation time and get the results.
First we initialize the model with a certain fraction of hipsters:
End of explanation
result = sim.step(200)
result
Explanation: To run the simulation a number of steps, we execute sim.step(Nsteps), giving us a matrix of identities for each individual at each timestep.
End of explanation
%%opts Image [width=600]
hv.Image(result.T, bounds=(0, 0, 100, 500),
kdims=['Time', 'individual'], vdims=['State'])
Explanation: Now we can simply go right ahead and visualize this data using an Image Element type, defining the dimensions and bounds of the space.
End of explanation
%%opts Curve [width=350] Image [width=350]
hipster_frac = hv.HoloMap(kdims=['Hipster Fraction'])
for i in np.linspace(0.1, 1, 10):
sim = get_sim(hipster_frac=i)
hipster_frac[i] = hv.Image(sim.step(200).T, (0, 0, 500, 500), group='Population Dynamics',
kdims=['Time', 'individual'], vdims=['Bearded'])
(hipster_frac + hipster_frac.reduce(individual=np.mean).to.curve('Time', 'Bearded'))
%%opts Overlay [width=600] Curve (color='black')
aggregated = hipster_frac.table().aggregate(['Time', 'Hipster Fraction'], np.mean, np.std)
aggregated.to.curve('Time') * aggregated.to.errorbars('Time')
Explanation: Now that you know how to run the simulation and access the data, have a go at exploring the effects of different parameters on the population dynamics, or apply some custom analyses to this data. Here are two quick examples of what you can do:
End of explanation |
5,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM5-0
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
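For instance, a filled-in call might look like the commented line below (placeholder values only, not actual author information):
# DOC.set_author("Jane Doe", "jane.doe@example.org")   # illustrative placeholder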
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
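As a purely illustrative example of the call format (not a statement about this model), one of the listed choices could be set like this:
# DOC.set_value("whole atmosphere")   # illustrative only; record the value(s) that actually apply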
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
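# Purely illustrative (not values for any particular model): a completed cell would
# call DOC.set_value once per applicable choice from the list above, for example:
# DOC.set_value("Dry deposition")
# DOC.set_value("Sedimentation")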
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
5,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to Bayesian Statistical Analysis
Though many of you will have taken a statistics course or two during your undergraduate (or graduate) education, most of those who have will likely not have had a course in Bayesian statistics. Most introductory courses, particularly for non-statisticians, still do not cover Bayesian methods at all. Even today, Bayesian courses (similarly to statistical computing courses!) are typically tacked onto the curriculum, rather than being integrated into the program.
In fact, Bayesian statistics is not just a particular method, or even a class of methods; it is an entirely different paradigm for doing statistical analysis.
Practical methods for making inferences from data using probability models for quantities we observe and about which we wish to learn.
-- Gelman et al. 2013
A Bayesian model is described by parameters; uncertainty in those parameters is described using probability distributions.
All conclusions from Bayesian statistical procedures are stated in terms of probability statements
This confers several benefits to the analyst, including
Step1: Bayesian Computation
Bayesian analysis usually requires integration over multiple dimensions that is intractable both via analytic methods and via standard methods of numerical integration.
Following the steps for setting up a Bayesian model, after observing data $y$ that we hypothesize as having been obtained from a sampling model $f(y|\theta)$, we then place a prior distribution $p(\theta)$ on the parameters to describe the uncertainty in their true values. We then obtain inference by calculating the posterior distribution, which is proportional to the product of these quantities
Step2: If we use a simple binomial model, which assumes independent samples from a binomial distribution with probability of mortality $p$, we can use MLE to obtain an estimate of this probability.
$$L(p \mid n, y) = \frac{n!}{y!(n-y)!} p^y (1-p)^{n-y}$$
$$\hat{p} = \frac{y}{n}$$
Step3: However, if we compare the variation of $y$ under this model, it is too small relative to the observed variation
Step4: $$Var(y) = np(1-p)$$
Step5: Hence, the data are strongly overdispersed relative to what is predicted under a model with a fixed probability of death. A more realistic model would allow for these probabilities to vary among the cities.
One way of representing this is conjugating the binomial distribution with another distribution that describes the variation in the binomial probability. A sensible choice for this is the beta distribution
Step6: Now, by multiplying these quantities together, we can obtain a non-normalized posterior.
$$p(K, \eta | \mathbf{y}) \propto \frac{1}{(1+K)^2} \frac{1}{\eta(1-\eta)} \prod_i \frac{B(K\eta+y_i, K(1-\eta) + n_i - y_i)}{B(K\eta, K(1-\eta))}$$
This can be calculated in Python as follows (log-transformed)
Step7: An easy (though computationally expensive) way of getting the joint posterior distribution of the parameters is to evaluate betabin_post on a grid of parameter values.
Step8: This is fine, but the precision parameter $K$ is heavily skewed.
To deal with the skewness in $K$ and to facilitate modeling, we can transform the beta-binomial parameters to the real line via
Step9: Normal Approximation
An alternative approach to summarizing a $p$-dimensional posterior distribution involves estimating the mode of the posterior, and approximating the density as having a multivariate normal distribution.
$$p(x \mid \theta, \Sigma) = (2\pi)^{-p/2}|\Sigma|^{-1/2} \exp\left\{ -\frac{1}{2} (x-\theta)^{\prime}\Sigma^{-1}(x-\theta) \right\}$$
which, when log-transformed, becomes
Step10: Thus, our approximated mode is $\log(K)=7.6$, $\text{logit}(\eta)=-6.8$. We can plug this value, along with the variance-covariance matrix, into a function that returns the kernel of a multivariate normal distribution, and use this to plot the approximate posterior
Step11: Along with this, we can estimate a 95% probability interval for the estimated mode
Step12: Of course, this approximation is only reasonable for posteriors that are not strongly skewed, bimodal, or leptokurtic (heavy-tailed).
Monte Carlo Methods
It is often possible to compute integrals required by Bayesian analysis by simulating
(drawing samples) from posterior distributions. For example, consider the expected value of a random variable $\mathbf{x}$
Step13: This approach is useful, for example, in estimating the normalizing constant for posterior distributions.
If $f(x)$ has unbounded support (i.e. infinite tails), such as a Gaussian distribution, a bounding box is no longer appropriate. We must specify a majorizing (or, enveloping) function, $g(x)$, which implies
Step14: Finally, we need an implementation of the multivariate T probability distribution function, which is as follows
Step15: The next step is to find the constant $c$ that ensures
Step16: We can calculate an appropriate value of $c'$ by simply using the approximation method described above on calc_diff (tweaked to produce a negative value for minimization)
Step17: Now we can execute a rejection sampling algorithm
Step18: Notice that the efficiency of rejection sampling is not very high for this problem.
Step19: Rejection sampling is usually subject to declining performance as the dimension of the parameter space increases. Further improvement is gained by using optimized algorithms such as importance sampling which, as the name implies, samples more frequently from important areas of the distribution.
Importance Sampling
As we have seen, the primary difficulty in Bayesian inference is calculating the posterior density for models of moderate-to-high dimension. For example, calculating the posterior mean of some function $h$ requires two difficult integration steps
Step20: We can obtain the probability of these values under the posterior density
Step21: and under the T distribution
Step22: This allows us to calculate the importance weights
Step23: Notice that we have subtracted the maximum value of the differences; this stabilizes the weights numerically, and they are normalized when divided by their sum.
Now, we can obtain estimates of the parameters
Step24: Finally, the standard error of the estimates
Step25: Sampling Importance Resampling
The importance sampling method can be modified to incorporate weighted bootstrapping, in a procedure called sampling importance resampling (SIR). As previously, we obtain a sample of size $M$ from an importance sampling distribution $q$ and calculate the corresponding weights $w(\theta_i) = p(\theta|y) / q(\theta)$.
Instead of directly re-weighting the samples from $q$, SIR instead transforms the weights into probabilities via
Step26: The choice function in numpy.random can be used to generate a random sample from an arbitrary 1-D array.
Step27: One advantage of this approach is that one can easily extract a posterior probability interval for each parameter, simply by extracting quantiles from the resampled values.
Step28: Exercise | Python Code:
from scipy.stats import binom
# Binomial probability mass function
yvals = range(10+1)
plt.plot(yvals, binom.pmf(yvals, 10, 0.5), 'ro')
# Binomial likelihood function
pvals = np.linspace(0, 1)
y = 4
plt.plot(pvals, binom.pmf(y, 10, pvals));
Explanation: An Introduction to Bayesian Statistical Analysis
Though many of you will have taken a statistics course or two during your undergraduate (or graduate) education, most of those who have will likely not have had a course in Bayesian statistics. Most introductory courses, particularly for non-statisticians, still do not cover Bayesian methods at all. Even today, Bayesian courses (similarly to statistical computing courses!) are typically tacked onto the curriculum, rather than being integrated into the program.
In fact, Bayesian statistics is not just a particular method, or even a class of methods; it is an entirely different paradigm for doing statistical analysis.
Practical methods for making inferences from data using probability models for quantities we observe and about which we wish to learn.
-- Gelman et al. 2013
A Bayesian model is described by parameters; uncertainty in those parameters is described using probability distributions.
All conclusions from Bayesian statistical procedures are stated in terms of probability statements
This confers several benefits to the analyst, including:
ease of interpretation, summarization of uncertainty
can incorporate uncertainty in parent parameters
easy to calculate summary statistics
Bayesian vs Frequentist Statistics: What's the difference?
Any statistical paradigm, Bayesian or otherwise, involves at least the following:
Some unknown quantities about which we are interested in learning or testing. We call these parameters.
Some data which have been observed, and hopefully contain information about (1).
One or more models that relate the data to the parameters; these are the instruments used to learn.
The Frequentist World View
The data that have been observed are considered random, because they are realizations of random processes, and hence will vary each time one goes to observe the system.
Model parameters are considered fixed. A parameter's true value is unknown and fixed, and so we condition on them.
In mathematical notation, this implies a (very) general model of the following form:
<div style="font-size:35px">
\\[f(y | \theta)\\]
</div>
Here, the model \(f\) accepts data values \(y\) as an argument, conditional on particular values of \(\theta\).
Frequentist inference typically involves deriving estimators for the unknown parameters. Estimators are formulae that return estimates for particular estimands, as a function of data. They are selected based on some chosen optimality criterion, such as unbiasedness, variance minimization, or efficiency.
For example, let's say that we have collected some data on the prevalence of autism spectrum disorder (ASD) in some defined population. Our sample includes \(n\) sampled children, \(y\) of them having been diagnosed with autism. A frequentist estimator of the prevalence \(p\) is:
<div style="font-size:25px">
\[\hat{p} = \frac{y}{n}\]
</div>
Why this particular function? Because it can be shown to be unbiased and minimum-variance.
It is important to note that, in a frequentist world, new estimators need to be derived for every estimand that is introduced.
The Bayesian World View
Data are considered fixed. They used to be random, but once they were written into your lab notebook/spreadsheet/IPython notebook they do not change.
Model parameters themselves may not be random, but Bayesians use probability distribtutions to describe their uncertainty in parameter values, and are therefore treated as random. In some cases, it is useful to consider parameters as having been sampled from probability distributions.
This implies the following form:
<div style="font-size:35px">
\\[p(\theta | y)\\]
</div>
This formulation used to be referred to as inverse probability, because it infers from observations to parameters, or from effects to causes.
Bayesians do not seek new estimators for every estimation problem they encounter. There is only one estimator for Bayesian inference: Bayes' Formula.
Bayesian Inference, in 3 Easy Steps
Gelman et al. (2013) describe the process of conducting Bayesian statistical analysis in 3 steps.
Step 1: Specify a probability model
As was noted above, Bayesian statistics involves using probability models to solve problems. So, the first task is to completely specify the model in terms of probability distributions. This includes everything: unknown parameters, data, covariates, missing data, predictions. All must be assigned some probability density.
This step involves making choices.
what is the form of the sampling distribution of the data?
what form best describes our uncertainty in the unknown parameters?
Step 2: Calculate a posterior distribution
The mathematical form \(p(\theta | y)\) that we associated with the Bayesian approach is referred to as a posterior distribution.
posterior /pos·ter·i·or/ (pos-tēr´e-er) later in time; subsequent.
Why posterior? Because it tells us what we know about the unknown \(\theta\) after having observed \(y\).
This posterior distribution is formulated as a function of the probability model that was specified in Step 1. Usually, we can write it down but we cannot calculate it analytically. In fact, the difficulty inherent in calculating the posterior distribution for most models of interest is perhaps the major contributing factor for the lack of widespread adoption of Bayesian methods for data analysis. Various strategies for doing so comprise this tutorial.
But, once the posterior distribution is calculated, you get a lot for free:
point estimates
credible intervals
quantiles
predictions
Step 3: Check your model
Though frequently ignored in practice, it is critical that the model and its outputs be assessed before using the outputs for inference. Models are specified based on assumptions that are largely unverifiable, so the least we can do is examine the output in detail, relative to the specified model and the data that were used to fit the model.
Specifically, we must ask:
does the model fit data?
are the conclusions reasonable?
are the outputs sensitive to changes in model structure?
Probability
Misunderstanding of probability may be the greatest of all impediments to scientific literacy.
— Stephen Jay Gould
Because of its reliance on probability models, it's worth talking a little bit about probability. There are different ways to define probability, depending on how it is being used. In fact, Bayesian statistics invokes an additional definition of probability that is not used elsewhere.
1. Classical probability
<div style="font-size:25px">
\\[Pr(X=x) = \frac{\text{# x outcomes}}{\text{# possible outcomes}}\\]
</div>
Classical probability is an assessment of possible outcomes of elementary events. Elementary events are assumed to be equally likely.
2. Frequentist probability
<div style="font-size:25px">
\\[Pr(X=x) = \lim_{n \rightarrow \infty} \frac{\text{# times x has occurred}}{\text{# independent and identical trials}}\\]
</div>
Unlike classical probability, frequentist probability is an EMPIRICAL definition. It is an objective statement describing events that have occurred.
3. Subjective probability
<div style="font-size:25px">
\\[Pr(X=x)\\]
</div>
Subjective probability is a measure of one's uncertainty in the value of \(X\). It characterizes the state of knowledge regarding some unknown quantity using probability.
It is not associated with long-term frequencies nor with equal-probability events.
For example:
X = the true prevalence of diabetes in Austin is < 15%
X = the blood type of the person sitting next to you is type A
X = the Nashville Predators will win next year's Stanley Cup
X = it is raining in Nashville
Bayes' Formula
Now that we have some probability under our belt, we turn to Bayes' formula. While frequentist statistics uses different estimators for different problems, Bayes formula is the only estimator that Bayesians need to obtain estimates of unknown quantities that we care about. It turns out to be straightforward to derive Bayes' formula directly from the definition of conditional probability.
Recall that the goal in Bayesian inference is to calculate the posterior distribution of our unknowns:
<div style="font-size: 150%;">
\\[Pr(\theta|Y=y)\\]
</div>
This expression is a conditional probability. It is the probability of \(\theta\) given the observed values of \(Y=y\).
In general, the conditional probability of A given B is defined as follows:
\[Pr(B|A) = \frac{Pr(A \cap B)}{Pr(A)}\]
To gain an intuition for this, it is helpful to use a Venn diagram:
Notice from this diagram that the following conditional probability is also true:
\[Pr(A|B) = \frac{Pr(A \cap B)}{Pr(B)}\]
These can both be rearranged to be expressions of the joint probability of A and B. Setting these equal to one another:
\[Pr(B|A)Pr(A) = Pr(A|B)Pr(B)\]
Then rearranging:
\[Pr(B|A) = \frac{Pr(A|B)Pr(B)}{Pr(A)}\]
This is Bayes' formula. Replacing the generic A and B with things we care about reveals why Bayes' formula is so important:
\[Pr(\theta|y) = \frac{Pr(y|\theta)Pr(\theta)}{Pr(y)}\]
The equation expresses how our belief about the value of \(\theta\), as expressed by the prior distribution \(P(\theta)\), is reallocated following the observation of the data \(y\).
The innocuous denominator \(P(y)\) usually cannot be computed directly, and is actually the expression in the numerator, integrated over all \(\theta\):
<div style="font-size: 150%;">
\\[Pr(\theta|y) = \frac{Pr(y|\theta)Pr(\theta)}{\int Pr(y|\theta)Pr(\theta) d\theta}\\]
</div>
The intractability of this integral is one of the factors that has contributed to the under-utilization of Bayesian methods by statisticians.
Priors
Once considered a controversial aspect of Bayesian analysis, the prior distribution characterizes what is known about an unknown quantity before observing the data from the present study. Thus, it represents the information state of that parameter. It can be used to reflect the information obtained in previous studies, to constrain the parameter to plausible values, or to represent the population of possible parameter values, of which the current study's parameter value can be considered a sample.
Likelihood functions
The likelihood represents the information in the observed data, and is used to update prior distributions to posterior distributions. This updating of belief is justified because of the likelihood principle, which states:
Following observation of \(y\), the likelihood \(L(\theta|y)\) contains all experimental information from \(y\) about the unknown \(\theta\).
Bayesian analysis satisfies the likelihood principle because the posterior distribution's dependence on the data is only through the likelihood. In comparison, most frequentist inference procedures violate the likelihood principle, because inference will depend on the design of the trial or experiment.
Remember from the density estimation section that the likelihood is closely related to the probability density (or mass) function. The difference is that the likelihood varies the parameter while holding the observations constant, rather than vice versa.
End of explanation
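# A quick illustration (hypothetical counts, not real data): for the ASD prevalence
# example above, a Beta(1, 1) prior combined with a binomial likelihood gives a
# Beta(1 + y, 1 + n - y) posterior, from which summaries follow directly.
from scipy.stats import beta
y_obs, n_obs = 4, 100
asd_posterior = beta(1 + y_obs, 1 + n_obs - y_obs)
asd_posterior.mean(), asd_posterior.interval(0.95)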
cancer = pd.read_csv('../data/cancer.csv')
cancer
Explanation: Bayesian Computation
Bayesian analysis usually requires integration over multiple dimensions that is intractable both via analytic methods and via standard methods of numerical integration.
Following the steps for setting up a Bayesian model, after observing data $y$ that we hypothesize as having been obtained from a sampling model $f(y|\theta)$, we then place a prior distribution $p(\theta)$ on the parameters to describe the uncertainty in their true values. We then obtain inference by calculating the posterior distribution, which is proportional to the product of these quantities:
$$p(\theta | y) \propto f(y|\theta) p(\theta)$$
unfortunately, for most problems of interest, the normalizing constant cannot be calculated because it involves multi-dimensional integration over $\theta$.
One approach is to work with non-normalized posteriors, and obtain approximate inference.
Approximation Methods
Tsutakawa et al. (1985) provides mortality data for stomach cancer among men aged 45-64 in several cities in Missouri. The file cancer.csv contains deaths $y_i$ and subjects at risk $n_i$ for 20 cities from this dataset.
End of explanation
ytotal, ntotal = cancer.sum().astype(float)
p_hat = ytotal/ntotal
p_hat
Explanation: If we use a simple binomial model, which assumes independent samples from a binomial distribution with probability of mortality $p$, we can use MLE to obtain an estimate of this probability.
$$L(p \mid n, y) = \frac{n!}{y!(n-y)!} p^y (1-p)^{n-y}$$
$$\hat{p} = \frac{y}{n}$$
End of explanation
cancer.y.var()
Explanation: However, if we compare the variation of $y$ under this model, it is too small relative to the observed variation:
End of explanation
p_hat*(1.-p_hat)*ntotal
Explanation: $$Var(y) = np(1-p)$$
End of explanation
K_x = np.linspace(0, 10)
K_prior = lambda K: 1./(1. + K)**2
plt.plot(K_x, K_prior(K_x))
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
eta_x = np.linspace(0, 1)
eta_prior = lambda eta: 1./(eta*(1.-eta))
plt.plot(eta_x, eta_prior(eta_x))
Explanation: Hence, the data are strongly overdispersed relative to what is predicted under a model with a fixed probability of death. A more realistic model would allow for these probabilities to vary among the cities.
One way of representing this is conjugating the binomial distribution with another distribution that describes the variation in the binomial probability. A sensible choice for this is the beta distribution:
$$f(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)} p^{\alpha - 1} (1 - p)^{\beta - 1}$$
Conjugating this with the binomial distribution, and reparameterizing such that $\alpha = K\eta$ and $\beta = K(1-\eta)$ for $K > 0$ and $\eta \in (0,1)$ results in the beta-binomial distribution:
$$f(y \mid K, \eta) = \frac{n!}{y!(n-y)!} \frac{B(K\eta+y, K(1-\eta) + n - y)}{B(K\eta, K(1-\eta))}$$
where $B$ is the beta function.
What remains is to place priors over the parameters $K$ and $\eta$. Common choices for diffuse (i.e. vague or uninformative) priors are:
$$\begin{aligned}
p(K) &\propto \frac{1}{(1+K)^2} \cr
p(\eta) &\propto \frac{1}{\eta(1-\eta)}
\end{aligned}$$
These are not normalized, but our posterior will not be normalized anyhow, so this is not an issue.
End of explanation
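# Optional cross-check (assumes SciPy >= 1.4): the beta-binomial pmf described above is
# also available directly as scipy.stats.betabinom, e.g. for n=10, K=5, eta=0.3.
from scipy.stats import betabinom
K_ex, eta_ex = 5., 0.3
betabinom(10, K_ex*eta_ex, K_ex*(1. - eta_ex)).pmf(np.arange(11))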
from scipy.special import betaln
def betabin_post(params, n, y):
K, eta = params
post = betaln(K*eta + y, K*(1.-eta) + n - y).sum()
post -= len(y)*betaln(K*eta, K*(1.-eta))
post -= np.log(eta*(1.-eta))
post -= 2.*np.log(1.+K)
return post
betabin_post((15000, 0.003), cancer.n, cancer.y)
Explanation: Now, by multiplying these quantities together, we can obtain a non-normalized posterior.
$$p(K, \eta | \mathbf{y}) \propto \frac{1}{(1+K)^2} \frac{1}{\eta(1-\eta)} \prod_i \frac{B(K\eta+y_i, K(1-\eta) + n_i - y_i)}{B(K\eta, K(1-\eta))}$$
This can be calculated in Python as follows (log-transformed):
End of explanation
# Create grid
K_x = np.linspace(1, 20000)
eta_x = np.linspace(0.0001, 0.003)
# Calculate posterior on grid
z = np.array([[betabin_post((K, eta), cancer.n, cancer.y)
for eta in eta_x] for K in K_x])
# Plot posterior
x, y = np.meshgrid(eta_x, K_x)
cplot = plt.contour(x, y, z-z.max(), [-0.5, -1, -2, -3, -4], cmap=plt.cm.RdBu)
plt.ylabel('K');plt.xlabel('$\eta$');
Explanation: An easy (though computationally expensive) way of getting the joint posterior distribution of the parameters is to evaluate betabin_post on a grid of parameter values.
End of explanation
def betabin_trans(theta, n, y):
K = np.exp(theta[0])
eta = 1./(1. + np.exp(-theta[1]))
# Jacobians for transformation
J = theta[0] + theta[1]
return betabin_post((K, eta), n, y) + J
betabin_trans((10, -7.5), cancer.n, cancer.y)
# Create grid
log_K_x = np.linspace(0, 20)
logit_eta_x = np.linspace(-8, -5)
# Calculate posterior on grid
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
# Plot posterior
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), [-0.5, -1, -2, -4, -8], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)');
Explanation: This is fine, but the precision parameter $K$ is heavily skewed.
To deal with the skewness in $K$ and to facilitate modeling, we can transform the beta-binomial parameters to the real line via:
$$\begin{aligned}
\theta_1 &= \log(K) \cr
\theta_2 &= \log\left(\frac{\eta}{1-\eta}\right)
\end{aligned}$$
which we can easily implement by wrapping betabin_post:
End of explanation
from scipy.optimize import fmin_bfgs
betabin_trans_min = lambda *args: -betabin_trans(*args)
init_value = (10, -7.5)
opt = fmin_bfgs(betabin_trans_min, init_value,
args=(cancer.n, cancer.y), full_output=True)
mode, var = opt[0], opt[3]
mode, var
Explanation: Normal Approximation
An alternative approach to summarizing a $p$-dimensional posterior distribution involves estimating the mode of the posterior, and approximating the density as having a multivariate normal distribution.
$$p(x \mid \theta, \Sigma) = (2\pi)^{-p/2}|\Sigma|^{-1/2} \exp\left\{ -\frac{1}{2} (x-\theta)^{\prime}\Sigma^{-1}(x-\theta) \right\}$$
which, when log-transformed, becomes:
$$l(x \mid \theta, \Sigma) \propto - \frac{1}{2} \log|\Sigma| -\frac{1}{2} (x-\theta)^{\prime}\Sigma^{-1}(x-\theta)$$
If we consider the logarithm of the unnormalized joint posterior:
$$h(\theta | y) = \log[f(y|\theta) p(\theta)]$$
one way to approximate this function is to use a second-order Taylor series expansion around the mode $\hat{\theta}$:
$$h(\theta | y) \approx h(\hat{\theta} | y) + \frac{1}{2}(\theta-\hat{\theta})' h''(\hat{\theta} | y) (\theta-\hat{\theta})$$
This form is simply the multivariate normal distribution with $\hat{\theta}$ as the mean and the inverse negative Hessian as the covariance matrix:
$$\Sigma = -h''(\hat{\theta} | y)^{-1}$$
We can apply one of several numerical methods for multivariate optimization to numerically estimate the mode of the posterior. Here, we will use the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm that is provided by SciPy's fmin_bfgs function. In addition to returning an estimate of the mode, it returns the estimated variance-covariance matrix, which we will need to parameterize the multivariate normal distribution.
Applying this to the beta-binomial posterior estimation problem, we simply provide an initial guess for the mode:
End of explanation
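# A quick way to summarize the normal approximation (illustrative): draw samples
# from MVN(mode, var) and compute their means and standard deviations.
approx_draws = np.random.multivariate_normal(mode, var, size=5000)
approx_draws.mean(axis=0), approx_draws.std(axis=0)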
det = np.linalg.det
inv = np.linalg.inv
def lmvn(value, mu, Sigma):
# Log kernel of multivariate normal
delta = np.array(value) - mu
return -0.5 * (np.log(det(Sigma)) + np.dot(delta.T, np.dot(inv(Sigma), delta)))
z = np.array([[lmvn((t1, t2), mode, var)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), cmap=plt.cm.RdBu)
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)');
Explanation: Thus, our approximated mode is $\log(K)=7.6$, $\text{logit}(\eta)=-6.8$. We can plug this value, along with the variance-covariance matrix, into a function that returns the kernel of a multivariate normal distribution, and use this to plot the approximate posterior:
End of explanation
from scipy.stats.distributions import norm
se = np.sqrt(np.diag(var))
mode[0] + norm.ppf(0.025)*se[0], mode[0] + norm.ppf(0.975)*se[0]
mode[1] + norm.ppf(0.025)*se[1], mode[1] + norm.ppf(0.975)*se[1]
Explanation: Along with this, we can estimate a 95% probability interval for the estimated mode:
End of explanation
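# For interpretation (illustrative), the interval endpoints can be mapped back to the
# original scale: K = exp(log K) and eta = 1/(1 + exp(-logit(eta))).
logK_ci = mode[0] + norm.ppf([0.025, 0.975])*se[0]
logit_eta_ci = mode[1] + norm.ppf([0.025, 0.975])*se[1]
np.exp(logK_ci), 1./(1. + np.exp(-logit_eta_ci))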
def rtriangle(low, high, mode):
alpha = -1
# Run until accepted
while np.random.random() > alpha:
u = np.random.uniform(low, high)
if u < mode:
alpha = (u - low) / (mode - low)
else:
alpha = (high - u) / (high - mode)
return(u)
_ = plt.hist([rtriangle(0, 7, 2) for t in range(10000)], bins=100)
Explanation: Of course, this approximation is only reasonable for posteriors that are not strongly skewed, bimodal, or leptokurtic (heavy-tailed).
Monte Carlo Methods
It is often possible to compute integrals required by Bayesian analysis by simulating
(drawing samples) from posterior distributions. For example, consider the expected value of a random variable $\mathbf{x}$:
$$E[{\bf x}] = \int {\bf x} f({\bf x}) d{\bf x}, \qquad {\bf x} = \{x_1,\ldots,x_k\}$$
where $k$ (the dimension of vector $x$) is perhaps very large. If we can produce a reasonable number of random vectors ${{\bf x_i}}$, we can use these values to approximate the unknown integral. This process is known as Monte Carlo integration. In general, MC integration allows integrals over probability density functions ...
$$I = \int h(\mathbf{x}) f(\mathbf{x})\, d\mathbf{x}$$
... to be estimated by finite sums:
$$\hat{I} = \frac{1}{n}\sum_{i=1}^n h(\mathbf{x}_i),$$
where $\mathbf{x}_i$ is a sample from $f$. This estimate is valid and useful because:
By the strong law of large numbers:
$$\hat{I} \rightarrow I \quad \mbox{with probability 1}$$
Simulation error can be measured and controlled:
$$Var(\hat{I}) = \frac{1}{n(n-1)}\sum_{i=1}^n
(h(\mathbf{x}_i)-\hat{I})^2$$
How is this relevant to Bayesian analysis?
In the integral $I$ above, if we replace $f(\mathbf{x})$
with a posterior, $p(\theta|y)$ and make $h(\theta)$ an interesting function of the unknown parameter, the resulting expectation is that of the posterior of $h(\theta)$:
$$E[h(\theta)|y] = \int h(\theta) p(\theta|y) d\theta \approx \frac{1}{n}\sum_{i=1}^n h(\theta)$$
We also require integrals to obtain marginal estimates from a joint model. If $\theta$ is of length $K$, then inference about any particular parameter is obtained by:
$$p(\theta_i|y) \propto \int p(\theta|y) d\theta_{-i}$$
where the -i subscript indicates all elements except the $i^{th}$.
Rejection Sampling
Though Monte Carlo integration allows us to estimate integrals that are unassailable by analysis and standard numerical methods, it relies on the ability to draw samples from the posterior distribution. For known parametric forms, this is not a problem; probability integral transforms or bivariate techniques (e.g Box-Muller method) may be used to obtain samples from uniform pseudo-random variates generated from a computer. Often, however, we cannot readily generate random values from non-standard posteriors. In such instances, we can use rejection sampling to generate samples.
Consider a function, $f(x)$ which can be evaluated for any value on the support of $x:S_x = [A,B]$, but may not be integrable or easily sampled from. If we can calculate the maximum value of $f(x)$, we can then define a rectangle that is guaranteed to contain all possible values
$(x,f(x))$. It is then trivial to generate points over the box and enumerate the values that fall under the curve.
$$\frac{\mbox{Points under curve}}{\mbox{Points generated}} \times \mbox{box area} = \lim_{n \to \infty} \int_A^B f(x) dx$$
Example: triangular distribution
End of explanation
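# A minimal Monte Carlo integration sketch (illustrative): estimate E[x^2] for
# x ~ N(0, 1), whose true value is 1, along with the Monte Carlo standard error.
x_mc = np.random.normal(size=10000)
h_mc = x_mc**2
I_hat = h_mc.mean()
se_I = np.sqrt(((h_mc - I_hat)**2).sum() / (len(h_mc)*(len(h_mc) - 1.)))
I_hat, se_I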
chi2 = np.random.chisquare
mvn = np.random.multivariate_normal
rmvt = lambda nu, S, mu=0, size=1: (np.sqrt(nu) * (mvn(np.zeros(len(S)), S, size).T
/ np.sqrt(chi2(nu, size)))).T + mu
Explanation: This approach is useful, for example, in estimating the normalizing constant for posterior distributions.
If $f(x)$ has unbounded support (i.e. infinite tails), such as a Gaussian distribution, a bounding box is no longer appropriate. We must specify a majorizing (or, enveloping) function, $g(x)$, which implies:
$$cg(x) \ge f(x) \qquad \forall x \in (-\infty,\infty)$$
Having done this, we can now sample ${x_i}$ from $g(x)$ and accept or reject each of these values based upon $f(x_i)$. Specifically, for each draw $x_i$, we also draw a uniform random variate $u_i$ and accept $x_i$
if $u_i < f(x_i)/cg(x_i)$, where $c$ is a constant. This procedure is repeated until a sufficient number of samples is obtained. This approach is made more efficient by choosing an enveloping distribution that is “close” to the target distribution, thus maximizing the number of accepted points.
To apply rejection sampling to the beta-binomial example, we first need to find a majorizing function $g(x)$ from which we can easily draw samples. We have seen in the previous section that the multivariate normal might serve as a suitable candidate, if multiplied by an appropriately large value of $c$. However, the thinness of the normal tails makes it difficult to use as a majorizing function. Instead, a multivariate Student's T distribution offers heavier tails for a suitably-small value for the degrees of freedom $\nu$:
$$f(\mathbf{x}| \nu,\mu,\Sigma) = \frac{\Gamma\left[(\nu+p)/2\right]}{\Gamma(\nu/2)\nu^{p/2}\pi^{p/2}\left|{\boldsymbol\Sigma}\right|^{1/2}\left[1+\frac{1}{\nu}({\mathbf x}-{\boldsymbol\mu})^T{\boldsymbol\Sigma}^{-1}({\mathbf x}-{\boldsymbol\mu})\right]^{(\nu+p)/2}}$$
We can draw samples from a multivariate-T density by combining multivariate normal and $\chi^2$ random variates:
Generating multivariate-T samples
If $X$ is distributed multivariate normal $\text{MVN}(\mathbf{0},\Sigma)$ and $S$ is a $\chi^2$ random variable with $\nu$ degrees of freedom, then a multivariate Student's-T random variable $T = T_1,\ldots,T_p$ can be generated by $T_i = \frac{\sqrt{\nu}X_i}{\sqrt{S}} + \mu_i$, where $\mu = \mu_1,\ldots,\mu_p$ is a mean vector.
This is implemented in Python by:
End of explanation
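# Quick sanity check of rmvt (illustrative): for nu > 2 the sample covariance of the
# draws should be close to nu/(nu - 2) * S.
S_check = np.eye(2)
t_draws = rmvt(10, S_check, size=5000)
np.cov(t_draws.T), 10./(10. - 2.) * S_check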
from scipy.special import gammaln
def mvt(x, nu, S, mu=0):
d = len(S)
n = len(x)
X = np.atleast_2d(x) - mu
Q = X.dot(np.linalg.inv(S)).dot(X.T).sum()
log_det = np.log(np.linalg.det(S))
log_pdf = gammaln((nu + d)/2.) - 0.5 * (d*np.log(np.pi*nu) + log_det) - gammaln(nu/2.)
log_pdf -= 0.5*(nu + d)*np.log(1 + Q/nu)
return(np.exp(log_pdf))
Explanation: Finally, we need an implementation of the multivariate T probability distribution function, which is as follows:
End of explanation
def calc_diff(theta, n, y, nu, S, mu):
return betabin_trans(theta, n, y) - np.log(mvt(theta, nu, S, mu))
calc_diff_min = lambda *args: -calc_diff(*args)
Explanation: The next step is to find the constant $c$ that ensures:
$$cg(\theta) \ge f(\theta|y) \qquad\forall \theta \in (-\infty,\infty)$$
Alternatively, we want to ensure:
$$\log[f(\theta|y)] - \log[g(\theta)] \le c'$$
End of explanation
opt = fmin_bfgs(calc_diff_min,
(12, -7),
args=(cancer.n, cancer.y, 4, 2*var, mode),
full_output=True)
c = opt[1]
c
Explanation: We can calculate an appropriate value of $c'$ by simply using the approximation method described above on calc_diff (tweaked to produce a negative value for minimization):
End of explanation
def reject(post, nu, S, mu, n, data, c):
k = len(mode)
# Draw samples from g(theta)
theta = rmvt(nu, S, mu, size=n)
# Calculate probability under g(theta)
gvals = np.array([np.log(mvt(t, nu, S, mu)) for t in theta])
# Calculate probability under f(theta)
fvals = np.array([post(t, data.n, data.y) for t in theta])
# Calculate acceptance probability
p = np.exp(fvals - gvals + c)
return theta[np.random.random(n) < p]
nsamples = 1000
sample = reject(betabin_trans, 4, var, mode, nsamples, cancer, c)
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), [-0.5, -1, -2, -4, -8], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)')
plt.scatter(*sample.T[[1,0]])
Explanation: Now we can execute a rejection sampling algorithm:
End of explanation
float(sample.size)/nsamples
Explanation: Notice that the efficiency of rejection sampling is not very high for this problem.
End of explanation
theta = rmvt(4, var, mode, size=1000)
Explanation: Rejection sampling is usually subject to declining performance as the dimension of the parameter space increases. Further improvement is gained by using optimized algorithms such as importance sampling which, as the name implies, samples more frequently from important areas of the distribution.
Importance Sampling
As we have seen, the primary difficulty in Bayesian inference is calculating the posterior density for models of moderate-to-high dimension. For example, calculating the posterior mean of some function $h$ requires two difficult integration steps:
$$E[h(\theta) | y] = \frac{\int h(\theta)f(y|\theta) p(\theta) d\theta}{\int f(y|\theta) p(\theta) d\theta} = \frac{\int h(\theta)p(\theta | y) d\theta}{\int p(\theta|y) d\theta}$$
If the posterior $p(\theta|y)$ is a density from which it is easy to sample, we could approximate these integrals using Monte Carlo simulation, but too often it is not.
Instead, assume that we can draw from a tractable probability density $q(\theta)$ that is some approximation of $p$. We could then write:
$$E[h(\theta) | y] = \frac{\int h(\theta) \frac{p(\theta|y)}{q(\theta)} q(\theta) d\theta}{\int \frac{p(\theta|y)}{q(\theta)} q(\theta) d\theta}$$
Expressed this way, $w(\theta) = p(\theta|y) / q(\theta)$ can be regarded as weights for the $M$ values of $\theta$ sampled from $q$ that we can use to correct the sample so that it approximates $h(\theta)$. Specifically, the importance sampling estimate of $E[h(\theta) | y]$ is:
$$\hat{h}_{is} = \frac{\sum_{i=1}^{M} h(\theta^{(i)})w(\theta^{(i)})}{\sum_{i=1}^{M} w(\theta^{(i)})}$$
where $\theta^{(i)}$ is the $i^{th}$ sample simulated from $q(\theta)$. The standard error for the importance sampling estimate is:
$$\text{SE}_{is} = \frac{\sqrt{\sum_{i=1}^{M} [(h(\theta^{(i)}) - \hat{h}_{is}) w(\theta^{(i)})]^2}}{\sum_{i=1}^{M} w(\theta^{(i)})}$$
The efficiency of importance sampling is related to the selection of the importance sampling distribution $q$.
Example: Beta-binomial parameter
As a simple illustration of importance sampling, let's consider again the problem of estimating the parameters of the beta-binomial example. Here, we will use a multivariate T density as the simulation distribution $q$.
Here are 1000 sampled values to use for approximating the posterior:
End of explanation
f_theta = np.array([betabin_trans(t, cancer.n, cancer.y) for t in theta])
Explanation: We can obtain the probability of these values under the posterior density:
End of explanation
q_theta = np.array([mvt(t, 4, var, mode) for t in theta])
Explanation: and under the T distribution:
End of explanation
log_w = f_theta - np.log(q_theta)
w = np.exp(log_w - log_w.max())
Explanation: This allows us to calculate the importance weights:
End of explanation
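# A common importance-sampling diagnostic (not part of the original notebook): the
# effective sample size implied by the weights.
(w.sum()**2) / (w**2).sum()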
theta_si = [(w*t).sum()/w.sum() for t in theta.T]
theta_si
Explanation: Notice that we have subtracted the maximum value of the differences; this stabilizes the weights numerically, and they are normalized when divided by their sum.
Now, we can obtain estimates of the parameters:
End of explanation
se = [np.sqrt((((theta.T[i] - theta_si[i])* w)**2).sum()/w.sum()) for i in (0,1)]
se
Explanation: Finally, the standard error of the estimates:
End of explanation
p_sir = w/w.sum()
Explanation: Sampling Importance Resampling
The importance sampling method can be modified to incorporate weighted bootstrapping, in a procedure called sampling importance resampling (SIR). As previously, we obtain a sample of size $M$ from an importance sampling distribution $q$ and calculate the corresponding weights $w(\theta_i) = p(\theta|y) / q(\theta)$.
Instead of directly re-weighting the samples from $q$, SIR instead transforms the weights into probabilities via:
$$p_i = \frac{w(\theta_i)}{\sum_{i=1}^M w(\theta_i)}$$
These probabilities are then used to resample their respective $\theta_i$ values, with replacement. This implies that the resulting resamples $\theta_i^{\prime}$ will be distributed approximately as the posterior $p(\theta|y)$.
Using again the beta-binomial example, we can take the weights calculated above, and convert them to probabilities:
End of explanation
theta_sir = theta[np.random.choice(range(len(theta)), size=10000, p=p_sir)]
fig, axes = plt.subplots(2)
_ = axes[0].hist(theta_sir.T[0], bins=30)
_ = axes[1].hist(theta_sir.T[1], bins=30)
Explanation: The choice function in numpy.random can be used to generate a random sample from an arbitrary 1-D array.
End of explanation
logK_sample = theta_sir[:,0]
logK_sample.sort()
logK_sample[[250, 9750]]
Explanation: One advantage of this approach is that one can easily extract a posterior probability interval for each parameter, simply by extracting quantiles from the resampled values.
End of explanation
# Write your answer here
Explanation: Exercise: Model checking
Perform a Bayesian sensitivity analysis by performing SIR on the stomach cancer dataset $N$ times, with one observation (a city) removed from the dataset each time. Calculate and plot posterior medians and 95% posterior intervals for each $f(\theta|y_{(-i)})$ to visually analyze the influence of each observation.
End of explanation |
5,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2016 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http
Step2: Basics
There's lots of guides out there on decorators (this one is good), but I was never really sure when I would need to use decorators. Hopefully this will help motivate them a little more. Here I hope to show you
Step5: Well, we only want these functions to work if both inputs are numbers. So we could do
Step8: But this is yucky
Step9: This is definitely better. But there's still some repeated logic. Like, what if we want to return an error if we don't get numbers, or print something before running the code? We'd still have to make the changes in multiple places. The code isn't DRY.
Basic decorators
We can refactor further with the decorator pattern.
We want to write something that looks like
@decorator
def add(n1, n2)
Step10: This pattern is nice because we've even refactored out all the validation logic (even the "if blah then blah" part) into the decorator.
Generalizing with *args and **kwargs
What if we want to validate a function that has a different number of arguments?
Step12: We can't decorate this because the wrapped function expects 2 arguments.
Here's where we use the * symbol. I'll write out the code so you can see how it looks, and we'll look at what *args is doing below.
Step13: <a id='args'>*args</a>
What is this * nonsense?
You've probably seen *args and **kwargs in documentation before. Here's what they mean
Step14: Back to the decorator
(If you're just here for *args and **kwargs, skip down to here)
So let's look at the decorator code again, minus the comments
Step15: And in case your head doesn't hurt yet, we can do both together
Step17: Advanced decorators
This section will introduce some of the many other useful ways you can use decorators. We'll talk about
* Passing arguments into decorators
* functools.wraps
* Returning a different function
* Decorators and objects.
Use the table of contents at the top to make it easier to look around.
Decorators with arguments
A common thing to want to do is to do some kind of configuration in a decorator. For example, let's say we want to define a divide_n method, and to make it easy to use we want to hide the existence of integer division. Let's define a decorator that converts arguments into floats.
Step19: But now let's say we want to define a divide_n_as_integers function. We could write a new decorator, or we could alter our decorator so that we can specify what we want to convert the arguments to. Let's try the latter.
(For you smart alecks out there
Step21: Did you notice the tricky thing about creating a decorator that takes arguments? We had to create a function to "return a decorator". The outermost function, convert_arguments_to, returns a function that takes a function, which is what we've been calling a "decorator".
To think about why this is necessary, let's start from the form that we wanted to write, and unpack from there. We wanted to be able to do
Step23: functools.wraps solves this problem. Use it as follows
Step24: Think of the @wraps decorator making it so that wrapped_func knows what function it originally wrapped.
Returning a different function
Decorators don't even have to return the function that's passed in. You can have some fun with this... | Python Code:
%%javascript
// From https://github.com/kmahelona/ipython_notebook_goodies
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
Explanation: Copyright 2016 Google Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
<h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
End of explanation
def add(n1, n2):
return n1 + n2
def multiply(n1, n2):
return n1 * n2
def exponentiate(n1, n2):
"""Raise n1 to the power of n2"""
import math
return math.pow(n1, n2)
Explanation: Basics
There's lots of guides out there on decorators (this one is good), but I was never really sure when I would need to use decorators. Hopefully this will help motivate them a little more. Here I hope to show you:
When decorators might come in handy
How to write one
How to generalize using *args and **kwargs sorcery.
You should read this if:
You've heard of decorators and want to know more about them, and/or
You want to know what *args and **kwargs mean.
If you're here just for *args and **kwargs, start reading here.
Motivation
Let's say you're defining methods on numbers:
End of explanation
def is_number(n):
    """Return True iff n is a number."""
# A number can always be converted to a float
try:
float(n)
return True
except ValueError:
return False
def add(n1, n2):
if not (is_number(n1) and is_number(n2)):
print("Arguments must be numbers!")
return
return n1 + n2
def multiply(n1, n2):
if not (is_number(n1) and is_number(n2)):
print("Arguments must be numbers!")
return
return n1 * n2
def exponentiate(n1, n2):
    """Raise n1 to the power of n2"""
if not (is_number(n1) and is_number(n2)):
print("Arguments must be numbers!")
return
import math
return math.pow(n1, n2)
Explanation: Well, we only want these functions to work if both inputs are numbers. So we could do:
End of explanation
def validate_two_arguments(n1, n2):
    """Returns True if n1 and n2 are both numbers."""
if not (is_number(n1) and is_number(n2)):
return False
return True
def add(n1, n2):
if validate_two_arguments(n1, n2):
return n1 + n2
def multiply(n1, n2):
if validate_two_arguments(n1, n2):
return n1 * n2
def exponentiate(n1, n2):
    """Raise n1 to the power of n2"""
if validate_two_arguments(n1, n2):
import math
return math.pow(n1, n2)
Explanation: But this is yucky: we had to copy and paste code. This should always make you sad! For example, what if you wanted to change the message slightly? Or to return an error instead? You'd have to change it everywhere it appears...
We want the copy & pasted code to live in just one place, so any changes just go there (DRY code: Don't Repeat Yourself). So let's refactor.
End of explanation
# The decorator: takes a function.
def validate_arguments(func):
# The decorator will be returning wrapped_func, a function that has the
# same signature as add, multiply, etc.
def wrapped_func(n1, n2):
# If we don't have two numbers, we don't want to run the function.
# Best practice ("be explicit") is to raise an error here
# instead of just returning None.
if not validate_two_arguments(n1, n2):
raise Exception("Arguments must be numbers!")
# We've passed our checks, so we can call the function with the passed in arguments.
# If you like, think of this as
# result = func(n1, n2)
# return result
# to distinguish it from the outer return where we're returning a function.
return func(n1, n2)
# This is where we return the function that has the same signature.
return wrapped_func
@validate_arguments
def add(n1, n2):
return n1 + n2
# Don't forget, the @ syntax just means
# add = validate_arguments(add)
print(add(1, 3))
try:
add(2, 'hi')
except Exception as e:
print("Caught Exception: {}".format(e))
Explanation: This is definitely better. But there's still some repeated logic. Like, what if we want to return an error if we don't get numbers, or print something before running the code? We'd still have to make the changes in multiple places. The code isn't DRY.
Basic decorators
We can refactor further with the decorator pattern.
We want to write something that looks like
@decorator
def add(n1, n2):
return n1 + n2
so that all the logic about validating n1 and n2 lives in one place, and the functions just do what we want them to do.
Since the @ syntax just means add = decorator(add), we know the decorator needs to take a function as an argument, and it needs to return a function. (This should be confusing at first. Functions returning functions are scary, but think about it until that doesn't seem outlandish to you.)
This returned function should act the same way as add, so it should take two arguments. And within this returned function, we want to first check that the arguments are numbers. If they are, we want to call the original function that we decorated (in this case, add). If not, we don't want to do anything. Here's what that looks like (there's a lot here, so use the comments to understand what's happening):
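Before moving on, here is a tiny self-contained sketch (the shout/greet names are invented for illustration and are not part of the code above) showing that the @ form is nothing more than re-assigning the name:

def shout(func):
    def wrapped(name):
        return func(name).upper()
    return wrapped

@shout
def greet(name):
    return "hello, {}".format(name)

# ...which is exactly the same as writing:
def greet2(name):
    return "hello, {}".format(name)
greet2 = shout(greet2)

print(greet("world"))   # HELLO, WORLD
print(greet2("world"))  # HELLO, WORLD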
End of explanation
@validate_arguments # Won't work!
def add3(n1, n2, n3):
return n1 + n2 + n3
add3(1, 2, 3)
Explanation: This pattern is nice because we've even refactored out all the validation logic (even the "if blah then blah" part) into the decorator.
Generalizing with *args and **kwargs
What if we want to validate a function that has a different number of arguments?
End of explanation
# The decorator: takes a function.
def validate_arguments(func):
# Note the *args! Think of this as representing "as many arguments as you want".
# So this function will take an arbitrary number of arguments.
def wrapped_func(*args):
# We just want to apply the check to each argument.
for arg in args:
if not is_number(arg):
raise Exception("Arguments must be numbers!")
# We also want to make sure there's at least two arguments.
if len(args) < 2:
raise Exception("Must specify at least 2 arguments!")
# We've passed our checks, so we can call the function with the
# passed-in arguments.
# Right now, args is a tuple of all the different arguments passed in
# (more explanation below), so we want to expand them back out when
# calling the function.
return func(*args)
return wrapped_func
@validate_arguments # This works
def add3(n1, n2, n3):
return n1 + n2 + n3
add3(1, 2, 3)
@validate_arguments # And so does this
def addn(*args):
    """Add an arbitrary number of numbers together"""
cumu = 0
for arg in args:
cumu += arg
return cumu
print(addn(1, 2, 3, 4, 5))
# range(n) gives a list, so we expand the list into positional arguments...
print(addn(*range(10)))
Explanation: We can't decorate this because the wrapped function expects 2 arguments.
Here's where we use the * symbol. I'll write out the code so you can see how it looks, and we'll look at what *args is doing below.
End of explanation
def foo(*args):
print("foo args: {}".format(args))
print("foo args type: {}".format(type(args)))
# So foo can take an arbitrary number of arguments
print("First call:")
foo(1, 2, 'a', 3, True)
# Which can be written using the * syntax to expand an iterable
print("\nSecond call:")
l = [1, 2, 'a', 3, True]
foo(*l)
Explanation: <a id='args'>*args</a>
What is this * nonsense?
You've probably seen *args and **kwargs in documentation before. Here's what they mean:
When calling a function, * expands an iterable into positional arguments.
Terminology note: in a call like bing(1, 'hi', name='fig'), 1 is the first positional argument, 'hi' is the second positional argument, and there's a keyword argument 'name' with the value 'fig'.
When defining a signature, *args represents an arbitrary number of positional arguments.
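As a quick illustration (a made-up function, not part of the decorator example), a signature can also mix ordinary positional parameters with *args:

def first_and_rest(first, *rest):
    # 'first' captures the first positional argument; 'rest' is a tuple of the others.
    print("first: {}".format(first))
    print("rest: {}".format(rest))

first_and_rest(1, 2, 3)
first_and_rest(*[10, 20, 30, 40])  # same thing, expanding a list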
End of explanation
def bar(**kwargs):
print("bar kwargs: {}".format(kwargs))
# bar takes an arbitrary number of keyword arguments
print("First call:")
bar(location='US-PAO', ldap='awan', age=None)
# Which can also be written using the ** syntax to expand a dict
print("\nSecond call:")
d = {'location': 'US-PAO', 'ldap': 'awan', 'age': None}
bar(**d)
Explanation: Back to the decorator
(If you're just here for *args and **kwargs, skip down to here)
So let's look at the decorator code again, minus the comments:
def validate_arguments(func):
    def wrapped_func(*args):
        for arg in args:
            if not is_number(arg):
                raise Exception("Arguments must be numbers!")
        if len(args) < 2:
            raise Exception("Must specify at least 2 arguments!")
        return func(*args)
    return wrapped_func
def wrapped_func(*args) says that wrapped_func can take an arbitrary number of arguments.
Within wrapped_func, we interact with args as a tuple containing all the (positional) arguments passed in.
If all the arguments are numbers, we call func, the function we decorated, by expanding the args tuple back out into positional arguments: func(*args).
Finally the decorator needs to return a function (remember that the @ syntax is just sugar for add = decorator(add)).
Congrats, you now understand decorators! You can do tons of other stuff with them, but hopefully now you're equipped to read the other guides online.
<a id='kwargs'>As for **kwargs:</a>
When calling a function, ** expands a dict into keyword arguments.
When defining a signature, **kwargs represents an arbitrary number of keyword arguments.
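As a hedged aside with invented names, a common use of **kwargs is collecting optional settings without spelling each one out in the signature, then forwarding them on:

def make_request(url, **options):
    # Merge caller-supplied options over some defaults, then pass them along.
    defaults = {'timeout': 10, 'retries': 3}
    defaults.update(options)
    print("requesting {} with {}".format(url, defaults))

make_request('http://example.com', timeout=1)
make_request('http://example.com', **{'retries': 5, 'verbose': True})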
End of explanation
def baz(*args, **kwargs):
print("baz args: {}. kwargs: {}".format(args, kwargs))
# Calling baz with a mixture of positional and keyword arguments
print("First call:")
baz(1, 3, 'hi', name='Joe', age=37, occupation='Engineer')
# Which is the same as
print("\nSecond call:")
l = [1, 3, 'hi']
d = {'name': 'Joe', 'age': 37, 'occupation': 'Engineer'}
baz(*l, **d)
Explanation: And in case your head doesn't hurt yet, we can do both together:
End of explanation
def convert_arguments(func):
    """Convert func arguments to floats."""
# Introducing the leading underscore: (weakly) marks a private
# method/property that should not be accessed outside the defining
# scope. Look up PEP 8 for more.
def _wrapped_func(*args):
new_args = [float(arg) for arg in args]
return func(*new_args)
return _wrapped_func
@convert_arguments
@validate_arguments
def divide_n(*args):
cumu = args[0]
for arg in args[1:]:
cumu = cumu / arg
return cumu
# The user doesn't need to think about integer division!
divide_n(103, 2, 8)
Explanation: Advanced decorators
This section will introduce some of the many other useful ways you can use decorators. We'll talk about
* Passing arguments into decorators
* functools.wraps
* Returning a different function
* Decorators and objects.
Use the table of contents at the top to make it easier to look around.
Decorators with arguments
A common thing to want to do is to do some kind of configuration in a decorator. For example, let's say we want to define a divide_n method, and to make it easy to use we want to hide the existence of integer division. Let's define a decorator that converts arguments into floats.
End of explanation
def convert_arguments_to(to_type=float):
    """Convert arguments to the given to_type by casting them."""
def _wrapper(func):
def _wrapped_func(*args):
new_args = [to_type(arg) for arg in args]
return func(*new_args)
return _wrapped_func
return _wrapper
@validate_arguments
def divide_n(*args):
cumu = args[0]
for arg in args[1:]:
cumu = cumu / arg
return cumu
@convert_arguments_to(to_type=int)
def divide_n_as_integers(*args):
return divide_n(*args)
@convert_arguments_to(to_type=float)
def divide_n_as_float(*args):
return divide_n(*args)
print(divide_n_as_float(7, 3))
print(divide_n_as_integers(7, 3))
Explanation: But now let's say we want to define a divide_n_as_integers function. We could write a new decorator, or we could alter our decorator so that we can specify what we want to convert the arguments to. Let's try the latter.
(For you smart alecks out there: yes you could use the // operator, but you'd still have to replicate the logic in divide_n. Nice try.)
End of explanation
@validate_arguments
def foo(*args):
    """foo frobs bar"""
pass
print(foo.__name__)
print(foo.__doc__)
Explanation: Did you notice the tricky thing about creating a decorator that takes arguments? We had to create a function to "return a decorator". The outermost function, convert_arguments_to, returns a function that takes a function, which is what we've been calling a "decorator".
To think about why this is necessary, let's start from the form that we wanted to write, and unpack from there. We wanted to be able to do:
@decorator(decorator_arg)
def myfunc(*func_args):
pass
Unpacking the syntactic sugar gives us
def myfunc(*func_args):
pass
myfunc = decorator(decorator_arg)(myfunc)
Written this way, it should immediately be clear that decorator(decorator_arg) returns a function that takes a function.
So that's how you write a decorator that takes an argument: it actually has to be a function that takes your decorator arguments, and returns a function that takes a function.
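To make that pattern concrete, here is a minimal sketch of a decorator that takes an argument (the repeat/say_hi names are made up for illustration; this is not part of the divide_n example above):

def repeat(n_times=2):
    def _decorator(func):
        def _wrapped(*args, **kwargs):
            result = None
            for _ in range(n_times):
                result = func(*args, **kwargs)
            return result
        return _wrapped
    return _decorator

@repeat(n_times=3)
def say_hi():
    print("hi")

say_hi()  # prints "hi" three times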
functools.wraps
If you've played around with the examples above, you might've seen that the name of the wrapped function changes after you apply a decorator... And perhaps more importantly, the docstring of the wrapped function changes too (this is important for when generating documentation, e.g. with Sphinx).
End of explanation
from functools import wraps
def better_validate_arguments(func):
@wraps(func)
def wrapped_func(*args):
for arg in args:
if not is_number(arg):
raise Exception("Arguments must be numbers!")
if len(args) < 2:
raise Exception("Must specify at least 2 arguments!")
return func(*args)
return wrapped_func
@better_validate_arguments
def bar(*args):
    """bar frobs foo"""
pass
print(bar.__name__)
print(bar.__doc__)
Explanation: functools.wraps solves this problem. Use it as follows:
End of explanation
def jedi_mind_trick(func):
def _jedi_func():
return "Not the droid you're looking for"
return _jedi_func
@jedi_mind_trick
def get_droid():
return "Found the droid!"
get_droid()
Explanation: Think of the @wraps decorator making it so that wrapped_func knows what function it originally wrapped.
Returning a different function
Decorators don't even have to return the function that's passed in. You can have some fun with this...
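A slightly more practical sketch of the same idea (hypothetical names; just one way you might use it) is a feature flag that decides at decoration time which function callers actually get:

FEATURE_ENABLED = False

def if_enabled(func):
    def _disabled(*args, **kwargs):
        return "feature disabled"
    # Return either the real function or a stand-in, decided once at decoration time.
    return func if FEATURE_ENABLED else _disabled

@if_enabled
def fancy_computation(x):
    return x ** 2

print(fancy_computation(3))  # "feature disabled" unless FEATURE_ENABLED is True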
End of explanation |
5,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al's mTRF toolbox in
matlab [1]_. We will show how the
Step1: Load the data from the publication
First we will load the data collected in [1]_. In this experiment subjects
listened to natural speech. Raw EEG and the speech stimulus are provided.
We will load these below, downsampling the data in order to speed up
computation since we know that our features are primarily low-frequency in
nature. Then we'll visualize both the EEG and speech envelope.
Step2: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
Step3: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from [1]_.
Step4: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the
Step5: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
Step6: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5 from [1]. The
decoding model weights reflect the channels that contribute most toward
reconstructing the stimulus signal, but are not directly interpretable in a
neurophysiological sense. Here we also look at the coefficients obtained
via an inversion procedure [2]_, which have a more straightforward
interpretation as their value (and sign) directly relates to the stimulus
signal's strength (and effect direction). | Python Code:
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Nicolas Barascud <nicolas.barascud@ens.fr>
#
# License: BSD (3-clause)
# sphinx_gallery_thumbnail_number = 3
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from os.path import join
import mne
from mne.decoding import ReceptiveField
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
Explanation: Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al's mTRF toolbox in
matlab [1]_. We will show how the :class:mne.decoding.ReceptiveField class
can perform a similar function along with scikit-learn. We will first fit a
linear encoding model using the continuously-varying speech envelope to predict
activity of a 128 channel EEG system. Then, we will take the reverse approach
and try to predict the speech envelope from the EEG (known in the literature
as a decoding model, or simply stimulus reconstruction).
References
.. [1] Crosse, M. J., Di Liberto, G. M., Bednar, A. & Lalor, E. C. (2016).
The Multivariate Temporal Response Function (mTRF) Toolbox:
A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli.
Frontiers in Human Neuroscience 10, 604. doi:10.3389/fnhum.2016.00604
.. [2] Haufe, S., Meinecke, F., Goergen, K., Daehne, S., Haynes, J.-D.,
Blankertz, B., & Biessmann, F. (2014). On the interpretation of weight
vectors of linear models in multivariate neuroimaging. NeuroImage, 87,
96-110. doi:10.1016/j.neuroimage.2013.10.067
End of explanation
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, 'speech_data.mat'))
raw = data['EEG'].T
speech = data['envelope'].T
sfreq = float(data['Fs'])
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, npad='auto')
raw = mne.filter.resample(raw, down=decim, npad='auto')
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.read_montage('biosemi128')
montage.selection = montage.selection[:128]
info = mne.create_info(montage.ch_names[:128], sfreq, 'eeg', montage=montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots()
lns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)
ln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
mne.viz.tight_layout()
Explanation: Load the data from the publication
First we will load the data collected in [1]_. In this experiment subjects
listened to natural speech. Raw EEG and the speech stimulus are provided.
We will load these below, downsampling the data in order to speed up
computation since we know that our features are primarily low-frequency in
nature. Then we'll visualize both the EEG and speech envelope.
End of explanation
# Define the delays that we will use in the receptive field
tmin, tmax = -.2, .4
# Initialize the model
rf = ReceptiveField(tmin, tmax, sfreq, feature_names=['envelope'],
estimator=1., scoring='corrcoef')
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:] # Outputs for the model
Y = Y.T
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
rf.fit(speech[train], Y[train])
scores[ii] = rf.score(speech[test], Y[test])
# coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)
# Plot mean prediction scores across all channels
fig, ax = plt.subplots()
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls='--', color='r')
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
mne.viz.tight_layout()
Explanation: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
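As a small illustrative addition (not part of the original example), the variables from the cell above can be reused to predict the held-out EEG from the speech envelope for the last CV split and report the correlation for the best-scoring channel — essentially what scoring='corrcoef' computes internally:

y_pred = rf.predict(speech[test])
best_ch = int(np.argmax(mean_scores))
r = np.corrcoef(y_pred[rf.valid_samples_, best_ch],
                Y[test][rf.valid_samples_, best_ch])[0, 1]
print('Correlation for channel %d: %.3f' % (best_ch, r))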
End of explanation
# Print mean coefficients across all time delays / channels (see Fig 1 in [1])
time_plot = 0.180 # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8))
max_coef = mean_coefs.max()
ax.pcolormesh(times, ix_chs, mean_coefs, cmap='RdBu_r',
vmin=-max_coef, vmax=max_coef, shading='gouraud')
ax.axvline(time_plot, ls='--', color='k', lw=2)
ax.set(xlabel='Delay (s)', ylabel='Channel', title="Mean Model\nCoefficients",
xlim=times[[0, -1]], ylim=[len(ix_chs) - 1, 0],
xticks=np.arange(tmin, tmax + .2, .2))
plt.setp(ax.get_xticklabels(), rotation=45)
mne.viz.tight_layout()
# Make a topographic map of coefficients for a given delay (see Fig 2C in [1])
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots()
mne.viz.plot_topomap(mean_coefs[:, ix_plot], pos=info, axes=ax, show=False,
vmin=-max_coef, vmax=max_coef)
ax.set(title="Topomap of model coefficients\nfor delay %s" % time_plot)
mne.viz.tight_layout()
Explanation: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from [1]_.
End of explanation
# We use the same lags as in [1]. Negative lags now index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -.2, 0.
# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section). We'll use a ridge regression estimator with an
# alpha value similar to [1].
sr = ReceptiveField(tmin, tmax, sfreq, feature_names=raw.ch_names,
estimator=1e4, scoring='corrcoef', patterns=True)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
sr.fit(Y[train], speech[train])
scores[ii] = sr.score(Y[test], speech[test])[0]
# coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
coefs[ii] = sr.coef_[0, :, :]
patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
Explanation: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the
:class:mne.decoding.ReceptiveField class as we try to predict the stimulus
activity from the EEG data. This is known in the literature as a decoding, or
stimulus reconstruction model [1]_. A decoding model aims to find the
relationship between the speech signal and a time-delayed version of the EEG.
This can be useful as we exploit all of the available neural data in a
multivariate context, compared to the encoding case which treats each M/EEG
channel as an independent feature. Therefore, decoding models might provide a
better quality of fit (at the expense of not controlling for stimulus
covariance), especially for low SNR stimuli such as speech.
End of explanation
y_pred = sr.predict(Y[test])
time = np.linspace(0, 2., 5 * int(sfreq))
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(time, speech[test][sr.valid_samples_][:int(5 * sfreq)],
color='grey', lw=2, ls='--')
ax.plot(time, y_pred[sr.valid_samples_][:int(5 * sfreq)], color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['Envelope', 'Reconstruction'], frameon=False)
ax.set(title="Stimulus reconstruction")
ax.set_xlabel('Time (s)')
mne.viz.tight_layout()
Explanation: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
End of explanation
time_plot = (-.140, -.125) # To average between two timepoints.
ix_plot = np.arange(np.argmin(np.abs(time_plot[0] - times)),
np.argmin(np.abs(time_plot[1] - times)))
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(np.mean(mean_coefs[:, ix_plot], axis=1),
pos=info, axes=ax[0], show=False,
vmin=-max_coef, vmax=max_coef)
ax[0].set(title="Model coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.plot_topomap(np.mean(mean_patterns[:, ix_plot], axis=1),
pos=info, axes=ax[1],
show=False, vmin=-max_patterns, vmax=max_patterns)
ax[1].set(title="Inverse-transformed coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.tight_layout()
plt.show()
Explanation: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5 from [1]. The
decoding model weights reflect the channels that contribute most toward
reconstructing the stimulus signal, but are not directly interpretable in a
neurophysiological sense. Here we also look at the coefficients obtained
via an inversion procedure [2]_, which have a more straightforward
interpretation as their value (and sign) directly relates to the stimulus
signal's strength (and effect direction).
End of explanation |
5,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 2
Due 11
Step1: The code below produces the data frames used in the examples
Step2: Pandas and Wrangling
For the examples that follow, we will be using a toy data set containing information about superheroes in the Arrowverse. In the first_seen_on column, a stands for Archer and f, Flash.
Step3: Slice and Dice
Column selection by label
To select a column of a DataFrame by column label, the safest and fastest way is to use the .loc method. General usage looks like frame.loc[rowname,colname]. (Reminder that the colon
Step4: Selecting multiple columns is easy. You just need to supply a list of column names. Here we select the color and value columns
Step5: While .loc is invaluable when writing production code, it may be a little too verbose for interactive use. One recommended alternative is the [] method, which takes on the form frame['colname'].
Step6: Row Selection by Label
Similarly, if we want to select a row by its label, we can use the same .loc method.
Step7: If we want all the columns returned, we can, for brevity, drop the colon without issue.
Step8: General Selection by Label
More generally you can slice across both rows and columns at the same time. For example
Step9: Selection by Integer Index
If you want to select rows and columns by position, the Data Frame has an analogous .iloc method for integer indexing. Remember that Python indexing starts at 0.
Step10: Filtering with boolean arrays
Filtering is the process of removing unwanted material. In your quest for cleaner data, you will undoubtedly filter your data at some point
Step11: Problem Solving Strategy
We want to highlight the strategy for filtering to answer the question above
Step12: Notice that in both examples above, the expression in the brackets evaluates to a boolean series. The general strategy for filtering data frames, then, is to write an expression of the form frame[logical statement].
Counting Rows
To count the number of instances of a value in a Series, we can use the value_counts method. Below we count the number of instances of each color.
Step13: A more sophisticated analysis might involve counting the number of instances a tuple appears. Here we count $(color,value)$ tuples.
Step14: This returns a series that has been multi-indexed. We'll eschew this topic for now. To get a data frame back, we'll use the reset_index method, which also allows us to simulataneously name the new column.
Step15: Joining Tables on One Column
Suppose we have another table that classifies superheroes into their respective teams. Note that canary is not in this data set and that killer frost and speedy are additions that aren't in the original heroes set.
For simplicity of the example, we'll convert the index of the heroes data frame into an explicit column called hero. A careful examination of the documentation will reveal that joining on a mixture of the index and columns is possible.
Step16: Inner Join
The inner join below returns rows representing the heroes that appear in both data frames.
Step17: Left and right join
The left join returns rows representing heroes in the heroes ("left") data frame, augmented by information found in the teams data frame. Its counterpart, the right join, would return heroes in the teams data frame. Note that the team for hero canary is an NaN value, representing missing data.
Step18: Outer join
An outer join on hero will return all heroes found in both the left and right data frames. Any missing values are filled in with NaN.
Step19: More than one match?
If the values in the columns to be matched don't uniquely identify a row, then a cartesian product is formed in the merge. For example, notice that firestorm has two different egos, so information from heroes had to be duplicated in the merge, once for each ego.
Step20: Missing Values
As shown in lecture, there are a multitude of reasons why a data set might have missing values. The current implementation of Pandas uses the numpy NaN to represent these null values (older implementations even used -inf and inf). Future versions of Pandas might implement a true null value---keep your eyes peeled for this in updates! More information can be found (http
Step21: To check if a value is null, we use the isnull() method for series and data frames. Alternatively, there is a pd.isnull() function as well.
Step22: Since filtering out missing data is such a common operation, Pandas also has conveniently included the analogous notnull() methods and function for improved human readability.
Step23: Practice Set 1
Consider the "complete" data set shown below. Note that the rows are indexed by the superheroes' names.
Step24: What are the outputs of the following calls? State what is wrong with the ones that will produce errors and propose a fix. To challenge yourself, try to do this exercise without running any commands.
Step25: Can you propose a fix to any of the broken ones above?
Practice Set 2
The practice problems below use the department of transportation's "On-Time" flight data for all flights originating from SFO or OAK in January 2016. Information about the variables can be found in the readme.html file. Information about the airports and airlines are contained in the comma-delimited files airports.dat and airlines.dat, respectively. Both were sourced from http
Step28: Question 1
It looks like the departure and arrival were read in as floating-point numbers. Write two functions, extract_hour and extract_mins, that convert military time to hours and minutes, respectively. Hint
Step31: Question 2
Using your two functions above, filter the flights data for flights that departed 15 or more minutes later than scheduled. You need not worry about flights that were delayed to the next day for this question.
Step32: Question 3
Using your answer from question 2, find the full name of every destination city with a flight from SFO or OAK that was delayed by 15 or more minutes. The airport codes used in flights are IATA codes. Sort the cities alphabetically.
Step33: Question 4
Find the tail number of the top ten planes, measured by number of destinations the plane flew to in January. You may find drop_duplicates and sort_values helpful.
Step34: Challenge
Add a new column to airports called sfo_arr_delay_avg that contains information about the average delay time in January from SFO.
Step35: Let's take a look at our non-null results. Do any of the delay values catch your eye?
...
Submission
Run the cell below to submit the lab. You may resubmit as many times you want. We will be grading you on effort/completion. | Python Code:
import pandas as pd
import numpy as np
# These lines load the tests.
!pip install -U okpy
from client.api.notebook import Notebook
ok = Notebook('lab02.ok')
Explanation: Lab 2
Due 11:59pm 01/27/2017 (Completion-based)
In this lab you will see some examples of some commonly used data wrangling tools in Python. In particular, we aim to give you some familiarity with:
Slicing data frames
Filtering data
Grouped counts
Joining two tables
NA/Null values
Setup
End of explanation
heroes = pd.DataFrame(
data={'color': ['red', 'green', 'black',
'blue', 'black', 'red'],
'first_seen_on': ['a', 'a', 'f', 'a', 'a', 'f'],
'first_season': [2, 1, 2, 3, 3, 1]},
index=['flash', 'arrow', 'vibe',
'atom', 'canary', 'firestorm']
)
identities = pd.DataFrame(
data={'ego': ['barry allen', 'oliver queen', 'cisco ramon',
'ray palmer', 'sara lance',
'martin stein', 'ronnie raymond'],
'alter-ego': ['flash', 'arrow', 'vibe', 'atom',
'canary', 'firestorm', 'firestorm']}
)
teams = pd.DataFrame(
data={'team': ['flash', 'arrow', 'flash', 'legends',
'flash', 'legends', 'arrow'],
'hero': ['flash', 'arrow', 'vibe', 'atom',
'killer frost', 'firestorm', 'speedy']})
Explanation: The code below produces the data frames used in the examples
End of explanation
heroes
identities
teams
Explanation: Pandas and Wrangling
For the examples that follow, we will be using a toy data set containing information about superheroes in the Arrowverse. In the first_seen_on column, a stands for Archer and f, Flash.
End of explanation
heroes.loc[:, 'color']
Explanation: Slice and Dice
Column selection by label
To select a column of a DataFrame by column label, the safest and fastest way is to use the .loc method. General usage looks like frame.loc[rowname,colname]. (Reminder that the colon : means "everything"). For example, if we want the color column of the heroes data frame, we would use:
End of explanation
heroes.loc[:, ['color', 'first_season']]
Explanation: Selecting multiple columns is easy. You just need to supply a list of column names. Here we select the color and value columns:
End of explanation
heroes['first_seen_on']
Explanation: While .loc is invaluable when writing production code, it may be a little too verbose for interactive use. One recommended alternative is the [] method, which takes on the form frame['colname'].
End of explanation
heroes.loc[['flash', 'vibe'], :]
Explanation: Row Selection by Label
Similarly, if we want to select a row by its label, we can use the same .loc method.
End of explanation
heroes.loc[['flash', 'vibe']]
Explanation: If we want all the columns returned, we can, for brevity, drop the colon without issue.
End of explanation
heroes.loc['flash':'atom', :'first_seen_on']
Explanation: General Selection by Label
More generally you can slice across both rows and columns at the same time. For example:
End of explanation
heroes.iloc[:4,:2]
Explanation: Selection by Integer Index
If you want to select rows and columns by position, the Data Frame has an analogous .iloc method for integer indexing. Remember that Python indexing starts at 0.
End of explanation
heroes[(heroes['first_season']==3) & (heroes['first_seen_on']=='a')]
Explanation: Filtering with boolean arrays
Filtering is the process of removing unwanted material. In your quest for cleaner data, you will undoubtedly filter your data at some point: whether it be for clearing up cases with missing values, culling out fishy outliers, or analyzing subgroups of your data set. For example, we may be interested in characters that debuted in season 3 of Archer. Note that compound expressions have to be grouped with parentheses.
End of explanation
heroes[heroes['first_season'].isin([1,3])]
Explanation: Problem Solving Strategy
We want to highlight the strategy for filtering to answer the question above:
Identify the variables of interest
Interested in the debut: first_season and first_seen_on
Translate the question into statements one with True/False answers
Did the hero debut on Archer? $\rightarrow$ The hero has first_seen_on equal to a
Did the hero debut in season 3? $\rightarrow$ The hero has first_season equal to 3
Translate the statements into boolean statements
The hero has first_seen_on equal to a $\rightarrow$ hero['first_seen_on']=='a'
The hero has first_season equal to 3 $\rightarrow$ heroes['first_season']==3
Use the boolean array to filter the data
Note that compound expressions have to be grouped with parentheses.
For your reference, some commonly used comparison operators are given below.
Symbol | Usage | Meaning
------ | ---------- | -------------------------------------
== | a == b | Does a equal b?
<= | a <= b | Is a less than or equal to b?
>= | a >= b | Is a greater than or equal to b?
< | a < b | Is a less than b?
> | a > b | Is a greater than b?
~ | ~p | Returns negation of p
| | p | q | p OR q
& | p & q | p AND q
^ | p ^ q | p XOR q (exclusive or)
An often-used operation missing from the above table is a test-of-membership. The Series.isin(values) method returns a boolean array denoting whether each element of Series is in values. We can then use the array to subset our data frame. For example, if we wanted to see which rows of heroes had values in ${1,3}$, we would use:
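As a further (hypothetical) illustration on the same heroes frame, the negation operator from the table combines naturally with isin:

# Heroes whose color is NOT red or blue, and who debuted on the Flash ('f').
heroes[~heroes['color'].isin(['red', 'blue']) & (heroes['first_seen_on'] == 'f')]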
End of explanation
heroes['color'].value_counts()
Explanation: Notice that in both examples above, the expression in the brackets evaluates to a boolean series. The general strategy for filtering data frames, then, is to write an expression of the form frame[logical statement].
Counting Rows
To count the number of instances of a value in a Series, we can use the value_counts method. Below we count the number of instances of each color.
End of explanation
heroes.groupby(['color', 'first_season']).size()
Explanation: A more sophisticated analysis might involve counting the number of instances a tuple appears. Here we count $(color,value)$ tuples.
End of explanation
heroes.groupby(['color', 'first_season']).size().reset_index(name='count')
Explanation: This returns a series that has been multi-indexed. We'll eschew this topic for now. To get a data frame back, we'll use the reset_index method, which also allows us to simultaneously name the new column.
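As an aside not required by the lab, the resulting counts frame behaves like any other DataFrame, so it can be sorted directly:

counts = heroes.groupby(['color', 'first_season']).size().reset_index(name='count')
counts.sort_values('count', ascending=False)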
End of explanation
heroes['hero'] = heroes.index
heroes
Explanation: Joining Tables on One Column
Suppose we have another table that classifies superheroes into their respective teams. Note that canary is not in this data set and that killer frost and speedy are additions that aren't in the original heroes set.
For simplicity of the example, we'll convert the index of the heroes data frame into an explicit column called hero. A careful examination of the documentation will reveal that joining on a mixture of the index and columns is possible.
End of explanation
pd.merge(heroes, teams, how='inner', on='hero')
Explanation: Inner Join
The inner join below returns rows representing the heroes that appear in both data frames.
End of explanation
pd.merge(heroes, teams, how='left', on='hero')
Explanation: Left and right join
The left join returns rows representing heroes in the heroes ("left") data frame, augmented by information found in the teams data frame. Its counterpart, the right join, would return heroes in the teams data frame. Note that the team for hero canary is an NaN value, representing missing data.
End of explanation
pd.merge(heroes, teams, how='outer', on='hero')
Explanation: Outer join
An outer join on hero will return all heroes found in both the left and right data frames. Any missing values are filled in with NaN.
End of explanation
pd.merge(heroes, identities, how='inner',
left_on='hero', right_on='alter-ego')
Explanation: More than one match?
If the values in the columns to be matched don't uniquely identify a row, then a cartesian product is formed in the merge. For example, notice that firestorm has two different egos, so information from heroes had to be duplicated in the merge, once for each ego.
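A hedged aside: pd.merge also accepts indicator=True (a standard pandas option, though not used in this lab), which adds a _merge column recording which frame each row came from — handy when debugging joins:

pd.merge(heroes, teams, how='outer', on='hero', indicator=True)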
End of explanation
x = np.nan
y = pd.merge(heroes, teams, how='outer', on='hero')['first_season']
y
Explanation: Missing Values
As shown in lecture, there are a multitude of reasons why a data set might have missing values. The current implementation of Pandas uses the numpy NaN to represent these null values (older implementations even used -inf and inf). Future versions of Pandas might implement a true null value---keep your eyes peeled for this in updates! More information can be found [here](http://pandas.pydata.org/pandas-docs/stable/missing_data.html).
Because of the specialness of missing values, they merit their own set of tools. For this lab, we will focus on detection. For replacement, see the docs.
End of explanation
x.isnull() # won't work since x is neither a series nor a data frame
pd.isnull(x)
y.isnull()
pd.isnull(y)
Explanation: To check if a value is null, we use the isnull() method for series and data frames. Alternatively, there is a pd.isnull() function as well.
End of explanation
y.notnull()
y[y.notnull()]
Explanation: Since filtering out missing data is such a common operation, Pandas also has conveniently included the analogous notnull() methods and function for improved human readability.
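Beyond detection (and only as a pointer — the lab itself focuses on detection), pandas also provides dropna() and fillna() for removing or replacing missing values:

# Drop rows with missing values, or fill them with a placeholder value.
y.dropna()
y.fillna(-1)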
End of explanation
heroes_complete = pd.merge(heroes, identities, left_on='hero', right_on='alter-ego')
heroes_complete = pd.merge(heroes_complete, teams, how='outer', on='hero')
heroes_complete.set_index('hero', inplace=True)
heroes_complete
Explanation: Practice Set 1
Consider the "complete" data set shown below. Note that the rows are indexed by the superheroes' names.
End of explanation
...
...
...
...
...
...
...
...
...
Explanation: What are the outputs of the following calls? State what is wrong with the ones that will produce errors and propose a fix. To challenge yourself, try to do this exercise without running any commands.
End of explanation
flights = pd.read_csv("flights.dat", dtype={'sched_dep_time': 'f8', 'sched_arr_time': 'f8'})
flights.head()
airports_cols = [
'openflights_id',
'name',
'city',
'country',
'iata',
'icao',
'latitude',
'longitude',
'altitude',
'tz',
'dst',
'tz_olson',
'type',
'airport_dsource'
]
airports = pd.read_csv("airports.dat", names=airports_cols)
airports.head()
Explanation: Can you propose a fix to any of the broken ones above?
Practice Set 2
The practice problems below use the department of transportation's "On-Time" flight data for all flights originating from SFO or OAK in January 2016. Information about the variables can be found in the readme.html file. Information about the airports and airlines are contained in the comma-delimited files airports.dat and airlines.dat, respectively. Both were sourced from http://openflights.org/data.html
Disclaimer: There is a more direct way of dealing with time data that is not presented in these problems. This activity is merely an academic exercise.
End of explanation
def extract_hour(time):
Extracts hour information from military time.
Args:
time (float64): array of time given in military format.
Takes on values in 0.0-2359.0 due to float64 representation.
Returns:
array (float64): array of input dimension with hour information.
Should only take on integer values in 0-23
...
def extract_mins(time):
Extracts minute information from military time
Args:
time (float64): array of time given in military format.
Takes on values in 0.0-2359.0 due to float64 representation.
Returns:
        array (float64): array of input dimension with minute information.
Should only take on integer values in 0-59
...
Explanation: Question 1
It looks like the departure and arrival were read in as floating-point numbers. Write two functions, extract_hour and extract_mins, that convert military time to hours and minutes, respectively. Hint: You may want to use modular arithmetic and integer division.
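A small worked illustration of the hint on a plain number (this is only the arithmetic idea, applied to a scalar rather than the flight data):

t = 1330.0           # 1:30 pm in military time
print(t // 100)      # 13.0 -> the hour
print(t % 100)       # 30.0 -> the minutes past the hour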
End of explanation
def convert_to_minofday(time):
Converts military time to minute of day
Args:
time (float64): array of time given in military format.
Takes on values in 0.0-2359.0 due to float64 representation.
Returns:
array (float64): array of input dimension with minute of day
Example: 1:03pm is converted to 783.0
>>> convert_to_minofday(1303.0)
783.0
...
def calc_time_diff(x, y):
Calculates delay times y - x
Args:
x (float64): array of scheduled time given in military format.
Takes on values in 0.0-2359.0 due to float64 representation.
y (float64): array of same dimensions giving actual time
Returns:
array (float64): array of input dimension with delay time
scheduled = ...
actual = ...
...
delay = ...
delayed15 = ...
Explanation: Question 2
Using your two functions above, filter the flights data for flights that departed 15 or more minutes later than scheduled. You need not worry about flights that were delayed to the next day for this question.
End of explanation
delayed_airports = ...
delayed_destinations = ...
Explanation: Question 3
Using your answer from question 2, find the full name of every destination city with a flight from SFO or OAK that was delayed by 15 or more minutes. The airport codes used in flights are IATA codes. Sort the cities alphabetically.
End of explanation
top10 = ...
Explanation: Question 4
Find the tail number of the top ten planes, measured by number of destinations the plane flew to in January. You may find drop_duplicates and sort_values helpful.
End of explanation
airports = ...
Explanation: Challenge
Add a new column to airports called sfo_arr_delay_avg that contains information about the average delay time in January from SFO.
End of explanation
I_totally_did_everything=False
_ = ok.grade('qcompleted')
_ = ok.backup()
_ = ok.submit()
Explanation: Let's take a look at our non-null results. Do any of the delay values catch your eye?
...
Submission
Run the cell below to submit the lab. You may resubmit as many times you want. We will be grading you on effort/completion.
End of explanation |
5,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Survival Analysis with Plotly
Step1: Introduction
Survival analysis is a set of statistical methods for analyzing the occurrence of event data over time. It is also used to determine the relationship of co-variates to the time-to-events, and accurately compare time-to-event between two or more groups. For example
Step2: Loading data into python and R
We will be using the tongue dataset from the KMsurv package in R, then convert the data into a pandas dataframe under the same name.
This data frame contains the following columns
Step3: We can now refer to tongue using both R and python.
Step4: We can even operate on R and Python within the same code cell.
Step5: In R we need to create a Surv object with the Surv() function. Most functions in the survival package apply methods to this object. For right-censored data, we need to pass two arguments to Surv()
Step6: The plus-signs identify observations that are right-censored.
Estimating survival with Kaplan-Meier
Using R
The simplest fit estimates a survival object against an intercept. However, the survfit() function has several optional arguments. For example, we can change the confidence interval using conf.int and conf.type.
See help(survfit.formula) for the comprehensive documentation.
Step7: It is often helpful to call the summary() and plot() functions on this object.
Step8: Let's convert this plot into an interactive plotly object using plotly and ggplot2.
First, we will use a helper ggplot function written by Edwin Thoen to plot pretty survival distributions in R.
Step9: Voila!
Step10: We have to use a workaround to render an interactive plotly object by using an iframe in the ipython kernel. This is a bit easier if you are working in an R kernel.
Step11: The y axis represents the probability a patient is still alive at time $t$ weeks. We see a steep drop off within the first 100 weeks, and then observe the curve flattening. The dotted lines represent the 95% confidence intervals.
Using Python
We will now replicate the above steps using python. Above, we have already specified a variable tongues that holds the data in a pandas dataframe.
Step12: The method takes the same parameters as it's R counterpart, a time vector and a vector indicating which observations are observed or censored. The model fitting sequence is similar to the scikit-learn api.
Step13: To get a plot with the confidence intervals, we simply can call plot() on our kmf object.
Step14: Now we can convert this plot to an interactive Plotly object. However, we will have to augment the legend and filled area manually. Once we create a helper function, the process is simple.
Please see the Plotly Python user guide for more insight on how to update plot parameters.
Don't forget you can also easily edit the chart properties using the Plotly GUI interface by clicking the "Play with this data!" link below the chart.
Step15: <hr>
Multiple Types
Using R
Many times there are different groups contained in a single dataset. These may represent categories such as treatment groups, different species, or different manufacturing techniques. The type variable in the tongues dataset describes a patients DNA profile. Below we define a Kaplan-Meier estimate for each of these groups in R and Python.
Step16: Convert to a Plotly object.
Step17: Using Python
Step18: Convert to a Plotly object.
Step19: <hr>
Testing for Difference
It looks like DNA Type 2 is potentially more deadly, or more difficult to treat compared to Type 1. However, the difference between these survival curves still does not seem dramatic. It will be useful to perform a statistical test on the different DNA profiles to see if their survival rates are significantly different.
Python's lifelines contains methods in lifelines.statistics, and the R package survival uses a function survdiff(). Both functions return a p-value from a chi-squared distribution.
It turns out these two DNA types do not have significantly different survival rates.
Using R
Step20: Using Python
Step21: <hr>
Estimating Hazard Rates
Using R
To estimate the hazard function, we compute the cumulative hazard function using the Nelson-Aalen estimator, defined as
Step22: Using Python | Python Code:
# You can also install packages from within IPython!
# Install Python Packages
!pip install lifelines
!pip install rpy2
!pip install plotly
!pip install pandas
# Install R libraries
%load_ext rpy2.ipython
%R install.packages("devtools")
%R install_github("ropensci/plotly")
%R install.packages("IOsurv")
%R install.packages("ggplot2")
Explanation: Survival Analysis with Plotly: R vs. Python
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"><ul class="toc"><li><a href="#Survival-Analysis-with-Plotly:-R-vs.-Python">I. Survival Analysis with Plotly: R vs. Python</a><a class="anchor-link" href="#Survival-Analysis-with-Plotly:-R-vs.-Python">¶</a></li><ul class="toc"><li><a href="#Introduction">I. Introduction</a><a class="anchor-link" href="#Introduction">¶</a></li><li><a href="#Censoring">II. Censoring</a><a class="anchor-link" href="#Censoring">¶</a></li><li><a href="#Loading-data-into-python-and-R">III. Loading data into python and R</a><a class="anchor-link" href="#Loading-data-into-python-and-R">¶</a></li></ul><li><a href="#Estimating-survival-with-Kaplan-Meier">II. Estimating survival with Kaplan-Meier</a><a class="anchor-link" href="#Estimating-survival-with-Kaplan-Meier">¶</a></li><ul class="toc"><ul class="toc"><li><a href="#Using-R">I. Using R</a><a class="anchor-link" href="#Using-R">¶</a></li><li><a href="#Using-Python">II. Using Python</a><a class="anchor-link" href="#Using-Python">¶</a></li></ul></ul><li><a href="#Multiple-Types">III. Multiple Types</a><a class="anchor-link" href="#Multiple-Types">¶</a></li><ul class="toc"><ul class="toc"><li><a href="#Using-R">I. Using R</a><a class="anchor-link" href="#Using-R">¶</a></li><li><a href="#Using-Python">II. Using Python</a><a class="anchor-link" href="#Using-Python">¶</a></li></ul></ul><li><a href="#Testing-for-Difference">IV. Testing for Difference</a><a class="anchor-link" href="#Testing-for-Difference">¶</a></li><ul class="toc"><ul class="toc"><li><a href="#Using-R">I. Using R</a><a class="anchor-link" href="#Using-R">¶</a></li><li><a href="#Using-Python">II. Using Python</a><a class="anchor-link" href="#Using-Python">¶</a></li></ul></ul><li><a href="#Estimating-Hazard-Rates">V. Estimating Hazard Rates</a><a class="anchor-link" href="#Estimating-Hazard-Rates">¶</a></li><ul class="toc"><ul class="toc"><li><a href="#Using-R">I. Using R</a><a class="anchor-link" href="#Using-R">¶</a></li><li><a href="#Using-Python">II. Using Python</a><a class="anchor-link" href="#Using-Python">¶</a></li></ul></ul></ul></div>
In this notebook we introduce Survival Analysis using both R and Python. We will compare programming languages and leverage Plotly's Python and R APIs to convert graphics to interactive Plotly objects.
Plotly is a platform for making interactive graphs with R, Python, MATLAB, and Excel. You can make graphs and analyze data on Plotly’s free public cloud. For collaboration and sensitive data, you can run Plotly on your own servers.
For a more in depth theoretical background in survival analysis, please refer to these sources:
Intro to Survival Analysis by John Fox
Wikipedia
Introduction to Survival Analysis, Stanford University
Survival Models, Princeton University
Need help converting Plotly graphs from R or Python?
- R
- Python
For this code to run on your machine, you will need several R and Python packages installed.
Running sudo pip install <package_name> from your terminal will install python libraries.
Running install.packages("<library_name>") in your R console will install R packages.
You will also need to register an account with Plotly to receive your API key.
End of explanation
# OIserve contains the survival package and sample datasets
%R library(OIsurv)
%R library(devtools)
%R library(plotly)
%R library(ggplot2)
%R library(IRdisplay)
# Authenticate to plotly's api using your account
%R py <- plotly("rmdk", "0sn825k4r8")
# Load python libraries
import numpy as np
import pandas as pd
import lifelines as ll
# Plotting helpers
from IPython.display import HTML
%matplotlib inline
import matplotlib.pyplot as plt
import plotly.plotly as py
import plotly.tools as tls
from plotly.graph_objs import *
from pylab import rcParams
rcParams['figure.figsize']=10, 5
Explanation: Introduction
Survival analysis is a set of statistical methods for analyzing the occurrence of event data over time. It is also used to determine the relationship of co-variates to the time-to-events, and accurately compare time-to-event between two or more groups. For example:
Time to death in biological systems.
Failure time in mechanical systems.
How long can we expect a user to be on a website / service?
Time to recovery for lung cancer treatment.
The statistical term survival analysis is analogous to reliability theory in engineering, duration analysis in economics, and event history analysis in sociology.
The two key functions in survival analysis are the survival function and the hazard function.
The survival function is conventionally denoted as $S$, the probability that time of death is later than some specified time $t$, defined as:
$$ S(t) = Pr(T>t)$$
where $t$ is some duration, $T$ is a random variable denoting the time of death, and $Pr$ is the probability of the event. Generally, $0\leq S(t)\leq1$ and $S(0) = 1$.
<br>
<br>
The hazard function gives us the probability of "death" in the next instance of time, given we are still "alive".
$$\lambda(t) = \lim_{dt \to 0} \frac{Pr(t \leq T < t + dt)}{dt \cdot S(t)}$$
The hazard rate describes the relative likelihood of the event occurring at time $t$, and ignores the accumulation of hazard up to time $t$, unlike $S(t)$.
<br>
However, we do not actually observe the true survival function of a population; we must use the observed data to estimate it. A popular method to estimate the survival function $S(t)$ is the Kaplan-Meier estimate.
$$S(t) = \prod_{t_i < t} \frac{n_i - d_i}{n_i}$$
where $d_i$ is the number of death events at time $t_i$ and $n_i$ is the number of subjects at risk of death at time $t_i$.
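To make the product formula concrete, here is a tiny hand computation with made-up numbers (not the tongue data used below):

import numpy as np
n = np.array([10., 8., 5.])   # subjects at risk at each event time
d = np.array([1., 2., 1.])    # deaths observed at each event time
S = np.cumprod((n - d) / n)
print(S)                      # [0.9, 0.675, 0.54] -- survival just after each event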
Censoring
Censoring is a type of missing data problem common in survival analysis. Other popular comparison methods, such as linear regression and t-tests do not accommodate for censoring. This makes survival analysis attractive for data from randomized clinical studies.
In an ideal scenario, both the birth and death dates of a patient are known, which means the lifetime is known.
Right censoring occurs when the 'death' is unknown, but it is after some known date. e.g. The 'death' occurs after the end of the study, or there was no follow-up with the patient.
Left censoring occurs when the lifetime is known to be less than a certain duration. e.g. Unknown time of initial infection exposure when first meeting with a patient.
<hr>
For the following analysis, we will use the lifelines library for python, and the survival package for R. We can use rpy2 to execute R code in the same document as the python code.
End of explanation
# Load in data
%R data(tongue)
# Pull data into python kernel
%Rpull tongue
# Convert into pandas dataframe
from rpy2.robjects import pandas2ri
tongue = pandas2ri.ri2py_dataframe(tongue)
Explanation: Loading data into python and R
We will be using the tongue dataset from the KMsurv package in R, then convert the data into a pandas dataframe under the same name.
This data frame contains the following columns:
type: Tumor DNA profile (1=Aneuploid Tumor, 2=Diploid Tumor)
time: Time to death or on-study time, weeks
delta: Death indicator (0=alive, 1=dead)
End of explanation
%%R
summary(tongue)
tongue.describe()
Explanation: We can now refer to tongue using both R and python.
End of explanation
%R print(mean(tongue$time))
print tongue['time'].mean()
Explanation: We can even operate on R and Python within the same code cell.
End of explanation
%%R
attach(tongue)
tongue.surv <- Surv(time[type==1], delta[type==1])
tongue.surv
Explanation: In R we need to create a Surv object with the Surv() function. Most functions in the survival package apply methods to this object. For right-censored data, we need to pass two arguments to Surv():
a vector of times
a vector indicating which times are observed and censored
End of explanation
%%R
surv.fit <- survfit(tongue.surv~1)
surv.fit
Explanation: The plus-signs identify observations that are right-censored.
Estimating survival with Kaplan-Meier
Using R
The simplest fit estimates a survival object against an intercept. However, the survfit() function has several optional arguments. For example, we can change the confidence interval using conf.int and conf.type.
See help(survfit.formula) for the comprehensive documentation.
End of explanation
%%R
summary(surv.fit)
%%R -h 400
plot(surv.fit, main='Kaplan-Meier estimate with 95% confidence bounds',
xlab='time', ylab='survival function')
Explanation: It is often helpful to call the summary() and plot() functions on this object.
End of explanation
%%R
ggsurv <- function(s, CI = 'def', plot.cens = T, surv.col = 'gg.def',
cens.col = 'red', lty.est = 1, lty.ci = 2,
cens.shape = 3, back.white = F, xlab = 'Time',
ylab = 'Survival', main = ''){
library(ggplot2)
strata <- ifelse(is.null(s$strata) ==T, 1, length(s$strata))
stopifnot(length(surv.col) == 1 | length(surv.col) == strata)
stopifnot(length(lty.est) == 1 | length(lty.est) == strata)
ggsurv.s <- function(s, CI = 'def', plot.cens = T, surv.col = 'gg.def',
cens.col = 'red', lty.est = 1, lty.ci = 2,
cens.shape = 3, back.white = F, xlab = 'Time',
ylab = 'Survival', main = ''){
dat <- data.frame(time = c(0, s$time),
surv = c(1, s$surv),
up = c(1, s$upper),
low = c(1, s$lower),
cens = c(0, s$n.censor))
dat.cens <- subset(dat, cens != 0)
col <- ifelse(surv.col == 'gg.def', 'black', surv.col)
pl <- ggplot(dat, aes(x = time, y = surv)) +
xlab(xlab) + ylab(ylab) + ggtitle(main) +
geom_step(col = col, lty = lty.est)
pl <- if(CI == T | CI == 'def') {
pl + geom_step(aes(y = up), color = col, lty = lty.ci) +
geom_step(aes(y = low), color = col, lty = lty.ci)
} else (pl)
pl <- if(plot.cens == T & length(dat.cens) > 0){
pl + geom_point(data = dat.cens, aes(y = surv), shape = cens.shape,
col = cens.col)
} else if (plot.cens == T & length(dat.cens) == 0){
stop ('There are no censored observations')
} else(pl)
pl <- if(back.white == T) {pl + theme_bw()
} else (pl)
pl
}
ggsurv.m <- function(s, CI = 'def', plot.cens = T, surv.col = 'gg.def',
cens.col = 'red', lty.est = 1, lty.ci = 2,
cens.shape = 3, back.white = F, xlab = 'Time',
ylab = 'Survival', main = '') {
n <- s$strata
groups <- factor(unlist(strsplit(names
(s$strata), '='))[seq(2, 2*strata, by = 2)])
gr.name <- unlist(strsplit(names(s$strata), '='))[1]
gr.df <- vector('list', strata)
ind <- vector('list', strata)
n.ind <- c(0,n); n.ind <- cumsum(n.ind)
for(i in 1:strata) ind[[i]] <- (n.ind[i]+1):n.ind[i+1]
for(i in 1:strata){
gr.df[[i]] <- data.frame(
time = c(0, s$time[ ind[[i]] ]),
surv = c(1, s$surv[ ind[[i]] ]),
up = c(1, s$upper[ ind[[i]] ]),
low = c(1, s$lower[ ind[[i]] ]),
cens = c(0, s$n.censor[ ind[[i]] ]),
group = rep(groups[i], n[i] + 1))
}
dat <- do.call(rbind, gr.df)
dat.cens <- subset(dat, cens != 0)
pl <- ggplot(dat, aes(x = time, y = surv, group = group)) +
xlab(xlab) + ylab(ylab) + ggtitle(main) +
geom_step(aes(col = group, lty = group))
col <- if(length(surv.col == 1)){
scale_colour_manual(name = gr.name, values = rep(surv.col, strata))
} else{
scale_colour_manual(name = gr.name, values = surv.col)
}
pl <- if(surv.col[1] != 'gg.def'){
pl + col
} else {pl + scale_colour_discrete(name = gr.name)}
line <- if(length(lty.est) == 1){
scale_linetype_manual(name = gr.name, values = rep(lty.est, strata))
} else {scale_linetype_manual(name = gr.name, values = lty.est)}
pl <- pl + line
pl <- if(CI == T) {
if(length(surv.col) > 1 && length(lty.est) > 1){
stop('Either surv.col or lty.est should be of length 1 in order
to plot 95% CI with multiple strata')
}else if((length(surv.col) > 1 | surv.col == 'gg.def')[1]){
pl + geom_step(aes(y = up, color = group), lty = lty.ci) +
geom_step(aes(y = low, color = group), lty = lty.ci)
} else{pl + geom_step(aes(y = up, lty = group), col = surv.col) +
geom_step(aes(y = low,lty = group), col = surv.col)}
} else {pl}
pl <- if(plot.cens == T & length(dat.cens) > 0){
pl + geom_point(data = dat.cens, aes(y = surv), shape = cens.shape,
col = cens.col)
} else if (plot.cens == T & length(dat.cens) == 0){
stop ('There are no censored observations')
} else(pl)
pl <- if(back.white == T) {pl + theme_bw()
} else (pl)
pl
}
pl <- if(strata == 1) {ggsurv.s(s, CI , plot.cens, surv.col ,
cens.col, lty.est, lty.ci,
cens.shape, back.white, xlab,
ylab, main)
} else {ggsurv.m(s, CI, plot.cens, surv.col ,
cens.col, lty.est, lty.ci,
cens.shape, back.white, xlab,
ylab, main)}
pl
}
Explanation: Let's convert this plot into an interactive plotly object using plotly and ggplot2.
First, we will use a helper ggplot function written by Edwin Thoen to plot pretty survival distributions in R.
End of explanation
%%R -h 400
p <- ggsurv(surv.fit) + theme_bw()
p
Explanation: Voila!
End of explanation
%%R
# Create the iframe HTML
plot.ly <- function(url) {
# Set width and height from options or default square
w <- "750"
h <- "600"
html <- paste("<center><iframe height=\"", h, "\" id=\"igraph\" scrolling=\"no\" seamless=\"seamless\"\n\t\t\t\tsrc=\"",
url, "\" width=\"", w, "\" frameBorder=\"0\"></iframe></center>", sep="")
return(html)
}
%R p <- plot.ly("https://plot.ly/~rmdk/111/survival-vs-time/")
# pass object to python kernel
%R -o p
# Render HTML
HTML(p[0])
Explanation: We have to use a workaround to render an interactive plotly object by using an iframe in the ipython kernel. This is a bit easier if you are working in an R kernel.
End of explanation
from lifelines.estimation import KaplanMeierFitter
kmf = KaplanMeierFitter()
Explanation: The y axis represents the probability a patient is still alive at time $t$ weeks. We see a steep drop off within the first 100 weeks, and then observe the curve flattening. The dotted lines represent the 95% confidence intervals.
Using Python
We will now replicate the above steps using Python. Above, we have already specified a variable tongue that holds the data in a pandas dataframe.
End of explanation
f = tongue.type==1
T = tongue[f]['time']
C = tongue[f]['delta']
kmf.fit(T, event_observed=C)
Explanation: The method takes the same parameters as its R counterpart: a time vector and a vector indicating which observations are observed or censored. The model fitting sequence is similar to the scikit-learn API.
End of explanation
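# Quick sanity check (illustrative, assuming the lifelines API used elsewhere in this
# notebook): the fitted estimator exposes the survival curve as a pandas DataFrame,
# so we can inspect the first few rows of the estimate directly.
print(kmf.survival_function_.head())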
kmf.plot(title='Tumor DNA Profile 1')
Explanation: To get a plot with the confidence intervals, we simply can call plot() on our kmf object.
End of explanation
p = kmf.plot(ci_force_lines=True, title='Tumor DNA Profile 1 (95% CI)')
# Collect the plot object
kmf1 = plt.gcf()
def pyplot(fig, ci=True, legend=True):
# Convert mpl fig obj to plotly fig obj, resize to plotly's default
py_fig = tls.mpl_to_plotly(fig, resize=True)
# Add fill property to lower limit line
if ci == True:
style1 = dict(fill='tonexty')
# apply style
py_fig['data'][2].update(style1)
# Change color scheme to black
py_fig['data'].update(dict(line=Line(color='black')))
# change the default line type to 'step'
py_fig['data'].update(dict(line=Line(shape='hv')))
# Delete misplaced legend annotations
py_fig['layout'].pop('annotations', None)
if legend == True:
# Add legend, place it at the top right corner of the plot
py_fig['layout'].update(
showlegend=True,
legend=Legend(
x=1.05,
y=1
)
)
# Send updated figure object to Plotly, show result in notebook
return py.iplot(py_fig)
pyplot(kmf1, legend=False)
Explanation: Now we can convert this plot to an interactive Plotly object. However, we will have to augment the legend and filled area manually. Once we create a helper function, the process is simple.
Please see the Plotly Python user guide for more insight on how to update plot parameters.
Don't forget you can also easily edit the chart properties using the Plotly GUI interface by clicking the "Play with this data!" link below the chart.
End of explanation
%%R
surv.fit2 <- survfit( Surv(time, delta) ~ type)
p <- ggsurv(surv.fit2) +
ggtitle('Lifespans of different tumor DNA profile') + theme_bw()
p
Explanation: <hr>
Multiple Types
Using R
Many times there are different groups contained in a single dataset. These may represent categories such as treatment groups, different species, or different manufacturing techniques. The type variable in the tongue dataset describes a patient's DNA profile. Below we define a Kaplan-Meier estimate for each of these groups in R and Python.
End of explanation
#%R py$ggplotly(plt)
%R p <- plot.ly("https://plot.ly/~rmdk/173/lifespans-of-different-tumor-dna-profile/")
# pass object to python kernel
%R -o p
# Render HTML
HTML(p[0])
Explanation: Convert to a Plotly object.
End of explanation
f2 = tongue.type==2
T2 = tongue[f2]['time']
C2 = tongue[f2]['delta']
ax = plt.subplot(111)
kmf.fit(T, event_observed=C, label=['Type 1 DNA'])
kmf.survival_function_.plot(ax=ax)
kmf.fit(T2, event_observed=C2, label=['Type 2 DNA'])
kmf.survival_function_.plot(ax=ax)
plt.title('Lifespans of different tumor DNA profile')
kmf2 = plt.gcf()
Explanation: Using Python
End of explanation
pyplot(kmf2, ci=False)
Explanation: Convert to a Plotly object.
End of explanation
%%R
survdiff(Surv(time, delta) ~ type)
Explanation: <hr>
Testing for Difference
It looks like DNA Type 2 is potentially more deadly, or more difficult to treat compared to Type 1. However, the difference between these survival curves still does not seem dramatic. It will be useful to perform a statistical test on the different DNA profiles to see if their survival rates are significantly different.
Python's lifelines contains methods in lifelines.statistics, and the R package survival uses a function survdiff(). Both functions return a p-value from a chi-squared distribution.
It turns out these two DNA types do not have significantly different survival rates.
Using R
End of explanation
from lifelines.statistics import logrank_test
summary_= logrank_test(T, T2, C, C2, alpha=99)
print summary_
Explanation: Using Python
End of explanation
%%R
haz <- Surv(time[type==1], delta[type==1])
haz.fit <- summary(survfit(haz ~ 1), type='fh')
x <- c(haz.fit$time, 250)
y <- c(-log(haz.fit$surv), 1.474)
cum.haz <- data.frame(time=x, cumulative.hazard=y)
p <- ggplot(cum.haz, aes(time, cumulative.hazard)) + geom_step() + theme_bw() +
ggtitle('Nelson-Aalen Estimate')
p
%R p <- plot.ly("https://plot.ly/~rmdk/185/cumulativehazard-vs-time/")
# pass object to python kernel
%R -o p
# Render HTML
HTML(p[0])
Explanation: <hr>
Estimating Hazard Rates
Using R
To estimate the hazard function, we compute the cumulative hazard function using the Nelson-Aalen estimator, defined as:
$$\hat{\Lambda} (t) = \sum_{t_i \leq t} \frac{d_i}{n_i}$$
where $d_i$ is the number of deaths at time $t_i$ and $n_i$ is the number of individuals at risk at time $t_i$. Both the R and Python modules use the same estimator. However, in R we will use the -log of the Fleming and Harrington estimator, which is equivalent to the Nelson-Aalen.
End of explanation
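# Illustrative aside: the Nelson-Aalen sum above can be written out by hand with numpy,
# reusing the T (durations) and C (event flags) defined earlier for Type 1 tumors.
# This is a sketch of the formula, not a replacement for lifelines' estimator.
event_times = np.sort(T[C == 1].unique())
cum_haz = np.cumsum([np.sum((T == t) & (C == 1)) / float(np.sum(T >= t))
                     for t in event_times])
print(cum_haz[:5])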
from lifelines.estimation import NelsonAalenFitter
naf = NelsonAalenFitter()
naf.fit(T, event_observed=C)
naf.plot(title='Nelson-Aalen Estimate')
naf.plot(ci_force_lines=True, title='Nelson-Aalen Estimate')
py_p = plt.gcf()
pyplot(py_p, legend=False)
Explanation: Using Python
End of explanation |
5,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trabajando de forma conjunta con Python y con R.
Hoy vamos a ver como podemos juntar lo bueno de R, algunas de sus librerías, con Python usando rpy2.
Pero, lo primero de todo, ¿qué es rpy2? rpy2 es una interfaz que permite que podamos comunicar información entre R y Python y que podamos acceder a funcionalidad de R desde Python. Por tanto, podemos estar usando Python para todo nuestro análisis y en el caso de que necesitemos alguna librería estadística especializada de R podremos acceder a la misma usando rpy2.
Para poder usar rpy2 necesitarás tener instalado tanto Python (CPython versión >= 2.7.x) como R (versión >=3), además de las librerías R a las que quieras acceder. Conda permite realizar todo el proceso de instalación de los intérpretes de Python y R, además de librerías, pero no he trabajado con Conda y R por lo que no puedo aportar mucho más en este aspecto. Supongo que será parecido a lo que hacemos con Conda y Python.
Para este microtutorial voy a hacer uso de la librería extRemes de R que permite hacer análisis de valores extremos usando varias de las metodologías más comúnmente aceptadas.
Como siempre, primero de todo, importaremos la funcionalidad que necesitamos para la ocasión.
Step2: En el anterior código podemos ver una serie de cosas nuevas que voy a explicar brevemente
Step3: En la anterior celda hemos creado una función R llamada saluda y que ahora está disponible en el espacio de nombres global de R. Podemos acceder a la misma desde Python de la siguiente forma
Step4: Y podemos usarla de la siguiente forma
Step5: En la anterior celda véis que para acceder al resultado he tenido que usar res[0]. En realidad, lo que nos devuleve rpy2 es
Step6: En este caso un numpy array con diversa información del objeto rpy2. Como el objeto solo devuelve un string pues el numpy array solo tiene un elemento.
Podemos acceder al código R de la función de la siguiente forma
Step7: Hemos visto como acceder desde Python a nombres disponibles en el entorno global de R. ¿Cómo podemos hacer para que algo que creemos en Python este accesible en R?
Step8: Veamos como es esta variable_r_creada_desde_python dentro de Python
Step9: ¿Y lo que se tendría que ver en R?
Step10: Pero ahora mismo esa variable no está disponible desde R y no la podríamos usar dentro de código R que permanece en el espacio R (vaya lío, ¿no?)
Step11: Vale, tendremos que hacer que sea accesible desde R de la siguiente forma
Step12: Ahora que ya la tenemos accesible la podemos usar desde R. Por ejemplo, vamos a usar la función sum en R que suma los elementos pero directamente desde R
Step13: Perfecto, ya sabemos, de forma muy sencilla y básica, como podemos usar R desde Python, como podemos pasar información desde R hacia Python y desde Python hacia R. ¡¡¡Esto es muy poderoso!!!, estamos juntando lo mejor de dos mundos, la solidez de las herramientas científicas de Python con la funcionalidad especializada que nos pueden aportar algunas librerías de R no disponibles en otros ámbitos.
Trabajando de forma híbrida entre Python y R
Vamos a empezar importando la librería extRemes de R
Step14: En la anterior celda hemos hecho lo siguiente
Step15: Extraemos los máximos anuales los cuales usaremos posteriormente dentro de R para hacer cálculo de valores extremos usando la distribución generalizada de valores extremos (GEV)
Step16: Dibujamos los valores máximos anuales usando Pandas
Step17: Referenciamos la funcionalidad fevd (fit extreme value distribution) dentro del paquete extremes de R para poder usarla directamente con los valores máximos que hemos obtenido usando Pandas y desde Python.
Step18: Como hemos comentado anteriormente, vamos a calcular los parámetros de la GEV usando el método de ajuste GMLE (Generalised Maximum Lihelihood Estimation) y los vamos a guardar directamente en una variable Python.
Veamos la ayuda antes
Step19: Y ahora vamos a hacer un cálculo sin meternos mucho en todas las opciones posibles.
Step20: ¿Qué estructura tiene la variable res que acabamos de crear y que tiene los resultados del ajuste?
Step21: Según nos indica lo anterior, ahora res es un vector que está compuesto de diferentes elementos. Los vectores pueden tener un nombre para todos o algunos de los elementos. Para acceder a estor nombres podemos hacer
Step22: Según el output anterior, parece que hay un nombre results, ahí es donde se guardan los valores del ajuste, los estimadores. Para acceder al mismo podemos hacerlo de diferentes formas. Con Python tendriamos que saber el índice y acceder de forma normal (__getitem__()). Existe una forma alternativa usando el método rx que nos permite acceder directamente con el nombre
Step23: Parece que tenemos un único elemento
Step24: Vemos ahora que results tiene un elemento con nombre par donde se guardan los valores de los estimadores del ajuste a la GEV que hemos obtenido usando GMLE. Vamos a obtener finalmente los valores de los estimadores
Step25: Funcion mágica para R (antigua rmagic)
Usamos la antigua función mágica rmagic que ahora se activará en el notebook de la siguiente forma
Step26: Veamos como funciona la functión mágica de R
Step27: A veces, será más simple usar la función mágica para interactuar con R. Veamos un ejemplo donde le pasamos a R el valor obtenido de la función fevd del paquete extRemes de R que he usado anteriormente y corremos cierto código directamente desde R sin tener que usar ro.r.
Step28: En la anterior celda de código le he pasado como parámetro de entrada (- i res) la variable res que había obtenido anteriormente para que esté disponible desde R. y he ejecutado código R puro (plot.fevd(res)).
Si lo anterior lo quiero hacer con rpy2 puedo hacer lo siquiente
Step29: Lo anterior me bloquea el notebook y me 'rompe' la sesión (en windows, al menos) ya que la ventana de gráficos se abre de forma externa... Por tanto, una buena opción para trabajar de forma interactiva con Python y R de forma conjunta y que no se 'rompa' nada es usar tanto rpy2 como su extensión para el notebook de Jupyter (dejaremos de llamarlo IPython poco a poco).
Usando Python y R combinando rpy2 y la función mágica
Vamos a combinar las dos formas de trabajar con rpy2 en el siguiente ejemplo
Step30: Lo que vamos a hacer es calcular los parámetros del ajuste usando la distribución GEV y Gumbel, que es un caso especial de la GEV. El ajuste lo calculamos usando tanto MLE como GMLE. Además de mostrar los valores resultantes del ajuste para los estimadores vamos a mostrar el dibujo de cada uno de los ajustes y algunos test de bondad. Usamos Python para toda la maquinaria de los bucles, usamos rpy2 para obtener los estimadores y usamos la función mágica de rpy2 para mostrar los gráficos del resultado. | Python Code:
# Import pandas and numpy to handle the data we will pass to R
import pandas as pd
import numpy as np
# Use rpy2 to interact with R
import rpy2.robjects as ro
# Activate rpy2's automatic type conversion
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Working with Python and R together.
Today we will see how to combine the best of R, some of its libraries, with Python using rpy2.
But first of all, what is rpy2? rpy2 is an interface that lets us pass information between R and Python and access R functionality from Python. We can therefore use Python for our whole analysis and, whenever we need some specialised statistical library from R, reach it through rpy2.
To use rpy2 you will need both Python (CPython version >= 2.7.x) and R (version >= 3) installed, plus the R libraries you want to access. Conda can install the Python and R interpreters as well as libraries, but I have not worked with Conda and R so I cannot add much here. I assume it is similar to what we do with Conda and Python.
For this mini-tutorial I will use the extRemes R library, which performs extreme value analysis with several of the most widely accepted methodologies.
As always, first of all we import the functionality we need for the occasion.
End of explanation
codigo_r =
saluda <- function(cadena) {
return(paste("Hola, ", cadena))
}
ro.r(codigo_r)
Explanation: In the previous code we can see a few new things that I will explain briefly:
import rpy2.robjects as ro, we will explain this a little further below.
import rpy2.robjects.numpy2ri, this imports the numpy2ri module, which enables automatic conversion of numpy objects to rpy2 objects.
rpy2.robjects.numpy2ri.activate(), here we call the activate function, which switches on the automatic object conversion mentioned in the previous line.
A very brief introduction to some of the most important parts of rpy2.
To evaluate R code directly we can use rpy2.robjects.r with the R code expressed as a string (I have imported rpy2.robjects as ro in this case, as you can see above):
End of explanation
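# Small illustrative check (an aside, not part of the original tutorial): any one-line
# R expression can be evaluated the same way; the result comes back as an R vector.
r_pi = ro.r('pi')
print(r_pi[0])  # should print something close to 3.141592...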
saluda_py = ro.globalenv['saluda']
Explanation: In the previous cell we created an R function called saluda, which is now available in R's global namespace. We can access it from Python as follows:
End of explanation
res = saluda_py('pepe')
print(res[0])
Explanation: And we can use it as follows:
End of explanation
print(type(res))
print(res.shape)
Explanation: In the previous cell you can see that to access the result I had to use res[0]. What rpy2 actually returns is:
End of explanation
print(saluda_py.r_repr())
Explanation: In this case a numpy array with assorted information about the rpy2 object. Since the object only returns a string, the numpy array has a single element.
We can access the R source code of the function as follows:
End of explanation
variable_r_creada_desde_python = ro.FloatVector(np.arange(1,5,0.1))
Explanation: We have seen how to access names available in R's global environment from Python. How can we make something created in Python accessible from R?
End of explanation
variable_r_creada_desde_python
Explanation: Let's see what this variable_r_creada_desde_python looks like inside Python
End of explanation
print(variable_r_creada_desde_python.r_repr())
Explanation: And what should it look like from R?
End of explanation
ro.r('variable_r_creada_desde_python')
Explanation: But right now that variable is not available from R, so we could not use it inside R code that lives in the R space (quite a tangle, right?)
End of explanation
ro.globalenv["variable_ahora_en_r"] = variable_r_creada_desde_python
print(ro.r("variable_ahora_en_r"))
Explanation: OK, we will have to make it accessible from R as follows:
End of explanation
print(ro.r('sum(variable_ahora_en_r)'))
print(np.sum(variable_r_creada_desde_python))
Explanation: Now that it is accessible we can use it from R. For example, let's call R's sum function, which adds up the elements, directly from R:
End of explanation
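# The same round trip works for any R function; as an illustrative aside we can
# cross-check R's mean against numpy's mean on the very same shared data.
print(ro.r('mean(variable_ahora_en_r)'))
print(np.mean(variable_r_creada_desde_python))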
# Import the extRemes R library
from rpy2.robjects.packages import importr
extremes = importr('extRemes')
Explanation: Perfect, we now know, in a very simple and basic way, how to use R from Python and how to pass information from R to Python and from Python to R. This is very powerful!!! We are joining the best of two worlds: the solidity of Python's scientific tools and the specialised functionality that some R libraries provide and that is not available elsewhere.
Working in a hybrid way between Python and R
Let's start by importing the extRemes R library:
End of explanation
data = pd.read_csv('datasets/Synthetic_data.txt',
sep = '\s*', skiprows = 1, parse_dates = [[0, 1]],
names = ['date','time','wspd'], index_col = 0)
data.head(3)
Explanation: In the previous cell we did the following:
from rpy2.robjects.packages import importr, the importr function is what we use to import R libraries
extremes = importr('extRemes'), this imports the extRemes R library; it is equivalent to doing library(extRemes) in R.
We read data with pandas. The same repo that holds this notebook also contains a text file with data I created beforehand. They are supposed to be hourly wind speed values, so we will do extreme value analysis of hourly wind speed.
End of explanation
max_y = data.wspd.groupby(pd.TimeGrouper(freq = 'A')).max()
Explanation: We extract the annual maxima, which we will later use inside R to do extreme value calculations with the generalised extreme value (GEV) distribution:
End of explanation
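# Note (aside): pd.TimeGrouper has been deprecated in newer pandas releases; on a recent
# version the same annual maxima can usually be obtained with resample, for example:
# max_y = data.wspd.resample('A').max()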
max_y.plot(kind = 'bar', figsize = (12, 4))
Explanation: Dibujamos los valores máximos anuales usando Pandas:
End of explanation
fevd = extremes.fevd
Explanation: We reference the fevd (fit extreme value distribution) functionality inside the extRemes R package so we can use it directly, from Python, with the maxima we obtained with Pandas.
End of explanation
print(fevd.__doc__)
Explanation: As mentioned above, we will estimate the GEV parameters using the GMLE (Generalised Maximum Likelihood Estimation) fitting method and store them directly in a Python variable.
Let's look at the help first:
End of explanation
res = fevd(max_y.values, type = "GEV", method = "GMLE")
Explanation: And now let's run a fit without going too deep into all the possible options.
End of explanation
print(type(res))
print(res.r_repr)
Explanation: What structure does the res variable we just created, which holds the fit results, have?
End of explanation
res.names
Explanation: As the output above tells us, res is now a vector made up of different elements. Vectors can have a name for all or some of their elements. To access those names we can do:
End of explanation
results = res.rx('results')
print(results.r_repr)
Explanation: According to the previous output there seems to be a name results; that is where the fit values, the estimates, are stored. We can access it in different ways. With plain Python we would need to know the index and access it the usual way (__getitem__()). There is an alternative using the rx method, which lets us access it directly by name:
End of explanation
results = results[0]
results.r_repr
Explanation: It looks like we have a single element:
End of explanation
location, scale, shape = results.rx('par')[0][:]
print(location, scale, shape)
Explanation: We now see that results has an element named par where the GEV parameter estimates obtained with GMLE are stored. Let's finally extract the parameter values:
End of explanation
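# Illustrative aside (assuming SciPy is available): with the three GEV parameters as plain
# Python floats we can, for example, sketch a return-level estimate. NOTE the sign
# convention is an assumption here: scipy.stats.genextreme uses c = -shape relative to the
# usual GEV shape parameter reported by extRemes, so double-check it against your fit.
from scipy import stats
T_return = 50.0                               # hypothetical 50-year return period
p_exceed = 1.0 - 1.0 / T_return
level_50yr = stats.genextreme.ppf(p_exceed, -shape, loc=location, scale=scale)
print(level_50yr)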
%load_ext rpy2.ipython
Explanation: Magic function for R (formerly rmagic)
We use the old rmagic magic function, which is now enabled in the notebook as follows:
End of explanation
help(rpy2.ipython.rmagic.RMagics.R)
Explanation: Let's see how the R magic function works:
End of explanation
%R -i res plot.fevd(res)
Explanation: Sometimes it will be simpler to use the magic function to interact with R. Let's see an example where we pass to R the value obtained from the fevd function of the extRemes R package used earlier and run some code directly in R without having to use ro.r.
End of explanation
ro.globalenv['res'] = res
ro.r("plot.fevd(res)")
Explanation: In the previous code cell I passed the res variable obtained earlier as an input parameter (-i res) so that it is available from R, and I ran pure R code (plot.fevd(res)).
If I want to do the above with rpy2 I can do the following:
<p class="alert alert-info">CAREFUL, the next code cell may cause the notebook to restart and the session to break. If you have made changes to the notebook, save them before running the cell, just in case...</p>
End of explanation
metodos = ["MLE", "GMLE"]
tipos = ["GEV", "Gumbel"]
Explanation: The above blocks the notebook and 'breaks' the session (on Windows, at least) because the graphics window opens externally... Therefore, a good option for working interactively with Python and R together without anything 'breaking' is to use both rpy2 and its extension for the Jupyter notebook (we will gradually stop calling it IPython).
Using Python and R by combining rpy2 and the magic function
Let's combine the two ways of working with rpy2 in the following example:
End of explanation
for t in tipos:
for m in metodos:
print('tipo de ajuste: ', t)
print('método de ajuste: ', m)
res = fevd(max_y.values, method = m, type = t)
if m == "Bayesian":
print(res.rx('results')[0][-1][0:-2])
elif m == "Lmoments":
print(res.rx('results')[0])
else:
print(res.rx('results')[0].rx('par')[0][:])
%R -i res plot.fevd(res)
Explanation: What we are going to do is compute the fit parameters using both the GEV distribution and the Gumbel distribution, which is a special case of the GEV. We compute the fit with both MLE and GMLE. Besides showing the resulting parameter estimates for each fit, we will show the plot of each fit and some goodness-of-fit diagnostics. We use Python for all the loop machinery, rpy2 to obtain the estimates, and the rpy2 magic function to display the resulting plots.
End of explanation |
5,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 4
The greatest theorem never told
This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far.
The Law of Large Numbers
Let $Z_i$ be $N$ independent samples from some probability distribution. According to the Law of Large numbers, so long as the expected value $E[Z]$ is finite, the following holds,
$$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$
In words
Step2: Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how jagged and jumpy the average is initially, then smooths out). All three paths approach the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statistician have another name for flirting
Step3: As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the rate of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but 20 000 more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.
It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variable distributed like $Z$, the rate of converge to $E[Z]$ of the Law of Large Numbers is
$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$
This is useful to know
Step4: What does this all have to do with Bayesian statistics?
Point estimates, to be introduced in the next chapter, in Bayesian inference are computed using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.
When is enough enough? When can you stop drawing samples from the posterior? That is the practitioners decision, and also dependent on the variance of the samples (recall from above a high variance means the average will converge slower).
We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us confidence in how unconfident we should be. The next section deals with this issue.
The Disorder of Small Numbers
The Law of Large Numbers is only valid as $N$ gets infinitely large
Step5: What do we observe? Without accounting for population sizes we run the risk of making an enormous inference error
Step6: Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.
Example
Step8: The above is a classic phenomenon in statistics. I say classic referring to the "shape" of the scatter plot above. It follows a classic triangular form, that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).
I am perhaps overstressing the point and maybe I should have titled the book "You don't have big data problems!", but here again is an example of the trouble with small datasets, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers. Compare with applying the Law without hassle to big datasets (ex. big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are stable, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points to a small dataset can create very different results.
For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript The Most Dangerous Equation.
Example
Step10: For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular comment's upvote/downvote pair.
Step11: Below are the resulting posterior distributions.
Step12: Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty with what the true upvote ratio might be.
Sorting!
We have been ignoring the goal of this exercise
Step13: The best submissions, according to our procedure, are the submissions that are most-likely to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1.
Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. When using the lower-bound of the 95% credible interval, we believe with high certainty that the 'true upvote ratio' is at the very least equal to this value (or greater), thereby ensuring that the best submissions are still on top. Under this ordering, we impose the following very natural properties
Step14: We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern.
Step15: In the graphic above, you can see why sorting by mean would be sub-optimal.
Extension to Starred rating systems
The above procedure works well for upvote-downvotes schemes, but what about systems that use star ratings, e.g. 5 star rating systems. Similar problems apply with simply taking the average
Step16: 2. The following table was located in the paper "Going for Three | Python Code:
%matplotlib inline
import numpy as np
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
figsize(12.5, 5)
import pymc as pm
sample_size = 100000
expected_value = lambda_ = 4.5
poi = pm.rpoisson
N_samples = range(1, sample_size, 100)
for k in range(3):
samples = poi(lambda_, size=sample_size)
partial_average = [samples[:i].mean() for i in N_samples]
plt.plot(N_samples, partial_average, lw=1.5, label="average \
of $n$ samples; seq. %d" % k)
plt.plot(N_samples, expected_value * np.ones_like(partial_average),
ls="--", label="true expected value", c="k")
plt.ylim(4.35, 4.65)
plt.title("Convergence of the average of \n random variables to its \
expected value")
plt.ylabel("average of $n$ samples")
plt.xlabel("# of samples, $n$")
plt.legend();
Explanation: Chapter 4
The greatest theorem never told
This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far.
The Law of Large Numbers
Let $Z_i$ be $N$ independent samples from some probability distribution. According to the Law of Large numbers, so long as the expected value $E[Z]$ is finite, the following holds,
$$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$
In words:
The average of a sequence of random variables from the same distribution converges to the expected value of that distribution.
This may seem like a boring result, but it will be the most useful tool you use.
Intuition
If the above Law is somewhat surprising, it can be made clearer by examining a simple example.
Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. Consider the average:
$$ \frac{1}{N} \sum_{i=1}^N \;Z_i $$
By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values:
\begin{align}
\frac{1}{N} \sum_{i=1}^N \;Z_i
& =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\[5pt]
& = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\[5pt]
& = c_1 \times \text{ (approximate frequency of $c_1$) } \\
& \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\[5pt]
& \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\[5pt]
& = E[Z]
\end{align}
Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for almost any distribution, minus some important cases we will encounter later.
Example
Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables.
We sample sample_size = 100000 Poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average for the first $n$ samples, for $n=1$ to sample_size.
End of explanation
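# A tiny numerical check of the argument above (illustrative only): draw a two-valued
# random variable and compare the sample average with the exact expected value.
# The values c_1, c_2 and p_1 below are made up purely for the sketch.
c_1, c_2, p_1 = 1.0, 10.0, 0.3
draws = np.where(np.random.rand(100000) < p_1, c_1, c_2)
print(draws.mean(), c_1 * p_1 + c_2 * (1 - p_1))   # the two numbers should be close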
figsize(12.5, 4)
N_Y = 250 # use this many to approximate D(N)
N_array = np.arange(1000, 50000, 2500) # use this many samples in the approx. to the variance.
D_N_results = np.zeros(len(N_array))
lambda_ = 4.5
expected_value = lambda_ # for X ~ Poi(lambda) , E[ X ] = lambda
def D_N(n):
This function approx. D_n, the average variance of using n samples.
Z = poi(lambda_, size=(n, N_Y))
average_Z = Z.mean(axis=0)
return np.sqrt(((average_Z - expected_value) ** 2).mean())
for i, n in enumerate(N_array):
D_N_results[i] = D_N(n)
plt.xlabel("$N$")
plt.ylabel("expected squared-distance from true value")
plt.plot(N_array, D_N_results, lw=3,
label="expected distance between\n\
expected value and \naverage of $N$ random variables.")
plt.plot(N_array, np.sqrt(expected_value) / np.sqrt(N_array), lw=2, ls="--",
label=r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$")
plt.legend()
plt.title("How 'fast' is the sample average converging? ");
Explanation: Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how jagged and jumpy the average is initially, then how it smooths out). All three paths approach the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statisticians have another name for flirting: convergence.
Another very relevant question we can ask is how quickly am I converging to the expected value? Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait — compute on average? This is simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity:
$$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$
The above formulae is interpretable as a distance away from the true value (on average), for some $N$. (We take the square root so the dimensions of the above quantity and our random variables are the same). As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them:
$$ Y_k = \left( \;\frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \; \right)^2 $$
By computing the above many, $N_y$, times (remember, it is random), and averaging them:
$$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \rightarrow E[ Y_k ] = E\;\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \right]$$
Finally, taking the square root:
$$ \sqrt{\frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k} \approx D(N) $$
End of explanation
import pymc as pm
N = 10000
print(np.mean([pm.rexponential(0.5) > 10 for i in range(N)]))
Explanation: As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the rate of convergence decreases, that is, we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but 20 000 more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.
It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variables distributed like $Z$, the rate of convergence to $E[Z]$ of the Law of Large Numbers is
$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$
This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainty so what's the statistical point of adding extra precise digits? Though drawing samples can be so computationally cheap that having a larger $N$ is fine too.
How do we compute $Var(Z)$ though?
The variance is simply another expected value that can be approximated! Consider the following, once we have the expected value (by using the Law of Large Numbers to estimate it, denote it $\mu$), we can estimate the variance:
$$ \frac{1}{N}\sum_{i=1}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$
Expected values and probabilities
There is an even less explicit relationship between expected value and estimating probabilities. Define the indicator function
$$\mathbb{1}_A(x) =
\begin{cases} 1 & x \in A \\
0 & else
\end{cases}
$$
Then, by the law of large numbers, if we have many samples $X_i$, we can estimate the probability of an event $A$, denoted $P(A)$, by:
$$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i) \rightarrow E[\mathbb{1}_A(X)] = P(A) $$
Again, this is fairly obvious after a moments thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probabilities using frequencies). For example, suppose we wish to estimate the probability that a $Z \sim Exp(.5)$ is greater than 10, and we have many samples from a $Exp(.5)$ distribution.
$$ P( Z > 10 ) = \frac{1}{N} \sum_{i=1}^N \mathbb{1}_{z > 10 }(Z_i) $$
End of explanation
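# Illustrative cross-check (aside): for an exponential variable the tail probability has a
# closed form, so the indicator-function estimate above can be compared with the exact
# value. The rate 0.5 is taken from the pm.rexponential(0.5) call, which is assumed here
# to be rate-parameterized (mean 2), giving P(Z > 10) = exp(-0.5 * 10).
print(np.exp(-0.5 * 10))   # about 0.0067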
figsize(12.5, 4)
std_height = 15
mean_height = 150
n_counties = 5000
pop_generator = pm.rdiscrete_uniform
norm = pm.rnormal
# generate some artificial population numbers
population = pop_generator(100, 1500, size=n_counties)
average_across_county = np.zeros(n_counties)
for i in range(n_counties):
# generate some individuals and take the mean
average_across_county[i] = norm(mean_height, 1. / std_height ** 2,
size=population[i]).mean()
# located the counties with the apparently most extreme average heights.
i_min = np.argmin(average_across_county)
i_max = np.argmax(average_across_county)
# plot population size vs. recorded average
plt.scatter(population, average_across_county, alpha=0.5, c="#7A68A6")
plt.scatter([population[i_min], population[i_max]],
[average_across_county[i_min], average_across_county[i_max]],
s=60, marker="o", facecolors="none",
edgecolors="#A60628", linewidths=1.5,
label="extreme heights")
plt.xlim(100, 1500)
plt.title("Average height vs. County Population")
plt.xlabel("County Population")
plt.ylabel("Average height in county")
plt.plot([100, 1500], [150, 150], color="k", label="true expected \
height", ls="--")
plt.legend(scatterpoints=1);
Explanation: What does this all have to do with Bayesian statistics?
Point estimates, to be introduced in the next chapter, are computed in Bayesian inference using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.
When is enough enough? When can you stop drawing samples from the posterior? That is the practitioner's decision, and it also depends on the variance of the samples (recall from above that a high variance means the average will converge more slowly).
We also should understand when the Law of Large Numbers fails. As the name implies, and comparing the graphs above for small $N$, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us confidence in how unconfident we should be. The next section deals with this issue.
The Disorder of Small Numbers
The Law of Large Numbers is only valid as $N$ gets infinitely large, a limit that is never truly attained in practice. While the law is a powerful tool, it is foolhardy to apply it liberally. Our next example illustrates this.
Example: Aggregated geographic data
Often data comes in aggregated form. For instance, data may be grouped by state, county, or city level. Of course, the population numbers vary per geographic area. If the data is an average of some characteristic of each of the geographic areas, we must be conscious of the Law of Large Numbers and how it can fail for areas with small populations.
We will observe this on a toy dataset. Suppose there are five thousand counties in our dataset. Furthermore, the population numbers in each county are uniformly distributed between 100 and 1500. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. We are interested in measuring the average height of individuals per county. Unbeknownst to us, height does not vary across county, and each individual, regardless of the county he or she is currently living in, has the same distribution of what their height may be:
$$ \text{height} \sim \text{Normal}(150, 15 ) $$
We aggregate the individuals at the county level, so we only have data for the average in the county. What might our dataset look like?
End of explanation
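# Illustrative aside: the spread of a county's average height shrinks like 1/sqrt(n),
# which is the whole story behind the triangular scatter above. With std_height = 15:
for n_people in (100, 500, 1500):
    print(n_people, 15.0 / np.sqrt(n_people))   # rough expected deviation of the county average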
print("Population sizes of 10 'shortest' counties: ")
print(population[np.argsort(average_across_county)[:10]])
print("\nPopulation sizes of 10 'tallest' counties: ")
print(population[np.argsort(-average_across_county)[:10]])
Explanation: What do we observe? Without accounting for population sizes we run the risk of making an enormous inference error: if we ignored population size, we would say that the counties with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do not necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\mu =150$). The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively.
We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the counties with the most extreme population heights should also be uniformly spread over 100 to 1500, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights.
End of explanation
figsize(12.5, 6.5)
data = np.genfromtxt("./data/census_data.csv", skip_header=1,
delimiter=",")
plt.scatter(data[:, 1], data[:, 0], alpha=0.5, c="#7A68A6")
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 15e3)
plt.ylim(-5, 105)
i_min = np.argmin(data[:, 0])
i_max = np.argmax(data[:, 0])
plt.scatter([data[i_min, 1], data[i_max, 1]],
[data[i_min, 0], data[i_max, 0]],
s=60, marker="o", facecolors="none",
edgecolors="#A60628", linewidths=1.5,
label="most extreme points")
plt.legend(scatterpoints=1);
Explanation: Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.
Example: Kaggle's U.S. Census Return Rate Challenge
Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivalents). The dataset is from a Kaggle machine learning competition some colleagues and I participated in. The objective was to predict the census letter mail-back rate of a group block, measured between 0 and 100, using census variables (median income, number of females in the block-group, number of trailer parks, average number of children etc.). Below we plot the census mail-back rate versus block group population:
End of explanation
# adding a number to the end of the %run call will get the ith top post.
%run top_showerthoughts_submissions.py 2
print("Post contents: \n")
print(top_post)
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
n_submissions = len(votes)
submissions = np.random.randint( n_submissions, size=4)
print("Some Submissions (out of %d total) \n-----------"%n_submissions)
for i in submissions:
print('"' + contents[i] + '"')
print("upvotes/downvotes: ",votes[i,:], "\n")
Explanation: The above is a classic phenomenon in statistics. I say classic referring to the "shape" of the scatter plot above. It follows a classic triangular form, that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).
I am perhaps overstressing the point and maybe I should have titled the book "You don't have big data problems!", but here again is an example of the trouble with small datasets, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers. Compare with applying the Law without hassle to big datasets (ex. big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are stable, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points to a small dataset can create very different results.
For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript The Most Dangerous Equation.
Example: How to order Reddit submissions
You may have disagreed with the original statement that the Law of Large numbers is known to everyone, but only implicitly in our subconscious decision making. Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers? 3 reviewers? We implicitly understand that with such few reviewers that the average rating is not a good reflection of the true value of the product.
This has created flaws in how we sort items, and more generally, how we compare items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, return poor results. Often the seemingly top videos or comments have perfect ratings only from a few enthusiastic fans, and truly more quality videos or comments are hidden in later pages with falsely-substandard ratings of around 4.8. How can we correct this?
Consider the popular site Reddit (I purposefully did not link to the website as you would never come back). The site hosts links to stories or images, and a very popular part of the site are the comments associated with each link. Redditors can vote up or down on each submission (called upvotes and downvotes). Reddit, by default, will sort submissions to a given subreddit by Hot, that is, the submissions that have the most upvotes recently.
<img src="http://i.imgur.com/3v6bz9f.png" />
How would you determine which submissions are the best? There are a number of ways to achieve this:
Popularity: A submission is considered good if it has many upvotes. A problem with this model is illustrated by a submission with hundreds of upvotes but thousands of downvotes: while very popular, such a submission is likely more controversial than best.
Difference: Using the difference of upvotes and downvotes. This solves the above problem, but fails when we consider the temporal nature of submissions. Depending on when a submission is posted, the website may be experiencing high or low traffic. The difference method will bias the Top submissions to be those made during high traffic periods, which have accumulated more upvotes than submissions that were not so graced, but are not necessarily the best.
Time adjusted: Consider using Difference divided by the age of the submission. This creates a rate, something like difference per second, or per minute. An immediate counter-example is, if we use per second, a 1 second old submission with 1 upvote would be better than a 100 second old submission with 99 upvotes. One can avoid this by only considering submissions that are at least t seconds old. But what is a good t value? Does this mean no submission younger than t is good? We end up comparing unstable quantities with stable quantities (young vs. old submissions).
Ratio: Rank submissions by the ratio of upvotes to total number of votes (upvotes plus downvotes). This solves the temporal issue, such that new submissions who score well can be considered Top just as likely as older submissions, provided they have many upvotes to total votes. The problem here is that a submission with a single upvote (ratio = 1.0) will beat a submission with 999 upvotes and 1 downvote (ratio = 0.999), but clearly the latter submission is more likely to be better.
I used the phrase more likely for good reason. It is possible that the former submission, with a single upvote, is in fact a better submission than the latter with 999 upvotes. The hesitation to agree with this is because we have not seen the other 999 potential votes the former submission might get. Perhaps it will achieve an additional 999 upvotes and 0 downvotes and be considered better than the latter, though not likely.
What we really want is an estimate of the true upvote ratio. Note that the true upvote ratio is not the same as the observed upvote ratio: the true upvote ratio is hidden, and we only observe upvotes vs. downvotes (one can think of the true upvote ratio as "what is the underlying probability someone gives this submission an upvote, versus a downvote"). So the 999 upvote/1 downvote submission probably has a true upvote ratio close to 1, which we can assert with confidence thanks to the Law of Large Numbers, but on the other hand we are much less certain about the true upvote ratio of the submission with only a single upvote. Sounds like a Bayesian problem to me.
One way to determine a prior on the upvote ratio is to look at the historical distribution of upvote ratios. This can be accomplished by scraping Reddit's submissions and determining a distribution. There are a few problems with this technique though:
Skewed data: The vast majority of submissions have very few votes, hence there will be many submissions with ratios near the extremes (see the "triangular plot" in the above Kaggle dataset), effectively skewing our distribution to the extremes. One could try to only use submissions with votes greater than some threshold. Again, problems are encountered. There is a tradeoff between number of submissions available to use and a higher threshold with associated ratio precision.
Biased data: Reddit is composed of different subpages, called subreddits. Two examples are r/aww, which posts pics of cute animals, and r/politics. It is very likely that the user behaviour towards submissions of these two subreddits are very different: visitors are likely to be more friendly and affectionate in the former, and would therefore upvote submissions more, compared to the latter, where submissions are likely to be controversial and disagreed upon. Therefore not all submissions are the same.
In light of these, I think it is better to use a Uniform prior.
With our prior in place, we can find the posterior of the true upvote ratio. The Python script top_showerthoughts_submissions.py will scrape the best posts from the showerthoughts community on Reddit. This is a text-only community so the title of each post is the post. Below is the top post as well as some other sample posts:
End of explanation
import pymc as pm
def posterior_upvote_ratio(upvotes, downvotes, samples=20000):
    """
    This function accepts the number of upvotes and downvotes a particular submission received,
    and the number of posterior samples to return to the user. Assumes a uniform prior.
    """
    N = upvotes + downvotes
    upvote_ratio = pm.Uniform("upvote_ratio", 0, 1)
    observations = pm.Binomial("obs", N, upvote_ratio, value=upvotes, observed=True)
    # do the fitting; first do a MAP as it is cheap and useful.
    map_ = pm.MAP([upvote_ratio, observations]).fit()
    mcmc = pm.MCMC([upvote_ratio, observations])
    mcmc.sample(samples, samples // 4)  # burn-in: discard the first quarter of the samples
    return mcmc.trace("upvote_ratio")[:]
Explanation: For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular comment's upvote/downvote pair.
End of explanation
figsize(11., 8)
posteriors = []
colours = ["#348ABD", "#A60628", "#7A68A6", "#467821", "#CF4457"]
for i in range(len(submissions)):
j = submissions[i]
posteriors.append(posterior_upvote_ratio(votes[j, 0], votes[j, 1]))
plt.hist(posteriors[i], bins=18, normed=True, alpha=.9,
histtype="step", color=colours[i % 5], lw=3,
label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50]))
plt.hist(posteriors[i], bins=18, normed=True, alpha=.2,
histtype="stepfilled", color=colours[i], lw=3, )
plt.legend(loc="upper left")
plt.xlim(0, 1)
plt.title("Posterior distributions of upvote ratios on different submissions");
Explanation: Below are the resulting posterior distributions.
End of explanation
N = posteriors[0].shape[0]
lower_limits = []
for i in range(len(submissions)):
j = submissions[i]
plt.hist(posteriors[i], bins=20, normed=True, alpha=.9,
histtype="step", color=colours[i], lw=3,
label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50]))
plt.hist(posteriors[i], bins=20, normed=True, alpha=.2,
histtype="stepfilled", color=colours[i], lw=3, )
v = np.sort(posteriors[i])[int(0.05 * N)]
# plt.vlines( v, 0, 15 , color = "k", alpha = 1, linewidths=3 )
plt.vlines(v, 0, 10, color=colours[i], linestyles="--", linewidths=3)
lower_limits.append(v)
plt.legend(loc="upper left")
plt.legend(loc="upper left")
plt.title("Posterior distributions of upvote ratios on different submissions");
order = np.argsort(-np.array(lower_limits))
print(order, lower_limits)
Explanation: Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty with what the true upvote ratio might be.
Sorting!
We have been ignoring the goal of this exercise: how do we sort the submissions from best to worst? Of course, we cannot sort distributions, we must sort scalar numbers. There are many ways to distill a distribution down to a scalar: expressing the distribution through its expected value, or mean, is one way. Choosing the mean is a bad choice though. This is because the mean does not take into account the uncertainty of distributions.
I suggest using the 95% least plausible value, defined as the value such that there is only a 5% chance the true parameter is lower (think of the lower bound on the 95% credible region). Below are the posterior distributions with the 95% least-plausible value plotted:
End of explanation
def intervals(u, d):
a = 1. + u
b = 1. + d
mu = a / (a + b)
std_err = 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.)))
return (mu, std_err)
print("Approximate lower bounds:")
posterior_mean, std_err = intervals(votes[:, 0], votes[:, 1])
lb = posterior_mean - std_err
print(lb)
print("\n")
print("Top 40 Sorted according to approximate lower bounds:")
print("\n")
order = np.argsort(-lb)
ordered_contents = []
for i in order[:40]:
ordered_contents.append(contents[i])
print(votes[i, 0], votes[i, 1], contents[i])
print("-------------")
Explanation: The best submissions, according to our procedure, are the submissions that are most-likely to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1.
Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. When using the lower-bound of the 95% credible interval, we believe with high certainty that the 'true upvote ratio' is at the very least equal to this value (or greater), thereby ensuring that the best submissions are still on top. Under this ordering, we impose the following very natural properties:
given two submissions with the same observed upvote ratio, we will assign the submission with more votes as better (since we are more confident it has a higher ratio).
given two submissions with the same number of votes, we still assign the submission with more upvotes as better.
But this is too slow for real-time!
I agree, computing the posterior of every submission takes a long time, and by the time you have computed it, likely the data has changed. I delay the mathematics to the appendix, but I suggest using the following formula to compute the lower bound very fast.
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + u \\
& b = 1 + d \\
\end{align}
$u$ is the number of upvotes, and $d$ is the number of downvotes. The formula is a shortcut in Bayesian inference, which will be further explained in Chapter 6 when we discuss priors in more detail.
End of explanation
r_order = order[::-1][-40:]
plt.errorbar(posterior_mean[r_order], np.arange(len(r_order)),
xerr=std_err[r_order], capsize=0, fmt="o",
color="#7A68A6")
plt.xlim(0.3, 1)
plt.yticks(np.arange(len(r_order) - 1, -1, -1), map(lambda x: x[:30].replace("\n", ""), ordered_contents));
Explanation: We can view the ordering visually by plotting the posterior mean and bounds, and sorting by the lower bound. In the plot below, notice that the left error-bar is sorted (as we suggested this is the best way to determine an ordering), so the means, indicated by dots, do not follow any strong pattern.
End of explanation
# Enter code here
import scipy.stats as stats
exp = stats.expon(scale=4)
N = int(1e5)
X = exp.rvs(N)
# ...
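# A possible sketch of the estimate (hedged; simple Monte Carlo with the samples drawn above).
# The conditional expectation only uses the samples with X < 1, so fewer effective samples
# remain and more draws are needed for the same accuracy.
print(np.cos(X).mean())         # estimates E[cos(X)]
print(np.cos(X[X < 1]).mean())  # estimates E[cos(X) | X < 1]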
Explanation: In the graphic above, you can see why sorting by mean would be sub-optimal.
Extension to Starred rating systems
The above procedure works well for upvote-downvote schemes, but what about systems that use star ratings, e.g. 5-star rating systems? Similar problems apply with simply taking the average: an item with two perfect ratings would beat an item with thousands of perfect ratings plus a single sub-perfect rating.
We can consider the upvote-downvote problem above as binary: 0 is a downvote, 1 is an upvote. A $N$-star rating system can be seen as a more continuous version of the above, and we can say that awarding $n$ stars is equivalent to a rating of $\frac{n}{N}$. For example, in a 5-star system, a 2-star rating corresponds to 0.4. A perfect rating is a 1. We can use the same formula as before, but with $a,b$ defined differently:
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + S \\
& b = 1 + N - S \\
\end{align}
where $N$ is the number of users who rated, and $S$ is the sum of all the ratings, under the equivalence scheme mentioned above.
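A minimal sketch of this star-rating lower bound (the function name and the example ratings are invented for illustration):
def star_lower_bound(ratings, n_stars=5):
    ratings = np.asarray(ratings, dtype=float) / n_stars  # map n stars to the equivalent rating n/N
    N = ratings.shape[0]
    S = ratings.sum()
    a = 1. + S
    b = 1. + N - S
    mu = a / (a + b)
    std_err = 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.)))
    return mu - std_err
print(star_lower_bound([5, 5]))            # two perfect ratings
print(star_lower_bound([5] * 1000 + [4]))  # a thousand perfect ratings plus one 4-star rating scores higher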
Example: Counting Github stars
What is the average number of stars a Github repository has? How would you calculate this? There are over 6 million repositories, so there is more than enough data to invoke the Law of Large numbers. Let's start pulling some data. TODO
Conclusion
While the Law of Large Numbers is cool, it is only true so much as its name implies: with large sample sizes only. We have seen how our inference can be affected by not considering how the data is shaped.
By (cheaply) drawing many samples from the posterior distributions, we can ensure that the Law of Large Numbers applies as we approximate expected values (which we will do in the next chapter).
Bayesian inference understands that with small sample sizes, we can observe wild randomness. Our posterior distribution will reflect this by being more spread rather than tightly concentrated. Thus, our inference should be correctable.
There are major implications of not considering the sample size, and trying to sort objects that are unstable leads to pathological orderings. The method provided above solves this problem.
Appendix
Derivation of sorting comments formula
Basically what we are doing is using a Beta prior (with parameters $a=1, b=1$, which is a uniform distribution), and using a Binomial likelihood with observations $u, N = u+d$. This means our posterior is a Beta distribution with parameters $a' = 1 + u, b' = 1 + (N - u) = 1+d$. We then need to find the value, $x$, such that 0.05 probability is less than $x$. This is usually done by inverting the CDF (Cumulative Distribution Function), but the CDF of the beta, for integer parameters, is known but is a large sum [3].
We instead use a Normal approximation. The mean of the Beta is $\mu = a'/(a'+b')$ and the variance is
$$\sigma^2 = \frac{a'b'}{ (a' + b')^2(a'+b'+1) }$$
Hence we solve the following equation for $x$ and have an approximate lower bound.
$$ 0.05 = \Phi\left( \frac{(x - \mu)}{\sigma}\right) $$
where $\Phi$ is the cumulative distribution function of the standard normal distribution.
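As a quick sanity check (a sketch using scipy, not part of the original derivation), the normal approximation can be compared against the exact Beta quantile:
import scipy.stats as stats
u, d = 20, 5  # invented vote counts for illustration
a, b = 1 + u, 1 + d
exact = stats.beta.ppf(0.05, a, b)  # exact 5% quantile of the Beta posterior
mu = a / (a + b)
sigma = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
approx = mu + sigma * stats.norm.ppf(0.05)  # solves 0.05 = Phi((x - mu)/sigma)
print(exact, approx)  # the two values should be close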
Exercises
1. How would you estimate the quantity $E\left[ \cos{X} \right]$, where $X \sim \text{Exp}(4)$? What about $E\left[ \cos{X} | X \lt 1\right]$, i.e. the expected value given we know $X$ is less than 1? Would you need more samples than the original samples size to be equally accurate?
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: 2. The following table was located in the paper "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression" [2]. The table ranks football field-goal kickers by their percent of non-misses. What mistake have the researchers made?
Kicker Careers Ranked by Make Percentage
<table><tbody><tr><th>Rank </th><th>Kicker </th><th>Make % </th><th>Number of Kicks</th></tr><tr><td>1 </td><td>Garrett Hartley </td><td>87.7 </td><td>57</td></tr><tr><td>2</td><td> Matt Stover </td><td>86.8 </td><td>335</td></tr><tr><td>3 </td><td>Robbie Gould </td><td>86.2 </td><td>224</td></tr><tr><td>4 </td><td>Rob Bironas </td><td>86.1 </td><td>223</td></tr><tr><td>5</td><td> Shayne Graham </td><td>85.4 </td><td>254</td></tr><tr><td>… </td><td>… </td><td>…</td><td> </td></tr><tr><td>51</td><td> Dave Rayner </td><td>72.2 </td><td>90</td></tr><tr><td>52</td><td> Nick Novak </td><td>71.9 </td><td>64</td></tr><tr><td>53 </td><td>Tim Seder </td><td>71.0 </td><td>62</td></tr><tr><td>54 </td><td>Jose Cortez </td><td>70.7</td><td> 75</td></tr><tr><td>55 </td><td>Wade Richey </td><td>66.1</td><td> 56</td></tr></tbody></table>
In August 2013, a popular post on the average income per programmer of different languages was trending. Here's the summary chart: (reproduced without permission, cause when you lie with stats, you gunna get the hammer). What do you notice about the extremes?
Average household income by programming language
<table >
<tr><td>Language</td><td>Average Household Income ($)</td><td>Data Points</td></tr>
<tr><td>Puppet</td><td>87,589.29</td><td>112</td></tr>
<tr><td>Haskell</td><td>89,973.82</td><td>191</td></tr>
<tr><td>PHP</td><td>94,031.19</td><td>978</td></tr>
<tr><td>CoffeeScript</td><td>94,890.80</td><td>435</td></tr>
<tr><td>VimL</td><td>94,967.11</td><td>532</td></tr>
<tr><td>Shell</td><td>96,930.54</td><td>979</td></tr>
<tr><td>Lua</td><td>96,930.69</td><td>101</td></tr>
<tr><td>Erlang</td><td>97,306.55</td><td>168</td></tr>
<tr><td>Clojure</td><td>97,500.00</td><td>269</td></tr>
<tr><td>Python</td><td>97,578.87</td><td>2314</td></tr>
<tr><td>JavaScript</td><td>97,598.75</td><td>3443</td></tr>
<tr><td>Emacs Lisp</td><td>97,774.65</td><td>355</td></tr>
<tr><td>C#</td><td>97,823.31</td><td>665</td></tr>
<tr><td>Ruby</td><td>98,238.74</td><td>3242</td></tr>
<tr><td>C++</td><td>99,147.93</td><td>845</td></tr>
<tr><td>CSS</td><td>99,881.40</td><td>527</td></tr>
<tr><td>Perl</td><td>100,295.45</td><td>990</td></tr>
<tr><td>C</td><td>100,766.51</td><td>2120</td></tr>
<tr><td>Go</td><td>101,158.01</td><td>231</td></tr>
<tr><td>Scala</td><td>101,460.91</td><td>243</td></tr>
<tr><td>ColdFusion</td><td>101,536.70</td><td>109</td></tr>
<tr><td>Objective-C</td><td>101,801.60</td><td>562</td></tr>
<tr><td>Groovy</td><td>102,650.86</td><td>116</td></tr>
<tr><td>Java</td><td>103,179.39</td><td>1402</td></tr>
<tr><td>XSLT</td><td>106,199.19</td><td>123</td></tr>
<tr><td>ActionScript</td><td>108,119.47</td><td>113</td></tr>
</table>
References
Wainer, Howard. The Most Dangerous Equation. American Scientist, Volume 95.
Clark, Torin K., Aaron W. Johnson, and Alexander J. Stimpson. "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression." (2013): n. page. Web. 20 Feb. 2013.
http://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function
End of explanation |
5,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read in the data..
Step1: The columns are the instances and rows the features so we need to transpose the dataset.
Step2: Read in the labels...
Step3: We are using the OAC labeling...
Step4: Join the data and labels on the FileName, remove any null rows and create a label column with 0s and 1s.
Step5: Drop the FileName and OAC columns and export as a CSV file. | Python Code:
data = pd.read_csv('/Users/Frankie/Documents/Dissertation/Data/pancreatic/24hProbeExpressionValues.csv')
data[:5]
Explanation: Read in the data..
End of explanation
data = data.T
Explanation: The columns are the instances and rows the features so we need to transpose the dataset.
End of explanation
label = pd.read_csv('/Users/Frankie/Documents/Dissertation/Data/pancreatic/24hTargets.csv')
label[:5]
Explanation: Read in the labels...
End of explanation
label = label[['FileName', 'OAC']]
label[:5]
Explanation: We are using the OAC labeling...
End of explanation
joined_tables = label.join(data, on='FileName', how = 'outer')
joined_tables = joined_tables[pd.notnull(joined_tables['Probe1'])]
joined_tables['label'] = np.where(joined_tables['OAC']=='Mild', 0, 1)
joined_tables[:5]
Explanation: Join the data and labels on the FileName, remove any null rows and create a label column with 0s and 1s.
End of explanation
joined_tables.drop(['FileName','OAC'], axis=1).to_csv("/Users/Frankie/Desktop/pancreatic.csv",index=False)
Explanation: Drop the FileName and OAC columns and export as a CSV file.
End of explanation |
5,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to NumPy
Topics
Basic Syntax
creating vectors and matrices
special
Step1: This code sets up Ipython Notebook environments (lines beginning with %), and loads several libraries and functions. The core scientific stack in python consists of a number of free libraries. The ones I have loaded above include
Step2: Vectors and Lists
The numpy library (we will reference it by np) is the workhorse library for linear algebra in python. To creat a vector simply surround a python list ($[1,2,3]$) with the np.array function
Step3: We could have done this by defining a python list and converting it to an array
Step4: Matrix Addition and Subtraction
Adding or subtracting a scalar value to a matrix
To learn the basics, consider a small matrix of dimension $2 \times 2$, where $2 \times 2$ denotes the number of rows $\times$ the number of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Consider adding a scalar value (e.g. 3) to the A.
$$
\begin{equation}
A+3=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}+3
=\begin{bmatrix}
a_{11}+3 & a_{12}+3 \
a_{21}+3 & a_{22}+3
\end{bmatrix}
\end{equation}
$$
The same basic principle holds true for A-3
Step5: Adding or subtracting two matrices
Consider two small $2 \times 2$ matrices, where $2 \times 2$ denotes the # of rows $\times$ the # of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$ and $B$=$\bigl( \begin{smallmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{smallmatrix} \bigr)$. To find the result of $A-B$, simply subtract each element of A with the corresponding element of B
Step6: Matrix Multiplication
Multiplying a scalar value times a matrix
As before, let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Suppose we want to multiply A times a scalar value (e.g. $3 \times A$)
$$
\begin{equation}
3 \times A = 3 \times \begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}
=
\begin{bmatrix}
3a_{11} & 3a_{12} \
3a_{21} & 3a_{22}
\end{bmatrix}
\end{equation}
$$
is of dimension (2,2). Scalar multiplication is commutative, so that $3 \times A$=$A \times 3$. Notice that the product is defined for a matrix A of any dimension.
Similar to scalar addition and subtration, the code is simple
Step7: Multiplying two matricies
Now, consider the $2 \times 1$ vector $C=\bigl( \begin{smallmatrix} c_{11} \
c_{21}
\end{smallmatrix} \bigr)$
Consider multiplying matrix $A_{2 \times 2}$ and the vector $C_{2 \times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row and column dimensions, but rather on the column dimensions of the first operand and the row dimensions of the second operand. We can write this operation as follows
$$
\begin{equation}
A_{2 \times 2} \times C_{2 \times 1} =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}{2 \times 2}
\times
\begin{bmatrix}
c{11} \
c_{21}
\end{bmatrix}{2 \times 1}
=
\begin{bmatrix}
a{11}c_{11} + a_{12}c_{21} \
a_{21}c_{11} + a_{22}c_{21}
\end{bmatrix}_{2 \times 1}
\end{equation}
$$
Alternatively, consider a matrix C of dimension $2 \times 3$ and a matrix A of dimension $3 \times 2$
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}{3 \times 2}
,
C{2 \times 3} =
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23} \
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
Here, A $\times$ C is
$$
\begin{align}
A_{3 \times 2} \times C_{2 \times 3}=&
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}{3 \times 2}
\times
\begin{bmatrix}
c{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23}
\end{bmatrix}{2 \times 3} \
=&
\begin{bmatrix}
a{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \
a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \
a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}
\end{bmatrix}_{3 \times 3}
\end{align}
$$
So in general, $X_{r_x \times c_x} \times Y_{r_y \times c_y}$ we have two important things to remember
Step8: We will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result
Step9: Matrix Division
The term matrix division is actually a misnomer. To divide in a matrix algebra world we first need to invert the matrix. It is useful to consider the analog case in a scalar work. Suppose we want to divide the $f$ by $g$. We could do this in two different ways
Step10: Check that $C\times C^{-1} = I$
Step11: Transposing a Matrix
At times it is useful to pivot a matrix for conformability- that is in order to matrix divide or multiply, we need to switch the rows and column dimensions of matrices. Consider the matrix
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}{3 \times 2}
\end{equation}
$$
The transpose of A (denoted as $A^{\prime}$) is
$$
\begin{equation}
A^{\prime}=\begin{bmatrix}
a{11} & a_{21} & a_{31} \
a_{12} & a_{22} & a_{32} \
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
Step12: One important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \times M$ and let B of of dimension $M \times P$. Then
$$
\begin{equation}
(AB)^{\prime}=B^{\prime}A^{\prime}
\end{equation}
$$
For more information, see this http
Step13: Mechanics
Indexing and Slicing
examples from https
Step14: Logic, Comparison
Step15: Concatenate, Reshape
Numpy load, save data files
Similarity
Step16: Example
Step17: Let's add the bias, i.e. a column of $1$s to the explanatory variables
Step18: Closed-form Linear Regression
And compute the parametes $\beta_0$ and $\beta_1$ according to
$$ \beta = (X^\prime X)^{-1} X^\prime y $$
Note
Step19: Multiple Linear Regression
Step20: Evaluation
Step21: Regularization, Ridge-Regression
Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
In general, a regularization term $R(f)$ is introduced to a general loss function | Python Code:
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sbn
##from scipy import *
Explanation: Introduction to NumPy
Topics
Basic Syntax
creating vectors and matrices
special: ones, zeros, identity eye
add, product, inverse
Mechanics: indexing, slicing, concatenating, reshape, zip
Numpy load, save data files
Random numbers $\rightarrow$ distributions
Similarity: Euclidean vs Cosine
Example Nearest Neighbor search
Example Linear Regression
Basics
this section uses content created by Rob Hicks http://rlhick.people.wm.edu/stories/linear-algebra-python-basics.html
Loading libraries
The python universe has a huge number of libraries that extend the capabilities of python. Nearly all of these are open source, unlike packages like stata or matlab where some key libraries are proprietary (and can cost lots of money). In lots of my code, you will see this at the top:
End of explanation
x = .5
print x
Explanation: This code sets up Ipython Notebook environments (lines beginning with %), and loads several libraries and functions. The core scientific stack in python consists of a number of free libraries. The ones I have loaded above include:
sympy: provides for symbolic computation (solving algebra problems)
numpy: provides for linear algebra computations
matplotlib.pyplot: provides for the ability to graph functions and draw figures
scipy: scientific python provides a plethora of capabilities
seaborn: makes matplotlib figures even prettier (another library like this is called bokeh). This is entirely optional and is purely for eye candy.
Creating arrays, scalars, and matrices in Python
Scalars can be created easily like this:
End of explanation
x_vector = np.array([1,2,3])
print x_vector
Explanation: Vectors and Lists
The numpy library (we will reference it by np) is the workhorse library for linear algebra in python. To create a vector simply surround a python list ($[1,2,3]$) with the np.array function:
End of explanation
c_list = [1,2]
print "The list:",c_list
print "Has length:", len(c_list)
c_vector = np.array(c_list)
print "The vector:", c_vector
print "Has shape:",c_vector.shape
z = [5,6]
print "This is a list, not an array:",z
print type(z)
Explanation: We could have done this by defining a python list and converting it to an array:
End of explanation
A = np.random.randn(2,2)  # assumption: A was defined earlier in the source notebook; any 2x2 array works for this demo
result = A + 3
#or
result = 3 + A
print result
Explanation: Matrix Addition and Subtraction
Adding or subtracting a scalar value to a matrix
To learn the basics, consider a small matrix of dimension $2 \times 2$, where $2 \times 2$ denotes the number of rows $\times$ the number of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Consider adding a scalar value (e.g. 3) to the A.
$$
\begin{equation}
A+3=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}+3
=\begin{bmatrix}
a_{11}+3 & a_{12}+3 \
a_{21}+3 & a_{22}+3
\end{bmatrix}
\end{equation}
$$
The same basic principle holds true for A-3:
$$
\begin{equation}
A-3=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}-3
=\begin{bmatrix}
a_{11}-3 & a_{12}-3 \
a_{21}-3 & a_{22}-3
\end{bmatrix}
\end{equation}
$$
Notice that we add (or subtract) the scalar value to each element in the matrix A. A can be of any dimension.
This is trivial to implement, now that we have defined our matrix A:
End of explanation
B = np.random.randn(2,2)
print B
Explanation: Adding or subtracting two matrices
Consider two small $2 \times 2$ matrices, where $2 \times 2$ denotes the # of rows $\times$ the # of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$ and $B$=$\bigl( \begin{smallmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{smallmatrix} \bigr)$. To find the result of $A-B$, simply subtract each element of A with the corresponding element of B:
$$
\begin{equation}
A -B =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix} -
\begin{bmatrix} b_{11} & b_{12} \
b_{21} & b_{22}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}-b_{11} & a_{12}-b_{12} \
a_{21}-b_{21} & a_{22}-b_{22}
\end{bmatrix}
\end{equation}
$$
Addition works exactly the same way:
$$
\begin{equation}
A + B =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix} +
\begin{bmatrix} b_{11} & b_{12} \
b_{21} & b_{22}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} \
a_{21}+b_{21} & a_{22}+b_{22}
\end{bmatrix}
\end{equation}
$$
An important point to know about matrix addition and subtraction is that it is only defined when $A$ and $B$ are of the same size. Here, both are $2 \times 2$. Since operations are performed element by element, these two matrices must be conformable- and for addition and subtraction that means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices for checking conformability as I write the equations, so write
$$
A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} \
a_{21}+b_{21} & a_{22}+b_{22}
\end{bmatrix}_{2 \times 2}
$$
Notice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands.
Let's define another matrix, B, that is also $2 \times 2$ and add it to A:
End of explanation
A * 3
Explanation: Matrix Multiplication
Multiplying a scalar value times a matrix
As before, let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Suppose we want to multiply A times a scalar value (e.g. $3 \times A$)
$$
\begin{equation}
3 \times A = 3 \times \begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}
=
\begin{bmatrix}
3a_{11} & 3a_{12} \
3a_{21} & 3a_{22}
\end{bmatrix}
\end{equation}
$$
is of dimension (2,2). Scalar multiplication is commutative, so that $3 \times A$=$A \times 3$. Notice that the product is defined for a matrix A of any dimension.
Similar to scalar addition and subtration, the code is simple:
End of explanation
# Let's redefine A and C to demonstrate matrix multiplication:
A = np.arange(6).reshape((3,2))
C = np.random.randn(2,2)
print A.shape
print C.shape
Explanation: Multiplying two matrices
Now, consider the $2 \times 1$ vector $C=\bigl( \begin{smallmatrix} c_{11} \
c_{21}
\end{smallmatrix} \bigr)$
Consider multiplying matrix $A_{2 \times 2}$ and the vector $C_{2 \times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row and column dimensions, but rather on the column dimensions of the first operand and the row dimensions of the second operand. We can write this operation as follows
$$
\begin{equation}
A_{2 \times 2} \times C_{2 \times 1} =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}{2 \times 2}
\times
\begin{bmatrix}
c{11} \
c_{21}
\end{bmatrix}{2 \times 1}
=
\begin{bmatrix}
a{11}c_{11} + a_{12}c_{21} \
a_{21}c_{11} + a_{22}c_{21}
\end{bmatrix}_{2 \times 1}
\end{equation}
$$
Alternatively, consider a matrix C of dimension $2 \times 3$ and a matrix A of dimension $3 \times 2$
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}{3 \times 2}
,
C{2 \times 3} =
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23} \
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
Here, A $\times$ C is
$$
\begin{align}
A_{3 \times 2} \times C_{2 \times 3}=&
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}{3 \times 2}
\times
\begin{bmatrix}
c{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23}
\end{bmatrix}{2 \times 3} \
=&
\begin{bmatrix}
a{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \
a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \
a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}
\end{bmatrix}_{3 \times 3}
\end{align}
$$
So in general, $X_{r_x \times c_x} \times Y_{r_y \times c_y}$ we have two important things to remember:
For conformability in matrix multiplication, $c_x=r_y$, or the columns in the first operand must be equal to the rows of the second operand.
The result will be of dimension $r_x \times c_y$, or of dimensions equal to the rows of the first operand and columns equal to columns of the second operand.
Given these facts, you should convince yourself that matrix multiplication is not generally commutative, that the relationship $X \times Y = Y \times X$ does not hold in all cases.
For this reason, we will always be very explicit about whether we are pre multiplying ($X \times Y$) or post multiplying ($Y \times X$) the vectors/matrices $X$ and $Y$.
For more information on this topic, see this
http://en.wikipedia.org/wiki/Matrix_multiplication.
End of explanation
print A.dot(C)
print np.dot(A,C)
# What would happen to
C.dot(A)
Explanation: We will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result:
End of explanation
# note, we need a square matrix (# rows = # cols), use C:
C_inverse = np.linalg.inv(C)
print C_inverse
Explanation: Matrix Division
The term matrix division is actually a misnomer. To divide in a matrix algebra world we first need to invert the matrix. It is useful to consider the analog case in a scalar world. Suppose we want to divide $f$ by $g$. We could do this in two different ways:
$$
\begin{equation}
\frac{f}{g}=f \times g^{-1}.
\end{equation}
$$
In a scalar setting, these are equivalent ways of solving the division problem. The second one requires two steps: first we invert $g$ and then we multiply $f$ times $g^{-1}$. In a matrix world, we need to think about this second approach. First we have to invert the matrix g and then we will need to pre or post multiply depending on the exact situation we encounter (this is intended to be vague for now).
Inverting a Matrix
As before, consider the square $2 \times 2$ matrix $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22}\end{smallmatrix} \bigr)$. Let the inverse of matrix A (denoted as $A^{-1}$) be
$$
\begin{equation}
A^{-1}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}^{-1}=\frac{1}{a_{11}a_{22}-a_{12}a_{21}} \begin{bmatrix}
a_{22} & -a_{12} \
-a_{21} & a_{11}
\end{bmatrix}
\end{equation}
$$
The inverted matrix $A^{-1}$ has a useful property:
$$
\begin{equation}
A \times A^{-1}=A^{-1} \times A=I
\end{equation}
$$
where I, the identity matrix (the matrix equivalent of the scalar value 1), is
$$
\begin{equation}
I_{2 \times 2}=\begin{bmatrix}
1 & 0 \
0 & 1
\end{bmatrix}
\end{equation}
$$
furthermore, $A \times I = A$ and $I \times A = A$.
An important feature about matrix inversion is that it is undefined if (in the $2 \times 2$ case), $a_{11}a_{22}-a_{12}a_{21}=0$. If this relationship is equal to zero the inverse of A does not exist. If this term is very close to zero, an inverse may exist but $A^{-1}$ may be poorly conditioned meaning it is prone to rounding error and is likely not well identified computationally. The term $a_{11}a_{22}-a_{12}a_{21}$ is the determinant of matrix A, and for square matrices of size greater than $2 \times 2$, if equal to zero indicates that you have a problem with your data matrix (columns are linearly dependent on other columns). The inverse of matrix A exists if A is square and is of full rank (ie. the columns of A are not linear combinations of other columns of A).
For more information on this topic, see this
http://en.wikipedia.org/wiki/Matrix_inversion, for example, on inverting matrices.
End of explanation
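The determinant mentioned above can be checked directly (a quick illustrative sketch):
print np.linalg.det(C)   # non-zero, so C is invertible
print np.linalg.det(C) * np.linalg.det(C_inverse)   # det(C) * det(C inverse) = 1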
print C.dot(C_inverse)
print "Is identical to:"
print C_inverse.dot(C)
Explanation: Check that $C\times C^{-1} = I$:
End of explanation
A = np.arange(6).reshape((3,2))
B = np.arange(8).reshape((2,4))
print "A is"
print A
print "The Transpose of A is"
print A.T
Explanation: Transposing a Matrix
At times it is useful to pivot a matrix for conformability- that is in order to matrix divide or multiply, we need to switch the rows and column dimensions of matrices. Consider the matrix
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}{3 \times 2}
\end{equation}
$$
The transpose of A (denoted as $A^{\prime}$) is
$$
\begin{equation}
A^{\prime}=\begin{bmatrix}
a_{11} & a_{21} & a_{31} \\
a_{12} & a_{22} & a_{32} \\
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
End of explanation
print B.T.dot(A.T)
print "Is identical to:"
print (A.dot(B)).T
Explanation: One important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \times M$ and let B of of dimension $M \times P$. Then
$$
\begin{equation}
(AB)^{\prime}=B^{\prime}A^{\prime}
\end{equation}
$$
For more information, see this http://en.wikipedia.org/wiki/Matrix_transposition on matrix transposition. This is also easy to implement:
End of explanation
a = np.arange(10)
s = slice(2,7,2)
print a[s]
a = np.arange(10)
b = a[2:7:2]
print b
a = np.arange(10)
b = a[5]
print b
a = np.arange(10)
print a[2:]
import numpy as np
a = np.arange(10)
print a[2:5]
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print a
# slice items starting from index
print 'Now we will slice the array from the index a[1:]'
print a[1:]
# array to begin with
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print 'Our array is:'
print a
print '\n'
# this returns array of items in the second column
print 'The items in the second column are:'
print a[...,1]
print '\n'
# Now we will slice all items from the second row
print 'The items in the second row are:'
print a[1,...]
print '\n'
# Now we will slice all items from column 1 onwards
print 'The items column 1 onwards are:'
print a[...,1:]
Explanation: Mechanics
Indexing and Slicing
examples from https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
End of explanation
A = np.random.rand(5,5)*10
print A[:,1]>4
A[A[:,1]>4]
Explanation: Logic, Comparison
End of explanation
### Pure iterative Python ###
points = [[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]]
qPoint = [4,5,3]
minIdx = -1
minDist = -1
for idx, point in enumerate(points): # iterate over all points
dist = sum([(dp-dq)**2 for dp,dq in zip(point,qPoint)])**0.5 # compute the euclidean distance for each point to q
if dist < minDist or minDist < 0: # if necessary, update minimum distance and index of the corresponding point
minDist = dist
minIdx = idx
print 'Nearest point to q: ', points[minIdx]
# # # Equivalent NumPy vectorization # # #
import numpy as np
points = np.array([[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]])
qPoint = np.array([4,5,3])
minIdx = np.argmin(np.linalg.norm(points-qPoint,axis=1)) # compute all euclidean distances at once and return the index of the smallest one
print 'Nearest point to q: ', points[minIdx]
Explanation: Concatenate, Reshape
Numpy load, save data files
Similarity: Euclidean vs Cosine (see the short sketch after this cell)
Example Nearest Neighbor search
Nearest Neighbor search is a common technique in Machine Learning
End of explanation
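The remaining topics listed above (concatenate, reshape, saving and loading arrays, and Euclidean vs. cosine similarity) can be sketched briefly; the file name below is made up for illustration:
a = np.arange(6)
b = a.reshape(2, 3)                  # reshape into 2 rows x 3 columns
c = np.concatenate([b, b], axis=0)   # stack along rows -> shape (4, 3)
np.save('demo.npy', c)               # save one array to disk
c2 = np.load('demo.npy')             # ... and load it back
print np.allclose(c, c2)
u = np.array([1., 2., 3.])
v = np.array([2., 4., 6.])
print np.linalg.norm(u - v)                                # Euclidean distance
print u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))   # cosine similarity: 1.0, same direction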
n = 100 # number of samples
Xr = np.random.rand(n)*99.0
y = -7.3 + 2.5*Xr + np.random.randn(n)*27.0
plt.plot(Xr, y, "o", alpha=0.5)
Explanation: Example: Linear Regression
Linear regression is an approach for modeling the relationship between a scalar dependent variable $y$ and one or more explanatory variables (or independent variables) denoted $X$. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.$^1$ (This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.$^2$
We assume that the equation
$y_i = \beta_0 + \beta_1 X_i + \epsilon_i$ where $\epsilon_i \approx N(0, \sigma^2)$
$^1$ David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient
$^2$ Rencher, Alvin C.; Christensen, William F. (2012), "Chapter 10, Multivariate regression – Section 10.1, Introduction", Methods of Multivariate Analysis, Wiley Series in Probability and Statistics, 709 (3rd ed.), John Wiley & Sons, p. 19, ISBN 9781118391679.
End of explanation
X = np.vstack((np.ones(n), Xr)).T
print X.shape
X[0:10,:]
Explanation: Let's add the bias, i.e. a column of $1$s to the explanatory variables
End of explanation
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
yhat = X.dot(beta)
yhat.shape
plt.plot(X[:,1], y, "o", alpha=0.5)
plt.plot(X[:,1], yhat, "-", alpha=1, color="red")
Explanation: Closed-form Linear Regression
And compute the parameters $\beta_0$ and $\beta_1$ according to
$$ \beta = (X^\prime X)^{-1} X^\prime y $$
Note:
This not only looks elegant but can also be written directly in NumPy code. However, matrix inversion $M^{-1}$ requires $O(d^3)$ operations for a $d\times d$ matrix.<br />
https://www.coursera.org/learn/ml-regression/lecture/jOVX8/discussing-the-closed-form-solution
End of explanation
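Since explicitly forming $(X^\prime X)^{-1}$ can be numerically wasteful, an alternative (shown only as a sketch) is to let numpy solve the least-squares problem directly; it should reproduce the same estimates:
beta_lstsq = np.linalg.lstsq(X, y)[0]
print beta
print beta_lstsq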
n = 100 # number of samples
X1 = np.random.rand(n)*99.0
X2 = np.random.rand(n)*51.0 - 26.8
X3 = np.random.rand(n)*5.0 + 6.1
X4 = np.random.rand(n)*1.0 - 0.5
X5 = np.random.rand(n)*300.0
y_m = -7.3 + 2.5*X1 + -7.9*X2 + 1.5*X3 + 10.0*X4 + 0.13*X5 + np.random.randn(n)*7.0
plt.hist(y_m, bins=20)
;
X_m = np.vstack((np.ones(n), X1, X2, X3, X4, X5)).T
X_m.shape
beta_m = np.linalg.inv(X_m.T.dot(X_m)).dot(X_m.T).dot(y_m)
beta_m
yhat_m = X_m.dot(beta_m)
yhat_m.shape
Explanation: Multiple Linear Regression
End of explanation
import math
RSMD = math.sqrt(np.square(yhat_m-y_m).sum()/n)
print RSMD
Explanation: Evaluation: Root-mean-square Deviation
The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSD serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSD is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.$^1$
$^1$ Hyndman, Rob J. Koehler, Anne B.; Koehler (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22 (4): 679–688. doi:10.1016/j.ijforecast.2006.03.001.
End of explanation
p = X.shape[1] ## get number of parameters
lam = 10.0
p, lam
beta2 = np.linalg.inv(X.T.dot(X) + lam*np.eye(p)).dot(X.T).dot(y)
yhat2 = X.dot(beta2)
RSMD2 = math.sqrt(np.square(yhat2-y).sum()/n)
print RSMD2
##n = float(X.shape[0])
print " RMSE = ", math.sqrt(np.square(yhat-y).sum()/n)
print "Ridge RMSE = ", math.sqrt(np.square(yhat2-y).sum()/n)
plt.plot(X[:,1], y, "o", alpha=0.5)
plt.plot(X[:,1], yhat, "-", alpha=0.7, color="red")
plt.plot(X[:,1], yhat2, "-", alpha=0.7, color="green")
Explanation: Regularization, Ridge-Regression
Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
In general, a regularization term $R(f)$ is introduced to a general loss function:
$$ \min_f \sum_{i=1}^{n} V(f(x_i), y_i) + \lambda R(f) $$
for a loss function $V$ that describes the cost of predicting $f(x)$ when the label is $y$, such as the square loss or hinge loss, and for the term $\lambda$ which controls the importance of the regularization term. $R(f)$ is typically a penalty on the complexity of $f$, such as restrictions for smoothness or bounds on the vector space norm.$^1$
A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution, as depicted in the figure. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.
Regularization can be used to learn simpler models, induce models to be sparse, introduce group structure into the learning problem, and more.
We're going to add the L2 term $\lambda||\beta||_2^2$ to the regression equation, which yields to$^2$
$$ \beta = (X^\prime X + \lambda I)^{-1} X^\prime y $$
$^1$ Bishop, Christopher M. (2007). Pattern recognition and machine learning (Corr. printing. ed.). New York: Springer. ISBN 978-0387310732.
$^2$ http://stats.stackexchange.com/questions/69205/how-to-derive-the-ridge-regression-solution
End of explanation |
5,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiclass Classification
Last modification
Step1: In this example, the iris dataset contains 150 samples, 4 features, and 3 classes. Details of the dataset can be found here.
Step2: This section is used to configure the classifiers that are going to be used in the classification, in this case, the Support Vector Machines (SVC) and Random Forest classifiers (RF) are going to be applied. The dictionary models is used to declare the classifiers and their parameters that are NOT going to be tunned in the cross-validation. The dictionary model_params is used to specify the parameters that DO will be tunned in the cross-validation. The dictionary cv_params is used to configure how the grid cross-validation is going to be performed.
Step3: The MultiClassifier trains the multiple estimators previouly configured. First, the data is divided n_splits times, in this case in 5 folds using the StratifiedKFold class. As it is shown in the following table, the data is divided 5 times, four of the 5 blocks will be used for training (blue ones), while one will be used for testing (orange one). In addition, if the parameter shuffle=True, the data will be rearranged before splitting into blocks.
<img src="./images/folds.png" alt="" height="250" width="250">
Step4: Second, the method train() receives the data, and the dictionaries with the configrations to compute the training. As an example the fold_1 is taken, it is divided in 3 parts to perform the cross-validation (specified in the dictionary cv_params). In the cross-validation, two parts are taken to tune the parameters of the classifiers, and one to test them.
<img src="./images/cross-validation.png" alt="" height="250" width="250">
Step5: Third, once that the best parameters were obtained, a model is generated from the training data. Following the example, this model is then tested on the fold_1
Step6: In order to analize an especific fold, you can obtaine the indices of the data used for training and testing, the model trained, as well as the prediction on the test data. The method best_estimator() has the parameter fold_key, if it is not set, the method returns the fold with the highest accuracy.
TODO
Step7: In the case that you need to train the model again using the data of an especific fold, you can use the bm_train_indices and bm_test_indices.
Step8: Also, the feature importances can be obtained if the algorithm has the option. | Python Code:
# Load the iris dataset and randomly permute it
import numpy as np
#import logging
#logger = logging.getLogger()
#logger.setLevel(logging.DEBUG)
#logging.debug("test")
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn import preprocessing
from mltoolbox.model_selection.classification import MultiClassifier
Explanation: Multiclass Classification
Last modification: 2017-10-16
End of explanation
# Load data
iris = load_iris()
X = iris.data
y = iris.target
n_samples, n_features = X.shape
print("samples:{}, features:{}, labels:{}".format(n_samples, n_features, np.unique(y)))
# Preprocessing
std_scale = preprocessing.StandardScaler().fit(X)
X_std = std_scale.transform(X)
print('After standardization:{:.4f}, {:.4f}'.format(X_std.mean(), X_std.std()))
Explanation: In this example, the wine dataset contains 178 samples, 13 features, and 3 clases. Details of the dataset can be find here.
End of explanation
# Configuration
random_state = 2017 # seed used by the random number generator
models = {
# NOTE: SVC and RFC are the names that will be used to make reference to the models after the training step.
'SVC': SVC(probability=True,
random_state=random_state),
'RFC': RandomForestClassifier(random_state=random_state)
}
model_params = {
'SVC': {'kernel':['linear', 'rbf', 'sigmoid']},
'RFC': {'n_estimators': [25,50, 75, 100]}
}
cv_params = {
'cv': StratifiedKFold(n_splits=3, shuffle=False, random_state=random_state)
}
Explanation: This section is used to configure the classifiers that are going to be used in the classification; in this case, the Support Vector Machines (SVC) and Random Forest (RFC) classifiers are going to be applied. The dictionary models is used to declare the classifiers and the parameters that are NOT going to be tuned in the cross-validation. The dictionary model_params is used to specify the parameters that WILL be tuned in the cross-validation. The dictionary cv_params is used to configure how the grid cross-validation is going to be performed.
End of explanation
# Training
mc = MultiClassifier(n_splits=5, shuffle=True, random_state=random_state)
Explanation: The MultiClassifier trains the multiple estimators previously configured. First, the data is split n_splits times, in this case into 5 folds, using the StratifiedKFold class. As shown in the following table, the data is divided 5 times; four of the 5 blocks will be used for training (blue ones), while one will be used for testing (orange one). In addition, if the parameter shuffle=True, the data will be rearranged before splitting into blocks. (A small sketch of the resulting split sizes is shown after this cell.)
<img src="./images/folds.png" alt="" height="250" width="250">
End of explanation
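The splitting itself is plain scikit-learn; a small sketch of the resulting split sizes (independent of the MultiClassifier wrapper) looks like this:
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=random_state)
for k, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    print("fold_{}: {} training samples, {} test samples".format(k + 1, len(train_idx), len(test_idx)))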
mc.train(X, y, models, model_params, cv_params=cv_params)
Explanation: Second, the method train() receives the data and the dictionaries with the configurations to compute the training. As an example, take fold_1: its training data is divided into 3 parts to perform the cross-validation (as specified in the dictionary cv_params). In the cross-validation, two parts are taken to tune the parameters of the classifiers, and one to test them.
<img src="./images/cross-validation.png" alt="" height="250" width="250">
End of explanation
# Results
print('RFC\n{}\n'.format(mc.report_score_summary_by_classifier('RFC')))
print('SVC\n{}\n'.format(mc.report_score_summary_by_classifier('SVC')))
Explanation: Third, once the best parameters have been obtained, a model is generated from the training data. Following the example, this model is then tested on the fold_1:test block. As soon as the training and testing have been performed for each fold, the results can be visualized in a report.
End of explanation
# Get the results of the partition that has the highest accuracy
fold, bm_model, bm_y_pred, bm_train_indices, bm_test_indices = mc.best_estimator('RFC')['RFC']
print(">>Best model in fold: {}".format(fold))
print(">>>Trained model \n{}".format(bm_model))
print(">>>Predicted labels: \n{}".format(bm_y_pred))
print(">>>Indices of the samples used for training: \n{}".format(bm_train_indices))
print(">>>Indices of samples used for predicting: \n{}".format(bm_test_indices))
Explanation: In order to analyze a specific fold, you can obtain the indices of the data used for training and testing, the model trained, as well as the prediction on the test data. The method best_estimator() has the parameter fold_key; if it is not set, the method returns the fold with the highest accuracy.
TODO: Use the measurement as a parameter to get the best estimator
End of explanation
# Recover the partition of the dataset based on the results of the best model
X_train_final, X_test_final = X[bm_train_indices], X[bm_test_indices]
y_train_final, y_test_final = y[bm_train_indices], y[bm_test_indices]
# Testing the best model using again all the training set
bm_model.fit(X_train_final, y_train_final)
print("Final score {0:.4f}".format(bm_model.score(X_test_final, y_test_final)))
Explanation: In the case that you need to train the model again using the data of a specific fold, you can use the bm_train_indices and bm_test_indices.
End of explanation
importances = mc.feature_importances('RFC')
%matplotlib inline
import matplotlib.pyplot as plt
indices = range(n_features)
f, ax = plt.subplots(figsize=(11, 9))
plt.title("Feature importances", fontsize = 20)
plt.bar(indices, importances, color="black", align="center")
plt.xticks(indices)
plt.ylabel("Importance", fontsize = 18)
plt.xlabel("Index of the feature", fontsize = 18)
Explanation: Also, the feature importances can be obtained if the algorithm has the option.
End of explanation |
5,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Alternative
Step2: Checking our results (inference) | Python Code:
!pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train.shape
import numpy as np
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
x_train.shape
# reduce memory and compute time
NUMBER_OF_SAMPLES = 50000
x_train_samples = x_train[:NUMBER_OF_SAMPLES]
y_train_samples = y_train[:NUMBER_OF_SAMPLES]
import skimage.data
import skimage.transform
x_train_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_train_samples])
x_train_224.shape
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tf2/fashion-mnist-resnet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Fashion MNIST with Keras and Resnet
Adapted from
* https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb
* https://github.com/margaretmz/deep-learning/blob/master/fashion_mnist_keras.ipynb
End of explanation
from tensorflow.keras.applications.resnet50 import ResNet50
# https://keras.io/applications/#mobilenet
# https://arxiv.org/pdf/1704.04861.pdf
from tensorflow.keras.applications.mobilenet import MobileNet
# model = ResNet50(classes=10, weights=None, input_shape=(32, 32, 1))
model = MobileNet(classes=10, weights=None, input_shape=(32, 32, 1))
model.summary()
%%time
BATCH_SIZE=10
EPOCHS = 10
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train_224, y_train_samples, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_split=0.2, verbose=1)
import matplotlib.pyplot as plt
plt.xlabel('epochs')
plt.ylabel('loss')
plt.yscale('log')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['Loss', 'Validation Loss'])
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['Accuracy', 'Validation Accuracy'])
Explanation: Alternative: ResNet
basic ideas
depth does matter
8x deeper than VGG
possible by using shortcuts and skipping final fc layer (a minimal shortcut block is sketched after this cell)
prevents vanishing gradient problem
https://keras.io/applications/#resnet50
https://medium.com/towards-data-science/neural-network-architectures-156e5bad51ba
http://arxiv.org/abs/1512.03385
End of explanation
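The shortcut idea from the bullet points above can be illustrated with a minimal residual block in tf.keras (a sketch for illustration only, not the actual ResNet50 implementation):
from tensorflow.keras import layers
def residual_block(x, filters=64):
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.Add()([shortcut, y])   # the skip connection lets gradients bypass the conv layers
    return layers.Activation('relu')(y)
inputs = layers.Input(shape=(32, 32, 64))
outputs = residual_block(inputs)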
x_test_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_test])
LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']
def plot_predictions(images, predictions):
n = images.shape[0]
nc = int(np.ceil(n / 4))
f, axes = plt.subplots(nc, 4)
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
label = LABEL_NAMES[np.argmax(predictions[i])]
confidence = np.max(predictions[i])
if i > n:
continue
axes[x, y].imshow(images[i])
axes[x, y].text(0.5, 0.5, label + '\n%.3f' % confidence, fontsize=14)
plt.gcf().set_size_inches(8, 8)
plot_predictions(np.squeeze(x_test_224[:16]),
model.predict(x_test_224[:16]))
train_loss, train_accuracy = model.evaluate(x_train_224, y_train_samples, batch_size=BATCH_SIZE)
train_accuracy
test_loss, test_accuracy = model.evaluate(x_test_224, y_test, batch_size=BATCH_SIZE)
test_accuracy
Explanation: Checking our results (inference)
End of explanation |
5,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 2
Imports
Step1: Indefinite integrals
Here is a table of definite integrals. Many of these integrals has a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps
Step2: Integral 1
$$ I_1=\int_0^\infty \frac{x^{p-1}dx}{1+x}=\frac{\pi}{\sin p\pi}, 0<p<1 $$
Step3: Integral 2
$$ I_2=\int_0^a \sqrt{a^{2}-x^{2}}dx=\frac{\pi a^{2}}{4} $$
Step4: Integral 3
$$ I_3=\int_0^\infty \frac{dx}{\sqrt{a^{2}-x^{2}}}=\frac{\pi}{2} $$
Step5: Integral 4
$$ I_4=\int_0^\infty \frac{\sin^{2}px}{x^{2}}dx=\frac{\pi p}{2} $$
Step6: Integral 5
$$ I_5=\int_0^\infty \frac{1-\cos px}{x^{2}}dx=\frac{\pi p}{2} $$ | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
Explanation: Integration Exercise 2
Imports
End of explanation
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print('Numerical:', integral_approx(1.0))
print('Exact:', integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Indefinite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LateX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
def integrand1(x,p):
return (x**(p-1))/(1+x)
def integral_approx1(p):
I,e=integrate.quad(integrand1, 0, np.inf, args=(p,))
return I
def integral_exact1(p):
return np.pi/(np.sin(p*np.pi))
print('Numerical:', integral_approx1(0.5))
print('Exact:', integral_exact1(0.5))
assert True # leave this cell to grade the above integral
Explanation: Integral 1
$$ I_1=\int_0^\infty \frac{x^{p-1}dx}{1+x}=\frac{\pi}{\sin p\pi}, 0<p<1 $$
End of explanation
def integrand2(x,a):
return (a**2-x**2)**(1/2)
def integral_approx2(a):
I,e=integrate.quad(integrand2, 0,a,args=(a,))
return I
def integral_exact2(a):
return np.pi*(a**2)/4
print('Numerical:', integral_approx2(1.0))
print('Exact:', integral_exact2(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 2
$$ I_2=\int_0^a \sqrt{a^{2}-x^{2}}dx=\frac{\pi a^{2}}{4} $$
End of explanation
def integrand3(x,a):
return 1/((a**2-x**2)**(1/2))
def integral_approx3(a):
I,e=integrate.quad(integrand3, 0,np.inf,args=(a,))
return I
def integral_exact3(a):
return np.pi/2
#print('Numerical:', integral_approx3(1.0))
#print('Exact:', integral_exact3(1.0))
# this integral as written seems to be flawed: for any value of a there is an x that makes the denominator 0, so the function being integrated is discontinuous (and not even real for x > a) on the infinite interval
assert True # leave this cell to grade the above integral
Explanation: Integral 3
$$ I_3=\int_0^\infty \frac{dx}{\sqrt{a^{2}-x^{2}}}=\frac{\pi}{2} $$
End of explanation
def integrand4(x,p):
return (np.sin(p*x)**2)/(x**2)
def integral_approx4(p):
I,e=integrate.quad(integrand4, 0,np.inf,args=(p,))
return I
def integral_exact4(p):
return np.pi*p/2
print('Numerical:', integral_approx4(1.0))
print('Exact:', integral_exact4(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 4
$$ I_4=\int_0^\infty \frac{\sin^{2}px}{x^{2}}dx=\frac{\pi p}{2} $$
End of explanation
def integrand5(x,p):
return (1-np.cos(p*x))/(x**2)
def integral_approx5(p):
I,e=integrate.quad(integrand5, 0,np.inf,args=(p,))
return I
def integral_exact5(p):
return np.pi*p/2
print('Numerical:', integral_approx5(1.0))
print('Exact:', integral_exact5(1.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 5
$$ I_5=\int_0^\infty \frac{1-\cos px}{x^{2}}dx=\frac{\pi p}{2} $$
End of explanation |
5,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 8
v1.1, 2020.4 2020.5 edit by David Yi
Key points of this lesson
Introduction to functions and how to use them
Think about it: rock-paper-scissors
Using functions
A function is an organized, reusable block of code that implements a single piece of functionality or a group of related functionality. Functions improve the modularity of an application and the reuse of code.
Python provides many built-in functions, such as print() and max();
Python's large set of built-in functions covers the vast majority of basic needs; this is part of what people mean when they call Python a Swiss Army knife.
Functions can greatly reduce repeated code, lower the workload and reduce errors.
Basic syntax of a function
def function_name( parameters )
Step1: How would you modify the function above so that it has only one return?
Step2: Write a function that takes a number and determines whether it is even; return True if it is even, otherwise return False
Write a function that takes an English word and returns the number of vowels in it
Step3: Think about it
Write a simple rock-paper-scissors game; | Python Code:
# Compute the area of a circle
# Without a function, we need to write some repeated code every time
r1 = 4
r2 = 6
r3 = 5.61
s1 = 3.14 * r1 * r1
s2 = 3.14 * r2 * r2
s3 = 3.14 * r3 * r3
print(s1)
print(s2)
print(s3)
# Define a function to compute the area of a circle
# Input: the radius; returns: the area of the circle
def func1(r):
s = 3.14 * r * r
return s
print(func1(4))
print(func1(6))
print(func1(5.61))
# First, a look at Python's built-in function for getting the maximum value
print(max(1,5))
print(max(1,4,6,3,9))
# Note: when the function body runs, execution of the function ends as soon as a return statement is reached, and the result is returned.
# Because of this, conditionals and loops inside a function can implement very complex logic.
# Stylistically, having multiple return points in one function is not really recommended, but sometimes it is very convenient
def my_max(a,b):
if a > b:
return a
else:
return b
print(my_max(3,6))
Explanation: Lesson 8
v1.1, 2020.4 2020.5 edit by David Yi
Key points of this lesson
Introduction to functions and how to use them
Think about it: rock-paper-scissors
Using functions
A function is an organized, reusable block of code that implements a single piece of functionality or a group of related functionality. Functions improve the modularity of an application and the reuse of code.
Python provides many built-in functions, such as print() and max();
Python's large set of built-in functions covers the vast majority of basic needs; this is part of what people mean when they call Python a Swiss Army knife.
Functions can greatly reduce repeated code, lower the workload and reduce errors.
Basic syntax of a function
def function_name( parameters ):
    function body
    return [value]
Although Python has a large number of built-in functions, most of them provide only basic functionality, and sometimes they are not enough for real programs;
Functions can become very complex and grow into packages (Package). Python currently has more than 70,000 packages of all kinds, almost all of which can be used in your own programs for free; this is one of the reasons Python has developed so quickly in recent years.
Learning to use functions to simplify programs lays the groundwork for learning object-oriented programming later, and lets you really start using Python to solve all kinds of problems.
When this document was written in 2017, Python had roughly 70,000 packages; now, in April 2020, a look at PyPI (the home of Python packages) shows about 230,000. The growth has been very rapid.
End of explanation
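A tiny added illustration of the def / return syntax described above (not part of the original lesson):
def add_two(a, b):
    total = a + b
    return total

print(add_two(3, 5))  # prints 8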
def my_max(a,b):
if a > b:
c = a
else:
c = b
return c
print(my_max(3,6))
# Writing *number like this means the function accepts any number of arguments, including zero; just put a * in front of the parameter name
# To keep the logic simple, the returned result is fake for now; we will change it later
def my_max(*number):
print(type(number))
return 100
print(my_max(1,5,30,14,25))
# Use a list and sorting to finish this function that takes the maximum of any number of values
def my_max(*number):
a_list = list(number)
a_list.sort()
a = a_list[-1]
return a
print(my_max(1,5,30,14,25))
# Returning multiple values from a Python function really returns a tuple, but the parentheses can be omitted, which is more convenient to write
# Return the maximum and the minimum at the same time
def my_max_min(*number):
a_list = list(number)
a_list.sort()
my_max = a_list[-1]
my_min = a_list[0]
return my_max, my_min
print(my_max_min(1,5,30,14,25))
Explanation: How would you modify the function above so that it has only one return?
End of explanation
# Determine whether a number is odd or even
a = int(input('Input a number:'))
def oushu():
if int(a/2) == a/2:
return True
#print('This is an even number')
else:
return False
#print('This is an odd number')
result = oushu()
print(result)
# Count the vowels in a word
def reason(s):
b = ['a','e','i','o','u']
c = 0
for i in list(s):
if i in b:
# shorthand for c = c + 1
c += 1
return c
a = input('please input a word:')
print(reason(a))
# Compute the factors of a positive integer
def yinshu(number):
b = []
for i in range(1,number+1):
if number % i == 0:
b.append(i)
return b
print(yinshu(10))
# Find the maximum value, version 1
def find_max(list1):
# float('-inf') stands for negative infinity
max = float('-inf')
for x in list1:
if x > max:
max = x
return max
print(find_max([-20,1,6,7,20,5]))
print(find_max([-20,-3,-6,-7,-5]))
# Find the maximum value, version 2
# Use recursion. Recursion is a very effective way to solve problems in programming: break the logic into one fixed step whose arguments come from the result of the previous call
# When using recursion, take care with the base case and guard against falling into infinite recursion
# Sorting and the Fibonacci sequence are classic examples of recursion (see the added sketch after this cell)
def find_max(list1):
if len(list1) == 1:
return list1[0]
v1 = list1[0]
v2 = find_max(list1[1:])
if v1 > v2:
return v1
else:
return v2
print(find_max([1,6,7,20,5]))
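The comments above mention the Fibonacci sequence as a classic example of recursion; here is a minimal added sketch (not part of the original lesson), with the base case guarding against infinite recursion:
def fib(n):
    # base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # prints 55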
Explanation: Write a function that takes a number and determines whether it is even; return True if it is even, otherwise return False
Write a function that takes an English word and returns the number of vowels in it
End of explanation
# A simple rock-paper-scissors game
import random
FIRST = 0
SECOND = 1
BOTH = 2
t1 = ('剪刀', '石头', '布')
t2 = ('human win', 'computer win', 'draw')
def which_win(i1, i2):
if i1 == 0 and i2 == 1:
return SECOND
if i1 == 0 and i2 == 2:
return FIRST
if i1 == 1 and i2 == 0:
return FIRST
if i1 == 1 and i2 == 2:
return SECOND
if i1 == 2 and i2 == 0:
return SECOND
if i1 == 2 and i2 == 1:
return FIRST
if i1 == i2:
return BOTH
print('0:剪刀 1:石头 2:布')
human = int(input('你出了:'))
c_index = random.randint(0,2)
computer = t1[c_index]
print("电脑出了",computer)
print(t2[which_win(human, c_index)])
# An improved rock-paper-scissors program
import random
FIRST = 0
SECOND = 1
BOTH = 2
t1 = ('scissors', 'stone', 'cloth')
t2 = ('human win', 'computer win', 'draw')
table = {'01':1, '02':0, '10':0, '12':1, '20':1, '21':0, '00':2, '11':2, '22':2}
def which_win(i1, i2):
s = i1 + i2
return table.get(s)
# Display the rules
for i, s in enumerate(t1):
print(i, s, ', ', end='')
human = input('please input:')
h_index = int(human)
human_choice = t1[h_index]
c_index = random.randint(0,2)
computer_choice = t1[c_index]
print('human:',h_index,human_choice)
print('computer:',c_index, computer_choice)
print(t2[which_win(str(h_index), str(c_index))])
# Best-of-five (first to three wins) rock-paper-scissors program
import random
FIRST = 0
SECOND = 1
BOTH = 2
t1 = ('scissors', 'stone', 'cloth')
t2 = ('human win', 'computer win', 'draw')
table = {'01':1, '02':0, '10':0, '12':1, '20':1, '21':0, '00':2, '11':2, '22':2}
def which_win(i1, i2):
s = i1 + i2
return table.get(s)
mark = True
computer_win = 0
human_win = 0
while mark:
# Display the rules
for i, s in enumerate(t1):
print(i, s, ', ', end='')
human = input('please input:')
c_index = random.randint(0,2)
computer = t1[c_index]
print(c_index, computer)
who_win = which_win(human, str(c_index))
if who_win == 0:
human_win += 1
if who_win == 1:
computer_win += 1
print(t2[which_win(human, str(c_index))])
print('human:computer ', human_win, ':', computer_win)
print()
if (human_win == 3) or (computer_win == 3):
mark = False
if human_win == 3:
print('final human win')
else:
print('final computer win')
Explanation: Think about it
Write a simple rock-paper-scissors game;
End of explanation |
5,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: A notebook to process experimental results of ex2_prob_params.py. p(reject) as problem parameters are varied.
Step2: $$p(x)=\mathcal{N}(0, I) \
q(x)=\mathcal{N}(0, I)$$
Step3: $$p(x)=\mathcal{N}(0, I) \
q(x)=\mathcal{N}(0, \mathrm{diag}(2,1,1,\ldots))$$ | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import kgof.data as data
import kgof.glo as glo
import kgof.goftest as gof
import kgof.kernel as kernel
import kgof.plot as plot
import kgof.util as util
import scipy.stats as stats
import kgof.plot
kgof.plot.set_default_matplotlib_options()
# np.random.seed(0)
# x = np.linspace(-5., 5., 50)
# y = 3 * np.exp(-0.5 * (x - 1.3)**2 / 0.8**2)
# y += np.random.normal(0., 0.2, x.shape)
# f = plt.figure(0)
# plt.plot(x,y, 'b-')
# plt.xlabel(r'$\alpha$')
# plt.ylabel('Test power')
# f.savefig('test.pdf', bbox_inches='tight')
def load_plot_vs_Js(fname, show_legend=True, xscale='log', yscale='linear'):
J = number of test locations.
func_xvalues = lambda agg_results: agg_results['Js']
ex = 3
def func_title(agg_results):
repeats, _, n_methods = agg_results['job_results'].shape
alpha = agg_results['alpha']
test_size = (1.0 - agg_results['tr_proportion'])*agg_results['sample_size']
title = '%s. %d trials. test size: %d. $\\alpha$ = %.2g.'%\
( agg_results['prob_label'], repeats, test_size, alpha)
return title
#plt.figure(figsize=(10,5))
results = plot.plot_prob_reject(
ex, fname, func_xvalues, '', func_title=func_title)
plt.title('')
if xscale is not None:
plt.xscale(xscale)
if yscale is not None:
plt.yscale(yscale)
plt.xlabel('$J$')
plt.gca().legend(loc='best').set_visible(show_legend)
if show_legend:
plt.legend(bbox_to_anchor=(1.70, 1.05))
plt.grid(False)
return results
def load_runtime_vs_Js(fname, xlabel='$J$ parameter',
show_legend=True, xscale='linear', yscale='linear'):
func_xvalues = lambda agg_results: agg_results['Js']
ex = 3
def func_title(agg_results):
repeats, _, n_methods = agg_results['job_results'].shape
alpha = agg_results['alpha']
title = '%s. %d trials. $\\alpha$ = %.2g.'%\
( agg_results['prob_label'], repeats, alpha)
return title
#plt.figure(figsize=(10,6))
results = plot.plot_runtime(ex, fname,
func_xvalues, xlabel=xlabel, func_title=func_title)
plt.title('')
plt.gca().legend(loc='best').set_visible(show_legend)
if show_legend:
plt.legend(bbox_to_anchor=(1.70, 1.05))
plt.grid(False)
if xscale is not None:
plt.xscale(xscale)
if yscale is not None:
plt.yscale(yscale)
return results
# GMD
# gmd_fname = 'ex3-gmd1-me2_n500_rs100_Jmi2_Jma32_a0.050_trp0.50.p'
# gmd_results = load_plot_vs_Js(gmd_fname, show_legend=True)
# p: normal, q: Gaussian mixture
# g_vs_gmm_fname = 'ex3-g_vs_gmm_d5-me2_n500_rs50_Jmi2_Jma384_a0.050_trp0.50.p'
# g_vs_gmm_fname = 'ex3-g_vs_gmm_d5-me2_n500_rs100_Jmi2_Jma384_a0.050_trp0.50.p'
# g_vs_gmm_fname = 'ex3-g_vs_gmm_d2-me2_n500_rs50_Jmi2_Jma384_a0.050_trp0.50.p'
# g_vs_gmm_fname = 'ex3-g_vs_gmm_d1-me2_n500_rs50_Jmi2_Jma384_a0.050_trp0.50.p'
g_vs_gmm_fname = 'ex3-g_vs_gmm_d1-me2_n500_rs200_Jmi2_Jma384_a0.050_trp0.50.p'
# g_vs_gmm_fname = 'ex3-g_vs_gmm_d1-me2_n800_rs50_Jmi2_Jma384_a0.050_trp0.50.p'
g_vs_gmm_results = load_plot_vs_Js(g_vs_gmm_fname, show_legend=False)
plt.xticks([1, 10, 1e2, 1e3])
plt.savefig(g_vs_gmm_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# Gaussian mixture
# gmm_fname = 'ex3-gmm_d1-me2_n500_rs100_Jmi2_Jma32_a0.050_trp0.50.p'
# gmm_results = load_plot_vs_Js(gmm_fname)
# Same Gaussian
sg5_fname = "ex3-sg5-me2_n500_rs100_Jmi2_Jma384_a0.050_trp0.50.p"
sg5_results = load_plot_vs_Js(sg5_fname, show_legend=False)
plt.ylim([0, 0.05])
plt.xticks([1, 10, 1e2, 1e3])
plt.savefig(sg5_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
Explanation: A notebook to process experimental results of ex2_prob_params.py. p(reject) as problem parameters are varied.
End of explanation
# Gaussian variance difference.
gvd5_fname = 'ex3-gvd5-me2_n500_rs100_Jmi2_Jma384_a0.050_trp0.50.p'
gvd5_results = load_plot_vs_Js(gvd5_fname, show_legend=True)
plt.legend(bbox_to_anchor=(1.8, 1.05))
plt.xticks([1, 10, 1e2, 1e3])
plt.savefig(gvd5_fname.replace('.p', '.pdf', 1), bbox_inches='tight')
# plt.legend(ncol=2)
#plt.ylim([0.03, 0.1])
Explanation: $$p(x)=\mathcal{N}(0, I) \\
q(x)=\mathcal{N}(0, I)$$
End of explanation
# Gauss-Bernoulli RBM. H1 case
# rbm_h1_fname = 'ex3-gbrbm_dx5_dh3_v5em3-me2_n500_rs100_Jmi2_Jma384_a0.050_trp0.50.p'
# rbm_h1_results = load_plot_vs_Js(rbm_h1_fname, show_legend=True)
Explanation: $$p(x)=\mathcal{N}(0, I) \\
q(x)=\mathcal{N}(0, \mathrm{diag}(2,1,1,\ldots))$$
End of explanation |
5,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Keras and Scikit models deployed on Cloud AI Platform with the What-if Tool
In this notebook we'll use the UCI wine quality dataset to train both tf.keras and Scikit learn regression models that will predict the quality rating of a wine given 11 numerical data points about the wine. You'll learn how to
Step1: Download and process data
In this section we'll
Step2: Train tf.keras model
In this section we'll
Step3: Deploy keras model to Cloud AI Platform
In this section we'll
Step4: Build and train Scikit learn model
In this section we'll
Step5: Deploy Scikit model to CAIP
In this section we'll
Step6: Compare tf.keras and Scikit models with the What-if Tool
Now we're ready for the What-if Tool! In this section we'll | Python Code:
import sys
python_version = sys.version_info[0]
# If you're running on Colab, you'll need to install the What-if Tool package and authenticate
def pip_install(module):
if python_version == '2':
!pip install {module} --quiet
else:
!pip3 install {module} --quiet
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
pip_install('witwidget')
from google.colab import auth
auth.authenticate_user()
import pandas as pd
import numpy as np
import tensorflow as tf
import witwidget
import os
import pickle
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from sklearn.utils import shuffle
from sklearn.linear_model import LinearRegression
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
# This has been tested on TF 1.14
print(tf.__version__)
Explanation: Comparing Keras and Scikit models deployed on Cloud AI Platform with the What-if Tool
In this notebook we'll use the UCI wine quality dataset to train both tf.keras and Scikit learn regression models that will predict the quality rating of a wine given 11 numerical data points about the wine. You'll learn how to:
* Build, train, and then deploy tf.keras and Scikit Learn models to Cloud AI Platform
* Use the What-if Tool to compare two different models deployed on CAIP
You will need a Google Cloud Platform account and project to run this notebook. Instructions for creating a project can be found here.
Installing dependencies
End of explanation
!wget 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv'
data = pd.read_csv('winequality-white.csv', index_col=False, delimiter=';')
data = shuffle(data, random_state=4)
data.head()
labels = data['quality']
print(labels.value_counts())
data = data.drop(columns=['quality'])
train_size = int(len(data) * 0.8)
train_data = data[:train_size]
train_labels = labels[:train_size]
test_data = data[train_size:]
test_labels = labels[train_size:]
train_data.head()
Explanation: Download and process data
In this section we'll:
* Download the wine quality data directly from UCI Machine Learning
* Read it into a Pandas dataframe and preview it
* Split the data and labels into train and test sets
End of explanation
# This is the size of the array we'll be feeding into our model for each wine example
input_size = len(train_data.iloc[0])
print(input_size)
model = Sequential()
model.add(Dense(200, input_shape=(input_size,), activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(25, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
model.fit(train_data.values,train_labels.values, epochs=4, batch_size=32, validation_split=0.1)
Explanation: Train tf.keras model
In this section we'll:
Build a regression model using tf.keras to predict a wine's quality score
Train the model
Add a layer to the model to prepare it for serving
End of explanation
# Update these to your own GCP project + model names
GCP_PROJECT = 'your_gcp_project'
KERAS_MODEL_BUCKET = 'gs://your_storage_bucket'
KERAS_VERSION_NAME = 'v1'
# Add the serving input layer below in order to serve our model on AI Platform
class ServingInput(tf.keras.layers.Layer):
# the important detail in this boilerplate code is "trainable=False"
def __init__(self, name, dtype, batch_input_shape=None):
super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype, batch_input_shape=batch_input_shape)
def get_config(self):
return {'batch_input_shape': self._batch_input_shape, 'dtype': self.dtype, 'name': self.name }
restored_model = model
serving_model = tf.keras.Sequential()
serving_model.add(ServingInput('serving', tf.float32, (None, input_size)))
serving_model.add(restored_model)
tf.contrib.saved_model.save_keras_model(serving_model, os.path.join(KERAS_MODEL_BUCKET, 'keras_export')) # export the model to your GCS bucket
export_path = KERAS_MODEL_BUCKET + '/keras_export'
# Configure gcloud to use your project
!gcloud config set project $GCP_PROJECT
# Create a new model in our project, you only need to run this once
!gcloud ai-platform models create keras_wine
# Deploy the model to Cloud AI Platform
!gcloud beta ai-platform versions create $KERAS_VERSION_NAME --model keras_wine \
--origin=$export_path \
--python-version=3.5 \
--runtime-version=1.14 \
--framework='TENSORFLOW'
%%writefile predictions.json
[7.8, 0.21, 0.49, 1.2, 0.036, 20.0, 99.0, 0.99, 3.05, 0.28, 12.1]
# Test the deployed model on an example from our test set
# The correct score for this prediction is 7
prediction = !gcloud ai-platform predict --model=keras_wine --json-instances=predictions.json --version=$KERAS_VERSION_NAME
print(prediction[1])
Explanation: Deploy keras model to Cloud AI Platform
In this section we'll:
* Set up some global variables for our GCP project
* Add a serving layer to our model so we can deploy it on Cloud AI Platform
* Run the deploy command to deploy our model
* Generate a test prediction on our deployed model
End of explanation
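As an added alternative to the gcloud CLI call above, online predictions can also be requested from Python through the AI Platform REST API. This sketch is not part of the original notebook; it assumes the google-api-python-client package is installed and that application default credentials are configured:
from googleapiclient import discovery

service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(GCP_PROJECT, 'keras_wine', KERAS_VERSION_NAME)
body = {'instances': [[7.8, 0.21, 0.49, 1.2, 0.036, 20.0, 99.0, 0.99, 3.05, 0.28, 12.1]]}
response = service.projects().predict(name=name, body=body).execute()
print(response)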
SKLEARN_VERSION_NAME = 'v1'
SKLEARN_MODEL_BUCKET = 'gs://sklearn_model_bucket'
scikit_model = LinearRegression().fit(train_data.values, train_labels.values)
# Export the model to a local file using pickle
pickle.dump(scikit_model, open('model.pkl', 'wb'))
Explanation: Build and train Scikit learn model
In this section we'll:
* Train a regression model using Scikit Learn
* Save the model to a local file using pickle
End of explanation
# Copy the saved model to Cloud Storage
!gsutil cp ./model.pkl gs://wine_sklearn/model.pkl
# Create a new model in our project, you only need to run this once
!gcloud ai-platform models create sklearn_wine
!gcloud beta ai-platform versions create $SKLEARN_VERSION_NAME --model=sklearn_wine \
--origin=$SKLEARN_MODEL_BUCKET \
--runtime-version=1.14 \
--python-version=3.5 \
--framework='SCIKIT_LEARN'
# Test the model using the same example instance from above
!gcloud ai-platform predict --model=sklearn_wine --json-instances=predictions.json --version=$SKLEARN_VERSION_NAME
Explanation: Deploy Scikit model to CAIP
In this section we'll:
* Copy our saved model file to Cloud Storage
* Run the gcloud command to deploy our model
* Generate a prediction on our deployed model
End of explanation
# Create np array of test examples + their ground truth labels
test_examples = np.hstack((test_data[:200].values,test_labels[:200].values.reshape(-1,1)))
print(test_examples.shape)
# Create a What-if Tool visualization, it may take a minute to load
# See the cell below this for exploration ideas
# We use `set_predict_output_tensor` here because our tf.keras model returns a dict with a 'sequential' key
config_builder = (WitConfigBuilder(test_examples.tolist(), data.columns.tolist() + ['quality'])
.set_ai_platform_model(GCP_PROJECT, 'keras_wine', KERAS_VERSION_NAME).set_predict_output_tensor('sequential').set_uses_predict_api(True)
.set_target_feature('quality')
.set_model_type('regression')
.set_compare_ai_platform_model(GCP_PROJECT, 'sklearn_wine', SKLEARN_VERSION_NAME))
WitWidget(config_builder, height=800)
Explanation: Compare tf.keras and Scikit models with the What-if Tool
Now we're ready for the What-if Tool! In this section we'll:
* Create an array of our test examples with their ground truth values. The What-if Tool works best when we send the actual values for each example input.
* Instantiate the What-if Tool using the set_compare_ai_platform_model method. This lets us compare 2 models deployed on Cloud AI Platform.
End of explanation |
5,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 8
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
return x/255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
N=len(x)
z=np.zeros((N,10))
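# NumPy fancy indexing: for each row i, set column x[i] to 1 (assumes labels are integers 0-9)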
z[np.arange(N),x]=1
return z
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
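The hint above ("Don't reinvent the wheel") points at using an existing library. Here is a sketch of an equivalent encoder using scikit-learn, added for illustration under the assumption that sklearn is available in this environment, and kept under a different name so it does not replace the graded function:
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
lb.fit(range(10))  # fix the 0-9 label space so every call produces consistent encodings

def one_hot_encode_sk(x):
    return lb.transform(x)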
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32,[None, image_shape[0], image_shape[1], image_shape[2]],name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32,[None,n_classes],name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32,name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# Conv2D wrapper, with bias and relu activation
i=x_tensor.get_shape().as_list()
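# filter shape expected by tf.nn.conv2d: [filter_height, filter_width, in_channels, out_channels]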
d=[conv_ksize[0], conv_ksize[1], i[3], conv_num_outputs]
W=tf.Variable(tf.truncated_normal(d,stddev=0.08))
x = tf.nn.conv2d(x_tensor, W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
b=tf.Variable(tf.zeros([conv_num_outputs]))
x = tf.nn.bias_add(x, b)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1],padding='SAME')
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
d=x_tensor.get_shape().as_list()
flat = tf.reshape(x_tensor, [-1, d[1]*d[2]*d[3]])
return flat
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
d=x_tensor.get_shape().as_list()
#print(d)
W=tf.Variable(tf.truncated_normal([d[1],num_outputs],stddev=0.08))
#print(W.shape)
b=tf.Variable(tf.zeros([num_outputs]))
#print(b.shape)
fc=tf.matmul(x_tensor,W)+b
#print(fc.shape)
return tf.nn.relu(fc)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
d=x_tensor.get_shape().as_list()
#print(d)
W=tf.Variable(tf.truncated_normal([d[1],num_outputs],stddev=0.08))
#print(W.shape)
b=tf.Variable(tf.zeros([num_outputs]))
#print(b.shape)
fc=tf.matmul(x_tensor,W)+b
#print(fc.shape)
return fc
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
cn1=conv2d_maxpool(x, 32, (5,5), (1,1), (2,2), (2,2))
cn1=tf.nn.dropout(cn1,keep_prob)
cn2=conv2d_maxpool(cn1, 64, (5,5), (1,1), (2,2), (2,2))
cn3=conv2d_maxpool(cn2, 128, (5,5), (1,1), (2,2), (2,2))
cn3=tf.nn.dropout(cn3,keep_prob)
#cn4=conv2d_maxpool(cn3, 256, (3,3), (1,1), (2,2), (2,2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat=flatten(cn3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc1=fully_conn(flat,512)
fc1=tf.nn.dropout(fc1,keep_prob)
fc2=fully_conn(fc1,256)
fc2=tf.nn.dropout(fc2,keep_prob)
fc3=fully_conn(fc2,128)
fc3=tf.nn.dropout(fc3,keep_prob)
fc4=fully_conn(fc3,128)
fc4=tf.nn.dropout(fc4,keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
logits=output(fc4,10)
# TODO: return output
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run([optimizer],feed_dict={x: feature_batch,
y: label_batch,
keep_prob:keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss,train_acc=session.run([cost,accuracy],feed_dict={x: feature_batch,
y: label_batch,
keep_prob:1.0})
_,val_acc=session.run([cost,accuracy],feed_dict={x: valid_features,
y: valid_labels,
keep_prob:1.0})
print("loss=",loss,"train_acc=",train_acc,"val_acc",val_acc)
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 20
batch_size = 64
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
5,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Logistic Regression
Step2: <hr />
Data setup for the 10 digit classes
The data is the same that was used in the last post but this time I will use all of the 0-9 images. There are 42000 total. Each image has 784 pixels and the first column is the label for what the image is. Let's read that in and look at the first 10 entries, then put that into a matrix called data_full_matrix.
Step3: You can show any of the images in that matrix with the following snippet of code. This would show the image in the 4th row (index 3) which is a hand-written 4.
Step4: <hr />
Create matrix $Y$
There is a column in $Y$ for each of the digits 0-9. I print out the first 10 rows so you can see how it is laid out. The first column has a 1 at rows 2, 5 and 6 (1, 4, 5 if you count from 0), which means that those rows correspond to the number 0. There are 42000 rows in $Y$.
Step5: <hr />
Separate out the label column and remove columns that are all 0's
We got rid of 76 feature columns that were all 0's
Step6: <hr />
Divide the dataset into sets for Training, Validation and Testing
There will be 29400 images in the Training set and 6300 images in each of the Validation and Test sets. The matrix $Y$ is divided up the same way.
Step7: <hr />
Mean normalize the data sets
Each of the data sets is normalized using the mean and standard deviation from the whole 42000 element data set. The make_multinomial_features function is used here simply to add the column of 1's to the data for the bias term.
Step8: <hr />
Find optimal parameters for the 10 models
This is the main training loop. All 10 models are optimized using the columns of $Y$ and the training data set. Each model is fit to its number (0-9) by evaluating its cost function against all of the other numbers ("the rest").
You can see that some of the models required many more iterations before convergence. There was also some numerical overflow present. I'm not too concerned about this since it is an artifact of the optimization run. The models converged OK and gave a reasonably good set of parameters for each of the 10 models. It is possible to work on each model separately to try to get better fits and the regularization term could be adjusted per model. I did play with the optimization somewhat but won't worry about it too much since in the next post I'll be doing an implementation of "Stochastic Gradient Descent" and will likely use this data again as an example.
Step9: <hr />
You can look at the fit quality of each model. The model for the digit 8 has the worst final value for the cost function and it looks like it had many false negatives. I am using the Validation data set to check the quality of fit.
Step10: <hr />
The fit for the "0" model has a low cost function and the quality of fit looks much better than that for "8".
Step11: <hr />
Use the model to make predictions for untested number images
The following function will return the probabilities predicted by each of the models for some given input image. The probabilities are sorted with the most likely being listed first.
Step12: <hr />
Following are a few random images picked from the test set.
The first image is of an "8". You can see that the model did not give a very high probability for "8" but it was higher than any of the other probabilities so it did give the correct answer!
Step13: <hr />
Checking how well the model did for each of the datasets
The next function will give a printout of the percentage of correct prediction in a dataset. We first look at the training and validation sets.
Step14: ... and now the test set. | Python Code:
import pandas as pd # data handling
import numpy as np # numerical computing
from scipy.optimize import minimize # optimization code
import matplotlib.pyplot as plt # plotting
import seaborn as sns
%matplotlib inline
sns.set()
import itertools # combinatorics functions for multinomial code
#
# Main Logistic Regression Equations
#
def g(z) : # sigmoid function
return 1.0/(1.0 + np.exp(-z))
def h_logistic(X,a) : # Model function
return g(np.dot(X,a))
def J(X,a,y) : # Cost Function
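# condensed form of -(1/m)*sum( y*log(h) + (1-y)*log(1-h) ), using log(1-h) = log(h) - dot(X,a) for the sigmoid h = g(dot(X,a))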
m = y.size
return -(np.sum(np.log(h_logistic(X,a))) + np.dot((y-1).T,(np.dot(X,a))))/m
def J_reg(X,a,y,reg_lambda) : # Cost Function with Regularization
m = y.size
return J(X,a,y) + reg_lambda/(2.0*m) * np.dot(a[1:],a[1:])
def gradJ(X,a,y) : # Gradient of Cost Function
m = y.size
return (np.dot(X.T,(h_logistic(X,a) - y)))/m
def gradJ_reg(X,a,y,reg_lambda) : # Gradient of Cost Function with Regularization
m = y.size
return gradJ(X,a,y) + reg_lambda/(2.0*m) * np.concatenate(([0], a[1:])).T
#
# Some model checking functions
#
def to_0_1(h_prob) : # convert probabilities to true (1) or false (0) at cut-off 0.5
return np.where(h_prob >= 0.5, 1, 0)
def model_accuracy(h,y) : # Overall accuracy of model
return np.sum(h==y)/y.size * 100
def model_accuracy_pos(h,y) : # Accuracy on positive cases
return np.sum(y[h==1] == 1)/y[y==1].size * 100
def model_accuracy_neg(h,y) : # Accuracy on negative cases
return np.sum(y[h==0] == 0)/y[y==0].size * 100
def false_pos(h,y) : # Number of false positives
return np.sum((y==0) & (h==1))
def false_neg(h,y) : # Number of false negatives
return np.sum((y==1) & (h==0))
def true_pos(h,y) : # Number of true positives
return np.sum((y==1) & (h==1))
def true_neg(h,y) : # Number of true negatives
return np.sum((y==0) & (h==0))
def model_precision(h,y) : # Precision = TP/(TP+FP)
return true_pos(h,y)/(true_pos(h,y) + false_pos(h,y))
def model_recall(h,y) : # Recall = TP/(TP+FN)
return true_pos(h,y)/(true_pos(h,y) + false_neg(h,y))
def print_model_quality(title, h, y) : # Print the results of the functions above
print( '\n# \n# {} \n#'.format(title) )
print( 'Total number of data points = {}'.format(y.size))
print( 'Number of Positive values(1s) = {}'.format(y[y==1].size))
print( 'Number of Negative values(0s) = {}'.format(y[y==0].size))
print( '\nNumber of True Positives = {}'.format(true_pos(h,y)) )
print( 'Number of False Positives = {}'.format(false_pos(h,y)) )
print( '\nNumber of True Negatives = {}'.format(true_neg(h,y)) )
print( 'Number of False Negatives = {}'.format(false_neg(h,y)) )
print( '\nModel Accuracy = {:.2f}%'.format( model_accuracy(h,y) ) )
print( 'Model Accuracy Positive Cases = {:.2f}%'.format( model_accuracy_pos(h,y) ) )
print( 'Model Accuracy Negative Cases = {:.2f}%'.format( model_accuracy_neg(h,y) ) )
print( '\nModel Precision = {}'.format(model_precision(h,y)) )
print( '\nModel Recall = {}'.format(model_recall(h,y)) )
def multinomial_partitions(n, k):
returns an array of length k sequences of integer partitions of n
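# "stars and bars" construction: choose k-1 cut points, pad with 0 and n+k, then successive
# differences minus 1 give k non-negative integers that sum to n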
nparts = itertools.combinations(range(1, n+k), k-1)
tmp = [(0,) + p + (n+k,) for p in nparts]
sequences = np.diff(tmp) - 1
return sequences[::-1] # reverse the order
def make_multinomial_features(fvecs,order=[1,2]) :
'''Make multinomial feature matrix
fvecs is a matrix of feature vectors (columns)
"order" is a set of multinomial degrees to create
default is [1,2] meaning for example: given f1, f2 in fvecs
return a matrix made up of a [1's column, f1,f2,f1**2,f1*f2,f2**2] '''
Xtmp = np.ones_like(fvecs[:,0])
for ord in order :
if ord==1 :
fstmp = fvecs
else :
pwrs = multinomial_partitions(ord,fvecs.shape[1])
fstmp = np.column_stack( ( np.prod(fvecs**pwrs[i,:], axis=1) for i in range(pwrs.shape[0]) ))
Xtmp = np.column_stack((Xtmp,fstmp))
return Xtmp
def mean_normalize(X):
'''apply mean normalization to each column of the matrix X'''
X_mean=X.mean(axis=0)
X_std=X.std(axis=0)
return (X-X_mean)/X_std
def apply_normalizer(X,X_mean,X_std) :
return (X-X_mean)/X_std
Explanation: Logistic Regression: Multi-Class (Multinomial) -- Full MNIST digits classification example
This post will be an implementation and example of what is commonly called "Multinomial Logistic Regression". The particular method I will look at is "one-vs-all" or "one-vs-rest".
<rant-on> What's in a name? "A rose by any other name would smell as sweet". In my opinion calling this "Multinomial Logistic Regression" stinks! A multinomial is a specific mathematical thing and I already used "multinomial term expansion of feature sets". I really feel that a more descriptive name would be "Multi-Class". ... but Multinomial Logistic Regression is the name that is commonly used.<rant-off>
This post is heavy on Python code and job runs. It includes the implementation code from the previous post with additional code to generalize that to multi-class. The usage example will be image classification of hand written digits (0-9) using the MNIST dataset.
I've done four earlier posts on Logistic Regression that give a pretty thorough explanation of Logistic Regression and cover theory and insight for what I'm looking at in this post, Logistic Regression Theory and Logistic and Linear Regression Regularization, Logistic Regression Implementation, Logistic Regression: Examples 1 -- 2D data fit with multinomial model and 0 1 digits classification on MNIST dataset.
This will be a "calculator" style implementation using Python in this Jupyter notebook. Everything needed to "tinker" with the method is contained in this notebook except the MNIST dataset. I pulled the MNIST training set from Kaggle. For information on the dataset itself see Yann Lecun's site http://yann.lecun.com/exdb/mnist/index.html. To use this notebook for your own experimentation you would need to download that dataset.
This post, along with all of the others in this series, was converted to html from Jupyter notebooks. The notebooks are available at https://github.com/dbkinghorn/blog-jupyter-notebooks
<hr />
Understanding Multi-Class (Multinomial) Logistic Regression
You can think of logistic regression as if the logistic (sigmoid) function is a single "neuron" that returns the probability that some input sample is the "thing" that the neuron was trained to recognize. It is a binary classifier. It just gives the probability that the input it is looking at is the ONE thing that it was trained to recognize. To generalize this to several "things" (classes) we can create a collection of these binary "neurons" with one for each class of the things that we want to distinguish. You could think of that as a single layer network of these sigmoid neurons.
To classify the 10 digits 0-9 there would be 10 of these sigmoid neurons in a single layer network. Like this,
The $f_i$ are the features i.e. pixels in an image, $h_i$ are the 10 individual digit models and MAX(P) is the result with the highest probability.
That is basically what we are going to do. In general the steps are,
Create a 0,1 vector $y_k$ for each class $k$. Each $y_k$ will have a 1 matching the position of all samples in the training set that match that class and 0 otherwise. (I will put them in a matrix $Y$ where the $k^{th}$ column of $Y$ is $y_k$)
Do an optimization loop over all $k$ classes finding an optimal parameter vector $a_k$ to define $k$ models $h_k$
To test, i.e. classify, an input, evaluate it with each $h_k$ to get a probability that it is in class $k$.
Pick the class with the highest probability as the "answer".
Specifically for the MNIST digits dataset being used;
- There will be $k=10$ classes with labels {0,1,2,3,4,5,6,7,8,9}.
- For a set with $m$ samples $Y_{set}$ will be an $(m \times 10)$ matrix of 0's and 1's corresponding to samples in each class. For example the first column of $Y$ will have a 1 in each row that is a sample image of a "0". ... The tenth column of $Y$ will have a 1 in each row that is a sample of a "9".
- The full data set has 42000 samples which will be divided into
- 29400 training-set samples,
- 6300 validation-set samples,
- 6300 test-set samples.
- The digit images in the MNIST dataset have 28 x 28 pixels. These pixels together with the bias term give the number of features. That means that each sample feature vector will have 784 + 1 = 785 features that we will need to find 785 parameters for.
- The optimization loop will be over the 10 classes and will produce a matrix $A$ of optimized parameters by minimizing a cost function for each of the 10 classes. Each of the 10 columns of $A$ will be a model parameter vector corresponding to one of the 10 classes (0-9).
- To test or use the resulting model the input sample will be evaluated for each of the 10 "class models" and sorted by highest probability. The result with the highest probability is the prediction from the model.
Simple! Let's do it.
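Before diving into the full code, here is the bare shape of steps 2-4 as a sketch (the names sigmoid, A and one_vs_all_predict are illustrative only; the real helper functions and the actual training loop appear below):
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_vs_all_predict(x, A):
    # x: one feature vector (bias column included); A: one trained parameter column per class
    probs = sigmoid(x @ A)     # probability from each of the k binary models
    return np.argmax(probs)    # the class whose model is most confident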
<hr />
Core Logistic Regression Functions (Python Code)
This section is the base code for logistic regression with regularization that was worked up in the previous posts. You can skip over this section if you have seen the code in the last post and just refer back to it if you need to see how some function was defined.
End of explanation
data_full = pd.read_csv("./data/kg-mnist/train.csv")
data_full.head(10)
data_full_matrix=data_full.to_numpy()
print(data_full_matrix.shape)
Explanation: <hr />
Data setup for the 10 digit classes
The data is the same that was used in the last post but this time I will use all of the 0-9 images. There are 42000 total. Each image has 784 pixels and the first column is the label for what the image is. Let's read that in and look at the first 10 entries, then put that into a matrix called data_full_matrix.
End of explanation
plt.figure(figsize=(1,1))
plt.imshow(data_full_matrix[3,1:].reshape((28,28)) )
Explanation: You can show any of the images in that matrix with the following snippet of code. This would show the image in the 4th row (index 3) which is a handwritten 4.
End of explanation
Y = np.zeros((data_full_matrix.shape[0],10))
for i in range(10) :
Y[:,i] = np.where(data_full_matrix[:,0]==i, 1,0)
Y[0:10,:]
Explanation: <hr />
Create matrix $Y$
There is a column in $Y$ for each of the digits 0-9. I print out the first 10 rows so you can see how it is laid out. The first column has a 1 at rows 2, 5 and 6 (1, 4, 5 if you count from 0), which means that those rows correspond to the number 0. There are 42000 rows in $Y$.
End of explanation
y_labels, data_09 = data_full_matrix[:,0], data_full_matrix[:,1:]
print(data_09.shape)
data_09 = data_09[:,data_09.sum(axis=0)!=0]
print(data_09.shape)
Explanation: <hr />
Separate out the label column and remove columns that are all 0's
We got rid of 76 feature columns that were all 0's
End of explanation
data_train_09,Y_train_09 = data_09[0:29400,:], Y[0:29400,:]
data_val_09, Y_val_09 = data_09[29400:35700,:], Y[29400:35700,:]
data_test_09, Y_test_09 = data_09[35700:,:], Y[35700:,:]
y_labels_train = y_labels[0:29400]
y_labels_val = y_labels[29400:35700]
y_labels_test = y_labels[35700:]
print(data_train_09.shape,Y_train_09.shape)
print(data_val_09.shape, Y_val_09.shape)
print(data_test_09.shape, Y_test_09.shape)
Explanation: <hr />
Divide the dataset into sets for Training, Validation and Testing
There will be 29400 images in the Training set and 6300 images in each of the Validation and Test sets. The matrix $Y$ is divided up the same way.
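(One design note, as a sketch only: the split here is a straight positional slice of the rows. If the csv were not already shuffled, a randomized split would be the safer default, e.g. with scikit-learn, which is otherwise not used in this post. The variable names below are hypothetical.)
# Hypothetical alternative 70/15/15 split using scikit-learn; not what is used in this post.
from sklearn.model_selection import train_test_split

data_tr, data_hold, Y_tr, Y_hold = train_test_split(data_09, Y, test_size=0.3, random_state=42)
data_va, data_te, Y_va, Y_te = train_test_split(data_hold, Y_hold, test_size=0.5, random_state=42)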
End of explanation
X_mean = data_09.mean(axis=0)
X_std = data_09.std(axis=0)
X_std[X_std==0]=1.0 # if there are any 0 values in X_std set them to 1
order = [1]
X_train = make_multinomial_features(data_train_09, order=order)
X_train[:,1:] = apply_normalizer(X_train[:,1:],X_mean,X_std)
Y_train = Y_train_09
X_val = make_multinomial_features(data_val_09, order=order)
X_val[:,1:] = apply_normalizer(X_val[:,1:],X_mean,X_std)
Y_val = Y_val_09
X_test = make_multinomial_features(data_test_09, order=order)
X_test[:,1:] = apply_normalizer(X_test[:,1:],X_mean,X_std)
Y_test = Y_test_09
Explanation: <hr />
Mean normalize the data sets
Each of the data sets is normalized using the mean and standard deviation from the whole 42000 element data set. The make_multinomial_features function is used here simply to add the column of 1's to the data for the bias term.
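A small caveat worth flagging (an observation only, not a change to the post): because X_mean and X_std come from all 42000 rows, the scaling "sees" the validation and test rows. A common alternative is to fit the normalizer on the training rows only and reuse it, a sketch using the helpers defined above (the *_tr and *_alt names are illustrative):
# Sketch: normalizer fitted on the training split only, then applied everywhere.
X_mean_tr = data_train_09.mean(axis=0)
X_std_tr = data_train_09.std(axis=0)
X_std_tr[X_std_tr == 0] = 1.0  # guard against constant columns

X_train_alt = make_multinomial_features(data_train_09, order=[1])
X_train_alt[:, 1:] = apply_normalizer(X_train_alt[:, 1:], X_mean_tr, X_std_tr)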
End of explanation
reg =300.0 # Regularization term
np.random.seed(42)
aguess = np.random.randn(X_train.shape[1]) # A random guess for the parameters
A_opt = np.zeros((X_train.shape[1],10)) # The matrix of optimized parameters
Res=[] # List to hold the full optimization output of each model
for i in range(10):
print('\nFitting {} against the rest\n'.format(i))
def opt_J_reg(a) :
        return J_reg(X_train,a,Y_train[:,i],reg)
def opt_gradJ_reg(a) :
return gradJ_reg(X_train,a,Y_train[:,i],reg)
res = minimize(opt_J_reg, aguess, method='CG', jac=opt_gradJ_reg, tol=1e-6, options={'disp': True})
Res.append(res)
A_opt[:,i] = res.x
Explanation: <hr />
Find optimal parameters for the 10 models
This is the main training loop. All 10 models are optimized using the columns of $Y$ and the training data set. Each model is fit to its number (0-9) by evaluating its cost function against all of the other numbers, "the rest".
You can see that some of the models required many more iterations before convergence. There was also some numerical overflow present. I'm not too concerned about this since it is an artifact of the optimization run. The models converged OK and gave a reasonably good set of parameters for each of the 10 models. It is possible to work on each model separately to try to get better fits and the regularization term could be adjusted per model. I did play with the optimization somewhat but won't worry about it too much since in the next post I'll be doing an implementation of "Stochastic Gradient Descent" and will likely use this data again as an example.
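One likely source of the overflow warnings is np.exp(-z) in the sigmoid for large |z|. A hedged workaround (a sketch, not what was used for the results in this post) is to clip the argument before exponentiating:
def g_safe(z):  # numerically safer sigmoid
    z = np.clip(z, -30, 30)   # keeps exp() and the later log(h) in a representable range
    return 1.0 / (1.0 + np.exp(-z))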
End of explanation
num=8
a_opt = A_opt[:,num]
h_prob = h_logistic(X_val,a_opt)
h_predict = to_0_1(h_prob)
print_model_quality('Validation-data fit', h_predict, Y_val[:,num])
Explanation: <hr />
You can look at the fit quality of each model. The model for the digit 8 has the worst final value for the cost function and it looks like it had many false negatives. I am using the Validation data set to check the quality of fit.
End of explanation
num=0
a_opt = A_opt[:,num]
h_prob = h_logistic(X_val,a_opt)
h_predict = to_0_1(h_prob)
print_model_quality('Validation-data fit', h_predict, Y_val[:,num])
Explanation: <hr />
The fit for the "0" model has a low cost function and the quality of fit looks much better than that for "8".
End of explanation
def predict(sample, sample_label):
print('\nTest sample is : {}\n'.format(sample_label))
probs = np.zeros((10,2))
for num in range(10):
a_opt = A_opt[:,num]
probs[num,0] = num
probs[num,1] = h_logistic(sample,a_opt)
probs = probs[probs[:,1].argsort()[::-1]] # put the best guess at the top
print('Model prediction probabilites\n')
for i in range(10):
print( "{} with probability = {:.3f}".format(int(probs[i,0]), probs[i,1]) )
Explanation: <hr />
Use the model to make predictions for untested number images
The following function will return the probabilities predicted by each of the models for some given input image. The probabilities are sorted with the most likely being listed first.
End of explanation
samp = 23
samp_label = y_labels_test[samp]
sample = X_test[samp,:]
predict(sample, samp_label)
samp = 147
samp_label = y_labels_test[samp]
sample = X_test[samp,:]
predict(sample,samp_label)
samp = 6200
samp_label = y_labels_test[samp]
sample = X_test[samp,:]
predict(sample,samp_label)
Explanation: <hr />
Following are a few random images picked from the test set.
The first image is of an "8". You can see that the model did not give a very high probability for "8" but it was higher than any of the other probabilities so it did give the correct answer!
End of explanation
def print_num_correct(datasets):
for dataset in datasets :
set_name, yl, Xd = dataset
yls = yl.size
probs = np.zeros((10,2))
count = 0
for samp in range(yls):
for num in range(10):
a_opt = A_opt[:,num]
probs[num,0] = num
probs[num,1] = h_logistic(Xd[samp,:],a_opt)
probs = probs[probs[:,1].argsort()[::-1]]
if probs[0,0] == yl[samp] :
count +=1
print('\n{}'.format(set_name))
print("{} correct out of {} : {}% correct".format(count, yls, count/yls * 100))
datasets = [('Training Set:', y_labels_train, X_train),('Validation Set:',y_labels_val, X_val)]
print_num_correct(datasets)
Explanation: <hr />
Checking how well the model did for each of the datasets
The next function will give a printout of the percentage of correct predictions in a dataset. We first look at the training and validation sets.
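(A side note: the nested per-sample loop above is easy to follow but slow. The same accuracy numbers can be computed in one shot with the arrays already defined — a sketch, with an illustrative function name:)
def fraction_correct(X, true_labels):
    probs = h_logistic(X, A_opt)            # shape (m, 10): one column per digit model
    predictions = np.argmax(probs, axis=1)  # most probable class for every sample at once
    return np.mean(predictions == true_labels) * 100

print(fraction_correct(X_val, y_labels_val))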
End of explanation
print_num_correct([('Test Set:', y_labels_test, X_test)])
Explanation: ... and now the test set.
End of explanation |
5,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3/10/17
Trying to get a critical infinite serpent simulation, e.g. $k_{\infty}$ = 1
U235 = .418%
U238 = .8625%
k = 1.07238
msr2g_enrU
2/10/17
Serpent run yielded k_eff of 1.03
msr2g_part_U_single_cell
2/10/17
Serpent run yielded k$_{eff}$ of 1.44
msr2g_part_U_full_core
2/13/17
Serpent run yielded k$_{eff}$ of .79216 with
- D ~ 55 cm
- H = 162.56 cm
- vacuum BCs
Dimensions based on buckling size given on pg. 39, MSRE Design and Operations, part iii, nuclear analysis
k$_{eff}$ = 1.07
D = 74.93 cm
H = 198.12 cm
vacuum BCs
graphite as special moderator material
k$_{eff}$ = 1.0545
D = 74.93 cm
H = 198.12 cm
vacuum BCs
graphite as regular doppler broadened material
)
1144 K fuel
k$_{eff}$ = 1.04149
Fuel temperature coefficient of reactivity = -3.08e-5 [$\frac{\delta k}{k} / ^{\circ}F$]
Step1: 1144 K fuel and graphite
k_eff = 1.02315
Step2: Summary
Serpent Fuel C calculation
$\alpha_f = -3.08\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_g = -4.34\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_{tot} = -7.43\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
MSRE report
$\alpha_f = -3.28\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_g = -3.68\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_{tot} = -6.96\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
% error
$\alpha_f \rightarrow 5.96%$
$\alpha_g \rightarrow 18.15%$
$\alpha_{tot} \rightarrow 6.79%$
Step3: msre_homogeneous
2/13/17
Serpent run yielded k$_{eff}$ of 1.03
General Notes
2/13/17
pg. 39, MSRE Design and Operations, part iii, nuclear analysis
Step4: 2/27/17
My two benchmarks in trying to understand serpent buckling are msre_homogeneous_critical_question_mark and msre_homog_rad_57_b1. These should both have the exact same material definitions and reactor height and differ only in the radius of the reactor. This is in fact the case, with the former having a radius of 73.86 cm and the latter having a radius of 57.1544 cm. What did this mean for respective $k_{inf}$? Intuitively we would hope for the change to be relatively small, since $\sigma(\vec{r},E)$ is unchanged between the two simulations. However, we could get changes because of the change in the reactor flux. So
Step5: Above value perfectly matches one-group nubar
Step6: Above matches perfectly with Serpent output B1_BUCKLING. So how does this compare with the original calculation where the material buckling was actually higher and yet the reactor was still sub-critical??
Step7: So to move the reactor to a state of criticality (from a sub-critical state), the medium was made less absorptive, less fissile, and more diffusive. More particularly, $k_{\infty}$ was increased and the medium was made more diffusive. Ok, that seems like it could work.
See sage_scratch right before the beginning of 2/24/17 notes for some notes on buckling results. I believe that the quite large discrepancy between predicted critical buckling in Serpent and the actual buckling values is entirely due to large discrepancies in diffusion coefficient values. Let's consider the results from msre_homog_rad_57_b1_res.m. We have
Step8: All the above results are very close to the b1 calculation results for the 57 cm reactor. So my question | Python Code:
k_nom = 1.0545
k_f_1144 = 1.04149
fuel_reactivity = (k_f_1144 - k_nom) / k_nom / 400
print(fuel_reactivity)
Explanation: 3/10/17
Trying to get a critical infinite serpent simulation, e.g. $k_{\infty}$ = 1
U235 = .418%
U238 = .8625%
k = 1.07238
msr2g_enrU
2/10/17
Serpent run yielded k_eff of 1.03
msr2g_part_U_single_cell
2/10/17
Serpent run yielded k$_{eff}$ of 1.44
msr2g_part_U_full_core
2/13/17
Serpent run yielded k$_{eff}$ of .79216 with
- D ~ 55 cm
- H = 162.56 cm
- vacuum BCs
Dimensions based on buckling size given on pg. 39, MSRE Design and Operations, part iii, nuclear analysis
k$_{eff}$ = 1.07
D = 74.93 cm
H = 198.12 cm
vacuum BCs
graphite as special moderator material
k$_{eff}$ = 1.0545
D = 74.93 cm
H = 198.12 cm
vacuum BCs
graphite as regular doppler broadened material
)
1144 K fuel
k$_{eff}$ = 1.04149
Fuel temperature coefficient of reactivity = -3.08e-5 [$\frac{\delta k}{k} / ^{\circ}F$]
End of explanation
k_f_g_1144 = 1.02315
total_reactivity = (k_f_g_1144 - k_nom) / k_nom / 400
print(total_reactivity)
graph_reactivity = total_reactivity - fuel_reactivity
print(graph_reactivity)
from numpy import abs
print(abs(total_reactivity + 6.96e-5) / 6.96e-5 * 100)
print(abs(fuel_reactivity + 3.28e-5) / 3.28e-5 * 100)
print(abs(graph_reactivity + 3.68e-5) / 3.68e-5 * 100)
Explanation: 1144 K fuel and graphite
k_eff = 1.02315
End of explanation
from numpy import exp
alpha_fuel_faeh = 1.18e-4
alpha_fuel_kel = 1.8 * alpha_fuel_faeh
alpha_graph_faeh = 1.0e-5
alpha_graph_kel = 1.8 * alpha_graph_faeh
rho0_fuel = 2.146
rho0_graph = 1.86
T0 = 922
rho1144_fuel = rho0_fuel * exp(-alpha_fuel_kel * (1144 - T0))
rho1144_graph = rho0_graph * exp(-alpha_graph_kel * (1144 - T0))
print(rho1144_fuel)
print(rho1144_graph)
Explanation: Summary
Serpent Fuel C calculation
$\alpha_f = -3.08\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_g = -4.34\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_{tot} = -7.43\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
MSRE report
$\alpha_f = -3.28\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_g = -3.68\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
$\alpha_{tot} = -6.96\cdot 10^{-5} [\frac{\delta k}{k} / ^{\circ}F]$
% error
$\alpha_f \rightarrow 5.96%$
$\alpha_g \rightarrow 18.15%$
$\alpha_{tot} \rightarrow 6.79%$
End of explanation
print(100 * .00272 / 1.02316)
Explanation: msre_homogeneous
2/13/17
Serpent run yielded k$_{eff}$ of 1.03
General Notes
2/13/17
pg. 39, MSRE Design and Operations, part iii, nuclear analysis: For one-region model of reactivity effects of temperature, geometric buckling used was that of a cylinder, 59 in. in diameter by 78 in. high. Three conditions:
salt and graphite at 1200 F
salt at 1600 F, graphite at 1200 F
salt and graphite at 1600 F
Temperature Reactivity coefficients both negative for fuel and graphite
Most definite reactor geometry and material descriptions: pages 18 & 19 of MSRE Design and Op., part iii
On those pages is a 2D R-Z 20-region model; within each region the material composition is considered to be uniform
Critical concentrations of Uranium were calculated for three different salt compositions
End of explanation
tot_gen_rate = 1.02331
tot_fiss_rate = 4.2e-1
tot_gen_rate / tot_fiss_rate
Explanation: 2/27/17
My two benchmarks in trying to understand serpent buckling are msre_homogeneous_critical_question_mark and msre_homog_rad_57_b1. These should both have the exact same material definitions and reactor height and differ only in the radius of the reactor. This is in fact the case, with the former having a radius of 73.86 cm and the latter having a radius of 57.1544 cm. What did this mean for respective $k_{inf}$? Intuitively we would hope for the change to be relatively small, since $\sigma(\vec{r},E)$ is unchanged between the two simulations. However, we could get changes because of the change in the reactor flux. So:
msre_homogeneous_critical_question_mark: abs_gc_kinf = 1.54774
msre_homog_rad_57_b1: abs_gc_kinf = 1.51718
I would contend that this is a small enough change to be satisfactory. Moreover, as calculated in the sage_scratch notebook, the critical material buckling changes by even less:
msre_homogeneous_critical_question_mark: 1.32e-3
msre_homog_rad_57_b1: 1.31e-3
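As a quick cross-check (a sketch; the 198.12 cm height is assumed from the dimensions quoted earlier in these notes), the bare-cylinder geometric buckling $B_g^2 = (2.405/R)^2 + (\pi/H)^2$, ignoring the extrapolation distance, lands close to these values for the 73.86 cm radius core:
from math import pi

def cylinder_geometric_buckling(radius_cm, height_cm):
    # Bare homogeneous cylinder, no extrapolation distance
    return (2.405 / radius_cm) ** 2 + (pi / height_cm) ** 2

print(cylinder_geometric_buckling(73.86, 198.12))    # ~1.31e-3 1/cm^2
print(cylinder_geometric_buckling(57.1544, 198.12))  # the 57 cm benchmark core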
Below generation and fission rates taken from msre_homogeneous_critical_question_mark_res.m:
End of explanation
b1_remxs = 2.37419e-3
b1_nsf = 3.66701e-3
b1_diff = 9.85681e-1
b1_mat_buckl = (b1_nsf - b1_remxs) / b1_diff
b1_k_inf = b1_nsf / b1_remxs
print(b1_mat_buckl)
print(b1_k_inf)
Explanation: Above value perfectly matches one-group nubar: 2.43645. It's obvious that in Serpent, the total loss rate is normalized to one, and consequently the generation rate is exactly equal to $k_{eff}$.
Relative loss mechanisms differ between the two different sizes as one would hope. Leakage more important in smaller reactor (oh really?):
57 cm:
TOT_ABSRATE (idx, [1: 2]) = [ 5.40995E-01 0.00021 ];
TOT_LEAKRATE (idx, [1: 2]) = [ 4.59005E-01 0.00025 ];
74 cm:
TOT_ABSRATE (idx, [1: 2]) = [ 6.61174E-01 0.00017 ];
TOT_LEAKRATE (idx, [1: 2]) = [ 3.38826E-01 0.00033 ];
I don't even think that the idea of a critical buckling "search" makes any bloody sense. That implies to me that you are going to change the materials! To me the search should consist of setting $k_{eff}$ equal to one, and then calculating the appropriate geometric buckling. That's what makes sense to me. I just don't think that the buckling search works in Serpent. I decreased the geometric buckling, attempting to reach a value of 1.31e-3, starting from 3.52e-3 for the 57 cm radius reactor but didn't get there...I only got down to 1.90e-3 with the reactor radius of 74 cm. However, that was plenty far enough to go below the material buckling resulting in a super-critical reactor. That's why I don't think the buckling method in Serpent works.
Ok, we're at assembly level (or some other arbitrary level). Typically, even at assembly level we use reflective boundary conditions at first to isolate the assembly from the rest of the reactor core. Treating the current level this way yields a $k_{inf}$ of sorts and a spectrum that we can call an infinite medium spectrum because of the reflective conditions. However, we want to determine a criticality spectrum. A critical system can be realized in a few ways. One way would be to add some amount of absorber (if $k_{inf}>1$; negative absorber if the system is sub-critical). However, a more physically realistic way is to add leakage; e.g. if the reactor core is critical but one piece's $k_{inf}$ is subcritical, then there will be net leakage into that component to balance.
Two group discretization results in two eigenvalues and their associated eigenvectors (e.g. two eigenvectors total) as long as the matrix has full rank.
It might be critical to understand that in general reactor theory language when someone talks about the neutron spectrum, s/he is talking about the neutron energy distribution and may very well not be referring at all to the spatial distribution of neutrons. Spectrum -> energy. And so an infinite spectrum may refer only to the neutron energy distribution produced by an infinite medium, and consequently the critical spectrum may only refer to the change in the neutron energy distribution from the infinite spectrum. E.g. we construct a relation between the leakage and infinite spectra (Stammler pg. 356):
\begin{equation}
r_L = r_{\infty} \frac{1+x}{1 + xk_{\infty}}
\end{equation}
where $r = \frac{\phi_1}{\phi_2}$, $r_{\infty} = \frac{\Sigma_{12}}{\Sigma_{a2}}$ and $x = L^2/\tau$ where:
\begin{equation}
L^2 = \frac{D_2}{\Sigma_{a2}}\
\tau = \frac{D_1}{\Sigma_{12} + \Sigma_{a1}}
\end{equation}
Note that the above equation for $r_L$ is valid only when the overall reactor is critical and only thermal neutrons cause fission events. A more general equation for the spectrum index is:
\begin{equation}
r_i = \frac{r_{\infty}}{1+L^2 B_i^2}
\end{equation}
where i equals either 1 or 2 for the two group case and the equations for $B_1^2$ and $B_2^2$ can be found on page 355 of Stammler. Spectrum eigenvalue problem corresponds to material buckling (largest of which is $B_1^2$ and is called the fundamental or asymptotic mode; for two group problems $B_2^2$ is referred to as the transient mode; presumably for a G group problem, anything other than $B_1^2$ would be referred to as a transient mode.) Spatial eigenvalue problem corresponds to geometric buckling. Here's $B_1^2$:
\begin{equation}
B_1^2 = \frac{\lambda k_{\infty} - 1}{\tau + \frac{\tau}{\tau*}L^2}
\end{equation}
With only thermal fission, $\tau* = \tau$. Note that $B_1^2$ is a function only of the material data and the multiplication factor of the reactor. In general, the corresponding Helmholtz equation for the spatial shape can be solved with appropriate boundary conditions determined by continuity of flux and current (assuming we are currently only examining one part of the reactor). However, if we require that the flux in all groups must vanish at the boundaries of a bare homogeneous system, then the Helmholtz equation for the spatial shape becomes an eigenvalue problem.
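To keep the symbols straight, here is a calculator-style sketch of these two-group relations (the group constants below are made-up placeholders, not Serpent output):
# Placeholder two-group constants, purely illustrative
D1, D2 = 1.2, 0.9              # fast / thermal diffusion coefficients [cm]
sig_a1, sig_a2 = 0.002, 0.06   # absorption [1/cm]
sig_12 = 0.025                 # fast-to-thermal scattering [1/cm]
k_inf, lam = 1.05, 1.0         # infinite-medium k and the lambda in the expression above

r_inf = sig_12 / sig_a2        # infinite-medium spectrum index
L2 = D2 / sig_a2               # thermal diffusion area
tau = D1 / (sig_12 + sig_a1)   # fast "age"

B1_sq = (lam * k_inf - 1.0) / (tau + L2)   # fundamental mode, with tau* = tau
r_1 = r_inf / (1.0 + L2 * B1_sq)           # leakage-corrected spectrum index
print(B1_sq, r_inf, r_1)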
It should be noted from a math standpoint that eigenvalue/eigenfunction problems can only arise in differential equations when boundary conditions are homogeneous.
What I want to figure out tomorrow is exactly how the calculation of few group cross sections is implemented in serpent; e.g. look at the actual code calculation routines.
3/1/17
Homogenized cross section of reaction $i$:
\begin{equation}
i = j + 1
\end{equation}
\begin{equation}
\Sigma_{i,g} = \frac{\int_V \int_{E_g}^{E_{g-1}} \Sigma_i(\vec{r}, E)\ \phi(\vec{r},E)\ d^3r\ dE}{\int_V \int_{E_g}^{E_{g-1}} \phi(\vec{r},E)\ d^3r\ dE} \quad eq. 17.3.1.1
\end{equation}
Deterministic transport version:
\begin{equation}
\Sigma_{i,g} = \frac{ \sum_j \left( V_j \sum_h \left( \Sigma_{i,j,h} \Phi_{j,h} \right) \right) }
{ \sum_j \left( V_j \sum_h \Phi_{j,h} \right) } \quad eq. 17.3.1.2
\end{equation}
In a cell with reflective boundary conditions and homogeneous material composition, the flux has no spatial dependence and is thus constant with respect to the spatial coordinates. However, it will not be constant with respect to a continuous energy variable. In general the determinstic transport calculation proceeds like this, similar to that outlined in Figure I.3 in Stammler (pg. 42):
Within homogeneous material regions, micro-group cross sections are calculated via Monte Carlo methods (which use coninuous energy cross sections). Double check that these micro-group cross sections are indeed calculated with Monte Carlo methods. Within each homogeneous region: $\Sigma_i(\vec{r},E) \rightarrow \Sigma_i(E)$.
Armed with micro-group cross sections in each region, the micro-group fluxes in a unit/pin-cell are calculated from some analytical or numerical transport method assuming reflective boundary conditions at the outer unit/pin-cell boundary (e.g. imagine the calculations perfomed in the 501 neutron project but with less approximations, e.g. we're definitely not doing diffusion). If a unit/pin-cell only has one material, then the micro-group flux solutions will display no spatial dependence.
Or put another way:
This step needs to be studied more. Monte Carlo calculation yields continuous energy neutron flux (restricted to what region?). This continuous energy flux is used to collapse continuous energy cross sections in homogeneous material regions to micro-group cross sections using continuos energy neutron flux
Unit/pin-cell calculation yields micro-group neutron fluxes within the unit/pin-cell, using reflective boundary conditions to completely isolate the unit/pin-cell from the outside world. If the unit/pin-cell is heterogeneous (e.g. multiple material regions), then the micro-group fluxes will display spatial dependence. If so, then a discretized form of equation 17.3.1.1 (using only a spatial filter) can be used to convert homogeneous micro-group cross sections into cell-averaged micro-group cross sections. Similarly, spatially-varying pin-cell fluxes can be homogenized by simple cell averaging (e.g. integrate cell flux over cell volume and then divide by cell volume) to create space indepenent cell average fluxes. Micro-group cell-averaged fluxes are then used to collapse cell-averaged micro-group cross sections into few-group, assembly-averaged cross sections using equation 17.3.1.2. (May also do an intermediate calculation with B2 groups).
Another very useful diagram is Stammler X.4. It's perhaps the best one I've seen
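To make equation 17.3.1.2 above concrete, a tiny sketch of the flux-volume weighting over two regions and two micro-groups (every number here is a placeholder, not output from anything in these notes):
import numpy as np

V = np.array([1.0, 2.0])                  # region volumes V_j
phi = np.array([[3.0, 1.0],               # micro-group fluxes Phi_{j,h}; rows are regions
                [2.5, 0.8]])
sigma = np.array([[0.010, 0.120],         # micro-group cross sections Sigma_{i,j,h}
                  [0.012, 0.100]])

sigma_collapsed = np.sum(V[:, None] * sigma * phi) / np.sum(V[:, None] * phi)
print(sigma_collapsed)   # one homogenized, flux-volume-weighted value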
3/7/17
Understanding Serpent output:
DIFFAREA = RES_FG_L2
MAT_BUCKLING = RES_MAT_BUCKL
ABS_GC_KINF = RES_ABS_GC_KINF
Calculation of RES_MAT_BUCKL (where is the index looping over the few groups):
c
if ((div = BufVal(RES_FG_L2, i)) > 0.0)
{
val = (BufVal(RES_ABS_GC_KINF, i) - 1.0)/div;
AddStat(RES_MAT_BUCKL, i, val);
Note that just because BufVal takes an index doesn't mean that the first argument is a true array, e.g. RES_ABS_GC_KINF only has one value (e.g. one mean and one measure of variance).
Ok, here is how RES_ABS_GC_KINF is calculated:
c
if ((loss = abs - n2n) > 0.0)
{
keff = src/loss;
AddStat(RES_ABS_GC_KINF, i, keff);
}
End of explanation
remxs = 2.21079e-3
nsf = 3.35432e-3
diff = 5.31584e-1
mat_buckl = (nsf - remxs) / diff
k_inf = nsf / remxs
print(mat_buckl)
print(k_inf)
Explanation: Above matches perfectly with Serpent output B1_BUCKLING. So how does this compare with the original calculation where the material buckling was actually higher and yet the reactor was still sub-critical??
End of explanation
b1_73_remxs = 2.38026e-3
b1_73_nsf = 3.67837e-3
b1_73_diff = 9.85825e-1
mat_buckl_73 = (b1_73_nsf - b1_73_remxs) / b1_73_diff
k_inf_73 = b1_73_nsf / b1_73_remxs
print(mat_buckl_73)
print(k_inf_73)
Explanation: So to move the reactor to a state of criticality (from a sub-critical state), the medium was made less absorptive, less fissile, and more diffusive. More particularly, $k_{\infty}$ was increased and the medium was made more diffusive. Ok, that seems like it could work.
See sage_scratch right before the beginning of 2/24/17 notes for some notes on buckling results. I believe that the quite large discrepancy between predicted critical buckling in Serpent and the actual buckling values is entirely due to large discrepancies in diffusion coefficient values. Let's consider the results from msre_homog_rad_57_b1_res.m. We have:
DIFFCOEFF = 5.31584E-1
LEAK_DIFFCOEFF = 8.72571E-1
P1_DIFFCOEFF = 9.37823E-1
B1_DIFFCOEFF = 9.85681E-1
If we create a cylindrical reactor that from diffusion theory should have a geometric buckling equal to the predicted critical buckling predicted in the 57 cm radius Serpent simulation, then we produce this set of diffusion coefficients with Serpent:
DIFFCOEFF = 6.46163E-1
LEAK_DIFFCOEFF = 6.04551E-1
P1_DIFFCOEFF = 9.30344E-1
B1_DIFFCOEFF = 9.85825E-1
Here's also the important b1 results for the 73 cm radius (supposed to be critical but isn't) simulation:
End of explanation
40 * 20000
(381.99 + 160.60) / 4
381.99 + 160.60
Explanation: All the above results are very close to the b1 calculation results for the 57 cm reactor. So my question: how is the b1 diffusion coefficient calculated and how does it differ from calculation of DIFFCOEFF?
Serpent 1 and 2 results for the same input file are pretty much identical, which I suppose should be expected. However, the Serpent 2 result file does contain a lot more information.
3/9/17
Analog estimator: score the physical interactions (fission, capture, etc.) or event sequences that are simulated during the calculation. Analog estimates are directly related to the simulation process.
Note: in general when the chain of equalities ends, that generally means I've reached a point where all arguments are generated via Monte Carlo analog or implicit estimates, e.g. none of the arguments are derived quantities, e.g. if I search in collectresults.c, their first hit will be on the RHS of an equality sign.
Default calculation of $D$:
\begin{equation}
D_g = \frac{\bar{r_g^2}}{6}\ \Sigma_{r,g}
\end{equation}
where $\bar{r_g^2}$ is the mean value of all scored square distances that neutrons have made within energy group $g$, which is an analog estimator. The starting points are the locations where neutrons have entered the group by fission or scattering from another (higher) energy group. The end points are the locations at which the neutrons are lost from the group either by absorption or scattering to another (lower) energy group.
Leakage calculation of $D$:
\begin{equation}
D_g = \frac{leak}{B_m^2}\
B_m^2 = \frac{k_{\infty} - 1}{L_g^2}\
L_g^2 = \frac{\bar{r_g^2}}{6}\
k_{\infty} = \frac{src}{loss} = \frac{\Sigma_{g,f}\bar{\nu}}{\Sigma_{g,f} + \Sigma_{g,c} - \Sigma_{n\rightarrow n}}
\end{equation}
where $leak$ comes directly from Monte Carlo scoring of neutrons exiting the simulation domain I believe.
P1 calculation of $D$:
\begin{equation}
D_g = \frac{1}{3\Sigma_{g,tr}}\
\Sigma_{g,tr} = \Sigma_{g,t} - \Sigma_{g,s1}
\end{equation}
B1 calculation of $D$ (note that this occurs in b1solver.c which is different from all the other calculations of $D$ that occur in collectresults.c; moreover the below equation comes from the Fridman and Lepannen PHYSOR paper as opposed to directly from the source code):
\begin{equation}
D_g = \frac{J_g}{|B|\phi_g}
\end{equation}
where $J_g$ is the group current and $|B|$ is the magnitude of the calculated critical buckling.
$\Sigma_r$
$\bar{\nu}\Sigma_f$
$D$
Ok, B1 calculations just don't make sense in multiple regions in Serpent. Serpent performs criticality searches in both regions independently. So that's out!
Scratch
End of explanation |
5,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
My note
Step1: Test OpenCV
Step2: Test TensorFlow
Step3: Test Moviepy
Step4: Troubleshooting ffmpeg
NOTE
Step6: Create a new video with moviepy by processing each frame to YUV color space. | Python Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
%matplotlib inline
img = mpimg.imread('test.jpg')
plt.imshow(img)
Explanation: My note:
1. Install Anaconda
2. Setup the carnd-term1 environment following the instructions in the Starter Kit.
3. Run the test.ipynb in the carnd-term1 kernel
4. Troubleshoot with ffmpeg
Run all the cells below to make sure everything is working and ready to go. All cells should run without error.
Test Matplotlib and Plotting
End of explanation
import cv2
# convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='Greys_r')
Explanation: Test OpenCV
End of explanation
import tensorflow as tf
with tf.Session() as sess:
a = tf.constant(1)
b = tf.constant(2)
c = a + b
# Should be 3
print("1 + 2 = {}".format(sess.run(c)))
Explanation: Test TensorFlow
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
Explanation: Test Moviepy
End of explanation
import imageio
imageio.plugins.ffmpeg.download()
Explanation: Troubleshooting ffmpeg
NOTE: If you don't have ffmpeg installed on your computer you'll have to install it for moviepy to work. If this is the case you'll be prompted by an error in the notebook. You can easily install ffmpeg by running the following in a code cell in the notebook.
import imageio
imageio.plugins.ffmpeg.download()
End of explanation
new_clip_output = 'test_output.mp4'
test_clip = VideoFileClip("test.mp4")
new_clip = test_clip.fl_image(lambda x: cv2.cvtColor(x, cv2.COLOR_RGB2YUV)) #NOTE: this function expects color images!!
%time new_clip.write_videofile(new_clip_output, audio=False)
HTML("""
<video width="640" height="300" controls>
<source src="{0}" type="video/mp4">
</video>
""".format(new_clip_output))
Explanation: Create a new video with moviepy by processing each frame to YUV color space.
End of explanation |
5,768 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Create Dummy Variables with Pandas
| Python Code::
import pandas as pd
X = pd.get_dummies(X, columns=['neighbourhood_group','room_type'], drop_first=True)
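# Tiny illustration of the behaviour (toy values, assumed for the example only):
example = pd.DataFrame({'room_type': ['Private room', 'Entire home/apt', 'Private room']})
print(pd.get_dummies(example, columns=['room_type'], drop_first=True))
# drop_first=True drops one category per encoded column, so the dummy columns are not collinear.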
|
5,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/logo.jpg" style="display
Step1: <p style="text-align
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <p style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step7: <p style="text-align
Step8: <p style="text-align
Step9: <p style="text-align
Step10: <p style="text-align
Step11: <p style="text-align
Step12: <span style="text-align
Step13: <p style="text-align
Step14: <p style="text-align
Step15: <span style="text-align
Step16: <div class="align-center" style="display
Step17: <p style="text-align
Step18: <p style="text-align
Step19: <div class="align-center" style="display
Step20: <p style="text-align
Step21: <p style="text-align
Step22: <p style="text-align
Step23: <p style="text-align | Python Code:
def silly_generator():
a = 1
yield a
b = a + 1
yield b
c = [1, 2, 3]
yield c
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.">
<span style="text-align: right; direction: rtl; float: right;">Generators</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הקדמה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הפונקציות שיצרנו עד כה נבנו כך שיחזירו ערך אחד בכל קריאה.<br>
הערך הזה יכול היה להיות מכל טיפוס שהוא: בוליאני, מחרוזת, tuple וכדומה.<br>
אם נרצה להחזיר כמה ערכים יחד, תמיד נוכל להחזיר אותם כרשימה או כ־tuple.<br>
אבל מה קורה כשאנחנו רוצים להחזיר סדרת ערכים גדולה מאוד או אפילו אין־סופית?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
למשל:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>הכתובות של כל הדפים הקיימים באינטרנט.</li>
<li>מילות כל השירים שראו אור מאז שנת 1400 לספירה.</li>
<li>כל המספרים השלמים הגדולים מ־0.</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בכלים שיש בידינו כרגע, נמצא שיש בעיה ליצור רשימות כאלו.<br>
עבור רשימות גדולות מאוד – לא יהיה למחשב די זיכרון ולבסוף הוא ייכשל בשמירת ערכים חדשים.<br>
ומה בנוגע לרשימות אין־סופיות? הן... ובכן... אין־סופיות, ולכן מלכתחילה לא נוכל ליצור אותן.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הגדרה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
פתרון שמעניין לחשוב עליו הוא "פונקציה עצלנית".<br>
אם אנחנו בשום שלב לא יכולים להחזיק בזיכרון המחשב את כל האיברים (כי יש יותר מדי מהם, או כי זו סדרה אין־סופית),<br>
אולי נוכל לשלוח תחילה את הערך הראשון – ואת הערכים שבאים אחריו נשלח רק כשיבקשו אותם מאיתנו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
פונקציה עצלנית שכזו נקראת <dfn>generator</dfn>, ובכל פעם שנבקש ממנה, היא תחזיר לנו איבר יחיד מתוך סדרת ערכים.<br>
תחילה – היא תחזיר רק את הערך הראשון, בלי לחשב את שאר האיברים. אחר כך, באותו אופן, רק את השני, אחר כך רק את השלישי וכן הלאה.<br>
ההבדל העיקרי בין generator לבין פונקציה רגילה, הוא שב־generator נבחר להחזיר את הערכים אחד־אחד ולא תחת מבנה מאוגד.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נסכם: generator היא פונקציה שיוצרת עבורנו בכל פעם ערך אחד, מחזירה אותו, ומחכה עד שנבקש את האיבר הבא.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">שימוש</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">יצירת generator בסיסי</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נתחיל בהגדרת generator מטופש למדי:
</p>
End of explanation
print(silly_generator())
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מעניין! זה נראה ממש כמו פונקציה. נקרא למבנה הזה שיצרנו "<dfn>פונקציית ה־generator</dfn>".<br>
אבל מהו ה־<code>yield</code> המוזר הזה שנמצא שם?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לפני שנתהה על קנקנו, בואו ננסה לקרוא לפונקציה ונראה מה היא מחזירה:
</p>
End of explanation
our_generator = silly_generator()
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אומנם למדנו שלא אומרים איכס על פונקציות, אבל מה קורה פה?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בניגוד לפונקציות רגילות, קריאה ל־generator לא מחזירה ערך מייד.<br>
במקום ערך היא מחזירה מעין סמן, כמו בקובץ, שאפשר לדמיין כחץ שמצביע על השורה הראשונה של הפונקציה.<br>
נשמור את הסמן על משתנה:
</p>
End of explanation
next_value = next(our_generator)
print(next_value)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בעקבות הקריאה ל־<code>silly_generator</code> נוצר לנו סמן שמצביע כרגע על השורה <code>a = 1</code>.<br>
המינוח המקצועי לסמן הזה הוא <dfn>generator iterator</dfn>.
</p>
<img src="images/silly_generator1.png?v=1" width="300px" style="display: block; margin-left: auto; margin-right: auto;" alt="תוכן הפונקציה silly_generator, כאשר חץ מצביע לשורה הראשונה שלה – a = 1">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אחרי שהרצנו את השורה <code dir="ltr">our_generator = silly_generator()</code>, הסמן המדובר נשמר במשתנה בשם <var>our_generator</var>.<br>
זה זמן מצוין לבקש מה־generator להחזיר ערך.<br>
נעשה זאת בעזרת הפונקציה הפייתונית <code>next</code>:
</p>
End of explanation
print(next(our_generator))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כדי להבין מה התרחש נצטרך להבין שני דברים חשובים שקשורים ל־generators:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>קריאה ל־<code>next</code> היא כמו לחיצה על "נגן" (Play) – היא גורמת לסמן לרוץ עד שהוא מגיע לשורה של החזרת ערך.</li>
<li>מילת המפתח <code>yield</code> דומה למילת המפתח <code>return</code> – היא מפסיקה את ריצת הסמן, ומחזירה את הערך שמופיע אחריה.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אז היה לנו סמן שהצביע על השורה הראשונה. לחצנו Play, והוא הריץ את הקוד עד שהוא הגיע לנקודה שבה מחזירים ערך.<br>
ההבדל בין פונקציה לבין generator, הוא ש<mark>כשאנחנו מחזירים ערך בעזרת <code>yield</code> אנחנו "מקפיאים" את המצב שבו
יצאנו מהפונקציה.</mark><br>
ממש כמו ללחוץ על "Pause".<br>
כשנקרא ל־<code>next</code> בפעם הבאה – הפונקציה תמשיך לרוץ מאותו המקום שבו השארנו את הסמן, עם אותם ערכי משתנים.<br>
עכשיו הסמן מצביע על השורה <code>b = a + 1</code>, ומחכה שמישהו יקרא שוב ל־<code>next</code> כדי שהפונקציה תוכל להמשיך לרוץ:
</p>
End of explanation
print(next(our_generator))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נסכם מה קרה עד עכשיו:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>הגדרנו פונקציה בשם <var>silly_generator</var>, שאמורה להחזיר את הערכים <samp>1</samp>, <samp>2</samp> ו־<samp dir="ltr">[1, 2, 3]</samp>. קראנו לה "<em>פונקציית הגנרטור</em>".</li>
<li>בעזרת קריאה לפונקציית הגנרטור, יצרנו "סמן" (generator iterator) שנקרא <var>our_generator</var> ומצביע לשורה הראשונה בפונקציה.</li>
<li>בעזרת קריאה ל־<code>next</code> על ה־generator iterator, הרצנו את הסמן עד שה־generator החזיר ערך.</li>
<li>למדנו ש־generator־ים מחזירים ערכים בעיקר בעזרת yield – שמחזיר ערך ושומר את המצב שבו הפונקציה עצרה.</li>
<li>קראנו שוב ל־<code>next</code> על ה־generator iterator, וראינו שהוא ממשיך מהמקום שבו ה־generator הפסיק לרוץ פעם קודמת.</li>
</ol>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
תוכלו לחזות מה יקרה אם נקרא שוב ל־<code>next(our_generator)</code>?
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ננסה:
</p>
End of explanation
print(next(our_generator))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
יופי! הכל הלך כמצופה.<br>
אבל מה צופן לנו העתיד?<br>
בפעם הבאה שנבקש ערך מהפונקציה, הסמן שלנו ירוץ הלאה ולא ייתקל ב־<code>yield</code>.<br>
במקרה כזה, נקבל שגיאת <var>StopIteration</var>, שמבשרת לנו ש־<code>next</code> לא הצליח לחלץ מה־generator את הערך הבא.
</p>
End of explanation
our_generator = silly_generator()
print(next(our_generator))
print(next(our_generator))
print(next(our_generator))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מובן שאין סיבה להילחץ.<br>
במקרה הזה אפילו לא מדובר במשהו רע – פשוט כילינו את כל הערכים מה־generator iterator שלנו.<br>
פונקציית ה־generator עדיין קיימת!<br>
אפשר ליצור עוד generator iterator אם נרצה, ולקבל את כל הערכים שנמצאים בו באותה צורה:
</p>
End of explanation
our_generator = silly_generator()
for item in our_generator:
print(item)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אבל כשחושבים על זה, זה קצת מגוחך.<br>
בכל פעם שנרצה להשיג את הערך הבא נצטרך לרשום <code>next</code>?<br>
חייבת להיות דרך טובה יותר!
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">כל generator הוא גם iterable</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">for</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אז למעשה, יש יותר מדרך טובה אחת להשיג את כל הערכים שיוצאים מ־generator מסוים.<br>
כהקדמה, נניח פה עובדה שלא תשאיר אתכם אדישים: ה־generator iterator הוא... iterable! הפתעת השנה, אני יודע!<br>
אמנם אי אפשר לפנות לאיברים שלו לפי מיקום, אך בהחלט אפשר לעבור עליהם בעזרת לולאת <code>for</code>, לדוגמה:
</p>
End of explanation
for item in our_generator:
print(item)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מה מתרחש כאן?<br>
אנחנו מבקשים מלולאת ה־<code>for</code> לעבור על ה־generator iterator שלנו.<br>
ה־<code>for</code> עושה עבורנו את העבודה אוטומטית:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>הוא מבקש את האיבר הבא מה־generator iterator באמצעות <code>next</code>.</li>
<li>הוא מכניס את האיבר שהוא קיבל מה־generator ל־<var>item</var>.</li>
<li>הוא מבצע את גוף הלולאה פעם אחת עבור האיבר שנמצא ב־<var>item</var>.</li>
<li>הוא חוזר לראש הלולאה שוב, ומנסה לקבל את האיבר הבא באמצעות <code>next</code>. כך עד שייגמרו האיברים ב־generator iterator.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
שימו לב שהעובדות שלמדנו בנוגע לאותו "סמן" יבואו לידי ביטוי גם כאן.<br>
הרצה נוספת של הלולאה על אותו סמן לא תדפיס יותר איברים, כיוון שהסמן מצביע כעת על סוף פונקציית ה־generator:
</p>
End of explanation
our_generator = silly_generator()
items = list(our_generator)
print(items)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
למזלנו, לולאות <code>for</code> יודעות לטפל בעצמן בשגיאת <code>StopIteration</code>, ולכן שגיאה שכזו לא תקפוץ לנו במקרה הזה.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">המרת טיפוסים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
דרך אחרת, לדוגמה, היא לבקש להמיר את ה־generator iterator לסוג משתנה אחר שהוא גם iterable:
</p>
End of explanation
print(list(our_generator))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד שלמעלה, השתמשנו בפונקציה <code>list</code> שיודעת להמיר ערכים iterable־ים לרשימות.<br>
שימו לב שמה שלמדנו בנוגע ל"סמן" יבוא לידי ביטוי גם בהמרות:
</p>
End of explanation
def my_range(upper_limit):
numbers = []
current_number = 0
while current_number < upper_limit:
numbers.append(current_number)
current_number = current_number + 1
return numbers
for number in my_range(1000):
print(number)
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">שימושים פרקטיים</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">חיסכון בזיכרון</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נכתוב פונקציה רגילה שמקבלת מספר שלם, ומחזירה רשימה של כל המספרים השלמים מ־0 ועד אותו מספר (נשמע לכם מוכר?):
</p>
End of explanation
def my_range(upper_limit):
current_number = 0
while current_number < upper_limit:
yield current_number
current_number = current_number + 1
our_generator = my_range(1000)
for number in our_generator:
print(number)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בפונקציה הזו אנחנו יוצרים רשימת מספרים חדשה, המכילה את כל המספרים בין 0 לבין המספר שהועבר לפרמטר <var>upper_limit</var>.<br>
אך ישנה בעיה חמורה – הפעלת הפונקציה גורמת לניצול משאבים רבים!<br>
אם נכניס כארגומנט 1,000 – נצטרך להחזיק רשימה המכילה 1,000 איברים שונים, ואם נכניס מספר גדול מדי – עלול להיגמר לנו הזיכרון.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אבל איזו סיבה יש לנו להחזיק בזיכרון את רשימת כל המספרים?<br>
אם לא עולה צורך מובהק שכזה, ייתכן שעדיף להחזיק בזיכרון מספר אחד בלבד בכל פעם, ולהחזירו מייד בעזרת generator:
</p>
End of explanation
def square_numbers(numbers):
squared_numbers = []
for number in numbers:
squared_numbers.append(number ** 2)
return squared_numbers
for number in square_numbers(my_range(1000)):
print(number)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
שימו לב כמה הגרסה הזו אלגנטית יותר!<br>
בכל פעם אנחנו פשוט שולחים את ערכו של מספר אחד (<var>current_number</var>) החוצה.<br>
כשמבקשים את הערך הבא מה־generator iterator, פונקציית ה־generator חוזרת לעבוד מהנקודה שבה היא עצרה:<br>
היא מעלה את ערכו של המספר הנוכחי, בודקת אם הוא נמוך מ־<var>upper_limit</var>, ושולחת גם אותו החוצה.<br>
בשיטה הזו, <code>my_range(numbers)</code> לא מחזירה לנו רשימה של התוצאות – אלא generator iterator שמחזיר ערך אחד בכל פעם.<br>
כך אנחנו לעולם לא מחזיקים בזיכרון 1,000 מספרים בו־זמנית.
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לפניכם פונקציה שמקבלת רשימה, ומחזירה עבור כל מספר ברשימה את ערכו בריבוע.<br>
זוהי גרסה מעט בזבזנית שמשתמשת בהרבה זיכרון. תוכלו להמיר אותה להיות generator?
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
End of explanation
def find_pythagorean_triples(upper_bound=10_000):
pythagorean_triples = []
for c in range(3, upper_bound):
for b in range(2, c):
for a in range(1, b):
if a ** 2 + b **2 == c ** 2:
pythagorean_triples.append((a, b, c))
return pythagorean_triples
for triple in find_pythagorean_triples():
print(triple)
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">תשובות חלקיות</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לעיתים ניאלץ לבצע חישוב ארוך, שהשלמתו תימשך זמן רב מאוד.<br>
במקרה כזה, נוכל להשתמש ב־generator כדי לקבל חלק מהתוצאה בזמן אמת,<br>
בזמן שבפונקציה "רגילה" נצטרך להמתין עד סיום החישוב כולו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
שלשה פיתגורית, לדוגמה, היא שלישיית מספרים שלמים וחיוביים, $a$, $b$ ו־$c$, שעונים על הדרישה $a^2 + b^2 = c^2$.<br>
אם כך, כדי ששלושה מספרים שאנחנו בוחרים ייחשבו שלשה פיתגורית,<br>
הסכום של ריבוע המספר הראשון וריבוע המספר השני, אמור להיות שווה לערכו של המספר השלישי בריבוע.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אלו דוגמאות לשלשות פיתגוריות:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>$(3, 4, 5)$, כיוון ש־$9 + 16 = 25$.<br>
9 הוא 3 בריבוע, 16 הוא 4 בריבוע ו־25 הוא 5 בריבוע.
</li>
<li>$(5, 12, 13)$, כיוון ש־$25 + 144 = 169$.</li>
<li>$(8, 15, 17)$, כיוון ש־$64 + 225 = 289$.</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ננסה למצוא את כל השלשות הפיתגוריות מתחת ל־10,000 בעזרת קוד שרץ על כל השלשות האפשריות:
</p>
End of explanation
def find_pythagorean_triples(upper_bound=10_000):
for c in range(3, upper_bound):
for b in range(2, c):
for a in range(1, b):
if a ** 2 + b **2 == c ** 2:
yield a, b, c
for triple in find_pythagorean_triples():
print(triple)
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/warning.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
הרצת התא הקודם תתקע את המחברת (חישוב התוצאה יימשך זמן רב).<br>
כדי להיות מסוגלים להריץ את התאים הבאים, לחצו <samp>00</samp> לאחר הרצת התא, ובחרו <em>Restart</em>.<br>
אל דאגה – האתחול יתבצע אך ורק עבור המחברת, ולא עבור מחשב.
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יו, כמה זמן נמשכת הרצת הקוד הזה... 😴<br>
הלוואי שעד שהקוד הזה היה מסיים היינו מקבלים לפחות <em>חלק</em> מהתוצאות!<br>
נפנה ל־generator־ים לעזרה:
</p>
End of explanation
def fibonacci(max_items):
a = 1
b = 1
numbers = [1, 1]
while len(numbers) < max_items:
a, b = b, a + b # Unpacking
numbers.append(b)
return numbers
for number in fibonacci(10):
print(number)
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
איך זה קרה? קיבלנו את התשובה בתוך שבריר שנייה!<br>
ובכן, זה לא מדויק – קיבלנו חלק מהתשובות. שימו לב שהקוד ממשיך להדפיס :)<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
להזכירכם, ה־generator שולח את התוצאה החוצה מייד כשהוא מוצא שלשה אחת,<br>
וה־for מקבל מה־generator iterable כל שלשה ברגע שהיא נמצאה.<br>
ברגע שה־for מקבל שלשה, הוא מבצע את גוף הלולאה עבור אותה שלשה, ורק אז מבקש מ־generator את האיבר הבא.<br>
בגלל האופי של generators, הקוד בתא האחרון מדפיס לנו כל שלשה ברגע שהוא מצא אותה, ולא מחכה עד שיימצאו כל השלשות.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">תרגול ביניים: מספרים פראיים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
"פירוק לגורמים של מספר שלם" היא בעיה שחישוב פתרונה נמשך זמן רב במחשבים מודרניים.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
עליכם לכתוב פונקציה שמקבלת מספר חיובי שלם $n$, ומחזירה קבוצת מספרים שמכפלתם (תוצאת הכפל ביניהם) היא $n$.<br>
לדוגמה, המספר 1,386 בנוי מהמכפלה של קבוצת המספרים $2 \cdot 3 \cdot 3 \cdot 7 \cdot 11$.<br>
כל מספר בקבוצת המספרים הזו חייב להיות ראשוני.<br>
להזכירכם: מספר ראשוני הוא מספר שאין לו מחלקים חוץ מעצמו ומ־1.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הניחו שהמספר שהתקבל אינו ראשוני.<br>
מה היתרון של generator על פני פונקציה רגילה שעושה אותו דבר?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Hint: <span style="background: black;">if you try dividing the number by 2, and then by 3 (and so on), you will eventually reach a prime divisor of the number.</span><br>
Bigger hint: <span style="background: black;">every time you find a divisor of the number, divide the number by that divisor and start the search over. When should you stop?</span>
</p>
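A possible generator sketch that follows the hints above (the function name and structure here are my own illustration, not an official solution):

def prime_factors(n):
    divisor = 2
    while n > 1:
        if n % divisor == 0:
            yield divisor      # the smallest divisor of n greater than 1 is always prime
            n //= divisor      # divide and keep checking the same divisor
        else:
            divisor += 1

print(list(prime_factors(1_386)))  # [2, 3, 3, 7, 11]

Because it is a generator, each prime factor is yielded as soon as it is found rather than only after the whole factorization is finished.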
<span style="text-align: right; direction: rtl; float: right; clear: both;">אוספים אין־סופיים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For some problems, we would like to be able to return an infinite number of results.<br>
As an example of an infinite sequence, let's take the Fibonacci sequence, in which every element is the sum of the two elements that precede it:<br>
$1, 1, 2, 3, 5, 8, \ldots$
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's implement a function that returns the Fibonacci sequence.<br>
In a regular function we have no way to return an infinite number of elements, so we will have to decide on a maximum number of elements that we want to return:
</p>
End of explanation
def fibonacci():
    a = 1
    b = 1
    numbers = [1, 1]
    while True:  # Always true
        yield a
        a, b = b, a + b

generator_iterator = fibonacci()
for number in range(10):
    print(next(generator_iterator))

# I can very easily ask for the next 10 items in the sequence
for number in range(10):
    print(next(generator_iterator))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Generators, in contrast, do not have to have a defined end.<br>
We will use <code>while True</code>, which always holds, so that eventually we always reach the <code>yield</code>:
</p>
End of explanation
def simple_generator():
    yield 1
    yield 2
    yield 3
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/warning.png" style="height: 50px !important;" alt="אזהרה!">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
Infinite generators can easily cause infinite loops, even in <code>for</code> loops.<br>
Note the careful handling in the examples above; a safe consumption pattern is sketched right after this box.<br>
Running a <code>for</code> loop directly over the generator iterator would have put us into an infinite loop.
</p>
</div>
</div>
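As an illustration of such careful handling (my own addition, not part of the original lesson), the standard library's itertools.islice can bound how many items are drawn from an infinite generator:

import itertools

# Take only the first 10 values from the infinite fibonacci generator
for number in itertools.islice(fibonacci(), 10):
    print(number)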
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a generator that returns all the whole numbers greater than 0.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve it before you continue!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ריבוי generator iterators</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's define a simple generator that returns the items <samp>1</samp>, <samp>2</samp> and <samp>3</samp>:
</p>
End of explanation
first_gen = simple_generator()
second_gen = simple_generator()
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's create two different generator iterators ("cursors") that both point to the first line of the generator shown above:
</p>
End of explanation
print(next(first_gen))
print(next(second_gen))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Here it is important to understand that each of the generator iterators is a separate "arrow" pointing to the first line of <var>simple_generator</var>.<br>
If we ask each of them to return a value, we will get 1 from both, and that imaginary arrow will advance, in each of the two generator iterators, to wait at the second line:
</p>
End of explanation
print(next(first_gen))
print(next(first_gen))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
We can advance <var>first_gen</var>, for example, all the way to the end of the function:
</p>
End of explanation
print(next(second_gen))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
But <var>second_gen</var> is a separate arrow that still points to the second line of the generator function.<br>
If we ask it for its next value, it will continue the journey from the value <samp>2</samp>:<br>
</p>
End of explanation |
5,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2/22/17
Step1: So it looks like there's excellent agreement between the theoretical calculation of the material buckling and the value output by serpent, so it seems reasonable to guess that serpent is using the same formula I just used. Recall that the bigger the reactor dimensions are, the smaller the geometric buckling is, the smaller the leakage is, and the larger k is. Currently the geometric buckling should be 2.02e-3 while the material buckling is 2.15e-3...this would lead me to believe that the reator should have k > 1!!! I need to work this out on paper.
Step2: The above $k_{inf}$ agrees very well with the six factor $k_{inf}$ from serpent
Step3: How is it that the actual geometric buckling (3.52e-3) is so much greater than the buckling (2.02e-3) predicted by the following formula?
\begin{equation}
B_g^2 = \left(\frac{\pi}{H}\right)^2 + \left(\frac{2.405}{R}\right)^2
\end{equation}
Step4: Above is another way for calculating geometric buckling in Serpent
With B1 correction
Step5: Regular material values unchanged...like they should be
Step6: Now try with those reactor dimensions
Step7: Summary
Simulation 1; R = 57.15; H = 198.12
Predicted geometric buckling = 2.02e-3 (equation 1)
Actual geometric buckling = 3.52e-3
Material buckling = 2.15e-3
k = .82117
Serpent b1 predicted critical buckling = 1.31e-3
Solve for new radial dimension using equation 1 $\rightarrow$ R = 73.86
Simulation 2; R = 73.86; H = 198.12
Predicted geometric (and material) buckling = 1.31e-3 (equation 1)
Actual geometric buckling = 1.90e-3
Material buckling = 2.02e-3
k = 1.0237
Serpent b1 predicted critical buckling = 1.32e-3
At least the predicted critical buckling is consistent.
Conclusion
Predicted geometric buckling using equation 1 and serpent calculated values for geometric buckling do not agree.
Step8: 2/24/17
Examining validity of linear and exponential Moltres forms
Step9: If the vacuum boundary condition of $-D \frac{d\phi}{dx} = \phi/2$ was equivalent to $\phi(a') = 0$ then the above two expressions would be equal | Python Code:
bm2 = .00202183
height = 198.12
radius = var('radius')
solns = solve(bm2 == (pi/height)^2 + (2.405/radius)^2, radius, solution_dict=True)
[s[radius].n() for s in solns]
radius = solns[1][radius].n()
print(radius)
nu = 2.43654
sigma_f = 1.3769e-3
sigma_a = 2.21110e-3
diff = 5.31788e-1
bm2_calc = (nu * sigma_f - sigma_a) / diff
print(bm2_calc)
bm2_serpent = 2.15149e-3
error = abs(bm2_calc-bm2_serpent) / bm2_serpent * 100
print(error)
Explanation: 2/22/17
End of explanation
bg2_calc = bm2
k = (nu * sigma_f) / (sigma_a + diff * bg2_calc)
print(k)
kinf = nu * sigma_f / sigma_a
print(kinf)
Explanation: So it looks like there's excellent agreement between the theoretical calculation of the material buckling and the value output by serpent, so it seems reasonable to guess that serpent is using the same formula I just used. Recall that the bigger the reactor dimensions are, the smaller the geometric buckling is, the smaller the leakage is, and the larger k is. Currently the geometric buckling should be 2.02e-3 while the material buckling is 2.15e-3...this would lead me to believe that the reactor should have k > 1!!! I need to work this out on paper.
End of explanation
bg2_serp = var('bg2_serp')
k_serp = .82117
solns = solve(k_serp == nu * sigma_f / (sigma_a + diff * bg2_serp), bg2_serp, solution_dict = True)
[s[bg2_serp].n() for s in solns]
print(bg2_serpent > bm2_serpent)
solns[0]
solns
bg2_serpent = solns[0][bg2_serp].n()
Explanation: The above $k_{inf}$ agrees very well with the six factor $k_{inf}$ from serpent: 1.51783
End of explanation
predicted = (pi/height)^2 + (2.405/radius)^2
print(predicted.n())
leak = 1.87651e-3
leak / diff
Explanation: How is it that the actual geometric buckling (3.52e-3) is so much greater than the buckling (2.02e-3) predicted by the following formula?
\begin{equation}
B_g^2 = \left(\frac{\pi}{H}\right)^2 + \left(\frac{2.405}{R}\right)^2
\end{equation}
End of explanation
b1_nsf = 3.66701e-3
b1_remxs = 2.37419e-3
b1_diff = 9.85681e-1
b1_bm2 = (b1_nsf - b1_remxs) / b1_diff
print(b1_bm2)
reg_nsf = 3.35432e-3
reg_remxs = 2.21079e-3
reg_diff = 5.31584e-1
reg_bm2 = (reg_nsf - reg_remxs) / reg_diff
print(reg_bm2)
Explanation: Above is another way for calculating geometric buckling in Serpent
With B1 correction
End of explanation
def b2g_wikipedia(r, H):
    return (pi/H)^2 + (2.405/r)^2
print(b2g_wikipedia(74, 198.12).n())
radius = var('radius')
solns = solve(b1_bm2 == (pi/198.12)^2 + (2.405/radius)^2, radius, solution_dict=True)
solns[1][radius].n()
Explanation: Regular material values unchanged...like they should be
End of explanation
k = 1.02370
nsf = 3.70420e-3
absxs = 2.39330e-3
diff = 6.46350e-1
bg2 = var('bg2')
solns = solve(k == nsf / (absxs + diff * bg2), bg2, solution_dict=True)
print(solns[0][bg2].n())
Explanation: Now try with those reactor dimensions
End of explanation
x = var('x')
f = 2.61e-5 * sin(pi * x / (pi + 4) )
Explanation: Summary
Simulation 1; R = 57.15; H = 198.12
Predicted geometric buckling = 2.02e-3 (equation 1)
Actual geometric buckling = 3.52e-3
Material buckling = 2.15e-3
k = .82117
Serpent b1 predicted critical buckling = 1.31e-3
Solve for new radial dimension using equation 1 $\rightarrow$ R = 73.86
Simulation 2; R = 73.86; H = 198.12
Predicted geometric (and material) buckling = 1.31e-3 (equation 1)
Actual geometric buckling = 1.90e-3
Material buckling = 2.02e-3
k = 1.0237
Serpent b1 predicted critical buckling = 1.32e-3
At least the predicted critical buckling is consistent.
Conclusion
Predicted geometric buckling using equation 1 and serpent calculated values for geometric buckling do not agree.
End of explanation
plot(f, (x, 2, pi + 2))
plot(f, (x, 0, pi + 4))
type(f)
f.subs(x == 2 + pi).n()
f.diff(x).subs(x == 2 + pi).n() * -2
Explanation: 2/24/17
Examining validity of linear and exponential Moltres forms
End of explanation
f
import numpy as np
data_dir = "/home/lindsayad/projects/moltres/problems/MooseGold/022317_test_critical_neutronics_only_reactor/"
lin_data = np.loadtxt(data_dir + "linear_gov_eqns0.csv", delimiter=",", skiprows=1)
exp_data = np.loadtxt(data_dir + "exp_gov_eqns_penalty_1000.csv", delimiter=",", skiprows=1)
x_lin_data = lin_data[:, 3]
y_lin_data = lin_data[:, 0]
x_exp_data = exp_data[:, 4]
y_exp_data = exp_data[:, 1]
from sage.plot.scatter_plot import ScatterPlot, scatter_plot
lin_pts = zip(x_lin_data, y_lin_data)
sp_lin = scatter_plot(lin_pts, markersize=20, facecolor='green', marker='s')
exp_pts = zip(x_exp_data, y_exp_data)
sp_exp = scatter_plot(exp_pts, markersize=20, facecolor='blue', marker='o')
g = Graphics()
g += sp_lin
g += sp_exp
h = 1.8252e-5*sin(x)
p = plot(h, (x, 0, pi), color = 'green')
q = plot(1.96078e-5 * sin(x), (x, 0, pi), color = 'blue')
g += p
g += q
g.show()
Explanation: If the vacuum boundary condition of $-D \frac{d\phi}{dx} = \phi/2$ were equivalent to $\phi(a') = 0$ then the above two expressions would be equal
End of explanation |
5,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was prepared by Donne Martin. Source and license info is on GitHub.
Kaggle Machine Learning Competition
Step1: Explore the Data
Read the data
Step2: View the data types of each column
Step3: Type 'object' is a string for pandas, which poses problems with machine learning algorithms. If we want to use these as features, we'll need to convert these to number representations.
Get some basic information on the DataFrame
Step4: Age, Cabin, and Embarked are missing values. Cabin has too many missing values, whereas we might be able to infer values for Age and Embarked.
Generate various descriptive statistics on the DataFrame
Step5: Now that we have a general idea of the data set contents, we can dive deeper into each column. We'll be doing exploratory data analysis and cleaning data to setup 'features' we'll be using in our machine learning algorithms.
Plot a few features to get a better idea of each
Step6: Next we'll explore various features to view their impact on survival rates.
Feature
Step7: Plot the cross tab
Step8: We can see that passenger class seems to have a significant impact on whether a passenger survived. Those in First Class the highest chance for survival.
Feature
Step9: Transform Sex from a string to a number representation
Step10: Plot a normalized cross tab for Sex_Val and Survived
Step11: The majority of females survived, whereas the majority of males did not.
Next we'll determine whether we can gain any insights on survival rate by looking at both Sex and Pclass.
Count males and females in each Pclass
Step12: Plot survival rate by Sex and Pclass
Step13: The vast majority of females in First and Second class survived. Males in First class had the highest chance for survival.
Feature
Step14: Prepare to map Embarked from a string to a number representation
Step15: Transform Embarked from a string to a number representation to prepare it for machine learning algorithms
Step16: Plot the histogram for Embarked_Val
Step17: Since the vast majority of passengers embarked in 'S'
Step18: Verify we do not have any more NaNs for Embarked_Val
Step19: Plot a normalized cross tab for Embarked_Val and Survived
Step20: It appears those that embarked in location 'C'
Step21: Leaving Embarked as integers implies ordering in the values, which does not exist. Another way to represent Embarked without ordering is to create dummy variables
Step22: Feature
Step23: Determine the Age typical for each passenger class by Sex_Val. We'll use the median instead of the mean because the Age histogram seems to be right skewed.
Step24: Ensure AgeFill does not contain any missing values
Step25: Plot a normalized cross tab for AgeFill and Survived
Step26: Unfortunately, the graphs above do not seem to clearly show any insights. We'll keep digging further.
Plot AgeFill density by Pclass
Step27: When looking at AgeFill density by Pclass, we see the first class passengers were generally older then second class passengers, which in turn were older than third class passengers. We've determined that first class passengers had a higher survival rate than second class passengers, which in turn had a higher survival rate than third class passengers.
Step28: In the first graph, we see that most survivors come from the 20's to 30's age ranges and might be explained by the following two graphs. The second graph shows most females are within their 20's. The third graph shows most first class passengers are within their 30's.
Feature
Step29: Plot a histogram of FamilySize
Step30: Plot a histogram of AgeFill segmented by Survived
Step31: Based on the histograms, it is not immediately obvious what impact FamilySize has on survival. The machine learning algorithms might benefit from this feature.
Additional features we might want to engineer might be related to the Name column, for example honorrary or pedestrian titles might give clues and better predictive power for a male's survival.
Final Data Preparation for Machine Learning
Many machine learning algorithms do not work on strings and they usually require the data to be in an array, not a DataFrame.
Show only the columns of type 'object' (strings)
Step32: Drop the columns we won't use
Step33: Drop the following columns
Step34: Convert the DataFrame to a numpy array
Step35: Data Wrangling Summary
Below is a summary of the data wrangling we performed on our training data set. We encapsulate this in a function since we'll need to do the same operations to our test set later.
Step36: Random Forest
Step37: Fit the training data and create the decision trees
Step38: Random Forest
Step39: Note the test data does not contain the column 'Survived', we'll use our trained model to predict these values.
Step40: Take the decision trees and run it on the test data
Step41: Random Forest
Step42: Evaluate Model Accuracy
Submitting to Kaggle will give you an accuracy score. It would be helpful to get an idea of accuracy without submitting to Kaggle.
We'll split our training data, 80% will go to "train" and 20% will go to "test"
Step43: Use the new training data to fit the model, predict, and get the accuracy score
Step44: View the Confusion Matrix
Step45: Get the model score and confusion matrix
Step46: Display the classification report | Python Code:
import pandas as pd
import numpy as np
import pylab as plt
# Set the global default size of matplotlib figures
plt.rc('figure', figsize=(10, 5))
# Size of matplotlib figures that contain subplots
fizsize_with_subplots = (10, 10)
# Size of matplotlib histogram bins
bin_size = 10
Explanation: This notebook was prepared by Donne Martin. Source and license info is on GitHub.
Kaggle Machine Learning Competition: Predicting Titanic Survivors
Competition Site
Description
Evaluation
Data Set
Setup Imports and Variables
Explore the Data
Feature: Passenger Classes
Feature: Sex
Feature: Embarked
Feature: Age
Feature: Family Size
Final Data Preparation for Machine Learning
Data Wrangling Summary
Random Forest: Training
Random Forest: Predicting
Random Forest: Prepare for Kaggle Submission
Support Vector Machine: Training
Support Vector Machine: Predicting
Competition Site
Description, Evaluation, and Data Set taken from the competition site.
Description
The sinking of the RMS Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
Evaluation
The historical data has been split into two groups, a 'training set' and a 'test set'. For the training set, we provide the outcome ( 'ground truth' ) for each passenger. You will use this set to build your model to generate predictions for the test set.
For each passenger in the test set, you must predict whether or not they survived the sinking ( 0 for deceased, 1 for survived ). Your score is the percentage of passengers you correctly predict.
The Kaggle leaderboard has a public and private component. 50% of your predictions for the test set have been randomly assigned to the public leaderboard ( the same 50% for all users ). Your score on this public portion is what will appear on the leaderboard. At the end of the contest, we will reveal your score on the private 50% of the data, which will determine the final winner. This method prevents users from 'overfitting' to the leaderboard.
Data Set
| File Name | Available Formats |
|------------------|-------------------|
| train | .csv (59.76 kb) |
| gendermodel | .csv (3.18 kb) |
| genderclassmodel | .csv (3.18 kb) |
| test | .csv (27.96 kb) |
| gendermodel | .py (3.58 kb) |
| genderclassmodel | .py (5.63 kb) |
| myfirstforest | .py (3.99 kb) |
<pre>
VARIABLE DESCRIPTIONS:
survival Survival
(0 = No; 1 = Yes)
pclass Passenger Class
(1 = 1st; 2 = 2nd; 3 = 3rd)
name Name
sex Sex
age Age
sibsp Number of Siblings/Spouses Aboard
parch Number of Parents/Children Aboard
ticket Ticket Number
fare Passenger Fare
cabin Cabin
embarked Port of Embarkation
(C = Cherbourg; Q = Queenstown; S = Southampton)
SPECIAL NOTES:
Pclass is a proxy for socio-economic status (SES)
1st ~ Upper; 2nd ~ Middle; 3rd ~ Lower
Age is in Years; Fractional if Age less than One (1)
If the Age is Estimated, it is in the form xx.5
With respect to the family relation variables (i.e. sibsp and parch)
some relations were ignored. The following are the definitions used
for sibsp and parch.
Sibling: Brother, Sister, Stepbrother, or Stepsister of Passenger Aboard Titanic
Spouse: Husband or Wife of Passenger Aboard Titanic (Mistresses and Fiances Ignored)
Parent: Mother or Father of Passenger Aboard Titanic
Child: Son, Daughter, Stepson, or Stepdaughter of Passenger Aboard Titanic
Other family relatives excluded from this study include cousins,
nephews/nieces, aunts/uncles, and in-laws. Some children travelled
only with a nanny, therefore parch=0 for them. As well, some
travelled with very close friends or neighbors in a village, however,
the definitions do not support such relations.
</pre>
Setup Imports and Variables
End of explanation
df_train = pd.read_csv('../data/titanic/train.csv')
df_train.head()
df_train.tail()
Explanation: Explore the Data
Read the data:
End of explanation
df_train.dtypes
Explanation: View the data types of each column:
End of explanation
df_train.info()
Explanation: Type 'object' is a string for pandas, which poses problems with machine learning algorithms. If we want to use these as features, we'll need to convert these to number representations.
Get some basic information on the DataFrame:
End of explanation
df_train.describe()
Explanation: Age, Cabin, and Embarked are missing values. Cabin has too many missing values, whereas we might be able to infer values for Age and Embarked.
Generate various descriptive statistics on the DataFrame:
End of explanation
# Set up a grid of plots
fig = plt.figure(figsize=fizsize_with_subplots)
fig_dims = (3, 2)
# Plot death and survival counts
plt.subplot2grid(fig_dims, (0, 0))
df_train['Survived'].value_counts().plot(kind='bar',
title='Death and Survival Counts')
# Plot Pclass counts
plt.subplot2grid(fig_dims, (0, 1))
df_train['Pclass'].value_counts().plot(kind='bar',
title='Passenger Class Counts')
# Plot Sex counts
plt.subplot2grid(fig_dims, (1, 0))
df_train['Sex'].value_counts().plot(kind='bar',
title='Gender Counts')
plt.xticks(rotation=0)
# Plot Embarked counts
plt.subplot2grid(fig_dims, (1, 1))
df_train['Embarked'].value_counts().plot(kind='bar',
title='Ports of Embarkation Counts')
# Plot the Age histogram
plt.subplot2grid(fig_dims, (2, 0))
df_train['Age'].hist()
plt.title('Age Histogram')
Explanation: Now that we have a general idea of the data set contents, we can dive deeper into each column. We'll be doing exploratory data analysis and cleaning data to setup 'features' we'll be using in our machine learning algorithms.
Plot a few features to get a better idea of each:
End of explanation
pclass_xt = pd.crosstab(df_train['Pclass'], df_train['Survived'])
pclass_xt
Explanation: Next we'll explore various features to view their impact on survival rates.
Feature: Passenger Classes
From our exploratory data analysis in the previous section, we see there are three passenger classes: First, Second, and Third class. We'll determine which proportion of passengers survived based on their passenger class.
Generate a cross tab of Pclass and Survived:
End of explanation
# Normalize the cross tab to sum to 1:
pclass_xt_pct = pclass_xt.div(pclass_xt.sum(1).astype(float), axis=0)
pclass_xt_pct.plot(kind='bar',
stacked=True,
title='Survival Rate by Passenger Classes')
plt.xlabel('Passenger Class')
plt.ylabel('Survival Rate')
Explanation: Plot the cross tab:
End of explanation
sexes = sorted(df_train['Sex'].unique())
genders_mapping = dict(zip(sexes, range(0, len(sexes) + 1)))
genders_mapping
Explanation: We can see that passenger class seems to have a significant impact on whether a passenger survived. Those in First Class had the highest chance of survival.
Feature: Sex
Gender might have also played a role in determining a passenger's survival rate. We'll need to map Sex from a string to a number to prepare it for machine learning algorithms.
Generate a mapping of Sex from a string to a number representation:
End of explanation
df_train['Sex_Val'] = df_train['Sex'].map(genders_mapping).astype(int)
df_train.head()
Explanation: Transform Sex from a string to a number representation:
End of explanation
sex_val_xt = pd.crosstab(df_train['Sex_Val'], df_train['Survived'])
sex_val_xt_pct = sex_val_xt.div(sex_val_xt.sum(1).astype(float), axis=0)
sex_val_xt_pct.plot(kind='bar', stacked=True, title='Survival Rate by Gender')
Explanation: Plot a normalized cross tab for Sex_Val and Survived:
End of explanation
# Get the unique values of Pclass:
passenger_classes = sorted(df_train['Pclass'].unique())
for p_class in passenger_classes:
    print('M: ', p_class, len(df_train[(df_train['Sex'] == 'male') &
                                        (df_train['Pclass'] == p_class)]))
    print('F: ', p_class, len(df_train[(df_train['Sex'] == 'female') &
                                        (df_train['Pclass'] == p_class)]))
Explanation: The majority of females survived, whereas the majority of males did not.
Next we'll determine whether we can gain any insights on survival rate by looking at both Sex and Pclass.
Count males and females in each Pclass:
End of explanation
# Plot survival rate by Sex
females_df = df_train[df_train['Sex'] == 'female']
females_xt = pd.crosstab(females_df['Pclass'], df_train['Survived'])
females_xt_pct = females_xt.div(females_xt.sum(1).astype(float), axis=0)
females_xt_pct.plot(kind='bar',
stacked=True,
title='Female Survival Rate by Passenger Class')
plt.xlabel('Passenger Class')
plt.ylabel('Survival Rate')
# Plot survival rate by Pclass
males_df = df_train[df_train['Sex'] == 'male']
males_xt = pd.crosstab(males_df['Pclass'], df_train['Survived'])
males_xt_pct = males_xt.div(males_xt.sum(1).astype(float), axis=0)
males_xt_pct.plot(kind='bar',
stacked=True,
title='Male Survival Rate by Passenger Class')
plt.xlabel('Passenger Class')
plt.ylabel('Survival Rate')
Explanation: Plot survival rate by Sex and Pclass:
End of explanation
df_train[df_train['Embarked'].isnull()]
Explanation: The vast majority of females in First and Second class survived. Males in First class had the highest chance for survival.
Feature: Embarked
The Embarked column might be an important feature but it is missing a couple data points which might pose a problem for machine learning algorithms:
End of explanation
# Get the unique values of Embarked
embarked_locs = sorted(df_train['Embarked'].unique())
embarked_locs_mapping = dict(zip(embarked_locs,
range(0, len(embarked_locs) + 1)))
embarked_locs_mapping
Explanation: Prepare to map Embarked from a string to a number representation:
End of explanation
df_train['Embarked_Val'] = df_train['Embarked'] \
.map(embarked_locs_mapping) \
.astype(int)
df_train.head()
Explanation: Transform Embarked from a string to a number representation to prepare it for machine learning algorithms:
End of explanation
df_train['Embarked_Val'].hist(bins=len(embarked_locs), range=(0, 3))
plt.title('Port of Embarkation Histogram')
plt.xlabel('Port of Embarkation')
plt.ylabel('Count')
plt.show()
Explanation: Plot the histogram for Embarked_Val:
End of explanation
if len(df_train[df_train['Embarked'].isnull()] > 0):
    df_train.replace({'Embarked_Val':
                          {embarked_locs_mapping[np.nan]: embarked_locs_mapping['S']}
                      },
                     inplace=True)
Explanation: Since the vast majority of passengers embarked in 'S': 3, we assign the missing values in Embarked to 'S':
End of explanation
embarked_locs = sorted(df_train['Embarked_Val'].unique())
embarked_locs
Explanation: Verify we do not have any more NaNs for Embarked_Val:
End of explanation
embarked_val_xt = pd.crosstab(df_train['Embarked_Val'], df_train['Survived'])
embarked_val_xt_pct = \
embarked_val_xt.div(embarked_val_xt.sum(1).astype(float), axis=0)
embarked_val_xt_pct.plot(kind='bar', stacked=True)
plt.title('Survival Rate by Port of Embarkation')
plt.xlabel('Port of Embarkation')
plt.ylabel('Survival Rate')
Explanation: Plot a normalized cross tab for Embarked_Val and Survived:
End of explanation
# Set up a grid of plots
fig = plt.figure(figsize=fizsize_with_subplots)
rows = 2
cols = 3
col_names = ('Sex_Val', 'Pclass')
for portIdx in embarked_locs:
    for colIdx in range(0, len(col_names)):
        plt.subplot2grid((rows, cols), (colIdx, portIdx - 1))
        df_train[df_train['Embarked_Val'] == portIdx][col_names[colIdx]] \
            .value_counts().plot(kind='bar')
Explanation: It appears those that embarked in location 'C': 1 had the highest rate of survival. We'll dig in some more to see why this might be the case. Below we plot graphs to determine gender and passenger class makeup for each port:
End of explanation
df_train = pd.concat([df_train, pd.get_dummies(df_train['Embarked_Val'], prefix='Embarked_Val')], axis=1)
Explanation: Leaving Embarked as integers implies ordering in the values, which does not exist. Another way to represent Embarked without ordering is to create dummy variables:
End of explanation
df_train[df_train['Age'].isnull()][['Sex', 'Pclass', 'Age']].head()
Explanation: Feature: Age
The Age column seems like an important feature--unfortunately it is missing many values. We'll need to fill in the missing values like we did with Embarked.
Filter to view missing Age values:
End of explanation
# To keep Age intact, make a copy of it called AgeFill
# that we will use to fill in the missing ages:
df_train['AgeFill'] = df_train['Age']
# Populate AgeFill
df_train['AgeFill'] = df_train['AgeFill'] \
.groupby([df_train['Sex_Val'], df_train['Pclass']]) \
.apply(lambda x: x.fillna(x.median()))
Explanation: Determine the Age typical for each passenger class by Sex_Val. We'll use the median instead of the mean because the Age histogram seems to be right skewed.
End of explanation
len(df_train[df_train['AgeFill'].isnull()])
Explanation: Ensure AgeFill does not contain any missing values:
End of explanation
# Set up a grid of plots
fig, axes = plt.subplots(2, 1, figsize=fizsize_with_subplots)
# Histogram of AgeFill segmented by Survived
df1 = df_train[df_train['Survived'] == 0]['Age']
df2 = df_train[df_train['Survived'] == 1]['Age']
max_age = max(df_train['AgeFill'])
axes[0].hist([df1, df2],
bins=max_age / bin_size,
range=(1, max_age),
stacked=True)
axes[0].legend(('Died', 'Survived'), loc='best')
axes[0].set_title('Survivors by Age Groups Histogram')
axes[0].set_xlabel('Age')
axes[0].set_ylabel('Count')
# Scatter plot Survived and AgeFill
axes[1].scatter(df_train['Survived'], df_train['AgeFill'])
axes[1].set_title('Survivors by Age Plot')
axes[1].set_xlabel('Survived')
axes[1].set_ylabel('Age')
Explanation: Plot a normalized cross tab for AgeFill and Survived:
End of explanation
for pclass in passenger_classes:
    df_train.AgeFill[df_train.Pclass == pclass].plot(kind='kde')
plt.title('Age Density Plot by Passenger Class')
plt.xlabel('Age')
plt.legend(('1st Class', '2nd Class', '3rd Class'), loc='best')
Explanation: Unfortunately, the graphs above do not seem to clearly show any insights. We'll keep digging further.
Plot AgeFill density by Pclass:
End of explanation
# Set up a grid of plots
fig = plt.figure(figsize=fizsize_with_subplots)
fig_dims = (3, 1)
# Plot the AgeFill histogram for Survivors
plt.subplot2grid(fig_dims, (0, 0))
survived_df = df_train[df_train['Survived'] == 1]
survived_df['AgeFill'].hist(bins=max_age / bin_size, range=(1, max_age))
# Plot the AgeFill histogram for Females
plt.subplot2grid(fig_dims, (1, 0))
females_df = df_train[(df_train['Sex_Val'] == 0) & (df_train['Survived'] == 1)]
females_df['AgeFill'].hist(bins=max_age / bin_size, range=(1, max_age))
# Plot the AgeFill histogram for first class passengers
plt.subplot2grid(fig_dims, (2, 0))
class1_df = df_train[(df_train['Pclass'] == 1) & (df_train['Survived'] == 1)]
class1_df['AgeFill'].hist(bins=max_age / bin_size, range=(1, max_age))
Explanation: When looking at AgeFill density by Pclass, we see the first class passengers were generally older than second class passengers, which in turn were older than third class passengers. We've determined that first class passengers had a higher survival rate than second class passengers, which in turn had a higher survival rate than third class passengers.
End of explanation
df_train['FamilySize'] = df_train['SibSp'] + df_train['Parch']
df_train.head()
Explanation: In the first graph, we see that most survivors come from the 20's to 30's age ranges and might be explained by the following two graphs. The second graph shows most females are within their 20's. The third graph shows most first class passengers are within their 30's.
Feature: Family Size
Feature engineering involves creating new features or modifying existing features which might be advantageous to a machine learning algorithm.
Define a new feature FamilySize that is the sum of Parch (number of parents or children on board) and SibSp (number of siblings or spouses):
End of explanation
df_train['FamilySize'].hist()
plt.title('Family Size Histogram')
Explanation: Plot a histogram of FamilySize:
End of explanation
# Get the unique values of Embarked and its maximum
family_sizes = sorted(df_train['FamilySize'].unique())
family_size_max = max(family_sizes)
df1 = df_train[df_train['Survived'] == 0]['FamilySize']
df2 = df_train[df_train['Survived'] == 1]['FamilySize']
plt.hist([df1, df2],
bins=family_size_max + 1,
range=(0, family_size_max),
stacked=True)
plt.legend(('Died', 'Survived'), loc='best')
plt.title('Survivors by Family Size')
Explanation: Plot a histogram of AgeFill segmented by Survived:
End of explanation
df_train.dtypes[df_train.dtypes.map(lambda x: x == 'object')]
Explanation: Based on the histograms, it is not immediately obvious what impact FamilySize has on survival. The machine learning algorithms might benefit from this feature.
Additional features we might want to engineer might be related to the Name column, for example honorary or pedestrian titles might give clues and better predictive power for a male's survival.
Final Data Preparation for Machine Learning
Many machine learning algorithms do not work on strings and they usually require the data to be in an array, not a DataFrame.
Show only the columns of type 'object' (strings):
End of explanation
df_train = df_train.drop(['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked'],
axis=1)
Explanation: Drop the columns we won't use:
End of explanation
df_train = df_train.drop(['Age', 'SibSp', 'Parch', 'PassengerId', 'Embarked_Val'], axis=1)
df_train.dtypes
Explanation: Drop the following columns:
* The Age column since we will be using the AgeFill column instead.
* The SibSp and Parch columns since we will be using FamilySize instead.
* The PassengerId column since it won't be used as a feature.
* The Embarked_Val as we decided to use dummy variables instead.
End of explanation
train_data = df_train.values
train_data
Explanation: Convert the DataFrame to a numpy array:
End of explanation
def clean_data(df, drop_passenger_id):
    # Get the unique values of Sex
    sexes = sorted(df['Sex'].unique())

    # Generate a mapping of Sex from a string to a number representation
    genders_mapping = dict(zip(sexes, range(0, len(sexes) + 1)))

    # Transform Sex from a string to a number representation
    df['Sex_Val'] = df['Sex'].map(genders_mapping).astype(int)

    # Get the unique values of Embarked
    embarked_locs = sorted(df['Embarked'].unique())

    # Generate a mapping of Embarked from a string to a number representation
    embarked_locs_mapping = dict(zip(embarked_locs,
                                     range(0, len(embarked_locs) + 1)))

    # Transform Embarked from a string to dummy variables
    df = pd.concat([df, pd.get_dummies(df['Embarked'], prefix='Embarked_Val')], axis=1)

    # Fill in missing values of Embarked
    # Since the vast majority of passengers embarked in 'S': 3,
    # we assign the missing values in Embarked to 'S':
    if len(df[df['Embarked'].isnull()] > 0):
        df.replace({'Embarked_Val':
                        {embarked_locs_mapping[np.nan]: embarked_locs_mapping['S']}
                    },
                   inplace=True)

    # Fill in missing values of Fare with the average Fare
    if len(df[df['Fare'].isnull()] > 0):
        avg_fare = df['Fare'].mean()
        df.replace({None: avg_fare}, inplace=True)

    # To keep Age intact, make a copy of it called AgeFill
    # that we will use to fill in the missing ages:
    df['AgeFill'] = df['Age']

    # Determine the Age typical for each passenger class by Sex_Val.
    # We'll use the median instead of the mean because the Age
    # histogram seems to be right skewed.
    df['AgeFill'] = df['AgeFill'] \
        .groupby([df['Sex_Val'], df['Pclass']]) \
        .apply(lambda x: x.fillna(x.median()))

    # Define a new feature FamilySize that is the sum of
    # Parch (number of parents or children on board) and
    # SibSp (number of siblings or spouses):
    df['FamilySize'] = df['SibSp'] + df['Parch']

    # Drop the columns we won't use:
    df = df.drop(['Name', 'Sex', 'Ticket', 'Cabin', 'Embarked'], axis=1)

    # Drop the Age column since we will be using the AgeFill column instead.
    # Drop the SibSp and Parch columns since we will be using FamilySize.
    # Drop the PassengerId column since it won't be used as a feature.
    df = df.drop(['Age', 'SibSp', 'Parch'], axis=1)

    if drop_passenger_id:
        df = df.drop(['PassengerId'], axis=1)

    return df
Explanation: Data Wrangling Summary
Below is a summary of the data wrangling we performed on our training data set. We encapsulate this in a function since we'll need to do the same operations to our test set later.
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100)
Explanation: Random Forest: Training
Create the random forest object:
End of explanation
# Training data features, skip the first column 'Survived'
train_features = train_data[:, 1:]
# 'Survived' column values
train_target = train_data[:, 0]
# Fit the model to our training data
clf = clf.fit(train_features, train_target)
score = clf.score(train_features, train_target)
"Mean accuracy of Random Forest: {0}".format(score)
Explanation: Fit the training data and create the decision trees:
End of explanation
df_test = pd.read_csv('../data/titanic/test.csv')
df_test.head()
Explanation: Random Forest: Predicting
Read the test data:
End of explanation
# Data wrangle the test set and convert it to a numpy array
df_test = clean_data(df_test, drop_passenger_id=False)
test_data = df_test.values
Explanation: Note the test data does not contain the column 'Survived', we'll use our trained model to predict these values.
End of explanation
# Get the test data features, skipping the first column 'PassengerId'
test_x = test_data[:, 1:]
# Predict the Survival values for the test data
test_y = clf.predict(test_x)
Explanation: Take the decision trees and run it on the test data:
End of explanation
df_test['Survived'] = test_y
df_test[['PassengerId', 'Survived']] \
.to_csv('../data/titanic/results-rf.csv', index=False)
Explanation: Random Forest: Prepare for Kaggle Submission
Create a DataFrame by combining the index from the test data with the output of predictions, then write the results to the output:
End of explanation
from sklearn import metrics
from sklearn.cross_validation import train_test_split
# Split 80-20 train vs test data
train_x, test_x, train_y, test_y = train_test_split(train_features,
train_target,
test_size=0.20,
random_state=0)
print (train_features.shape, train_target.shape)
print (train_x.shape, train_y.shape)
print (test_x.shape, test_y.shape)
Explanation: Evaluate Model Accuracy
Submitting to Kaggle will give you an accuracy score. It would be helpful to get an idea of accuracy without submitting to Kaggle.
We'll split our training data, 80% will go to "train" and 20% will go to "test":
End of explanation
clf = clf.fit(train_x, train_y)
predict_y = clf.predict(test_x)
from sklearn.metrics import accuracy_score
print ("Accuracy = %.2f" % (accuracy_score(test_y, predict_y)))
Explanation: Use the new training data to fit the model, predict, and get the accuracy score:
End of explanation
from IPython.core.display import Image
Image(filename='../data/confusion_matrix.png', width=800)
Explanation: View the Confusion Matrix:
| | condition True | condition false|
|------|----------------|---------------|
|prediction true|True Positive|False positive|
|Prediction False|False Negative|True Negative|
End of explanation
model_score = clf.score(test_x, test_y)
print ("Model Score %.2f \n" % (model_score))
confusion_matrix = metrics.confusion_matrix(test_y, predict_y)
print ("Confusion Matrix ", confusion_matrix)
print (" Predicted")
print (" | 0 | 1 |")
print (" |-----|-----|")
print (" 0 | %3d | %3d |" % (confusion_matrix[0, 0],
confusion_matrix[0, 1]))
print ("Actual |-----|-----|")
print (" 1 | %3d | %3d |" % (confusion_matrix[1, 0],
confusion_matrix[1, 1]))
print (" |-----|-----|")
Explanation: Get the model score and confusion matrix:
End of explanation
from sklearn.metrics import classification_report
print(classification_report(test_y,
predict_y,
target_names=['Not Survived', 'Survived']))
Explanation: Display the classification report:
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
$$F1 = \frac{2TP}{2TP + FP + FN}$$
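As a quick check (my own addition), the same quantities can be computed directly from the confusion matrix obtained above, whose rows are the actual classes and whose columns are the predicted classes, with 'Survived' treated as the positive class:

TN, FP, FN, TP = confusion_matrix.ravel()
precision = TP / float(TP + FP)
recall = TP / float(TP + FN)
f1 = 2. * TP / (2. * TP + FP + FN)
print("Precision = %.2f, Recall = %.2f, F1 = %.2f" % (precision, recall, f1))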
End of explanation |
5,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PySAL Change Log Statistics
Step1: with open('../packages.yml') as package_file
Step2: Our last main release was 2019-01-30
Step3: get dates of tags
with open('subtags', 'r') as tag_name
Step4: append html files to end of changes.md with tags for toc | Python Code:
from __future__ import print_function
import os
import json
import re
import sys
import pandas
import subprocess
from subprocess import check_output
#import yaml
from datetime import datetime, timedelta
from dateutil.parser import parse
import pytz
utc=pytz.UTC
from datetime import datetime, timedelta
from time import sleep
from subprocess import check_output
try:
from urllib import urlopen
except:
from urllib.request import urlopen
import ssl
#import yaml
context = ssl._create_unverified_context()
Explanation: PySAL Change Log Statistics: Table Generation
This notebook generates the summary statistics for use in the 6-month releases of PySAL, which is now a meta package.
It assumes the subpackages have been git cloned in a directory below the location of this notebook. It also requires network connectivity for some of the reporting.
Run this notebook after gitcount.ipynb
End of explanation
CWD = os.path.abspath(os.path.curdir)
CWD
Explanation: with open('../packages.yml') as package_file:
packages = yaml.load(package_file)
End of explanation
start_date = '2019-07-29'
since_date = '--since="{start}"'.format(start=start_date)
since_date
since = datetime.strptime(start_date+" 0:0:0", "%Y-%m-%d %H:%M:%S")
since
with open('package_versions.txt', 'r') as package_list:
packages = dict([line.strip().split() for line in package_list.readlines()])
import pickle
issue_details = pickle.load( open( "issue_details.p", "rb" ) )
pull_details = pickle.load( open( "pull_details.p", "rb" ) )
packages
Explanation: Our last main release was 2019-01-30:
End of explanation
tag_dates = {}
#root = '/home/serge/Dropbox/p/pysal/src/pysal/tmp/'
root = CWD + "/tmp/"
#for record in tags:
for pkg in packages:
    #pkg, tag = record.strip().split()
    tag = packages[pkg]
    print(pkg, tag)
    if pkg == 'spvcm':
        tag = '0.2.1post1'
    #tag = tag.split('/')[-1]
    pkdir = root + pkg
    try:
        cmd = "git log -1 --format=%ai v{tag}".format(tag=tag)
        os.chdir(pkdir)
        result = subprocess.run(cmd, check=True, shell=True, stdout=subprocess.PIPE)
    except:
        cmd = "git log -1 --format=%ai {tag}".format(tag=tag)
        os.chdir(pkdir)
        result = subprocess.run(cmd, check=True, shell=True, stdout=subprocess.PIPE)
    tag_string = result.stdout.decode('utf-8')
    tag_date = tag_string.split()[0]
    tag_dates[pkg] = tag_date
    print(pkg, tag, tag_date)

os.chdir(CWD)
# get issues for a package and filter on tag date
for pkg in tag_dates.keys():
    issues = issue_details[pkg]
    tag_date = utc.localize(parse(tag_dates[pkg]))
    keep = []
    for issue in issues:
        closed = parse(issue['closed_at'])
        if closed <= tag_date:
            keep.append(issue)
    print(pkg, len(issues), len(keep))
    issue_details[pkg] = keep
    keep = []
    pulls = pull_details[pkg]
    for pull in pulls:
        closed = parse(pull['closed_at'])
        if closed <= tag_date:
            keep.append(pull)
    print(pkg, len(pulls), len(keep))
    pull_details[pkg] = keep
# commits
cmd = ['git', 'log', '--oneline', since_date]
activity = {}
total_commits = 0
for subpackage in packages:
    tag_date = tag_dates[subpackage]
    os.chdir(CWD)
    os.chdir('tmp/{subpackage}'.format(subpackage=subpackage))
    cmd_until = cmd + ['--until="{tag_date}"'.format(tag_date=tag_date)]
    ncommits = len(check_output(cmd_until).splitlines())
    ncommits_total = len(check_output(cmd).splitlines())
    print(subpackage, ncommits_total, ncommits, tag_date)
    total_commits += ncommits
    activity[subpackage] = ncommits
cmd_until
identities = {'Levi John Wolf': ('ljwolf', 'Levi John Wolf'),
'Serge Rey': ('Serge Rey', 'Sergio Rey', 'sjsrey', 'serge'),
'Wei Kang': ('Wei Kang', 'weikang9009'),
'Dani Arribas-Bel': ('Dani Arribas-Bel', 'darribas'),
'Antti Härkönen': ( 'antth', 'Antti Härkönen', 'Antti Härkönen', 'Antth' ),
'Juan C Duque': ('Juan C Duque', "Juan Duque"),
'Renan Xavier Cortes': ('Renan Xavier Cortes', 'renanxcortes', 'Renan Xavier Cortes' ),
'Taylor Oshan': ('Tayloroshan', 'Taylor Oshan', 'TaylorOshan'),
'Tom Gertin': ('@Tomgertin', 'Tom Gertin', '@tomgertin')
}
def regularize_identity(string):
    string = string.decode()
    for name, aliases in identities.items():
        for alias in aliases:
            if alias in string:
                string = string.replace(alias, name)
    if len(string.split(' ')) > 1:
        string = string.title()
    return string.lstrip('* ')
author_cmd = ['git', 'log', '--format=* %aN', since_date]
author_cmd.append('blank')
author_cmd
from collections import Counter
authors_global = set()
authors = {}
global_counter = Counter()
counters = dict()
cmd = ['git', 'log', '--oneline', since_date]
total_commits = 0
activity = {}
for subpackage in packages:
    os.chdir(CWD)
    os.chdir('tmp/{subpackage}'.format(subpackage=subpackage))
    ncommits = len(check_output(cmd).splitlines())
    tag_date = tag_dates[subpackage]
    author_cmd[-1] = '--until="{tag_date}"'.format(tag_date=tag_date)
    #cmd_until = cmd + ['--until="{tag_date}"'.format(tag_date=tag_date)]
    all_authors = check_output(author_cmd).splitlines()
    counter = Counter([regularize_identity(author) for author in all_authors])
    global_counter += counter
    counters.update({'.'.join((package, subpackage)): counter})
    unique_authors = sorted(set(all_authors))
    authors[subpackage] = unique_authors
    authors_global.update(unique_authors)
    total_commits += ncommits
    activity[subpackage] = ncommits
authors_global
activity
counters
counters
def get_tag(title, level="##", as_string=True):
    words = title.split()
    tag = "-".join([word.lower() for word in words])
    heading = level + " " + title
    line = "\n\n<a name=\"{}\"></a>".format(tag)
    lines = [line]
    lines.append(heading)
    if as_string:
        return "\n".join(lines)
    else:
        return lines
subs = issue_details.keys()
table = []
txt = []
lines = get_tag("Changes by Package", as_string=False)
for sub in subs:
    total = issue_details[sub]
    pr = pull_details[sub]
    row = [sub, activity[sub], len(total), len(pr)]
    table.append(row)
    #line = "\n<a name=\"{sub}\"></a>".format(sub=sub)
    #lines.append(line)
    #line = "### {sub}".format(sub=sub)
    #lines.append(line)
    lines.extend(get_tag(sub.lower(), "###", as_string=False))
    for issue in total:
        url = issue['html_url']
        title = issue['title']
        number = issue['number']
        line = "* [#{number}:]({url}) {title} ".format(title=title,
                                                       number=number,
                                                       url=url)
        lines.append(line)
line
table
os.chdir(CWD)
import pandas
df = pandas.DataFrame(table, columns=['package', 'commits', 'total issues', 'pulls'])
df.sort_values(['commits','pulls'], ascending=False)\
.to_html('./commit_table.html', index=None)
df.sum()
contributor_table = pandas.DataFrame.from_dict(counters).fillna(0).astype(int).T
contributor_table.to_html('./contributor_table.html')
totals = contributor_table.sum(axis=0).T
totals.sort_index().to_frame('commits')
totals = contributor_table.sum(axis=0).T
totals.sort_index().to_frame('commits').to_html('./commits_by_person.html')
totals
n_commits = df.commits.sum()
n_issues = df['total issues'].sum()
n_pulls = df.pulls.sum()
n_commits
#Overall, there were 719 commits that closed 240 issues, together with 105 pull requests across 12 packages since our last release on 2017-11-03.
#('{0} Here is a really long '
# 'sentence with {1}').format(3, 5))
line = ('Overall, there were {n_commits} commits that closed {n_issues} issues,'
' together with {n_pulls} pull requests since our last release'
' on {since_date}.\n'.format(n_commits=n_commits, n_issues=n_issues,
n_pulls=n_pulls, since_date = start_date))
line
Explanation: get dates of tags
with open('subtags', 'r') as tag_name:
tags = tag_name.readlines()
End of explanation
with open('changes.md', 'w') as fout:
    fout.write(line)
    fout.write("\n".join(lines))
    fout.write(get_tag("Summary Statistics"))
    with open('commit_table.html') as table:
        table_lines = table.readlines()
    title = "Package Activity"
    fout.write(get_tag(title, "###"))
    fout.write("\n")
    fout.write("".join(table_lines))
    with open('commits_by_person.html') as table:
        table_lines = table.readlines()
    title = "Contributor Activity"
    fout.write(get_tag(title, "###"))
    fout.write("\n")
    fout.write("".join(table_lines))
    with open('contributor_table.html') as table:
        table_lines = table.readlines()
    title = "Contributor by Package Activity"
    fout.write(get_tag(title, "###"))
    fout.write("\n")
    fout.write("".join(table_lines))
Explanation: append html files to end of changes.md with tags for toc
End of explanation |
5,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WHFast tutorial
This tutorial is an introduction to the python interface of WHFast, a fast and unbiased symplectic Wisdom-Holman integrator. This integrator is well suited for integrations of planetary systems in which the planets stay roughly on their orbits. If close encounters and collisions occur, then WHFast is not the right integrator. The WHFast method is described in detail in Rein & Tamayo (2015).
This tutorial assumes that you have already installed REBOUND.
First WHFast integration
You can enter all the commands below into a file and execute it all at once, or open an interactive shell).
First, we need to import the REBOUND module (make sure have have enabled the virtual environment if you used it to install REBOUND).
Step1: Next, we create a REBOUND simulation instance. This object encapsulated all the variables and functions that REBOUND has to offer.
Step2: Now, we can add particles. We'll work in units in which $G=1$ (see Units.ipynb for using different units). The first particle we add is the central object. We place it at rest at the origin and use the convention of setting the mass of the central object $M_*$ to 1
Step3: Let's look at the particle we just added
Step4: The output tells us that the mass of the particle is 1 and all coordinates are zero.
The next particle we're adding is a planet. We'll use Cartesian coordinates to initialize it. Any coordinate that we do not specify in the sim.add() command is assumed to be 0. We place our planet on a circular orbit at $a=1$ and give it a mass of $10^{-3}$ times that of the central star.
Step5: Instead of initializing the particle with Cartesian coordinates, we can also use orbital elements. By default, REBOUND (as well as WHFast internally) will use Jacobi coordinates, i.e. REBOUND assumes the orbital elements describe the particle's orbit around the centre of mass of all particles added previously. Our second planet will have a mass of $10^{-3}$, a semimajoraxis of $a=2$ and an eccentricity of $e=0.1$ (note that you shouldn't change G after adding particles this way, see Units.ipynb)
Step6: Now that we have added two more particles, let's have a quick look at what's in this simulation by using
Step7: You can see that REBOUND used the ias15 integrator as a default. Next, let's tell REBOUND that we want to use WHFast instead. We'll also set the timestep. In our system of units, an orbit at $a=1$ has an orbital period of $T_{\rm orb} =2\pi \sqrt{\frac{GM}{a}}= 2\pi$. So a reasonable timestep to start with would be $dt=10^{-3}$ (see Rein & Tamayo 2015 for some discussion on timestep choices).
Step8: whfast refers to the 2nd order symplectic integrator WHFast described by Rein & Tamayo (2015). By default, no symplectic correctors are used, but they can be easily turned on (see Advanced Settings for WHFast).
We are now ready to start the integration. Let's integrate the simulation for one orbit, i.e. until $t=2\pi$. Because we use a fixed timestep, rebound would have to change it to integrate exactly up to $2\pi$.
Note
Step9: Once again, let's look at what REBOUND's status is
Step10: As you can see the time has advanced to $t=2\pi$ and the positions and velocities of all particles have changed. If you want to post-process the particle data, you can access it in the following way
Step11: The particles object is an array of pointers to the particles. This means you can call particles = sim.particles before the integration and the contents of particles will be updated after the integration. If you add or remove particles, you'll need to call sim.particles again.
Visualization with matplotlib
Instead of just printing boring numbers at the end of the simulation, let's visualize the orbit using matplotlib (you'll need to install numpy and matplotlib to run this example, see Installation).
We'll use the same particles as above. As the particles are already in memory, we don't need to add them again. Let us plot the position of the inner planet at 100 steps during its orbit. First, we'll import numpy and create an array of times for which we want to have an output (here, from $T_{\rm orb}$ to $2 T_{\rm orb}$ (we have already advanced the simulation time to $t=2\pi$).
Step12: Next, we'll step through the simulation. Rebound will integrate up to time. Depending on the timestep, it might overshoot slightly. If you want to have the outputs at exactly the time you specify, you can set the exact_finish_time=1 flag in the integrate function (or omit it altogether, 1 is the default). However, note that changing the timestep in a symplectic integrator could have negative impacts on its properties.
Step13: Let's plot the orbit using matplotlib.
Step14: Hurray! It worked. The orbit looks like it should, it's an almost perfect circle. There are small perturbations though, induced by the outer planet. Let's integrate a bit longer to see them.
Step15: Oops! This doesn't look like what we expected to see (small perturbations to an almost circluar orbit). What you see here is the barycenter slowly drifting. Some integration packages require that the simulation be carried out in a particular frame, but WHFast provides extra flexibility by working in any inertial frame. If you recall how we added the particles, the Sun was at the origin and at rest, and then we added the planets. This means that the center of mass, or barycenter, will have a small velocity, which results in the observed drift. There are multiple ways we can get the plot we want to.
1. We can calculate only relative positions.
2. We can add the particles in the barycentric frame.
3. We can let REBOUND transform the particle coordinates to the barycentric frame for us.
Let's use the third option (next time you run a simulation, you probably want to do that at the beginning).
Step16: So let's try this again. Let's integrate for a bit longer this time.
Step17: That looks much more like it. Let us finally plot the orbital elements as a function of time. | Python Code:
import rebound
Explanation: WHFast tutorial
This tutorial is an introduction to the python interface of WHFast, a fast and unbiased symplectic Wisdom-Holman integrator. This integrator is well suited for integrations of planetary systems in which the planets stay roughly on their orbits. If close encounters and collisions occur, then WHFast is not the right integrator. The WHFast method is described in detail in Rein & Tamayo (2015).
This tutorial assumes that you have already installed REBOUND.
First WHFast integration
You can enter all the commands below into a file and execute it all at once, or open an interactive shell.
First, we need to import the REBOUND module (make sure you have enabled the virtual environment if you used it to install REBOUND).
End of explanation
sim = rebound.Simulation()
Explanation: Next, we create a REBOUND simulation instance. This object encapsulates all the variables and functions that REBOUND has to offer.
End of explanation
sim.add(m=1.)
Explanation: Now, we can add particles. We'll work in units in which $G=1$ (see Units.ipynb for using different units). The first particle we add is the central object. We place it at rest at the origin and use the convention of setting the mass of the central object $M_*$ to 1:
End of explanation
print(sim.particles[0])
Explanation: Let's look at the particle we just added:
End of explanation
sim.add(m=1e-3, x=1., vy=1.)
Explanation: The output tells us that the mass of the particle is 1 and all coordinates are zero.
The next particle we're adding is a planet. We'll use Cartesian coordinates to initialize it. Any coordinate that we do not specify in the sim.add() command is assumed to be 0. We place our planet on a circular orbit at $a=1$ and give it a mass of $10^{-3}$ times that of the central star.
End of explanation
sim.add(m=1e-3, a=2., e=0.1)
Explanation: Instead of initializing the particle with Cartesian coordinates, we can also use orbital elements. By default, REBOUND (as well as WHFast internally) will use Jacobi coordinates, i.e. REBOUND assumes the orbital elements describe the particle's orbit around the centre of mass of all particles added previously. Our second planet will have a mass of $10^{-3}$, a semimajoraxis of $a=2$ and an eccentricity of $e=0.1$ (note that you shouldn't change G after adding particles this way, see Units.ipynb):
End of explanation
sim.status()
Explanation: Now that we have added two more particles, let's have a quick look at what's in this simulation by using
End of explanation
sim.integrator = "whfast"
sim.dt = 1e-3
Explanation: You can see that REBOUND used the ias15 integrator as a default. Next, let's tell REBOUND that we want to use WHFast instead. We'll also set the timestep. In our system of units, an orbit at $a=1$ has an orbital period of $T_{\rm orb} =2\pi \sqrt{\frac{GM}{a}}= 2\pi$. So a reasonable timestep to start with would be $dt=10^{-3}$ (see Rein & Tamayo 2015 for some discussion on timestep choices).
End of explanation
sim.integrate(6.28318530717959, exact_finish_time=0) # 6.28318530717959 is 2*pi
Explanation: whfast refers to the 2nd order symplectic integrator WHFast described by Rein & Tamayo (2015). By default, no symplectic correctors are used, but they can be easily turned on (see Advanced Settings for WHFast).
We are now ready to start the integration. Let's integrate the simulation for one orbit, i.e. until $t=2\pi$. Because we use a fixed timestep, rebound would have to change it to integrate exactly up to $2\pi$.
Note: The default is for sim.integrate to simulate up to exactly the time you specify. This means that in general it has to change the timestep close to the output time to match things up. A changing timestep breaks WHFast's symplectic nature, so when using WHFast, you typically want to pass the exact_finish_time = 0 flag, which will instead integrate up to the timestep which is nearest to the endtime that you have passed to sim.integrate.
End of explanation
sim.status()
Explanation: Once again, let's look at what REBOUND's status is
End of explanation
particles = sim.particles
for p in particles:
print(p.x, p.y, p.vx, p.vy)
Explanation: As you can see the time has advanced to $t=2\pi$ and the positions and velocities of all particles have changed. If you want to post-process the particle data, you can access it in the following way:
End of explanation
import numpy as np
torb = 2.*np.pi
Noutputs = 100
times = np.linspace(torb, 2.*torb, Noutputs)
x = np.zeros(Noutputs)
y = np.zeros(Noutputs)
Explanation: The particles object is an array of pointers to the particles. This means you can call particles = sim.particles before the integration and the contents of particles will be updated after the integration. If you add or remove particles, you'll need to call sim.particles again.
Visualization with matplotlib
Instead of just printing boring numbers at the end of the simulation, let's visualize the orbit using matplotlib (you'll need to install numpy and matplotlib to run this example, see Installation).
We'll use the same particles as above. As the particles are already in memory, we don't need to add them again. Let us plot the position of the inner planet at 100 steps during its orbit. First, we'll import numpy and create an array of times for which we want to have an output (here, from $T_{\rm orb}$ to $2 T_{\rm orb}$ (we have already advanced the simulation time to $t=2\pi$).
End of explanation
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
x[i] = particles[1].x
y[i] = particles[1].y
Explanation: Next, we'll step through the simulation. Rebound will integrate up to time. Depending on the timestep, it might overshoot slightly. If you want to have the outputs at exactly the time you specify, you can set the exact_finish_time=1 flag in the integrate function (or omit it altogether, 1 is the default). However, note that changing the timestep in a symplectic integrator could have negative impacts on its properties.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
plt.plot(x, y);
Explanation: Let's plot the orbit using matplotlib.
End of explanation
Noutputs = 1000
times = np.linspace(2.*torb, 20.*torb, Noutputs)
x = np.zeros(Noutputs)
y = np.zeros(Noutputs)
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
x[i] = particles[1].x
y[i] = particles[1].y
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
plt.plot(x, y);
Explanation: Hurray! It worked. The orbit looks like it should, it's an almost perfect circle. There are small perturbations though, induced by the outer planet. Let's integrate a bit longer to see them.
End of explanation
sim.move_to_com()
Explanation: Oops! This doesn't look like what we expected to see (small perturbations to an almost circular orbit). What you see here is the barycenter slowly drifting. Some integration packages require that the simulation be carried out in a particular frame, but WHFast provides extra flexibility by working in any inertial frame. If you recall how we added the particles, the Sun was at the origin and at rest, and then we added the planets. This means that the center of mass, or barycenter, will have a small velocity, which results in the observed drift. There are multiple ways we can get the plot we want.
1. We can calculate only relative positions.
2. We can add the particles in the barycentric frame.
3. We can let REBOUND transform the particle coordinates to the barycentric frame for us.
Let's use the third option (next time you run a simulation, you probably want to do that at the beginning).
End of explanation
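For comparison, the first option (working with relative positions) needs no change of frame at all: you simply subtract the star's coordinates from the planet's before storing them. The few lines below are a minimal sketch of that alternative, assuming the same sim, particles, times, x and y arrays as above; they are not part of the original tutorial.
# Option 1: plot the planet's position relative to the star instead of
# moving the whole system to the barycentric frame.
for i, time in enumerate(times):
    sim.integrate(time, exact_finish_time=0)
    # subtracting the star's coordinates cancels the drift of the barycenter
    x[i] = particles[1].x - particles[0].x
    y[i] = particles[1].y - particles[0].y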
times = np.linspace(20.*torb, 1000.*torb, Noutputs)
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
x[i] = particles[1].x
y[i] = particles[1].y
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-1.5,1.5])
ax.set_ylim([-1.5,1.5])
plt.scatter(x, y, marker='.', color='k', s=1.2);
Explanation: So let's try this again. Let's integrate for a bit longer this time.
End of explanation
times = np.linspace(1000.*torb, 9000.*torb, Noutputs)
a = np.zeros(Noutputs)
e = np.zeros(Noutputs)
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
a[i] = sim.particles[2].a
e[i] = sim.particles[2].e
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(121)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a);
ax = plt.subplot(122)
ax.set_xlabel("time")
ax.set_ylabel("eccentricity")
plt.plot(times, e);
Explanation: That looks much more like it. Let us finally plot the orbital elements as a function of time.
End of explanation |
5,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Partial function application
2. Pattern matching
Ciastocna aplikacia - Partially applied functions
http
Step1: Iny priklad
Step2: Balicek functools ma na to funkciu, ktora definiciu takychto funkcii robi este pohodlnejsiu
Step3: Iny priklad, uprveny konstruktor int
Step4: Problem je v tom, ze skoro vsetky priklady na internete, ktore najdete su z toho ako vyrobit power funkcie alebo nieco podobne trivialne
Skusme nieco trivialne, ale praktickejsie
Napriklad funkciu, ktora ma vypisovat do nejakeho specialneho suboru. Napriklad chyboveho vystupu
Step5: Toto by som vedel dosiahnut aj dekoratorom, aj lambdou aj pomocou closure ale takto je to asi najjednoduchsie
Skusme si niektore z toho naprogramovat v ramci opakovania
Step6: Skusme partial application pouzit na refaktorovanie takehoto kodu
Step7: regularne vyrazy sa daju vytiahnut do funkcie
Step8: Vidite tam to opakovanie kodu?
Ako by to bolo cele prerobene?
Step9: Dalsie priklady na pouzitie partial pri refactoringu
http
Step10: To iste by fungovalo aj na "objekt" vytvoreny pomocou closure
Step11: Viete si tak vytvorit viacere konstruktory pre tu istu triedu
Co vam brani vytvorit si konstruktor pre nejaky specialny typ loggera alebo objektu na citanie nejakeho specialneho typu suboru.
Nemsuite stale opakovat tie iste parametre vo volani konstruktora / funkcie.
Viete to pouzit nie len na specializovanie, ale aj na oddelenie zadavania parametrov funkcie a jej vykonania v case.
Kolko krat sa vam stalo, ze ste vedeli davno v programe aku funkciu budete musiet zavolat a aj s castou argumentov, ale museli ste cakat az do nejakeho casu, kde ste dostali aj zvysok a museli ste parametre predavat spolu s funkciou / objektom na ktorom bola metoda
Ak by ste vedeli vyrobit funkciu, s niektorymi parametrami prednastavenymi, tak by vam stacilo posuvat si tuto jednu funkciu a nemuseli by ste si presuvat vsetky parametre az do miesta, kde ich nakoniec vlozite pri volani funkcie
Step12: Takto by sa to dalo spravit ak by sme pouzili partial application pomocou vnorenej funkcie.
Step13: Teraz o cool funkcionalej vlastnosti, ktora v Pythone nie je
Pattern matching
Multimethods
Multiple dispatch
Multiple dispatch (and poor men's patter matching) in Java
http
Step14: Toto nebol multiple dispatch. Toto bol overloading pretoze sa to rozhodovalo v case kompilacie.
preto by sa vypisalo "Hello banana" na zaklade typu premennej a nie "Hello Fruit" na zaklade typu objektu
multiple dispatch sa rozhoduje dynamicky na zaklade objektu
Multiple dispatch by som dosiahol napriklad ak by print bola metoda objektu.
Nanestastie Python nema ani multiple dispatch a ani overloading
Nema zmysel definovat dve funkcie s rovnakym menom
Step15: A je jedno, ci maju rovnaky pocet parametrov alebo rozny. Ani definovanie typu pomocou anotacie v pythone 3 mi nepomoze
Vzdy si len prepisem funkciu inou
Nikdy sa nerozhodne na zaklade parametrov, ktora by sa mala pouzit (tak ako je to napriklad v jave)
Step16: V standardnej kniznici jendoducho nie su prostriedky na to, aby som vedel definovat vacero rovnakych fukcii a na zaklade atributov rozhodnut ktora sa ma zavolat
toto plati aj pre metody
nevieme napriklad definovat ani metodu triedy a objektu, ktora sa rovnako vola
Step17: Nevyzeralo by to ovela lepsie takto?
Tento slajd nevidite.
je tu len pre to, aby bol kod na dalsom slajde vykonatelny. Je to kod, ktorym vkladam zelanu funkcionalitu do jazyka
Step18: Co na to treba?
dekorator, ktory do nejakej struktury bude odkladat funkcie a parametre
je potrebne overenie, ktora funkcia je ta spravna
dekorator musi vratit funkciu, ktora sa pozrie do struktury s funkciami, postupne bude overovat, ci sa typy a pocty atributov zhoduju a potom jednu fukciu zavolat
cele to ma menej ako 20 riadkov (koho to zaujima, moze sa pozriet o par saljdov vyssie ako sa to da spravit)
Obmedzenia?
nefunguje to na zaklade pomenovanych atributov
neda sa pouzit premenlivy pocet atributov
atributy sa porovnavaju len na zaklade typov. Napada mi milion sposobov, ako by som chcel atributy porovnavat zlozitejsie
Mozno ina implementacia mi da vacsiu volnost
http
Step19: No a posledna kniznica so zaujimavou syntaxou
https | Python Code:
def add(a, b):
return a + b
def make_adder(a) :
def adder(b) :
return add(a, b)
return adder
add_two = make_adder(20)
add_two(4)
Explanation: 1. Partial function application
2. Pattern matching
Partial application - Partially applied functions
http://blog.dhananjaynene.com/tags/functional-programming/
Partial application transforms a function with some number of parameters into another function with fewer parameters
In other words, it fixes some of the parameters
f:(X × Y × Z) → N
partial(f):(Y × Z) → N
Yesterday I hinted at how something like this can be done with a closure
End of explanation
def make_power(exponent):
def power(x):
return x**exponent
return power
square = make_power(2)
print(square(3))
print(square(30))
square(300)
Explanation: Another example
End of explanation
from functools import partial
def power(base, exponent):
return base ** exponent
cube = partial(power, 3)
cube(2)
def power(base, exponent):
return base ** exponent
cube = partial(power, exponent=3)
cube(2)
Explanation: The functools package has a function for this that makes defining such functions even more convenient
End of explanation
basetwo = partial(int, base=2)
basetwo('111010101')
Explanation: Another example, a customized int constructor
End of explanation
import sys
from functools import partial
print_stderr = partial(print, file=sys.stderr)
print_stderr("pokus")
Explanation: The problem is that almost all the examples you find on the internet are about building power functions or something similarly trivial
Let's try something trivial but more practical
For example a function that should write to some special file, for example the error output
End of explanation
# print_stderr = partial(print, file=sys.stderr)
print_stderr = lambda x: print(x, file=sys.stderr)
print_stderr('hahahhaha')
Explanation: I could achieve this with a decorator, a lambda, or a closure as well, but this way is probably the simplest
As a review, let's try to program some of these ourselves
End of explanation
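As the text suggests, the same specialized printer can also be built with a closure or with a small decorator. The sketch below shows both variants purely as a review exercise; the names make_stderr_printer and to_stderr are illustrative and not part of the original notebook.
import sys
import functools
def make_stderr_printer():            # closure variant
    def printer(*args, **kwargs):
        print(*args, file=sys.stderr, **kwargs)
    return printer
def to_stderr(func):                   # decorator variant
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, file=sys.stderr, **kwargs)
    return wrapper
print_stderr_closure = make_stderr_printer()
print_stderr_decorated = to_stderr(print)
print_stderr_closure("via closure")
print_stderr_decorated("via decorator")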
for text in lines:
if re.search('[a-zA-Z]\=', text):
some_action(text)
elif re.search('[a-zA-Z]\s\=', text):
some_other_action(text)
else:
some_default_action()
Explanation: Let's try using partial application to refactor code like this
End of explanation
def is_grouped_together(text): # try turning this into a partial
return re.search("[a-zA-Z]\s\=", text)
def is_spaced_apart(text):
return re.search("[a-zA-Z]\s\=", text)
def and_so_on(text):
return re.search("pattern_188364625", text)
for text in lines:
if is_grouped_together(text):
some_action(text)
elif is_spaced_apart(text):
some_other_action(text)
else:
some_default_action()
Explanation: the regular expressions can be extracted into functions
End of explanation
is_spaced_apart = partial(re.search, '[a-zA-Z]\s\=')
is_grouped_together = partial(re.search, '[a-zA-Z]\=')
for text in lines:
if is_grouped_together(text):
some_action(text)
elif is_spaced_apart(text):
some_other_action(text)
else:
some_default_action()
Explanation: Do you see the code duplication there?
How would it all look after the rewrite?
End of explanation
class Tovar:
def __init__(self, typ, mnozstvo=0):
self.typ=typ
self.mnozstvo=mnozstvo
def write(self):
return '{}: {}'.format(self.typ, self.mnozstvo)
nakup_jablk = Tovar('jablka', 3)
print(nakup_jablk.write())
Jablko = partial(Tovar, 'jablka')
Jablko(4).write()
Explanation: More examples of using partial in refactoring
http://chriskiehl.com/article/Cleaner-coding-through-partially-applied-functions/
And why not use it for a specialized constructor?
End of explanation
import pyrsistent as ps
def Tovar(typ, mnozstvo):
def write():
return '{}: {}'.format(typ, mnozstvo)
return ps.freeze({'write': write})
Jablko = partial(Tovar, 'jablka')
Jablko(5).write()
Explanation: The same would also work on an "object" created with a closure
End of explanation
def query_database(userid, password, query):
    # do query
    # return results
    pass
def bar(userid, password):
    return query_database(userid, password, "...some query...")
def foo(userid, password):
    return bar(userid, password)
def main(userid, password):
    # .. lot of code here .. eventually reaching
    foo(userid, password)
Explanation: This way you can create several constructors for the same class
Nothing stops you from creating a constructor for some special kind of logger, or for an object that reads a special kind of file.
You don't have to keep repeating the same parameters in every constructor / function call.
You can use it not only for specialization, but also to separate supplying a function's parameters from executing it later in time.
How many times has it happened that you knew early on in a program which function you would have to call, and even part of its arguments, but you had to wait until some later point to get the rest, and in the meantime you had to pass the parameters around together with the function / the object the method lived on
If you could create a function with some of its parameters preset, it would be enough to pass that one function around, and you would not have to carry all the parameters to the place where you finally supply them when calling the function
End of explanation
def get_query_agent(userid, password):
    def do_query(query):
        # do query
        # return results
        pass
    return do_query
def bar(querying_func):
    return querying_func("...some query...")
def foo(querying_func):
    return bar(querying_func)
def main(userid, password):
    query_agent = get_query_agent(userid, password)
    # .. much further down the line
    foo(query_agent)
Explanation: This is how it could be done if we used partial application by means of a nested function.
End of explanation
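The same "supply the credentials now, run the query later" separation can also be written with functools.partial instead of a hand-rolled closure. A minimal sketch follows; query_database is an illustrative stub and the names report and run_query are mine, not the original notebook's.
from functools import partial
def query_database(userid, password, query):
    # do query and return results (stub for illustration)
    return "results of {!r} for {}".format(query, userid)
def report(run_query):
    # much further down the line, only the prepared function is needed
    return run_query("SELECT 1")
def main(userid, password):
    run_query = partial(query_database, userid, password)
    return report(run_query)
print(main("alice", "secret"))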
# -- JAVA --
static void print(Fruit f) {
sysout("Hello Fruit");
}
static void print(Banana b) {
sysout("Hello Banana");
}
Banana banana = new Fruit();
print(banana)
Explanation: Now for a cool functional feature that Python does not have
Pattern matching
Multimethods
Multiple dispatch
Multiple dispatch (and poor men's pattern matching) in Java
http://blog.efftinge.de/2010/03/multiple-dispatch-and-poor-mens-patter.html I am including the link mainly because of the article's title :)
End of explanation
def pokus(a):
print('pokus1')
def pokus():
print('pokus2')
pokus()
Explanation: This was not multiple dispatch. This was overloading, because the decision was made at compile time.
that is why it would print "Hello Banana" based on the type of the variable, and not "Hello Fruit" based on the type of the object
multiple dispatch decides dynamically, based on the object
I would get dynamic dispatch, for example, if print were a method of the object.
Unfortunately Python has neither multiple dispatch nor overloading
There is no point in defining two functions with the same name
End of explanation
def pokus(a:str, b:list):
print('pokus1')
def pokus(b:int):
print('pokus2')
pokus('3', [])
Explanation: And it does not matter whether they take the same number of parameters or a different one. Even declaring types with annotations in Python 3 does not help me
I always just overwrite one function with the other
It never decides, based on the parameters, which one should be used (as Java does, for example)
End of explanation
# it often happens that code ends up looking something like this
def foo(a, b):
    if isinstance(a, int) and isinstance(b, int):
        # ...code for two ints...
        pass
    elif isinstance(a, float) and isinstance(b, float):
        # ...code for two floats...
        pass
    elif isinstance(a, str) and isinstance(b, str):
        # ...code for two strings...
        pass
    else:
        raise TypeError("unsupported argument types (%s, %s)" % (type(a), type(b)))
Explanation: The standard library simply has no tools that would let me define several functions with the same name and decide, based on the arguments, which one should be called
this also applies to methods
for example, we cannot even define a class method and an instance method with the same name :(
A lot of people have already thought that something like this would be pretty cool, and made attempts to bring it into the language
http://www.grantjenks.com/docs/pypatt-python-pattern-matching/
https://github.com/lihaoyi/macropy - module import
https://github.com/Suor/patterns - decorator with funky syntax - Shared at Python Brazil 2013
https://github.com/mariusae/match - http://monkey.org/~marius/pattern-matching-in-python.html - operator overloading
http://blog.chadselph.com/adding-functional-style-pattern-matching-to-python.html - multi-methods
http://svn.colorstudy.com/home/ianb/recipes/patmatch.py - multi-methods
http://www.artima.com/weblogs/viewpost.jsp?thread=101605 - the original multi-methods
http://speak.codebunk.com/post/77084204957/pattern-matching-in-python - multi-methods supporting callables
http://www.aclevername.com/projects/splarnektity/ - not sure how it works but the syntax leaves a lot to be desired
https://github.com/martinblech/pyfpm - multi-dispatch with string parsing
https://github.com/jldupont/pyfnc - multi-dispatch
http://www.pyret.org/ - It’s own language
None of these libraries is as good as a feature fully built into a functional language, but at least on this modest example I will try to show what could be done with something like this.
Multimethods
even Guido van Rossum noticed that it could be quite nice
http://www.artima.com/weblogs/viewpost.jsp?thread=101605
End of explanation
registry = {}
class MultiMethod(object):
def __init__(self, name):
self.name = name
self.typemap = {}
def __call__(self, *args):
types = tuple(arg.__class__ for arg in args) # a generator expression!
function = self.typemap.get(types)
if function is None:
raise TypeError("no match")
return function(*args)
def register(self, types, function):
if types in self.typemap:
raise TypeError("duplicate registration")
self.typemap[types] = function
def multimethod(*types):
def register(function):
name = function.__name__
mm = registry.get(name)
if mm is None:
mm = registry[name] = MultiMethod(name)
mm.register(types, function)
return mm
return register
@multimethod(int, int)
def foo(a, b):
print('int int')
@multimethod(float, float)
def foo(a, b):
print('float float')
@multimethod(str, str)
def foo(a, b):
print('str str')
foo(1,1)
Explanation: Wouldn't it look much better like this?
You do not see this slide.
it is here only so that the code on the next slide is executable. It is the code with which I plug the desired functionality into the language
End of explanation
from patternmatching import ifmatches, Any, OfType, Where
@ifmatches
def greet(gender=OfType(str), name="Joey"):
print("Joey, whats up man?")
@ifmatches
def greet(gender="male", name=Any):
print("Hello Mr. {}".format(name))
@ifmatches
def greet(gender="female", name=Any):
print("Hello Ms. {}".format(name))
@ifmatches
def greet(gender=Any, name=Where(str.isupper)):
print("Hello {}. IMPORTANT".format("Mr" if gender == 'male' else "Ms"))
@ifmatches
def greet(gender=Any, name=Any):
print("Hello, {}".format(name))
greet('male', 'JAKUB')
Explanation: What does it take?
a decorator that stores the functions and their parameters in some structure
we need a check that determines which function is the right one
the decorator must return a function that looks into the structure of stored functions, checks one by one whether the types and number of arguments match, and then calls one of the functions
the whole thing is less than 20 lines (anyone interested can look a few slides up to see how it can be done)
Limitations?
it does not work with keyword arguments
a variable number of arguments cannot be used
arguments are compared only by their types. I can think of a million ways in which I would want to compare arguments in a more sophisticated way
Maybe a different implementation will give me more freedom
http://blog.chadselph.com/adding-functional-style-pattern-matching-to-python.html
End of explanation
from patterns import patterns, Mismatch
@patterns
def factorial():
if 0: 1
if n is int: n * factorial(n-1)
if []: []
if [x] + xs: [factorial(x)] + factorial(xs)
if {'n': n, 'f': f}: f(factorial(n))
factorial(0)
factorial(5)
factorial([3,4,2])
factorial({'n': [5, 1], 'f': sum})
factorial('hello')
Explanation: And finally, the last library, with an interesting syntax
https://github.com/Suor/patterns
End of explanation |
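One standard-library footnote that is worth knowing in this context: since Python 3.4, functools.singledispatch provides dispatch on the type of the first argument only. It does not cover the multiple-dispatch and pattern-matching cases discussed above, but for the common single-argument case it replaces the isinstance chains. A minimal sketch (the function names are illustrative):
from functools import singledispatch
@singledispatch
def describe(value):
    return "something else: {!r}".format(value)
@describe.register(int)
def _(value):
    return "an int: {}".format(value)
@describe.register(str)
def _(value):
    return "a string: {!r}".format(value)
print(describe(42))
print(describe("hello"))
print(describe([1, 2]))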
5,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Big 3 statistics
uslisted.txt
Step1: Show types of companies
Step2: Step 1. Include only publicly listed companies from the US
Keep only larger shareholder percentage between direct and total shareholder percentages
In case of a name code instead of percentage convert as follow
Step3: Step2
Step4: Step 3
Step5: Step 4
Step6: Step 5 | Python Code:
#Install libraries needed
!pip install --upgrade pip
!pip install pandas
!pip install numpy
#Import libraries needed
import pandas as pd
import numpy as np
from collections import Counter
Explanation: Big 3 statistics
uslisted.txt: Downloaded from Orbis, file containing the following fields:
'Company name'
'Country ISO Code',
'NACE Rev. 2 Core code (4 digits)',
'Operating revenue (Turnover) th USD Last avail. yr',
'Number of employees Last avail. yr',
'GUO - Name',
'Ticker symbol',
'Stock exchange(s) listed',
'Shareholder - BvD ID number',
'Shareholder - Direct %',
'Shareholder - Total %', 'BvD ID number',
'Current market capitalisation th USD',
'Shareholder - Name',
'Total assets (last value) th USD',
'Type of entity'
IMPORTANT: This data is not provided, only the end result, big3_position.csv, which can be used to replicate all figures (steps 3-5). big3_position.csv has the following data:
Company_name:
Company_ID: Orbis ID
Big3Share: Share of the big3 together
Position: Position of the big 3 among all shareholders
Revenue: Firm's revenue
Assets: Firm's assets
Employees: Firm's number of employees
MarketCap: Firm's market capitalization
Exchange: Firm's exchange
TypeEnt: Firm's type of entity
We excluded:
Non-US exchanges and private US exchanges. The exchanges that were remaining were: "'NYSE MKT','NYSE ARCA','NASDAQ/NMS (Global Market)','NASDAQ National Market', 'New York Stock Exchange (NYSE)'
Private equity firms and Funds. We added JPMORGAN CHASE & CO (BvD ID US132624428) by hand since Orbis has mistakenly classified this very large US bank as a Private Equity firm.
US041867445 (State Street Bank and Trust Co) because this subsidiary of State Street acts as a custodian, holding the shares for the ultimate owners.
Public ownership (many small ownership combines) and owners whose ID start by ZZ (no people or companies).
End of explanation
df = pd.read_csv("uslisted.txt",encoding="utf-16",sep="\t",na_values = ["-","n.a."],thousands=",")
df = df.loc[df["Stock exchange(s) listed"].
isin([ 'NYSE MKT','NYSE ARCA','NASDAQ/NMS (Global Market)','NASDAQ National Market',
'New York Stock Exchange (NYSE)'])]
df = df.drop_duplicates(subset="BvD ID number")
c= Counter(df["Type of entity"])
c
Explanation: Show types of companies
End of explanation
df = pd.read_csv("uslisted.txt",encoding="utf-16",sep="\t",na_values = ["-","n.a."],thousands=",")
df = df.loc[df["Stock exchange(s) listed"].isin([ 'NYSE MKT','NYSE ARCA','NASDAQ/NMS (Global Market)','NASDAQ National Market','New York Stock Exchange (NYSE)'])]
df = df.loc[df["Type of entity"].isin(['Foundation/Research institute', 'Bank', 'Venture capital', 'Financial company',
'Industrial company', 'Insurance company']),:]
df = df.loc[df["Shareholder - BvD ID number"] != "US041867445",:]
df = df.loc[df["Shareholder - Name"] != "PUBLIC",:]
companies = df.loc[:,["Company name","BvD ID number","Operating revenue (Turnover) th USD Last avail. yr",
"Total assets (last value) th USD",'Number of employees Last avail. yr',
"Current market capitalisation th USD","Stock exchange(s) listed","Type of entity"]].drop_duplicates()
ownership = df.loc[:,["Company name","BvD ID number","Shareholder - Name","Shareholder - BvD ID number",
"Shareholder - Direct %","Shareholder - Total %"]]
d = {np.NaN: np.NaN,"NG": 0.01,"MO": 50.01, "WO": 98.01, "GP": 50.01,">50.00":50.01}
#NG:0 MO: 50.01 WO: 98.1 GP"
for i in sorted([_ for _ in set(ownership["Shareholder - Direct %"]) | set(ownership["Shareholder - Total %"]) if isinstance(_,str)]):
if not d.get(i):
try:
d[i] = float(i)
except:
d[i] = float(i[1:])
ownership["Shareholder - Direct %"] = ownership["Shareholder - Direct %"].apply(lambda x: d[x])
ownership["Shareholder - Total %"] = ownership["Shareholder - Total %"].apply(lambda x: d[x])
ownership["max"] = ownership.apply(lambda x: np.nanmax([x["Shareholder - Direct %"],x["Shareholder - Total %"]]),axis=1)
Explanation: Step 1. Include only publicly listed companies from the US
Keep only larger shareholder percentage between direct and total shareholder percentages
In case of a name code instead of percentage convert as follows: {"NG": 0.01,"MO": 50.01, "WO": 98.01, "GP": 50.01,">50.00":50.01}
End of explanation
#"US041867445"
with open("big3_position.csv","w+") as f:
f.write("{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\n".
format("Company_name","Company_ID","Big3Share","Position","Revenue","Assets","Employees","MarketCap","Exchange","TypeEnt"))
for id,g in ownership.groupby("BvD ID number"):
sum_big3 = g.loc[g["Shareholder - BvD ID number"].isin(['US149144472L', 'US320174431', 'US042456637']),"max"].sum()
t = g.loc[g["Shareholder - BvD ID number"] != "US041867445",:].sort_values(by="max",ascending=False,na_position="last")
if sum_big3 == 0: position = 100
else: position = 1
for i,values in t.iterrows():
if isinstance(values["Shareholder - BvD ID number"],float): continue
if values.values[3][:2] != "ZZ":
if values.values[-1] >=sum_big3: position+=1
else: break
r,a,e,m,exchange,typeent = companies.loc[companies["BvD ID number"] == values["BvD ID number"],:].values[0][-6:]
#print(companies.loc[companies["BvD ID number"] == values["BvD ID number"],:].values[0])
f.write("{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\t{}\n".format(values["Company name"],values["BvD ID number"],sum_big3,position,r,a,e,m,exchange,typeent))
Explanation: Step2: Create file with the sum of big3 and the largest shareholder
End of explanation
df = pd.read_csv("big3_position.csv",sep="\t")
from collections import Counter
c = Counter(df["Position"])
sumc = np.sum([c[_] for _ in range(0,1007)])/100
print("Percentage of companies and percentage of market capitalization")
print(c[1]/sumc,100*np.sum(df.loc[df["Position"] == 1,"MarketCap"]/np.sum(df["MarketCap"])))
print(c[2]/sumc,100*np.sum(df.loc[df["Position"] == 2,"MarketCap"]/np.sum(df["MarketCap"])))
print(c[3]/sumc,100*np.sum(df.loc[df["Position"] == 3,"MarketCap"]/np.sum(df["MarketCap"])))
print((np.sum([c[_] for _ in range(4,1007)])/sumc),
(100*np.sum(df.loc[~df["Position"].isin([1,2,3]),"MarketCap"]/np.sum(df["MarketCap"]))))
print()
print("Number of companies and capitalization (billions)")
print(c[1],np.sum(df.loc[df["Position"] == 1,"MarketCap"])*1000/1E9)
print(c[2],np.sum(df.loc[df["Position"] == 2,"MarketCap"])*1000/1E9)
print(c[3],np.sum(df.loc[df["Position"] == 3,"MarketCap"])*1000/1E9)
print(sumc*100-c[1]-c[2]-c[3], (np.sum(df["MarketCap"])
-np.sum(df.loc[df["Position"] == 1,"MarketCap"])
-np.sum(df.loc[df["Position"] == 2,"MarketCap"])
-np.sum(df.loc[df["Position"] == 3,"MarketCap"]))*1000/1E9)
print()
c.most_common(10)
Explanation: Step 3: Print stats to create the figure on ownership A and B
End of explanation
print("Ownership of each member of the big three and sum of means")
for i in [1,2,3]:
df2 = df.loc[df["Position"] == i,:]
o = pd.merge(ownership,companies,on="BvD ID number")
o = pd.merge(o,df2,left_on="BvD ID number",right_on="Company_ID")
o["x"] = o["max"]*o["Current market capitalisation th USD"]
v = o.loc[o["Shareholder - BvD ID number"] == 'US149144472L',"max"].mean() #V
b = o.loc[o["Shareholder - BvD ID number"] == 'US320174431',"max"].mean() #BLK
s = o.loc[o["Shareholder - BvD ID number"] == 'US042456637',"max"].mean() #SS
print(1*v,1*b,1*s,1*(v+b+s))
df2 = df.loc[~df["Position"].isin([1,2,3]),:]
o = pd.merge(ownership,companies,on="BvD ID number")
o = pd.merge(o,df2,left_on="BvD ID number",right_on="Company_ID")
o["x"] = o["max"]*o["Current market capitalisation th USD"]
v = o.loc[o["Shareholder - BvD ID number"] == 'US149144472L',"max"].mean() #V
b = o.loc[o["Shareholder - BvD ID number"] == 'US320174431',"max"].mean() #BLK
s = o.loc[o["Shareholder - BvD ID number"] == 'US042456637',"max"].mean() #SS
print(1*v,1*b,1*s,1*(v+b+s))
print()
print("Mean of sum of Ownership of each member of the big three")
for i in [1,2,3]:
print(1*df.loc[df["Position"] == i,"Big3Share"].mean())
print(1*df.loc[~df["Position"].isin([1,2,3]),"Big3Share"].mean())
Explanation: Step 4: Create percentages for figure on ownership C
End of explanation
df = pd.read_csv("big3_position.csv",sep="\t")
endogenous_own = ownership.copy()
endogenous_own = endogenous_own.loc[endogenous_own["max"]>=3.]
endogenous_own = endogenous_own.loc[endogenous_own["BvD ID number"] != endogenous_own["Shareholder - BvD ID number"],:]
edges = endogenous_own.loc[:,["BvD ID number","Shareholder - BvD ID number","max"]]
edges.columns = ["Source","Target","Weight"]
edges["Type"] = "Directed"
e1 = endogenous_own[["BvD ID number","Company name"]]
e1.columns = ["Id","Label"]
e2 = endogenous_own[["Shareholder - BvD ID number","Shareholder - Name"]]
e2.columns = ["Id","Label"]
nodes = pd.concat([e1,e2]).drop_duplicates()
nodes = pd.merge(nodes,df,left_on="Id",right_on="Company_ID")
nodes = nodes[["Id","Label","Position","Exchange","TypeEnt"]]
for i in range(4,1000):
d[i] = 4
d[1] = 1
d[2] = 2
d[3] = 3
nodes["Position"] = nodes["Position"].apply(lambda x: d[x])
nodes.loc[nodes["Id"] == "US320174431","Position"] = 0
nodes.loc[nodes["Id"] == "US042456637","Position"] = 0
#Vanguard and FMR added by hand to have their names in the file
nodes.loc[122312] = ["US149144472L","VANGUARD INC via its funds",0,"None","Bank"]
nodes.loc[122313] = ["US126246544L","FMR LLC",0,"None","Bank"]
nodes.to_csv("nodes_allmarket.csv",sep="\t",index = None)
edges.to_csv("edges_allmarket.csv",sep="\t",index = None)
Explanation: Step 5: Create network of ownership based on position in the network
End of explanation |
5,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enron Scandal
Step1: 1. Data Processing and Exploratory Data Analysis
Load the Data
Step2: Explore the Data
Step3: Imbalanced target
Step4: Transform the data
Step5: Missing features
Step6: High-missing features, like 'loan_advances', are needed to obtain better models
Remove irrelevant features
Step7: Classify variables
Step8: Fill missing values
Step9: Visualize the data
Step10: Numerical features
Step11: Target vs Numerical features
Step12: Total stock value vs some features
Step13: The person of interest seems to have a higher stock vs salary and long-term incentive, especially when his stock value is high. There is no dependency between POI and the amount of emails from or to another person of interest.
Correlation between numerical features and target
Step14: 2. Neural Network model
Select the features
Step15: Scale numerical features
Shift and scale numerical variables to a standard normal distribution. The scaling factors are saved to be used for predictions.
Step16: There are no categorical variables
Split the data into training and test sets
Data leakage
Step17: Encode the output
Step18: Build a dummy classifier
Step19: Build the Neural Network for Binary Classification
Step20: Evaluate the model
Step21: Compare with non-neural network models | Python Code:
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import helper
import keras
helper.info_gpu()
#sns.set_palette("Reds")
helper.reproducible(seed=0) # setup reproducible results from run to run using Keras
%matplotlib inline
%load_ext autoreload
%autoreload
Explanation: Enron Scandal: Indentifying Person of Interest
Identification of Enron employees who may have committed fraud
Supervised Learning. Classification
Data: Enron financial dataset from Udacity
End of explanation
data_path = 'data/enron_financial_data.pkl'
target = ['poi']
df = pd.read_pickle(data_path)
df = pd.DataFrame.from_dict(df, orient='index')
Explanation: 1. Data Processing and Exploratory Data Analysis
Load the Data
End of explanation
helper.info_data(df, target)
Explanation: Explore the Data
End of explanation
df.head(3)
Explanation: Imbalanced target: the evaluation metric used in this problem is the Area Under the ROC Curve
poi = person of interest (boolean)
End of explanation
# delete 'TOTAL' row (at the bottom)
if 'TOTAL' in df.index:
df.drop('TOTAL', axis='index', inplace=True)
# convert dataframe values (objects) to numerical. There are no categorical features
df = df.apply(pd.to_numeric, errors='coerce')
Explanation: Transform the data
End of explanation
helper.missing(df)
Explanation: Missing features
End of explanation
df.drop('email_address', axis='columns', inplace=True)
Explanation: High-missing features, like 'loan_advances', are needed to obtain better models
Remove irrelevant features
End of explanation
num = list(df.select_dtypes(include=[np.number]))
df = helper.classify_data(df, target, numerical=num)
helper.get_types(df)
Explanation: Classify variables
End of explanation
# Replace NaN values with the median
df.fillna(df.median(), inplace=True)
#helper.fill_simple(df, target, inplace=True) # same result
Explanation: Fill missing values
End of explanation
df.describe(percentiles=[0.5]).astype(int)
Explanation: Visualize the data
End of explanation
helper.show_numerical(df, kde=True, ncols=5)
Explanation: Numerical features
End of explanation
helper.show_target_vs_numerical(df, target, jitter=0.05, point_size=50, ncols=5)
Explanation: Target vs Numerical features
End of explanation
# df.plot.scatter(x='salary', y='total_stock_value')
# df.plot.scatter(x='long_term_incentive', y='total_stock_value')
# sns.lmplot(x="salary", y="total_stock_value", hue='poi', data=df)
# sns.lmplot(x="long_term_incentive", y="total_stock_value", hue='poi', data=df)
g = sns.PairGrid(
df,
y_vars=["total_stock_value"],
x_vars=["salary", "long_term_incentive", "from_this_person_to_poi"],
hue='poi',
size=4)
g.map(sns.regplot).add_legend()
plt.ylim(
ymin=0, ymax=0.5e8)
#sns.pairplot(df, hue='poi', vars=['long_term_incentive', 'total_stock_value', 'from_poi_to_this_person'], kind='reg', size=3)
Explanation: Total stock value vs some features
End of explanation
helper.correlation(df, target)
Explanation: The person of interest seems to have a higher stock vs salary and long-term incentive, especially when his stock value is high. There is no dependency between POI and the amount of emails from or to another person of interest.
Correlation between numerical features and target
End of explanation
droplist = [] # features to drop from the model
# For the model 'data' instead of 'df'
data = df.copy()
data.drop(droplist, axis='columns', inplace=True)
data.head(3)
Explanation: 2. Neural Network model
Select the features
End of explanation
data, scale_param = helper.scale(data)
Explanation: Scale numerical features
Shift and scale numerical variables to a standard normal distribution. The scaling factors are saved to be used for predictions.
End of explanation
test_size = 0.4
random_state = 9
x_train, y_train, x_test, y_test = helper.simple_split(data, target, True, test_size,
random_state)
Explanation: There are no categorical variables
Split the data into training and test sets
Data leakage: Test set hidden when training the model, but seen when preprocessing the dataset
No validation set (small dataset)
End of explanation
y_train, y_test = helper.one_hot_output(y_train, y_test)
print("train size \t X:{} \t Y:{}".format(x_train.shape, y_train.shape))
print("test size \t X:{} \t Y:{} ".format(x_test.shape, y_test.shape))
Explanation: Encode the output
End of explanation
helper.dummy_clf(x_train, y_train, x_test, y_test)
Explanation: Build a dummy classifier
End of explanation
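helper.dummy_clf comes from the author's helper module, which is not shown in this excerpt. For readers without that module, a baseline with the same intent can be sketched directly with scikit-learn's DummyClassifier; this is an illustration, not the helper's actual implementation.
from sklearn.dummy import DummyClassifier
from sklearn.metrics import roc_auc_score
dummy = DummyClassifier(strategy='stratified', random_state=0)
dummy.fit(x_train, y_train[:, 1])                    # second column is the positive (POI) class
dummy_scores = dummy.predict_proba(x_test)[:, 1]
print('Dummy ROC AUC: {:.2f}'.format(roc_auc_score(y_test[:, 1], dummy_scores)))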
# class weight for the imbalanced target
cw = helper.get_class_weight(y_train[:,1])
model_path = os.path.join("models", "enron_scandal.h5")
model = None
model = helper.build_nn_clf(x_train.shape[1], y_train.shape[1], dropout=0.3, summary=True)
helper.train_nn(model, x_train, y_train, class_weight=cw, path=model_path)
from sklearn.metrics import roc_auc_score
y_pred_train = model.predict(x_train, verbose=0)
print('\nROC_AUC train:\t{:.2f} \n'.format(roc_auc_score(y_train, y_pred_train)))
Explanation: Build the Neural Network for Binary Classification
End of explanation
# Dataset too small for separate train, validation, and test sets. More data is needed for a proper evaluation.
y_pred = model.predict(x_test, verbose=0)
helper.binary_classification_scores(y_test[:, 1], y_pred[:, 1], return_dataframe=True, index="DNN")
Explanation: Evaluate the model
End of explanation
helper.ml_classification(x_train, y_train[:,1], x_test, y_test[:,1])
Explanation: Compare with non-neural network models
End of explanation |
5,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 DeepMind Technologies Limited
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step4: Costs
Step19: Measures (probability distributions)
verify_unbiasedness flag adds an unbiasedness check for the gradient estimators. When computing the variance or covariance, it checks that the expectation of the estimator is equal to the desired value.
Step21: Numerical integration
Step22: Plotting
Step23: Test that the estimators are unbiased
Step24: Plots
Figure 2
Step25: Figure 3 (top)
Step26: Figure 3 (bottom)
Step27: Legend for the plots | Python Code:
import numpy as np
import scipy.stats
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
sns.set_context('paper', font_scale=2.0, rc={'lines.linewidth': 2.0})
sns.set_style('whitegrid')
# We use INTEGRATION_LIMIT instead of infinity in integration limits
INTEGRATION_LIMIT = 10.
# Threshold for testing the unbiasedness of estimators
EPS = 1e-4
# Whether to save the resulting plots on disk
SAVE_PLOTS = True
Explanation: Copyright 2019 DeepMind Technologies Limited
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Intuitive Analysis of Gradient Estimators
This colab allows reproducing the plots in Figures 2 and 3 in Section 3 of the paper [1]. We consider a particular instance of the stochastic gradient problem, eqn. (10). We would like to stochastically estimate the following quantity:
$\eta = \nabla_{\theta} \int \mathcal{N}(x|\mu, \sigma^2) f(x; k) dx; \quad \theta \in {\mu, \sigma}; \quad f \in {(x-k)^2, \exp(-kx^2), \cos(kx)}.$
Here the measure is a Gaussian distribution and the cost function is univariate.
In this experiment we consider several gradient estimators:
* Derivatives of measure
* Score function gradient esimator (naive and variance-reduced version): Section 4
* Measure-valued gradient estimator with variance reduction (coupling): Section 6
* Derivatives of path
* Pathwise gradients: Section 5
Since all the estimators are unbiased (have the same expectation), we compare the variance of these gradient estimators. A lower-variance estimator is almost universally preferred to a higher-variance one. For this simple univariate problem, we compute the variance via numerical integration to remove any noise in the measurements.
[1] Shakir Mohamed, Mihaela Rosca, Michael Figurnov and Andriy Mnih, "Monte Carlo Gradient Estimation in Machine Learning". arXiv, 2019
Code
Imports and global settings
End of explanation
class SquareCost(object):
The cost f(x; k) = (x - k)^2
name = 'square'
def __init__(self, k):
self.k = k
def value(self, x):
return (x - self.k) ** 2
def derivative(self, x):
return 2 * (x - self.k)
class CosineCost(object):
The cost f(x; k) = cos kx
name = 'cos'
def __init__(self, k):
self.k = k
def value(self, x):
return np.cos(self.k * x)
def derivative(self, x):
return -self.k * np.sin(self.k * x)
class ExponentialCost(object):
The cost f(x; k) = exp(-k x^2)
name = 'exp'
def __init__(self, k):
self.k = k
def value(self, x):
return np.exp(-self.k * x ** 2)
def derivative(self, x):
return (-2 * self.k * x) * np.exp(-self.k * x ** 2)
Explanation: Costs
End of explanation
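A quick way to sanity-check the derivative methods above is a central finite difference. The few lines below are an optional check and not part of the original colab.
def check_derivative(cost, x, h=1e-5):
    # central finite difference vs. the analytic derivative
    numeric = (cost.value(x + h) - cost.value(x - h)) / (2 * h)
    return abs(numeric - cost.derivative(x))

for cost in [SquareCost(2.0), CosineCost(1.5), ExponentialCost(0.7)]:
    print(cost.name, check_derivative(cost, x=0.3))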
class Normal(object):
Univariate Normal (Gaussian) measure.
def __init__(self, mean, std, verify_unbiasedness):
self.distrib = scipy.stats.norm(loc=mean, scale=std)
self.mean = mean
self.std = std
self.verify_unbiasedness = verify_unbiasedness
def expect(self, g):
Computes the mean: E_p(x) g(x)
return scipy.integrate.quad(lambda x: self.distrib.pdf(x) * g(x),
-INTEGRATION_LIMIT, INTEGRATION_LIMIT)
def var(self, g, expect_g):
Compute the variance given the mean: E_p(x) (g(x) - E g(x))^2
if self.verify_unbiasedness:
assert (self.expect(g)[0] - expect_g) ** 2 < EPS
return self.expect(lambda x: (g(x) - expect_g) ** 2)
def cov(self, g, expect_g, h, expect_h):
Computes the covariance of two functions given their means:
E_p(x) (f(x) - E f(x)) (g(x) - E g(x))
if self.verify_unbiasedness:
assert (self.expect(g)[0] - expect_g) ** 2 < EPS
assert (self.expect(h)[0] - expect_h) ** 2 < EPS
return self.expect(lambda x: (g(x) - expect_g) * (h(x) - expect_h))
def dlogpdf_dmean(self, x):
Computes the score function for mean: \nabla_mean \log p(x; mean, std)
The score function is part of the score function estimator, see eqn. (13)
return (x - self.mean) / self.std ** 2
def dlogpdf_dstd(self, x):
Computes the score function for the std: \nabla_std \log p(x; mean, std)
The score function is part of the score function estimator, see eqn. (13)
return -(((self.mean + self.std - x) *
(-self.mean + self.std + x)) / self.std ** 3)
def dx_dmean(self, x):
Computes \nabla_mean x.
This is part of the pathwise estimator, see eqn. (35b).
For derivation, see eqn. (37).
return 1.
def dx_dstd(self, x):
Computes \nabla_std x.
This is part of the pathwise estimator, see eqn. (35b).
For derivation, see eqn. (37).
return (x - self.mean) / self.std
class StandardWeibull(object):
Weibull(2, 0.5) is a distribution used for measure-valued derivative w.r.t.
Normal mean.
See equation (46) for the derivation. This distribution has a density
function x * exp(-x^2 / 2) for x > 0
def __init__(self, verify_unbiasedness):
self.verify_unbiasedness = verify_unbiasedness
def expect(self, g):
Computes the mean: E_Weibull(x) g(x)
weibull_pdf = lambda x: x * np.exp(-0.5 * x ** 2)
return scipy.integrate.quad(lambda x: weibull_pdf(x) * g(x),
0, INTEGRATION_LIMIT)
def var(self, g, expect_g):
Compute the variance given the mean: E_Weibull(x) (g(x) - E g(x))^2
if self.verify_unbiasedness:
assert (self.expect(g)[0] - expect_g) ** 2 < EPS
return self.expect(lambda x: (g(x) - expect_g) ** 2)
class StandardDsMaxwellCoupledWithNormal(object):
This is standard double-sided Maxwell distribution coupled with
standard Normal distribution. This is a bivariate distribution which is used
for measure-valued derivative w.r.t. Normal standard deviation, see Table 1.
Standard double-sided Maxwell distribution has the density function
x^2 exp(-x^2 / 2) / sqrt(2 pi) for x \in R.
To reduce the variance of the estimator, we couple the positve
(double-sided Maxwell) and negative (Gaussian) parts of the estimator.
See Section 7.2 for discussion of this idea. Technically, this is achieved
by representing a standard Normal sample as (m*u),
where m ~ DSMaxwell and u ~ U[0, 1].
def __init__(self, verify_unbiasedness):
self.verify_unbiasedness = verify_unbiasedness
def expect(self, g):
Computes the mean E_p(m, n) g(m, n) where m has a marginal DS-Maxwell
distribution and n has a marginal Normal distribution.
def ds_maxwell_pdf(x):
return x ** 2 * np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
return scipy.integrate.dblquad(
# m: Double Sided Maxwell, u: U[0, 1]
# The PDF of U[0, 1] is constant 1.
lambda m, u: ds_maxwell_pdf(m) * g(m, m * u),
# Limits for Uniform
0, 1,
# Limits for Double Sided Maxwell. Infinity is not supported by dblquad.
lambda x: -INTEGRATION_LIMIT, lambda x: INTEGRATION_LIMIT,
)
def var(self, g, expect_g):
Computes the variance E_p(m, n) (g(m, n) - E g(m, n)), where m has
a marginal DS-Maxwell distribution and n has a marginal Normal
distribution.
if self.verify_unbiasedness:
assert (self.expect(g)[0] - expect_g) ** 2 < EPS
return self.expect(lambda m, n: (g(m, n) - expect_g) ** 2)
Explanation: Measures (probability distributions)
verify_unbiasedness flag adds an unbiasedness check for the gradient estimators. When computing the variance or covariance, it checks that the expectation of the estimator is equal to the desired value.
End of explanation
def numerical_integration(Cost, k, mean, std, verify_unbiasedness=False):
This function numerically evaluates the variance of gradient estimators.
Arguments:
Cost: the class of a cost function
k: a list/NumPy vector of values for the cost parameter k
mean: a scalar parameter of the Normal measure
std: a scalar parameter of the Normal measure
verify_unbiasedness: if True, perform additional asserts that verify
that the estimators are unbiased
Returns:
A dictionary {key: NumPy array}. The keys have the form var_..., where ...
is the name of the estimator. The dimensions of the NumPy arrays are
[len(k), 2, 2], where the second dimension is [dmean, dstd], and the last
dimension is [value, integration_error].
measure = Normal(mean, std, verify_unbiasedness)
weibull = StandardWeibull(verify_unbiasedness)
ds_maxwell_coupled_with_normal = StandardDsMaxwellCoupledWithNormal(
verify_unbiasedness)
ret = {}
for key in ['var_sf',
'var_sf_mean_baseline',
'var_sf_optimal_baseline',
'var_pathwise',
'var_measure_valued_coupled']:
ret[key] = np.zeros([len(k), 2, 2])
for i in range(len(k)):
cost = Cost(k[i])
expect_loss = measure.expect(cost.value)[0]
# Compute $\nabla_{\theta} \int \mathcal{N}(x|\mu, \sigma^2) f(x; k) dx$
# using the score-function estimator
d_expect_loss = [
measure.expect(lambda x: cost.value(x) * measure.dlogpdf_dmean(x))[0],
measure.expect(lambda x: cost.value(x) * measure.dlogpdf_dstd(x))[0]
]
# Variance of the score-function estimator: Section 4, eqn. (13)
ret['var_sf'][i] = [
measure.var(lambda x: cost.value(x) * measure.dlogpdf_dmean(x),
d_expect_loss[0]),
measure.var(lambda x: cost.value(x) * measure.dlogpdf_dstd(x),
d_expect_loss[1])
]
# Variance of the score-function estimator with the mean baseline
# Section 4, eqn. (14)
ret['var_sf_mean_baseline'][i] = [
measure.var(lambda x: (cost.value(x) - expect_loss) * measure.dlogpdf_dmean(x),
d_expect_loss[0]),
measure.var(lambda x: (cost.value(x) - expect_loss) * measure.dlogpdf_dstd(x),
d_expect_loss[1])
]
# Computes the optimal baseline for the score-function estimator
# using Section 7.4.1, eqn. (65).
# Note that it has different values for mean and std.
optimal_baseline = [
(measure.cov(measure.dlogpdf_dmean, 0.,
lambda x: cost.value(x) * measure.dlogpdf_dmean(x),
d_expect_loss[0])[0]
/ measure.var(measure.dlogpdf_dmean, 0.)[0]),
(measure.cov(measure.dlogpdf_dstd, 0.,
lambda x: cost.value(x) * measure.dlogpdf_dstd(x),
d_expect_loss[1])[0]
/ measure.var(measure.dlogpdf_dstd, 0.)[0])
]
# Variance of the score-function estimator with the optimal baseline
# Section 4, eqn. (14)
ret['var_sf_optimal_baseline'][i] = [
measure.var(lambda x: (cost.value(x) - optimal_baseline[0]) * measure.dlogpdf_dmean(x),
d_expect_loss[0]),
measure.var(lambda x: (cost.value(x) - optimal_baseline[1]) * measure.dlogpdf_dstd(x),
d_expect_loss[1])
]
# Variance of the pathwise estimator. Here we use the "implicit" form of the
# estimator that allows reusing the same Gaussian measure.
# See Section 5, eqn. (35) for details
ret['var_pathwise'][i] = [
measure.var(lambda x: cost.derivative(x) * measure.dx_dmean(x),
d_expect_loss[0]),
measure.var(lambda x: cost.derivative(x) * measure.dx_dstd(x),
d_expect_loss[1])
]
# Variance of the measure-valued gradient estimator (Section 6, eqn. (44),
# Table 1) with variance reduction via coupling (Section 7.2)
ret['var_measure_valued_coupled'][i] = [
# We couple the Weibulls from the positive and negative parts of the
# estimator simply by reusing the value of the Weibull
weibull.var(
lambda x: (cost.value(mean + std * x) - cost.value(mean - std * x)) / (np.sqrt(2 * np.pi) * std), d_expect_loss[0]),
# See Section 7.2 and documentation of StandardDsMaxwellCoupledWithNormal
# for details on this coupling. Here m ~ DS-Maxwell, n ~ Normal(0, 1)
ds_maxwell_coupled_with_normal.var(
lambda m, n: (cost.value(m * std + mean) - cost.value(n * std + mean)) / std, d_expect_loss[1])
]
return ret
Explanation: Numerical integration
End of explanation
def plot(k, ret, param_idx, logx, logy, ylabel, ylim, filename, xticks=None):
plt.figure(figsize=[8, 5])
plt.plot(k, ret['var_sf'][:, param_idx, 0],
label='Score function')
# plt.plot(k, ret['var_sf_mean_baseline'][:, param_idx, 0],
# label='Score function + mean baseline')
plt.plot(k, ret['var_sf_optimal_baseline'][:, param_idx, 0],
label='Score function + variance reduction')
plt.plot(k, ret['var_pathwise'][:, param_idx, 0],
label='Pathwise')
plt.plot(k, ret['var_measure_valued_coupled'][:, param_idx, 0],
label='Measure-valued + variance reduction')
plt.xlabel(r'$k$')
plt.ylabel(ylabel)
plt.xlim([np.min(k), np.max(k)])
plt.ylim(ylim)
if logx:
plt.xscale('log')
if logy:
plt.yscale('log')
if xticks is not None:
plt.xticks(xticks)
x_axis = plt.gca().get_xaxis()
x_axis.set_ticklabels(xticks)
x_axis.set_major_formatter(matplotlib.ticker.ScalarFormatter())
x_axis.set_minor_formatter(matplotlib.ticker.NullFormatter())
if SAVE_PLOTS:
plt.savefig(filename, dpi=200, transparent=True)
return plt.gca()
def plot_cost_cartoon(Cost, k, x, xticks, yticks, ylim, filename):
f, axes = plt.subplots(1, 3, sharey='row', figsize=[12, 2])
for i in range(len(k)):
axes[i].plot(x, Cost(k[i]).value(x),
color='k', label='Value of the cost')
axes[i].plot(x, Cost(k[i]).derivative(x),
color='k', linestyle='--', label='Derivative of the cost')
axes[i].axis('on')
axes[i].grid(False)
axes[i].xaxis.set_tick_params(length=0)
axes[i].xaxis.set_ticks(xticks)
axes[i].yaxis.set_tick_params(length=0)
axes[i].yaxis.set_ticks(yticks)
axes[i].set_frame_on(False)
axes[0].set_ylim(ylim)
f.tight_layout()
if SAVE_PLOTS:
f.savefig(filename, dpi=200, transparent=True)
return axes
Explanation: Plotting
End of explanation
for Cost in [SquareCost, CosineCost, ExponentialCost]:
print(Cost.name)
ret = numerical_integration(
Cost, k=[0.1, 1., 10.], mean=1, std=1.5, verify_unbiasedness=True)
print('Maximum integration error: {}'.format(
max(np.max(v[..., 1]) for v in ret.values())))
Explanation: Test that the estimators are unbiased
End of explanation
Cost = SquareCost
k = np.linspace(-3., 3., 100)
ret = numerical_integration(Cost, k, mean=1, std=1)
print('Maximum integration error: {}'.format(
max(np.max(v[..., 1]) for v in ret.values())))
plot(
k, ret, param_idx=0,
logx=False, logy=True, ylabel=r'Variance of the estimator for $\mu$',
ylim=[1., 1e3], filename='variance_mu_{}.pdf'.format(Cost.name))
plot_ax = plot(
k, ret, param_idx=1,
logx=False, logy=True, ylabel=r'Variance of the estimator for $\sigma$',
ylim=[1., 1e3], filename='variance_sigma_{}.pdf'.format(Cost.name))
cartoon_ax = plot_cost_cartoon(
Cost, k=[np.min(k), 0, np.max(k)], x=np.linspace(-5., 5., 100),
xticks=[-5, 0, 5], yticks=[-2, 0, 5], ylim=[-2, 5],
filename='costs_{}.pdf'.format(Cost.name))
Explanation: Plots
Figure 2: $f(x; k) = (x-k)^2$
End of explanation
Cost = ExponentialCost
k = np.logspace(np.log10(0.1), np.log10(10.), 100)
ret = numerical_integration(Cost, k, mean=1, std=1)
print('Maximum integration error: {}'.format(
max(np.max(v[..., 1]) for v in ret.values())))
plot(
k, ret, param_idx=0,
logx=True, logy=True, ylabel=r'Variance of the estimator for $\mu$',
ylim=[1e-3, 1], xticks=[0.1, 1, 10],
filename='variance_mu_{}.pdf'.format(Cost.name))
plot_ax = plot(
k, ret, param_idx=1,
logx=True, logy=True, ylabel=r'Variance of the estimator for $\sigma$',
ylim=[1e-3, 1], xticks=[0.1, 1, 10],
filename='variance_sigma_{}.pdf'.format(Cost.name))
cartoon_ax = plot_cost_cartoon(
Cost, k=[np.min(k), 1, np.max(k)], x=np.linspace(-3., 3., 100),
xticks=[-3, 0, 3], yticks=[-1, 0, 1], ylim=[-1.1, 1.1],
filename='costs_{}.pdf'.format(Cost.name))
Explanation: Figure 3 (top): $f(x; k) = \exp(-kx^2)$
End of explanation
Cost = CosineCost
k = np.logspace(np.log10(0.5), np.log10(5.), 100)
ret = numerical_integration(Cost, k, mean=1, std=1)
print('Maximum integration error: {}'.format(
max(np.max(v[..., 1]) for v in ret.values())))
plot(
k, ret, param_idx=0,
logx=True, logy=True, ylabel=r'Variance of the estimator for $\mu$',
ylim=[0.005, 10], xticks=[0.5, 1, 2, 5],
filename='variance_mu_{}.pdf'.format(Cost.name))
plot_ax = plot(
k, ret, param_idx=1,
logx=True, logy=True, ylabel=r'Variance of the estimator for $\sigma$',
ylim=[0.1, 10], xticks=[0.5, 1, 2, 5],
filename='variance_sigma_{}.pdf'.format(Cost.name))
cartoon_ax = plot_cost_cartoon(
Cost,
k=[np.min(k),
10 ** ((np.log10(np.min(k)) + np.log10(np.min(k))) / 2),
np.max(k)],
x=np.linspace(-3., 3., 100),
xticks=[-3, 0, 3], yticks=[-3, 0, 3], ylim=[-3, 3],
filename='costs_{}.pdf'.format(Cost.name))
Explanation: Figure 3 (bottom): $f(x; k) = \cos kx$
End of explanation
plt.figure(figsize=[22, 1])
plt.axis('off')
plt.grid(False)
plt.legend(*plot_ax.get_legend_handles_labels(), loc='center',
frameon=False, ncol=5)
if SAVE_PLOTS:
filename = 'estimators_legend.pdf'
plt.savefig(filename, dpi=200, transparent=True)
plt.figure(figsize=[22, 1])
plt.axis('off')
plt.grid(False)
plt.legend(*cartoon_ax[0].get_legend_handles_labels(), loc='center',
frameon=False, ncol=5)
if SAVE_PLOTS:
filename = 'costs_legend.pdf'
plt.savefig(filename, dpi=200, transparent=True)
Explanation: Legend for the plots
End of explanation |
5,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anomaly Detection on MNIST
This notebook shows how a Deep Learning Auto-Encoder model can be used to find outliers in a dataset.
Consider the following three-layer neural network with one hidden layer and the same number of input neurons (features) as output neurons. The loss function is the MSE between the input and the output. Hence, the network is forced to learn the identity via a nonlinear, reduced representation of the original data. Such an algorithm is called a deep autoencoder; these models have been used extensively for unsupervised, layer-wise pretraining of supervised deep learning tasks, but here we consider the autoencoder's application for discovering anomalies in data.
We use the well-known MNIST dataset of hand-written digits, where each row contains the 28^2=784 raw gray-scale pixel values from 0 to 255 of the digitized digits (0 to 9).
Load Theano/Lasagne and the MNIST training/testing datasets
Step1: Finding outliers - ugly hand-written digits
We train a Deep Learning Auto-Encoder to learn a compressed (low-dimensional) non-linear representation of the dataset, hence learning the intrinsic structure of the training dataset. The auto-encoder model is then used to transform all test set images to their reconstructed images, by passing through the lower-dimensional neural network. We then find outliers in a test dataset by comparing the reconstruction of each scanned digit with its original pixel values. The idea is that a high reconstruction error of a digit indicates that the test set point doesn't conform to the structure of the training data and can hence be called an outlier.
Learn what's normal from the training data
Train unsupervised Deep Learning autoencoder model on the training dataset. For simplicity, we train a model with 1 hidden layer of 50 Tanh neurons to create 50 non-linear features with which to reconstruct the original dataset. For now, please accept that 50 hidden units is a reasonable choice...
For simplicity, we train the auto-encoder for only 5 epochs (five passes over the entire training dataset).
Step2: NB
Step3: Find outliers in the test data
The Anomaly app computes the per-row reconstruction error for the test data set: it passes the test set through the autoencoder model (built on the training data) and computes the mean squared error (MSE) for each row.
Step4: Visualize the good, the bad and the ugly
We will need a helper function for plotting handwritten digits
Step5: Let's look at the test set points with low/median/high reconstruction errors. We will now visualize the original test set points and their reconstructions obtained by propagating them through the narrow neural net.
Step6: The good
Let's plot the 12 digits with lowest reconstruction error. First we plot the original scanned images, then their reconstructions.
Step7: Clearly, a well-written digit 1 appears in both the training and testing set, and is easy to reconstruct by the autoencoder with minimal reconstruction error. Nothing is as easy as a straight line.
The bad
Now let's look at the 12 digits with median reconstruction error.
Step8: These test set digits look "normal" - it is plausible that they resemble digits from the training data to a large extent, but they do have some particularities that cause some reconstruction error.
The ugly
And here are the biggest outliers - The 12 digits with highest reconstruction error!
Step9: Now here are some pretty ugly digits that are plausibly not commonly found in the training data - some are even hard to classify by humans.
Voila!
We were able to find outliers with Deep Learning Auto-Encoder models.
We would love to hear your use-case for Anomaly detection...
References
See | Python Code:
import numpy as np
import theano
import lasagne
import matplotlib.pyplot as plt
%matplotlib inline
import gzip
import pickle
# Seed for reproducibility
np.random.seed(42)
# Download the MNIST digits dataset (actually, these are already downloaded locally)
# !wget -N --directory-prefix=./data/MNIST/ http://deeplearning.net/data/mnist/mnist.pkl.gz
# Load training and test splits as numpy arrays
train, val, test = pickle.load(gzip.open('data/MNIST/mnist.pkl.gz'), encoding='iso-8859-1')
X_train, y_train = train
# Omit the validation set...
X_test, y_test = test
#X_train[1000][:]
# For training, we want to sample examples at random in small batches - don't care about the 'y_target'
def batch_gen(X, N):
while True:
idx = np.random.choice(len(X), N)
yield X[idx].astype('float32')
Explanation: Anomaly Detection on MNIST
This notebook shows how a Deep Learning Auto-Encoder model can be used to find outliers in a dataset.
Consider the following three-layer neural network with one hidden layer and the same number of input neurons (features) as output neurons. The loss function is the MSE between the input and the output. Hence, the network is forced to learn the identity via a nonlinear, reduced representation of the original data. Such an algorithm is called a deep autoencoder; these models have been used extensively for unsupervised, layer-wise pretraining of supervised deep learning tasks, but here we consider the autoencoder's application for discovering anomalies in data.
We use the well-known MNIST dataset of hand-written digits, where each row contains the 28^2=784 raw gray-scale pixel values from 0 to 255 of the digitized digits (0 to 9).
Load Theano/Lasagne and the MNIST training/testing datasets
End of explanation
# A very simple network, an autoencoder with a single hidden layer of 50 neurons
l_in = lasagne.layers.InputLayer(shape=(None, 784))
l_hidden = lasagne.layers.DenseLayer(l_in,
num_units=50,
nonlinearity=lasagne.nonlinearities.tanh)
l_out = lasagne.layers.DenseLayer(l_hidden,
num_units=784,
nonlinearity=lasagne.nonlinearities.sigmoid)
# Symbolic variable for our input features
X_sym = theano.tensor.matrix()
# Theano expressions for the output distribution and loss vs the original input
output = lasagne.layers.get_output(l_out, X_sym)
# The loss function is the sum-squared-error averaged over a minibatch
sample_loss = theano.tensor.mean(lasagne.objectives.squared_error(output, X_sym), axis=1)
minibatch_loss = theano.tensor.mean(sample_loss)
# We retrieve all the trainable parameters in our network
params = lasagne.layers.get_all_params(l_out, trainable=True)
# Compute Adam updates for training (scores on right show training speed variation)
updates = lasagne.updates.adam(minibatch_loss, params) # 0.065 ... 0.032
#updates = lasagne.updates.adagrad(minibatch_loss, params) # 0.056 ... 0.037
#updates = lasagne.updates.rmsprop(minibatch_loss, params) # 0.059 ... 0.041
#updates = lasagne.updates.adadelta(minibatch_loss, params) # 0.101 ... 0.065
print(params)
# We define a training function that will compute the loss, and take a single optimization step
f_train = theano.function([X_sym], minibatch_loss, updates=updates)
# The prediction function doesn't require targets, and outputs only the autoencoder loss for the sample
f_predict = theano.function([X_sym], [output, sample_loss])
print("Theano functions created")
# We'll choose a batch size, and calculate the number of batches in an "epoch"
BATCH_SIZE = 64
N_BATCHES = len(X_train) // BATCH_SIZE
# Minibatch generators for the training and validation sets
train_batches = batch_gen(X_train, BATCH_SIZE)
Explanation: Finding outliers - ugly hand-written digits
We train a Deep Learning Auto-Encoder to learn a compressed (low-dimensional) non-linear representation of the dataset, hence learning the intrinsic structure of the training dataset. The auto-encoder model is then used to transform all test set images to their reconstructed images, by passing through the lower-dimensional neural network. We then find outliers in a test dataset by comparing the reconstruction of each scanned digit with its original pixel values. The idea is that a high reconstruction error of a digit indicates that the test set point doesn't conform to the structure of the training data and can hence be called an outlier.
Learn what's normal from the training data
Train unsupervised Deep Learning autoencoder model on the training dataset. For simplicity, we train a model with 1 hidden layer of 50 Tanh neurons to create 50 non-linear features with which to reconstruct the original dataset. For now, please accept that 50 hidden units is a reasonable choice...
For simplicity, we train the auto-encoder for only 5 epochs (five passes over the entire training dataset).
End of explanation
for epoch in range(5):
train_loss = 0
for _ in range(N_BATCHES):
X = next(train_batches)
loss = f_train(X)
train_loss += loss
train_loss /= N_BATCHES
print('Epoch {:2d}, Train loss {:.03f}'.format( epoch, train_loss, ))
print("DONE")
Explanation: NB: Each epoch should take 10-20 seconds, although the first one may take a little longer...
End of explanation
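# Reconstruct every test image in one pass; test_loss holds the per-image mean squared error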
test_reconstructed, test_loss = f_predict(X_test)
test_loss.shape
Explanation: Find outliers in the test data
The Anomaly app computes the per-row reconstruction error for the test data set: it passes the test set through the autoencoder model (built on the training data) and computes the mean squared error (MSE) for each row.
End of explanation
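# Helper to draw up to 12 digits side by side, reshaping each 784-vector back to a 28x28 image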
def plot_by_index(X, indices):
plt.figure(figsize=(12,3))
for i in range(len(indices)):
plt.subplot(1, 12, i+1)
plt.imshow(X[indices[i]].reshape((28, 28)), cmap='gray', interpolation='nearest')
plt.axis('off')
Explanation: Visualize the good, the bad and the ugly
We will need a helper function for plotting handwritten digits:
End of explanation
# Sort the test set into reconstruction error order
test_loss_sorted_indices = np.argsort( test_loss )
# Here are the best ones
test_loss[ test_loss_sorted_indices[0:10] ]
Explanation: Let's look at the test set points with low/median/high reconstruction errors. We will now visualize the original test set points and their reconstructions obtained by propagating them through the narrow neural net.
End of explanation
indices = test_loss_sorted_indices[0:12]
plot_by_index(X_test, indices)
plot_by_index(test_reconstructed, indices)
Explanation: The good
Let's plot the 12 digits with lowest reconstruction error. First we plot the original scanned images, then their reconstructions.
End of explanation
mid = len(test_loss_sorted_indices)//2
indices = test_loss_sorted_indices[mid-6:mid+6]
plot_by_index(X_test, indices)
plot_by_index(test_reconstructed, indices)
Explanation: Clearly, a well-written digit 1 appears in both the training and testing set, and is easy to reconstruct by the autoencoder with minimal reconstruction error. Nothing is as easy as a straight line.
The bad
Now let's look at the 12 digits with median reconstruction error.
End of explanation
indices = test_loss_sorted_indices[-12:]
plot_by_index(X_test, indices)
plot_by_index(test_reconstructed, indices)
Explanation: These test set digits look "normal" - it is plausible that they resemble digits from the training data to a large extent, but they do have some particularities that cause some reconstruction error.
The ugly
And here are the biggest outliers - The 12 digits with highest reconstruction error!
End of explanation
im = plt.imread('./images/mnist/template_28x28.png')
plt.imshow(im, 'gray')
import os
image_dir = './images/mnist/'
image_files = [ '%s/%s' % (image_dir, f) for f in os.listdir(image_dir)
if (f.lower().endswith('png') or f.lower().endswith('jpg')) ]
v=[]
for i, f in enumerate(image_files):
im = plt.imread(f)
#print("Image File:%s" % (f,))
v.append( im.flatten() )
# v=[ plt.imread(f).flatten() for f in image_files ]
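# Each image must be a single-channel 28x28 array so that flattening gives the 784 values the network expects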
v_reconstructed, v_loss = f_predict(v)
v_all_indices = np.arange(len(v))
plot_by_index(v, v_all_indices)
plot_by_index(v_reconstructed, v_all_indices)
Explanation: Now here are some pretty ugly digits that are plausibly not commonly found in the training data - some are even hard to classify by humans.
Voila!
We were able to find outliers with Deep Learning Auto-Encoder models.
We would love to hear your use-case for Anomaly detection...
References
See :
https://github.com/h2oai/h2o-training-book/blob/master/hands-on_training/anomaly_detection.md
h2o-training-book/package.json :: "license": "Apache 2",
http://goelhardik.github.io/2016/06/04/mnist-autoencoder/
https://github.com/mikesj-public/convolutional_autoencoder/blob/master/mnist_conv_autoencode.ipynb
https://cs.stanford.edu/people/karpathy/convnetjs/demo/autoencoder.html
Exercises
What is the test set error before network training?
Check whether 20 hidden units or 100 hidden units would have been a better choice
See how the learning progresses using adadelta updates
Try adding your own example digits to ./images/mnist/ - there's a template .png file there to start - and have a look at the reconstruction errors
End of explanation |
5,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load airports of each country
Step1: record schedules for 2 weeks, then augment count with weekly flight numbers.
seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM separate, since its history is in the past.
Step2: parse Arrivals
Step3: parse Departures
Step4: Save | Python Code:
L=json.loads(file('../json/L.json','r').read())
M=json.loads(file('../json/M.json','r').read())
N=json.loads(file('../json/N.json','r').read())
import requests
AP={}
for c in M:
if c not in AP:AP[c]={}
for i in range(len(L[c])):
AP[c][N[c][i]]=L[c][i]
sch={}
Explanation: Load airports of each country
End of explanation
baseurl='https://www.airportia.com/'
import requests, urllib2
SC={}
Explanation: record schedules for 2 weeks, then augment count with weekly flight numbers.
seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM separate, since its history is in the past.
End of explanation
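# Scrape the Airportia arrivals tables for every airport, one day at a time over 4-31 March 2017 (4 weeks)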
for c in AP:
print c
airportialinks=AP[c]
sch={}
for i in airportialinks:
print i,
if i not in sch:sch[i]={}
#march 4-31 = 4 weeks
for d in range (4,32):
if d not in sch[i]:
try:
#capture token
url=baseurl+airportialinks[i]+'arrivals/201703'+str(d)
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
m=s.get(url).content
sch[i][url]=pd.read_html(m)[0]
except: pass #print 'no tables',i,d
print
SC[c]=sch
Explanation: parse Arrivals
End of explanation
SD={}
for c in AP:
print c
airportialinks=AP[c]
sch={}
for i in airportialinks:
print i,
if i not in sch:sch[i]={}
#march 4-31 = 4 weeks
for d in range (4,32):
if d not in sch[i]:
try:
#capture token
url=baseurl+airportialinks[i]+'departures/201703'+str(d)
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
m=s.get(url).content
sch[i][url]=pd.read_html(m)[0]
except: pass #print 'no tables',i,d
print
SD[c]=sch
Explanation: parse Departures
End of explanation
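# Flatten the scraped arrival and departure tables into one DataFrame per country and write them out as JSON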
for c in SC:
sch=SC[c]
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['To']=i
df['Date']=d
mdf=pd.concat([mdf,df])
mdf=mdf.replace('Hahn','Frankfurt')
mdf=mdf.replace('Hahn HHN','Frankfurt HHN')
mdf['City']=[i[:i.rfind(' ')] for i in mdf['From']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['From']]
file('countries/'+cnc.T.loc[c]['ISO2']+"/json/mdf_arrv.json",'w').write(json.dumps(mdf.reset_index().to_json()))
for c in SD:
sch=SD[c]
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['From']=i
df['Date']=d
mdf=pd.concat([mdf,df])
mdf=mdf.replace('Hahn','Frankfurt')
mdf=mdf.replace('Hahn HHN','Frankfurt HHN')
mdf['City']=[i[:i.rfind(' ')] for i in mdf['To']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['To']]
file('countries/'+cnc.T.loc[c]['ISO2']+"/json/mdf_dest.json",'w').write(json.dumps(mdf.reset_index().to_json()))
Explanation: Save
End of explanation |
5,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Win/Loss Betting Model
Step2: Pymc Model
Determining Binary Win Loss
Step3: Plot the last period rating for some teams
Step4: Plot some over time ratings | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
data = pd.read_csv('data.csv', index_col=0).reset_index(drop=True)
teams = np.sort(np.unique(np.concatenate([data['Team 1 ID'], data['Team 2 ID']])))
periods = data.Date.unique()
tmap = {v:k for k,v in dict(enumerate(teams)).items()}
pmap = {v:k for k,v in dict(enumerate(periods)).items()}
n_teams = len(teams)
n_periods = len(periods)
print('Number of Teams: %i ' % n_teams)
print('Number of Matches: %i ' % len(data))
print('Number of Periods: %i '% n_periods)
data
Explanation: Win/Loss Betting Model
End of explanation
import pymc3 as pm
import theano.tensor as tt
obs_team_1 = data['Team 1 ID'].map(tmap).values
obs_team_2 = data['Team 2 ID'].map(tmap).values
obs_period = data['Date'].map(pmap).values
obs_wl = (data['Team 1 ID'] == data['winner']).values
obs_team_1 = data['Team 1 ID'].map(tmap).values
obs_team_2 = data['Team 2 ID'].map(tmap).values
obs_wl = (data['Team 1 ID'] == data['winner']).values
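# NOTE: the block below is an unfinished per-map variant (obs_map, n_maps and R are never defined); the models that follow redefine rating_model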
obs_map =
with pm.Model() as rating_model:
R_k = pm.Normal('rating', 0, 1, shape=n_teams)
R_km = pm.Normal('rating map', R_k, 1, shape=(n_teams, n_maps))
diff = R[obs_team_1] - R[obs_team_2]
p = pm.math.sigmoid(diff)
wl = pm.Bernoulli('observed wl', p=p, observed=obs_wl)
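# Time-varying alternative: each team's rating follows a Gaussian random walk across periods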
with pm.Model() as rating_model:
mu = pm.Normal('mu', 0, 0.5, shape=n_teams)
sd = pm.InverseGamma('sigma', 2,2)
time_rating = pm.GaussianRandomWalk('time_rating', mu=mu, sd = sd, init=pm.Normal.dist(0,1), shape = (n_periods, n_teams))
diff = time_rating[obs_period, obs_team_1] - time_rating[obs_period, obs_team_2]
p = 0.5*pm.math.tanh(diff)+0.5
wl = pm.Bernoulli('observed wl', p=p, observed=obs_wl)
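# Same idea with a hand-rolled, vectorized AR(1) log-density over each team's rating series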
with pm.Model() as rating_model:
rho = pm.Normal('rho', 1, 0.5)
def AR1_vec(value):
vector version of AR1
x_im1 = value[:,:-1]
x_i = value[:,1:]
boundary = pm.Normal.dist(0, 1).logp
innov_like = pm.Normal.dist(x_im1, sd=1).logp(x_i)
return boundary(value[:,0]) + tt.sum(innov_like)
time_rating = pm.DensityDist('time_rating', AR1_vec, shape=(n_teams, n_periods))
diff = time_rating[obs_team_1, obs_period] - time_rating[obs_team_2, obs_period]
p = 0.5*pm.math.tanh(diff)+0.5
wl = pm.Bernoulli('observed wl', p=p, observed=(data['Team 1 ID'] == data['winner']).values)
# try approximating instead
with rating_model:
approx = pm.fit(20000, method='nfvi')
ap_trace = approx.sample(1000)
with rating_model:
trace = pm.sample(1000, init='advi+adapt_diag', n_init=20000, tune=250, nuts_kwargs={'target_accept': 0.9, 'max_treedepth': 15}) # tune=1000, nuts_kwargs={'target_accept': 0.95}
pm.traceplot(trace, varnames=['rating'])
v = {4411: 'NiP', 5752: 'Cloud9', 6665: 'Astralis', 7157: 'Rogue', 6667: 'FaZe', 4991: 'fnatic'}
f, ax = plt.subplots(figsize=(8,3), sharex=True, sharey=True)
sns.set_palette('Paired', len(v))
plt.ylim(0,3.0)
[sns.kdeplot(trace['rating'][:,tmap[i]], shade=True, alpha=0.55, legend=True, ax=ax, label=v) for i,v in v.items()]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
Explanation: Pymc Model
Determining Binary Win Loss: $wl_{t,i,j}$
$$
\rho \sim Beta(5,8) \\
\omega_k \sim HC(0.5) \\
\sigma \sim HC(0.5) \\
R_{k,0} \sim N(0, \omega^2) \\
R_{k,t} \sim N(\rho R_{k,t-1}, \sigma^2) \\
wl_{t,i,j} \sim B(p = 0.5\text{Tanh}(R_{i,t}-R_{j,t})+0.5)
$$
End of explanation
v = {4411: 'NiP', 5752: 'Cloud9', 6665: 'Astralis', 7157: 'Rogue', 6667: 'FaZe', 4991: 'fnatic'}
num_rows = int(np.ceil(n_periods/2))
f, ax = plt.subplots(num_rows, 2, figsize=(16,20), sharex=True, sharey=True)
ax = ax.flatten()
sns.set_palette('Paired', n_teams)
plt.ylim(0,10.0)
for i in np.arange(n_periods):
ax[i].set_title(periods[i])
[sns.kdeplot(ap_trace['time_rating'][:,i,tmap[j]], shade=True, alpha=0.55, legend=True, ax=ax[i], label=v) for j,v in v.items()]
ax[i].legend(loc='center left', bbox_to_anchor=(1, 0.5))
Explanation: Plot the last period rating for some teams
End of explanation
num_rows = int(np.ceil(len(v)/4))
f, ax = plt.subplots(num_rows, 4, figsize=(16,8), sharex=True, sharey=True)
ax = ax.flatten()
condensed_ratings = {j: np.vstack([ap_trace['time_rating'][:,i,tmap[j], ] for i in range(n_periods)]).T for j,n in v.items()}
for i,(j,n) in enumerate(v.items()):
ax[i].set_title(n)
sns.tsplot(condensed_ratings[j], color='black', ci='sd', ax=ax[i], marker='s', linewidth=1)
Explanation: Plot some over time ratings
End of explanation |
5,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing Maps as Arrays
If the keys are natural numbers less than a given natural number n that is not
too big, a map can be implemented via an array.
The class ArrayMap shows how this is done. The constructor __init__ takes a second argument n. This argument specifies the size of the array.
Step1: Lets compute the primes up to $n = 10000$. | Python Code:
class ArrayMap:
def __init__(self, n):
self.mArray = [None] * n
def find(self, k):
return self.mArray[k]
def insert(self, k, v):
self.mArray[k] = v
def delete(self, k):
self.mArray[k] = None
def __repr__(self):
result = '{ '
for key, value in enumerate(self.mArray):
if value != None:
result += f'{key}: {value}, '
if result == '{ ':
return '{}'
result = result[:-2] + ' }'
return result
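# Quick demo: map the keys 0..9 to their squares, look each one up, then delete them all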
squares = ArrayMap(10)
squares
for i in range(10):
squares.insert(i, i * i)
squares
for i in range(10):
print(f'find({i}) = {squares.find(i)}');
for i in range(10):
squares.delete(i)
squares
Explanation: Implementing Maps as Arrays
If the keys are natural numbers less than a given natural number n that is not
too big, a map can be implemented via an array.
The class ArrayMap shows how this is done. The constructor __init__ takes a second argument n. This argument specifies the size of the array.
End of explanation
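# Sieve of Eratosthenes on top of ArrayMap: insert every number from 2..n, delete the composites, and print the surviving primes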
n = 10000
%%time
S = ArrayMap(n+1)
for i in range(2, n+1):
S.insert(i, i)
for i in range(2, n // 2 + 1):
for j in range(i, n+1):
if i * j > n:
break
S.delete(i * j)
for k in range(2, n+1):
if S.find(k):
print(k, end=' ')
Explanation: Let's compute the primes up to $n = 10000$.
End of explanation |
5,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment 4 Solution
Step1: 4. The Database object you just created has a phases attribute that is a dictionary. Print this dictionary. The keys are the phase names and the values are Phase objects. Each phase object has a constituents attribute that returns a tuple of the sublattice model for that phase. Print the sublattice model of the BCC_A2 phase and confirm that the sublattice model is ((CU, SN), (VA)). Note that the sublattices are wrapped as frozensets. A frozenset is an immutable version of set.
Step2: 5. The SN-PB system.
i. The tin-lead system is a simple eutectic system with only three phases. Import binplot from pycalphad and use the binplot function to plot the binary phase diagram of the SN-PB system described in the database you created. Use the phases ['LIQUID', 'BCT_A5', 'FCC_A1']. Use conditions P=101325, T=(300, 700, 5), X(SN)=(0,1,0.005). Identify each single and two phase region.
Remember to use the %matplotlib inline magic if you are using a Jupyter Notebook!
Step3: ii. Use calculate from pycalphad to calculate Gibbs free energies for each of these phases. Use the xarray Dataset returned from the calculate function and matplotlib to plot the free energy curves as a function of composition. Do this at 400 K, 450 K, 475 K and 600 K. Try to use the pycalphad.plot.utils.phase_legend function to create a phase legend
See cell 6 in the pycalphad binary examples for an example.
Step4: 6. The BI-PB binary system.
i. Like before, would like to plot a phase diagram. This time, there might be more stable phases and we would like to specify by hand. Write a function (possibly called filter_active_phases) to take a pycalphad Database and a list of compositions and return a list of only active phases. An active phase can be defined by a phase that has at least one of the desired components in each sublattice. E.g. a sublattice model of ((BI,SN), (PB,SN)) is valid for the components BI-PB-VA, but the sublattice model ((BI,SN), (SN)) is not because the second sublattice does not contain any of the components. If you have done this right, running the function should return ['RHOMBO_A7', 'LIQUID', 'HCP_A3', 'FCC_A1', 'BCT_A5'] for BI-PB-VA components. Hint
Step5: ii. Perform an equilibrium calculation with pycalphad using the active phases from your function and the conditions P=101325, T=(300, 700, 10), X(PB)=(0,1,0.02). Store the result in a variable, such as eq_result.
Step6: iii. Plot the binary phase diagram, but instead of using binplot, which will run another equilibrium calculation, import eqplot (from pycalphad.plot.eqplot import eqplot) and simply pass in the Dataset returned from the equilibrium calculation.
Step7: iv. Previously we calculated and plotted Gibbs free energy for each phase using calculate. We can use calculate to give us properties for individual phases, but often equilibrium properties are of interest. Use equilibrium to calculate and plot the equilibrium free energy as a function of composition at 425 K. Bonus
Step8: 7. The DO3 phase.
i. The phases we have looked at so far have had very simple sublattice models. The DO3 sublattice model is ((CU, SN), (CU, SN)) with sublattice site ratios of (0.75, 0.25). That means there can be four different endmembers (an endmember has no mixing within sublattices) of this phase, pure Cu and pure Sn (((CU), (CU)) and ((SN), (SN))), CU-25SN (((CU), (SN))), and SN-25CU (((SN), (CU))). Then with mixing, the entire free energy surface between these endmembers can be calculated. Use calculate to plot the free energy surface of this phase at 500 K and find the endmembers.
Step9: ii. (TODO) On Brandon's Jupyter notebooks repository, there is a thermodynamics folder that contains a free energy surface that interactively changes with temperature
8. The AG-BI-CU system.
i. Finally, we will look at a ternary system. pycalphad supports arbitrarily large multicomponent systems. To save time, a precalculated Ag-Bi-Cu ternary system from 300 K to 1600 K. Use xarray's open_dataset function to create a Dataset from the Ag-Bi-Cu-eq.nc file in this folder.
Step10: ii. Select several temperature slices (with the Dataset methods) of that ternary Dataset. Use the same eqplot function as earlier to plot these temperatures slices as isotherms. | Python Code:
from pycalphad import Database
solder_dbf = Database('Ag-Bi-Cu-Pb-Sb-Sn-nist-solders.tdb')
Explanation: Assignment 4 Solution: Introduction to pycalphad
User questions and feedback can be directed to the pycalphad Google Group. Bugs can be reported to the GitHub repo.
1. Ensure pycalphad is installed. For this you will need at version 0.5.2. If this version is not available, please install the development version.
Installation guide
2. Open the pycalphad API documentation in your browser as well as the pycalphad examples. The rest of this assignment can be completed by using and adapting the code in the examples and referring to the API documentation for more information.
API docs
pycalphad examples
3. Start up an interactive Python interpreter (or, preferably, a Jupyter notebook) and create a pycalphad Database from the Ag-Bi-Cu-Pb-Sb-Sn-nist-solders.tdb thermodynamic database in this directory.
End of explanation
print(solder_dbf.phases, end='\n\n')
print('BCC_A2 sublattice model:')
print(solder_dbf.phases['BCC_A2'].constituents)
Explanation: 4. The Database object you just created has a phases attribute that is a dictionary. Print this dictionary. The keys are the phase names and the values are Phase objects. Each phase object has a constituents attribute that returns a tuple of the sublattice model for that phase. Print the sublattice model of the BCC_A2 phase and confirm that the sublattice model is ((CU, SN), (VA)). Note that the sublattices are wrapped as frozensets. A frozenset is an immutable version of set.
End of explanation
%matplotlib inline
from pycalphad import binplot, variables as v
comps = ['PB', 'SN', 'VA']
phases = ['LIQUID', 'BCT_A5', 'FCC_A1']
conds = {v.P: 101325, v.T: (300, 700, 10), v.X('SN'): (0, 1, 0.01)}
binplot(solder_dbf, comps, phases, conds)
Explanation: 5. The SN-PB system.
i. The tin-lead system is a simple eutectic system with only three phases. Import binplot from pycalphad and use the binplot function to plot the binary phase diagram of the SN-PB system described in the database you created. Use the phases ['LIQUID', 'BCT_A5', 'FCC_A1']. Use conditions P=101325, T=(300, 700, 5), X(SN)=(0,1,0.005). Identify each single and two phase region.
Remember to use the %matplotlib inline magic if you are using a Jupyter Notebook!
End of explanation
from pycalphad import calculate
from pycalphad.plot.utils import phase_legend
import matplotlib.pyplot as plt
import numpy as np
legend_handles, colorlist = phase_legend(phases)
temperatures = 400, 450, 475, 600
for temp in temperatures:
fig = plt.figure()
ax = fig.gca()
ax.set_title('Pb-Sn {} K'.format(temp))
ax.set_ylabel('GM')
ax.set_xlabel('X(Sn)')
for phase in phases:
result = calculate(solder_dbf, comps, phase, T=temp, P=101325, output='GM')
ax.scatter(result.X.sel(component='SN'), result.GM,
marker='.', s=5, color=colorlist[phase.upper()])
ax.set_xlim((0, 1))
ax.legend(handles=legend_handles, loc='center left', bbox_to_anchor=(1, 0.6))
Explanation: ii. Use calculate from pycalphad to calculate Gibbs free energies for each of these phases. Use the xarray Dataset returned from the calculate function and matplotlib to plot the free energy curves as a function of composition. Do this at 400 K, 450 K, 475 K and 600 K. Try to use the pycalphad.plot.utils.phase_legend function to create a phase legend
See cell 6 in the pycalphad binary examples for an example.
End of explanation
def filter_active_phases(dbf, comps):
active_phases = []
for phase_name, phase in dbf.phases.items():
sublattice_is_valid = [] # a list of booleans, e.g. [True, True, True] means that all 3 sublattices are valid
for sublattice in phase.constituents:
valid_sublattice = len(set(comps).intersection(sublattice)) > 0
sublattice_is_valid.append(valid_sublattice)
if all(sublattice_is_valid):
active_phases.append(phase_name)
return active_phases
phases = filter_active_phases(solder_dbf, ['BI', 'PB', 'VA'])
print('Active phase {}'.format(phases))
print('Inactive phases {}'.format(solder_dbf.phases.keys() - set(phases)))
Explanation: 6. The BI-PB binary system.
i. Like before, we would like to plot a phase diagram. This time, there might be more stable phases, and we would like to specify them by hand. Write a function (possibly called filter_active_phases) to take a pycalphad Database and a list of components and return a list of only the active phases. An active phase can be defined as a phase that has at least one of the desired components in each sublattice. E.g. a sublattice model of ((BI,SN), (PB,SN)) is valid for the components BI-PB-VA, but the sublattice model ((BI,SN), (SN)) is not, because the second sublattice does not contain any of the components. If you have done this right, running the function should return ['RHOMBO_A7', 'LIQUID', 'HCP_A3', 'FCC_A1', 'BCT_A5'] for BI-PB-VA components. Hint: use set methods
End of explanation
from pycalphad import equilibrium
comps = ['BI', 'PB', 'VA']
phases = filter_active_phases(solder_dbf, comps)
conds = {v.P: 101325, v.T: (300, 700, 10), v.X('PB'): (0, 1, 0.02)}
eq_result = equilibrium(solder_dbf, comps, phases, conds)
print(eq_result)
Explanation: ii. Perform an equilibrium calculation with pycalphad using the active phases from your function and the conditions P=101325, T=(300, 700, 10), X(PB)=(0,1,0.02). Store the result in a variable, such as eq_result.
End of explanation
from pycalphad.plot.eqplot import eqplot
eqplot(eq_result)
Explanation: iii. Plot the binary phase diagram, but instead of using binplot, which will run another equilibrium calculation, import eqplot (from pycalphad.plot.eqplot import eqplot) and simply pass in the Dataset returned from the equilibrium calculation.
End of explanation
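# Equilibrium at a single temperature (425 K) over the full Bi-Pb composition range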
from pycalphad import equilibrium
conds = {v.P: 101325, v.T: 425, v.X('PB'): (0, 1, 0.005)}
eq_result = equilibrium(solder_dbf, comps, phases, conds)
print(eq_result)
legend_handles, colorlist = phase_legend(phases)
fig = plt.figure()
ax = fig.gca()
ax.set_title('BI-PB 425 K')
ax.set_ylabel('GM')
ax.set_xlabel('X(Pb)')
for phase in phases:
result = calculate(solder_dbf, comps, phase, T=425, P=101325, output='GM')
ax.scatter(result.X.sel(component='PB'), result.GM,
marker='.', s=5, color=colorlist[phase.upper()])
# finally plot the equilibrium values
ax.scatter(eq_result.X_PB, eq_result.GM, c='r', lw=0)
ax.set_xlim((0, 1))
ax.legend(handles=legend_handles, loc='center left', bbox_to_anchor=(1, 0.6))
Explanation: iv. Previously we calculated and plotted Gibbs free energy for each phase using calculate. We can use calculate to give us properties for individual phases, but often equilibrium properties are of interest. Use equilibrium to calculate and plot the equilibrium free energy as a function of composition at 425 K. Bonus: plot the free energies of the phases on top of this plot.
End of explanation
fig = plt.figure()
ax = fig.gca()
ax.set_title('DO3 phase free energy 500 K')
ax.set_ylabel('GM')
ax.set_xlabel('X(Sn)')
result = calculate(solder_dbf, ['CU', 'SN'], 'DO3', T=500, P=101325, output='GM')
ax.scatter(result.X.sel(component='SN'), result.GM,
marker='.', s=5,)
ax.set_xlim((0, 1))
Explanation: 7. The DO3 phase.
i. The phases we have looked at so far have had very simple sublattice models. The DO3 sublattice model is ((CU, SN), (CU, SN)) with sublattice site ratios of (0.75, 0.25). That means there can be four different endmembers (an endmember has no mixing within sublattices) of this phase, pure Cu and pure Sn (((CU), (CU)) and ((SN), (SN))), CU-25SN (((CU), (SN))), and SN-25CU (((SN), (CU))). Then with mixing, the entire free energy surface between these endmembers can be calculated. Use calculate to plot the free energy surface of this phase at 500 K and find the endmembers.
End of explanation
import xarray as xr
eq_tern = xr.open_dataset('Ag-Bi-Cu-eq.nc')
Explanation: ii. (TODO) On Brandon's Jupyter notebooks repository, there is a thermodynamics folder that contains a free energy surface that interactively changes with temperature
8. The AG-BI-CU system.
i. Finally, we will look at a ternary system. pycalphad supports arbitrarily large multicomponent systems. To save time, a precalculated Ag-Bi-Cu ternary system from 300 K to 1600 K is provided. Use xarray's open_dataset function to create a Dataset from the Ag-Bi-Cu-eq.nc file in this folder.
End of explanation
fig = plt.figure()
ax = eqplot(eq_tern.sel(T=[500]))
ax.set_title('Ag-Bi-Cu 500 K')
fig = plt.figure()
ax = eqplot(eq_tern.sel(T=[600]))
ax.set_title('Ag-Bi-Cu 600 K')
fig = plt.figure()
ax = eqplot(eq_tern.sel(T=[700]))
ax.set_title('Ag-Bi-Cu 700 K')
fig = plt.figure()
ax = eqplot(eq_tern.sel(T=[800]))
ax.set_title('Ag-Bi-Cu 800 K')
fig = plt.figure()
ax = eqplot(eq_tern.sel(T=[900]))
ax.set_title('Ag-Bi-Cu 900 K')
fig = plt.figure()
ax = eqplot(eq_tern.sel(T=[1000]))
ax.set_title('Ag-Bi-Cu 1000 K')
Explanation: ii. Select several temperature slices (with the Dataset methods) of that ternary Dataset. Use the same eqplot function as earlier to plot these temperature slices as isotherms.
End of explanation |
5,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GeoViews is a Python library that makes it easy to explore and visualize geographical, meteorological, and oceanographic datasets, such as those used in weather, climate, and remote sensing research.
GeoViews is built on the HoloViews library for building flexible visualizations of multidimensional data. GeoViews adds a family of geographic plot types based on the Cartopy library, plotted using either the Matplotlib or Bokeh packages.
With GeoViews, you can now work easily and naturally with large, multidimensional geographic datasets, instantly visualizing any subset or combination of them, while always being able to access the raw data underlying any plot. Here's a simple example
Step1: GeoViews is designed to work well with the Iris and xarray libraries for working with multidimensional arrays, such as those stored in netCDF files. GeoViews also accepts data as NumPy arrays and Pandas data frames. In each case, the data can be left stored in its original, native format, wrapped in a HoloViews or GeoViews object that provides instant interactive visualizations.
The following example loads a dataset originally taken from iris-sample-data and quickly builds an interactive tool for exploring how the data changes over time
Step2: GeoViews also natively supports geopandas datastructures allowing us to easily plot shapefiles and choropleths | Python Code:
import geoviews as gv
import geoviews.feature as gf
import xarray as xr
from cartopy import crs
gv.extension('bokeh', 'matplotlib')
(gf.ocean + gf.land + gf.ocean * gf.land * gf.coastline * gf.borders).opts(
'Feature', projection=crs.Geostationary(), global_extent=True, height=325).cols(3)
Explanation: GeoViews is a Python library that makes it easy to explore and visualize geographical, meteorological, and oceanographic datasets, such as those used in weather, climate, and remote sensing research.
GeoViews is built on the HoloViews library for building flexible visualizations of multidimensional data. GeoViews adds a family of geographic plot types based on the Cartopy library, plotted using either the Matplotlib or Bokeh packages.
With GeoViews, you can now work easily and naturally with large, multidimensional geographic datasets, instantly visualizing any subset or combination of them, while always being able to access the raw data underlying any plot. Here's a simple example:
End of explanation
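# Wrap the netCDF ensemble in a gv.Dataset and map surface_temperature to longitude/latitude Images, grouped over the remaining dimensions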
dataset = gv.Dataset(xr.open_dataset('./data/ensemble.nc'))
ensemble = dataset.to(gv.Image, ['longitude', 'latitude'], 'surface_temperature')
gv.output(ensemble.opts(cmap='viridis', colorbar=True, fig_size=200, backend='matplotlib') * gf.coastline(),
backend='matplotlib')
Explanation: GeoViews is designed to work well with the Iris and xarray libraries for working with multidimensional arrays, such as those stored in netCDF files. GeoViews also accepts data as NumPy arrays and Pandas data frames. In each case, the data can be left stored in its original, native format, wrapped in a HoloViews or GeoViews object that provides instant interactive visualizations.
The following example loads a dataset originally taken from iris-sample-data and quickly builds an interactive tool for exploring how the data changes over time:
End of explanation
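# Choropleth from the geopandas Natural Earth sample: country polygons coloured by pop_est, with the country name shown on hover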
import geopandas as gpd
gv.Polygons(gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')), vdims=['pop_est', ('name', 'Country')]).opts(
tools=['hover'], width=600, projection=crs.Robinson()
)
Explanation: GeoViews also natively supports geopandas data structures, allowing us to easily plot shapefiles and choropleths
End of explanation |
5,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this
Step4: Affine layer
Step5: Affine layer
Step6: ReLU layer
Step7: ReLU layer
Step8: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass
Step9: Loss layers
Step10: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
Step11: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
Step12: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
Step13: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
Step14: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
Step15: Inline question
Step16: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
Step17: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop
Step18: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules
Step19: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
Step20: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set. | Python Code:
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
Explanation: Fully-Connected Neural Nets
In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures.
In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a forward and a backward function. The forward function will receive inputs, weights, and other parameters and will return both an output and a cache object storing data needed for the backward pass, like this:
```python
def layer_forward(x, w):
Receive inputs x and weights w
# Do some computations ...
z = # ... some intermediate value
# Do some more computations ...
out = # the output
cache = (x, w, z, out) # Values we need to compute gradients
return out, cache
```
The backward pass will receive upstream derivatives and the cache object, and will return gradients with respect to the inputs and weights, like this:
```python
def layer_backward(dout, cache):
Receive derivative of loss with respect to outputs and cache,
and compute derivative with respect to inputs.
# Unpack cache values
x, w, z, out = cache
# Use values in cache to compute derivatives
dx = # Derivative of loss with respect to x
dw = # Derivative of loss with respect to w
return dx, dw
```
After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce Dropout as a regularizer and Batch Normalization as a tool to more efficiently optimize deep networks.
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print 'Testing affine_forward function:'
print 'difference: ', rel_error(out, correct_out)
print 'prod result', np.prod(input_shape)
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
End of explanation
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be around 1e-10
print 'Testing affine_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: Affine layer: backward
Now implement the affine_backward function and test your implementation using numeric gradient checking.
End of explanation
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print 'Testing relu_forward function:'
print 'difference: ', rel_error(out, correct_out)
Explanation: ReLU layer: forward
Implement the forward pass for the ReLU activation function in the relu_forward function and test your implementation using the following:
End of explanation
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print 'Testing relu_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
Explanation: ReLU layer: backward
Now implement the backward pass for the ReLU activation function in the relu_backward function and test your implementation using numeric gradient checking:
End of explanation
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print 'Testing affine_relu_forward:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
Explanation: "Sandwich" layers
There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file cs231n/layer_utils.py.
For now take a look at the affine_relu_forward and affine_relu_backward functions, and run the following to numerically gradient check the backward pass:
End of explanation
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print 'Testing svm_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print '\nTesting softmax_loss:'
print 'loss: ', loss
print 'dx error: ', rel_error(dx_num, dx)
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. You should still make sure you understand how they work by looking at the implementations in cs231n/layers.py.
You can make sure that the implementations are correct by running the following:
End of explanation
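# Build a small TwoLayerNet (5 inputs, 50 hidden units, 7 classes) and check its initialization,
# forward-pass scores, and losses against reference values, then run a numeric gradient check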
N, D, H, C = 3, 5, 50, 7
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)
std = 1e-2
model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std)
print 'Testing initialization ... '
W1_std = abs(model.params['W1'].std() - std)
b1 = model.params['b1']
W2_std = abs(model.params['W2'].std() - std)
b2 = model.params['b2']
assert W1_std < std / 10, 'First layer weights do not seem right'
assert np.all(b1 == 0), 'First layer biases do not seem right'
assert W2_std < std / 10, 'Second layer weights do not seem right'
assert np.all(b2 == 0), 'Second layer biases do not seem right'
print 'Testing test-time forward pass ... '
model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H)
model.params['b1'] = np.linspace(-0.1, 0.9, num=H)
model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C)
model.params['b2'] = np.linspace(-0.9, 0.1, num=C)
X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T
scores = model.loss(X)
correct_scores = np.asarray(
[[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096],
[12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143],
[12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]])
scores_diff = np.abs(scores - correct_scores).sum()
assert scores_diff < 1e-6, 'Problem with test-time forward pass'
print 'Testing training loss (no regularization)'
y = np.asarray([0, 5, 1])
loss, grads = model.loss(X, y)
correct_loss = 3.4702243556
assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'
model.reg = 1.0
loss, grads = model.loss(X, y)
correct_loss = 26.5948426952
assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'
for reg in [0.0, 0.7]:
print 'Running numeric gradient check with reg = ', reg
model.reg = reg
loss, grads = model.loss(X, y)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
Explanation: Two-layer network
In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two layer network using these modular implementations.
Open the file cs231n/classifiers/fc_net.py and complete the implementation of the TwoLayerNet class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. You can run the cell below to test your implementation.
End of explanation
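For orientation, a hedged sketch of what TwoLayerNet.loss(self, X, y=None) ends up doing, assuming the affine_relu_forward/backward and affine_forward/backward helpers from cs231n/layer_utils.py and cs231n/layers.py; the graded version belongs in fc_net.py and may be organized differently.
def loss_sketch(self, X, y=None):
    # Forward pass: affine - relu - affine - softmax
    h1, cache1 = affine_relu_forward(X, self.params['W1'], self.params['b1'])
    scores, cache2 = affine_forward(h1, self.params['W2'], self.params['b2'])
    if y is None:
        return scores
    loss, dscores = softmax_loss(scores, y)
    loss += 0.5 * self.reg * (np.sum(self.params['W1'] ** 2) + np.sum(self.params['W2'] ** 2))
    # Backward pass: accumulate gradients, then add the L2 regularization terms
    grads = {}
    dh1, grads['W2'], grads['b2'] = affine_backward(dscores, cache2)
    dX, grads['W1'], grads['b1'] = affine_relu_backward(dh1, cache1)
    grads['W1'] += self.reg * self.params['W1']
    grads['W2'] += self.reg * self.params['W2']
    return loss, grads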
model = TwoLayerNet()
##############################################################################
# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least #
# 50% accuracy on the validation set. #
##############################################################################
solver = Solver(model, data,
                optim_config={'learning_rate': 1e-3},
                lr_decay=0.95,
                print_every=100)
solver.train()
##############################################################################
# END OF YOUR CODE #
##############################################################################
# Run this cell to visualize training loss and train / val accuracy
plt.subplot(2, 1, 1)
plt.title('Training loss')
plt.plot(solver.loss_history, 'o')
plt.xlabel('Iteration')
plt.subplot(2, 1, 2)
plt.title('Accuracy')
plt.plot(solver.train_acc_history, '-o', label='train')
plt.plot(solver.val_acc_history, '-o', label='val')
plt.plot([0.5] * len(solver.val_acc_history), 'k--')
plt.xlabel('Epoch')
plt.legend(loc='lower right')
plt.gcf().set_size_inches(15, 12)
plt.show()
Explanation: Solver
In the previous assignment, the logic for training models was coupled to the models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.
Open the file cs231n/solver.py and read through it to familiarize yourself with the API. After doing so, use a Solver instance to train a TwoLayerNet that achieves at least 50% accuracy on the validation set.
End of explanation
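If you want a quick check against the 50% target after training, the stock Solver keeps the best validation accuracy it has seen (attribute name assumed from cs231n/solver.py):
print 'Best validation accuracy: ', solver.best_val_acc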
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for reg in [0, 3.14]:
print 'Running check with reg = ', reg
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
reg=reg, weight_scale=5e-2, dtype=np.float64)
loss, grads = model.loss(X, y)
print 'Initial loss: ', loss
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
Explanation: Multilayer network
Next you will implement a fully-connected network with an arbitrary number of hidden layers.
Read through the FullyConnectedNet class in the file cs231n/classifiers/fc_net.py.
Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout or batch normalization; we will add those features soon.
Initial loss and gradient check
As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. Do the initial losses seem reasonable?
For gradient checking, you should expect to see errors around 1e-6 or less.
End of explanation
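One hedged way to picture the parameter initialization inside FullyConnectedNet.__init__, stringing the layer sizes together from input_dim, hidden_dims and num_classes (the naming convention W1/b1, W2/b2, ... matches the gradient-check loop below):
dims = [input_dim] + hidden_dims + [num_classes]
for i in range(len(dims) - 1):
    self.params['W%d' % (i + 1)] = weight_scale * np.random.randn(dims[i], dims[i + 1])
    self.params['b%d' % (i + 1)] = np.zeros(dims[i + 1])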
# TODO: Use a three-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
weight_scale = 1.0/7
learning_rate = 1e-3
model = FullyConnectedNet([100, 100], reg = 0.001,
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. You will need to tweak the learning rate and initialization scale, but you should be able to overfit and achieve 100% training accuracy within 20 epochs.
End of explanation
# TODO: Use a five-layer Net to overfit 50 training examples.
num_train = 50
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
print np.mean(small_data['X_train'])
learning_rate = 1e-3
weight_scale = 1.0/7
model = FullyConnectedNet([100, 100, 100, 100], reg = 0.001,
weight_scale=weight_scale, dtype=np.float64)
solver = Solver(model, small_data,
print_every=10, num_epochs=20, batch_size=25,
update_rule='sgd',
optim_config={
'learning_rate': learning_rate,
}
)
solver.train()
plt.plot(solver.loss_history, 'o')
plt.title('Training loss history')
plt.xlabel('Iteration')
plt.ylabel('Training loss')
plt.show()
Explanation: Now try to use a five-layer network with 100 units on each layer to overfit 50 training examples. Again you will have to adjust the learning rate and weight initialization, but you should be able to achieve 100% training accuracy within 20 epochs.
End of explanation
from cs231n.optim import sgd_momentum
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
v = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-3, 'velocity': v}
next_w, _ = sgd_momentum(w, dw, config=config)
expected_next_w = np.asarray([
[ 0.1406, 0.20738947, 0.27417895, 0.34096842, 0.40775789],
[ 0.47454737, 0.54133684, 0.60812632, 0.67491579, 0.74170526],
[ 0.80849474, 0.87528421, 0.94207368, 1.00886316, 1.07565263],
[ 1.14244211, 1.20923158, 1.27602105, 1.34281053, 1.4096 ]])
expected_velocity = np.asarray([
[ 0.5406, 0.55475789, 0.56891579, 0.58307368, 0.59723158],
[ 0.61138947, 0.62554737, 0.63970526, 0.65386316, 0.66802105],
[ 0.68217895, 0.69633684, 0.71049474, 0.72465263, 0.73881053],
[ 0.75296842, 0.76712632, 0.78128421, 0.79544211, 0.8096 ]])
print 'next_w error: ', rel_error(next_w, expected_next_w)
print 'velocity error: ', rel_error(expected_velocity, config['velocity'])
Explanation: Inline question:
Did you notice anything about the comparative difficulty of training the three-layer net vs training the five-layer net?
Answer:
In general the five-layer net is harder to train: it is much more sensitive to the weight initialization scale and the learning rate, and usually needs a larger weight_scale (and some tuning) before the training loss moves at all, whereas the three-layer net overfits the 50 examples across a much wider range of hyperparameters.
Update rules
So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD.
SGD+Momentum
Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent.
Open the file cs231n/optim.py and read the documentation at the top of the file to make sure you understand the API. Implement the SGD+momentum update rule in the function sgd_momentum and run the following to check your implementation. You should see errors less than 1e-8.
End of explanation
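A minimal hedged sketch of the update being checked (the real implementation lives in cs231n/optim.py; the default momentum of 0.9 is an assumption consistent with the expected values above):
def sgd_momentum_sketch(w, dw, config=None):
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw  # velocity update
    next_w = w + v
    config['velocity'] = v
    return next_w, config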
num_train = 4000
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
for update_rule in ['sgd', 'sgd_momentum']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': 1e-2,
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster.
End of explanation
# Test RMSProp implementation; you should see errors less than 1e-7
from cs231n.optim import rmsprop
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
cache = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'cache': cache}
next_w, _ = rmsprop(w, dw, config=config)
expected_next_w = np.asarray([
[-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],
[-0.132737, -0.08078555, -0.02881884, 0.02316247, 0.07515774],
[ 0.12716641, 0.17918792, 0.23122175, 0.28326742, 0.33532447],
[ 0.38739248, 0.43947102, 0.49155973, 0.54365823, 0.59576619]])
expected_cache = np.asarray([
[ 0.5976, 0.6126277, 0.6277108, 0.64284931, 0.65804321],
[ 0.67329252, 0.68859723, 0.70395734, 0.71937285, 0.73484377],
[ 0.75037008, 0.7659518, 0.78158892, 0.79728144, 0.81302936],
[ 0.82883269, 0.84469141, 0.86060554, 0.87657507, 0.8926 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'cache error: ', rel_error(expected_cache, config['cache'])
# Test Adam implementation; you should see errors around 1e-7 or less
from cs231n.optim import adam
N, D = 4, 5
w = np.linspace(-0.4, 0.6, num=N*D).reshape(N, D)
dw = np.linspace(-0.6, 0.4, num=N*D).reshape(N, D)
m = np.linspace(0.6, 0.9, num=N*D).reshape(N, D)
v = np.linspace(0.7, 0.5, num=N*D).reshape(N, D)
config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}
next_w, _ = adam(w, dw, config=config)
expected_next_w = np.asarray([
[-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],
[-0.1380274, -0.08544591, -0.03286534, 0.01971428, 0.0722929],
[ 0.1248705, 0.17744702, 0.23002243, 0.28259667, 0.33516969],
[ 0.38774145, 0.44031188, 0.49288093, 0.54544852, 0.59801459]])
expected_v = np.asarray([
[ 0.69966, 0.68908382, 0.67851319, 0.66794809, 0.65738853,],
[ 0.64683452, 0.63628604, 0.6257431, 0.61520571, 0.60467385,],
[ 0.59414753, 0.58362676, 0.57311152, 0.56260183, 0.55209767,],
[ 0.54159906, 0.53110598, 0.52061845, 0.51013645, 0.49966, ]])
expected_m = np.asarray([
[ 0.48, 0.49947368, 0.51894737, 0.53842105, 0.55789474],
[ 0.57736842, 0.59684211, 0.61631579, 0.63578947, 0.65526316],
[ 0.67473684, 0.69421053, 0.71368421, 0.73315789, 0.75263158],
[ 0.77210526, 0.79157895, 0.81105263, 0.83052632, 0.85 ]])
print 'next_w error: ', rel_error(expected_next_w, next_w)
print 'v error: ', rel_error(expected_v, config['v'])
print 'm error: ', rel_error(expected_m, config['m'])
Explanation: RMSProp and Adam
RMSProp [1] and Adam [2] are update rules that set per-parameter learning rates by using a running average of the second moments of gradients.
In the file cs231n/optim.py, implement the RMSProp update rule in the rmsprop function and implement the Adam update rule in the adam function, and check your implementations using the tests below.
[1] Tijmen Tieleman and Geoffrey Hinton. "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude." COURSERA: Neural Networks for Machine Learning 4 (2012).
[2] Diederik Kingma and Jimmy Ba, "Adam: A Method for Stochastic Optimization", ICLR 2015.
End of explanation
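Hedged sketches of both updates, using the config keys exercised by the tests above (decay_rate, epsilon, beta1, beta2 defaults are assumptions; the graded versions belong in cs231n/optim.py):
def rmsprop_sketch(w, dw, config):
    lr = config['learning_rate']
    decay = config.get('decay_rate', 0.99)
    eps = config.get('epsilon', 1e-8)
    config['cache'] = decay * config['cache'] + (1 - decay) * dw ** 2
    next_w = w - lr * dw / (np.sqrt(config['cache']) + eps)
    return next_w, config

def adam_sketch(w, dw, config):
    lr = config['learning_rate']
    beta1 = config.get('beta1', 0.9)
    beta2 = config.get('beta2', 0.999)
    eps = config.get('epsilon', 1e-8)
    config['t'] += 1
    config['m'] = beta1 * config['m'] + (1 - beta1) * dw
    config['v'] = beta2 * config['v'] + (1 - beta2) * dw ** 2
    m_hat = config['m'] / (1 - beta1 ** config['t'])  # bias-corrected first moment
    v_hat = config['v'] / (1 - beta2 ** config['t'])  # bias-corrected second moment
    next_w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return next_w, config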
learning_rates = {'rmsprop': 1e-4, 'adam': 1e-3}
for update_rule in ['adam', 'rmsprop']:
print 'running with ', update_rule
model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2)
solver = Solver(model, small_data,
num_epochs=5, batch_size=100,
update_rule=update_rule,
optim_config={
'learning_rate': learning_rates[update_rule]
},
verbose=True)
solvers[update_rule] = solver
solver.train()
print
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')
for update_rule, solver in solvers.iteritems():
plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label=update_rule)
plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label=update_rule)
plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label=update_rule)
for i in [1, 2, 3]:
plt.subplot(3, 1, i)
plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()
Explanation: Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:
End of explanation
best_model = None
################################################################################
# TODO: Train the best FullyConnectedNet that you can on CIFAR-10. You might   #
# find batch normalization and dropout useful. Store your best model in the    #
# best_model variable. #
################################################################################
# Hedged sketch (hyperparameters illustrative, not tuned): a deeper net trained
# with Adam usually clears the 50% validation-accuracy bar.
model = FullyConnectedNet([100, 100, 100, 100], weight_scale=5e-2, reg=1e-2)
solver = Solver(model, data, num_epochs=10, batch_size=100,
                update_rule='adam',
                optim_config={'learning_rate': 1e-3},
                print_every=100)
solver.train()
best_model = model
################################################################################
# END OF YOUR CODE #
################################################################################
Explanation: Train a good model!
Train the best fully-connected model that you can on CIFAR-10, storing your best model in the best_model variable. We require you to get at least 50% accuracy on the validation set using a fully-connected net.
If you are careful it should be possible to get accuracies above 55%, but we don't require it for this part and won't assign extra credit for doing so. Later in the assignment we will ask you to train the best convolutional network that you can on CIFAR-10, and we would prefer that you spend your effort working on convolutional nets rather than fully-connected nets.
You might find it useful to complete the BatchNormalization.ipynb and Dropout.ipynb notebooks before completing this part, since those techniques can help you train powerful models.
End of explanation
y_test_pred = np.argmax(best_model.loss(X_test), axis=1)
y_val_pred = np.argmax(best_model.loss(X_val), axis=1)
print 'Validation set accuracy: ', (y_val_pred == y_val).mean()
print 'Test set accuracy: ', (y_test_pred == y_test).mean()
Explanation: Test your model
Run your best model on the validation and test sets. You should achieve above 50% accuracy on the validation set.
End of explanation |
5,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing Pandas
From the docs
Step1: Introducing DataFrame
From the docs
Step2: The Jupyter Notebook automatically renders DataFrame as HTML!
Note the first column; this is an Index, and is an essential component of DataFrame. Here, it was auto-generated, but we can also set it
Step3: The Index plays a key role in slicing the DataFrame
Step4: We can also get individual columns
Step5: This is a Series, which is essentially a one-dimensional DataFrame, with a defined data type. Put another way, a DataFrame is a collection of Series.
Note that the Series retained the Index of our DataFrame, so we can use similar slicing
Step6: Series and DataFrame support element-wise operations
Step7: Using DataFrame with Django
Django gives us a handy way to build a list of dictionaries
Step8: DataFrame doesn't know what to do with a QuerySet; it wants something that looks more like a list.
We could use list(gig_values), but gig_values.iterator() is more efficient.
Step9: This is a good place to start, and we've already got the answer to "How many gigs have we played"?
However, there are a few ways we can make this easier to work with
Step10: The previous cell demonstrates a good practice
Step11: How many gigs have we played each year?
The date Index also allows for fast aggregration
Step12: What are our most active months?
Get the dates as a Series
Step13: Convert those to sortable month names
Step14: Count the unique values
Step15: What cities have we played?
Step16: When did we play in Pittsburgh?
We can use a Series of boolean values to slice our DataFrame
Step17: What states have we played?
The Gig model doesn't have a state field, so we need to parse it out. In vanilla Python, we'd do
Step18: With pandas, we can do the same thing for every Series element
Step19: What venues have we played?
DataFrame has powerful grouping and aggregration functionality
Step20: This Series has a MultiIndex. Very useful, but beyond the scope of this presentation... | Python Code:
import pandas as pd
pd.options.display.max_rows = 20
%matplotlib inline
Explanation: Introducing Pandas
From the docs:
A Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive.
We also use matplotlib:
A Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
Requirements:
(venv) $ pip install pandas matplotlib
We're going to see a sliver of the functionality provided by these packages.
End of explanation
df = pd.DataFrame([
{'integer': 1, 'float': 1.0, 'string': 'one'},
{'integer': 2, 'float': 2.0, 'string': 'two'},
{'integer': 2, 'float': 2.0, 'string': 'two'},
{'integer': 3, 'float': 3.0, 'string': 'three'},
])
# Print some details about the DataFrame
df.info()
df
Explanation: Introducing DataFrame
From the docs:
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table. It is generally the most commonly used pandas object.
There are many ways to get a DataFrame, but we'll start with a list of dictionaries.
End of explanation
df_index = df.set_index('string')
df_index
Explanation: The Jupyter Notebook automatically renders DataFrame as HTML!
Note the first column; this is an Index, and is an essential component of DataFrame. Here, it was auto-generated, but we can also set it:
End of explanation
# Slice by label
df_index.loc['two']
# Slice by position
df_index.iloc[-2:]
Explanation: The Index plays a key role in slicing the DataFrame:
End of explanation
floats = df_index['float']
floats
Explanation: We can also get individual columns:
End of explanation
floats['two']
Explanation: This is a Series, which is essentially a one-dimensional DataFrame, with a defined data type. Put another way, a DataFrame is a collection of Series.
Note that the Series retained the Index of our DataFrame, so we can use similar slicing:
End of explanation
df_index['float'] * df_index['integer']
df_index * df_index
number_format = 'Number {}'.format
df_index['integer'].apply(number_format)
df_index.applymap(number_format)
Explanation: Series and DataFrame support element-wise operations:
End of explanation
gig_values = Gig.objects.past().published().values('date', 'venue__name', 'venue__city')
gig_values[:5]
Explanation: Using DataFrame with Django
Django gives us a handy way to build a list of dictionaries:
End of explanation
gigs = pd.DataFrame(gig_values.iterator())
gigs.info()
gigs
Explanation: DataFrame doesn't know what to do with a QuerySet; it wants something that looks more like a list.
We could use list(gig_values), but gig_values.iterator() is more efficient.
End of explanation
gig_values = Gig.objects.past().published().values_list('date', 'venue__name', 'venue__city')
gig_values[:5]
gigs = pd.DataFrame(gig_values.iterator(), columns=['date', 'venue', 'city'])
gigs['date'] = pd.to_datetime(gigs['date'])
gigs = gigs.set_index('date').sort_index()
gigs.info()
gigs.head()
Explanation: This is a good place to start, and we've already got the answer to "How many gigs have we played"?
However, there are a few ways we can make this easier to work with:
Shorter column names
Predictable column order
Indexed and sorted by date
For more control, we'll use a list of tuples to initialize the DataFrame.
End of explanation
gigs.loc['2016']
Explanation: The previous cell demonstrates a good practice: make all of your modifications to a variable in one cell. This will help prevent surprises when you execute cells out of order as you play with code. If you need to make modifications later in the notebook, assign the result to a new variable.
Answering questions
What gigs did we play last year?
Using the date as the Index allows for fast slicing:
End of explanation
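Partial-string indexing on the DatetimeIndex also handles ranges and single months, for example:
gigs.loc['2015':'2016']  # every gig from 2015 through 2016
gigs.loc['2016-07']      # just July 2016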
# resample('A') creates year-end groups like '2005-12-31'
# to_period() turns that into '2005'
years = gigs.resample('A').size().to_period()
years
years.plot.bar()
Explanation: How many gigs have we played each year?
The date Index also allows for fast aggregation:
End of explanation
gig_dates = gigs.reset_index()['date']
gig_dates
Explanation: What are our most active months?
Get the dates as a Series:
End of explanation
# Series.dt gives us access to date-related methods
gig_months = gig_dates.dt.strftime('%m %b')
gig_months
Explanation: Convert those to sortable month names:
End of explanation
months = gig_months.value_counts()
months
# matplotlib has lots of options for customization
ax = months.sort_index().plot.bar(table=True, figsize=(10,5))
ax.get_xaxis().set_visible(False)
Explanation: Count the unique values:
End of explanation
gig_cities = gigs['city']
gig_cities
cities = gig_cities.value_counts()
cities
cities.describe()
# Adding the ; suppresses `<matplotlib.axes._subplots.AxesSubplot ...>`
cities[:10].sort_values().plot.barh();
Explanation: What cities have we played?
End of explanation
# Series.str gives us access to string methods
in_pgh = gig_cities.str.contains('Pittsburgh')
in_pgh
gigs[in_pgh]
Explanation: When did we play in Pittsburgh?
We can use a Series of boolean values to slice our DataFrame:
End of explanation
'Boston, MA'.split(',')[1].strip()
Explanation: What states have we played?
The Gig model doesn't have a state field, so we need to parse it out. In vanilla Python, we'd do:
End of explanation
states = gig_cities.str.split(',').str.get(1).str.strip().value_counts()
states
states.describe()
# Don't rotate the x-axis labels
states[:5].plot.bar(rot=0);
Explanation: With pandas, we can do the same thing for every Series element:
End of explanation
venues = gigs.groupby(['venue', 'city']).size()
venues
Explanation: What venues have we played?
DataFrame has powerful grouping and aggregation functionality:
End of explanation
venues.describe()
top_venues = venues.nlargest(10)
top_venues
top_venues.sort_values().plot.barh();
Explanation: This Series has a MultiIndex. Very useful, but beyond the scope of this presentation...
End of explanation |
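If you do want a quick taste, label-based selection still works on the MultiIndex; the venue and city strings below are purely illustrative:
venues.loc['Mr. Smalls Theatre']           # one venue, every city it maps to
venues.xs('Pittsburgh, PA', level='city')  # every venue in one city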
5,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 4
Previously in 2_fullyconnected.ipynb and 3_regularization.ipynb, we trained fully connected networks to classify notMNIST characters.
The goal of this assignment is make the neural network convolutional.
Step1: Reformat into a TensorFlow-friendly shape
Step2: Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.
Step3: Problem 1
The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides by a max pooling operation (nn.max_pool()) of stride 2 and kernel size 2.
Step4: Output
Initialized
Minibatch loss at step 0 | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
pickle_file = '../notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: Deep Learning
Assignment 4
Previously in 2_fullyconnected.ipynb and 3_regularization.ipynb, we trained fully connected networks to classify notMNIST characters.
The goal of this assignment is to make the neural network convolutional.
End of explanation
image_size = 28
num_labels = 10
num_channels = 1 # grayscale
import numpy as np
def reformat(dataset, labels):
dataset = dataset.reshape(
(-1, image_size, image_size, num_channels)).astype(np.float32)
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
Explanation: Reformat into a TensorFlow-friendly shape:
- convolutions need the image data formatted as a cube (width by height by #channels)
- labels as float 1-hot encodings.
End of explanation
batch_size = 16
patch_size = 5
depth = 16
num_hidden = 64
graph = tf.Graph()
with graph.as_default():
# Input data.
tf_train_dataset = tf.placeholder(
tf.float32, shape=(batch_size, image_size, image_size, num_channels))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data): #data of shape [batch_size, image_size, image_size, num_channels]
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME') # shape of [batch_size, image_size/2, image_size/2, depth]
hidden = tf.nn.relu(conv + layer1_biases)
conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')# shape of [batch_size, image_size/4, image_size/4, depth]
hidden = tf.nn.relu(conv + layer2_biases)
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])# shape of [batch_size, image_size/4 * image_size/4* depth]
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases) # shape of [batch_size,num_hidden]
return tf.matmul(hidden, layer4_weights) + layer4_biases # shape of [batch_size,num_labels]
# Training computation.
logits = model(tf_train_dataset)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
test_prediction = tf.nn.softmax(model(tf_test_dataset))
num_steps = 1001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print('Minibatch loss at step %d: %f' % (step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes.
End of explanation
# Variables.
layer3_weights = tf.Variable(tf.truncated_normal(
[image_size // 2 * image_size // 2 * num_channels, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data):
# Data is shaped of [batch_size, image_size, image_size, num_channels]
hidden = tf.nn.max_pool(data, [1, 2, 2, 1],[1, 2, 2, 1] , padding='SAME') #same shape of [batch_size, image_size/2, image_size/2, num_channels]
shape = hidden.get_shape().as_list()
reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]]) #reshaped into 2D array of [batch_size, image_size/2* image_size/2 * num_channels]
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
Explanation: Problem 1
The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides by a max pooling operation (nn.max_pool()) of stride 2 and kernel size 2.
End of explanation
import math

# Variables.
layer1_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, num_channels, depth], stddev=0.1))
layer1_biases = tf.Variable(tf.zeros([depth]))
layer2_weights = tf.Variable(tf.truncated_normal(
[patch_size, patch_size, depth, depth], stddev=0.1))
layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))
layer3_weights = tf.Variable(tf.truncated_normal(
[math.ceil(image_size / 16) * math.ceil(image_size /16) * depth, num_hidden], stddev=0.1))
layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))
layer4_weights = tf.Variable(tf.truncated_normal(
[num_hidden, num_labels], stddev=0.1))
layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))
# Model.
def model(data):
conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME') # shape of [batch_size, image_size/2, image_size/2, depth]: [16, 14, 14, 16]
shape1 = conv.get_shape().as_list()
hidden = tf.nn.relu(conv + layer1_biases)
hidden1 = tf.nn.max_pool(hidden, [1, 2, 2, 1], [1, 2, 2, 1] , padding='SAME') #shape of [batch_size, image_size/4, image_size/4, depth]: [16, 7, 7, 16]
shape2 = hidden1.get_shape().as_list()
conv = tf.nn.conv2d(hidden1, layer2_weights, [1, 2, 2, 1], padding='SAME') #shape of [batch_size, image_size/8, image_size/8, depth]: [16, 4, 4, 16]
shape3 = conv.get_shape().as_list()
hidden = tf.nn.relu(conv + layer2_biases)
hidden2 = tf.nn.max_pool(hidden, [1, 2, 2, 1],[1, 2, 2, 1] , padding='SAME') #same shape of [batch_size, image_size/16, image_size/16, depth]: [16, 2, 2, 16]
shape = hidden2.get_shape().as_list()
reshape = tf.reshape(hidden2, [shape[0], shape[1] * shape[2] * shape[3]])
hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)
return tf.matmul(hidden, layer4_weights) + layer4_biases
Explanation: Output
Initialized
Minibatch loss at step 0: 2.694045
Minibatch accuracy: 0.0%
Validation accuracy: 7.9%
Minibatch loss at step 50: 1.684043
Minibatch accuracy: 50.0%
Validation accuracy: 65.4%
Minibatch loss at step 100: 0.953076
Minibatch accuracy: 81.2%
Validation accuracy: 70.7%
Minibatch loss at step 150: 0.509777
Minibatch accuracy: 87.5%
Validation accuracy: 76.1%
Minibatch loss at step 200: 0.464738
Minibatch accuracy: 87.5%
Validation accuracy: 78.0%
Minibatch loss at step 250: 0.875403
Minibatch accuracy: 75.0%
Validation accuracy: 78.1%
Minibatch loss at step 300: 0.913955
Minibatch accuracy: 75.0%
Validation accuracy: 80.6%
Minibatch loss at step 350: 0.677745
Minibatch accuracy: 75.0%
Validation accuracy: 80.3%
Minibatch loss at step 400: 0.772082
Minibatch accuracy: 68.8%
Validation accuracy: 80.2%
Minibatch loss at step 450: 0.742608
Minibatch accuracy: 93.8%
Validation accuracy: 81.0%
Minibatch loss at step 500: 0.643120
Minibatch accuracy: 81.2%
Validation accuracy: 80.2%
Minibatch loss at step 550: 0.628266
Minibatch accuracy: 75.0%
Validation accuracy: 80.5%
Minibatch loss at step 600: 0.979889
Minibatch accuracy: 81.2%
Validation accuracy: 80.8%
Minibatch loss at step 650: 0.581403
Minibatch accuracy: 87.5%
Validation accuracy: 81.0%
Minibatch loss at step 700: 0.751648
Minibatch accuracy: 75.0%
Validation accuracy: 80.5%
Minibatch loss at step 750: 0.344367
Minibatch accuracy: 87.5%
Validation accuracy: 81.4%
Minibatch loss at step 800: 0.698404
Minibatch accuracy: 87.5%
Validation accuracy: 81.8%
Minibatch loss at step 850: 0.795159
Minibatch accuracy: 68.8%
Validation accuracy: 81.2%
Minibatch loss at step 900: 0.890547
Minibatch accuracy: 75.0%
Validation accuracy: 80.8%
Minibatch loss at step 950: 1.138554
Minibatch accuracy: 75.0%
Validation accuracy: 81.2%
Minibatch loss at step 1000: 0.733926
Minibatch accuracy: 81.2%
Validation accuracy: 81.4%
Test accuracy: 87.4%
We can see that the max_pool on its own is not too bad at all, without any convolution and it is much faster to compute.
Problem 2
Try to get the best performance you can using a convolutional net. Look for example at the classic LeNet5 architecture, adding Dropout, and/or adding learning rate decay.
2.1 Simple LeNet Idea
Conv2D + max-pool + Conv2D + max-pool + full-network + output
End of explanation |
5,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Setup up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Tutorial
Now you are ready to start creating your own AutoML image classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step11: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step12: Now save the unique dataset identifier for the Dataset resource instance you created.
Step13: Data preparation
The Vertex Dataset resource for images has some requirements for your data
Step14: Quick peek at your data
You will use a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step15: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following
Step16: Train the model
Now train an AutoML image classification model using your Vertex Dataset resource. To train the model, do the following steps
Step17: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields we need to specify are
Step18: Now save the unique identifier of the training pipeline you created.
Step19: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter
Step20: Deployment
Training the above model may take upwards of 20 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step21: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step22: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step23: Now get the unique identifier for the Endpoint resource you created.
Step24: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step25: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step26: Make a online prediction request
Now do a online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
Step27: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters
Step28: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters
Step29: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML image classification model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create image classification models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip.
Objective
In this tutorial, you create an AutoML image classification model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Set up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML image classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("flowers-" + TIMESTAMP, DATA_SCHEMA)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates an Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
IMPORT_FILE = (
"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
Explanation: Data preparation
The Vertex Dataset resource for images has some requirements for your data:
Images must be stored in a Cloud Storage bucket.
Each image file must be in an image format (PNG, JPEG, BMP, ...).
There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.
The index file must be either CSV or JSONL.
CSV
For image classification, the CSV index file has the requirements:
No heading.
First column is the Cloud Storage path to the image.
Second column is the label.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
You will use a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
Explanation: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:
Uses the Dataset client.
Calls the client method import_data, with the following parameters:
name: The human readable name you give to the Dataset resource (e.g., flowers).
import_configs: The import configuration.
import_configs: A Python list containing a dictionary, with the key/value entries:
gcs_sources: A list of URIs to the paths of the one or more index files.
import_schema_uri: The schema identifying the labeling type.
The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML image classification model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and run as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
PIPE_NAME = "flowers_pipe-" + TIMESTAMP
MODEL_NAME = "flowers_model-" + TIMESTAMP
task = json_format.ParseDict(
{
"multi_label": False,
"budget_milli_node_hours": 8000,
"model_type": "CLOUD",
"disable_early_stopping": False,
},
Value(),
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
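As a minimal sketch of that conversion (the field value here is illustrative and separate from the task specification built below), the round trip between a Python dict and a protobuf Value looks like this:
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value
# Convert a plain dict into a protobuf Value and back again.
as_struct = json_format.ParseDict({"budget_milli_node_hours": 8000}, Value())
print(type(as_struct).__name__)              # Value
print(json_format.MessageToDict(as_struct))  # numbers come back as floats, e.g. 8000.0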
The minimal fields we need to specify are:
multi_label: Whether True/False this is a multi-label (vs single) classification.
budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. For image classification, the budget must be a minimum of 8 hours.
model_type: The type of deployed model:
CLOUD: For deploying to Google Cloud.
MOBILE_TF_LOW_LATENCY_1: For deploying to the edge and optimizing for latency (response time).
MOBILE_TF_HIGH_ACCURACY_1: For deploying to the edge and optimizing for accuracy.
MOBILE_TF_VERSATILE_1: For deploying to the edge and optimizing for a trade off between latency and accuracy.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get the pipeline information for just this training pipeline instance. The helper function gets the information for just this pipeline by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one), you then print all the key names for each metric in the evaluation, and for a small set (logLoss and auPrc) you print the result.
End of explanation
ENDPOINT_NAME = "flowers_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to keep provisioned when a model is first deployed (the service will not scale below this), and set the maximum (MAX_NODES) number of compute instances that can be provisioned, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
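As a sketch, if you wanted auto-scaling instead of the single fixed instance used in this tutorial, you might choose values along these lines (illustrative numbers only):
# Illustrative auto-scaling choice: keep at least one replica warm, allow up to five under load.
# These values feed min_replica_count / max_replica_count in the deployment request below.
MIN_NODES = 1
MAX_NODES = 5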
End of explanation
DEPLOYED_NAME = "flowers_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model already deployed to the endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
automatic_resources: This specifies how many compute instances (replicas) to provision. For this example, we set it to one (no replication).
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but only send it, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
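For example, a hypothetical split between an already-deployed v1 model and the newly uploaded v2 model might look like this (the numeric id below is made up; you would use the real deployed model id of v1):
# "0" refers to the model being uploaded in this request; "3210123456" stands in for the
# deployed model id of the existing v1 model. The percentages must sum to 100.
traffic_split = {"0": 10, "3210123456": 90}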
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
test_item = !gsutil cat $IMPORT_FILE | head -n1
if len(str(test_item[0]).split(",")) == 3:
_, test_item, test_label = str(test_item[0]).split(",")
else:
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
Explanation: Make a online prediction request
Now do a online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
import base64
import tensorflow as tf
def predict_item(filename, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
with tf.io.gfile.GFile(filename, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": base64.b64encode(content).decode("utf-8")}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(test_item, endpoint_id, {"confidenceThreshold": 0.5, "maxPredictions": 2})
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
filename: The Cloud Storage path to the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (encoded images) to predict.
parameters: Additional filtering parameters for serving prediction results.
confidence_threshold: The threshold for returning predictions. Must be between 0 and 1.
max_predictions: The maximum number of predictions to return, sorted by confidence.
How does confidence_threshold affect the model accuracy? The threshold won't change the accuracy. What it changes is recall and precision.
- Precision: The higher the precision, the more likely a returned prediction is correct, but fewer predictions are returned. Increasing the confidence threshold increases precision.
- Recall: The higher the recall, the more likely a correct prediction is included in the results, but more incorrect predictions are returned along with it. Decreasing the confidence threshold increases recall.
In this example, you will predict for precision. You set the confidence threshold to 0.5 and the maximum number of predictions for a classification to two. Since all the confidence values across the classes must add up to one, there are only two possible outcomes:
1. There is a tie, both 0.5, and returns two predictions.
2. One value is above 0.5 and the rest are below 0.5, and returns one prediction.
Request
Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.GFile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.
The format of each instance is:
{ 'content': { 'b64': [base64_encoded_bytes] } }
Since the predict() method can take multiple items (instances), send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what you pass to the predict() method.
Response
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction (just one in this case):
ids: The instance ID of each data item.
confidences: The percent of confidence between 0 and 1 in the prediction for each class.
displayNames: The corresponding class names.
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
5,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deploying NVIDIA Triton Inference Server in AI Platform Prediction Custom Container (REST API)
In this notebook, we will walk through the process of deploying NVIDIA's Triton Inference Server into AI Platform Prediction Custom Container service in the Direct Model Server mode
Step1: Create the Artifact Registry
This will be used to store the container image for the model server Triton.
Step2: Prepare the container
We will make a copy of the Triton container image into the Artifact Registry, where AI Platform Custom Container Prediction will only pull from during Model Version setup. The following steps will download the NVIDIA Triton Inference Server container to your VM, then upload it to your repo.
Step3: Prepare model Artifacts
Clone the NVIDIA Triton Inference Server repo.
Step4: Create the GCS bucket where the model artifacts will be copied to.
Step5: Stage model artifacts and copy to bucket.
Step6: Prepare request payload
To prepare the payload format, we have included a utility get_request_body_simple.py. To use this utility, install the following library
Step7: Prepare non-binary request payload
The first model will illustrate a non-binary payload. The following command will create a KF Serving v2 format non-binary payload to be used with the "simple" model
Step8: Prepare binary request payload
Triton's implementation of KF Serving v2 protocol for binary data appends the binary data after the json body. Triton requires an additional header for offset
Step9: Create and deploy Model and Model Version
In this section, we will deploy two models
Step10: Create Model Version
After the Model is created, we can now create a Model Version under this Model. Each Model Version will need a name that is unique within the Model. In AI Platform Prediction Custom Container, a {Project}/{Model}/{ModelVersion} uniquely identifies the specific container and model artifact used for inference.
Step11: The following specifications tell AI Platform how to create the Model Version.
Step12: Check the status of Model Version creation
Creating a Model Version may take several minutes. You can check on the status of this specfic Model Version with the following, and a successful deployment will show
Step13: To list all Model Versions and their states in this Model
Step14: Run prediction using curl
The "simple" model takes two tensors with shape [1,16] and does a couple of basic arithmetic operation.
Step15: Run prediction using Using requests library
Step16: ResNet-50 model (binary data)
Create Model
Step17: Create Model Version
Step18: Check Model Version status
Step19: Run prediction using curl
Recall the offset value calculated above. The binary case has an additional header
Step20: Run prediction using Using requests library
Step21: Clean up | Python Code:
PROJECT_ID='[Enter project name - REQUIRED]'
REPOSITORY='caipcustom'
REGION='us-central1'
TRITON_VERSION='20.06'
import os
import random
import requests
import json
MODEL_BUCKET='gs://{}-{}'.format(PROJECT_ID,random.randint(10000,99999))
ENDPOINT='https://{}-ml.googleapis.com/v1'.format(REGION)
TRITON_IMAGE='tritonserver:{}-py3'.format(TRITON_VERSION)
CAIP_IMAGE='{}-docker.pkg.dev/{}/{}/{}'.format(REGION,PROJECT_ID,REPOSITORY,TRITON_IMAGE)
'''
# Test values
PROJECT_ID='tsaikevin-1238'
REPOSITORY='caipcustom'
REGION='us-central1'
TRITON_VERSION='20.06'
import os
import random
import requests
import json
MODEL_BUCKET='gs://{}-{}'.format(PROJECT_ID,random.randint(10000,99999))
ENDPOINT='https://{}-ml.googleapis.com/v1'.format(REGION)
TRITON_IMAGE='tritonserver:{}-py3'.format(TRITON_VERSION)
CAIP_IMAGE='{}-docker.pkg.dev/{}/{}/{}'.format(REGION,PROJECT_ID,REPOSITORY,TRITON_IMAGE)
'''
!gcloud config set project $PROJECT_ID
os.environ["PROJECT_ID"]=PROJECT_ID
os.environ["MODEL_BUCKET"]=MODEL_BUCKET
os.environ["ENDPOINT"]=ENDPOINT
os.environ["CAIP_IMAGE"]=CAIP_IMAGE
Explanation: Deploying NVIDIA Triton Inference Server in AI Platform Prediction Custom Container (REST API)
In this notebook, we will walk through the process of deploying NVIDIA's Triton Inference Server into AI Platform Prediction Custom Container service in the Direct Model Server mode:
End of explanation
!gcloud beta artifacts repositories create $REPOSITORY --repository-format=docker --location=$REGION
!gcloud beta auth configure-docker $REGION-docker.pkg.dev --quiet
Explanation: Create the Artifact Registry
This will be used to store the container image for the model server Triton.
End of explanation
!docker pull nvcr.io/nvidia/$TRITON_IMAGE && \
docker tag nvcr.io/nvidia/$TRITON_IMAGE $CAIP_IMAGE && \
docker push $CAIP_IMAGE
Explanation: Prepare the container
We will make a copy of the Triton container image into the Artifact Registry, where AI Platform Custom Container Prediction will only pull from during Model Version setup. The following steps will download the NVIDIA Triton Inference Server container to your VM, then upload it to your repo.
End of explanation
!git clone -b r$TRITON_VERSION https://github.com/triton-inference-server/server.git
Explanation: Prepare model Artifacts
Clone the NVIDIA Triton Inference Server repo.
End of explanation
!gsutil mb $MODEL_BUCKET
Explanation: Create the GCS bucket where the model artifacts will be copied to.
End of explanation
!mkdir model_repository
!cp -R server/docs/examples/model_repository/* model_repository/
!./server/docs/examples/fetch_models.sh
!gsutil -m cp -R model_repository/ $MODEL_BUCKET
!gsutil ls -Rl $MODEL_BUCKET/model_repository
Explanation: Stage model artifacts and copy to bucket.
End of explanation
!pip3 install geventhttpclient
Explanation: Prepare request payload
To prepare the payload format, we have included a utility get_request_body_simple.py. To use this utility, install the following library:
End of explanation
!python3 get_request_body_simple.py -m simple
Explanation: Prepare non-binary request payload
The first model will illustrate a non-binary payload. The following command will create a KF Serving v2 format non-binary payload to be used with the "simple" model:
End of explanation
!python3 get_request_body_simple.py -m image -f server/qa/images/mug.jpg
Explanation: Prepare binary request payload
Triton's implementation of KF Serving v2 protocol for binary data appends the binary data after the json body. Triton requires an additional header for offset:
Inference-Header-Content-Length: [offset]
We have provided a script that will automatically resize the image to the proper size for ResNet-50 [224, 224, 3] and calculate the proper offset. The following command takes an image file and outputs the necessary data structure to be used with the "resnet50_netdef" model. Please note down this offset as it will be used later.
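Conceptually, the payload the script writes is laid out as the JSON request header followed immediately by the raw tensor bytes, with the offset reporting the length of the JSON part. The sketch below illustrates the idea only; the tensor name and sizes are assumptions, not taken from the utility:
import json
# Triton's binary extension of the KF Serving v2 protocol: JSON first, raw bytes appended after it.
json_part = json.dumps({
    "inputs": [{
        "name": "gpu_0/data",                                   # model-specific input tensor name
        "shape": [3, 224, 224],
        "datatype": "FP32",
        "parameters": {"binary_data_size": 3 * 224 * 224 * 4},  # bytes of FP32 image data
    }]
}).encode("utf-8")
offset = len(json_part)   # this is the value sent as Inference-Header-Content-Length
# full_payload = json_part + raw_image_tensor_bytes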
End of explanation
%env MODEL_NAME=simple
!curl -X \
POST -v -k -H "Content-Type: application/json" \
-d "{'name': '"$MODEL_NAME"'}" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/"
Explanation: Create and deploy Model and Model Version
In this section, we will deploy two models:
1. Simple model with non-binary data. KF Serving v2 protocol specifies a json format with non-binary data in the json body itself.
2. Binary data model with ResNet-50. Triton's implementation of binary data for KF Server v2 protocol.
Simple model (non-binary data)
Create Model
AI Platform Prediction uses a Model/Model Version Hierarchy, where the Model is a logical grouping of Model Versions. We will first create the Model.
Because the MODEL_NAME variable will be used later to specify the predict route, and Triton will use that route to run prediction on a specific model, we must set the value of this variable to a valid name of a model. For this section, we will use the "simple" model.
End of explanation
%env VERSION_NAME=v01
Explanation: Create Model Version
After the Model is created, we can now create a Model Version under this Model. Each Model Version will need a name that is unique within the Model. In AI Platform Prediction Custom Container, a {Project}/{Model}/{ModelVersion} uniquely identifies the specific container and model artifact used for inference.
End of explanation
import json
import os
triton_simple_version = {
"name": os.getenv("VERSION_NAME"),
"deployment_uri": os.getenv("MODEL_BUCKET")+"/model_repository",
"container": {
"image": os.getenv("CAIP_IMAGE"),
"args": ["tritonserver",
"--model-repository=$(AIP_STORAGE_URI)"
],
"env": [
],
"ports": [
{ "containerPort": 8000 }
]
},
"routes": {
"predict": "/v2/models/"+os.getenv("MODEL_NAME")+"/infer",
"health": "/v2/models/"+os.getenv("MODEL_NAME")
},
"machine_type": "n1-standard-4",
"acceleratorConfig": {
"count":1,
"type":"nvidia-tesla-t4"
},
"autoScaling": {
"minNodes": 1
}
}
with open("triton_simple_version.json", "w") as f:
json.dump(triton_simple_version, f)
!curl -X \
POST -v -k -H "Content-Type: application/json" \
-d @triton_simple_version.json \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${MODEL_NAME}/versions"
Explanation: The following specifications tell AI Platform how to create the Model Version.
End of explanation
!curl -X GET -k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${MODEL_NAME}/versions/${VERSION_NAME}"
Explanation: Check the status of Model Version creation
Creating a Model Version may take several minutes. You can check on the status of this specific Model Version with the following, and a successful deployment will show:
"state": "READY"
End of explanation
!curl -X GET -k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${MODEL_NAME}/versions/"
Explanation: To list all Model Versions and their states in this Model:
End of explanation
!curl -X POST ${ENDPOINT}/projects/${PROJECT_ID}/models/${MODEL_NAME}/versions/${VERSION_NAME}:predict \
-k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
-d '{ \
"id": "0", \
"inputs": [ \
{ \
"name": "INPUT0", \
"shape": [1, 16], \
"datatype": "INT32", \
"parameters": {}, \
"data": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] \
}, \
{ \
"name": "INPUT1", \
"shape": [1, 16], \
"datatype": "INT32", \
"parameters": {}, \
"data": [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] \
} \
] \
}'
# this works: gcloud auth application-default print-access-token
!curl -X POST ${ENDPOINT}/projects/${PROJECT_ID}/models/${MODEL_NAME}/versions/${VERSION_NAME}:predict \
-k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth application-default print-access-token`" \
-d '{ \
"id": "0", \
"inputs": [ \
{ \
"name": "INPUT0", \
"shape": [1, 16], \
"datatype": "INT32", \
"parameters": {}, \
"data": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] \
}, \
{ \
"name": "INPUT1", \
"shape": [1, 16], \
"datatype": "INT32", \
"parameters": {}, \
"data": [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] \
} \
] \
}'
Explanation: Run prediction using curl
The "simple" model takes two tensors with shape [1,16] and does a couple of basic arithmetic operation.
End of explanation
with open('simple.json', 'r') as s:
data=s.read()
PREDICT_URL = "{}/projects/{}/models/{}/versions/{}:predict".format(ENDPOINT, PROJECT_ID, os.getenv('MODEL_NAME'), os.getenv('VERSION_NAME'))
HEADERS = {
'Content-Type': 'application/json',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
# this works: gcloud auth application-default print-access-token
with open('simple.json', 'r') as s:
data=s.read()
PREDICT_URL = "{}/projects/{}/models/{}/versions/{}:predict".format(ENDPOINT, PROJECT_ID, os.getenv('MODEL_NAME'), os.getenv('VERSION_NAME'))
HEADERS = {
'Content-Type': 'application/json',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth application-default print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
Explanation: Run prediction using the requests library
End of explanation
%env BINARY_MODEL_NAME=resnet50_netdef
!curl -X POST -v -k -H "Content-Type: application/json" \
-d "{'name': '"$BINARY_MODEL_NAME"'}" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/"
Explanation: ResNet-50 model (binary data)
Create Model
End of explanation
%env BINARY_VERSION_NAME=v1
triton_binary_version = {
"name": os.getenv("BINARY_VERSION_NAME"),
"deployment_uri": os.getenv("MODEL_BUCKET")+"/model_repository",
"container": {
"image": os.getenv("CAIP_IMAGE"),
"args": ["tritonserver",
"--model-repository=$(AIP_STORAGE_URI)"
],
"env": [
],
"ports": [
{ "containerPort": 8000 }
]
},
"routes": {
"predict": "/v2/models/"+os.getenv("BINARY_MODEL_NAME")+"/infer",
"health": "/v2/models/"+os.getenv("BINARY_MODEL_NAME")
},
"machine_type": "n1-standard-4",
"acceleratorConfig": {
"count":1,
"type":"nvidia-tesla-t4"
},
"autoScaling": {
"minNodes": 1
}
}
with open("triton_binary_version.json", "w") as f:
json.dump(triton_binary_version, f)
!curl --request POST -v -k -H "Content-Type: application/json" \
-d @triton_binary_version.json \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
${ENDPOINT}/projects/${PROJECT_ID}/models/${BINARY_MODEL_NAME}/versions
Explanation: Create Model Version
End of explanation
!curl --request GET -k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${BINARY_MODEL_NAME}/versions/${BINARY_VERSION_NAME}"
Explanation: Check Model Version status
End of explanation
!curl --request POST ${ENDPOINT}/projects/${PROJECT_ID}/models/${BINARY_MODEL_NAME}/versions/${BINARY_VERSION_NAME}:predict \
-k -H "Content-Type: application/octet-stream" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
-H "Inference-Header-Content-Length: 138" \
--data-binary @payload.dat
# this works: gcloud auth application-default print-access-token
!curl --request POST ${ENDPOINT}/projects/${PROJECT_ID}/models/${BINARY_MODEL_NAME}/versions/${BINARY_VERSION_NAME}:predict \
-k -H "Content-Type: application/octet-stream" \
-H "Authorization: Bearer `gcloud auth application-default print-access-token`" \
-H "Inference-Header-Content-Length: 138" \
--data-binary @payload.dat
Explanation: Run prediction using curl
Recall the offset value calculated above. The binary case has an additional header:
Inference-Header-Content-Length: [offset]
End of explanation
with open('payload.dat', 'rb') as s:
data=s.read()
PREDICT_URL = "{}/projects/{}/models/{}/versions/{}:predict".format(ENDPOINT, PROJECT_ID, os.getenv('BINARY_MODEL_NAME'), os.getenv('BINARY_VERSION_NAME'))
HEADERS = {
'Content-Type': 'application/octet-stream',
'Inference-Header-Content-Length': '138',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
# this works: gcloud auth application-default print-access-token
with open('payload.dat', 'rb') as s:
data=s.read()
PREDICT_URL = "{}/projects/{}/models/{}/versions/{}:predict".format(ENDPOINT, PROJECT_ID, os.getenv('BINARY_MODEL_NAME'), os.getenv('BINARY_VERSION_NAME'))
HEADERS = {
'Content-Type': 'application/octet-stream',
'Inference-Header-Content-Length': '138',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth application-default print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
Explanation: Run prediction using the requests library
End of explanation
!curl --request DELETE -k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${BINARY_MODEL_NAME}/versions/${BINARY_VERSION_NAME}"
!curl --request DELETE -k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${BINARY_MODEL_NAME}"
!curl --request DELETE -k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${MODEL_NAME}/versions/${VERSION_NAME}"
!curl --request DELETE -k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
"${ENDPOINT}/projects/${PROJECT_ID}/models/${MODEL_NAME}"
!gsutil -m rm -r -f $MODEL_BUCKET
!rm -rf model_repository triton-inference-server server *.yaml *.dat *.json
Explanation: Clean up
End of explanation |
5,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of using two distance + thresholds in one ABC sampling run
Step1: Distance measure compares mean + variance | Python Code:
samples_size = 1000
mean = 2
sigma = 1
data = np.random.normal(mean, sigma, samples_size)
f,ax = plt.subplots()
sns.distplot(data)
def create_new_sample(theta):
mu,sigma = theta
if sigma<=0:
sigma=10
return np.random.normal(mu, sigma, samples_size)
Explanation: Example of using two distance + thresholds in one ABC sampling run
End of explanation
def dist_measure(x, y):
return [np.abs(np.mean(x, axis=0) - np.mean(y, axis=0)),
np.abs(np.var(x, axis=0) - np.var(y, axis=0))]
distances = [dist_measure(data, create_new_sample((mean, sigma))) for _ in range(1000)]
dist_labels = ["mean", "var"]
f, ax = plt.subplots()
sns.distplot(np.asarray(distances)[:, 0], axlabel="distances", label=dist_labels[0])
sns.distplot(np.asarray(distances)[:, 1], axlabel="distances", label=dist_labels[1])
plt.title("Variablility of distance from simulations")
plt.legend()
import abcpmc
prior = abcpmc.GaussianPrior(mu=[2.5, 1.5], sigma=np.eye(2) * 0.5)
alpha = 85
T = 100
eps_start = [1.0, 1.0]
sampler = abcpmc.Sampler(N=1000, Y=data, postfn=create_new_sample, dist=dist_measure, threads=7)
def launch():
eps = abcpmc.ConstEps(T, eps_start)
pools = []
for pool in sampler.sample(prior, eps):
eps_str = ", ".join(["{0:>.4f}".format(e) for e in pool.eps])
print("T: {0}, eps: [{1}], ratio: {2:>.4f}".format(pool.t, eps_str, pool.ratio))
for i, (mean, std) in enumerate(zip(*abcpmc.weighted_avg_and_std(pool.thetas, pool.ws, axis=0))):
print(u" theta[{0}]: {1:>.4f} \u00B1 {2:>.4f}".format(i, mean,std))
eps.eps = np.percentile(pool.dists, alpha, axis=0) # reduce eps value
pools.append(pool)
if pool.ratio <0.2:
break
sampler.close()
return pools
import time
t0 = time.time()
pools = launch()
print("took", (time.time() - t0))
T = len(pools)
fig, ax = plt.subplots()
lables =["mu", "sigma"]
for i in range(2):
moments = np.array([abcpmc.weighted_avg_and_std(pool.thetas[:,i], pool.ws, axis=0) for pool in pools])
ax.errorbar(range(T), moments[:, 0], moments[:, 1], label=lables[i])
ax.hlines(np.mean(data), 0, T, linestyle="dotted", linewidth=0.7)
ax.hlines(np.std(data), 0, T, linestyle="dotted", linewidth=0.7)
ax.legend()
_ = ax.set_xlim([-.5, T])
distances = np.array([pool.dists for pool in pools])
fig, ax = plt.subplots()
ax.errorbar(np.arange(len(distances)), np.mean(distances, axis=1)[:, 0], np.std(distances, axis=1)[:, 0], label=dist_labels[0])
ax.errorbar(np.arange(len(distances)), np.mean(distances, axis=1)[:, 1], np.std(distances, axis=1)[:, 1], label=dist_labels[1])
ax.legend()
ax.set_title("Distances")
fig,ax = plt.subplots()
eps_values = np.array([pool.eps for pool in pools])
ax.plot(eps_values[:, 0], label=dist_labels[0])
ax.plot(eps_values[:, 1], label=dist_labels[1])
ax.set_xlabel("Iteration")
ax.set_ylabel(r"$\epsilon$", fontsize=15)
ax.legend(loc="best")
ax.set_title("Thresholds")
jg = sns.jointplot(pools[-1].thetas[:, 0],
pools[-1].thetas[:, 1],
kind="kde",
)
jg.ax_joint.set_xlabel(lables[0])
jg.ax_joint.set_ylabel(lables[1])
Explanation: Distance measure compares mean + variance
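As a quick illustrative check (using only objects defined above), the measure returns one distance per summary statistic -- one for the mean and one for the variance -- which is why eps_start was given two entries:
d = dist_measure(data, create_new_sample((mean, sigma)))
print(dict(zip(dist_labels, d)))   # e.g. {'mean': 0.03..., 'var': 0.08...}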
End of explanation |
5,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 5.18
Plots L vs T from a source file and fits polynomials of varying degrees to it
Step1: Below are the polynomials being fit to the data
Step2: Exercise 5.22
Computes the midpoint rule in different forms
Step3: Exercise 5.23, 5.24, 5.25 (3 part)
Lagrange Interpolation | Python Code:
p1.part_a()
Explanation: Exercise 5.18
Plots L vs T from a source file and fits polynomials of varying degrees to it
End of explanation
p1.part_b()
Explanation: Below are the polynomials being fit to the data
End of explanation
p2.midpointint(p2.function, 1, 3, 50)[0]
p2.sum_vectorized(p2.function, 1, 3, 50)
p2.sum_numpy(p2.function, 1, 3, 50)
Explanation: Exercise 5.22
Computes the midpoint rule in different forms
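For reference, a vectorized midpoint rule along the lines of what the p2 module presumably implements (the module itself is not shown here) can be sketched as:
import numpy as np
def midpoint_vectorized(f, a, b, n):
    h = (b - a) / float(n)
    x = a + h / 2 + h * np.arange(n)   # midpoints of the n sub-intervals
    return h * np.sum(f(x))
# Illustrative check with a known integrand: the integral of x**2 on [1, 3] is 26/3.
print(midpoint_vectorized(lambda x: x ** 2, 1, 3, 50))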
End of explanation
p4.graph(p4.sin, 20, 0, 10, [0,10,-2,2])
p5.problem_5_25()
Explanation: Exercise 5.23, 5.24, 5.25 (3 part)
Lagrange Interpolation
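For reference, the Lagrange interpolating polynomial p_L(x) = sum_k y_k * prod_{j != k} (x - x_j) / (x_k - x_j) can be sketched as below; the exercises' own implementations live in the p4/p5 modules, which are not shown here:
import numpy as np
def lagrange_eval(x, xp, yp):
    total = 0.0
    for k in range(len(xp)):
        basis = np.prod([(x - xp[j]) / (xp[k] - xp[j]) for j in range(len(xp)) if j != k])
        total += yp[k] * basis
    return total
xp = np.linspace(0, 10, 20)                       # 20 sample points of sin(x) on [0, 10]
yp = np.sin(xp)
print(lagrange_eval(5.05, xp, yp), np.sin(5.05))  # interpolant vs. exact value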
End of explanation |
5,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Object Detection
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step4: Example use
Helper functions for downloading images and for visualization.
Visualization code adapted from TF object detection API for the simplest required functionality.
Step5: Apply module
Load a public image from Open Images v4, save locally, and display.
Step6: Pick an object detection module and apply on the downloaded image. Modules
Step7: More images
Perform inference on some additional images with time tracking. | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Imports and function definitions
# For running inference on the TF-Hub module.
import tensorflow as tf
import tensorflow_hub as hub
# For downloading the image.
import matplotlib.pyplot as plt
import tempfile
from six.moves.urllib.request import urlopen
from six import BytesIO
# For drawing onto the image.
import numpy as np
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps
# For measuring the inference time.
import time
# Print Tensorflow version
print(tf.__version__)
# Check available GPU devices.
print("The following GPU devices are available: %s" % tf.test.gpu_device_name())
Explanation: Object Detection
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/object_detection"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/object_detection.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
This Colab demonstrates use of a TF-Hub module trained to perform object detection.
Setup
End of explanation
def display_image(image):
fig = plt.figure(figsize=(20, 15))
plt.grid(False)
plt.imshow(image)
def download_and_resize_image(url, new_width=256, new_height=256,
display=False):
_, filename = tempfile.mkstemp(suffix=".jpg")
response = urlopen(url)
image_data = response.read()
image_data = BytesIO(image_data)
pil_image = Image.open(image_data)
pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)
pil_image_rgb = pil_image.convert("RGB")
pil_image_rgb.save(filename, format="JPEG", quality=90)
print("Image downloaded to %s." % filename)
if display:
display_image(pil_image)
return filename
def draw_bounding_box_on_image(image,
ymin,
xmin,
ymax,
xmax,
color,
font,
thickness=4,
display_str_list=()):
  """Adds a bounding box to an image."""
draw = ImageDraw.Draw(image)
im_width, im_height = image.size
(left, right, top, bottom) = (xmin * im_width, xmax * im_width,
ymin * im_height, ymax * im_height)
draw.line([(left, top), (left, bottom), (right, bottom), (right, top),
(left, top)],
width=thickness,
fill=color)
# If the total height of the display strings added to the top of the bounding
# box exceeds the top of the image, stack the strings below the bounding box
# instead of above.
display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
# Each display_str has a top and bottom margin of 0.05x.
total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)
if top > total_display_str_height:
text_bottom = top
else:
text_bottom = top + total_display_str_height
# Reverse list and print from bottom to top.
for display_str in display_str_list[::-1]:
text_width, text_height = font.getsize(display_str)
margin = np.ceil(0.05 * text_height)
draw.rectangle([(left, text_bottom - text_height - 2 * margin),
(left + text_width, text_bottom)],
fill=color)
draw.text((left + margin, text_bottom - text_height - margin),
display_str,
fill="black",
font=font)
text_bottom -= text_height - 2 * margin
def draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):
  """Overlay labeled boxes on an image with formatted scores and label names."""
colors = list(ImageColor.colormap.values())
try:
font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf",
25)
except IOError:
print("Font not found, using default font.")
font = ImageFont.load_default()
for i in range(min(boxes.shape[0], max_boxes)):
if scores[i] >= min_score:
ymin, xmin, ymax, xmax = tuple(boxes[i])
display_str = "{}: {}%".format(class_names[i].decode("ascii"),
int(100 * scores[i]))
color = colors[hash(class_names[i]) % len(colors)]
image_pil = Image.fromarray(np.uint8(image)).convert("RGB")
draw_bounding_box_on_image(
image_pil,
ymin,
xmin,
ymax,
xmax,
color,
font,
display_str_list=[display_str])
np.copyto(image, np.array(image_pil))
return image
Explanation: Example use
Helper functions for downloading images and for visualization.
Visualization code adapted from TF object detection API for the simplest required functionality.
End of explanation
# By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
image_url = "https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg" #@param
downloaded_image_path = download_and_resize_image(image_url, 1280, 856, True)
Explanation: Apply module
Load a public image from Open Images v4, save locally, and display.
End of explanation
module_handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1" #@param ["https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1", "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"]
detector = hub.load(module_handle).signatures['default']
def load_img(path):
img = tf.io.read_file(path)
img = tf.image.decode_jpeg(img, channels=3)
return img
def run_detector(detector, path):
img = load_img(path)
converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]
start_time = time.time()
result = detector(converted_img)
end_time = time.time()
result = {key:value.numpy() for key,value in result.items()}
print("Found %d objects." % len(result["detection_scores"]))
print("Inference time: ", end_time-start_time)
image_with_boxes = draw_boxes(
img.numpy(), result["detection_boxes"],
result["detection_class_entities"], result["detection_scores"])
display_image(image_with_boxes)
run_detector(detector, downloaded_image_path)
Explanation: Pick an object detection module and apply on the downloaded image. Modules:
* FasterRCNN+InceptionResNet V2: high accuracy,
* ssd+mobilenet V2: small and fast.
End of explanation
image_urls = [
# Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
"https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg",
# By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
"https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg",
# Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
"https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg",
]
def detect_img(image_url):
start_time = time.time()
image_path = download_and_resize_image(image_url, 640, 480)
run_detector(detector, image_path)
end_time = time.time()
print("Inference time:",end_time-start_time)
detect_img(image_urls[0])
detect_img(image_urls[1])
detect_img(image_urls[2])
Explanation: More images
Perform inference on some additional images with time tracking.
End of explanation |
5,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <a href="http
Step3: Doc tests
The following docstring section 'Examples' should help the user understand what is the component's purpose and how it works. It is an example (or examples) of its use in a (more or less) simple case within the Landlab framework
Step4: Header information
Step5: Header information
Step6: Header information
Step8: Class with complete header information
Step10: The initialization method (__init__)
Every Landlab component should have an __init__ method. The parameter signature should start with a ModelGrid object as the first parameter. Following this are component-specific parameters. In our example, the parameters for the kinematic wave model include
Step12: The "go" method, run_one_step()
Every Landlab component will have a method that implements the component's action. The go method can have any name you like, but the preferred practice for time-advancing components is to use the standard name run_one_step(). Landlab assumes that if a component has a method with this name, it will (a) be the primary "go" method, and (b) will be fully standardized as described here.
The run_one_step method should take either zero or one argument. If there is an argument, it should be a duration to run, dt; i.e., a timestep length. If the component does not evolve as time passes, this argument may be missing (see, e.g., the FlowRouter, which returns a steady state flow pattern independent of time).
The first step in the algorithm in the example below is to calculate water depth at the links, where we will be calculating the water discharge. In this particular case, we'll use the depth at the upslope of the two nodes. The grid method to do this, map_value_at_max_node_to_link, is one of many mapping functions available.
We then calculate velocity using the Manning equation, and specific discharge by multiplying velocity by depth.
Mass balance for the cells around nodes is computed using the calc_flux_div_at_node grid method.
Step16: Changes to boundary conditions
Sometimes, (though not in this example), it proves convenient to hard-code assumptions about boundary conditions into the __init__ method.
We can resolve this issue by creating an additional component method that updates these components that can be called if the boundary conditions change. Whether the boundary conditions have changed can be assessed with a grid method called bc_set_code. This is an int which will change if the boundary conditions change.
Step21: The complete component
Step22: Next, we'll test the component on a larger grid and a larger domain.
Step23: Plot the topography
Step24: The steady solution should be as follows. The unit discharge at the bottom edge should equal the precipitation rate, 100 mm/hr, times the slope length.
The slope length is the distance from the bottom edge of the bottom-most row of cells, to the top edge of the top-most row of cells. The base row of nodes are at y = 0, and the cell edges start half a cell width up from that, so y = 5 m. The top of the upper-most row of cells is half a cell width below the top grid edge, which is 610 m, so the top of the cells is 605 m. Hence the interior (cell) portion of the grid is 600 m long.
Hence, discharge out the bottom should be 100 mm/hr x 600 m = 0.1 m/hr x 600 m = 60 m2/hr. Let's convert this to m2/s
Step25: The water depth should be just sufficient to carry this discharge with the given slope and roughness. We get this by inverting the Manning equation
Step26: This looks pretty good. Let's check the values | Python Code:
import numpy as np
from landlab import Component, FieldError
class KinwaveOverlandFlowModel(Component):
Calculate water flow over topography.
Landlab component that implements a two-dimensional
kinematic wave model.
You can put other information here... Anything you
think a user might need to know. We use numpy style
docstrings written in restructured text. You can use
math formatting.
Useful Links:
- https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html
- https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html
Parameters
----------
grid : ModelGrid
A Landlab grid object.
precip_rate : float, optional (defaults to 1 mm/hr)
Precipitation rate, mm/hr
precip_duration : float, optional (defaults to 1 hour)
Duration of precipitation, hours
infilt_rate : float, optional (defaults to 0)
Maximum rate of infiltration, mm/hr
roughness : float, defaults to 0.01
Manning roughness coefficient, s/m^1/3
def __init__(): # ignore this for now, we will add more stuff eventually.
pass
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
How to write a Landlab component
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This ipython notebook walks you through the basic procedure for writing a Landlab component, using the example of a kinematic-wave flow model.
Overview
A Landlab component is implemented as a Python class. Although every Landlab component is unique in some respects, to be a component, a class must have at least the following standard ingredients:
(1) The class must inherit the base class Component.
(2) The class must include a set of standard variables defined in the header (i.e., before the __init__ method), which describe the data arrays that the component uses.
(3) The class must have an __init__ method defined, with a semi-standardized parameter list described below.
(4) The class must provide a function that performs the component's "action", typically named run_one_step(), and this function's parameter list must follow the convention described below.
Class definition and header
A Landlab component is a class that inherits from Component. The name of the class should be in CamelCase, and should make sense when used in the sentence: "A (component-name) is a...". The class definition should be followed by a docstring. The docstring should include a list of parameters for the __init__ method and succinctly describe them.
End of explanation
Examples
--------
>>> from landlab import RasterModelGrid
>>> rg = RasterModelGrid((4, 5), 10.0)
>>> kw = KinwaveOverlandFlowModel(rg)
>>> kw.vel_coef
100.0
>>> rg.at_node['surface_water__depth']
array([ 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0.,
0., 0., 0., 0., 0.,
0., 0., 0., 0., 0.])
Explanation: Doc tests
The following docstring section 'Examples' should help the user understand what the component's purpose is and how it works. It is an example (or examples) of its use in a (more or less) simple case within the Landlab framework: a grid is created, the component is instantiated on this grid and run. Unlike in the example below, we strongly recommend commenting your example(s) to explain what is happening.
This is also the section that will be run during the integration tests of your component (once you have submitted a pull request to have your component merged into the Landlab release branch). All lines starting with >>> are run and should produce the results you provided: here, the test will fail if kw.vel_coef does not return 100.0.
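If you want to exercise the 'Examples' block yourself before opening a pull request, the standard-library doctest module is one convenient way to do it. The snippet below is only a minimal sketch, not part of Landlab's own test machinery, and it assumes the component class has already been defined in the current notebook or module so that doctest can find its docstring.
```
# Minimal sketch: run the docstring Examples of everything defined in the
# current module (for example, this notebook's namespace). This is just a
# quick local check, not Landlab's integration test suite.
import doctest

results = doctest.testmod(verbose=True)
print("attempted:", results.attempted, "failed:", results.failed)
```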
End of explanation
_name = "KinwaveOverlandFlowModel"
Explanation: Header information: _name
Every component should have a name, as a string. Normally this will be the same as the class name.
End of explanation
_unit_agnostic = False
Explanation: Header information: _unit_agnostic
Components also indicate whether they are unit agnostic or not. Unit agnostic components require that component users are consistent with units within and across components used in a single application, but do not require that inputs conform to a specific set of units.
This component is not unit agnostic because it assumes that the input time units are hours, while the Manning roughness coefficient is provided with time units of seconds.
End of explanation
_info = {
"surface_water__depth": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m",
"mapping": "node",
"doc": "Depth of water on the surface",
},
"topographic__elevation": {
"dtype": float,
"intent": "in",
"optional": False,
"units": "m",
"mapping": "node",
"doc": "Land surface topographic elevation",
},
"topographic__gradient": {
"dtype": float,
"intent": "in",
"optional": False,
"units": "m/m",
"mapping": "link",
"doc": "Gradient of the ground surface",
},
"water__specific_discharge": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m2/s",
"mapping": "link",
"doc": "flow discharge component in the direction of the link",
},
"water__velocity": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m/s",
"mapping": "link",
"doc": "flow velocity component in the direction of the link",
},
}
Explanation: Header information: _info
All the metadata about the fields that a components requires and creates is described in a datastructured called Component._info.
Info is a dictionary with one key for each field. The value associated with that key is a dictionary that must contain all of the following elements (and no other elements).
"dtype": a python data type
"intent": a string indicating whether the field is an input ("in"), and output ("out"), or both ("inout")
"optional": a boolean indicating whether the field is an optional input or output
"units": a string indicating what units the field has (use "-")
"mapping": a string indicating the grid element (e.g., node, cell) on which the field is located
"doc": a string describing the field.
The code in the Component base class will check things like:
* Can the component be created if all of the required inputs exist?
* Is all this information present? Is something extra present?
* Does the component create outputs of the correct dtype?
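As a quick illustration of how this metadata can be used, the sketch below simply walks the _info dict defined above and separates required inputs from created outputs; the Component base class performs similar (and stricter) checks internally.
```
# Illustrative only: list required input fields and created output fields
# from the _info dict defined above.
required_inputs = [
    name for name, meta in _info.items()
    if "in" in meta["intent"] and not meta["optional"]
]
created_outputs = [
    name for name, meta in _info.items()
    if "out" in meta["intent"]
]
print("fields the caller must provide:", required_inputs)
print("fields the component will create:", created_outputs)
```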
End of explanation
import numpy as np
from landlab import Component, FieldError
class KinwaveOverlandFlowModel(Component):
Calculate water flow over topography.
Landlab component that implements a two-dimensional
kinematic wave model.
Construction:
KinwaveOverlandFlowModel(grid, [stuff to be added later])
Parameters
----------
grid : ModelGrid
Landlab ModelGrid object
precip_rate : float, optional (defaults to 1 mm/hr)
Precipitation rate, mm/hr
precip_duration : float, optional (defaults to 1 hour)
Duration of precipitation, hours
infilt_rate : float, optional (defaults to 0)
Maximum rate of infiltration, mm/hr
roughness : float, defaults to 0.01
Manning roughness coefficient, s/m^1/3
_name = "KinwaveOverlandFlowModel"
_unit_agnostic = False
_info = {
"surface_water__depth": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m",
"mapping": "node",
"doc": "Depth of water on the surface",
},
"topographic__elevation": {
"dtype": float,
"intent": "in",
"optional": False,
"units": "m",
"mapping": "node",
"doc": "Land surface topographic elevation",
},
"topographic__gradient": {
"dtype": float,
"intent": "in",
"optional": False,
"units": "m/m",
"mapping": "link",
"doc": "Gradient of the ground surface",
},
"water__specific_discharge": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m2/s",
"mapping": "link",
"doc": "flow discharge component in the direction of the link",
},
"water__velocity": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m/s",
"mapping": "link",
"doc": "flow velocity component in the direction of the link",
},
}
def __init__(): # ignore this for now, we will add more stuff eventually.
pass
Explanation: Class with complete header information
End of explanation
def __init__(
self, grid, precip_rate=1.0, precip_duration=1.0, infilt_rate=0.0, roughness=0.01
):
Initialize the KinwaveOverlandFlowModel.
Parameters
----------
grid : ModelGrid
Landlab ModelGrid object
precip_rate : float, optional (defaults to 1 mm/hr)
Precipitation rate, mm/hr
precip_duration : float, optional (defaults to 1 hour)
Duration of precipitation, hours
infilt_rate : float, optional (defaults to 0)
Maximum rate of infiltration, mm/hr
roughness : float, defaults to 0.01
Manning roughness coefficient, s/m^1/3
super().__init__(grid)
# Store parameters and do unit conversion
self._current_time = 0
self._precip = precip_rate / 3600000.0 # convert to m/s
self._precip_duration = precip_duration * 3600.0 # h->s
self._infilt = infilt_rate / 3600000.0 # convert to m/s
self._vel_coef = 1.0 / roughness # do division now to save time
# Create fields...
# Elevation
self._elev = grid.at_node["topographic__elevation"]
# Slope
self._slope = grid.at_link["topographic__gradient"]
self.initialize_output_fields()
self._depth = grid.at_node["surface_water__depth"]
self._vel = grid.at_link["water__velocity"]
self._disch = grid.at_link["water__specific_discharge"]
# Calculate the ground-surface slope (assume it won't change)
self._slope[self._grid.active_links] = self._grid.calc_grad_at_link(self._elev)[
self._grid.active_links
]
self._sqrt_slope = np.sqrt(self._slope)
self._sign_slope = np.sign(self._slope)
Explanation: The initialization method (__init__)
Every Landlab component should have an __init__ method. The parameter signature should start with a ModelGrid object as the first parameter. Following this are component-specific parameters. In our example, the parameters for the kinematic wave model include: precipitation rate, precipitation duration, infiltration rate, and roughness coefficient (Manning's n).
The first thing the component __init__ should do is call the super method. This calls the __init__ of the component's base class.
Two things a component __init__ method commonly does are (1) store the component's parameters as instance attributes, and (2) create the necessary fields. When creating grid fields, it is important to first check whether a field with the same name (and mapping) already exists. For example, a driver or another component might have already created topographic__elevation when our kinematic wave component is initialized.
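A minimal sketch of that "check before creating" pattern is shown below; it assumes a fresh RasterModelGrid and uses the field names from the _info dict. In the actual __init__ above, the initialize_output_fields() helper takes care of the output fields for you.
```
# Minimal sketch of checking for an existing field before creating it.
from landlab import RasterModelGrid

grid = RasterModelGrid((4, 5), xy_spacing=10.0)

# Only create the elevation field if a driver (or another component)
# has not already done so.
if "topographic__elevation" not in grid.at_node:
    grid.add_zeros("topographic__elevation", at="node")

# The same idea applies to output fields such as surface_water__depth.
if "surface_water__depth" not in grid.at_node:
    grid.add_zeros("surface_water__depth", at="node")
```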
End of explanation
def run_one_step(self, dt):
Calculate water flow for a time period `dt`.
Default units for dt are *seconds*.
# Calculate water depth at links. This implements an "upwind" scheme
# in which water depth at the links is the depth at the higher of the
# two nodes.
H_link = self._grid.map_value_at_max_node_to_link(
"topographic__elevation", "surface_water__depth"
)
# Calculate velocity using the Manning equation.
self._vel = (
-self._sign_slope * self._vel_coef * H_link ** 0.66667 * self._sqrt_slope
)
# Calculate discharge
self._disch = H_link * self._vel
# Flux divergence
dqda = self._grid.calc_flux_div_at_node(self._disch)
# Rate of change of water depth
if self._current_time < self._precip_duration:
ppt = self._precip
else:
ppt = 0.0
dHdt = ppt - self._infilt - dqda
# Update water depth: simple forward Euler scheme
self._depth[self._grid.core_nodes] += dHdt[self._grid.core_nodes] * dt
# Very crude numerical hack: prevent negative water depth
self._depth[np.where(self._depth < 0.0)[0]] = 0.0
self._current_time += dt
Explanation: The "go" method, run_one_step()
Every Landlab component will have a method that implements the component's action. The go method can have any name you like, but the preferred practice for time-advancing components is to use the standard name run_one_step(). Landlab assumes that if a component has a method with this name, it will (a) be the primary "go" method, and (b) will be fully standardized as described here.
The run_one_step method should take either zero or one argument. If there is an argument, it should be a duration to run, dt; i.e., a timestep length. If the component does not evolve as time passes, this argument may be missing (see, e.g., the FlowRouter, which returns a steady state flow pattern independent of time).
The first step in the algorithm in the example below is to calculate water depth at the links, where we will be calculating the water discharge. In this particular case, we'll use the depth at the upslope (higher) of the two end nodes. The grid method to do this, map_value_at_max_node_to_link, is one of many mapping functions available.
We then calculate velocity using the Manning equation, and specific discharge by multiplying velocity by depth.
Mass balance for the cells around nodes is computed using the calc_flux_div_at_node grid method.
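As a small stand-alone sketch of the mapping step described above (separate from the component itself), the snippet below builds a tiny grid with arbitrary values and maps water depth to the links from whichever end node has the higher elevation.
```
# Illustrative sketch of map_value_at_max_node_to_link on a toy grid.
from landlab import RasterModelGrid

grid = RasterModelGrid((3, 4), xy_spacing=10.0)
z = grid.add_zeros("topographic__elevation", at="node")
h = grid.add_zeros("surface_water__depth", at="node")
z[:] = grid.x_of_node                    # elevation increases to the right
h[:] = 0.001 * (grid.x_of_node + 10.0)   # arbitrary water depths

H_link = grid.map_value_at_max_node_to_link(
    "topographic__elevation", "surface_water__depth"
)
print(H_link)  # one depth per link, taken from the higher of its two nodes
```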
End of explanation
def __init__(self):
Initialize the Component.
...
super().__init__(grid)
# Store grid and parameters and do unit conversion
self._bc_set_code = self._grid.bc_set_code
# ...
def updated_boundary_conditions(self):
Call if boundary conditions are updated.
# do things necessary if BCs are updated.
def run_one_step(self, dt):
Calculate water flow for a time period `dt`.
if self._bc_set_code != self.grid.bc_set_code:
self.updated_boundary_conditions()
self._bc_set_code = self.grid.bc_set_code
# Do rest of run one step
# ...
Explanation: Changes to boundary conditions
Sometimes (though not in this example) it proves convenient to hard-code assumptions about boundary conditions into the __init__ method; if the boundary conditions later change, any values computed from them become stale.
We can resolve this issue by creating an additional component method that updates these boundary-condition-dependent values and that can be called if the boundary conditions change. Whether the boundary conditions have changed can be assessed with a grid attribute called bc_set_code. This is an int which will change if the boundary conditions change.
End of explanation
import numpy as np
from landlab import Component
class KinwaveOverlandFlowModel(Component):
Calculate water flow over topography.
Landlab component that implements a two-dimensional
kinematic wave model. This is an extremely simple, unsophisticated
model, originally built simply to demonstrate the component creation
process. Limitations to the present version include: infiltration is
handled very crudely, the caller is responsible for picking a stable
time step size (no adaptive time stepping is used in the `run_one_step`
method), precipitation rate is constant for a given duration (then zero),
and all parameters are uniform in space. Also, the terrain is assumed
to be stable over time. Caveat emptor!
Examples
--------
>>> from landlab import RasterModelGrid
>>> rg = RasterModelGrid((4, 5), xy_spacing=10.0)
>>> z = rg.add_zeros("topographic__elevation", at="node")
>>> s = rg.add_zeros("topographic__gradient", at="link")
>>> kw = KinwaveOverlandFlowModel(rg)
>>> kw.vel_coef
100.0
>>> rg.at_node['surface_water__depth']
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.])
_name = "KinwaveOverlandFlowModel"
_unit_agnostic = False
_info = {
"surface_water__depth": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m",
"mapping": "node",
"doc": "Depth of water on the surface",
},
"topographic__elevation": {
"dtype": float,
"intent": "in",
"optional": False,
"units": "m",
"mapping": "node",
"doc": "Land surface topographic elevation",
},
"topographic__gradient": {
"dtype": float,
"intent": "in",
"optional": False,
"units": "m/m",
"mapping": "link",
"doc": "Gradient of the ground surface",
},
"water__specific_discharge": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m2/s",
"mapping": "link",
"doc": "flow discharge component in the direction of the link",
},
"water__velocity": {
"dtype": float,
"intent": "out",
"optional": False,
"units": "m/s",
"mapping": "link",
"doc": "flow velocity component in the direction of the link",
},
}
def __init__(
self,
grid,
precip_rate=1.0,
precip_duration=1.0,
infilt_rate=0.0,
roughness=0.01,
):
Initialize the KinwaveOverlandFlowModel.
Parameters
----------
grid : ModelGrid
Landlab ModelGrid object
precip_rate : float, optional (defaults to 1 mm/hr)
Precipitation rate, mm/hr
precip_duration : float, optional (defaults to 1 hour)
Duration of precipitation, hours
infilt_rate : float, optional (defaults to 0)
Maximum rate of infiltration, mm/hr
roughness : float, defaults to 0.01
Manning roughness coefficient, s/m^1/3
super().__init__(grid)
# Store parameters and do unit conversion
self._current_time = 0
self._precip = precip_rate / 3600000.0 # convert to m/s
self._precip_duration = precip_duration * 3600.0 # h->s
self._infilt = infilt_rate / 3600000.0 # convert to m/s
self._vel_coef = 1.0 / roughness # do division now to save time
# Create fields...
# Elevation
self._elev = grid.at_node["topographic__elevation"]
# Slope
self._slope = grid.at_link["topographic__gradient"]
self.initialize_output_fields()
self._depth = grid.at_node["surface_water__depth"]
self._vel = grid.at_link["water__velocity"]
self._disch = grid.at_link["water__specific_discharge"]
# Calculate the ground-surface slope (assume it won't change)
self._slope[self._grid.active_links] = self._grid.calc_grad_at_link(self._elev)[
self._grid.active_links
]
self._sqrt_slope = np.sqrt(self._slope)
self._sign_slope = np.sign(self._slope)
@property
def vel_coef(self):
Velocity coefficient.
(1/roughness)
return self._vel_coef
def run_one_step(self, dt):
Calculate water flow for a time period `dt`.
Default units for dt are *seconds*.
# Calculate water depth at links. This implements an "upwind" scheme
# in which water depth at the links is the depth at the higher of the
# two nodes.
H_link = self._grid.map_value_at_max_node_to_link(
"topographic__elevation", "surface_water__depth"
)
# Calculate velocity using the Manning equation.
self._vel = (
-self._sign_slope * self._vel_coef * H_link ** 0.66667 * self._sqrt_slope
)
# Calculate discharge
self._disch[:] = H_link * self._vel
# Flux divergence
dqda = self._grid.calc_flux_div_at_node(self._disch)
# Rate of change of water depth
if self._current_time < self._precip_duration:
ppt = self._precip
else:
ppt = 0.0
dHdt = ppt - self._infilt - dqda
# Update water depth: simple forward Euler scheme
self._depth[self._grid.core_nodes] += dHdt[self._grid.core_nodes] * dt
# Very crude numerical hack: prevent negative water depth
self._depth[np.where(self._depth < 0.0)[0]] = 0.0
self._current_time += dt
from landlab import RasterModelGrid
nr = 3
nc = 4
rg = RasterModelGrid((nr, nc), 10.0)
rg.add_empty("topographic__elevation", at="node")
rg.add_zeros("topographic__gradient", at="link")
rg.at_node["topographic__elevation"][:] = rg.x_of_node.copy()
kinflow = KinwaveOverlandFlowModel(rg, precip_rate=100.0, precip_duration=100.0)
for i in range(100):
kinflow.run_one_step(1.0)
print("The discharge from node 6 to node 5 should be -0.000278 m2/s:")
print(rg.at_link["water__specific_discharge"][8])
print("The discharge from node 5 to node 4 should be -0.000556 m2/s:")
print(rg.at_link["water__specific_discharge"][7])
Explanation: The complete component
End of explanation
nr = 62
nc = 42
rg = RasterModelGrid((nr, nc), 10.0)
rg.add_empty("topographic__elevation", at="node")
rg.at_node["topographic__elevation"] = 0.01 * rg.y_of_node
rg.add_zeros("topographic__gradient", at="link")
kinflow = KinwaveOverlandFlowModel(rg, precip_rate=100.0, precip_duration=100.0)
for i in range(1800):
kinflow.run_one_step(1.0)
Explanation: Next, we'll test the component on a larger grid and a larger domain.
End of explanation
%matplotlib inline
from landlab.plot import imshow_grid
imshow_grid(rg, "topographic__elevation")
Explanation: Plot the topography:
End of explanation
q_out = 0.1 * 600 / 3600.0
q_out
Explanation: The steady solution should be as follows. The unit discharge at the bottom edge should equal the precipitation rate, 100 mm/hr, times the slope length.
The slope length is the distance from the bottom edge of the bottom-most row of cells, to the top edge of the top-most row of cells. The base row of nodes are at y = 0, and the cell edges start half a cell width up from that, so y = 5 m. The top of the upper-most row of cells is half a cell width below the top grid edge, which is 610 m, so the top of the cells is 605 m. Hence the interior (cell) portion of the grid is 600 m long.
Hence, discharge out the bottom should be 100 mm/hr x 600 m = 0.1 m/hr x 600 m = 60 m2/hr. Let's convert this to m2/s:
End of explanation
n = 0.01
q = 0.0167
S = 0.01
H_out = (n * q) ** 0.6 * S ** -0.3
H_out
imshow_grid(rg, "surface_water__depth", cmap="Blues")
Explanation: The water depth should be just sufficient to carry this discharge with the given slope and roughness. We get this by inverting the Manning equation:
$$q = (1/n) H^{5/3} S^{1/2}$$
$$H^{5/3} = n q S^{-1/2}$$
$$H = (n q)^{3/5} S^{-3/10}$$
The slope gradient is 0.01 (because we set elevation to be 0.01 times the y coordinate). The discharge, as we've already established, is about 0.0167 m2/s, and the roughness is 0.01 (the default value). Therefore,
End of explanation
rg.at_node["surface_water__depth"][42:84] # bottom row of core nodes
Explanation: This looks pretty good. Let's check the values:
End of explanation |
5,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BBH class
$ecc==0$
Step1: 5 realization results
(93328, 13)
(17287278, 13)
(17380606, 14)
Step2: Mergers
Event Rate
Step3: $\frac{1}{4\pi/3(30Mpc)^313.5Gyr}=\frac{1}{4\pi/32700013.5 {Gpc}^3 yr}$
Step4: integrate from $z=0.6$,
which means t=7.7587 to 13.78, d_c=0 to 2.206 Gpc
cosmic calculation
Step5: Considering cosmology, BBH mergers at larger distances will still be detectable
Step6: Present BBHs
Step7: $\dot{f}=k_0 f^{11/3}$
$k_0=\frac{96}{5}(2\pi)^{8/3}\frac{G^{5/3}}{c^5}\frac{m_1m_2}{(m_1+m_2)^{1/3}}=\frac{96}{5}(2\pi)^{8/3}\frac{G^{5/3}}{c^5}M_{chirp}^{5/3}=const M_{chirp}^{5/3}$
$const=3.68e-6 s^{5/3}$
$n=\frac{3\eta}{8k_0}f_{min}^{-8/3}$ where $f_{min}=\frac{2}{T_{max}}\simeq\frac{2}{10^{4.4}\,\mathrm{s}}\simeq 2\times10^{-4.4}\,\mathrm{Hz}$
N in [$\frac{4\pi}{3}30^3$ Mpc^3] = $\frac{3}{8k_0} f_{min}^{-8/3} \eta$ in [s $\times$ Gpc$^{-3}$ yr$^{-1}$]
=> $N=\frac{3}{8k_0} f_{min}^{-8/3}\,\eta \times \frac{1\,\mathrm{yr}}{3.15\times10^{7}\,\mathrm{s}} \times \frac{V(30\,\mathrm{Mpc})}{1\,\mathrm{Gpc^3}}$
```
==> $\Delta t=\frac{3}{8k_0}f^{-8/3}_{now}$ per 30 Mpc^3 volume
when merge within 1 yr ($\Delta t \le 1yr$), $f \ge f_{min} = (\frac{8}{3} k_0 \Delta t)^{-3/8} $
$f_{min} = (\frac{8}{3}\cdot\frac{96}{5}(2\pi)^{8/3} \frac{G^{5/3}}{c^5}\cdot 1\,\mathrm{yr})^{-3/8} M_{chirp}^{-5/8} \simeq 0.1164\, M_{chirp}^{-5/8}\,\mathrm{Hz}$ per $\frac{4\pi}{3}(30\,\mathrm{Mpc})^3$ volume, which is $\frac{4\pi}{3}\cdot 30^3 \times 10^{-9}$ Gpc^3 volume
```
Step8: $f_N(19.6,10\sim100,10^{-4.0})=[1190, 11897]$
in my case
Step9: Generate file for LISA analysis
Parameters for binary source
$f_0$
Step10: $T_{eject}$ variation for each model
Step11: Weird models
48
210 | Python Code:
# ## load example data for testing
# BBHex=pd.read_csv('../data/RES/1024/BBHex.dat',delim_whitespace=True,header=None,
# names=['Galaxy','RA','Dec','Dist','VMag','Model','Age','T_eject','M1','M2','Seperation','Ecc','Period'])
# BBHex.sample(2)
# ## label example data for analysis
# merge_list=((BBHex.Seperation==0)&(BBHex.Ecc==0))|(BBHex.Ecc==1)
# present_list=-merge_list
# merger=BBHex[merge_list]
# merger=merger.rename(columns={'Period':'T_merge'})
# present=BBHex[present_list]
dir_data='../data/Res/0220'
# Load complete data set
BBH=pd.read_csv(path.join(dir_data,'BBHnow.dat.gz'),delim_whitespace=True,header=None,
names=['PGC','RA','Dec','Dist','VMag','Model',
'Age','T_eject','M1','M2','Seperation','Ecc','Period'])
merger=BBH[((BBH.Seperation==0)&(BBH.Ecc==0))|(BBH.Ecc==1)]
merger=merger.rename(columns={'Period':'T_merge'})
present=BBH[~(((BBH.Seperation==0)&(BBH.Ecc==0))|(BBH.Ecc==1))]
BBH['MC']=(BBH.M1*BBH.M2)**0.6*(BBH.M1+BBH.M2)**(-0.2)
Explanation: BBH class
$ecc==0$: never merge, period is lifetime in Gyr, seperation won't change
$ecc==1$: merge quickly, period is time of merge, seperation won't change
rest:
merge: seperation==ecc==0, period is time of merge
present: period is in unit of second, seperation!=0, ecc!=0
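A quick sanity check of these rules on the BBH table loaded above (purely illustrative; column names, including the 'Seperation' spelling, follow the data file):
```
# Tally the categories described above.
never_merge = (BBH.Ecc == 0) & (BBH.Seperation != 0)
merged = ((BBH.Seperation == 0) & (BBH.Ecc == 0)) | (BBH.Ecc == 1)
present_now = ~merged

print("never merge:", never_merge.sum())
print("merged (incl. quick mergers):", merged.sum())
print("still present:", present_now.sum())
```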
End of explanation
display(merger.shape,present.shape,BBH.shape)
MC=(30*35)**0.6*(30+35)**(-0.2)
MC
sns.distplot(BBH.MC[BBH.MC<50])
title('BBH merged: 93,328; present BBH: 17,287,278; total: 17,380,606')
xlabel('Chirp Mass')
annotate('GW150914',xy=(MC,0.018),xytext=(MC,0.04),arrowprops=dict(facecolor='red', shrink=0.05))
# savefig('../Fig/MC.jpeg',dpi=200,bbox_inches='tight')
## Generate sample data for develop analysis scripts
# BBH.sample(10000).to_csv('../data/RES/1024/BBHex.dat')
Explanation: 5 realization results
(93328, 13)
(17287278, 13)
(17380606, 14)
End of explanation
merger.sample(5)
sum(merger.Ecc==1)
sns.distplot(BBH.T_eject,kde=True,norm_hist=True)
text(7,0.3,BBH.T_eject.describe())
ylim([0,0.6])
xlim([-0.5,14])
# savefig(path.join(dir_data,'Fig/Teject_hist.jpeg'),dpi=200,bbox_inches='tight')
merge_kde=sns.distplot(merger.T_merge,kde=True,norm_hist=True)
text(7,0.116,merger.T_merge.describe())
xlim([-0.5,14])
# savefig(path.join(dir_data,'Fig/Tmerge_hist.jpeg'),dpi=200,bbox_inches='tight')
# sns.set_context("poster")
merge_curve=merge_kde.get_lines()[0].get_data()
Explanation: Mergers
Event Rate:
[LSC]BBH;aLIGO.pdf: $9 \sim 240Gpc^{-3}yr^{-1}$
[Carl;Morscher;Fred]BBH;GC;aLIGO.pdf: $10\sim100$ per year
My path:
merge event distribution over time
find the possibility of merge at recent, total merger: 93328 $(30Mpc)^{-3}(13.5Gyr)^{-1}$
scale it to $Gpc^{-3}yr^{-1}$
End of explanation
scale=len(merger)/(4.*pi/3.*30**3)/13.5
from scipy.interpolate import interp1d
f2=interp1d(merge_curve[0],merge_curve[1]*scale,kind='cubic')
f2(13.5)
plot(merge_curve[0],merge_curve[1]*scale)
title("merger event rate, per Gpc^3 per year, in local frame")
from scipy.integrate import quad
Explanation: $\frac{1}{\frac{4\pi}{3}(30\,\mathrm{Mpc})^3 \cdot 13.5\,\mathrm{Gyr}}=\frac{1}{\frac{4\pi}{3}\cdot 27000 \cdot 13.5\ \mathrm{Gpc^3\,yr}}$
End of explanation
quad(f2, 7.7587,13.5)
Explanation: integrate from $z=0.6$,
which means t=7.7587 to 13.78, d_c=0 to 2.206 Gpc
cosmic calculation
End of explanation
import numpy as np
import matplotlib.pyplot as pl
import scipy.stats as st
x = merger.Dist
y = merger.T_merge
xmin=5
xmax=35
ymin=0
ymax=15
# Peform the kernel density estimate
xx, yy = np.mgrid[xmin:xmax:100j, ymin:ymax:100j]
positions = np.vstack([xx.ravel(), yy.ravel()])
values = np.vstack([x, y])
kernel = st.gaussian_kde(values)
f = np.reshape(kernel(positions).T, xx.shape)
fig = pl.figure()
ax = fig.gca()
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
# Contourf plot
cfset = ax.contourf(xx, yy, f, cmap='Blues')
## Or kernel density estimate plot instead of the contourf plot
#ax.imshow(np.rot90(f), cmap='Blues', extent=[xmin, xmax, ymin, ymax])
# Contour plot
cset = ax.contour(xx, yy, f, colors='k')
# Label plot
ax.clabel(cset, inline=1, fontsize=10)
ax.set_xlabel('Distance [Mpc]')
ax.set_ylabel('Merge Time [Gyr]')
def tdelay(time,distance):
return time-distance*3.2625/1000
distance=np.arange(0,35,0.1)
y1=tdelay(13.5,distance)
y2=tdelay(13.4,distance)
# plt.fill_between(distance, y1, y2, color='grey', alpha='0.5')
# pl.savefig('../Fig/Event_rate.jpeg',dpi=200,bbox_inches='tight')
sns.jointplot(merger.Dist,merger.T_merge,kind='kde')
merger['MC']=(merger.M1*merger.M2)**0.6*(merger.M1+merger.M2)**(-0.2)
sns.distplot(merger.MC[merger.MC<50],kde=True)
fig = figure()#figsize=(12,8))
ax = fig.add_subplot(111, projection="mollweide")
# RA: [0h-24h] -> [-pi,pi]; Dec: [-90°,90°] -> [-pi/2,pi/2]\n"
ax.scatter((merger.RA-12)/12*pi,merger.Dec/180*pi,s=2)#,c=expGCpG.iMType,
# s=5*(2+log10(30./expGCpG.Dist.convert_objects(convert_numeric=True))),cmap=cm.Accent)
ax.set_xticklabels(['2h','4h','6h','8h','10h','12h','14h','16h','18h','20h','22h'])
ax.grid(True)
import matplotlib.pyplot as plt
import numpy as np
from moviepy.video.io.bindings import mplfig_to_npimage
import moviepy.editor as mpy
# duration = 1
# dt=0.2
# snap=lambda t:merger[(merger.T_merge>t)&(merger.T_merge<t+dt)]
# fig = figure()#figsize=(12,8))
# ax = fig.add_subplot(111, projection="mollweide")
# # RA: [0h-24h] -> [-pi,pi]; Dec: [-90°,90°] -> [-pi/2,pi/2]\n"
# ax.scatter((snap(0).RA-12)/12*pi,snap(0).Dec/180*pi)#,c=expGCpG.iMType,
# # s=5*(2+log10(30./expGCpG.Dist.convert_objects(convert_numeric=True))),cmap=cm.Accent)
# ax.set_xticklabels(['2h','4h','6h','8h','10h','12h','14h','16h','18h','20h','22h'])
# ax.grid(True)
# def make_frame_mpl(t):
# ax.scatter((snap(t/duration).RA-12)/12*pi,snap(t/duration).Dec/180*pi)
# return mplfig_to_npimage(fig) # RGB image of the figure
# animation =mpy.VideoClip(make_frame_mpl, duration=duration)
import time
# fig = figure()#figsize=(12,8))
# ax = fig.add_subplot(111, projection="mollweide")
# ax.set_xticklabels(['2h','4h','6h','8h','10h','12h','14h','16h','18h','20h','22h'])
# ax.grid(True)
# for t in linspace(0,13,260):
# # RA: [0h-24h] -> [-pi,pi]; Dec: [-90°,90°] -> [-pi/2,pi/2]\n"
# # ax.scatter((snap(0).RA-12)/12*pi,snap(0).Dec/180*pi)#,c=expGCpG.iMType,
# # s=5*(2+log10(30./expGCpG.Dist.convert_objects(convert_numeric=True))),cmap=cm.Accent)
# ax.scatter((snap(t).RA-12)/12*pi,snap(t).Dec/180*pi)
# time.sleep(0.01)
Explanation: Considering cosmology, BBH mergers at larger distances will still be detectable
End of explanation
present.sample(5)
# fig = figure()#figsize=(12,8))
# ax = fig.add_subplot(111, projection="mollweide")
# # RA: [0h-24h] -> [-pi,pi]; Dec: [-90°,90°] -> [-pi/2,pi/2]\n"
# ax.scatter((present.RA-12)/12*pi,present.Dec/180*pi,#c=expGCpG.iMType,
# s=log(2000000000/present.Period))#,cmap=cm.Accent)
# ax.set_xticklabels(['2h','4h','6h','8h','10h','12h','14h','16h','18h','20h','22h'])
# ax.grid(True)
sns.distplot(log10(2./(present.Period[log10(present.Period)<5.45])),rug=True,kde=True,bins=49)
xlim([-4.9,-3.5])
text(-4.2,1,(log10(2./(present.Period[log10(present.Period)<5.1])).describe()))
xlabel("Binary Frequency")
# savefig('../Fig/period.jpeg',dpi=200,bbox_inches='tight')
sum(log10(present.Period)<8.5)
present[log10(present.Period)<4]
present[log10(present.Period)<4.4]
present['MC']=(present.M1*present.M2)**0.6*(present.M1+present.M2)**(-0.2)
present.MC.describe()
# present[present.MC==min(present.MC)]
sns.distplot(present.MC)#[present.MC<50])
inspiral=present[log10(2./present.Period)>-4.0]
inspiral
sum(merger.Ecc==1)
merger.Seperation.describe()
inspiral.ix[inspiral.index,['RA','Dec','Dist','Model','Age','T_eject','M1','M2','MC','Seperation','Ecc','Period']].to_csv('../data/RES/0220/present.dat',sep=' ',index=None,float_format='%.3f')
inspiral.ix[inspiral.index,['RA','Dec','Dist','Model','Age','T_eject','M1','M2','MC','Seperation','Ecc','Period']]
Explanation: Present BBHs
End of explanation
def f_N(MC,eta,f_min):
return 3./8./(3.68e-6*MC**(5./3.))*(f_min)**(-8./3.)/3.154e7*(4.*3.14/3.*30.**3.*10**(-9))*eta
f_N(19.6,10,10**-4.0)
Explanation: $\dot{f}=k_0 f^{11/3}$
$k_0=\frac{96}{5}(2\pi)^{8/3}\frac{G^{5/3}}{c^5}\frac{m_1m_2}{(m_1+m_2)^{1/3}}=\frac{96}{5}(2\pi)^{8/3}\frac{G^{5/3}}{c^5}M_{chirp}^{5/3}=const M_{chirp}^{5/3}$
$const=3.68e-6 s^{5/3}$
$n=\frac{3\eta}{8k_0}f_{min}^{-8/3}$ where $f_{min}=\frac{2}{T_{max}}\simeq\frac{2}{10^{4.4}\,\mathrm{s}}\simeq 2\times10^{-4.4}\,\mathrm{Hz}$
N in [$\frac{4\pi}{3}30^3$ Mpc^3] = $\frac{3}{8k_0} f_{min}^{-8/3} \eta$ in [s $\times$ Gpc$^{-3}$ yr$^{-1}$]
=> $N=\frac{3}{8k_0} f_{min}^{-8/3}\,\eta \times \frac{1\,\mathrm{yr}}{3.15\times10^{7}\,\mathrm{s}} \times \frac{V(30\,\mathrm{Mpc})}{1\,\mathrm{Gpc^3}}$
```
==> $\Delta t=\frac{3}{8k_0}f^{-8/3}_{now}$ per 30 Mpc^3 volume
when merge within 1 yr ($\Delta t \le 1yr$), $f \ge f_{min} = (\frac{8}{3} k_0 \Delta t)^{-3/8} $
$f_{min} = (\frac{8}{3}\cdot\frac{96}{5}(2\pi)^{8/3} \frac{G^{5/3}}{c^5}\cdot 1\,\mathrm{yr})^{-3/8} M_{chirp}^{-5/8} \simeq 0.1164\, M_{chirp}^{-5/8}\,\mathrm{Hz}$ per $\frac{4\pi}{3}(30\,\mathrm{Mpc})^3$ volume, which is $\frac{4\pi}{3}\cdot 30^3 \times 10^{-9}$ Gpc^3 volume
```
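As a quick cross-check of the constant quoted above (an added sketch, not part of the original analysis), evaluating $\frac{96}{5}(2\pi)^{8/3}G^{5/3}/c^5$ with the chirp mass expressed in solar masses reproduces the $3.68\times10^{-6}\,\mathrm{s}^{5/3}$ prefactor used in f_N:
```
# Numerical check of const = (96/5)(2*pi)^(8/3) G^(5/3)/c^5 * M_sun^(5/3).
import numpy as np
from scipy.constants import G, c

M_sun = 1.989e30  # kg (assumed value for the solar mass)
const = (96./5.) * (2.*np.pi)**(8./3.) * G**(5./3.) / c**5 * M_sun**(5./3.)
print(const)  # ~3.7e-6 s^(5/3), consistent with the 3.68e-6 used above
```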
End of explanation
def f_eta(MC,N,f_min):
return N*8./3.*(3.68e-6*MC**(5./3.))/(f_min)**(-8./3.)*3.154e7/(4.*3.14/3.*30.**3.*10**(-9))
f_eta(19.6,10,10**-4.098)
Explanation: $f_N(19.6,10\sim100,10^{-4.0})=[1190, 11897]$
in my case: $N=7$
$N=7$ counts only dynamically formed BBHs
the field has $10^4$ more stars, but may be a factor of [$10^{-2}$ to 0.5] less efficient at forming BBHs
BH-formation merger events are not counted
GCs are underestimated (GC SN & dwarf galaxies)
more BBH channels exist (galactic nuclei, ...)
End of explanation
present.shape
present.keys()
lisadata=present[['M1','M2','Period']] # M_sun, M_sun, second
lisadata['Dist']=present.Dist*1.0e6 # Mpc -> pc
lisadata['thetas']=present.Dec.apply(lambda x: (x+90.)/180.0*pi) # [-90,90]->[0,pi]
lisadata['phis']=present.RA.apply(lambda x: x*15.0/180.0*pi) # [0,24]->[0,2pi]
np.random.seed(seed=427)
lisadata['thetal']=np.random.rand(len(present))*pi # [0,pi]
lisadata['phil']=np.random.rand(len(present))*2*pi # [0,2pi]
lisadata['phio']=np.random.rand(len(present))*2*pi # [0,2pi]
lisadata['gam']=np.random.rand(len(present))*2*pi # [0,2pi]
lisadata['e']=present.Ecc
lisadata.sample(1000).describe().ix[['min','max']]
lisadata.to_csv('../data/RES/0220/lisa.dat.gz',
sep='\t',index=None,compression='gzip')
lisadata[lisadata.Period<10**8.5].to_csv('../data/RES/0220/lisa_8.5.dat.gz',
sep='\t',index=None,compression='gzip')
Explanation: Generate file for LISA analysis
Parameters for binary source
$f_0$: binary frequency
$\theta_s,\phi_s,\theta_L,\phi_L$: s for source; L for angular momentum vector
$\mathrm{A}=\frac{M_1M_2}{rD}$: amplitude
$\varphi_0$: phase at $t=0$
M1 = parameters(1)
M2 = parameters(2)
Porb = parameters(3) : orbital period (a proxy for the orbital separation r), s
d = parameters(4) : distance between source and observer, pc
thetas = parameters(5) : based on galactic radec
phis = parameters(6) : based on galactic radec
thetal = parameters(7) : [0, pi] angular momentum
phil = parameters(8) : [0, 2pi] angular momentum
phio = parameters(9) : [0, 2pi] phase
gam = parameters(10) : [0, 2pi] amplitude modulation, Lambda
e = parameters(11) : eccentricity
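A small read-back check of the file written above (assuming the lisa.dat.gz path from the previous cell) can confirm the angle columns stay inside their expected ranges:
```
# Read the generated table back and sanity-check the angle ranges.
import numpy as np
import pandas as pd

check = pd.read_csv('../data/RES/0220/lisa.dat.gz', sep='\t', compression='gzip')
assert check.thetas.between(0, np.pi).all()
assert check.phis.between(0, 2*np.pi).all()
assert check.thetal.between(0, np.pi).all()
print(check.describe().loc[['min', 'max']])
```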
End of explanation
## Load BHLIB
bhlib=pd.DataFrame()
###################################
for i in range(1,325): # model
for j in range(1,11): # model id
###################################
bhe=pd.read_csv('../data/BHsystem/%d-%d-bhe.dat' %(i,j),
usecols=[0, 2, 3, 4, 6, 8, 10, 20], names=['T_eject','Type1','Type2','M1','M2','Seperation','Ecc','Model'],
header=None, delim_whitespace=True)
bhe.Model='%d-%d' %(i,j)
bhlib=pd.concat([bhlib,bhe],ignore_index=False)
# BHB binary
bhblib=bhlib[(bhlib.Type1==14)&(bhlib.Type2==14)].copy(deep=True)
bhsys=bhlib[~((bhlib.Type1==14)&(bhlib.Type2==14))].copy(deep=True)
bhblib=bhblib.drop(bhblib.columns[[1, 2]], axis=1)
for i in range(1,325):
for j in range(1,6):
fig=figure(i)
try:
sns.distplot(bhblib.T_eject[bhblib.Model=="%d-%d" % (i,j)],rug=True,label='Model %d-%d' % (i,j))
except:
print "no ejected bbh in %d-%d" % (i,j)
legend()
savefig('../Fig/BHE_var/model-%d.jpeg' % i, dpi=200,bbox_inches='tight')
close(fig)
Explanation: $T_{eject}$ variation for each model
End of explanation
i=48
for j in range(1,6):
fig=figure(i)
try:
sns.distplot(bhblib.T_eject[bhblib.Model=="%d-%d" % (i,j)],kde=True,rug=True,label='Model %d-%d' % (i,j))
except:
print "no ejected bbh in %d-%d" % (i,j)
legend()
bhblib[bhblib.Model.str.match(r"48-\d")]
for i in range(1,6):
sns.distplot(bhblib.T_eject[bhblib.Model.str.contains('-%d' % i)],norm_hist=True,label='run %d' % i)
legend()
# savefig('../Fig/BHE_var/runs.jpeg',dpi=200,bbox_inches='tight')
from gcpg import build
reload(build)
build.analysis('/Users/domi/Desktop/RES/1101')
Explanation: Weird models
48
210
End of explanation |
5,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step2: Parameters
Step3: Colab-only auth for this notebook and the TPU
Step4: TPU detection
Step5: tf.data.Dataset
Step6: Let's have a look at the data
Step7: Estimator model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course
Step8: Train and validate the model, this time on TPU
Step9: Visualize predictions
Step10: Deploy the trained model to ML Engine
Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
Step11: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https
Step12: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work. | Python Code:
import os, re, math, json, shutil, pprint, datetime
import PIL.Image, PIL.ImageFont, PIL.ImageDraw # "pip3 install Pillow" or "pip install Pillow" if needed
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/06_MNIST_Estimator_to_TPUEstimator.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
MNIST Estimator to TPUEstimator
This notebook will show you how to port an Estimator model to TPUEstimator.
All the lines that had to be changed in the porting are marked with a "TPU REFACTORING" comment.
You do the porting only once. TPUEstimator then works on both TPU and GPU with the use_tpu=False flag.
Imports
End of explanation
BATCH_SIZE = 32 #@param {type:"integer"}
BUCKET = 'gs://' #@param {type:"string"}
assert re.search(r'gs://.+', BUCKET), 'You need a GCS bucket for your Tensorboard logs. Head to http://console.cloud.google.com/storage and create one.'
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
#@title visualization utilities [RUN ME]
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
Explanation: Parameters
End of explanation
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
Explanation: Colab-only auth for this notebook and the TPU
End of explanation
#TPU REFACTORING: detect the TPU
try: # TPU detection
tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # Picks up a connected TPU on Google's Colab, ML Engine, Kubernetes and Deep Learning VMs accessed through the 'ctpu up' utility
#tpu = tf.contrib.cluster_resolver.TPUClusterResolver('MY_TPU_NAME') # If auto-detection does not work, you can pass the name of the TPU explicitly (tip: on a VM created with "ctpu up" the TPU has the same name as the VM)
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
USE_TPU = True
except ValueError:
tpu = None
print("Running on GPU or CPU")
USE_TPU = False
Explanation: TPU detection
End of explanation
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for TPU for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # prefetch next batch while training (-1: autotune prefetch buffer size)
return dataset
#TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too
# def get_validation_dataset(image_file, label_file):
def get_validation_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
#TPU REFACTORING: training and eval batch sizes must be the same: passing batch_size parameter here too
# dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.repeat() # Mandatory for TPU for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file, 10000)
# For TPU, we will need a function that returns the dataset
# TPU REFACTORING: input_fn's must have a params argument though which TPUEstimator passes params['batch_size']
# training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
# validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
training_input_fn = lambda params: get_training_dataset(training_images_file, training_labels_file, params['batch_size'])
validation_input_fn = lambda params: get_validation_dataset(validation_images_file, validation_labels_file, params['batch_size'])
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs
# TPU REFACTORING: model_fn must have a params argument. TPUEstimator passes batch_size and use_tpu into it
#def model_fn(features, labels, mode):
def model_fn(features, labels, mode, params):
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
x = features
y = tf.reshape(x, [-1, 28, 28, 1])
y = tf.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False)(y) # no bias necessary before batch norm
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training) # no batch norm scaling necessary before "relu"
y = tf.nn.relu(y) # activation after batch norm
y = tf.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Flatten()(y)
y = tf.layers.Dense(200, use_bias=False)(y)
y = tf.layers.BatchNormalization(scale=False, center=True)(y, training=is_training)
y = tf.nn.relu(y)
y = tf.layers.Dropout(0.5)(y, training=is_training)
logits = tf.layers.Dense(10)(y)
predictions = tf.nn.softmax(logits)
classes = tf.math.argmax(predictions, axis=-1)
if (mode != tf.estimator.ModeKeys.PREDICT):
loss = tf.losses.softmax_cross_entropy(labels, logits)
step = tf.train.get_or_create_global_step()
# TPU REFACTORING: step is now increased once per GLOBAL_BATCH_SIZE = 8*BATCH_SIZE. Must adjust learning rate schedule accordingly
# lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000, 1/math.e)
lr = 0.0001 + tf.train.exponential_decay(0.01, step, 2000//8, 1/math.e)
# TPU REFACTORING: custom Tensorboard summaries do not work. Only default Estimator summaries will appear in Tensorboard.
# tf.summary.scalar("learn_rate", lr)
optimizer = tf.train.AdamOptimizer(lr)
# TPU REFACTORING: wrap the optimizer in a CrossShardOptimizer: this implements the multi-core training logic
if params['use_tpu']:
optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
# little wrinkle: batch norm uses running averages which need updating after each batch. create_train_op does it, optimizer.minimize does not.
train_op = tf.contrib.training.create_train_op(loss, optimizer)
#train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
# TPU REFACTORING: a metrics_fn is needed for TPU
# metrics = {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
metric_fn = lambda classes, labels: {'accuracy': tf.metrics.accuracy(classes, tf.math.argmax(labels, axis=-1))}
tpu_metrics = (metric_fn, [classes, labels]) # pair of metric_fn and its list of arguments, there can be multiple pairs in a list
# metric_fn will run on CPU, not TPU: more operations are allowed
else:
loss = train_op = metrics = tpu_metrics = None # None of these can be computed in prediction mode because labels are not available
# TPU REFACTORING: EstimatorSpec => TPUEstimatorSpec
## return tf.estimator.EstimatorSpec(
return tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
predictions={"predictions": predictions, "classes": classes}, # name these fields as you like
loss=loss,
train_op=train_op,
# TPU REFACTORING: a metrics_fn is needed for TPU, passed into the eval_metrics field instead of eval_metrics_ops
# eval_metric_ops=metrics
eval_metrics = tpu_metrics
)
# Called once when the model is saved. This function produces a Tensorflow
# graph of operations that will be prepended to your model graph. When
# your model is deployed as a REST API, the API receives data in JSON format,
# parses it into Tensors, then sends the tensors to the input graph generated by
# this function. The graph can transform the data so it can be sent into your
# model input_fn. You can do anything you want here as long as you do it with
# tf.* functions that produce a graph of operations.
def serving_input_fn():
# placeholder for the data received by the API (already parsed, no JSON decoding necessary,
# but the JSON must contain one or multiple 'image' key(s) with 28x28 greyscale images as content.)
inputs = {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])} # the shape of this dict should match the shape of your JSON
features = inputs['serving_input'] # no transformation needed
return tf.estimator.export.TensorServingInputReceiver(features, inputs) # features are the features needed by your model_fn
# Return a ServingInputReceiver if your features are a dictionary of Tensors, TensorServingInputReceiver if they are a straight Tensor
Explanation: Estimator model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
End of explanation
EPOCHS = 10
# TPU_REFACTORING: to use all 8 cores, increase the batch size by 8
GLOBAL_BATCH_SIZE = BATCH_SIZE * 8
# TPU_REFACTORING: TPUEstimator increments the step once per GLOBAL_BATCH_SIZE: must adjust epoch length accordingly
# steps_per_epoch = 60000 // BATCH_SIZE # 60,000 images in training dataset
steps_per_epoch = 60000 // GLOBAL_BATCH_SIZE # 60,000 images in training dataset
MODEL_EXPORT_NAME = "mnist" # name for exporting saved model
# TPU_REFACTORING: the TPU will run multiple steps of training before reporting back
TPU_ITERATIONS_PER_LOOP = steps_per_epoch # report back after each epoch
tf_logging.set_verbosity(tf_logging.INFO)
now = datetime.datetime.now()
MODEL_DIR = BUCKET+"/mnistjobs/job" + "-{}-{:02d}-{:02d}-{:02d}:{:02d}:{:02d}".format(now.year, now.month, now.day, now.hour, now.minute, now.second)
# TPU REFACTORING: the RunConfig has changed
#training_config = tf.estimator.RunConfig(model_dir=MODEL_DIR, save_summary_steps=10, save_checkpoints_steps=steps_per_epoch, log_step_count_steps=steps_per_epoch/4)
training_config = tf.contrib.tpu.RunConfig(
cluster=tpu,
model_dir=MODEL_DIR,
tpu_config=tf.contrib.tpu.TPUConfig(TPU_ITERATIONS_PER_LOOP))
# TPU_REFACTORING: exporters do not work yet. Must call export_savedmodel manually after training
#export_latest = tf.estimator.LatestExporter(MODEL_EXPORT_NAME, serving_input_receiver_fn=serving_input_fn)
# TPU_REFACTORING: Estimator => TPUEstimator
#estimator = tf.estimator.Estimator(model_fn=model_fn, config=training_config)
estimator = tf.contrib.tpu.TPUEstimator(
model_fn=model_fn,
model_dir=MODEL_DIR,
# TPU_REFACTORING: training and eval batch size must be the same for now
train_batch_size=GLOBAL_BATCH_SIZE,
eval_batch_size=10000, # 10000 digits in eval dataset
predict_batch_size=10000, # prediction on the entire eval dataset in the demo below
config=training_config,
use_tpu=USE_TPU,
# TPU REFACTORING: setting the kind of model export we want
export_to_tpu=False) # we want an exported model for CPU/GPU inference because that is what is supported on ML Engine
# TPU REFACTORING: train_and_evaluate does not work on TPU yet, TrainSpec not needed
# train_spec = tf.estimator.TrainSpec(training_input_fn, max_steps=EPOCHS*steps_per_epoch)
# TPU REFACTORING: train_and_evaluate does not work on TPU yet, EvalSpec not needed
# eval_spec = tf.estimator.EvalSpec(validation_input_fn, steps=1, exporters=export_latest, throttle_secs=0) # no eval throttling: evaluates after each checkpoint
# TPU REFACTORING: train_and_evaluate does not work on TPU yet, must train then eval manually
# tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
estimator.train(training_input_fn, steps=steps_per_epoch*EPOCHS)
estimator.evaluate(input_fn=validation_input_fn, steps=1)
# TPU REFACTORING: exporters do not work yet. Must call export_savedmodel manually after training
estimator.export_savedmodel(os.path.join(MODEL_DIR, MODEL_EXPORT_NAME), serving_input_fn)
tf_logging.set_verbosity(tf_logging.WARN)
Explanation: Train and validate the model, this time on TPU
End of explanation
# recognize digits from local fonts
# TPU REFACTORING: TPUEstimator.predict requires a 'params' in ints input_fn so that it can pass params['batch_size']
#predictions = estimator.predict(lambda: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
predictions = estimator.predict(lambda params: tf.data.Dataset.from_tensor_slices(font_digits).batch(N),
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_font_classes = next(predictions)['classes']
display_digits(font_digits, predicted_font_classes, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
predictions = estimator.predict(validation_input_fn,
yield_single_examples=False) # the returned value is a generator that will yield one batch of predictions per next() call
predicted_labels = next(predictions)['classes']
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation
PROJECT = "" #@param {type:"string"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "estimator_mnist_tpu" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
#TPU REFACTORING: TPUEstimator does not create the 'export' subfolder
#export_path = os.path.join(MODEL_DIR, 'export', MODEL_EXPORT_NAME)
export_path = os.path.join(MODEL_DIR, MODEL_EXPORT_NAME)
last_export = sorted(tf.gfile.ListDirectory(export_path))[-1]
export_path = os.path.join(export_path, last_export)
print('Saved model directory found: ', export_path)
Explanation: Deploy the trained model to ML Engine
Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Configuration
End of explanation
# Create the model
if NEW_MODEL:
!gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
Explanation: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
End of explanation
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because that is what you defined in your serving_input_fn: {"serving_input": tf.placeholder(tf.float32, [None, 28, 28])}
f.write(data+'\n')
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
predictions = np.array([int(p.split('[')[0]) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
display_top_unrecognized(digits, predictions, labels, N, 100//N)
Explanation: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
command line tool but any tool that can send a JSON payload to a REST endpoint will work.
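As an extra illustration of that point, here is a minimal sketch of the same online prediction made from Python with the Google API client library rather than the gcloud CLI. The client call and the instance format ({"serving_input": ...}, chosen to match the serving_input_fn defined earlier) are assumptions added here, not part of the original notebook.
# Sketch: online prediction through the ML Engine REST API from Python.
# Assumes google-api-python-client is installed and application-default credentials are available.
from googleapiclient import discovery
service = discovery.build('ml', 'v1')
version_name = 'projects/{}/models/{}/versions/{}'.format(PROJECT, MODEL_NAME, MODEL_VERSION)
instances = [{"serving_input": digit.tolist()} for digit in digits[:5]]  # same payload format as digits.json
response = service.projects().predict(name=version_name, body={'instances': instances}).execute()
print([p['classes'] for p in response['predictions']])  # 'classes' assumed to be an output key, cf. the gcloud output parsing above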
End of explanation |
5,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 09a
Step1: Next, let's load the data. The iris data set is included in scikit-learn's datasets submodule, so we can just load it directly like this
Step2: Exploratory data analysis
Let's start by making a scatter plot matrix of our data. We can colour the individual scatter points according to their true class labels by passing c=iris.target to the function, like this
Step3: The colours of the data points here are our ground truth, that is the actual labels of the data. Generally, when we cluster data, we don't know the ground truth, but in this instance it will help us to assess how well $K$-means clustering segments the data into its true categories.
K-means clustering
Let's build a $K$-means clustering model of the iris data. scikit-learn supports $K$-means clustering functionality via the cluster subpackage. We can use the KMeans class to build our model.
3 clusters
Generally, we won't know in advance how many clusters to use but, as we do in this instance, let's start by splitting the data into three clusters. We can run $K$-means clustering with scikit-learn using the KMeans class. We can specify n_clusters=3 to find three clusters, like this
Step4: Note
Step5: We can check the results of our clustering visually by building another scatter plot matrix, this time colouring the points according to the cluster labels
Step6: As can be seen, the $K$-means algorithm has partitioned the data into three distinct sets, using just the values of petal length, petal width, sepal length and sepal width. The clusters do not precisely correspond to the true class labels plotted earlier but, as we usually perform clustering in situations where we don't know the true class labels, this seems like a reasonable attempt.
Other numbers of clusters
We can cluster the data into arbitrarily many clusters (up to the point where each sample is its own cluster). Let's cluster the data into two clusters and see what effect this has
Step7: Finding the optimum number of clusters
One way to find the optimum number of clusters is to plot the variation in total inertia with increasing numbers of clusters. Because the total inertia decreases as the number of clusters increases, we can determine a reasonable, but possibly not true, clustering of the data by finding the "elbow" in the curve, which occurs as a result of the diminishing returns from adding further clusters.
We can access the inertia value of a fitted $K$-means model using its inertia_ attribute, like this | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn import cluster
from sklearn import datasets
Explanation: Lab 09a: K-means clustering
Introduction
This lab focuses on $K$-means clustering using the Iris flower data set. At the end of the lab, you should be able to:
Create a $K$-means clustering model for various cluster sizes.
Estimate the right number of clusters to choose by plotting the total inertia of the clusters and finding the "elbow" of the curve.
Getting started
Let's start by importing the packages we'll need. As usual, we'll import pandas for exploratory analysis, but this week we're also going to use the cluster subpackage from scikit-learn to create $K$-means models and the datasets subpackage to access the Iris data set.
End of explanation
iris = datasets.load_iris()
X = pd.DataFrame({k: v for k, v in zip(iris.feature_names, iris.data.T)}) # Convert the raw data to a data frame
X.head()
Explanation: Next, let's load the data. The iris data set is included in scikit-learn's datasets submodule, so we can just load it directly like this:
End of explanation
pd.plotting.scatter_matrix(X, c=iris.target, figsize=(9, 9));
Explanation: Exploratory data analysis
Let's start by making a scatter plot matrix of our data. We can colour the individual scatter points according to their true class labels by passing c=iris.target to the function, like this:
End of explanation
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(X)
Explanation: The colours of the data points here are our ground truth, that is the actual labels of the data. Generally, when we cluster data, we don't know the ground truth, but in this instance it will help us to assess how well $K$-means clustering segments the data into its true categories.
K-means clustering
Let's build a $K$-means clustering model of the iris data. scikit-learn supports $K$-means clustering functionality via the cluster subpackage. We can use the KMeans class to build our model.
3 clusters
Generally, we won't know in advance how many clusters to use but, as we do in this instance, let's start by splitting the data into three clusters. We can run $K$-means clustering with scikit-learn using the KMeans class. We can specify n_clusters=3 to find three clusters, like this:
End of explanation
labels = k_means.predict(X)
print(labels)
Explanation: Note: In previous weeks, we have called fit(X, y) when fitting scikit-learn estimators. However, in each of these cases, we were fitting supervised learning models where y represented the true class labels of the data. This week, we're fitting $K$-means clustering models, which are unsupervised learners, and so there is no need to specify the true class labels (i.e. y).
When we call the predict method on our fitted estimator, it predicts the class labels for each record in our explanatory data matrix (i.e. X):
End of explanation
pd.plotting.scatter_matrix(X, c=labels, figsize=(9, 9));
Explanation: We can check the results of our clustering visually by building another scatter plot matrix, this time colouring the points according to the cluster labels:
End of explanation
k_means = cluster.KMeans(n_clusters=2)
k_means.fit(X)
labels = k_means.predict(X)
pd.plotting.scatter_matrix(X, c=labels, figsize=(9, 9));
Explanation: As can be seen, the $K$-means algorithm has partitioned the data into three distinct sets, using just the values of petal length, petal width, sepal length and sepal width. The clusters do not precisely correspond to the true class labels plotted earlier but, as we usually perform clustering in situations where we don't know the true class labels, this seems like a reasonable attempt.
Other numbers of clusters
We can cluster the data into arbitrarily many clusters (up to the point where each sample is its own cluster). Let's cluster the data into two clusters and see what effect this has:
End of explanation
clusters = range(1, 10)
inertia = []
for n in clusters:
k_means = cluster.KMeans(n_clusters=n)
k_means.fit(X)
inertia.append(k_means.inertia_)
plt.plot(clusters, inertia)
plt.xlabel("Number of clusters")
plt.ylabel("Inertia");
Explanation: Finding the optimum number of clusters
One way to find the optimum number of clusters is to plot the variation in total inertia with increasing numbers of clusters. Because the total inertia decreases as the number of clusters increases, we can determine a reasonable, but possibly not true, clustering of the data by finding the "elbow" in the curve, which occurs as a result of the diminishing returns from adding further clusters.
We can access the inertia value of a fitted $K$-means model using its inertia_ attribute, like this:
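If you want a quick numerical cross-check of where the elbow sits, one rough heuristic (an addition here, not part of the original lab) is to pick the cluster count at which the curve bends most sharply, i.e. where the second difference of the inertia values is largest:
# Heuristic sketch: locate the sharpest bend in the inertia curve.
# Assumes the clusters and inertia lists computed in the loop above.
second_diff = np.diff(inertia, n=2)
suggested_k = clusters[int(np.argmax(second_diff)) + 1]
print("Suggested number of clusters:", suggested_k)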
End of explanation |
5,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$\hat R$ locker
This notebook serves as a sandbox to understand the potential of the nested-$\hat R$ diagnostic. The underlying idea is to gather short chains into long "super chains" and then check that the super chains are mixing. We'll motivate this idea and work out details, benefits and limitations.
Copyright 2021 Google LLC.
Step2: Set up problem
Step3: Run MCMC
We consider two regimes
Step4: Analyze results
Squared error for Monte Carlo estimate of the mean and variance
Step6: As a first step, plot the squared error based on a Monte Carlo estimator that discards the first half of the samples, and doesn't discriminate between warmup and sampling iterations. We also plot the target precision with the "true" variance -- when available, for instance via the inference gym -- divided by 100. This is the precision we expect our Monte Carlo estimates to reach with an effective sample size of 100.
Step7: I don't think the variance of the variance is stored in the inference gym, although it's probably possible to access this information using the error in the variance estimate. For now, we'll use the final result reported by the long chain as the target precision.
Step9: Repeat the above, using a Monte Carlo estimator based on sampling iterations with only the warmup samples discarded.
Step10: Remark
Step11: Staring at the plot above, it's clear that the short regime reaches a reasonable precision in fewer iterations than the long regime, even though the long regime warms up chains for many more iterations. The dotted line represents the Monte Carlo estimate using all the samples from the long regime. We'll use this as our target precision.
Step12: Check for convergence
Let's first examine whether we're past the transient bias regime (we should be since we're discarding the warmup phase).
Step13: Making due diligence, let's look at the samples returned by both methods, after discarding the warmup iterations.
Step14: Let's compute $\hat R$ as a function of iteration and pay attention to how quickly $\hat R$ goes to 1 in both regimes.
Step15: Remark
Step16: (Banana example) As expected, $\hat R$ decreases with the number of iterations per chain, although crucially not with the total number of samples! As one might suspect, the short regime produces a less noisy estimate of $\hat R$. To be more precise, we expect $\hat R$ to decrease with the effective sample size per chain. Since the long regime benefits from a longer warmup, the effective sample size per iteration should be better, although it might not make a difference in this example.
Crucially, $\hat R$ as a convergence diagnostic isn't sensitive to the fact we are running many chains (although the estimator does become less noisy...).
Step17: Proposition
Step19: Nested $\hat R$
To remedy the identified issue, we propose to pool chains together in the short regime, thereby building super-chains, and then checking that the super chains are mixing.
We index each sample by $n$ the iteration, $m$ the chain, and $k$ the cluster of chains, and write $\theta^{(n, m, k)}$. The within-chain variance is estimated by
$$
s^2_{km} = \frac{1}{N - 1} \sum_{n = 1}^N \left (\theta^{(nmk)} - \bar \theta^{(.mk)} \right)^2.
$$
Next the between-chain variance, or within super chain variance is
\begin{eqnarray}
s^2_{k.} & = & \frac{1}{M - 1} \sum_{m = 1}^M \left (\bar \theta^{(.mk)} - \bar \theta^{(..k)} \right)^2,
\end{eqnarray}
and the total variance for a super chain is
\begin{eqnarray}
S^2_k & = & \frac{1}{M - 1} \sum_{m = 1}^M \left (\bar \theta^{(.mk)} - \bar \theta^{(..k)} \right)^2 + \frac{1}{M (N - 1)} \sum_{m = 1}^M \sum_{n = 1}^N \left (\theta^{(nmk)} - \bar \theta^{(.mk)} \right)^2 \\
& = & s^2_{k.} + \frac{1}{M} \sum_{m = 1}^M s^2_{km}
\end{eqnarray}
Notice that this calculation accounts for the fact the super-chain is made up of multiple chains.
Finally the within-super-chain variance is estimated as
$$
W = \frac{1}{K} \sum_{k = 1}^K S^2_k.
$$
Now it remains to compute the between super-chain variance
$$
B = \frac{1}{K - 1} \sum_{k = 1}^K \left (\bar \theta^{(..k)} - \bar \theta^{(...)} \right)^2,
$$
yielding an estimate of the posterior variance
$$
\widehat{\mathrm{var}}^+(\theta) = B + W,
$$
which very much looks like the posterior variance estimate used in the long regime, except that I've been a bit more consistent about making the estimator unbiased. We then compute
$$
\hat R = \sqrt{\frac{\widehat{\mathrm{var}}^+(\theta)}{W}}.
$$
Remark. The $\theta$ can be replaced by the rank-normalized $z$ as prescribed by Vehtari et al 2020.
Implementation of nested-$\hat R$ using TensorFlow.
Step20: CASE 1 (sanity check)
Step21: CASE 2
Step22: Effective sample size
We'll now compute the effective sample size. We might in fact expect the classic diagnostic to work relatively well.
Step23: Adaptive warmup length
Playing around a little, we find that once the algorithm is properly warmed up, the short regime can reach good precision in very few iterations. The primary limitation hence becomes the warm up time.
Proper warmup means (i) we've overcomed the transient bias and have already moved across the "typical set" -- it isn't enough to be in the "typical set" if where we are is determined by our starting point -- and (ii) our algorithm tuned well-enough such that it can explore every part of the parameter space in a reasonable time and has a relatively short relaxation time. The first item is essential to both sampling regimes, though intuitively, it seems we might be able to compromise on the second item in the short regime.
In many cases, the number of warmup samples is determined ahead of time when calling the algorithm. Ideally we'd stop the warmup once we have suitable tuning parameters and then move to the sampling phase. Zhang et al (2020) propose to run warmups over short windows of $w = 100$ iterations and compute $\hat R$ and the ESS at the end of each of window to check if we should continue warming up. Once both diagnostic estimates are passed a certain threshold, the warmup ends and the sampling begins. In theory, this scheme can be adapted to the short regime by replacing $\hat R$ with the nested $\hat R$.
My guess is that by using nested $\hat R$ and the classic ESS (computed using many independent chains) we'll implicitly compromise on item (ii) -- so a priori, the described warmup method requires little adjustment.
Step24: Experiment with window size
The code below returns the length of the warmup phase, simulated accross several seeds. This can give us a sense of how long the warmup phase is on average for different seeds. Be mindful that when using too many seeds with a lot of chains, the GPU can run out of memory. The motivation is to check how stable the warmup strategy is when using different window sizes.
Step25: Results for the Banana problem
Applying the code above for the banana problem with
target_rhat = 1.01
use_nested_rhat = True
use_log_joint = False
we estimate the length of the warmup phase for different window sizes | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import tf first to enable eager mode.
import tensorflow as tf
tf.executing_eagerly()
# TODO (charlesm93): check which of these actually need to be imported.
import numpy as np
from matplotlib.pyplot import *
# %config InlineBackend.figure_format = 'retina'
# matplotlib.pyplot.style.use("dark_background")
import jax
from jax import random
from jax import numpy as jnp
from colabtools import adhoc_import
from inference_gym import using_jax as gym
from tensorflow_probability.spinoffs.fun_mc import using_jax as fun_mcmc
# import tensorflow as tf
from tensorflow_probability.python.internal import prefer_static as ps
from tensorflow_probability.python.internal import unnest
import tensorflow_probability as _tfp
tfp = _tfp.substrates.jax
tfd = tfp.distributions
tfb = tfp.bijectors
tfp_np = _tfp.substrates.numpy
tfd_np = tfp_np.distributions
# set font size for matplot lib
font = {'family' : 'normal',
'weight' : 'bold',
'size' : 14}
matplotlib.rc('font', **font)
tf.executing_eagerly()
Explanation: $\hat R$ locker
This notebook serves as a sandbox to understand the potential of the nested-$\hat R$ diagnostic. The underlying idea is to gather short chains into long "super chains" and then check that the super chains are mixing. We'll motivate this idea and work out details, benefits and limitations.
Copyright 2021 Google LLC.
End of explanation
# options: Bananas, GermanCredit, Brownian
problem_name = 'Bananas'
if (problem_name == 'Bananas'):
target = gym.targets.VectorModel(gym.targets.Banana(),
flatten_sample_transformations=True)
num_dimensions = target.event_shape[0]
init_step_size = 1.
if (problem_name == 'GermanCredit'):
# This problem seems to require that we load TF datasets first.
import tensorflow_datasets
target = gym.targets.VectorModel(gym.targets.GermanCreditNumericSparseLogisticRegression(),
flatten_sample_transformations=True)
num_dimensions = target.event_shape[0]
init_step_size = 0.02
if (problem_name == 'Brownian'):
target = gym.targets.BrownianMotionMissingMiddleObservations()
target = gym.targets.VectorModel(target,
flatten_sample_transformations = True)
num_dimensions = target.event_shape[0]
init_step_size = 0.01
def target_log_prob_fn(x):
"""Unnormalized, unconstrained target density.
This is a thin wrapper that applies the default bijectors so that we can
ignore any constraints."""
y = target.default_event_space_bijector(x)
fldj = target.default_event_space_bijector.forward_log_det_jacobian(x)
return target.unnormalized_log_prob(y) + fldj
# NOTE: use a large factor to get overdispered initializations.
# NOTE: don't set offset to 0 when the target mean is 0.
# CHECK: what scale should we use? Poor inits can make the problem much more
# difficult.
# NOTE: we probably want inits that allow us to get decent estimates
# in the long regime
# if (problem_name == 'Bananas'):
if (problem_name == 'Bananas'):
offset = 2
def initialize (shape, key = random.PRNGKey(37272709)):
return 3 * random.normal(key, shape + (num_dimensions,)) + offset
if (problem_name == 'GermanCredit'):
offset = 0.1
def initialize (shape, key = random.PRNGKey(37272709)):
return 0.5 * random.normal(key, shape + (num_dimensions,)) + offset
# offset = 0.5
# def initialize (shape, key = random.PRNGKey(37272709)):
# return 0.01 * random.normal(key, shape + (num_dimensions,)) + offset
Explanation: Set up problem
End of explanation
# Transition kernel for long regime
num_chains_long = 4
if (problem_name == 'GermanCredit'):
num_warmup_long, num_sampling_long = 500, 1000
if (problem_name == 'Bananas'):
num_warmup_long, num_sampling_long = 200, 1000
total_samples_long = num_warmup_long + num_sampling_long
# CHECK: is this the transition kernel we want to use?
# REMARK: the step size is picked based on the model we're fitting
if (problem_name == 'Bananas' or problem_name == 'GermanCredit'):
kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_long, num_warmup_long)
kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_long, num_warmup_long, target_accept_prob = 0.75,
reduce_fn=tfp.math.reduce_log_harmonic_mean_exp)
# Follow the inference gym tutorial
# NOTE: transition kernel below is untested.
if (problem_name == 'Brownian'):
kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
# Adapt step size.
kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_long, num_warmup_long, # int(num_samples // 2 * 0.8),
target_accept_prob = 0.9)
# Adapt trajectory length.
kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(
kernel_long,
num_adaptation_steps = num_warmup_long) # int(num_steps // 2 * 0.8))
# TODO: work out what an appropriate transition kernel for this problem would be.
# if (problem_name == 'GermanCredit'):
# kernel_long = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
# kernel_long = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_long, num_warmup_long)
# kernel_long = tfp.mcmc.DualAveragingStepSizeAdaptation(
# kernel_long, num_warmup_long, target_accept_prob = 0.75,
# reduce_fn=tfp.math.reduce_log_harmonic_mean_exp)
initial_state = initialize((num_chains_long,))
# initial_state = initialize((num_chains_long,))
result_long = tfp.mcmc.sample_chain(
total_samples_long, initial_state, kernel = kernel_long, seed = random.PRNGKey(1954))
# Transition kernel for short regime
# CHECK: how many warmup iterations should we use here?
# Suggested options: 512, 1024, 2048, 2500
num_chains_short = 512
num_super_chains = 4
if (problem_name == 'GermanCredit'):
num_warmup_short, num_sampling_short = 1000, 1000
if (problem_name == 'Bananas'):
num_warmup_short, num_sampling_short = 100, 1000 # 100, 1000
total_samples_short = num_warmup_short + num_sampling_short
if (problem_name == 'Bananas' or problem_name == 'GermanCredit'):
kernel_short = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_short = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_short, num_warmup_short)
kernel_short = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_short, num_warmup_short, target_accept_prob = 0.75, #0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
different_location = False
if (different_location):
# initialize each chain at a different location
initial_state = initialize((num_chains_short,))
else:
# Chains within a super chain are all initialized at the same location
# Here we use the same initial points as in the long regime.
initial_state = initial_state # initialize((num_super_chains,))
initial_state = np.repeat(initial_state, num_chains_short // num_super_chains,
axis = 0)
result_short = tfp.mcmc.sample_chain(
total_samples_short, initial_state, kernel = kernel_short,
seed = random.PRNGKey(1954))
Explanation: Run MCMC
We consider two regimes: the "long" regime, in which a few chains are run for many warmup and sampling iterations, and the "short" regime, in which many chains are run for a few warmup and sampling iterations. Note that in the short regime we are willing to accept chains that are not warmed up (i.e. step size, trajectory length, and mass matrix adapted) as well as in the long regime, the hope being that the variance decreases enough because we're running many chains.
End of explanation
# Get some estimates of the mean and variance.
try:
mean_est = target.sample_transformations['identity'].ground_truth_mean
except:
print('no ground truth mean')
mean_est = (result.all_states[num_warmup:, :]).mean(0).mean(0)
try:
var_est = target.sample_transformations['identity'].ground_truth_standard_deviation**2
except:
print('no ground truth std dev')
var_est = ((result.all_states[num_warmup:, :]**2).mean(0).mean(0) -
mean_est**2)
jnp.linalg.norm(var_est[0] / 100)
Explanation: Analyze results
Squared error for Monte Carlo estimate of the mean and variance
End of explanation
# Map MCMC samples from the unconstrained space to the original space
# CHECK: does this mess up the banana example?
result_state_long = target.default_event_space_bijector(result_long.all_states)
result_state_short = target.default_event_space_bijector(result_short.all_states)
def mc_est(x, axis = 0):
"""Computes the running sample mean based on sampling iterations, with
warmup iterations discarded.
By default, we focus on the first parameter."""
# NOTE: why discard half of the samples?
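# In other words, entry k of the result is the mean of samples k+1..2k+1 along
# `axis`: the estimate we would report after 2(k+1) draws with the first half discarded.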
cum_x = np.cumsum(x, axis)
return ((cum_x[1::2] - cum_x[:cum_x.shape[0]//2]) /
np.arange(1, cum_x.shape[0] // 2 + 1).reshape([-1] + [1] * (len(cum_x.shape) - 1)))
long_error = mc_est(result_state_long.mean(1) - mean_est)
short_error = mc_est(result_state_short.mean(1) - mean_est)
true_var_available = True
if (true_var_available):
target_precision = jnp.linalg.norm(var_est[0] / 100)
else:
target_precision = jnp.linalg.norm(long_error[len(long_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_error, axis = -1), label = '4 chains')
semilogy(jnp.linalg.norm(short_error, axis = -1), label = '512 chains')
hlines(target_precision, 0, total_samples_long / 2,
linestyles = '--',
label = 'Target: Var / 100')
ylabel("Squared error for Mean estimate")
xlabel("Iterations (excluding warmup)")
legend(loc = 'best')
show()
Explanation: As a first step, plot the squared error based on a Monte Carlo estimator that discards the first half of the samples, and doesn't discriminate between warmup and sampling iterations. We also plot the target precision with the "true" variance -- when available, for instance via the inference gym -- divided by 100. This is the precision we expect our Monte Carlo estimates to reach with an effective sample size of 100.
End of explanation
long_var_error = mc_est(result_state_long.var(1)) - var_est
short_var_error = mc_est(result_state_short.var(1)) - var_est
long_var_estimate = jnp.linalg.norm(long_var_error[len(long_var_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_var_error, axis = -1), label = 'long')
semilogy(jnp.linalg.norm(short_var_error, axis = -1), label = 'short')
hlines(long_var_estimate, 0, total_samples_long / 2,
linestyles = '--',
label = 'long var estimate')
ylabel("Squared error for Variance estimate")
legend(loc = 'best')
show()
# NOTE: why are the estimates in the long regime so poor??
Explanation: I don't think the variance of the variance is stored in the inference gym, although it's probably possible to access this information using the error in the variance estimate. For now, we'll use the final result reported by the long chain as the target precision.
End of explanation
if (False):
print(result_state_long[num_warmup_long:, :, :].mean(0)[0][0])
print(result_state_short[num_warmup_short:, :, :].mean(0)[0][0])
print(mean_est[0])
print(long_error[len(long_error) - 1][0])
print(short_error[len(short_error) - 1][0])
result_state_long[num_warmup_long:, :, :].mean(1).shape
def mc_est_warm(x, axis = 0):
compute running average without discarding half of the samples.
return np.cumsum(x, axis) / np.arange(1, x.shape[0] + 1).reshape([-1] + [1] * (len(x.shape) - 1))
discard_warmup = True
if (discard_warmup):
long_error = mc_est_warm(result_state_long[num_warmup_long:, :, :].mean(1)) - mean_est
short_error = mc_est_warm(result_state_short[num_warmup_short:, :, :].mean(1)) - mean_est
else:
long_error = result_state_long[num_warmup_long:, :, :].mean(1) - mean_est
short_error = result_state_short[num_warmup_short:, :, :].mean(1) - mean_est
true_var_available = True
if (true_var_available):
target_precision = jnp.linalg.norm(var_est[0] / 100)
else:
target_precision = jnp.linalg.norm(long_error[len(long_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_error, axis = -1), label = '4 chains')
semilogy(jnp.linalg.norm(short_error, axis = -1), label = '512 chains')
hlines(target_precision, 0, num_sampling_long,
linestyles = '--',
label = 'target: var / 100')
ylabel("Squared error for Mean estimate")
xlabel("Sampling iterations (i.e. warmup excluded)")
legend(loc = 'best')
show()
Explanation: Repeat the above, using a Monte Carlo estimator based on sampling iterations with only the warmup samples discarded.
End of explanation
long_var_error = mc_est_warm(result_state_long[num_warmup_long:, :, :].var(1)) - var_est
short_var_error = mc_est_warm(result_state_short[num_warmup_short:, :, :].var(1)) - var_est
long_var_mc_estimate = jnp.linalg.norm(long_var_error[len(long_var_error) - 1], axis = -1)
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(long_var_error, axis = -1), label = 'long')
semilogy(jnp.linalg.norm(short_var_error, axis = -1), label = 'short')
hlines(long_var_mc_estimate, 0, num_sampling_long,
linestyles = '--',
label = 'long MC estimate')
ylabel("Squared error for Variance estimate")
legend(loc = 'best')
show()
Explanation: Remark: if after one iteration we are below the target precision, then we're probably running a warmup which is too long and / or running too many chains.
End of explanation
if (False):
print(long_mc_estimate)
print(jnp.linalg.norm(short_error, axis = -1)[0:10])
print(long_var_estimate)
print(jnp.linalg.norm(short_var_error, axis = -1)[0:10])
# Identify the number of iterations after which the short regime matches
# the precision of the long regime.
# TODO: find a better criterion
item_index = np.where(jnp.linalg.norm(short_error, axis = -1) <= target_precision)
target_iter_mean = item_index[0][0]
print("Reasonable precision for mean reached in", target_iter_mean + 1, "iteration(s).")
item_index = np.where(jnp.linalg.norm(short_var_error, axis = -1) <= long_var_estimate)
target_iter_var = item_index[0][0]
print("Reasonable precision for variance reached in", target_iter_var + 1, "iteration(s).")
Explanation: Staring at the plot above, it's clear that the short regime reaches a reasonable precision in fewer iterations than the long regime, even though the long regime warms up chains for many more iterations. The dotted line represents the Monte Carlo estimate using all the samples from the long regime. We'll use this as our target precision.
End of explanation
# Plot last-sample estimarors
figure(figsize = [6, 6])
semilogy(jnp.linalg.norm(result_state_long.mean(1) - var_est, axis=-1),
label='Long mean Error')
semilogy(jnp.linalg.norm(result_state_short.mean(1) - mean_est, axis=-1),
label='Short Mean Error')
hlines(jnp.sqrt(var_est.sum() / 100), 0, total_samples_long, label='Norm of Posterior Scales / 10')
legend(loc='best')
xlabel('Iteration')
ylabel('Norm of Error of Estimate')
title(target.name)
xlim([0, 200])
show()
# NOTE: Note sure what's going on here.
Explanation: Check for convergence
Let's first examine whether we're past the transient bias regime (we should be since we're discarding the warmup phase).
End of explanation
plot(result_long.all_states[num_warmup_long:, :, 0].flatten(),
result_long.all_states[num_warmup_long:, :, 1].flatten(), '.', alpha = 0.2)
title('Long regime')
show()
plot(result_long.all_states[num_warmup_long:total_samples_long, :10, 1])
show()
# NOTE: (for Banana problem) With 4 samples after warmup we already see samples spread
# out across the parameter space.
num_samples_plot = 4 # target_iter_mean
plot(result_short.all_states[num_warmup_short:num_samples_plot + num_warmup_short, :, 0].flatten(),
result_short.all_states[num_warmup_short:num_samples_plot + num_warmup_short, :, 1].flatten(), '.', alpha = 0.2)
title('Short regime')
show()
plot(result_short.all_states[num_warmup_short:100 + num_warmup_short, [10, 20, 100, 500, 1000], 1])
show()
# REMARK: the mixing for the banana problem is slow. This is obvious if we
# only plot the first few samples of each chain.
num_samples_plot = 4 # target_iter_mean
plot(result_short.all_states[num_warmup_short:, :, 0].flatten(),
result_short.all_states[num_warmup_short:, :, 1].flatten(), '.', alpha = 0.2)
title('Short regime')
show()
plot(result_short.all_states[:, [1, 200, 400, 600, 800, 1000], 1])
show()
Explanation: Doing our due diligence, let's look at the samples returned by both methods, after discarding the warmup iterations.
End of explanation
# NOTE: the warmup is not stored.
# NOTE: compute rhat for the samples on the original space, since these are
# the quantities of interest.
def compute_rhat(result_state, num_samples, num_warmup = 0):
return tfp.mcmc.potential_scale_reduction(result_state[num_warmup:num_warmup + num_samples + 1],
independent_chain_ndims = 1).T
# TODO: do this without a for loop
# WARNING: this cell takes a minute to run
# TODO: use a single variable num_sampling, instead of num_sampling_long and
# num_sampling_var.
rhat_long = np.array([])
rhat_short = np.array([])
range_iter = range(2, num_sampling_long, 10) # range(2, num_samples, 8)
# NOTE: depending on the problem, it can be interesting to look at both.
# However, to be consistent with earlier analysis, the warmup samples should
# be discarded.
discard_warmup = True
for i in range_iter:
if (discard_warmup):
discard_long = num_warmup_long
discard_short = num_warmup_short
else:
discard_long = 0
discard_short = 0
rhat_long = np.append(rhat_long,
compute_rhat(result_state_long, i, discard_long)[0, ])
rhat_short = np.append(rhat_short,
compute_rhat(result_state_short, i, discard_short)[0, ])
Explanation: Let's compute $\hat R$ as a function of iteration and pay attention to how quickly $\hat R$ goes to 1 in both regimes.
End of explanation
result_snip = result_state_long[num_warmup_long:num_warmup_long + 2]
tfp.mcmc.potential_scale_reduction(result_snip, independent_chain_ndims = 1).T
# Plot result
figure(figsize = [6, 6])
semilogy(np.array(range_iter), rhat_long - 1, label = '4 chains')
semilogy(np.array(range_iter), rhat_short - 1, label = '512 chains')
legend(loc = 'best')
xlim([0, 500])
ylabel("Rhat - 1")
show()
Explanation: Remark: the $\hat R$ estimate can be quite noisy, especially when computed with a small number of samples. One manifestation of this is the fact that $\hat R < 1$. In the German credit score model, $\hat R$ is as low as 0.6!! When this is the case, $\hat R$ will typically be large for other parameters. Hence, inspecting many parameters (presumably all of interest) can safeguard us against crying "victory" too early.
This type of noise can explain why the change in $\hat R$ isn't always quite monotone, sometimes with an increase at first, and then the expected decrease.
End of explanation
# Compare Rhat at the point where both methods have reached a comparable squared
# error.
# NOTE: not super reliable -- sometimes rhat is noisy and goes to 1 (or below)
# before jumping back up...
index = np.where(range_iter > target_iter_mean)[0][0]
print("Rhat for short regime after hitting target precision:", rhat_short[index])
print("Rhat for long regime after hitting target precision:", rhat_long[len(rhat_long) - 1])
Explanation: (Banana example) As expected, $\hat R$ decreases with the number of iterations per chain, although crucially not with the total number of samples! As one might suspect, the short regime produces a less noisy estimate of $\hat R$. To be more precise, we expect $\hat R$ to decrease with the effective sample size per chain. Since the long regime benefits from a longer warmup, the effective sample size per iteration should be better, although it might not make a difference in this example.
Crucially, $\hat R$ as a convergence diagnostic isn't sensitive to the fact we are running many chains (although the estimator does become less noisy...).
End of explanation
n_bootstrap_samples = 64 # 64
rhat_estimates = np.array([])
n_sampling_iter = max(range_iter[index], 2) # max(target_iter_mean, 2) # range_iter[index]
for i in range(1, n_bootstrap_samples):
choose_samples_randomly = True
if (choose_samples_randomly):
bootstrap_sample = np.random.choice(np.array(range(1, num_chains_short + 1)),
n_bootstrap_samples, replace = False)
# num_chains_short // 16, replace = False)
else:
bootstrap_sample = np.array(range(1 + (i - 1) * n_bootstrap_samples, i * n_bootstrap_samples))
# print(bootstrap_sample)
# print(result_state_short[:, bootstrap_sample, :].shape)
rhat_estimates = np.append(rhat_estimates,
compute_rhat(result_state_short[:, bootstrap_sample, :], n_sampling_iter, num_warmup_short)[0, ])
print("Mean rhat (short) = ", rhat_estimates.mean(), "+/-", rhat_estimates.std())
Explanation: Proposition: Concerned with how noisy $\hat R$ might be, let's use a bootstrap scheme to get a standard deviation on the estimator. The short regime should be amenable to this, since we can resample chains. Unfortunately, if we sample with replacement, we underestimate the between chain variance, because some of the chains are identical. One idea is to randomly sample a subset of the chains without replacement and compute $\hat R$.
This will overestimate the uncertainty in our calculations, since we have reduced the sample size.
End of explanation
# Remark: eager execution is disabled and would have to be enabled at the
# start of the program. I however suspect this would interfere with
# TensorFlow probability.
tf.executing_eagerly()
# Follow procedure described in source code for potential scale reduction.
# NOTE: some of the tf argument need to be adjusted (e.g. keepdims = False,
# instead of True). Not quite sure why.
# QUESTION: can these be accessed as internal functions of tf?
# TODO: following Pavel's example, rewrite this without using tf.
# TODO: add error message when the number of samples is less than 2.
# REMARK: this function doesn't seem to work, returns NaN.
# As a result, can only use _reduce_variance with biased = False.
def _axis_size(x, axis = None):
"""Get number of elements of `x` in `axis`, as type `x.dtype`."""
if axis is None:
return ps.cast(ps.size(x), x.dtype)
return ps.cast(
ps.reduce_prod(
ps.gather(ps.shape(x), axis)), x.dtype)
def _reduce_variance(x, axis=None, biased=True, keepdims=False):
with tf.name_scope('reduce_variance'):
x = tf.convert_to_tensor(x, name='x')
mean = tf.reduce_mean(x, axis=axis, keepdims=True)
biased_var = tf.reduce_mean(
tf.math.squared_difference(x, mean), axis=axis, keepdims=keepdims)
if biased:
return biased_var
n = _axis_size(x, axis)
return (n / (n - 1.)) * biased_var
def nested_rhat(result_state, num_super_chain):
used_samples = result_state.shape[0]
num_sub_chains = result_state.shape[1] // num_super_chain
num_dimensions = result_state.shape[2]
chain_states = result_state.reshape(used_samples, -1, num_sub_chains,
num_dimensions)
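# NOTE: this reshape assumes chains that belong to the same super chain are stored
# contiguously along the chain axis, which matches the np.repeat initialization used above.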
state = tf.convert_to_tensor(chain_states, name = 'state')
mean_chain = tf.reduce_mean(state, axis = 0)
mean_super_chain = tf.reduce_mean(state, axis = [0, 2])
variance_chain = _reduce_variance(state, axis = 0, biased = False)
variance_super_chain = _reduce_variance(mean_chain, axis = 1, biased = False) \
+ tf.reduce_mean(variance_chain, axis = 1)
W = tf.reduce_mean(variance_super_chain, axis = 0)
B = _reduce_variance(mean_super_chain, axis = 0, biased = False)
return tf.sqrt((W + B) / W)
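For cross-checking (and in the spirit of the TODO above about dropping the tf dependency), here is a minimal NumPy-only sketch of the same computation. The helper name nested_rhat_np and the shape conventions it assumes (samples x chains x dimensions, with chains grouped contiguously into super chains) are additions here, not part of the original notebook.
# Sketch: NumPy version of nested_rhat, following the same W / B formulas as above.
def nested_rhat_np(result_state, num_super_chains):
  num_samples, num_chains, num_dims = np.shape(result_state)
  num_sub_chains = num_chains // num_super_chains
  state = np.asarray(result_state).reshape(
      num_samples, num_super_chains, num_sub_chains, num_dims)
  mean_chain = state.mean(axis=0)               # per-chain means, shape (K, M, D)
  mean_super_chain = state.mean(axis=(0, 2))    # per-super-chain means, shape (K, D)
  variance_chain = state.var(axis=0, ddof=1)    # within-chain variances
  variance_super_chain = (mean_chain.var(axis=1, ddof=1)
                          + variance_chain.mean(axis=1))  # total variance per super chain
  W = variance_super_chain.mean(axis=0)         # mean within-super-chain variance
  B = mean_super_chain.var(axis=0, ddof=1)      # between-super-chain variance
  return np.sqrt((W + B) / W)
# Example cross-check against the TF version above (results should agree closely):
# nested_rhat_np(np.asarray(result_short.all_states[:4]), num_super_chains)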
Explanation: Nested $\hat R$
To remedy the identified issue, we propose to pool chains together in the short regime, thereby building super-chains, and then checking that the super chains are mixing.
We index each sample by $n$ the iteration, $m$ the chain, and $k$ the cluster of chains, and write $\theta^{(n, m, k)}$. The within-chain variance is estimated by
$$
s^2_{km} = \frac{1}{N - 1} \sum_{n = 1}^N \left (\theta^{(nmk)} - \bar \theta^{(.mk)} \right)^2.
$$
Next the between-chain variance, or within super chain variance is
\begin{eqnarray}
s^2_{k.} & = & \frac{1}{M - 1} \sum_{m = 1}^M \left (\bar \theta^{(.mk)} - \bar \theta^{(..k)} \right)^2,
\end{eqnarray}
and the total variance for a super chain is
\begin{eqnarray}
S^2_k & = & \frac{1}{M - 1} \sum_{m = 1}^M \left (\bar \theta^{(.mk)} - \bar \theta^{(..k)} \right)^2 + \frac{1}{M (N - 1)} \sum_{m = 1}^M \sum_{n = 1}^N \left (\theta^{(nmk)} - \bar \theta^{(.mk)} \right)^2 \\
& = & s^2_{k.} + \frac{1}{M} \sum_{m = 1}^M s^2_{km}
\end{eqnarray}
Notice that this calculation accounts for the fact the super-chain is made up of multiple chains.
Finally the within-super-chain variance is estimated as
$$
W = \frac{1}{K} \sum_{k = 1}^K S^2_k.
$$
Now it remains to compute the between super-chain variance
$$
B = \frac{1}{K - 1} \sum_{k = 1}^K \left (\bar \theta^{(..k)} - \bar \theta^{(...)} \right)^2,
$$
yielding an estimate of the posterior variance
$$
\widehat{\mathrm{var}}^+(\theta) = B + W,
$$
which very much looks like the posterior variance estimate used in the in the long regime, except that I've been a bit more consistent about making the estimator unbiased. We then compute
$$
\hat R = \sqrt{\frac{\widehat{\mathrm{var}}^+(\theta)}{W}}.
$$
Remark. The $\theta$ can be replaced by the rank-normalized $z$ as prescribed by Vehtari et al 2020.
Implementation of nested-$\hat R$ using TensorFlow.
End of explanation
# num_super_chains = 4
# super_chain_size = num_chains_short // num_super_chains # 250
used_samples = 4 # total_samples_long // super_chain_size # 4
result_state = result_short.all_states[0:used_samples, :, :]
print("short rhat: ", nested_rhat(result_state, num_super_chains))
Explanation: CASE 1 (sanity check): $\hat R$ after a few iterations
The super chains are such that they have the same number of samples as the chains in the long regime. Because of the slow mixing, 4 iterations per chain is not enough to overcome the transient bias and the nested Rhat is high, even though each super chain has many iterations. Note we're looking at the first warmup iterations.
End of explanation
result_state.shape
target_iter_mean
used_samples = max(target_iter_mean, 2)
result_state = result_short.all_states[num_warmup_short:num_warmup_short + used_samples, :, :]
print("short nested-rhat: ", nested_rhat(result_state, num_super_chains)[0])
print("short rhat: ", rhat_short[index])
print("long rhat: ", rhat_long[len(rhat_long) - 1])
print(range_iter)
# Let's find out how quickly nested-rhat compared to traditional rhat goes down.
nested_rhat_short = np.array([])
for i in range_iter:
nested_rhat_short = np.append(nested_rhat_short,
nested_rhat(result_short.all_states[num_warmup_short:num_warmup_short + i, :, :],
num_super_chains).numpy()[0])
figure(figsize = [6, 6])
semilogy(np.array(range_iter), rhat_long - 1, label = '$\hat R$, 4 chains')
semilogy(np.array(range_iter), rhat_short - 1, label = '$\hat R$, 512 chains')
semilogy(np.array(range_iter), nested_rhat_short - 1, label = '$n \hat R$, 512 chains')
legend(loc = 'best')
xlim([0, 1000])
ylabel("Rhat - 1")
xlabel("Post-warmup sampling iterations")
show()
threshold = 1.1
index_classic = np.where((rhat_short < threshold) & (rhat_short > 1.))
if (len(index_classic[0]) > 0):
print("Rhat =", threshold, "after",range_iter[index_classic[0][0]], "iterations.")
else:
print("Rhat doesn't hit the target threshold = ", threshold, ".")
index_short = np.where((nested_rhat_short < threshold) & (nested_rhat_short > 1.))
if (len(index_short[0]) > 0):
print("Nested Rhat =", threshold, "after", range_iter[index_short[0][0]], "iterations.")
else:
print("Nested Rhat doesn't hit the target threshold = ", threshold, ".")
threshold = 1.01
index_classic = np.where((rhat_short < threshold) & (rhat_short > 1.))
if (len(index_classic[0]) > 0):
print("Rhat =", threshold, "after",range_iter[index_classic[0][0]], "iterations.")
else:
print("Rhat doesn't hit the target threshold = ", threshold, ".")
index_short = np.where((nested_rhat_short < threshold) & (nested_rhat_short > 1.))
if (len(index_short[0]) > 0):
print("Nested Rhat =", threshold, "after", range_iter[index_short[0][0]], "iterations.")
else:
print("Nested Rhat doesn't hit the target threshold = ", threshold, ".")
Explanation: CASE 2: $\hat R$ after "enough" iterations
The number of iterations in each chain corresponds to the number of samples required by the short regime to match the precision for the mean attained by the long regime after 1000 sampling iterations (meaning we've discarded the warmup iterations). The diagnostic is quite happy, even though there are only two iterations per chain.
End of explanation
ess_long = np.sum(tfp.mcmc.effective_sample_size(
result_state_long[num_warmup_long:, : , :]), axis = 0)
ess_short = np.sum(tfp.mcmc.effective_sample_size(
result_state_short[num_warmup_short:, :, :]), axis = 0)
ess_short_target = np.sum(tfp.mcmc.effective_sample_size(
result_state_short[num_warmup_short:num_warmup_short + 3, :, :]), axis = 0)
# NOTE: it seems we need at least 3 samples to compute the ess estimate...
print("Ess long (discarding warmup): ", ess_long[0])
print("Ess short (discarding warmup): ", ess_short[0])
print("Ess short (when hitting target precision): ", ess_short_target[0])
Explanation: Effective sample size
We'll now compute the effective sample size. We might in fact expect the classic diagnostic to work relatively well.
End of explanation
# Define function to extract the adapted parameters
# (Follow what's done in the inference gym tutorial)
# REMARK: if we pass only initial step size, only one step size is adapted for
# the whole transition kernel (as opposed to one step size per chain).
# REMARK: we won't use this scheme. Instead, we'll pass the whole transition.
from tensorflow_probability.python.internal.unnest import get_innermost
# NOTE: presumably we're not going to use this, and instead get the full
# kernel result back.
def trace_fn(_, pkr):
return (
get_innermost(pkr, 'step_size'),
get_innermost(pkr, 'num_leapfrog_steps')
# get_innermost(pkr, 'max_trajectory_length')
)
def forge_chain (target_rhat, warmup_window_size, kernel_cold, initial_state,
max_num_steps, seed, monitor = False,
use_nested_rhat = True, use_log_joint = False,
num_super_chains = 4):
# store certain variables
rhat_forge = np.array([])
warmup_is_acceptable = False
store_results = []
warmup_iteration = 0
current_state = initial_state
final_kernel_args = None
while (not warmup_is_acceptable and warmup_iteration <= max_num_steps):
warmup_iteration += 1
# 1) Run MCMC on short warmup window
result_cold, target_log_prob, final_kernel_args = tfp.mcmc.sample_chain(
num_results = warmup_window_size,
current_state = current_state,
kernel = kernel_cold,
previous_kernel_results = final_kernel_args,
seed = seed,
trace_fn = lambda _, pkr: unnest.get_innermost(pkr, 'target_log_prob'),
return_final_kernel_results = True)
if (warmup_iteration == 1) :
store_results = result_cold
else :
store_results = np.append(store_results, result_cold, axis = 0)
current_state = result_cold[-1]
# 2) Check if warmup is acceptable
if (use_nested_rhat):
if (use_log_joint):
shape_lp = target_log_prob.shape
rhat_warmup = nested_rhat(target_log_prob.reshape(shape_lp[0], shape_lp[1], 1),
num_super_chains)
else:
rhat_warmup = max(nested_rhat(result_cold, num_super_chains))
else:
if (use_log_joint):
rhat_warmup = tfp.mcmc.potential_scale_reduction(target_log_prob)
else:
rhat_warmup = max(tfp.mcmc.potential_scale_reduction(result_cold))
# ess_warmup = np.sum(tfp.mcmc.effective_sample_size(result_cold), axis = 0)
# print(rhat_warmup)
if (rhat_warmup < target_rhat): warmup_is_acceptable = True
# if (max(rhat_warmup) < 1.01 and min(ess_warmup) > 100): warmup_is_acceptable = True
if (monitor):
print("step:", final_kernel_args.step)
# print("max rhat:", max(rhat_warmup))
# print("min ess warmup:" , min(ess_warmup))
# print("step size:", step_size)
# print("number of leapfrog steps:", num_leapfrog_steps)
save_values = True
if (save_values):
rhat_forge = np.append(rhat_forge, rhat_warmup)
# While loop ends
return store_results, final_kernel_args, rhat_forge
# Set up adaptive warmup scheme
warmup_window_size = 5
target_rhat = 1.01
target_ess = 100
max_num_steps = 1000 // warmup_window_size
current_state = initial_state
num_leapfrog_steps = 1
warmup_iteration = 0
kernel_seed = random.PRNGKey(1957)
used_nested_rhat = True
# define kernel using most recent step size
kernel_cold = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_cold = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_cold, warmup_window_size)
kernel_cold = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_cold, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
kernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, 0)
kernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_warm, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
result_cold, final_kernel_args, rhat_forge = \
forge_chain(target_rhat = target_rhat,
warmup_window_size = warmup_window_size,
kernel_cold = kernel_cold,
initial_state = initial_state,
max_num_steps = max_num_steps,
seed = random.PRNGKey(1954), monitor = False,
use_nested_rhat = True,
use_log_joint = True)
print("iterations:", len(rhat_forge) * warmup_window_size)
print(rhat_forge)
print(target_rhat)
# print(tfp.mcmc.potential_scale_reduction(result_cold[-50]))
# print(nested_rhat(result_short.all_states[num_warmup_short:num_warmup_short + 5, :, :], num_super_chains))
# Run sampling iterations
# def trace_fn(_, pkr):
# return (
# get_innermost(pkr, 'unnormalized_log_prob'))
current_state = result_cold[-1]
result_warm, target_log_prob, final_kernel_args_warm = tfp.mcmc.sample_chain(
num_results = 5,
current_state = current_state,
kernel = kernel_warm, # kernel_cold
previous_kernel_results = final_kernel_args,
seed = random.PRNGKey(100001),
return_final_kernel_results = True,
trace_fn = lambda _, pkr: unnest.get_innermost(pkr, 'target_log_prob'))
print(tfp.mcmc.potential_scale_reduction(target_log_prob))
# print(nested_rhat(target_log_prob, num_super_chains))
shape_lp = target_log_prob.shape
lp__ = target_log_prob.reshape(shape_lp[0], shape_lp[1], 1)
lp__.shape
print(nested_rhat(lp__, num_super_chains))
print(tfp.mcmc.potential_scale_reduction(result_warm))
nested_rhat(result_warm, num_super_chain = num_super_chains)
# options: result_cold[result_cold.shape[0] - 30:], result_state_short, result_warm, store_results
states_to_read = result_warm
print("mean estimate:", np.mean(states_to_read.mean(0), axis = 0))
print("variance estimate:", np.mean(states_to_read.var(1), axis = 0))
print(nested_rhat(states_to_read, num_super_chain = 4))
print(tfp.mcmc.potential_scale_reduction(states_to_read))
print(mean_est)
print(var_est)
# Check output of the last run
plot(result_warm[:, :, 0].flatten(),
result_warm[:, :, 1].flatten(), '.', alpha = 0.2)
title('Long regime')
show()
plot(result_warm[:, :30, 1])
show()
# Compare to output we get with uninterrupted run.
# (Examine the iterations before the warmup ends)
chain_state_short = result_short.all_states[num_warmup_short - 10:num_warmup_short - 10 + warmup_window_size, :, :]
plot(chain_state_short[:, :, 0].flatten(),
chain_state_short[:, :, 1].flatten(), '.', alpha = 0.2)
show()
plot(chain_state_short[:, :30, 1])
show()
Explanation: Adaptive warmup length
Playing around a little, we find that once the algorithm is properly warmed up, the short regime can reach good precision in very few iterations. The primary limitation hence becomes the warm up time.
Proper warmup means (i) we've overcome the transient bias and have already moved across the "typical set" -- it isn't enough to be in the "typical set" if where we are is determined by our starting point -- and (ii) our algorithm is tuned well enough that it can explore every part of the parameter space in a reasonable time and has a relatively short relaxation time. The first item is essential to both sampling regimes, though intuitively, it seems we might be able to compromise on the second item in the short regime.
In many cases, the number of warmup samples is determined ahead of time when calling the algorithm. Ideally we'd stop the warmup once we have suitable tuning parameters and then move to the sampling phase. Zhang et al (2020) propose to run warmups over short windows of $w = 100$ iterations and compute $\hat R$ and the ESS at the end of each of window to check if we should continue warming up. Once both diagnostic estimates are passed a certain threshold, the warmup ends and the sampling begins. In theory, this scheme can be adapted to the short regime by replacing $\hat R$ with the nested $\hat R$.
My guess is that by using nested $\hat R$ and the classic ESS (computed using many independent chains) we'll implicitly compromise on item (ii) -- so a priori, the described warmup method requires little adjustment.
End of explanation
target_rhat = 1.01
warmup_window_size = 30
max_num_steps = 1000 // warmup_window_size
iteration_after_warmup = np.array([])
for seed in jax.random.split(jax.random.PRNGKey(0), 10):
initial_state = initialize((num_super_chains,), key = seed)
initial_state = np.repeat(initial_state, num_chains_short // num_super_chains,
axis = 0)
result_cold, final_kernel_args, rhat_forge = \
forge_chain(target_rhat = target_rhat,
warmup_window_size = warmup_window_size,
kernel_cold = kernel_cold,
initial_state = initial_state,
max_num_steps = max_num_steps,
seed = seed, monitor = False,
use_nested_rhat = True,
use_log_joint = False)
iteration_after_warmup = np.append(iteration_after_warmup,
len(rhat_forge) * warmup_window_size)
# print(iteration_after_warmup)
print(rhat_forge)
print(iteration_after_warmup.mean())
print(iteration_after_warmup.std())
Explanation: Experiment with window size
The code below returns the length of the warmup phase, simulated across several seeds. This can give us a sense of how long the warmup phase is on average for different seeds. Be mindful that when using too many seeds with a lot of chains, the GPU can run out of memory. The motivation is to check how stable the warmup strategy is when using different window sizes.
End of explanation
result_cold, _, final_kernel_args = tfp.mcmc.sample_chain(
num_results = 100,
current_state = initial_state,
kernel = kernel_cold,
previous_kernel_results = None,
seed = random.PRNGKey(1954),
return_final_kernel_results = True)
result_warm, _, final_kernel_args = tfp.mcmc.sample_chain(
num_results = 50,
current_state = result_cold[-1],
kernel = kernel_warm,
previous_kernel_results = final_kernel_args,
seed = random.PRNGKey(1954),
return_final_kernel_results = True)
nested_rhat(result_warm[1:3], 4)
warmup_window_size = 200
current_state = initial_state
kernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, warmup_window_size)
kernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_warm, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
# result_warm, (step_size_saved, num_leapfrog_steps_saved) = tfp.mcmc.sample_chain(
# warmup_window_size, current_state, kernel = kernel_warm,
# seed = random.PRNGKey(1954), trace_fn = trace_fn)
result_warm, kernel_args, final_kernel_args = tfp.mcmc.sample_chain(
warmup_window_size, current_state, kernel = kernel_warm,
seed = random.PRNGKey(1954), return_final_kernel_results = True)
# step_size = step_size_saved[warmup_window_size - 1]
# current_state = result_warm[warmup_window_size - 1, :, :]
# num_leapfrog_steps = num_leapfrog_steps_saved[warmup_window_size - 1]
tfp.mcmc.potential_scale_reduction(result_warm[:, :, :])
# kernel_warm2 = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, step_size, num_leapfrog_steps)
# kernel_warm2 = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm2, warmup_window_size)
# kernel_warm2 = tfp.mcmc.DualAveragingStepSizeAdaptation(
# kernel_warm2, warmup_window_size, target_accept_prob = 0.75,
# reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
# result_warm2, (step_size_saved) = tfp.mcmc.sample_chain(
# warmup_window_size, current_state, kernel = kernel_warm2,
# seed = random.PRNGKey(1954), trace_fn = trace_fn)
result_warm2 = tfp.mcmc.sample_chain(
num_results = warmup_window_size,
kernel = kernel_warm,
current_state = current_state,
previous_kernel_results = final_kernel_args,
seed = random.PRNGKey(1953)
)
tfp.mcmc.potential_scale_reduction(result_warm2.all_states[:, :, :])
print(problem_name)
print(max(rhat_warmup))
print(min(ess_warmup))
# print(len(step_size))
# print(step_size[0][warmup_window_size - 1])
max(tfp.mcmc.potential_scale_reduction(result_warm))
# Define kernel for warmup windows (should be the same in the long and short regime)
warmup_window_size = 10
if (problem_name == 'Bananas' or problem_name == 'GermanCredit'):
kernel_warm_init = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, init_step_size, 1)
kernel_warm_init = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm_init, warmup_window_size)
kernel_warm_init = tfp.mcmc.DualAveragingStepSizeAdaptation(
kernel_warm_init, warmup_window_size, target_accept_prob = 0.75, #0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
result_warm, (step_size, max_trajectory_length) = tfp.mcmc.sample_chain(
warmup_window_size, initial_state, kernel = kernel_warm_init, seed = random.PRNGKey(1954),
trace_fn = trace_fn)
print(step_size[len(step_size) - 1])
print(max_trajectory_length[len(max_trajectory_length) - 1])
print(initial_state.shape)
print(result_warm[warmup_window_size - 1, :, :])
# To run next window, define a new transition kernel
# REMARK: the maximum trajectory length isn't, if my understanding is correct,
# a tuning parameter; rather something that gets calculated at each step. So
# there's no need to pass it on.
kernel_warm = tfp.mcmc.HamiltonianMonteCarlo(target_log_prob_fn, step_size[len(step_size) - 1], 1)
kernel_warm = tfp.experimental.mcmc.GradientBasedTrajectoryLengthAdaptation(kernel_warm, warmup_window_size)
kernel_warm = tfp.mcmc.DualAveragingStepSizeAdaptation(
    kernel_warm, warmup_window_size, target_accept_prob = 0.75,
reduce_fn = tfp.math.reduce_log_harmonic_mean_exp)
result_warm2, (step_size, max_trajectory_length) = tfp.mcmc.sample_chain(
warmup_window_size, initial_state, kernel = kernel_warm, seed = random.PRNGKey(1954),
trace_fn = trace_fn)
print(result_warm.shape)
print(step_size.shape)
print(max_trajectory_length.shape)
step_size[len(step_size) - 1]
# nested_rhat(result_short.all_states, num_super_chains)
## Sandbox
# Pool chains into super chains
# num_super_chains = 4 # num_chains_short // num_chains_long
# num_sub_chains = num_chains_short // num_super_chains
# used_samples = num_samples # 5 # 2 * target_iter_mean # target_iter_mean
# result_state = result_short.all_states[0:used_samples, :, :]
# chain_states = result_state.reshape(used_samples, num_sub_chains,
# -1, num_dimensions)
# independent_chains_ndims = 1
# sample_ndims = 1
# sample_axis = tf.range(0, sample_ndims)
# chain_axis
# used_samples = result_state.shape[0]
# num_sub_chains = result_state.shape[1] // num_super_chains
# num_dimensions = result_state.shape[2]
# chain_states = result_state.reshape(used_samples, -1, num_sub_chains,
# num_dimensions)
# state = tf.convert_to_tensor(chain_states, name = 'state')
# mean_chain = tf.reduce_mean(state, axis = 0)
# mean_super_chain = tf.reduce_mean(state, axis = [0, 2])
# variance_chain = _reduce_variance(state, axis = 0, biased = False)
# variance_super_chain = _reduce_variance(mean_chain, axis = 1, biased = False) \
# + tf.reduce_mean(variance_chain, axis = 1)
# W = tf.reduce_mean(variance_super_chain, axis = 0)
# B = _reduce_variance(mean_super_chain, axis = 0, biased = False)
# rhat = tf.sqrt((W + B) / W)
# print(rhat)
# print(mean_chain.shape)
# print(mean_super_chain.shape)
# print("mean_super_chain: ", mean_super_chain)
# print(variance_chain.shape)
# print(variance_super_chain.shape)
# print(state.shape) # (5, 250, 4, 2)
# print(result_state.shape) # (5, 1000, 2)
# # 'manually' compute the mean of each super chain.
# print(np.mean(result_state[:, 0:250, 0]))
# print(np.mean(result_state[:, 250:500, 0]))
# print(np.mean(result_state[:, 500:750, 0]))
# print(np.mean(result_state[:, 750:1000, 0]))
# # compute the means after reshaping the results. Get agreement!
# print(np.mean(chain_states[:, 0, :, 0]))
# print(np.mean(chain_states[:, 1, :, 0]))
# print(np.mean(chain_states[:, 2, :, 0]))
# print(np.mean(chain_states[:, 3, :, 0]))
# print(result_state[:, 250, 0])
# print(chain_states[:, 0, 1, 0])
# simple_chain = np.array([[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]])
# simple_chain.shape # (4, 6)
# chain_reshape = simple_chain.reshape(4, 2, -1)
# chain_reshape.shape # (4, 2, 3)
# np.mean(chain_reshape, axis = 0) # returns mean for each chain
# np.mean(chain_reshape[:, 0, :]) # 1
# np.mean(chain_reshape[:, 1, :]) # 4
# np.mean(simple_chain[:, 0:3]) # 1 -- columns 0, 1, 2 match the first reshaped group
# np.mean(simple_chain[:, 3:6]) # 4 -- columns 3, 4, 5 (the end index is exclusive, so 3:6 is correct)
# # simple_chain[:, 3:6]
# ## Sandbox
# tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x
# state = result_short.all_states[1:range_iter[index], :, :]
# n = state.shape[0]
# m = state.shape[1]
# sample_ndims = 1
# independent_chains_ndims = 1
# sample_axis = tf.range(0, sample_ndims) # CHECK
# chain_axis = 0
# sample_and_chain_axis = tf.range(0, sample_ndims + independent_chains_ndims) # CHECK
# with tf.name_scope('potential_scale_reduction_single_state'):
# state = tf.convert_to_tensor(state, name = 'state')
# # CHECK: do we need to define a tf scope?
# n_samples = tf.compat.dimension_value(state.shape[0])
# # n = _axis_size(state, sample_axis)
# # m = _axis_size(state, chain_axis)
# # NOTE: These lines prompt the error message once the session is run.
# # x = tf.reduce_mean(state, axis=sample_axis, keepdims=True)
# # x_tf = tf.convert_to_tensor(x, name = 'x')
# # n_tf = _axis_size(x_tf)
# b_div_n = _reduce_variance(
# tf.reduce_mean(state, axis = 0, keepdims = False),
# sample_and_chain_axis, # sample and chain axis
# biased = False
# )
# w = tf.reduce_mean(
# _reduce_variance(state, sample_axis, keepdims = False,
# biased = False),
# axis = sample_and_chain_axis
# )
# # TODO: work out n and m from the number of chains being passed.
# # n = target_iter_mean
# # m = num_chains
# sigma_2_plus = ((n - 1) / n) * w + b_div_n
# rhat = ((m + 1.) / m) * sigma_2_plus / w - (n - 1.) / (m * n)
# # Launch the graph in a session. (TensorFlow uses differed action,
# # so need to explicitly request evaluation)
# sess = tf.compat.v1.Session()
# print(sess.run(rhat))
Explanation: Results for the Banana problem
Applying the code above for the banana problem with
target_rhat = 1.01
use_nested_rhat = True
use_log_joint = False
we estimate the length of the warmup phase for different window sizes:
w = 10, length = 62 +/- 16.12
w = 15, length = 72 +/- 17.41
w = 20, length = 86 +/- 18
w = 30, length = 90 +/- 13.75
w = 60, length = 120 +/- 0.0
Taking into consideration the different granularities, we find the results to be fairly consistent with one another.
Let's go back to the original case where we use $\hat R$ and ESS as our stopping criterion. Given the approximate one-to-one map between $\hat R$ and ESS per chain, the two criteria are somewhat redundant, so I'll focus on $\hat R$. When picking the window size, we must contend with the following trade-off:
* if the window size is too short, we're unlikely to produce a large enough ESS per chain to hit the target $\hat R$, and this could mean a never-ending warmup phase, or one that only stops once we exceed a maximum number of steps.
* if the window size is too large, we may jump past the optimal point. It's also worth noting that the first window is unlikely to yield satisfactory results, because the initial estimates are overdispersed and biased.
The first item is largely mitigated by using nested-$\hat R$, since we're then less dependent on the ESS per chain. The second item could be addressed by using a path-finder to initialize the chains and/or by discarding some of the early iterations in a window when computing the diagnostics.
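As a rough sketch of that last idea (my own assumption, not something implemented above), the diagnostic could be computed on only the second half of each window, reusing the nested_rhat helper from the draft code:
half = warmup_window_size // 2
rhat_window = nested_rhat(result_warm[half:], num_super_chains)   # drop early, overdispersed draws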
One final remark is that using $\hat R$ on the log joint distribution yielded somewhat optimistic results. As Pavel puts it: "log_joint is a pretty bad metric. Generally, for convergence, you prefer to measure the least constrained directions, and log_joint is typically not that."
Draft Code
End of explanation |
5,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Identifying safe loans with decision trees
The LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors. In this notebook, you will build a classification model to predict whether or not a loan provided by LendingClub is likely to [default](https
Step1: Load LendingClub dataset
We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is available here. Make sure you download the dataset before running the following command.
Step2: Exploring some features
Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset.
Step3: Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset.
Step4: We can see that over half of the loan grades are assigned values B or C. Each loan is assigned one of these grades, along with a more finely discretized feature called subgrade (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found here.
Now, let's look at a different feature.
Step5: This feature describes whether the loanee is mortgaging, renting, or owns a home. We can see that a small percentage of the loanees own a home.
Exploring the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column, 1 means a risky (bad) loan and 0 means a safe loan.
In order to make this more intuitive and consistent with the lectures, we reassign the target to be
Step6: Now, let us explore the distribution of the column safe_loans. This gives us a sense of how many safe and risky loans are present in the dataset.
Step7: You should have
Step8: What remains now is a subset of features and the target that we will use for the rest of this notebook.
Sample data to balance classes
As we explored above, our data is disproportionally full of safe loans. Let's create two datasets
Step9: Now, write some code to compute below the percentage of safe and risky loans in the dataset and validate these numbers against what was given using .show earlier in the assignment
Step10: One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed=1 so everyone gets the same results.
Step11: Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%.
Step12: Note
Step13: Use decision tree to build a classifier
Now, let's use the built-in GraphLab Create decision tree learner to create a loan prediction model on the training data. (In the next assignment, you will implement your own decision tree learning algorithm.) Our feature columns and target column have already been decided above. Use validation_set=None to get the same results as everyone else.
Step14: Visualizing a learned model
As noted in the documentation, typically the max depth of the tree is capped at 6. However, such a tree can be hard to visualize graphically. Here, we instead learn a smaller model with a max depth of 2 to gain some intuition by visualizing the learned tree.
Step15: In the view that is provided by GraphLab Create, you can see each node, and each split at each node. This visualization is great for considering what happens when this model predicts the target of a new data point.
Note
Step16: Making predictions
Let's consider two positive and two negative examples from the validation set and see what the model predicts. We will do the following
Step17: Explore label predictions
Now, we will use our model to predict whether or not a loan is likely to default. For each row in the sample_validation_data, use the decision_tree_model to predict whether or not the loan is classified as a safe loan.
Hint
Step18: Let's visualize the small tree here to do the traversing for this data point.
Step19: Note
Step20: Checkpoint
Step21: Now, let us evaluate big_model on the training set and validation set.
Step22: Checkpoint | Python Code:
import graphlab
graphlab.canvas.set_target('ipynb')
Explanation: Identifying safe loans with decision trees
The LendingClub is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors. In this notebook, you will build a classification model to predict whether or not a loan provided by LendingClub is likely to [default](https://en.wikipedia.org/wiki/Default_(finance).
In this notebook you will use data from the LendingClub to predict whether a loan will be paid off in full or the loan will be charged off and possibly go into default. In this assignment you will:
Use SFrames to do some feature engineering.
Train a decision-tree on the LendingClub dataset.
Visualize the tree.
Predict whether a loan will default along with prediction probabilities (on a validation set).
Train a complex tree model and compare it to simple tree model.
Let's get started!
Fire up Graphlab Create
Make sure you have the latest version of GraphLab Create. If you don't find the decision tree module, then you would need to upgrade GraphLab Create using
pip install graphlab-create --upgrade
End of explanation
loans = graphlab.SFrame('lending-club-data.gl/')
Explanation: Load LendingClub dataset
We will be using a dataset from the LendingClub. A parsed and cleaned form of the dataset is available here. Make sure you download the dataset before running the following command.
End of explanation
loans.column_names()
Explanation: Exploring some features
Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset.
End of explanation
loans['grade'].show()
Explanation: Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset.
End of explanation
loans['home_ownership'].show()
Explanation: We can see that over half of the loan grades are assigned values B or C. Each loan is assigned one of these grades, along with a more finely discretized feature called subgrade (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found here.
Now, let's look at a different feature.
End of explanation
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
Explanation: This feature describes whether the loanee is mortgaging, renting, or owns a home. We can see that a small percentage of the loanees own a home.
Exploring the target column
The target column (label column) of the dataset that we are interested in is called bad_loans. In this column, 1 means a risky (bad) loan and 0 means a safe loan.
In order to make this more intuitive and consistent with the lectures, we reassign the target to be:
* +1 as a safe loan,
* -1 as a risky (bad) loan.
We put this in a new column called safe_loans.
End of explanation
loans['safe_loans'].show(view = 'Categorical')
Explanation: Now, let us explore the distribution of the column safe_loans. This gives us a sense of how many safe and risky loans are present in the dataset.
End of explanation
features = ['grade', # grade of the loan
'sub_grade', # sub-grade of the loan
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'term', # the term of the loan
'last_delinq_none', # has borrower had a delinquincy
'last_major_derog_none', # has borrower had 90 day or worse rating
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to day
]
target = 'safe_loans' # prediction target (y) (+1 means safe, -1 is risky)
# Extract the feature columns and target column
loans = loans[features + [target]]
Explanation: You should have:
* Around 81% safe loans
* Around 19% risky loans
It looks like most of these loans are safe loans (thankfully). But this does make our problem of identifying risky loans challenging.
Features for the classification algorithm
In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are described in the code comments below. If you are a finance geek, the LendingClub website has a lot more details about these features.
End of explanation
safe_loans_raw = loans[loans[target] == +1]
risky_loans_raw = loans[loans[target] == -1]
print "Number of safe loans : %s" % len(safe_loans_raw)
print "Number of risky loans : %s" % len(risky_loans_raw)
Explanation: What remains now is a subset of features and the target that we will use for the rest of this notebook.
Sample data to balance classes
As we explored above, our data is disproportionally full of safe loans. Let's create two datasets: one with just the safe loans (safe_loans_raw) and one with just the risky loans (risky_loans_raw).
End of explanation
print "Percentage of safe loans :",
print "Percentage of risky loans :",
Explanation: Now, write some code to compute below the percentage of safe and risky loans in the dataset and validate these numbers against what was given using .show earlier in the assignment:
End of explanation
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
# Append the risky_loans with the downsampled version of safe_loans
loans_data = risky_loans.append(safe_loans)
Explanation: One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used seed=1 so everyone gets the same results.
End of explanation
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
Explanation: Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%.
End of explanation
train_data, validation_data = loans_data.random_split(.8, seed=1)
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Split data into training and validation sets
We split the data into training and validation sets using an 80/20 split and specifying seed=1 so everyone gets the same results.
Note: In previous assignments, we have called this a train-test split. However, the portion of data that we don't train on will be used to help select model parameters (this is known as model selection). Thus, this portion of data should be called a validation set. Recall that examining performance of various potential models (i.e. models with different parameters) should be on validation set, while evaluation of the final selected model should always be on test data. Typically, we would also save a portion of the data (a real test set) to test our final model on or use cross-validation on the training set to select our final model. But for the learning purposes of this assignment, we won't do that.
End of explanation
decision_tree_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features)
Explanation: Use decision tree to build a classifier
Now, let's use the built-in GraphLab Create decision tree learner to create a loan prediction model on the training data. (In the next assignment, you will implement your own decision tree learning algorithm.) Our feature columns and target column have already been decided above. Use validation_set=None to get the same results as everyone else.
End of explanation
small_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 2)
Explanation: Visualizing a learned model
As noted in the documentation, typically the max depth of the tree is capped at 6. However, such a tree can be hard to visualize graphically. Here, we instead learn a smaller model with a max depth of 2 to gain some intuition by visualizing the learned tree.
End of explanation
small_model.show(view="Tree")
Explanation: In the view that is provided by GraphLab Create, you can see each node, and each split at each node. This visualization is great for considering what happens when this model predicts the target of a new data point.
Note: To better understand this visual:
* The root node is represented using pink.
* Intermediate nodes are in green.
* Leaf nodes in blue and orange.
End of explanation
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
Explanation: Making predictions
Let's consider two positive and two negative examples from the validation set and see what the model predicts. We will do the following:
* Predict whether or not a loan is safe.
* Predict the probability that a loan is safe.
End of explanation
sample_validation_data[1]
Explanation: Explore label predictions
Now, we will use our model to predict whether or not a loan is likely to default. For each row in the sample_validation_data, use the decision_tree_model to predict whether or not the loan is classified as a safe loan.
Hint: Be sure to use the .predict() method.
Quiz Question: What percentage of the predictions on sample_validation_data did decision_tree_model get correct?
Explore probability predictions
For each row in the sample_validation_data, what is the probability (according decision_tree_model) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using decision_tree_model on sample_validation_data:
Quiz Question: Which loan has the highest probability of being classified as a safe loan?
Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1?
Tricky predictions!
Now, we will explore something pretty interesting. For each row in the sample_validation_data, what is the probability (according to small_model) of a loan being classified as safe?
Hint: Set output_type='probability' to make probability predictions using small_model on sample_validation_data:
Quiz Question: Notice that the probability predictions are the exact same for the 2nd and 3rd loans, i.e. 0.472267584643798. Why would this happen?
Visualize the prediction on a tree
Note that you should be able to look at the small tree, traverse it yourself, and visualize the prediction being made. Consider the following point in the sample_validation_data
End of explanation
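A hedged sketch of the calls the questions above are after, following the hints (the graded answers themselves are not shown here):
print decision_tree_model.predict(sample_validation_data)
print decision_tree_model.predict(sample_validation_data, output_type='probability')
print small_model.predict(sample_validation_data, output_type='probability')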
small_model.show(view="Tree")
Explanation: Let's visualize the small tree here to do the traversing for this data point.
End of explanation
print small_model.evaluate(train_data)['accuracy']
print decision_tree_model.evaluate(train_data)['accuracy']
Explanation: Note: In the tree visualization above, the values at the leaf nodes are not class predictions but scores (a slightly advanced concept that is out of the scope of this course). You can read more about this here. If the score is $\geq$ 0, the class +1 is predicted. Otherwise, if the score < 0, we predict class -1.
Quiz Question: Based on the visualized tree, what prediction would you make for this data point?
Now, let's verify your prediction by examining the prediction made using GraphLab Create. Use the .predict function on small_model.
Evaluating accuracy of the decision tree model
Recall that the accuracy is defined as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified examples}}{\mbox{# total examples}}
$$
Let us start by evaluating the accuracy of the small_model and decision_tree_model on the training data
End of explanation
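A sketch of the validation-set counterpart asked for in the quiz question above, assuming the same .evaluate() interface used on train_data:
print small_model.evaluate(validation_data)['accuracy']
print decision_tree_model.evaluate(validation_data)['accuracy']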
big_model = graphlab.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 10)
Explanation: Checkpoint: You should see that the small_model performs worse than the decision_tree_model on the training data.
Now, let us evaluate the accuracy of the small_model and decision_tree_model on the entire validation_data, not just the subsample considered above.
Quiz Question: What is the accuracy of decision_tree_model on the validation set, rounded to the nearest .01?
Evaluating accuracy of a complex decision tree model
Here, we will train a large decision tree with max_depth=10. This will allow the learned tree to become very deep, and result in a very complex model. Recall that in lecture, we prefer simpler models with similar predictive power. This will be an example of a more complicated model which has similar predictive power, i.e. something we don't want.
End of explanation
print big_model.evaluate(train_data)['accuracy']
print big_model.evaluate(validation_data)['accuracy']
Explanation: Now, let us evaluate big_model on the training set and validation set.
End of explanation
predictions = decision_tree_model.predict(validation_data)
Explanation: Checkpoint: We should see that big_model has even better performance on the training set than decision_tree_model did on the training set.
Quiz Question: How does the performance of big_model on the validation set compare to decision_tree_model on the validation set? Is this a sign of overfitting?
Quantifying the cost of mistakes
Every mistake the model makes costs money. In this section, we will try and quantify the cost of each mistake made by the model.
Assume the following:
False negatives: Loans that were actually safe but were predicted to be risky. This results in an opportunity cost of losing a loan that would have otherwise been accepted.
False positives: Loans that were actually risky but were predicted to be safe. These are much more expensive because they result in a risky loan being given.
Correct predictions: All correct predictions don't typically incur any cost.
Let's write code that can compute the cost of mistakes made by the model. Complete the following 4 steps:
1. First, let us compute the predictions made by the model.
2. Second, compute the number of false positives.
3. Third, compute the number of false negatives.
4. Finally, compute the cost of mistakes made by the model by adding up the costs of false positives and false negatives.
First, let us make predictions on validation_data using the decision_tree_model:
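A hedged sketch of steps 2-4 follows; the per-mistake dollar amounts are placeholders for illustration, not values taken from the assignment text:
false_positives = ((predictions == +1) * (validation_data['safe_loans'] == -1)).sum()
false_negatives = ((predictions == -1) * (validation_data['safe_loans'] == +1)).sum()
cost_per_false_negative = 10000   # assumed opportunity cost of a lost good loan
cost_per_false_positive = 20000   # assumed loss from funding a risky loan
print cost_per_false_negative * false_negatives + cost_per_false_positive * false_positives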
End of explanation |
5,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in Statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http
Step1: Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written
Step2: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
Step3: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
Step4: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http
Step5: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
Step6: Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http
Step7: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
Step8: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
Step9: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import requests
from io import BytesIO
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
Explanation: Markov switching autoregression models
This notebook provides an example of the use of Markov switching models in Statsmodels to replicate a number of results presented in Kim and Nelson (1999). It applies the Hamilton (1989) filter and the Kim (1994) smoother.
This is tested against the Markov-switching models from E-views 8, which can be found at http://www.eviews.com/EViews8/ev8ecswitch_n.html#MarkovAR or the Markov-switching models of Stata 14 which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
End of explanation
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata('https://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq='QS')
dta_hamilton = dta.rgnp
# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))
# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
Explanation: Hamilton (1989) switching model of GNP
This replicates Hamilton's (1989) seminal paper introducing Markov-switching models. The model is an autoregressive model of order 4 in which the mean of the process switches between two regimes. It can be written:
$$
y_t = \mu_{S_t} + \phi_1 (y_{t-1} - \mu_{S_{t-1}}) + \phi_2 (y_{t-2} - \mu_{S_{t-2}}) + \phi_3 (y_{t-3} - \mu_{S_{t-3}}) + \phi_4 (y_{t-4} - \mu_{S_{t-4}}) + \varepsilon_t
$$
Each period, the regime transitions according to the following matrix of transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \
p_{01} & p_{11}
\end{bmatrix}
$$
where $p_{ij}$ is the probability of transitioning from regime $i$, to regime $j$.
The model class is MarkovAutoregression in the time-series part of Statsmodels. In order to create the model, we must specify the number of regimes with k_regimes=2, and the order of the autoregression with order=4. The default model also includes switching autoregressive coefficients, so here we also need to specify switching_ar=False to avoid that.
After creation, the model is fit via maximum likelihood estimation. Under the hood, good starting parameters are found using a number of steps of the expectation maximization (EM) algorithm, and a quasi-Newton (BFGS) algorithm is applied to quickly find the maximum.
End of explanation
fig, axes = plt.subplots(2, figsize=(7,7))
ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Filtered probability of recession')
ax = axes[1]
ax.plot(res_hamilton.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Smoothed probability of recession')
fig.tight_layout()
Explanation: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
For reference, the shaded periods represent the NBER recessions.
End of explanation
print(res_hamilton.expected_durations)
Explanation: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
End of explanation
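For intuition, the expected duration of regime $i$ in a two-state Markov chain is $1 / (1 - p_{ii})$. The probabilities below are illustrative values close to Hamilton's estimates, not the fitted parameters themselves:
p_recession, p_expansion = 0.75, 0.90   # illustrative diagonal transition probabilities
print(1 / (1 - p_recession), 1 / (1 - p_expansion))   # roughly 4 and 10 quarters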
# Get the dataset
ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')
raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')
dta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()
# Plot the dataset
dta_kns[0].plot(title='Excess returns', figsize=(12, 3))
# Fit the model
mod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True)
res_kns = mod_kns.fit()
res_kns.summary()
Explanation: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Kim, Nelson, and Startz (1998) Three-state Variance Switching
This model demonstrates estimation with regime heteroskedasticity (switching of variances) and no mean effect. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn.
The model in question is:
$$
\begin{align}
y_t & = \varepsilon_t \
\varepsilon_t & \sim N(0, \sigma_{S_t}^2)
\end{align}
$$
Since there is no autoregressive component, this model can be fit using the MarkovRegression class. Since there is no mean effect, we specify trend='nc'. There are hypothesized to be three regimes for the switching variances, so we specify k_regimes=3 and switching_variance=True (by default, the variance is assumed to be the same across regimes).
End of explanation
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-variance regime for stock returns')
ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-variance regime for stock returns')
ax = axes[2]
ax.plot(res_kns.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-variance regime for stock returns')
fig.tight_layout()
Explanation: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
End of explanation
# Get the dataset
filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content
dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')
dta_filardo.columns = ['month', 'ip', 'leading']
dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS')
dta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100
# Deflated pre-1960 observations by ratio of std. devs.
# See hmt_tvp.opt or Filardo (1994) p. 302
std_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std()
dta_filardo['dlip'][:'1959-12-01'] = dta_filardo['dlip'][:'1959-12-01'] * std_ratio
dta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100
dta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean()
# Plot the data
dta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3))
plt.figure()
dta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3));
Explanation: Filardo (1994) Time-Varying Transition Probabilities
This model demonstrates estimation with time-varying transition probabilities. The dataset can be reached at http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn.
In the above models we have assumed that the transition probabilities are constant across time. Here we allow the probabilities to change with the state of the economy. Otherwise, the model is the same Markov autoregression of Hamilton (1989).
Each period, the regime now transitions according to the following matrix of time-varying transition probabilities:
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00,t} & p_{10,t} \
p_{01,t} & p_{11,t}
\end{bmatrix}
$$
where $p_{ij,t}$ is the probability of transitioning from regime $i$, to regime $j$ in period $t$, and is defined to be:
$$
p_{ij,t} = \frac{\exp{ x_{t-1}' \beta_{ij} }}{1 + \exp{ x_{t-1}' \beta_{ij} }}
$$
Instead of estimating the transition probabilities as part of maximum likelihood, the regression coefficients $\beta_{ij}$ are estimated. These coefficients relate the transition probabilities to a vector of pre-determined or exogenous regressors $x_{t-1}$.
End of explanation
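As a small numerical illustration of the logistic formula above (the coefficient and regressor values are made up for illustration, not taken from the fit):
beta_ij = np.array([2.0, 0.5])   # hypothetical [constant, leading-indicator] coefficients
x_lag = np.array([1.0, -0.3])    # constant plus lagged (demeaned) leading indicator
print(np.exp(x_lag @ beta_ij) / (1 + np.exp(x_lag @ beta_ij)))   # transition probability p_ij,t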
mod_filardo = sm.tsa.MarkovAutoregression(
dta_filardo.iloc[2:]['dlip'], k_regimes=2, order=4, switching_ar=False,
exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]['dmdlleading']))
np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
Explanation: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Here we demonstrate another feature of model fitting - the use of a random search for MLE starting parameters. Because Markov switching models are often characterized by many local maxima of the likelihood function, performing an initial optimization step can be helpful to find the best parameters.
Below, we specify that 20 random perturbations from the starting parameter vector are examined and the best one used as the actual starting parameters. Because of the random nature of the search, we seed the random number generator beforehand to allow replication of the result.
End of explanation
fig, ax = plt.subplots(figsize=(12,3))
ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title='Smoothed probability of a low-production state');
Explanation: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
End of explanation
res_filardo.expected_durations[0].plot(
title='Expected duration of a low-production state', figsize=(12,3));
Explanation: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time:
End of explanation |
5,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Set 01
1. COUNTING VOWELS
Assume s is a string of lower case characters.
Write a program that counts up the number of vowels contained in the string s. Valid vowels are
Step1: 2. COUNTING BOBS
Assume 's' is a string of lower case characters.
Write a program that prints the number of times the string 'bob' occurs in s. For example, if s = 'azcbobobegghakl', then your program should print
Number of times bob occurs is
Step2: 3. Counting and Grouping
A catering company has hired you to help with organizing and preparing customer's orders. You are given a list of each customer's desired items, and must write a program that will count the number of each items needed for the chefs to prepare. The items that a customer can order are
Step3: Problem Set 02
PAYING OFF CREDIT CARD DEBT
Each month, a credit card statement will come with the option for you to pay a minimum amount of your charge, usually 2% of the balance due. However, the credit card company earns money by charging interest on the balance that you don't pay. So even if you pay credit card payments on time, interest is still accruing on the outstanding balance.
Say you've made a \$5,000 purchase on a credit card with an 18% annual interest rate and a 2% minimum monthly payment rate. If you only pay the minimum monthly amount for a year, how much is the remaining balance?
You can think about this in the following way.
At the beginning of month 0 (when the credit card statement arrives), assume you owe an amount we will call $b_0$ (b for balance; subscript 0 to indicate this is the balance at month 0).
Any payment you make during that month is deducted from the balance. Let's call the payment you make in month 0, p_0. Thus, your unpaid balance for month 0, $ub_0$, is equal to $b_0−p_0$.
$$ ub_0=b_0 −p_0 $$
At the beginning of month 1, the credit card company will charge you interest on your unpaid balance. So if your annual interest rate is r, then at the beginning of month 1, your new balance is your previous unpaid balance $ub_0$, plus the interest on this unpaid balance for the month. In algebra, this new balance would be
$$ b_1=ub_0+r/12.0⋅ub_0 $$
In month 1, we will make another payment, p_1. That payment has to cover some of the interest costs, so it does not completely go towards paying off the original charge. The balance at the beginning of month 2, b_2, can be calculated by first calculating the unpaid balance after paying p_1, then by adding the interest accrued
Step4: 2. PAYING DEBT OFF IN A YEAR
Now write a program that calculates the minimum fixed monthly payment needed in order to pay off a credit card balance
within 12 months. By a fixed monthly payment, we mean a single number which does not change each month, but instead
is a constant amount that will be paid each month.
Step5: 3. USING BISECTION SEARCH TO MAKE THE PROGRAM FASTER | Python Code:
s= 'wordsmith'
vowels = {'a','e','i','o','u'}
count = 0
for char in s:
if char in vowels:
count+=1
print "Number of vowels: " + str(count)
Explanation: Problem Set 01
1. COUNTING VOWELS
Assume s is a string of lower case characters.
Write a program that counts up the number of vowels contained in the string s. Valid vowels are: 'a', 'e', 'i', 'o', and 'u'. For example, if s = 'azcbobobegghakl', your program should print:
Number of vowels: 5
End of explanation
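A quick check against the example string from the problem statement, which should give 5:
s_example = 'azcbobobegghakl'
print "Number of vowels: " + str(sum(1 for char in s_example if char in 'aeiou'))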
s = 'azcbobobegghakl'
pattern = 'bob'
count =0
for position in range(0,len(s)):
if s[position:position+3]==pattern:
count+=1
print count
Explanation: 2. COUNTING BOBS
Assume 's' is a string of lower case characters.
Write a program that prints the number of times the string 'bob' occurs in s. For example, if s = 'azcbobobegghakl', then your program should print
Number of times bob occurs is: 2
End of explanation
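Note that the built-in count() only finds non-overlapping matches, which is why the sliding-window loop above is needed:
print 'azcbobobegghakl'.count('bob') # prints 1, while the loop above correctly finds 2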
def item_order(order):
dishes = {'salad':0,'hamburger':0,'water':0}
for dish_ordered in order.split(' '):
if dish_ordered in dishes.keys():
dishes[dish_ordered] +=1
string = 'salad:' + str(dishes['salad']) + ' hamburger:' + str(dishes['hamburger']) + ' water:' + str(dishes['water'])
return string
order = "hamburger water hamburger"
item_order(order)
Explanation: 3. Counting and Grouping
A catering company has hired you to help with organizing and preparing customer's orders. You are given a list of each customer's desired items, and must write a program that will count the number of each items needed for the chefs to prepare. The items that a customer can order are: salad, hamburger, and water.
Write a function called item_order that takes as input a string named order.
The string contains only words for the items the customer can order separated by one space.
The function returns a string that counts the number of each item and consolidates them in the following order:
salad:[# salad] hamburger:[# hamburger] water:[# water]
If an order does not contain an item, then the count for that item is 0. Notice that each item is formatted as [name of the item][a colon symbol][count of the item] and all item groups are separated by a space.
For example:
If order = "salad water hamburger salad hamburger" then the function returns "salad:2 hamburger:2 water:1"
If order = "hamburger water hamburger" then the function returns "salad:0 hamburger:2 water:1"
End of explanation
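The other example from the problem statement, as a quick check:
print item_order("salad water hamburger salad hamburger") # expected: salad:2 hamburger:2 water:1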
balance = 5000
annualInterestRate = 0.18      # given annual interest rate for this problem
monthlyPaymentRate = 0.02      # given minimum monthly payment rate
months = 12
payment_min = balance * monthlyPaymentRate
balance_remaining = balance - payment_min
interest = balance_remaining * (annualInterestRate/12.0)
def remainingBalance(balance,annualInterestRate,monthlyPaymentRate,months=12):
balance_remaining =balance
payment_min = 0
for month in range(0,months):
payment_min = balance_remaining * monthlyPaymentRate
balance_remaining = balance_remaining - payment_min
interest = balance_remaining * ((annualInterestRate)/12.0)
balance_remaining += interest
#print (payment_min,balance_remaining,interest)
return round(balance_remaining,2)
remainingBalance(balance=5000,annualInterestRate=0.18,monthlyPaymentRate=0.02,months=12)
Explanation: Problem Set 02
PAYING OFF CREDIT CARD DEBT
Each month, a credit card statement will come with the option for you to pay a minimum amount of your charge, usually 2% of the balance due. However, the credit card company earns money by charging interest on the balance that you don't pay. So even if you pay credit card payments on time, interest is still accruing on the outstanding balance.
Say you've made a \$5,000 purchase on a credit card with an 18% annual interest rate and a 2% minimum monthly payment rate. If you only pay the minimum monthly amount for a year, how much is the remaining balance?
You can think about this in the following way.
At the beginning of month 0 (when the credit card statement arrives), assume you owe an amount we will call $b_0$ (b for balance; subscript 0 to indicate this is the balance at month 0).
Any payment you make during that month is deducted from the balance. Let's call the payment you make in month 0, p_0. Thus, your unpaid balance for month 0, $ub_0$, is equal to $b_0−p_0$.
$$ ub_0=b_0 −p_0 $$
At the beginning of month 1, the credit card company will charge you interest on your unpaid balance. So if your annual interest rate is r, then at the beginning of month 1, your new balance is your previous unpaid balance $ub_0$, plus the interest on this unpaid balance for the month. In algebra, this new balance would be
$$ b_1=ub_0+r/12.0⋅ub_0 $$
In month 1, we will make another payment, p_1. That payment has to cover some of the interest costs, so it does not completely go towards paying off the original charge. The balance at the beginning of month 2, b_2, can be calculated by first calculating the unpaid balance after paying p_1, then by adding the interest accrued:
$$ ub_1=b_1−p_1 $$
$$ b2=ub_1+r/12.0⋅ub_1 $$
If you choose just to pay off the minimum monthly payment each month, you will see that the compound interest will dramatically reduce your ability to lower your debt.
Let's look at an example. If you've got a \$ 5,000 balance on a credit card with 18% annual interest rate, and the minimum monthly payment is 2% of the current balance, we would have the following repayment schedule if you only pay the minimum payment each month:
|Month| Balance| Minimum Payment| Unpaid Balance| Interest|
|---|---|---|---|---|
|0| 5000.00| 100 (= 5000 * 0.02)| 4900 (= 5000 - 100) |73.50 (= 0.18/12.0 * 4900)
|1| 4973.50 (= 4900 + 73.50)| 99.47 (= 4973.50 * 0.02)| 4874.03 (= 4973.50 - 99.47)| 73.11 (= 0.18/12.0 * 4874.03)|
|2| 4947.14 (= 4874.03 + 73.11)| 98.94 (= 4947.14 * 0.02)| 4848.20 (= 4947.14 - 98.94)| 72.72 (= 0.18/12.0 * 4848.20)|
You can see that a lot of your payment is going to cover interest, and if you work this through month 12, you will see that after a year, you will have paid \$ 1165.63 and yet you will still owe \$ 4691.11 on what was originally a \$ 5000.00 debt. Pretty depressing!
1. PAYING THE MINIMUM
Write a program to calculate the credit card balance after one year if a person only pays the minimum monthly payment required by the credit card company each month.
End of explanation
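A quick check of the month-0 row of the repayment table above, using the notation from the text:
b0, r, min_rate = 5000.0, 0.18, 0.02
p0 = b0 * min_rate                          # minimum payment: 100.00
ub0 = b0 - p0                               # unpaid balance: 4900.00
print p0, ub0, r / 12.0 * ub0, ub0 * (1 + r / 12.0)   # interest 73.50, next balance 4973.50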
balance = 5000
annualInterestRate = 0.18
monthlyPaymentRate = 0.02
months = 12
payment_min = balance * monthlyPaymentRate
balance_remaining = balance - payment_min
interest = balance_remaining * (annualInterestRate/12.0)
def remainingBalance(balance,annualInterestRate,monthlyPaymentRate,months):
balance_remaining =balance
payment_min = 0
for month in range(0,months):
payment_min = monthlyPaymentRate
balance_remaining = balance_remaining - payment_min
interest = balance_remaining * ((annualInterestRate)/12.0)
balance_remaining += interest
return round(balance_remaining,2)
def payOffDebt(balance,annualInterestRate,monthlyPaymentRate=0.0,months=12):
monthlyPaymentRate+=10
balance_remaining = remainingBalance(balance,annualInterestRate,monthlyPaymentRate,months)
rate = monthlyPaymentRate
if balance_remaining<=0:
print 'Lowest Payment: ' + str(rate)
else:
#print balance_remaining
payOffDebt(balance,annualInterestRate,monthlyPaymentRate,months)
payOffDebt(3329,0.2)
payOffDebt(4773,0.2)
payOffDebt(3926,0.2)
Explanation: 2. PAYING DEBT OFF IN A YEAR
Now write a program that calculates the minimum fixed monthly payment needed in order to pay off a credit card balance
within 12 months. By a fixed monthly payment, we mean a single number which does not change each month, but instead
is a constant amount that will be paid each month.
End of explanation
range(12)
def payOffDebtBisection(balance,annualInterestRate,months=12):
balance_remaining = balance
monthlyInterestRate = annualInterestRate/12.0
lower_limit = balance/12.0
upper_limit = balance * ((1+monthlyInterestRate)**12)/12.0
tolerance =0.01
while abs(balance_remaining) > tolerance:
balance_remaining = balance
payment = ( lower_limit + upper_limit )/2
for month in range(12):
balance_remaining = balance_remaining-payment
balance_remaining = balance_remaining*(1+monthlyInterestRate)
if balance_remaining<=0:
upper_limit = payment
else:
lower_limit = payment
print 'Lowest Payment: ' + str(round(payment, 2))
payOffDebtBisection(320000,0.2)
payOffDebtBisection(999999,0.18)
Explanation: 3. USING BISECTION SEARCH TO MAKE THE PROGRAM FASTER
End of explanation |