cshankm/rebound | ipython_examples/WHFast.ipynb | gpl-3.0
import rebound
"""
Explanation: WHFast tutorial
This tutorial is an introduction to the python interface of WHFast, a fast and unbiased symplectic Wisdom-Holman integrator. The method is described in Rein & Tamayo (2015).
This tutorial assumes that you have already installed REBOUND.
First WHFast integration
You can enter all the commands below into a file and execute them all at once, or run them in an interactive shell.
First, we need to import the REBOUND module (make sure you have enabled the virtual environment if you used it to install REBOUND).
End of explanation
"""
sim = rebound.Simulation()
"""
Explanation: Next, we create a REBOUND simulation instance. This object encapsulates all the variables and functions that REBOUND has to offer.
End of explanation
"""
sim.add(m=1.)
"""
Explanation: Now, we can add particles. We'll work in units in which $G=1$ (see Units.ipynb for using different units). The first particle we add is the central object. We place it at rest at the origin and use the convention of setting the mass of the central object $M_*$ to 1:
End of explanation
"""
print(sim.particles[0])
"""
Explanation: Let's look at the particle we just added:
End of explanation
"""
sim.add(m=1e-3, x=1., vy=1.)
"""
Explanation: The output tells us that the mass of the particle is 1 and all coordinates are zero.
The next particle we're adding is a planet. We'll use Cartesian coordinates to initialize it. Any coordinate that we do not specify in the sim.add() command is assumed to be 0. We place our planet on a circular orbit at $a=1$ and give it a mass of $10^{-3}$ times that of the central star.
End of explanation
"""
sim.add(m=1e-3, a=2., e=0.1)
"""
Explanation: Instead of initializing the particle with Cartesian coordinates, we can also use orbital elements. By default, REBOUND will use Jacobi coordinates, i.e. REBOUND assumes the orbital elements describe the particle's orbit around the centre of mass of all particles added previously. Our second planet will have a mass of $10^{-3}$, a semi-major axis of $a=2$ and an eccentricity of $e=0.1$ (note that you shouldn't change G after adding particles this way, see Units.ipynb):
End of explanation
"""
sim.status()
"""
Explanation: Now that we have added two more particles, let's have a quick look at what's in this simulation by using
End of explanation
"""
sim.integrator = "whfast"
sim.dt = 1e-3
"""
Explanation: You can see that REBOUND used the ias15 integrator as a default. Next, let's tell REBOUND that we want to use WHFast instead. We'll also set the timestep. In our system of units, an orbit at $a=1$ has an orbital period of $T_{\rm orb} = 2\pi \sqrt{\frac{a^3}{GM}} = 2\pi$. So a reasonable timestep to start with would be $dt=10^{-3}$ (see Rein & Tamayo 2015 for some discussion on timestep choices).
End of explanation
"""
sim.integrate(6.28318530717959, exact_finish_time=0) # 6.28318530717959 is 2*pi
"""
Explanation: whfast refers to the 2nd order symplectic integrator WHFast described by Rein & Tamayo (2015). By default, no symplectic correctors are used, but they can be easily turned on (see Advanced Settings for WHFast).
We are now ready to start the integration. Let's integrate the simulation for one orbit, i.e. until $t=2\pi$. Because we use a fixed timestep, REBOUND would have to change it to integrate exactly up to $2\pi$. Changing the timestep in a symplectic integrator is a bad idea, so we'll tell REBOUND not to worry about the exact finish time.
End of explanation
"""
sim.status()
"""
Explanation: Once again, let's look at what REBOUND's status is
End of explanation
"""
particles = sim.particles
for p in particles:
print(p.x, p.y, p.vx, p.vy)
"""
Explanation: As you can see the time has advanced to $t=2\pi$ and the positions and velocities of all particles have changed. If you want to post-process the particle data, you can access it in the following way:
End of explanation
"""
import numpy as np
torb = 2.*np.pi
Noutputs = 100
times = np.linspace(torb, 2.*torb, Noutputs)
x = np.zeros(Noutputs)
y = np.zeros(Noutputs)
"""
Explanation: The particles object is an array of pointers to the particles. This means you can call particles = sim.particles before the integration and the contents of particles will be updated after the integration. If you add or remove particles, you'll need to call sim.particles again.
Visualization with matplotlib
Instead of just printing boring numbers at the end of the simulation, let's visualize the orbit using matplotlib (you'll need to install numpy and matplotlib to run this example, see Installation).
We'll use the same particles as above. As the particles are already in memory, we don't need to add them again. Let us plot the position of the inner planet at 100 steps during its orbit. First, we'll import numpy and create an array of times for which we want to have an output (here, from $T_{\rm orb}$ to $2 T_{\rm orb}$; we have already advanced the simulation time to $t=2\pi$).
End of explanation
"""
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
x[i] = particles[1].x
y[i] = particles[1].y
"""
Explanation: Next, we'll step through the simulation. REBOUND will integrate up to each value of time. Depending on the timestep, it might overshoot slightly. If you want to have the outputs at exactly the time you specify, you can set the exact_finish_time=1 flag in the integrate function (or omit it altogether; 1 is the default). However, note that changing the timestep in a symplectic integrator could have negative impacts on its properties.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
plt.plot(x, y);
"""
Explanation: Let's plot the orbit using matplotlib.
End of explanation
"""
Noutputs = 1000
times = np.linspace(2.*torb, 20.*torb, Noutputs)
x = np.zeros(Noutputs)
y = np.zeros(Noutputs)
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
x[i] = particles[1].x
y[i] = particles[1].y
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
plt.plot(x, y);
"""
Explanation: Hurray! It worked. The orbit looks like it should: it's an almost perfect circle. There are small perturbations, though, induced by the outer planet. Let's integrate a bit longer to see them.
End of explanation
"""
sim.move_to_com()
"""
Explanation: Oops! This doesn't look like what we expected to see (small perturbations to an almost circular orbit). What you see here is the barycenter slowly drifting. Some integration packages require that the simulation be carried out in a particular frame, but WHFast provides extra flexibility by working in any inertial frame. If you recall how we added the particles, the Sun was at the origin and at rest, and then we added the planets. This means that the center of mass, or barycenter, has a small velocity, which results in the observed drift. There are multiple ways to get the plot we want:
1. We can calculate only relative positions.
2. We can add the particles in the barycentric frame.
3. We can let REBOUND transform the particle coordinates to the barycentric frame for us.
Let's use the third option (next time you run a simulation, you probably want to do that at the beginning).
End of explanation
"""
times = np.linspace(20.*torb, 1000.*torb, Noutputs)
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
x[i] = particles[1].x
y[i] = particles[1].y
fig = plt.figure(figsize=(5,5))
ax = plt.subplot(111)
ax.set_xlim([-1.5,1.5])
ax.set_ylim([-1.5,1.5])
plt.scatter(x, y, marker='.', color='k', s=1.2);
"""
Explanation: So let's try this again. Let's integrate for a bit longer this time.
End of explanation
"""
times = np.linspace(1000.*torb, 9000.*torb, Noutputs)
a = np.zeros(Noutputs)
e = np.zeros(Noutputs)
for i,time in enumerate(times):
sim.integrate(time, exact_finish_time=0)
orbits = sim.calculate_orbits()
a[i] = orbits[1].a
e[i] = orbits[1].e
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(121)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a);
ax = plt.subplot(122)
ax.set_xlabel("time")
ax.set_ylabel("eccentricity")
plt.plot(times, e);
"""
Explanation: That looks much more like it. Let us finally plot the orbital elements as a function of time.
End of explanation
"""
bjackman/lisa | ipynb/examples/utils/executor_example.ipynb | apache-2.0
import logging
from conf import LisaLogging
LisaLogging.setup()
import os
import json
from env import TestEnv
from executor import Executor
"""
Explanation: Executor API - Executor
A tests executor is a module which supports the execution of a configured set of experiments.<br><br>
Each experiment is composed of:
- a target configuration
- a workload to execute
The executor module can be configured to run a set of workloads (wloads) in each different target configuration of a specified set (confs). These wloads and confs can be specified by the "experiments_conf" input dictionary which is described below at Experiments Configuration.<br><br>
All the results generated by each experiment will be collected in a results folder. The format and content of the results folder are detailed in the last cell of Tests execution below.
End of explanation
"""
# Setup a target configuration
my_target_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
# Target board IP/MAC address
"host" : '192.168.0.1',
# Login credentials
"username" : 'root',
"password" : 'juno',
}
"""
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
"""
my_experiments_conf = {
# Folder where all the results will be collected
"results_dir" : "ExecutorExample",
# Platform configurations to test: you can specify any number of configurations
"confs" : [
{
"tag" : "base", # Relevant string to identify configuration
"flags" : ["ftrace", "freeze_userspace"], # Enable FTrace events, freeze userspace while running
"sched_features" : "NO_ENERGY_AWARE", # Disable EAS
"cpufreq" : { # Use PERFORMANCE CpuFreq
"governor" : "performance",
},
},
{
"tag" : "eas", # Relevant string to identify configuration
"flags" : ["ftrace", "freeze_userspace"], # Enable FTrace events, freeze userspace while running
"sched_features" : "ENERGY_AWARE", # Enable EAS
"cpufreq" : { # Use PERFORMANCE CpuFreq
"governor" : "performance",
},
},
],
# Workloads to run (on each platform configuration)
"wloads" : {
# Run hackbench with 1 group using pipes
"perf" : {
"type" : "perf_bench",
"conf" : {
"class" : "messaging",
"params" : {
"group" : 1,
"loop" : 10,
"pipe" : True,
"thread": True,
}
}
},
# Run a 20% duty-cycle periodic task
"rta" : {
"type" : "rt-app",
"loadref" : "big",
"conf" : {
"class" : "profile",
"params" : {
"p20" : {
"kind" : "Periodic",
"params" : {
"duty_cycle_pct" : 20,
},
},
},
},
},
},
# Number of iterations for each workload
"iterations" : 1,
}
my_test_conf = {
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"cpu_frequency",
],
"buffsize" : 80 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'perf' ],
# Modules required by these experiments
"modules" : [ 'bl', 'cpufreq', 'cgroups' ],
}
"""
Explanation: Experiments Configuration
The experiments configuration defines the software setups that we need on our hardware target.<br>
This can be given as an argument to an Executor instance or to a TestEnv one.<br> <br>
Elements of the experiments configuration:
- confs: mandatory platform configurations to be tested.
- tag: relevant string to identify your configuration.
- flags: ftrace (to enable ftrace events) is the only one supported at the moment.
- sched_features: features to be added to /sys/kernel/debug/sched_features.
- cpufreq: CpuFreq governor and tunables.
- cgroups: CGroups configuration (controller). The default CGroup will be used otherwise.
- wloads: mandatory workloads to run on each platform configuration.
- iterations: number of iterations for each workload.
- tools: binary tools (available under ./tools/$ARCH/) to install by default; these will be merged with the ones in the target configuration.
- ftrace: FTrace events to collect for all the experiments configurations which have the "ftrace" flag enabled.
- modules: modules required by the experiments resulting from the experiments configuration.
- exclude_modules: modules to be disabled.
- results_dir: results directory - experiments configuration results directory overrides target one.
End of explanation
"""
executor = Executor(TestEnv(target_conf=my_target_conf, test_conf=my_test_conf), my_experiments_conf)
executor.run()
!tree {executor.te.res_dir}
"""
Explanation: Tests execution
End of explanation
"""
aravindhv10/CPP_Wrappers | AntiQCD4/Training_Notebook.ipynb | gpl-2.0
# This program will not generate the jet images, it will only train the autoencoder
# and evaluate the results. The jet images can be found in:
# https://drive.google.com/drive/folders/1i5DY9duzDuumQz636u5YQeYQEt_7TYa8?usp=sharing
# Please download those images to your google drive and use the colab - drive integration.
import lzma
from google.colab import drive
import numpy as np
import tensorflow as tf
import keras
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
def READ_XZ (filename):
file = lzma.LZMAFile(filename)
type_bytes = file.read(-1)
type_array = np.frombuffer(type_bytes,dtype='float32')
return type_array
def Count(array,val):
count = 0.0
for e in range(array.shape[0]):
if array[e]>val :
count=count+1.0
return count / array.shape[0]
width=40
batch_size=200
ModelName = "Model_40_24_8_24_40_40"
config = tf.ConfigProto( device_count = {'GPU': 1 , 'CPU': 2} )
sess = tf.Session(config=config)
keras.backend.set_session(sess)
K.tensorflow_backend._get_available_gpus()
"""
Explanation: <a href="https://colab.research.google.com/github/aravindhv10/CPP_Wrappers/blob/master/AntiQCD4/Training_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This program will not generate the jet images, it will only train the autoencoder
and evaluate the results. The jet images can be found in:
https://drive.google.com/drive/folders/1i5DY9duzDuumQz636u5YQeYQEt_7TYa8?usp=sharing
Please download those images to your google drive and use the colab - drive integration.
A program to generate jet images is available at
https://github.com/aravindhv10/CPP_Wrappers/blob/master/AntiQCD4/JetImageFormation.hh
in the form of the class BoxImageGen.
The images used in this program were produced using BoxImageGen<40,float,true> with the ratio $m_J/E_J=0.5$.
End of explanation
"""
# this is our input placeholder
input_img = Input(shape=(width*width,))
# "encoded" is the encoded representation of the input
Layer1 = Dense(24*24, activation='relu')(input_img)
Layer2 = Dense(8*8, activation='relu')(Layer1)
Layer3 = Dense(24*24, activation='relu')(Layer2)
Layer4 = Dense(40*40, activation='relu')(Layer3)
Out = Dense(40*40, activation='softmax')(Layer4)
# this model maps an input to its reconstruction
autoencoder = Model(input_img, Out)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')
def EvalOnFile (InFileName,OutFileName):
data = READ_XZ (InFileName)
x_train = data.reshape(-1,width*width)
x_out = autoencoder.predict(x_train,200,use_multiprocessing=True)
diff = x_train - x_out
lrnorm = np.ones((diff.shape[0]))
for e in range(diff.shape[0]):
lrnorm[e] = np.linalg.norm(diff[e])
lrnorm.tofile(OutFileName)
print(lrnorm.shape)
def TrainOnFile (filename,testfilename,totalepochs):
data = READ_XZ (filename)
x_train = data.reshape(-1,width*width)
datatest = READ_XZ (testfilename)
x_test = datatest.reshape(-1,width*width)
autoencoder.fit(
x_train, x_train, epochs=totalepochs,
batch_size=200, shuffle=True,
validation_data=(x_test, x_test)
)
autoencoder.save(ModelName)
"""
Explanation: Defining network architecture (we use Arch-2)
We also define some functions to make training convenient here.
End of explanation
"""
# Please download the files from the link below and appropriately change this program:
# https://drive.google.com/drive/folders/1i5DY9duzDuumQz636u5YQeYQEt_7TYa8?usp=sharing
drive.mount('/gdrive')
%cd /gdrive
"""
Explanation: Mounting folder from Google Drive:
End of explanation
"""
%cd /gdrive/My\ Drive/JetImages/QCD/
!ls ./TEST/BoxImages/0.xz
!ls ./TRAIN/BoxImages/0.xz
"""
Explanation: Verify the files are correctly mounted and available:
End of explanation
"""
%cd /gdrive/My Drive/JetImages/QCD
autoencoder = keras.models.load_model(ModelName)
"""
Explanation: Load the model in case a trained one is already available:
End of explanation
"""
%cd /gdrive/My Drive/JetImages/QCD
for e in range(4):
TrainOnFile("./TRAIN/BoxImages/0.xz","./TEST/BoxImages/0.xz",10)
TrainOnFile("./TRAIN/BoxImages/1.xz","./TEST/BoxImages/1.xz",10)
TrainOnFile("./TRAIN/BoxImages/2.xz","./TEST/BoxImages/2.xz",10)
TrainOnFile("./TRAIN/BoxImages/3.xz","./TEST/BoxImages/3.xz",10)
TrainOnFile("./TRAIN/BoxImages/4.xz","./TEST/BoxImages/4.xz",10)
TrainOnFile("./TRAIN/BoxImages/5.xz","./TEST/BoxImages/5.xz",10)
TrainOnFile("./TRAIN/BoxImages/6.xz","./TEST/BoxImages/6.xz",10)
TrainOnFile("./TRAIN/BoxImages/7.xz","./TEST/BoxImages/7.xz",10)
TrainOnFile("./TRAIN/BoxImages/8.xz","./TEST/BoxImages/8.xz",10)
TrainOnFile("./TRAIN/BoxImages/9.xz","./TEST/BoxImages/9.xz",10)
TrainOnFile("./TRAIN/BoxImages/10.xz","./TEST/BoxImages/10.xz",10)
TrainOnFile("./TRAIN/BoxImages/11.xz","./TEST/BoxImages/11.xz",10)
TrainOnFile("./TRAIN/BoxImages/12.xz","./TEST/BoxImages/12.xz",10)
TrainOnFile("./TRAIN/BoxImages/13.xz","./TEST/BoxImages/13.xz",10)
TrainOnFile("./TRAIN/BoxImages/14.xz","./TEST/BoxImages/14.xz",10)
TrainOnFile("./TRAIN/BoxImages/15.xz","./TEST/BoxImages/15.xz",10)
"""
Explanation: The training step:
End of explanation
"""
%cd /gdrive/My Drive/JetImages/QCD
autoencoder.save(ModelName)
# autoencoder = keras.models.load_model(ModelName)
%cd /gdrive/My Drive/JetImages/QCD
!ls -lh
%cd /gdrive/My Drive/JetImages/QCD
!xz -z9evvfk Model_40_24_8_24_40_40
"""
Explanation: Once again save the model (although it is already saved each epoch)
End of explanation
"""
%cd /gdrive/My Drive/JetImages/QCD
EvalOnFile("./TEST/BoxImages/0.xz","./TEST/BoxImages/0_out")
EvalOnFile("./TEST/BoxImages/1.xz","./TEST/BoxImages/1_out")
EvalOnFile("./TEST/BoxImages/2.xz","./TEST/BoxImages/2_out")
EvalOnFile("./TEST/BoxImages/3.xz","./TEST/BoxImages/3_out")
EvalOnFile("./TEST/BoxImages/4.xz","./TEST/BoxImages/4_out")
EvalOnFile("./TEST/BoxImages/5.xz","./TEST/BoxImages/5_out")
EvalOnFile("./TEST/BoxImages/6.xz","./TEST/BoxImages/6_out")
EvalOnFile("./TEST/BoxImages/7.xz","./TEST/BoxImages/7_out")
%cd /gdrive/My Drive/JetImages/TOP
EvalOnFile("./TEST/BoxImages/0.xz","./TEST/BoxImages/0_out")
EvalOnFile("./TEST/BoxImages/1.xz","./TEST/BoxImages/1_out")
EvalOnFile("./TEST/BoxImages/2.xz","./TEST/BoxImages/2_out")
EvalOnFile("./TEST/BoxImages/3.xz","./TEST/BoxImages/3_out")
EvalOnFile("./TEST/BoxImages/4.xz","./TEST/BoxImages/4_out")
EvalOnFile("./TEST/BoxImages/5.xz","./TEST/BoxImages/5_out")
EvalOnFile("./TEST/BoxImages/6.xz","./TEST/BoxImages/6_out")
EvalOnFile("./TEST/BoxImages/7.xz","./TEST/BoxImages/7_out")
%cd /gdrive/My Drive/JetImages/
!cat TOP/TEST/BoxImages/*_out > TOP_OUT
!cat QCD/TEST/BoxImages/*_out > QCD_OUT
!ls TOP/TEST/BoxImages/*_out TOP_OUT -lh
!ls QCD/TEST/BoxImages/*_out QCD_OUT -lh
"""
Explanation: Evaluate using the trained model:
End of explanation
"""
%cd /gdrive/My Drive/JetImages/
qcdloss = np.fromfile("QCD_OUT", dtype=float, count=-1, sep='', offset=0)
toploss = np.fromfile("TOP_OUT", dtype=float, count=-1, sep='', offset=0)
qcdloss=np.sort(qcdloss)
toploss=np.sort(toploss)
print(qcdloss.shape)
print(toploss.shape)
plt.hist(toploss,100,(0.0,0.4),density=True,histtype='step')
plt.hist(qcdloss,100,(0.0,0.4),density=True,histtype='step')
plt.show()
dx = (0.4 - 0.0) / 100.0
qcdeff = np.ones((100))
topeff = np.ones((100))
for i in range(100):
xval = i*dx
qcdeff[i]=1.0/(Count(qcdloss,xval)+0.0000000001)
topeff[i]=Count(toploss,xval)
plt.yscale('log')
plt.plot(topeff,qcdeff)
%cd /gdrive/My Drive/JetImages/
def ReadLossMass(lossname,massname):
loss = np.fromfile(lossname, dtype=float, count=-1, sep='', offset=0)
mass = np.fromfile(massname, dtype='float32', count=-1, sep='', offset=0)
out = np.ones((mass.shape[0],2))
for i in range(mass.shape[0]):
out[i][0] = loss[i]
out[i][1] = mass[i]
return out
def GetQCDPair () :
pair = ReadLossMass("QCD/TEST/BoxImages/0_out","QCD/TEST/Mass/0")
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/1_out","QCD/TEST/Mass/1"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/2_out","QCD/TEST/Mass/2"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/3_out","QCD/TEST/Mass/3"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/4_out","QCD/TEST/Mass/4"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/5_out","QCD/TEST/Mass/5"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/6_out","QCD/TEST/Mass/6"),0)
pair = np.append (pair,ReadLossMass("QCD/TEST/BoxImages/7_out","QCD/TEST/Mass/7"),0)
return pair
def GetTOPPair () :
pair = ReadLossMass("TOP/TEST/BoxImages/0_out","TOP/TEST/Mass/0")
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/1_out","TOP/TEST/Mass/1"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/2_out","TOP/TEST/Mass/2"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/3_out","TOP/TEST/Mass/3"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/4_out","TOP/TEST/Mass/4"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/5_out","TOP/TEST/Mass/5"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/6_out","TOP/TEST/Mass/6"),0)
pair = np.append (pair,ReadLossMass("TOP/TEST/BoxImages/7_out","TOP/TEST/Mass/7"),0)
return pair
qcdpair = GetQCDPair()
toppair = GetTOPPair()
"""
Explanation: Plotting the loss and ROC:
End of explanation
"""
#plt.hist(qcdpair[:,1],100,(0.0,300.0),density=True,histtype='step')
#plt.hist(toppair[:,1],100,(0.0,300.0),density=True,histtype='step')
plt.hist2d(qcdpair[:,1],qcdpair[:,0],bins=100,range=[[0,400],[0.0,0.3]])
plt.show()
def QCDMassBin(minmass, maxmass):
    # Collect the losses of all QCD jets whose mass falls in (minmass, maxmass).
    # (Building a list avoids the spurious initial 1.0 that a pre-filled
    # np.ones array would leak when the first jet is outside the bin.)
    ret = []
    for e in range(qcdpair.shape[0]):
        if minmass < qcdpair[e][1] < maxmass:
            ret.append(qcdpair[e][0])
    return np.array(ret)
plt.hist(QCDMassBin(0,100),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(100,200),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(200,300),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(300,400),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(400,500),100,(0.0,0.4),density=True,histtype='step')
plt.hist(QCDMassBin(500,5000),100,(0.0,0.4),density=True,histtype='step')
plt.show()
"""
Explanation: The 2D Histogram of QCD Loss vs Mass
End of explanation
"""
Adamage/python-training | Lesson_00_algorithms.ipynb | apache-2.0
def bubble_sort(alist):
    # After each pass the largest remaining value has bubbled to the end,
    # so every pass can stop one element earlier.
    for pass_number in range(len(alist) - 1, 0, -1):
        for i in range(pass_number):
            if alist[i] > alist[i + 1]:
                # Swap the pair so the larger value moves right.
                alist[i], alist[i + 1] = alist[i + 1], alist[i]
    print(alist)
bubble_sort([27,2,1,63,8,26,3,2,8,1,3,3,4])
"""
Explanation: Python Training - Lesson 0 - algorithms
Before we start writing code, we should first understand how to solve problems using well defined steps and procedures. Let me give you an example.
Example algorithm - sort numbers
Sort a list of numbers so that they are in a growing sequence.
List: 27, 2, 1, 63, 8, 26, 3, 2, 8, 1, 3, 3, 4
Let's write some steps for a classic bubble sort.
Take the first pair: 27 and 2.
If the second one is higher, leave them.
If it is lower, switch them.
Move on to the next pair and repeat until you reach the end of the list.
When you reach the end of the list, go back to the start and do another pass; keep going until the list is sorted.
Now, most of the work is done. It's enough to solve the problem.
To make it automatic, we need to write pieces of code for each of the steps. The tools you need to know will be covered later in the course; for now, just look at how simple a Python program can be.
End of explanation
"""
# Chair leg, screwdriver
# Chair seat, hammer
# Chair leg, screwdriver
"""
Explanation: Example algorithm - IKEA furniture assembly
Now, let's complicate things a bit.
Given a list of pieces, each with information about what tool to use, and needed supplies like screws, how will you do it step by step?
Simple description
We can just take a new piece, get the tool, get the supplies, and do the instruction. Then, take the next one.
How to model this information?
Now, imagine that you need to explain this to the computer.
- Take a new piece
From where? We need a thing with all pieces. IKEA instructions tell you to do things in a sequence, so let's try a list. We need a list with all pieces.
- Take the required tool
How do we know which one? It seems like, for each piece, we need to know this. We need to connect this information. To keep things simple, let's keep this information in a list. So for example:
End of explanation
"""
# Chair leg, screwdriver, 30mm screw
# Char seat, hammer, wooden peg
# Char leg, screwdriver, 30mm screw
# ...
"""
Explanation: - Take the supplies
Seems like a similar task to "mapping" tools. Let's extend the action list:
End of explanation
"""
def screw_together( furniture_piece, supplies, tool):
connected_furniture = tool.screw(furniture_piece, supplies)
return connected_furniture
"""
Explanation: But what if we no longer have any screws? We should stop the program. It also means, somewhere else in program, we should keep count of screws, and each time one is used, decrease the count by 1.
- Perform instruction
Now, everyone knows this is the hardest part. We need to tell the computer to do something with many elements. "Doing stuff" in programming is performed by instructions, or a sequence of instructions (called a procedure, function, method).
Lets say we define how to use a screw and screwdriver:
End of explanation
"""
# list_of_actions =
# Chair leg, screwdriver, 30mm screw, screw_together( Chair leg, 30mm screw, screwdriver)
# ...
"""
Explanation: Now our list looks like this:
End of explanation
"""
# screw_together( Chair leg, 30mm screw, screwdriver)
# hammer_together( Chair seat, wooden peg, hammer)
# screw_together( Chair leg, 30mm screw, screwdriver)
# ...
"""
Explanation: We managed to at least model the behavior. We know the minimum amount of information an algorithm needs to work its way to the end.
One additional thing: this is actually quite hard to write down, and a single row of our action list carries some duplicated information. Maybe we can write it more simply:
End of explanation
"""
for action in list_of_actions:
action()
"""
Explanation: How does the algorithm look like?
Generally speaking, for each furniture element, perform the instruction using a tool and some supplies.
In pseudo code:
End of explanation
"""
keras-team/keras-io | examples/vision/ipynb/supervised-contrastive-learning.ipynb | apache-2.0
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
"""
Explanation: Supervised Contrastive Learning
Author: Khalid Salama<br>
Date created: 2020/11/30<br>
Last modified: 2020/11/30<br>
Description: Using supervised contrastive learning for image classification.
Introduction
Supervised Contrastive Learning
(Prannay Khosla et al.) is a training methodology that outperforms
supervised training with crossentropy on classification tasks.
Essentially, training an image classification model with Supervised Contrastive
Learning is performed in two phases:
Training an encoder to learn to produce vector representations of input images such
that representations of images in the same class will be more similar compared to
representations of images in different classes.
Training a classifier on top of the frozen encoder.
Note that this example requires TensorFlow Addons, which you can install using the following command:
python
pip install tensorflow-addons
Setup
End of explanation
"""
num_classes = 10
input_shape = (32, 32, 3)
# Load the train and test data splits
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
# Display shapes of train and test datasets
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")
"""
Explanation: Prepare the data
End of explanation
"""
data_augmentation = keras.Sequential(
[
layers.Normalization(),
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.02),
layers.RandomWidth(0.2),
layers.RandomHeight(0.2),
]
)
# Setting the state of the normalization layer.
data_augmentation.layers[0].adapt(x_train)
"""
Explanation: Using image data augmentation
End of explanation
"""
def create_encoder():
resnet = keras.applications.ResNet50V2(
include_top=False, weights=None, input_shape=input_shape, pooling="avg"
)
inputs = keras.Input(shape=input_shape)
augmented = data_augmentation(inputs)
outputs = resnet(augmented)
model = keras.Model(inputs=inputs, outputs=outputs, name="cifar10-encoder")
return model
encoder = create_encoder()
encoder.summary()
learning_rate = 0.001
batch_size = 265
hidden_units = 512
projection_units = 128
num_epochs = 50
dropout_rate = 0.5
temperature = 0.05
"""
Explanation: Build the encoder model
The encoder model takes the image as input and turns it into a 2048-dimensional
feature vector.
End of explanation
"""
def create_classifier(encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
inputs = keras.Input(shape=input_shape)
features = encoder(inputs)
features = layers.Dropout(dropout_rate)(features)
features = layers.Dense(hidden_units, activation="relu")(features)
features = layers.Dropout(dropout_rate)(features)
outputs = layers.Dense(num_classes, activation="softmax")(features)
model = keras.Model(inputs=inputs, outputs=outputs, name="cifar10-classifier")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
return model
"""
Explanation: Build the classification model
The classification model adds a fully-connected layer on top of the encoder,
plus a softmax layer with the target classes.
End of explanation
"""
encoder = create_encoder()
classifier = create_classifier(encoder)
classifier.summary()
history = classifier.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs)
accuracy = classifier.evaluate(x_test, y_test)[1]
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
"""
Explanation: Experiment 1: Train the baseline classification model
In this experiment, a baseline classifier is trained as usual, i.e., the
encoder and the classifier parts are trained together as a single model
to minimize the crossentropy loss.
End of explanation
"""
class SupervisedContrastiveLoss(keras.losses.Loss):
def __init__(self, temperature=1, name=None):
super(SupervisedContrastiveLoss, self).__init__(name=name)
self.temperature = temperature
def __call__(self, labels, feature_vectors, sample_weight=None):
# Normalize feature vectors
feature_vectors_normalized = tf.math.l2_normalize(feature_vectors, axis=1)
# Compute logits
logits = tf.divide(
tf.matmul(
feature_vectors_normalized, tf.transpose(feature_vectors_normalized)
),
self.temperature,
)
return tfa.losses.npairs_loss(tf.squeeze(labels), logits)
def add_projection_head(encoder):
inputs = keras.Input(shape=input_shape)
features = encoder(inputs)
outputs = layers.Dense(projection_units, activation="relu")(features)
model = keras.Model(
inputs=inputs, outputs=outputs, name="cifar-encoder_with_projection-head"
)
return model
"""
Explanation: Experiment 2: Use supervised contrastive learning
In this experiment, the model is trained in two phases. In the first phase,
the encoder is pretrained to optimize the supervised contrastive loss,
described in Prannay Khosla et al..
In the second phase, the classifier is trained using the trained encoder with
its weights frozen; only the weights of the fully-connected layers with the
softmax are optimized.
1. Supervised contrastive learning loss function
End of explanation
"""
encoder = create_encoder()
encoder_with_projection_head = add_projection_head(encoder)
encoder_with_projection_head.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=SupervisedContrastiveLoss(temperature),
)
encoder_with_projection_head.summary()
history = encoder_with_projection_head.fit(
x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs
)
"""
Explanation: 2. Pretrain the encoder
End of explanation
"""
classifier = create_classifier(encoder, trainable=False)
history = classifier.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs)
accuracy = classifier.evaluate(x_test, y_test)[1]
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
"""
Explanation: 3. Train the classifier with the frozen encoder
End of explanation
"""
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex1 import *
print("Setup Complete")
"""
Explanation: As a warm-up, you'll review some machine learning fundamentals and submit your initial results to a Kaggle competition.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
"""
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
X_full = pd.read_csv('../input/train.csv', index_col='Id')
X_test_full = pd.read_csv('../input/test.csv', index_col='Id')
# Obtain target and predictors
y = X_full.SalePrice
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = X_full[features].copy()
X_test = X_test_full[features].copy()
# Break off validation set from training data
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0)
"""
Explanation: You will work with data from the Housing Prices Competition for Kaggle Learn Users to predict home prices in Iowa using 79 explanatory variables describing (almost) every aspect of the homes.
Run the next code cell without changes to load the training and validation features in X_train and X_valid, along with the prediction targets in y_train and y_valid. The test features are loaded in X_test. (If you need to review features and prediction targets, please check out this short tutorial. To read about model validation, look here. Alternatively, if you'd prefer to look through a full course to review all of these topics, start here.)
End of explanation
"""
X_train.head()
"""
Explanation: Use the next cell to print the first several rows of the data. It's a nice way to get an overview of the data you will use in your price prediction model.
End of explanation
"""
from sklearn.ensemble import RandomForestRegressor
# Define the models
model_1 = RandomForestRegressor(n_estimators=50, random_state=0)
model_2 = RandomForestRegressor(n_estimators=100, random_state=0)
model_3 = RandomForestRegressor(n_estimators=100, criterion='absolute_error', random_state=0)
model_4 = RandomForestRegressor(n_estimators=200, min_samples_split=20, random_state=0)
model_5 = RandomForestRegressor(n_estimators=100, max_depth=7, random_state=0)
models = [model_1, model_2, model_3, model_4, model_5]
"""
Explanation: The next code cell defines five different random forest models. Run this code cell without changes. (To review random forests, look here.)
End of explanation
"""
from sklearn.metrics import mean_absolute_error
# Function for comparing different models
def score_model(model, X_t=X_train, X_v=X_valid, y_t=y_train, y_v=y_valid):
model.fit(X_t, y_t)
preds = model.predict(X_v)
return mean_absolute_error(y_v, preds)
for i in range(0, len(models)):
mae = score_model(models[i])
print("Model %d MAE: %d" % (i+1, mae))
"""
Explanation: To select the best model out of the five, we define a function score_model() below. This function returns the mean absolute error (MAE) from the validation set. Recall that the best model will obtain the lowest MAE. (To review mean absolute error, look here.)
Run the code cell without changes.
End of explanation
"""
# Fill in the best model
best_model = ____
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
best_model = model_3
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
"""
Explanation: Step 1: Evaluate several models
Use the above results to fill in the line below. Which model is the best model? Your answer should be one of model_1, model_2, model_3, model_4, or model_5.
End of explanation
"""
# Define a model
my_model = ____ # Your code here
# Check your answer
step_2.check()
#%%RM_IF(PROD)%%
my_model = 3
step_2.assert_check_failed()
#%%RM_IF(PROD)%%
my_model = best_model
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
"""
Explanation: Step 2: Generate test predictions
Great. You know how to evaluate what makes an accurate model. Now it's time to go through the modeling process and make predictions. In the line below, create a Random Forest model with the variable name my_model.
End of explanation
"""
# Fit the model to the training data
my_model.fit(X, y)
# Generate test predictions
preds_test = my_model.predict(X_test)
# Save predictions in format used for competition scoring
output = pd.DataFrame({'Id': X_test.index,
'SalePrice': preds_test})
output.to_csv('submission.csv', index=False)
"""
Explanation: Run the next code cell without changes. The code fits the model to the training and validation data, and then generates test predictions that are saved to a CSV file. These test predictions can be submitted directly to the competition!
End of explanation
"""
from theano.sandbox import cuda
%matplotlib inline
from imp import reload
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
import keras.backend as K
K.set_image_dim_ordering('th')
batch_size=64
"""
Explanation: Training a better model
End of explanation
"""
model = vgg_ft(2)
"""
Explanation: Are we underfitting?
Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions:
How is this possible?
Is this desirable?
The answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set.
The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model.
So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens!
(We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)
Removing dropout
Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:
- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (conv) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data.
As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
End of explanation
"""
model.load_weights(model_path+'finetune3.h5')
"""
Explanation: ...and load our fine-tuned weights.
End of explanation
"""
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
"""
Explanation: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:
End of explanation
"""
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
"""
Explanation: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
End of explanation
"""
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
"""
Explanation: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
End of explanation
"""
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
"""
Explanation: And fit the model in the usual way:
End of explanation
"""
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
height_shift_range=0.1, width_zoom_range=0.2, shear_range=0.15, zoom_range=0.1,
channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
"""
Explanation: Reducing overfitting
Now that we've gotten the model to overfit, we can take a number of steps to reduce this.
Approaches to reducing overfitting
We do not necessarily need to rely on dropout or other regularization approaches to reduce overfitting. There are other techniques we should try first, since regularization, by definition, biases our model towards simplicity - which we only want to do if we know that's necessary. This is the order that we recommend using for reducing overfitting (more details about each in a moment):
Add more data
Use data augmentation
Use architectures that generalize well
Add regularization
Reduce architecture complexity.
We'll assume that you've already collected as much data as you can, so step (1) isn't relevant (this is true for most Kaggle competitions, for instance). So the next step (2) is data augmentation. This refers to creating additional synthetic data, based on reasonable modifications of your input data. For images, this is likely to involve one or more of: flipping, rotation, zooming, cropping, panning, minor color changes.
Which types of augmentation are appropriate depends on your data. For regular photos, for instance, you'll want to use horizontal flipping, but not vertical flipping (since an upside down car is much less common than a car the right way up, for instance!)
We recommend always using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.
About data augmentation
Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want, and keras ensures that every item of every batch randomly is changed according to these settings. Here's how to define a generator that includes data augmentation:
End of explanation
"""
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('cat.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
"""
Explanation: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
End of explanation
"""
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
"""
Explanation: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
End of explanation
"""
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
"""
Explanation: Adding data augmentation
Let's try adding a small amount of data augmentation, and see if we reduce overfitting as a result. The approach will be identical to the method we used to finetune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator, and create batches from it:
End of explanation
"""
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
"""
Explanation: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the convolutional model--after ensuring that the convolutional layers are not trainable:
End of explanation
"""
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
"""
Explanation: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
End of explanation
"""
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(1000, activation='softmax')
]
p=0.6
bn_model = Sequential(get_bn_layers(p))
bn_model.load_weights('/data/jhoward/ILSVRC2012_img/bn_do3_1.h5')
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.3, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final3.h5')
"""
Explanation: Batch normalization
About batch normalization
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. Normalization is very helpful for fast training - if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly. Furthermore, large or small activations in one layer will tend to result in even larger or smaller activations in later layers, since the activations get multiplied repeatedly across the layers.
Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized - by simply subtracting their mean and dividing by their standard deviation. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights.
Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important (although I will outline them in a moment) - the important takeaway is that all modern networks should use batchnorm, or something equivalent. There are two reasons for this:
1. Adding batchnorm to a model can result in 10x or more improvements in training speed
2. Because normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training, it also tends to reduce overfitting.
As promised, here's a brief outline of how batchnorm works. As a first step, it normalizes intermediate layers in the same way as input layers can be normalized. But this on its own would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. Therefore, batchnorm takes two additional steps:
1. Add two more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitary mean
2. Incorporate both the normalization, and the learnt multiply/add parameters, into the gradient calculations during backprop.
This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, so the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.
Adding batchnorm to the model
We can use nearly the same approach as before - but this time we'll add batchnorm layers (and dropout layers):
End of explanation
"""
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade pip
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform==1.11.0 -q --no-warn-conflicts
! pip3 install {USER_FLAG} git+https://github.com/googleapis/python-aiplatform.git@main # For features monitoring
! pip3 install {USER_FLAG} --upgrade google-cloud-bigquery==2.24.0 -q --no-warn-conflicts
! pip3 install {USER_FLAG} --upgrade xgboost==1.1.1 -q --no-warn-conflicts
"""
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/notebook_template.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Overview
Imagine you are a member of the Data Science team working on the same Mobile Gaming application reported in the Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML blog post.
Business wants to use that information in real time to take immediate in-game intervention actions to prevent churn. In particular, for each player, they want to provide gaming incentives such as new items or bonus packs, depending on the customer's demographic and behavioral information and the resulting propensity to return.
In this notebook, we show the role of Vertex AI Feature Store in a production-ready scenario: features computed from each user's activities within the first 24 hours of their last engagement are served at low latency, and the gaming platform consumes them in order to improve the user experience. Below you can find a high-level picture of the system.
In this notebook, we will show how the role of Vertex AI Feature Store in a ready to production scenario when the user's activities within the first 24 hours of last engagment and the gaming platform would consume in order to improver UX. Below you can find the high level picture of the system
<img src="./assets/mobile_gaming_architecture_1.png">
Dataset
The dataset is the public sample export data from an actual mobile game app called "Flood It!" (Android, iOS)
Objective
In the following notebook, you will learn how Vertex AI Feature Store can:
Provide a centralized feature repository with easy APIs to search for and discover features and to fetch them for training/serving.
Simplify deployment of models for online prediction, via low-latency scalable feature serving.
Mitigate training-serving skew and data leakage by performing point-in-time lookups to fetch historical data for training.
Note that we assume you already know how to set up a Vertex AI Feature Store. If you are not familiar with it, please check out this detailed notebook.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
BigQuery
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install additional package dependencies not installed in your notebook environment, such as XGBoost and the Google Cloud Vertex AI and BigQuery client libraries. Use the latest major GA version of each package.
End of explanation
"""
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
"""
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
"""
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "" # @param {type:"string"}
!gcloud config set project $PROJECT_ID
"""
Explanation: Otherwise, set your project ID here.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
"""
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
"""
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil uniformbucketlevelaccess set on $BUCKET_URI
"""
Explanation: Run the following cell to grant access to your Cloud Storage resources from Vertex AI Feature store
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
BQ_DATASET = "Mobile_Gaming" # @param {type:"string"}
LOCATION = "US"
!bq mk --location=$LOCATION --dataset $PROJECT_ID:$BQ_DATASET
"""
Explanation: Create a BigQuery dataset
You create a BigQuery dataset to store the data used throughout the demo.
End of explanation
"""
# General
import os
import random
import sys
import time
# Data Science
import pandas as pd
# Vertex AI and its Feature Store
from google.cloud import aiplatform as vertex_ai
from google.cloud import bigquery
from google.cloud.aiplatform import Feature, Featurestore
"""
Explanation: Import libraries
End of explanation
"""
# Data Engineering and Feature Engineering
TODAY = "2018-10-03"
TOMORROW = "2018-10-04"
LABEL_TABLE = f"label_table_{TODAY}".replace("-", "")
FEATURES_TABLE = "wide_features_table" # @param {type:"string"}
FEATURES_TABLE_TODAY = f"wide_features_table_{TODAY}".replace("-", "")
FEATURES_TABLE_TOMORROW = f"wide_features_table_{TOMORROW}".replace("-", "")
FEATURESTORE_ID = "mobile_gaming" # @param {type:"string"}
ENTITY_TYPE_ID = "user"
# Vertex AI Feature store
ONLINE_STORE_NODES_COUNT = 5
ENTITY_ID = "user"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
FEATURE_TIME = "timestamp"
ENTITY_ID_FIELD = "user_pseudo_id"
BQ_SOURCE_URI = f"bq://{PROJECT_ID}.{BQ_DATASET}.{FEATURES_TABLE}"
GCS_DESTINATION_PATH = f"data/features/train_features_{TODAY}".replace("-", "")
GCS_DESTINATION_OUTPUT_URI = f"{BUCKET_URI}/{GCS_DESTINATION_PATH}"
SERVING_FEATURE_IDS = {"user": ["*"]}
READ_INSTANCES_TABLE = f"ground_truth_{TODAY}".replace("-", "")
READ_INSTANCES_URI = f"bq://{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}"
# Vertex AI Training
BASE_CPU_IMAGE = "us-docker.pkg.dev/vertex-ai/training/scikit-learn-cpu.0-23:latest"
DATASET_NAME = f"churn_mobile_gaming_{TODAY}".replace("-", "")
TRAIN_JOB_NAME = f"xgb_classifier_training_{TODAY}".replace("-", "")
MODEL_NAME = f"churn_xgb_classifier_{TODAY}".replace("-", "")
MODEL_PACKAGE_PATH = "train_package"
TRAINING_MACHINE_TYPE = "n1-standard-4"
TRAINING_REPLICA_COUNT = 1
DATA_PATH = f"{GCS_DESTINATION_OUTPUT_URI}/000000000000.csv".replace("gs://", "/gcs/")
MODEL_PATH = f"model/{TODAY}".replace("-", "")
MODEL_DIR = f"{BUCKET_URI}/{MODEL_PATH}".replace("gs://", "/gcs/")
# Vertex AI Prediction
DESTINATION_URI = f"{BUCKET_URI}/{MODEL_PATH}"
VERSION = "v1"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-23:latest"
)
ENDPOINT_NAME = "mobile_gaming_churn"
DEPLOYED_MODEL_NAME = f"churn_xgb_classifier_{VERSION}"
MODEL_DEPLOYED_NAME = "churn_xgb_classifier_v1"
SERVING_MACHINE_TYPE = "n1-highcpu-4"
MIN_NODES = 1
MAX_NODES = 1
# Sampling distributions for categorical features implemented in
# https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/model_monitoring/model_monitoring.ipynb
LANGUAGE = [
"en-us",
"en-gb",
"ja-jp",
"en-au",
"en-ca",
"de-de",
"en-in",
"en",
"fr-fr",
"pt-br",
"es-us",
"zh-tw",
"zh-hans-cn",
"es-mx",
"nl-nl",
"fr-ca",
"en-za",
"vi-vn",
"en-nz",
"es-es",
]
OS = ["IOS", "ANDROID", "null"]
COUNTRY = [
"United States",
"India",
"Japan",
"Canada",
"Australia",
"United Kingdom",
"Germany",
"Mexico",
"France",
"Brazil",
"Taiwan",
"China",
"Saudi Arabia",
"Pakistan",
"Egypt",
"Netherlands",
"Vietnam",
"Philippines",
"South Africa",
]
USER_IDS = [
"C8685B0DFA2C4B4E6E6EA72894C30F6F",
"A976A39B8E08829A5BC5CD3827C942A2",
"DD2269BCB7F8532CD51CB6854667AF51",
"A8F327F313C9448DFD5DE108DAE66100",
"8BE7BF90C971453A34C1FF6FF2A0ACAE",
"8375B114AFAD8A31DE54283525108F75",
"4AD259771898207D5869B39490B9DD8C",
"51E859FD9D682533C094B37DC85EAF87",
"8C33815E0A269B776AAB4B60A4F7BC63",
"D7EA8E3645EFFBD6443946179ED704A6",
"58F3D672BBC613680624015D5BC3ADDB",
"FF955E4CA27C75CE0BEE9FC89AD275A3",
"22DC6A6AE86C0AA33EBB8C3164A26925",
"BC10D76D02351BD4C6F6F5437EE5D274",
"19DEEA6B15B314DB0ED2A4936959D8F9",
"C2D17D9066EE1EB9FAE1C8A521BFD4E5",
"EFBDEC168A2BF8C727B060B2E231724E",
"E43D3AB2F9B9055C29373523FAF9DB9B",
"BBDCBE2491658165B7F20540DE652E3A",
"6895EEFC23B59DB13A9B9A7EED6A766F",
]
"""
Explanation: Define constants
End of explanation
"""
def run_bq_query(query: str):
"""
A helper function to run a BigQuery job
Args:
query: a formatted SQL query
Returns:
None
"""
try:
job = bq_client.query(query)
_ = job.result()
except RuntimeError as error:
print(error)
def upload_model(
display_name: str,
serving_container_image_uri: str,
artifact_uri: str,
sync: bool = True,
) -> vertex_ai.Model:
"""
Args:
display_name: The name of Vertex AI Model artefact
serving_container_image_uri: The uri of the serving image
artifact_uri: The uri of artefact to import
sync: Whether to execute the upload synchronously
Returns: Vertex AI Model
"""
model = vertex_ai.Model.upload(
display_name=display_name,
artifact_uri=artifact_uri,
serving_container_image_uri=serving_container_image_uri,
sync=sync,
)
model.wait()
print(model.display_name)
print(model.resource_name)
return model
def create_endpoint(display_name: str) -> vertex_ai.Endpoint:
"""
A utility to create a Vertex AI Endpoint
Args:
display_name: The name of Endpoint
Returns: Vertex AI Endpoint
"""
endpoint = vertex_ai.Endpoint.create(display_name=display_name)
print(endpoint.display_name)
print(endpoint.resource_name)
return endpoint
def deploy_model(
model: vertex_ai.Model,
machine_type: str,
endpoint: vertex_ai.Endpoint = None,
deployed_model_display_name: str = None,
min_replica_count: int = 1,
max_replica_count: int = 1,
sync: bool = True,
) -> vertex_ai.Model:
"""
A helper function to deploy a Vertex AI Endpoint
Args:
model: A Vertex AI Model
machine_type: The type of machine to serve the model
endpoint: A Vertex AI Endpoint
deployed_model_display_name: The name of the model
min_replica_count: Minimum number of serving replicas
max_replica_count: Max number of serving replicas
sync: Whether to execute method synchronously
Returns: vertex_ai.Model
"""
model_deployed = model.deploy(
endpoint=endpoint,
deployed_model_display_name=deployed_model_display_name,
machine_type=machine_type,
min_replica_count=min_replica_count,
max_replica_count=max_replica_count,
sync=sync,
)
model_deployed.wait()
print(model_deployed.display_name)
print(model_deployed.resource_name)
return model_deployed
def endpoint_predict_sample(
instances: list, endpoint: vertex_ai.Endpoint
) -> vertex_ai.models.Prediction:
"""
A helper function to get a prediction from a Vertex AI Endpoint
Args:
instances: The list of instances to score
endpoint: A Vertex AI Endpoint
Returns:
vertex_ai.models.Prediction
"""
prediction = endpoint.predict(instances=instances)
print(prediction)
return prediction
def generate_online_sample() -> dict:
"""
A helper function to generate a sample of online features
Returns:
online_sample: dict of online features
"""
online_sample = {}
online_sample["entity_id"] = random.choices(USER_IDS)
online_sample["country"] = random.choices(COUNTRY)
online_sample["operating_system"] = random.choices(OS)
online_sample["language"] = random.choices(LANGUAGE)
return online_sample
def simulate_prediction(endpoint: vertex_ai.Endpoint, n_requests: int, latency: int):
"""
A helper function to simulate online prediction with the user entity type:
- format entities for prediction
- retrieve static features with singleton lookup operations from Vertex AI Feature store
- run the prediction request and get back the result
Args:
endpoint: Vertex AI Endpoint object
n_requests: number of requests to run
latency: latency in seconds
Returns:
vertex_ai.models.Prediction
"""
for i in range(n_requests):
online_sample = generate_online_sample()
online_features = pd.DataFrame.from_dict(online_sample)
entity_ids = online_features["entity_id"].tolist()
customer_aggregated_features = user_entity_type.read(
entity_ids=entity_ids,
feature_ids=[
"cnt_user_engagement",
"cnt_level_start_quickplay",
"cnt_level_end_quickplay",
"cnt_level_complete_quickplay",
"cnt_level_reset_quickplay",
"cnt_post_score",
"cnt_spend_virtual_currency",
"cnt_ad_reward",
"cnt_challenge_a_friend",
"cnt_completed_5_levels",
"cnt_use_extra_steps",
],
)
prediction_sample_df = pd.merge(
customer_aggregated_features.set_index("entity_id"),
online_features.set_index("entity_id"),
left_index=True,
right_index=True,
).reset_index(drop=True)
# prediction_sample = prediction_sample_df.to_dict("records")
prediction_instance = prediction_sample_df.values.tolist()
prediction = endpoint.predict(prediction_instance)
print(
f"Prediction request: user_id - {entity_ids} - values - {prediction_instance} - prediction - {prediction[0]}"
)
time.sleep(latency)
"""
Explanation: Helpers
End of explanation
"""
bq_client = bigquery.Client(project=PROJECT_ID, location=LOCATION)
vertex_ai.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
"""
Explanation: Setting up the real-time scenario
In order to make real-time churn predictions, you need to:
Collect historical data about users' events and behaviors
Design your data model, build your features, and ingest them into the Feature store to serve them both offline for training and online for serving
Define churn and get the data to train a churn model
Train the model at scale
Deploy the model to an endpoint and return the prediction score in real time
You will cover these steps in detail below.
Initiate clients
End of explanation
"""
features_sql_query = f"""
CREATE OR REPLACE TABLE
`{PROJECT_ID}.{BQ_DATASET}.{FEATURES_TABLE}` AS
WITH
# query to extract demographic data for each user ---------------------------------------------------------
get_demographic_data AS (
SELECT * EXCEPT (row_num)
FROM (
SELECT
user_pseudo_id,
geo.country as country,
device.operating_system as operating_system,
device.language as language,
ROW_NUMBER() OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp DESC) AS row_num
FROM `firebase-public-project.analytics_153293282.events_*`)
WHERE row_num = 1),
# query to extract behavioral data for each user ----------------------------------------------------------
get_behavioral_data AS (
SELECT
event_timestamp,
user_pseudo_id,
SUM(IF(event_name = 'user_engagement', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_user_engagement,
SUM(IF(event_name = 'level_start_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_start_quickplay,
SUM(IF(event_name = 'level_end_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_end_quickplay,
SUM(IF(event_name = 'level_complete_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_complete_quickplay,
SUM(IF(event_name = 'level_reset_quickplay', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_level_reset_quickplay,
SUM(IF(event_name = 'post_score', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_post_score,
SUM(IF(event_name = 'spend_virtual_currency', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_spend_virtual_currency,
SUM(IF(event_name = 'ad_reward', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_ad_reward,
SUM(IF(event_name = 'challenge_a_friend', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_challenge_a_friend,
SUM(IF(event_name = 'completed_5_levels', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_completed_5_levels,
SUM(IF(event_name = 'use_extra_steps', 1, 0)) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp ASC RANGE BETWEEN 86400000000 PRECEDING
AND CURRENT ROW ) AS cnt_use_extra_steps,
FROM (
SELECT
e.*
FROM
`firebase-public-project.analytics_153293282.events_*` AS e
)
)
SELECT
-- PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', CONCAT('{TODAY}', ' ', STRING(TIME_TRUNC(CURRENT_TIME(), SECOND))), 'UTC') as timestamp,
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(beh.event_timestamp))) AS timestamp,
dem.*,
CAST(IFNULL(beh.cnt_user_engagement, 0) AS FLOAT64) AS cnt_user_engagement,
CAST(IFNULL(beh.cnt_level_start_quickplay, 0) AS FLOAT64) AS cnt_level_start_quickplay,
CAST(IFNULL(beh.cnt_level_end_quickplay, 0) AS FLOAT64) AS cnt_level_end_quickplay,
CAST(IFNULL(beh.cnt_level_complete_quickplay, 0) AS FLOAT64) AS cnt_level_complete_quickplay,
CAST(IFNULL(beh.cnt_level_reset_quickplay, 0) AS FLOAT64) AS cnt_level_reset_quickplay,
CAST(IFNULL(beh.cnt_post_score, 0) AS FLOAT64) AS cnt_post_score,
CAST(IFNULL(beh.cnt_spend_virtual_currency, 0) AS FLOAT64) AS cnt_spend_virtual_currency,
CAST(IFNULL(beh.cnt_ad_reward, 0) AS FLOAT64) AS cnt_ad_reward,
CAST(IFNULL(beh.cnt_challenge_a_friend, 0) AS FLOAT64) AS cnt_challenge_a_friend,
CAST(IFNULL(beh.cnt_completed_5_levels, 0) AS FLOAT64) AS cnt_completed_5_levels,
CAST(IFNULL(beh.cnt_use_extra_steps, 0) AS FLOAT64) AS cnt_use_extra_steps,
FROM
get_demographic_data dem
LEFT OUTER JOIN
get_behavioral_data beh
ON
dem.user_pseudo_id = beh.user_pseudo_id
"""
run_bq_query(features_sql_query)
"""
Explanation: Identify users and build your features
In this section, we will build the static features we want to fetch from Vertex AI Feature Store. In particular, we will cover the following steps:
Identify users, process demographic features and process behavioral features within the last 24 hours using BigQuery
Set up the feature store
Register features using Vertex AI Feature Store and the SDK.
Below you have a picture that shows the process.
<img src="./assets/feature_store_ingestion_2.png">
The original dataset contains raw event data that we cannot ingest into the feature store as is. We need to pre-process the raw data in order to get user features.
Notice we simulate those transformations at different points in time (today and tomorrow).
Label, Demographic and Behavioral Transformations
This section is based on the Churn prediction for game developers using Google Analytics 4 (GA4) and BigQuery ML blog article by Minhaz Kazi and Polong Lin.
You will adapt it in order to turn a batch churn prediction (using features from the first 24 hours after a user's first engagement) into a real-time churn prediction (using features from the 24 hours before a user's last engagement).
End of explanation
"""
try:
mobile_gaming_feature_store = Featurestore.create(
featurestore_id=FEATURESTORE_ID,
online_store_fixed_node_count=ONLINE_STORE_NODES_COUNT,
labels={"team": "dataoffice", "app": "mobile_gaming"},
sync=True,
)
except RuntimeError as error:
print(error)
else:
FEATURESTORE_RESOURCE_NAME = mobile_gaming_feature_store.resource_name
print(f"Feature store created: {FEATURESTORE_RESOURCE_NAME}")
"""
Explanation: Create a Vertex AI Feature store and ingest your features
Now you have the wide table of features. It is time to ingest them into the feature store.
Before moving on, you may have a question: why do I need a feature store
in this scenario at this point?
One reason is to make these features accessible across teams by calculating them once and reusing them many times. To make that possible, you also need to be able to monitor those features over time to guarantee freshness and, if needed, trigger a new feature engineering run to refresh them.
If that is not your case, I will give even more reasons why you should consider a feature store in the following sections. Just keep following along for now.
One of the most important aspects is its data model. As you can see in the picture below, Vertex AI Feature Store organizes resources hierarchically in the following order: Featurestore -> EntityType -> Feature. You must create these resources before you can ingest data into Vertex AI Feature Store.
<img src="./assets/feature_store_data_model_3.png">
In our case, we are going to create the mobile_gaming featurestore resource containing the user entity type and all its associated features, such as country or the number of times a user challenged a friend (cnt_challenge_a_friend).
Create featurestore, mobile_gaming
You need to create a featurestore resource to contain entity types, features, and feature values. In your case, you would call it mobile_gaming.
End of explanation
"""
try:
user_entity_type = mobile_gaming_feature_store.create_entity_type(
entity_type_id=ENTITY_ID, description="User Entity", sync=True
)
except RuntimeError as error:
print(error)
else:
USER_ENTITY_RESOURCE_NAME = user_entity_type.resource_name
print("Entity type name is", USER_ENTITY_RESOURCE_NAME)
"""
Explanation: Create the User entity type and its features
You define your own entity types, which represent the levels at which you organize your features. In your case, you have a user entity.
End of explanation
"""
from google.cloud.aiplatform_v1beta1 import \
FeaturestoreServiceClient as v1beta1_FeaturestoreServiceClient
from google.cloud.aiplatform_v1beta1.types import \
entity_type as v1beta1_entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as v1beta1_featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as v1beta1_featurestore_service_pb2
from google.protobuf.duration_pb2 import Duration
v1beta1_admin_client = v1beta1_FeaturestoreServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
v1beta1_admin_client.update_entity_type(
v1beta1_featurestore_service_pb2.UpdateEntityTypeRequest(
entity_type=v1beta1_entity_type_pb2.EntityType(
name=v1beta1_admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, ENTITY_ID
),
monitoring_config=v1beta1_featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=v1beta1_featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
"""
Explanation: Set Feature Monitoring
Notice that Vertex AI Feature store has a feature monitoring capability. It is in preview, so you need to use the v1beta1 Python client, which is a lower-level API than the one we've used so far in this notebook.
The easiest way to set this for now is using the console UI. For completeness, below is an example of doing this with the v1beta1 SDK.
End of explanation
"""
feature_configs = {
"country": {
"value_type": "STRING",
"description": "The country of customer",
"labels": {"status": "passed"},
},
"operating_system": {
"value_type": "STRING",
"description": "The operating system of device",
"labels": {"status": "passed"},
},
"language": {
"value_type": "STRING",
"description": "The language of device",
"labels": {"status": "passed"},
},
"cnt_user_engagement": {
"value_type": "DOUBLE",
"description": "A variable of user engagement level",
"labels": {"status": "passed"},
},
"cnt_level_start_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with start level",
"labels": {"status": "passed"},
},
"cnt_level_end_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with end level",
"labels": {"status": "passed"},
},
"cnt_level_complete_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with complete status",
"labels": {"status": "passed"},
},
"cnt_level_reset_quickplay": {
"value_type": "DOUBLE",
"description": "A variable of user engagement with reset status",
"labels": {"status": "passed"},
},
"cnt_post_score": {
"value_type": "DOUBLE",
"description": "A variable of user score",
"labels": {"status": "passed"},
},
"cnt_spend_virtual_currency": {
"value_type": "DOUBLE",
"description": "A variable of user virtual amount",
"labels": {"status": "passed"},
},
"cnt_ad_reward": {
"value_type": "DOUBLE",
"description": "A variable of user reward",
"labels": {"status": "passed"},
},
"cnt_challenge_a_friend": {
"value_type": "DOUBLE",
"description": "A variable of user challenges with friends",
"labels": {"status": "passed"},
},
"cnt_completed_5_levels": {
"value_type": "DOUBLE",
"description": "A variable of user level 5 completed",
"labels": {"status": "passed"},
},
"cnt_use_extra_steps": {
"value_type": "DOUBLE",
"description": "A variable of user extra steps",
"labels": {"status": "passed"},
},
}
"""
Explanation: Create features
In order to ingest features, you need to provide a feature configuration and create the features as featurestore resources.
Create Feature configuration
For simplicity, I created the configuration in a declarative way. Of course, we could create a helper function to build it from the BigQuery schema.
Also notice that we want to pass some features on the fly. In this case, country, operating system, and language look perfect for that.
End of explanation
"""
try:
user_entity_type.batch_create_features(feature_configs=feature_configs, sync=True)
except RuntimeError as error:
print(error)
else:
for feature in user_entity_type.list_features():
print("")
print(f"The resource name of {feature.name} feature is", feature.resource_name)
"""
Explanation: Create features using batch_create_features method
Once you have the feature configuration, you can create feature resources using the batch_create_features method.
End of explanation
"""
feature_query = "feature_id:cnt_user_engagement"
searched_features = Feature.search(query=feature_query)
searched_features
"""
Explanation: Search features
Vertex AI Feature store supports search capabilities. Below is a simple example that shows how to filter a feature based on its name.
End of explanation
"""
FEATURES_IDS = [feature.name for feature in user_entity_type.list_features()]
try:
user_entity_type.ingest_from_bq(
feature_ids=FEATURES_IDS,
feature_time=FEATURE_TIME,
bq_source_uri=BQ_SOURCE_URI,
entity_id_field=ENTITY_ID_FIELD,
disable_online_serving=False,
worker_count=10,
sync=True,
)
except RuntimeError as error:
print(error)
"""
Explanation: Ingest features
At this point, you have created all the resources associated with the feature store. You just need to import feature values before you can use them for online/offline serving.
End of explanation
"""
read_instances_query = f"""
CREATE OR REPLACE TABLE
`{PROJECT_ID}.{BQ_DATASET}.{READ_INSTANCES_TABLE}` AS
WITH
# get training threshold ----------------------------------------------------------------------------------
get_training_threshold AS (
SELECT
(MAX(event_timestamp) - 86400000000) AS training_thrs
FROM
`firebase-public-project.analytics_153293282.events_*`
WHERE
event_name="user_engagement"
AND
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(event_timestamp))) < '{TODAY}'),
# query to create label -----------------------------------------------------------------------------------
get_label AS (
SELECT
user_pseudo_id,
user_last_engagement,
#label = 1 if last_touch within last hour hr else 0
IF
(user_last_engagement < (
SELECT
training_thrs
FROM
get_training_threshold),
1,
0 ) AS churned
FROM (
SELECT
user_pseudo_id,
MAX(event_timestamp) AS user_last_engagement
FROM
`firebase-public-project.analytics_153293282.events_*`
WHERE
event_name="user_engagement"
AND
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S', TIMESTAMP_MICROS(event_timestamp))) < '{TODAY}'
GROUP BY
user_pseudo_id )
GROUP BY
1,
2),
# query to create class weights --------------------------------------------------------------------------------
get_class_weights AS (
SELECT
CAST(COUNT(*) / (2*(COUNT(*) - SUM(churned))) AS STRING) AS class_weight_zero,
CAST(COUNT(*) / (2*SUM(churned)) AS STRING) AS class_weight_one,
FROM
get_label )
SELECT
user_pseudo_id as user,
PARSE_TIMESTAMP('%Y-%m-%d %H:%M:%S', CONCAT('{TODAY}', ' ', STRING(TIME_TRUNC(CURRENT_TIME(), SECOND))), 'UTC') as timestamp,
churned AS churned,
CASE
WHEN churned = 0 THEN ( SELECT class_weight_zero FROM get_class_weights)
ELSE ( SELECT class_weight_one
FROM get_class_weights)
END AS class_weights
FROM
get_label
"""
"""
Explanation: Train and deploy a real-time churn ML model using Vertex AI Training and Endpoints
Now that you have your features, you are almost ready to train your churn model.
Below is a high-level picture.
<img src="./assets/train_model_4.png">
Let's dive into each step of this process.
Fetch training data with point-in-time query using BigQuery and Vertex AI Feature store
As we mentioned above, in real-time churn prediction it is important to define the label you want to predict with your model.
Let's assume that you decide to predict the churn probability over the last 24 hours. So now you have your label. The next step is to define your training sample. But let's think about that for a second.
In a real-time churn system, you have a high volume of events you could use to calculate features, and they are collected constantly over time. This means you always get fresh data to recompute features, and depending on when you calculate one feature or another, you can end up with a set of features that are not aligned in time.
When labels become available, it would be incredibly difficult to say which set of features contains the most up-to-date historical information associated with the label you want to predict. And when you cannot guarantee that, the performance of your model will suffer, because the features you trained on are not representative of the data the model sees in the field once it goes live. So you need a way to get the most recent features calculated before the label became available, in order to avoid this informational skew.
With Vertex AI Feature store, you can fetch feature values corresponding to a particular timestamp thanks to its point-in-time lookup capability. In our case, that is the timestamp associated with the label you want to predict. In this way, you avoid data leakage and get the most up-to-date features to train your model.
Let's see how to do that.
Define query for reading instances at a specific point in time
First, you need to define the set of reading instances at a specific point in time that you want to consider in order to generate your training sample.
End of explanation
"""
run_bq_query(read_instances_query)
"""
Explanation: Create the BigQuery instances table
You store those instances in a BigQuery table.
End of explanation
"""
mobile_gaming_feature_store.batch_serve_to_gcs(
gcs_destination_output_uri_prefix=GCS_DESTINATION_OUTPUT_URI,
gcs_destination_type="csv",
serving_feature_ids=SERVING_FEATURE_IDS,
read_instances_uri=READ_INSTANCES_URI,
pass_through_fields=["churned", "class_weights"],
)
"""
Explanation: Serve features for batch training
Then you use the batch_serve_to_gcs method to generate your training sample and store it as a CSV file in a target Cloud Storage bucket.
End of explanation
"""
!rm -Rf train_package #if train_package already exist
!mkdir -m 777 -p trainer data/ingest data/raw model config
!gsutil -m cp -r $GCS_DESTINATION_OUTPUT_URI/*.csv data/ingest
!head -n 1000 data/ingest/*.csv > data/raw/sample.csv
"""
Explanation: Train a custom model on Vertex AI with Training Pipelines
Now that we have produced the training sample, we use the Vertex AI SDK to train a new version of the model using Vertex AI Training.
Create training package and training sample
End of explanation
"""
!touch trainer/__init__.py
%%writefile trainer/task.py
import os
from pathlib import Path
import argparse
import yaml
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
import xgboost as xgb
import joblib
import warnings
warnings.filterwarnings("ignore")
def get_args():
"""
Get arguments from command line.
Returns:
args: parsed arguments
"""
parser = argparse.ArgumentParser()
parser.add_argument(
'--data_path',
required=False,
default=os.getenv('AIP_TRAINING_DATA_URI'),
type=str,
help='path to read data')
parser.add_argument(
'--learning_rate',
required=False,
default=0.01,
type=float,
help='learning rate')
parser.add_argument(
'--model_dir',
required=False,
default=os.getenv('AIP_MODEL_DIR'),
type=str,
help='dir to store saved model')
parser.add_argument(
'--config_path',
required=False,
default='../config.yaml',
type=str,
help='path to read config file')
args = parser.parse_args()
return args
def ingest_data(data_path, data_model_params):
"""
Ingest data
Args:
data_path: path to read data
data_model_params: data model parameters
Returns:
df: dataframe
"""
# read training data
df = pd.read_csv(data_path, sep=',',
dtype={col: 'string' for col in data_model_params['categorical_features']})
return df
def preprocess_data(df, data_model_params):
"""
Preprocess data
Args:
df: dataframe
data_model_params: data model parameters
Returns:
df: dataframe
"""
# convert nan values because pd.NA is not supported by SimpleImputer
# bug in sklearn 0.23.1 version: https://github.com/scikit-learn/scikit-learn/pull/17526
# decided to skip NAN values for now
df.replace({pd.NA: np.nan}, inplace=True)
df.dropna(inplace=True)
# get features and labels
x = df[data_model_params['numerical_features'] + data_model_params['categorical_features'] + [
data_model_params['weight_feature']]]
y = df[data_model_params['target']]
# train-test split
x_train, x_test, y_train, y_test = train_test_split(x, y,
test_size=data_model_params['train_test_split']['test_size'],
random_state=data_model_params['train_test_split'][
'random_state'])
return x_train, x_test, y_train, y_test
def build_pipeline(learning_rate, model_params):
"""
Build pipeline
Args:
learning_rate: learning rate
model_params: model parameters
Returns:
pipeline: pipeline
"""
# build pipeline
pipeline = Pipeline([
# ('imputer', SimpleImputer(strategy='most_frequent')),
('encoder', OneHotEncoder(handle_unknown='ignore')),
('model', xgb.XGBClassifier(learning_rate=learning_rate,
use_label_encoder=False, #deprecated and breaks Vertex AI predictions
**model_params))
])
return pipeline
def main():
print('Starting training...')
args = get_args()
data_path = args.data_path
learning_rate = args.learning_rate
model_dir = args.model_dir
config_path = args.config_path
# read config file
with open(config_path, 'r') as f:
config = yaml.load(f, Loader=yaml.FullLoader)
f.close()
data_model_params = config['data_model_params']
model_params = config['model_params']
# ingest data
print('Reading data...')
data_df = ingest_data(data_path, data_model_params)
# preprocess data
print('Preprocessing data...')
x_train, x_test, y_train, y_test = preprocess_data(data_df, data_model_params)
sample_weight = x_train.pop(data_model_params['weight_feature'])
sample_weight_eval_set = x_test.pop(data_model_params['weight_feature'])
    # train xgb model
print('Training model...')
xgb_pipeline = build_pipeline(learning_rate, model_params)
# need to use fit_transform to get the encoded eval data
x_train_transformed = xgb_pipeline[:-1].fit_transform(x_train)
x_test_transformed = xgb_pipeline[:-1].transform(x_test)
xgb_pipeline[-1].fit(x_train_transformed, y_train,
sample_weight=sample_weight,
eval_set=[(x_test_transformed, y_test)],
sample_weight_eval_set=[sample_weight_eval_set],
eval_metric='error',
early_stopping_rounds=50,
verbose=True)
# save model
print('Saving model...')
model_path = Path(model_dir)
model_path.mkdir(parents=True, exist_ok=True)
joblib.dump(xgb_pipeline, f'{model_dir}/model.joblib')
if __name__ == "__main__":
main()
"""
Explanation: Create training script
You create the training script to train an XGBoost model.
End of explanation
"""
%%writefile requirements.txt
pip==22.0.4
PyYAML==5.3.1
joblib==0.15.1
numpy==1.18.5
pandas==1.0.4
scipy==1.4.1
scikit-learn==0.23.1
xgboost==1.1.1
"""
Explanation: Create requirements.txt
You write the requirement file to build the training container.
End of explanation
"""
%%writefile config/config.yaml
data_model_params:
target: churned
categorical_features:
- country
- operating_system
- language
numerical_features:
- cnt_user_engagement
- cnt_level_start_quickplay
- cnt_level_end_quickplay
- cnt_level_complete_quickplay
- cnt_level_reset_quickplay
- cnt_post_score
- cnt_spend_virtual_currency
- cnt_ad_reward
- cnt_challenge_a_friend
- cnt_completed_5_levels
- cnt_use_extra_steps
weight_feature: class_weights
train_test_split:
test_size: 0.2
random_state: 8
model_params:
booster: gbtree
objective: binary:logistic
max_depth: 80
n_estimators: 100
random_state: 8
"""
Explanation: Create training configuration
You create a training configuration with data and model params.
End of explanation
"""
test_job_script = f"""
gcloud ai custom-jobs local-run \
--executor-image-uri={BASE_CPU_IMAGE} \
--python-module=trainer.task \
--extra-dirs=config,data,model \
-- \
--data_path data/raw/sample.csv \
--model_dir model \
--config_path config/config.yaml
"""
with open("local_train_job_run.sh", "w+") as s:
s.write(test_job_script)
s.close()
!chmod +x ./local_train_job_run.sh && ./local_train_job_run.sh
"""
Explanation: Test the model locally with local-run
You leverage the Vertex AI SDK local-run to test the script locally.
End of explanation
"""
!mkdir -m 777 -p {MODEL_PACKAGE_PATH} && mv -t {MODEL_PACKAGE_PATH} trainer requirements.txt config
train_job_script = f"""
gcloud ai custom-jobs create \
--region={REGION} \
--display-name={TRAIN_JOB_NAME} \
--worker-pool-spec=machine-type={TRAINING_MACHINE_TYPE},replica-count={TRAINING_REPLICA_COUNT},executor-image-uri={BASE_CPU_IMAGE},local-package-path={MODEL_PACKAGE_PATH},python-module=trainer.task,extra-dirs=config \
--args=--data_path={DATA_PATH},--model_dir={MODEL_DIR},--config_path=config/config.yaml \
--verbosity='info'
"""
with open("train_job_run.sh", "w+") as s:
s.write(train_job_script)
s.close()
!chmod +x ./train_job_run.sh && ./train_job_run.sh
"""
Explanation: Create and Launch the Custom training pipeline to train the model with autopackaging.
You use autopackaging from the Vertex AI SDK in order to:
Build a custom Docker training image.
Push the image to Container Registry.
Start a Vertex AI CustomJob.
End of explanation
"""
TRAIN_JOB_RESOURCE_NAME = "" # @param {type:"string"}
!gcloud ai custom-jobs describe $TRAIN_JOB_RESOURCE_NAME
!gsutil ls $DESTINATION_URI
"""
Explanation: Check the status of training job and the result.
You can use the following commands to monitor the status of your job and check for the artefact in the bucket once the training has run successfully.
End of explanation
"""
xgb_model = upload_model(
display_name=MODEL_NAME,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
artifact_uri=DESTINATION_URI,
)
"""
Explanation: Upload and Deploy Model on Vertex AI Endpoint
You use a custom function to upload your model to the Vertex AI Model Registry.
End of explanation
"""
endpoint = create_endpoint(display_name=ENDPOINT_NAME)
deployed_model = deploy_model(
model=xgb_model,
machine_type=SERVING_MACHINE_TYPE,
endpoint=endpoint,
deployed_model_display_name=DEPLOYED_MODEL_NAME,
min_replica_count=1,
max_replica_count=1,
sync=False,
)
"""
Explanation: Deploy Model to the same Endpoint with Traffic Splitting
Now that the model is registered in the Model Registry, you can deploy it to an endpoint. You first create the endpoint and then deploy your model.
End of explanation
"""
simulate_prediction(endpoint=endpoint, n_requests=1000, latency=1)
"""
Explanation: Serve ML features at scale with low latency
At this point, you are ready to deploy the simple model, which requires fetching preprocessed attributes as input features in real time.
Below you can see how it works
<img src="./assets/online_serving_5.png" width="600">
But think about those features for a second.
The behavioral features used to train your model cannot be computed at the moment you serve the model online.
How could you compute, on the fly, the number of times a user challenged a friend within the last 24 hours?
You simply can't. This feature needs to be computed on the server side and served with low latency. And because BigQuery is not optimized for those read operations, you need a different service that supports singleton lookups, where the result is a single row with many columns.
Also, when you deploy a model that requires preprocessing, you need to reproduce exactly the same preprocessing steps you used at training time. If you cannot, skew arises between training and serving data, which degrades your model's performance (and in the worst case breaks your serving system).
You need a way to mitigate this so that you don't have to reimplement those preprocessing steps online, but can simply serve the same aggregated features you already used for training when generating online predictions.
These are further good reasons to introduce Vertex AI Feature Store. It gives you a service that serves features at scale with low latency, exactly as they were available at training time, mitigating possible training-serving skew.
Now that you know why you need a feature store, let's close this journey by deploying your model and using the feature store to retrieve features online, pass them to the endpoint, and generate predictions.
Time to simulate online predictions
Once the model is ready to receive prediction requests, you can use the simulate_prediction function to generate them.
In particular, that function:
formats entities for prediction
retrieves static features with singleton lookup operations from Vertex AI Feature Store
runs the prediction request and gets back the result
for the number of requests and the latency you define.
End of explanation
"""
# delete feature store
mobile_gaming_feature_store.delete(sync=True, force=True)
# delete Vertex AI resources
endpoint.undeploy_all()
xgb_model.delete()
# Delete bucket
if (delete_bucket or os.getenv("IS_TESTING")) and "BUCKET_URI" in globals():
! gsutil -m rm -r $BUCKET_URI
# Delete the BigQuery Dataset
!bq rm -r -f -d $PROJECT_ID:$BQ_DATASET
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial
End of explanation
"""
|
Knewton/lentil | nb/synthetic_experiments.ipynb | apache-2.0 | num_students = 2000
num_assessments = 3000
num_ixns_per_student = 1000
USING_2PL = False # False => using 1PL
proficiencies = np.random.normal(0, 1, num_students)
difficulties = np.random.normal(0, 1, num_assessments)
if USING_2PL:
discriminabilities = np.random.normal(0, 1, num_assessments)
else:
discriminabilities = np.ones(num_assessments)
student_ids = ['S'+str(x) for x in xrange(num_students)]
assessment_ids = ['A'+str(x) for x in xrange(num_assessments)]
ixns = [None] * (num_students * num_ixns_per_student)
assessment_idxes = range(num_assessments)
for student_idx, student_id in enumerate(student_ids):
for t in xrange(num_ixns_per_student):
module_idx = random.choice(assessment_idxes)
pass_likelihood = 1 / (1 + math.exp(-(discriminabilities[module_idx]*proficiencies[student_idx] + difficulties[module_idx])))
ixns[student_idx * num_ixns_per_student + t] = {
'student_id' : student_id,
'module_id' : assessment_ids[module_idx],
'module_type' : datatools.AssessmentInteraction.MODULETYPE,
'outcome' : np.random.random() < pass_likelihood,
'timestep' : t+1
}
history = datatools.InteractionHistory(pd.DataFrame(ixns))
history.idx_of_student_id = lambda x: int(x[1:])
history.idx_of_assessment_id = lambda x: int(x[1:])
mirt_model = models.MIRTModel(history, dims=1, using_assessment_factors=USING_2PL)
estimator = est.MIRTMAPEstimator(
regularization_constant=1e-3,
ftol=1e-5,
debug_mode_on=True)
mirt_model.fit(estimator)
onepl_model = models.OneParameterLogisticModel(
history.data, select_regularization_constant=True)
onepl_model.fit()
twopl_model = models.TwoParameterLogisticModel(
history.data, select_regularization_constant=True)
twopl_model.fit()
student_idxes = [int(k[1:]) for k in history.data['student_id'].unique()]
assessment_idxes = [int(k[1:]) for k in history.data['module_id'].unique()]
"""
Explanation: Generate a synthetic 1PL/2PL IRT model and sample an interaction history from it
End of explanation
"""
plt.xlabel('True difficulties')
plt.ylabel('Estimated difficulties')
plt.scatter(difficulties[assessment_idxes], onepl_model.model.coef_[0, num_students:])
plt.show()
plt.xlabel('Estimated difficulty - true difficulty')
plt.ylabel('Frequency (number of assessments)')
plt.hist(onepl_model.model.coef_[0, num_students:] - difficulties[assessment_idxes], bins=20)
plt.show()
plt.xlabel('True proficiencies')
plt.ylabel('Estimated proficiencies')
plt.scatter(proficiencies[student_idxes], onepl_model.model.coef_[0, :num_students])
plt.show()
plt.xlabel('Estimated proficiency - true proficiency')
plt.ylabel('Frequency (number of students)')
plt.hist(onepl_model.model.coef_[0, :num_students] - proficiencies[student_idxes], bins=20)
plt.show()
"""
Explanation: Verify that models.OneParameterLogisticModel can recover parameters. We would only expect this to be possible when USING_2PL = False.
End of explanation
"""
plt.xlabel('True difficulties')
plt.ylabel('Estimated difficulties')
plt.scatter(difficulties[assessment_idxes], twopl_model.model.coef_[0, (num_students*num_assessments):])
plt.show()
plt.xlabel('Estimated difficulty - true difficulty')
plt.ylabel('Frequency (number of assessments)')
plt.hist(twopl_model.model.coef_[0, (num_students*num_assessments):] - difficulties[assessment_idxes], bins=20)
plt.show()
est_params = twopl_model.model.coef_[0, :(num_students*num_assessments)]
true_params = discriminabilities[:, None].dot(proficiencies[:, None].T).ravel()
plt.xlabel('True proficiency*discriminability')
plt.ylabel('Estimated proficiency*discriminability')
plt.scatter(true_params, est_params)
plt.show()
plt.xlabel('Estimated proficiency*discriminability - true proficiency*discriminability')
plt.ylabel('Frequency (number of student-assessment pairs)')
plt.hist(est_params - true_params, bins=20)
plt.show()
"""
Explanation: Verify that models.TwoParameterLogisticModel can recover parameters. We would only expect this to be possible when USING_2PL = True.
End of explanation
"""
plt.xlabel('True difficulties')
plt.ylabel('Estimated difficulties')
plt.scatter(difficulties, mirt_model.assessment_offsets)
plt.show()
plt.xlabel('Estimated difficulty - true difficulty')
plt.ylabel('Frequency (number of assessments)')
plt.hist(mirt_model.assessment_offsets - difficulties, bins=20)
plt.show()
plt.xlabel('True proficiencies')
plt.ylabel('Estimated proficiencies')
plt.scatter(proficiencies, mirt_model.student_factors[:, 0])
plt.show()
plt.xlabel('Estimated proficiency - true proficiency')
plt.ylabel('Frequency (number of students)')
plt.hist(mirt_model.student_factors[:, 0] - proficiencies, bins=20)
plt.show()
plt.xlabel('True discriminabilities')
plt.ylabel('Estimated discriminabilities')
plt.scatter(discriminabilities, mirt_model.assessment_factors[:, 0])
plt.show()
plt.xlabel('Estimated discriminability - true discriminability')
plt.ylabel('Frequency (number of assessments)')
plt.hist(mirt_model.assessment_factors[:, 0] - discriminabilities, bins=20)
plt.show()
"""
Explanation: Verify that models.MIRTModel can recover parameters
End of explanation
"""
# models.OneParameterLogisticModel
evaluate.training_auc(onepl_model, history, plot_roc_curve=True)
# models.TwoParameterLogisticModel
evaluate.training_auc(twopl_model, history, plot_roc_curve=True)
# models.MIRTModel
evaluate.training_auc(mirt_model, history, plot_roc_curve=True)
# true model
true_model = copy.deepcopy(mirt_model)
true_model.student_factors[:, 0] = proficiencies
true_model.assessment_factors[:, 0] = discriminabilities
true_model.assessment_offsets = difficulties
evaluate.training_auc(true_model, history, plot_roc_curve=True)
"""
Explanation: Verify that all models achieve similar training AUCs
End of explanation
"""
num_students = 10000
num_assessment_interactions_per_step = 100
grid_size = 5
embedding_dimension = 2
num_assessments = grid_size ** 2
num_lessons = 2 * grid_size * (grid_size - 1)
num_lesson_interactions_per_student = 2 * (grid_size - 1) + 2
S = np.zeros((num_students, embedding_dimension, num_lesson_interactions_per_student))
A = np.zeros((num_assessments, embedding_dimension))
L = np.zeros((num_lessons, embedding_dimension))
Q = np.zeros((num_lessons, embedding_dimension))
lesson_idx_of_loc = {}
assessment_idx_of_loc = {}
cell_size = 10.0 / (grid_size - 1)  # float division so the grid spans [0, 10] under Python 2
lesson_count = 0
for i in xrange(grid_size):
for j in xrange(grid_size):
A[grid_size * i + j, :] = [i, j]
assessment_idx_of_loc[(i, j)] = grid_size * i + j
if j < grid_size - 1:
Q[lesson_count, :] = [i, j]
L[lesson_count, :] = [0, 1]
lesson_idx_of_loc[(i, j, 0, 1)] = lesson_count
lesson_count += 1
if i < grid_size - 1:
Q[lesson_count, :] = [i, j]
L[lesson_count, :] = [1, 0]
lesson_idx_of_loc[(i, j, 1, 0)] = lesson_count
lesson_count += 1
A *= cell_size
Q *= cell_size
L *= cell_size
A = np.maximum(1e-3, A)
Q = np.maximum(1e-3, Q)

# reverse lookups (index -> grid location), used when sampling interactions below
assessment_loc_of_idx = {v: k for k, v in assessment_idx_of_loc.iteritems()}
lesson_loc_of_idx = {v: k for k, v in lesson_idx_of_loc.iteritems()}
"""
Explanation: Construct a synthetic embedding
End of explanation
"""
id_of_loc = lambda x: '-'.join(str(z) for z in x)
id_of_assessment_idx = lambda idx: id_of_loc(assessment_loc_of_idx[idx])
data = []
for student_idx in xrange(num_students):
student_id = 'S' + str(student_idx)
steps = ([(0, 1)] * (grid_size - 1)) + ([(1, 0)] * (grid_size - 1))
random.shuffle(steps)
x, y = 0, 0
t = 1
assessment_idx = assessment_idx_of_loc[(0, 0)]
assessment_id = id_of_loc(assessment_loc_of_idx[assessment_idx])
pass_likelihood = 1 / (1 + math.exp(-(np.dot(S[student_idx, :, t], A[assessment_idx, :]) / np.linalg.norm(A[assessment_idx, :]) - np.linalg.norm(A[assessment_idx, :]))))
outcome = random.random() < pass_likelihood
data.append({
'student_id' : student_id,
'module_id' : assessment_id,
'module_type' : datatools.AssessmentInteraction.MODULETYPE,
'timestep' : t,
'outcome' : outcome})
for i, j in steps:
lesson_idx = lesson_idx_of_loc[(x, y, i, j)]
lesson_id = id_of_loc(lesson_loc_of_idx[lesson_idx])
data.append({
'student_id' : student_id,
'module_id' : lesson_id,
'module_type' : datatools.LessonInteraction.MODULETYPE,
'timestep' : t,
'outcome' : None})
x += i
y += j
# DEBUG
S[student_idx, :, t+1] = S[student_idx, :, t] + L[lesson_idx, :]# / (1 + math.exp(-(np.dot(S[student_idx, :, t], Q[lesson_idx, :]) / np.linalg.norm(Q[lesson_idx, :]) - np.linalg.norm(Q[lesson_idx, :]))))
t += 1
for _ in xrange(num_assessment_interactions_per_step):
assessment_idx = random.randint(0, num_assessments - 1)
assessment_id = id_of_loc(assessment_loc_of_idx[assessment_idx])
pass_likelihood = 1 / (1 + math.exp(-(np.dot(S[student_idx, :, t], A[assessment_idx, :]) / np.linalg.norm(A[assessment_idx, :]) - np.linalg.norm(A[assessment_idx, :]))))
outcome = random.random() < pass_likelihood
# BEGIN DEBUG
if assessment_idx_of_loc[(0, 0)] == assessment_idx:
outcome = random.random() < 0.1
# END DEBUG
data.append({
'student_id' : student_id,
'module_id' : assessment_id,
'module_type' : datatools.AssessmentInteraction.MODULETYPE,
'timestep' : t,
'outcome' : outcome})
history = datatools.InteractionHistory(pd.DataFrame(data))
assessment_idx_map = {id_of_loc(loc): idx for idx, loc in assessment_loc_of_idx.iteritems()}
lesson_idx_map = {id_of_loc(loc): idx for idx, loc in lesson_loc_of_idx.iteritems()}
history.compute_idx_maps(assessment_idx=assessment_idx_map, lesson_idx=lesson_idx_map)
len(history.data)
history_path = os.path.join('data', 'lse_synthetic_history.pkl')
with open(history_path, 'wb') as f:
pickle.dump(history, f, pickle.HIGHEST_PROTOCOL)
"""
Explanation: Sample interactions from the synthetic embedding
End of explanation
"""
model = models.EmbeddingModel(
history, embedding_dimension=2,
using_lessons=True, using_prereqs=False, using_bias=True,
learning_update_variance_constant=0.5)
estimator = est.EmbeddingMAPEstimator(
regularization_constant=1e-3, using_scipy=True,
debug_mode_on=True, ftol=1e-4)
model.fit(estimator)
model = models.OneParameterLogisticModel(history.data, select_regularization_constant=True)
model.fit()
evaluate.training_auc(model, history, plot_roc_curve=True)
"""
Explanation: Estimate an embedding from the sampled interactions
End of explanation
"""
plt.scatter(A[:, 0], A[:, 1])
for assessment_idx in xrange(num_assessments):
plt.annotate(id_of_assessment_idx(assessment_idx), (A[assessment_idx, 0], A[assessment_idx, 1]))
"""
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i, j + 1)]]
plt.plot(A[assessment_idxes, 0], A[assessment_idxes, 1], c='black')
if i < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i + 1, j)]]
plt.plot(A[assessment_idxes, 0], A[assessment_idxes, 1], c='black')
"""
plt.show()
plt.scatter(model.assessment_embeddings[:, 0], model.assessment_embeddings[:, 1])
for assessment_idx in xrange(num_assessments):
plt.annotate(id_of_assessment_idx(assessment_idx), (model.assessment_embeddings[assessment_idx, 0], model.assessment_embeddings[assessment_idx, 1]))
"""
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i, j + 1)]]
plt.plot(model.assessment_embeddings[assessment_idxes, 0], model.assessment_embeddings[assessment_idxes, 1], c='black')
if i < grid_size - 1:
assessment_idxes = [assessment_idx_of_loc[(i, j)], assessment_idx_of_loc[(i + 1, j)]]
plt.plot(model.assessment_embeddings[assessment_idxes, 0], model.assessment_embeddings[assessment_idxes, 1], c='black')
"""
plt.show()
plt.quiver(Q[:, 0], Q[:, 1], L[:, 0], L[:, 1], pivot='tail', color='black')
"""
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i, j + 1)]]
plt.plot(Q[lesson_idxes, 0], Q[lesson_idxes, 1], c='black')
if i < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i + 1, j)]]
plt.plot(Q[lesson_idxes, 0], Q[lesson_idxes, 1], c='black')
"""
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
plt.quiver(model.prereq_embeddings[:, 0], model.prereq_embeddings[:, 1], model.lesson_embeddings[:, 0], model.lesson_embeddings[:, 1], pivot='tail', color='black')
"""
for i in xrange(grid_size):
for j in xrange(grid_size):
if j < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i, j + 1)]]
plt.plot(model.prereq_embeddings[lesson_idxes, 0], model.prereq_embeddings[lesson_idxes, 1], c='black')
if i < grid_size - 1:
lesson_idxes = [lesson_idx_of_loc[(i, j)], lesson_idx_of_loc[(i + 1, j)]]
plt.plot(model.prereq_embeddings[lesson_idxes, 0], model.prereq_embeddings[lesson_idxes, 1], c='black')
"""
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
right_lesson_idxes = [lesson_idx_of_loc[(i, j, 1, 0)] for i in xrange(grid_size) for j in xrange(grid_size) if (i, j, 1, 0) in lesson_idx_of_loc]
up_lesson_idxes = [lesson_idx_of_loc[(i, j, 0, 1)] for i in xrange(grid_size) for j in xrange(grid_size) if (i, j, 0, 1) in lesson_idx_of_loc]
plt.quiver(0, 0, L[right_lesson_idxes, 0], L[right_lesson_idxes, 1], pivot='tail', color='red', alpha=0.25)
plt.quiver(0, 0, L[up_lesson_idxes, 0], L[up_lesson_idxes, 1], pivot='tail', color='blue', alpha=0.25)
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
plt.quiver(0, 0, model.lesson_embeddings[right_lesson_idxes, 0], model.lesson_embeddings[right_lesson_idxes, 1], pivot='tail', color='red', alpha=0.25)
plt.quiver(0, 0, model.lesson_embeddings[up_lesson_idxes, 0], model.lesson_embeddings[up_lesson_idxes, 1], pivot='tail', color='blue', alpha=0.25)
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.xlim([-1, 11])
plt.ylim([-1, 11])
plt.show()
plt.scatter(L[right_lesson_idxes, 0], L[right_lesson_idxes, 1], color='red', label='1-0')
plt.scatter(L[up_lesson_idxes, 0], L[up_lesson_idxes, 1], color='blue', label='0-1')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.legend(loc='best')
plt.show()
plt.scatter(model.lesson_embeddings[right_lesson_idxes, 0], model.lesson_embeddings[right_lesson_idxes, 1], color='red', label='1-0')
plt.scatter(model.lesson_embeddings[up_lesson_idxes, 0], model.lesson_embeddings[up_lesson_idxes, 1], color='blue', label='0-1')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.legend(loc='best')
plt.show()
student_idxes = random.sample(range(num_students), 10)
for student_idx in student_idxes:
plt.scatter(S[student_idx, 0, :], S[student_idx, 1, :], c='black')
for i in xrange(num_lesson_interactions_per_student):
plt.plot(S[student_idx, 0, i:(i+2)], S[student_idx, 1, i:(i+2)], c='black')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.show()
for student_idx in student_idxes:
plt.scatter(model.student_embeddings[student_idx, 0, :], model.student_embeddings[student_idx, 1, :], c='black')
for i in xrange(num_lesson_interactions_per_student):
plt.plot(model.student_embeddings[student_idx, 0, i:(i+2)], model.student_embeddings[student_idx, 1, i:(i+2)], c='black')
plt.xlabel('Skill 1')
plt.ylabel('Skill 2')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.show()
for student_idx in student_idxes:
for i in xrange(embedding_dimension):
        plt.plot(S[student_idx, i, :], '-s', label='Skill %d' % (i + 1))
plt.xlabel('Timestep')
plt.ylabel('Skill')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.legend(loc='best')
plt.show()
for student_idx in student_idxes:
for i in xrange(embedding_dimension):
        plt.plot(model.student_embeddings[student_idx, i, :], '-s', label='Skill %d' % (i + 1))
plt.xlabel('Timestep')
plt.ylabel('Skill')
plt.title('student_id = %s' % history.id_of_student_idx(student_idx))
plt.legend(loc='best')
plt.show()
"""
Explanation: Visualize the estimated embedding vs. the true embedding
End of explanation
"""
|
MLnick/sseu16-meetup | Creating a Scalable Recommender System with Spark & Elasticsearch.ipynb | apache-2.0 | from elasticsearch import Elasticsearch
es = Elasticsearch()
create_index = {
"settings": {
"analysis": {
"analyzer": {
"payload_analyzer": {
"type": "custom",
"tokenizer":"whitespace",
"filter":"delimited_payload_filter"
}
}
}
},
"mappings": {
"ratings": {
"properties": {
"timestamp": {
"type": "date"
},
"userId": {
"type": "string",
"index": "not_analyzed"
},
"movieId": {
"type": "string",
"index": "not_analyzed"
},
"rating": {
"type": "double"
}
}
},
"users": {
"properties": {
"name": {
"type": "string"
},
"@model": {
"properties": {
"factor": {
"type": "string",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
},
"movies": {
"properties": {
"genres": {
"type": "string"
},
"original_language": {
"type": "string",
"index": "not_analyzed"
},
"image_url": {
"type": "string",
"index": "not_analyzed"
},
"release_date": {
"type": "date"
},
"popularity": {
"type": "double"
},
"@model": {
"properties": {
"factor": {
"type": "string",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
},
"version": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}
# create index with the settings & mappings above
es.indices.create(index="demo", body=create_index)
"""
Explanation: Overview
Create Elasticsearch Mappings
Load data into Elasticsearch (see Enrich & Prepare MovieLens Dataset.ipynb)
Load ratings data and run ALS
Save ALS model factors to Elasticsearch
Show similar items using Elasticsearch
1. Set up Elasticsearch mappings
References:
* Create index request
* Delimited payload filter
* Term vectors
* Mapping
End of explanation
"""
user_df = sqlContext.read.format("es").load("demo/users")
user_df.printSchema()
user_df.select("userId", "name").show(5)
movie_df = sqlContext.read.format("es").load("demo/movies")
movie_df.printSchema()
movie_df.select("movieId", "title", "genres", "release_date", "popularity").show(5)
ratings_df = sqlContext.read.format("es").load("demo/ratings")
ratings_df.printSchema()
ratings_df.show(5)
"""
Explanation: Load User, Movie and Ratings DataFrames from Elasticsearch
Show schemas
End of explanation
"""
from pyspark.ml.recommendation import ALS
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating", regParam=0.1, rank=20)
model = als.fit(ratings_df)
model.userFactors.show(5)
model.itemFactors.show(5)
"""
Explanation: 2. Run ALS
End of explanation
"""
from pyspark.sql.types import *
from pyspark.sql.functions import udf, lit
def convert_vector(x):
'''Convert a list or numpy array to delimited token filter format'''
return " ".join(["%s|%s" % (i, v) for i, v in enumerate(x)])
def reverse_convert(s):
'''Convert a delimited token filter format string back to list format'''
return [float(f.split("|")[1]) for f in s.split(" ")]
def vector_to_struct(x, version):
'''Convert a vector to a SparkSQL Struct with string-format vector and version fields'''
return (convert_vector(x), version)
vector_struct = udf(vector_to_struct, \
StructType([StructField("factor", StringType(), True), \
StructField("version", StringType(), True)]))
# test out the vector conversion function
test_vec = model.userFactors.select("features").first().features
print test_vec
print
print convert_vector(test_vec)
"""
Explanation: 3. Write ALS user and item factors to Elasticsearch
Utility functions for converting factor vectors
End of explanation
"""
ver = model.uid
movie_vectors = model.itemFactors.select("id", vector_struct("features", lit(ver)).alias("@model"))
movie_vectors.select("id", "@model.factor", "@model.version").show(5)
user_vectors = model.userFactors.select("id", vector_struct("features", lit(ver)).alias("@model"))
user_vectors.select("id", "@model.factor", "@model.version").show(5)
# write data to ES, use:
# - "id" as the column to map to ES movie id
# - "update" write mode for ES
# - "append" write mode for Spark
movie_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/movies", mode="append")
user_vectors.write.format("es") \
.option("es.mapping.id", "id") \
.option("es.write.operation", "update") \
.save("demo/users", mode="append")
"""
Explanation: Convert factor vectors to [factor, version] form and write to Elasticsearch
End of explanation
"""
es.search(index="demo", doc_type="movies", q="star wars force", size=1)
"""
Explanation: Check the data was written correctly
End of explanation
"""
from IPython.display import Image, HTML, display
def fn_query(query_vec, q="*", cosine=False):
return {
"query": {
"function_score": {
"query" : {
"query_string": {
"query": q
}
},
"script_score": {
"script": "payload_vector_score",
"lang": "native",
"params": {
"field": "@model.factor",
"vector": query_vec,
"cosine" : cosine
}
},
"boost_mode": "replace"
}
}
}
def get_similar(the_id, q="*", num=10, index="demo", dt="movies"):
response = es.get(index=index, doc_type=dt, id=the_id)
src = response['_source']
if '@model' in src and 'factor' in src['@model']:
raw_vec = src['@model']['factor']
# our script actually uses the list form for the query vector and handles conversion internally
query_vec = reverse_convert(raw_vec)
q = fn_query(query_vec, q=q, cosine=True)
results = es.search(index, dt, body=q)
hits = results['hits']['hits']
return src, hits[1:num+1]
def display_similar(the_id, q="*", num=10, index="demo", dt="movies"):
movie, recs = get_similar(the_id, q, num, index, dt)
# display query
q_im_url = movie['image_url']
display(HTML("<h2>Get similar movies for:</h2>"))
display(Image(q_im_url, width=200))
display(HTML("<br>"))
display(HTML("<h2>Similar movies:</h2>"))
sim_html = "<table border=0><tr>"
i = 0
for rec in recs:
r_im_url = rec['_source']['image_url']
r_score = rec['_score']
sim_html += "<td><img src=%s width=200></img></td><td>%2.3f</td>" % (r_im_url, r_score)
i += 1
if i % 5 == 0:
sim_html += "</tr><tr>"
sim_html += "</tr></table>"
display(HTML(sim_html))
display_similar(122886, num=5)
display_similar(122886, num=5, q="title:(NOT trek)")
display_similar(6377, num=5, q="genres:children AND release_date:[now-2y/y TO now]")
"""
Explanation: 4. Recommend using Elasticsearch!
End of explanation
"""
|
catherinedevlin/sql_quest | sqlquest.ipynb | cc0-1.0 | !libreoffice data/aelfryth.odt
"""
Explanation: SQLQuest
Catherine Devlin
Ohio LinuxFest 2015, Oct 3
https://github.com/catherinedevlin/sql_quest
Me
Database administrator since 1999
Python programmer since 2003
First chair of PyOhio
catherinedevlin.blogspot.com, @catherinedevlin
Your employee
You
Total SQL n00b. Please!
Adventure Synopsis
Basic: CSV
Advanced: JSON
Expert: Relational
In the beginning...
Into Electronica
All but useless electronically
End of explanation
"""
!libreoffice data/party.ods
"""
Explanation: CSV
End of explanation
"""
!cat data/party.csv
from csv import DictReader
from pprint import pprint
with open('data/party.csv') as infile:
party = list(DictReader(infile))
pprint(party)
"""
Explanation:
End of explanation
"""
total_weight = 0
for character in party:
if character['class'] == 'Fighter':
for eq_index in range(1, 4):
try:
quantity = int(character['equip %d quantity' % eq_index])
weight_each = float(character['equip %d weight each' % eq_index])
total_weight += quantity * weight_each
except (TypeError, ValueError):
pass
print(total_weight)
"""
Explanation: Answering questions
How much weight are the party's fighters carrying?
End of explanation
"""
!libreoffice data/party.multisheet.ods
"""
Explanation: Save vs. Confusion
custom program
hardcoded number of equipment slots
Relational (from CSV)
End of explanation
"""
from csv import DictReader
from pprint import pprint
with open('data/party.multisheet/stats-Table 1.csv') as infile:
stats = list(DictReader(infile))
with open('data/party.multisheet/equipment-Table 1.csv') as infile:
equipment = list(DictReader(infile))
pprint(stats)
total_weight = 0
for character in stats:
if character['class'] == 'Fighter':
for itm in equipment:
if itm['owner'] == character['name']:
try:
quantity = int(itm['quantity'])
weight_each = float(itm['weight each'])
total_weight += quantity * weight_each
except (TypeError, ValueError):
pass
print(total_weight)
"""
Explanation:
End of explanation
"""
!cat data/party.json
"""
Explanation: Still ouch
JSON
arbitrary structure
End of explanation
"""
import json
import glob
total_weight = 0
with open('data/party.json') as infile:
for character in json.load(infile):
if character['class'] == 'Fighter':
for itm in character['equipment'].values():
total_weight += (itm['quantity'] * itm['weight each'])
print(total_weight)
"""
Explanation: How much does all the fighters' equipment weigh?
End of explanation
"""
!echo "db.party.drop()" | mongo
!mongoimport --collection party --jsonArray data/party.json
!echo "db.party.find()" | mongo
"""
Explanation: Database management system
Multiple access
Performance
> memory
> 1 machine
Document databases
Mongo
RethinkDB
PostgreSQL
End of explanation
"""
!echo "db.party.find({class: 'Fighter'}, {'equipment': 1, _id: 0})" | mongo
from pymongo import MongoClient
client = MongoClient()
total_weight = 0.0
for character in client.test.party.find({'class':'Fighter'}):
for itm in character['equipment'].values():
total_weight += itm['weight each'] * itm['quantity']
print(total_weight)
"""
Explanation: How much weight are the party's fighters carrying?
End of explanation
"""
party.append( {'name': 'Mickey',
'class': 'Druid/Gunslinger/Paladin/Illusionist/Assassin',
'level': 87,
'psionic level': 19} )
"""
Explanation: Consistency
End of explanation
"""
!cat sql/ddl/create_char_tbl.sql
"""
Explanation: JSON schema enforcement
kwalify, marshmallow, ...
RDBMS
Relational DataBase Management System
PostgreSQL
SQLite
MySQL / MariaDB
RDBMS provides
scale
data consistency
query language: SQL
tools
DDL
Data Definition Language
Create a database and connect to it
$ psql template1 dungeonmaster
Password for user dungeonmaster:
psql (9.3.6)
Type "help" for help.
template1=# create database dnd;
CREATE DATABASE
template1=# \c dnd
You are now connected to database "dnd" as user "dungeonmaster".
dnd=#
End of explanation
"""
!cat sql/dml/ins_single_char.sql
"""
Explanation: dnd=# \i sql/ddl/create_char_tbl.sql
CREATE TABLE
DML
Data Manipulation Language
End of explanation
"""
!cat sql/dml/godric2.sql
"""
Explanation: dnd=# \i sql/dml/ins_single_char.sql
INSERT 0 1
SELECT
dnd=# SELECT * FROM character;
name | class | role | level | strength | intelligence | wisdom | dexterity | constitution | charisma
----------+---------+-------+-------+----------+--------------+--------+-----------+--------------+----------
Aelfryth | Fighter | Thane | 5 | 16 | 11 | 14 | 15 | 15 | 14
(1 row)
Mass insert
dnd=# DELETE FROM character;
DELETE 4
dnd=# \copy character FROM 'data/party.multisheet/stats-Table 1.csv' WITH csv HEADER;
COPY 4
Selective SELECT
dnd=# SELECT name, class, role FROM character;
name | class | role
-----------+-----------+----------------
Aelfryth | Fighter | Thane
Godric | Fighter | Warband Member
Leofflaed | Sourceror | Enigma
Wigstan | Thief | Runaway Thrall
(4 rows)
WHERE
dnd=# SELECT name, class, role
dnd-# FROM character
dnd-# WHERE class = 'Fighter';
name | class | role
----------+---------+----------------
Aelfryth | Fighter | Thane
Godric | Fighter | Warband Member
(2 rows)
AND
dnd=# SELECT name, class, role, intelligence
dnd-# FROM character
dnd-# WHERE class = 'Fighter'
dnd-# AND intelligence > 10;
name | class | role | intelligence
----------+---------+-------+--------------
Aelfryth | Fighter | Thane | 11
(1 row)
Level up
dnd=# SELECT name, level FROM character;
name | level
-----------+-------
Aelfryth | 5
Godric | 4
Leofflaed | 5
Wigstan | 3
(4 rows)
UPDATE
dnd=# UPDATE character
dnd-# SET level = 4
dnd-# WHERE name = 'Wigstan';
UPDATE 1
dnd=# SELECT name, level FROM character;
name | level
-----------+-------
Aelfryth | 5
Godric | 4
Leofflaed | 5
Wigstan | 4
(4 rows)
Unique identifiers
dnd=# INSERT INTO character
dnd-# SELECT * FROM character
dnd-# WHERE name = 'Wigstan';
INSERT 0 1
dnd=# SELECT name, class, level FROM character;
name | class | level
-----------+-----------+-------
Aelfryth | Fighter | 5
Godric | Fighter | 4
Leofflaed | Sourceror | 5
Wigstan | Thief | 4
Wigstan | Thief | 4
(5 rows)
Primary Key
numeric
dnd=# ALTER TABLE character ADD
dnd-# id SERIAL PRIMARY KEY;
ALTER TABLE
dnd=# SELECT id, name, class FROM character;
id | name | class
----+-----------+-----------
1 | Aelfryth | Fighter
2 | Godric | Fighter
3 | Leofflaed | Sourceror
4 | Wigstan | Thief
5 | Wigstan | Thief
(5 rows)
dnd=# UPDATE character
dnd-# SET name = 'Wigmund'
dnd-# WHERE id = 5;
UPDATE 1
dnd=# ALTER TABLE character DROP id;
ALTER TABLE
Text PK
dnd=# ALTER TABLE character
dnd-# ADD PRIMARY KEY (name);
ALTER TABLE
End of explanation
"""
!cat sql/ddl/create_equip_tbl.sql
"""
Explanation: dnd=# \i sql/dml/godric2.sql
psql:sql/dml/godric2.sql:8: ERROR: duplicate key value violates unique constraint "character_pkey"
DETAIL: Key (name)=(Godric) already exists.
Relations
End of explanation
"""
!cat sql/dml/mickey.sql
"""
Explanation: dnd=# \i sql/ddl/create_equip_tbl.sql
CREATE TABLE
dnd=# \copy equipment FROM 'data/party.multisheet/equipment-Table 1.csv' WITH csv HEADER
COPY 14
Join
All Leofflaed's gear
dnd=# SELECT c.name, e.name, e.magic
FROM character c
JOIN equipment e ON (e.owner = c.name)
WHERE c.name = 'Leofflaed';
name | name | magic
-----------+-------------------+-------
Leofflaed | Oil of Revelation | Y
Leofflaed | Unfailing Flint | Y
Leofflaed | lamp |
Weight of all fighters' gear
dnd=# SELECT SUM(quantity * weight_each)
dnd-# FROM equipment e
dnd-# JOIN character c ON (e.owner = c.name)
dnd-# WHERE class = 'Fighter';
sum
------
57.2
(1 row)
Constraints
End of explanation
"""
!cat sql/dml/thief_inventories.sql
"""
Explanation: dnd=# \i sql/dml/mickey.sql
ERROR: column "psionic_level" of relation "character" does not exist
LINE 3: psionic_level,
Column constraints
CREATE TABLE character (
name TEXT,
class VARCHAR(14),
...
dnd=# ALTER TABLE character ALTER COLUMN class TYPE VARCHAR(14);
INSERT INTO character (
...
ERROR: value too long for type character varying(14)
Foreign key constraints
Check constraints
CREATE TABLE character (
...
level INTEGER,
...
dnd=# ALTER TABLE character ADD CONSTRAINT reasonable_level CHECK (level < 15);
ERROR: new row for relation "character" violates check constraint "character_level_check"
DETAIL: Failing row contains (Mickey, Druid, Demigod, 87, 19, 19, 18, 18, 19, 17).
Transactions
End of explanation
"""
import psycopg2
import psycopg2.extras
conn = psycopg2.connect('dbname=dnd user=dungeonmaster password=gygax')
curs = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
curs.execute("SELECT * FROM character")
for character in curs.fetchall():
print(character)
if character['class'] == 'Fighter':
print('{name}, get in front!'.format(**character))
conn.close()
"""
Explanation: dnd=# \i sql/dml/thief_inventories.sql
name | owner
---------+---------
cudgel | Wigstan
cookpot | Wigstan
dagger | Wigstan
torches | Wigstan
(4 rows)
dnd=# UPDATE equipment
dnd-# SET owner = 'Wigmund'
dnd-# WHERE owner = 'Wigstan'
dnd-# AND name = 'cudgel';
UPDATE 1
dnd=# \i sql/dml/thief_inventories.sql
name | owner
---------+---------
cookpot | Wigstan
dagger | Wigstan
torches | Wigstan
cudgel | Wigmund
(4 rows)
dnd=# BEGIN TRANSACTION;
BEGIN
dnd=# UPDATE equipment
dnd-# SET owner = 'Wigmund'
dnd-# WHERE owner = 'Wigstan'
dnd-# AND name = 'dagger';
UPDATE 1
dnd=# SELECT name, owner
dnd-# FROM equipment
dnd-# WHERE owner LIKE 'Wig%';
name | owner
---------+---------
cookpot | Wigstan
torches | Wigstan
cudgel | Wigmund
dagger | Wigmund
(4 rows)
dnd=# ROLLBACK;
ROLLBACK
dnd=# SELECT name, owner
dnd-# FROM equipment
dnd-# WHERE owner LIKE 'Wig%';
name | owner
---------+---------
cookpot | Wigstan
torches | Wigstan
dagger | Wigstan
cudgel | Wigmund
(4 rows)
opposite is COMMIT
dnd=# update equipment set owner = 'Wigstan' WHERE owner = 'Wigmund';
UPDATE 0
dnd=#
Aggregate functions
dnd=# SELECT MAX(charisma) FROM character;
max
-----
14
(1 row)
Think line count
dnd=# SELECT name, MAX(charisma) FROM character;
ERROR: column "character.name" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT name, MAX(charisma) FROM character;
Subquery
dnd=# SELECT name, charisma
dnd-# FROM character
dnd-# WHERE charisma = (SELECT MAX(charisma) FROM character);
name | charisma
----------+----------
Aelfryth | 14
(1 row)
GROUP BY
dnd=# SELECT owner, SUM(cost)
dnd-# FROM equipment
dnd-# GROUP BY owner
dnd-# ORDER BY SUM(cost) DESC;
owner | sum
-----------+--------
Leofflaed | 550.50
Aelfryth | 41.10
Godric | 5.12
Wigstan | 2.42
(4 rows)
Outer joins
dnd=# SELECT c.name, c.class, SUM(e.cost)
dnd-# FROM character c
dnd-# JOIN equipment e ON (e.owner = c.name)
dnd-# GROUP BY c.name, c.class
dnd-# ORDER BY SUM(e.cost) DESC;
name | class | sum
-----------+-----------+--------
Leofflaed | Sourceror | 550.50
Aelfryth | Fighter | 41.10
Godric | Fighter | 5.12
Wigstan | Thief | 2.42
(4 rows)
dnd=# SELECT c.name, c.class, SUM(e.cost)
FROM character c
LEFT OUTER JOIN equipment e ON (e.owner = c.name)
GROUP BY c.name, c.class
ORDER BY SUM(e.cost) DESC;
name | class | sum
-----------+-----------+--------
Wigmund | Thief |
Leofflaed | Sourceror | 550.50
Aelfryth | Fighter | 41.10
Godric | Fighter | 5.12
Wigstan | Thief | 2.42
(5 rows)
... optionally, NULLS LAST
Traps
The NULL trap (3-value logic)
dnd=# SELECT name, magic
dnd-# FROM equipment
dnd-# WHERE magic IS NOT NULL;
name | magic
-------------------+-------
battleaxe | +1
Oil of Revelation | Y
Unfailing Flint | Y
(3 rows)
dnd=# SELECT name, magic
dnd-# FROM equipment
dnd-# WHERE magic = NULL;
name | magic
------+-------
(0 rows)
dnd=# SELECT 1 = 1;
?column?
----------
t
(1 row)
dnd=# SELECT 1 = 2;
?column?
----------
f
(1 row)
dnd=# SELECT NULL = NULL;
?column?
----------
(1 row)
dnd=# SELECT NULL != NULL;
?column?
----------
(1 row)
dnd=# SELECT c.name
dnd-# FROM character c
dnd-# JOIN equipment e ON (e.owner = c.name)
dnd-# AND e.magic != NULL;
name
------
(0 rows)
dnd=# SELECT c.name AS owner, e.name
dnd=# FROM character c
dnd=# JOIN equipment e ON (c.name = e.owner)
dnd=# WHERE magic IS NULL;
owner | name
-----------+----------
Aelfryth | shield
Aelfryth | Tent
Aelfryth | bow
Aelfryth | arrows
Godric | 30’ rope
Godric | torches
Leofflaed | lamp
Wigstan | cookpot
Wigstan | torches
Wigstan | dagger
Wigmund | cudgel
(11 rows)
Missing WHERE trap
dnd=# SELECT name, level
dnd-# FROM character;
name | level
-----------+-------
Aelfryth | 5
Godric | 4
Leofflaed | 5
Wigstan | 5
Wigmund | 5
(5 rows)
dnd=# UPDATE character
dnd-# SET level = 6;
UPDATE 5
dnd=# SELECT name, level
dnd-# FROM character;
name | level
-----------+-------
Aelfryth | 6
Godric | 6
Leofflaed | 6
Wigstan | 6
Wigmund | 6
(5 rows)
Unqualified DELETE = apocalypse
dnd=# BEGIN TRANSACTION;
BEGIN
dnd=# DELETE FROM equipment;
DELETE 14
dnd=# SELECT * FROM equipment;
name | quantity | weight_each | cost | magic | materials | notes | owner | damage
------+----------+-------------+------+-------+-----------+-------+-------+--------
(0 rows)
dnd=# ROLLBACK;
ROLLBACK
Triggers
Programming
End of explanation
"""
|
Danghor/Algorithms | Python/Chapter-02/Power.ipynb | gpl-2.0 | def power(m, n):
r = 1
for i in range(n):
r *= m
return r
power(2, 3), power(3, 2)
%%time
p = power(3, 500000)
"""
Explanation: Efficient Computation of Powers
The function power takes two natural numbers $m$ and $n$ and computes $m^n$. Our first implementation is inefficient and takes $n-1$ multiplication to compute $m^n$.
End of explanation
"""
def power(m, n):
if n == 0:
return 1
p = power(m, n // 2)
if n % 2 == 0:
return p * p
else:
return p * p * m
%%time
p = power(3, 500000)
"""
Explanation: Next, we try a recursive implementation that is based on the following two equations:
1. $m^0 = 1$
2. $m^n = \left\{\begin{array}{ll}
m^{\lfloor n/2\rfloor} \cdot m^{\lfloor n/2\rfloor} & \mbox{if $n$ is even}; \\
m^{\lfloor n/2\rfloor} \cdot m^{\lfloor n/2\rfloor} \cdot m & \mbox{if $n$ is odd}.
\end{array}
\right.
$
End of explanation
"""
|
catalystcomputing/DSIoT-Python-sessions | Session3/code/04 Unsupervised and supervised Learning.ipynb | apache-2.0 | # Let's import the relevant packages first
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import manifold
import gzip, pickle  # in Python 2 this was the cPickle module
import pandas as pd
from sklearn.cluster import KMeans
from sklearn import metrics
"""
Explanation: Supervised and Unsupervised learning example
We are going to try to identify handwritten digits.
These handwritten digits are contained in the MNIST data set.
You can download it from : http://deeplearning.net/data/mnist/mnist.pkl.gz
The script decompresses it on the fly with gzip, and assumes the zipped data set is in the ../data directory.
End of explanation
"""
# Unzip and load the data set
import pickle
with gzip.open("../data/mnist.pkl.gz", "rb") as f:
    # encoding='latin1' is needed to read this Python 2 pickle under Python 3
    train, val, test = pickle.load(f, encoding="latin1")
"""
Explanation: Part 1 : Load the data
End of explanation
"""
train_data = train[0]
train_class = train[1]
print(train_data.shape)
"""
Explanation: Part 2 : Data exploration
Let's say a word about the data.
train contains 2 arrays: a data array and a target array
The data is stored in train[0]
The targets (= class of the digit) are stored in train[1]
End of explanation
"""
%matplotlib inline
first_digit = train_data[0]
# reshape the digit to a 28*28 array
first_digit = np.reshape(first_digit, (28,28))
# Then plot the digit
fig = plt.figure()
im = plt.imshow(first_digit, cmap = mpl.cm.Greys)
im.set_interpolation("nearest")
plt.show()
# We recognize a handwritten 5.
# let's look at the actual class of this digit
first_digit_class = train_class[0]
print "First digit class :", first_digit_class
# it's indeed a 5 !
"""
Explanation: It's a 50000 x 784 array :
There are 50000 handwritten digits
Each digit is stored in an array of dimension 784 = 28*28
This array stores the grayscale value of a 28*28 picture of the digit.
Let's visualise the first digit.
N.B. : Close the figure to continue
End of explanation
"""
# Let's define a list of feature names
# We have 784 pixels, let's index them from 0 to 783
lfeat = ["p" + str(i) for i in range(784)]
# Build a dataframe with all features
df_mnist = pd.DataFrame(train_data, columns = lfeat)
# Add the target = digit class
df_mnist["Class"] = train_class
# Let's have a look at the first few rows
df_mnist.iloc[:5,:]
"""
Explanation: Likewise, eval and test contain handwritten digits and their class.
We won't look at those for now.
Part 3 : pandas format
Now that we know the structure of the data, let's put it in a pandas dataframe. It will be easier to manipulate.
End of explanation
"""
# Initialise the kmeans method
# we use 10 clusters under the naive assumption of one cluster per class of digit
km = KMeans(n_clusters=10, n_jobs = -1, precompute_distances=True)
#n_jobs = -1 to speed up with max # of CPU
#precompute_distances = True to speed up algorithm as well
#We'll take a subset of data, otherwise, it takes too much time
data_subset = df_mnist[lfeat].values[:2000]
class_subset = df_mnist["Class"].values[:2000]
#Let's examine the statistics of our subset
for i in range(10):
print "%s samples of class %s" % (len(np.where(class_subset == i)[0]),i)
#Now fit
pred_km = km.fit_predict(data_subset)
"""
Explanation: Part 3 : First attempt at unsupervised classification
Let's see how far we can go with a simple unsupervised learning method
We will use the K-Means algorithm.
The KMeans algorithm clusters data by trying to separate samples in n groups
of equal variance, minimizing a criterion known as the inertia or within-cluster sum-of-squares.
This algorithm requires the number of clusters to be specified.
What KMeans finds are the locations of the centroids, i.e. the mean of each of the 10 groups of equal variance.
Of course, we hope the algorithm has found 10 well-separated groups of points.
KMeans will classify using the following rule: a point is assigned to the nearest cluster (i.e. the group of points whose centroid is closest to this particular point).
N.B.
Let's call i the index of this cluster.
Of course, i is not equal to the class of the digit.
It could very well be that all the 8 digits belong to cluster #3
End of explanation
"""
print "Rand score:", metrics.adjusted_rand_score(class_subset, pred_km)
print "MI:", metrics.adjusted_mutual_info_score(class_subset, pred_km)
print "V:",metrics.v_measure_score(class_subset, pred_km)
"""
Explanation: Part 4 : Measuring the performance
Now we will evaluate the performance of the algorithm.
If we have the ground truth labels (i.e. we know to which class each training sample belongs), we can define the classification performance with metrics that measure the similarity between label assignments.
In our problem, this means that we compare the cluster assignment to the actual class of the digit, ignoring permutations. (cf. N.B. above, the cluster index may not be equal to the class index)
scikit-learn provides a range of such metrics.
We will report scores for three of them :
- Adjusted Rand Index
- Mutual information
- V-measure
All these scores span the [0,1] range, the higher, the better.
End of explanation
"""
tsne = manifold.TSNE(n_components=2, init='pca', random_state=0, method = "barnes_hut")
data_subset_tsne = tsne.fit_transform(data_subset)
#Now let's apply kmeans to the transformed dataset
pred_km_tsne = km.fit_predict(data_subset_tsne)
print "Rand score:", metrics.adjusted_rand_score(class_subset, pred_km_tsne)
print "MI:", metrics.adjusted_mutual_info_score(class_subset, pred_km_tsne)
print "V:",metrics.v_measure_score(class_subset, pred_km_tsne)
"""
Explanation: Part 4 : Improving unsupervised classification with tsne
N.B. You should have sklearn version 0.17 or else tsne will be really slow
We are now going to apply t-sne to the data.
Its advantages are 2-fold
It reduces the feature space (we project the 28*28 dimension feature space to a 2 dimension one) hence allowing easy visualisation
It is sensitive to local structures and may provide much better separation between various classes than traditional methods such as PCA
N.B. This notebook cell may take time to execute
End of explanation
"""
# color map, one color per digit
list_color = ["r", "g", "b", "k", "plum", "pink", "lightseagreen", "blueviolet", "darkgray", "sandybrown"]
# dictionary of colors to be used in the plot
d_color = {}
for i in range(10) :
d_color[i] = list_color[i]
fig = plt.figure()
ax = fig.add_subplot(111)
# Plot the data
for i in range(2000):
ax.text(data_subset_tsne[i,0], data_subset_tsne[i,1], str(class_subset[i]), color=d_color[class_subset[i]], fontsize=12)
# Also plot the cluster centers
for c in km.cluster_centers_ :
ax.plot(c[0], c[1], "x", color = "k", markersize = 15, markeredgewidth=4)
# choose the boundaries of the plot for an ideal view
ax.set_xlim([-2 + min(data_subset_tsne[:,0]),2 + max(data_subset_tsne[:,0])])
ax.set_ylim([-2 + min(data_subset_tsne[:,1]),2 + max(data_subset_tsne[:,1])])
plt.show()
"""
Explanation: This new classification is a vast improvement over the previous one!
We are now going to visualise what exactly has happened:
2D plot of the data projected by tsne
location of the centroids in the data (how well our KMeans algorithm picks up the new structure in the data).
We will see that, while not perfect, the centroid detection works very well for some digits.
End of explanation
"""
|
JuanIgnacioGil/basket-stats | NBA_Keras/Predicting NBA players positions using Keras.ipynb | mit | %load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer, StandardScaler
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
"""
Explanation: Predicting NBA players positions using Keras
In this notebook we will build a neural net to predict the positions of NBA players using the Keras library.
End of explanation
"""
stats = pd.read_csv(r'data/Seasons_Stats.csv', index_col=0)
"""
Explanation: Data preparation
We will use the Kaggle dataset "NBA Players stats since 1950", with stats for all players since 1950. We will take special interest in how the passage of time affects the position of each player, and the definition of the positions themselves (a Small Forward, for example, was absolutely different in the 60's from what it is now)
End of explanation
"""
stats = pd.read_csv(r'data/Seasons_Stats.csv', index_col=0)
stats_clean = stats.drop(['blanl', 'blank2', 'Tm'], axis=1)
stats_clean.head()
"""
Explanation: The file Seasons_Stats.csv contains the statistics of all players since 1950. First, we drop a couple of blank columns, and the "Tm" column, which contains the team.
End of explanation
"""
players = pd.read_csv(r'data/players.csv', index_col=0)
players.head(10)
"""
Explanation: A second file, players.csv, contains static information for each player, as height, weight, etc.
End of explanation
"""
data = pd.merge(stats_clean, players[['Player', 'height', 'weight']], left_on='Player', right_on='Player', right_index=False,
how='left', sort=False).fillna(value=0)
data = data[~(data['Pos']==0) & (data['MP'] > 200)]
data.reset_index(inplace=True, drop=True)
data['Player'] = data['Player'].str.replace('*','')
totals = ['PER', 'OWS', 'DWS', 'WS', 'OBPM', 'DBPM', 'BPM', 'VORP', 'FG', 'FGA', '3P', '3PA', '2P', '2PA', 'FT', 'FTA',
'ORB', 'DRB', 'TRB', 'AST', 'STL', 'BLK', 'TOV', 'PF', 'PTS']
for col in totals:
data[col] = 36 * data[col] / data['MP']
data.tail()
"""
Explanation: We merge both tables, and do some data cleaning:
Keep only players with more than 200 minutes in each season (with an 82-game regular season, that's around 2.5 minutes per game; players with less than that are only anecdotal and would distort the analysis).
Replace the * sign in some of the names
For the stats that represent total values (others, such as TS%, represent percentages), we will take the values per 36 minutes. The reason is to judge every player according to his characteristics, not the time he was on the floor.
End of explanation
"""
X = data.drop(['Player', 'Pos', 'G', 'GS', 'MP'], axis=1).as_matrix()
y = data['Pos'].as_matrix()
encoder = LabelBinarizer()
y_cat = encoder.fit_transform(y)
nlabels = len(encoder.classes_)
scaler =StandardScaler()
Xnorm = scaler.fit_transform(X)
stats2017 = (data['Year'] == 2017)
X_train = Xnorm[~stats2017]
y_train = y_cat[~stats2017]
X_test = Xnorm[stats2017]
y_test = y_cat[stats2017]
"""
Explanation: We will train a neural network with this data, to try to predict the position of each player.
An approach we didn't follow was to transform the positions into numbers from 1 to 5 (1 for a PG, 2 for an SG, 1.5 for a PG-SG, and so on, up to 5 for a C) and use the network for regression instead of classification. But we wanted to see if the network was able to predict labels such as "SG-PF", so we decided to work with the categorical labels. Another reason is that this makes the study more easily portable to other areas.
We convert our DataFrame into a matrix X with the inputs, and a vector y with the labels. We scale the inputs and encode the outputs into dummy variables using the corresponding sklearn utilities.
Instead of a stochastic partition, we decided to use the 2017 season as our test data, and all the previous seasons as the training set.
End of explanation
"""
model = Sequential()
model.add(Dense(40, activation='relu', input_dim=46))
model.add(Dropout(0.5))
model.add(Dense(30, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(nlabels, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
# x_train and y_train are Numpy arrays --just like in the Scikit-Learn API.
model.fit(X_train, y_train, epochs=200, batch_size=128, validation_split=0.2, verbose=1)
model.test_on_batch(X_test, y_test, sample_weight=None)
"""
Explanation: Neural network training
We build with Keras (with TensorFlow as backend) a neural network with two hidden layers. We will use relu activations, except for the last layer, where we use a softmax to properly obtain the label probabilities. We will use 20% of the data as a validation set, to make sure we are not overfitting.
End of explanation
"""
# Production model, using all data
model.fit(X_train, y_train, epochs=200, batch_size=128, validation_split=0, verbose=1)
"""
Explanation: The model performs well on both the validation and the test sets. 65% might not seem a lot, but it is satisfying enough for our problem, where all the labels are very subjective (was Larry Bird an "SF-PF" or a "PF-SF"? Nobody can tell).
Now we train the model again, using all the training data (we still keep the 2017 season out of the training).
End of explanation
"""
first_team_members = ['Russell Westbrook', 'James Harden', 'Anthony Davis', 'LeBron James', 'Kawhi Leonard']
first_team_stats = data[[((x[1]['Player'] in first_team_members) & (x[1]['Year']==2017)) for x in data.iterrows()]]
first_team_stats
pd.DataFrame(index=first_team_stats.loc[:, 'Player'].values, data={'Real': first_team_stats.loc[:, 'Pos'].values,
'Predicted':encoder.inverse_transform(model.predict(Xnorm[first_team_stats.index, :]))})
"""
Explanation: Predicting the positions of the First NBA Team of 2017
As a first test of the model, we will predict the positions of the player in the First NBA Team of 2017
End of explanation
"""
mvp = [(1956, 'Bob Pettit'), (1957, 'Bob Cousy'), (1958, 'Bill Russell'), (1959, 'Bob Pettit'),
(1960, 'Wilt Chamberlain'), (1961, 'Bill Russell'), (1962, 'Bill Russell'), (1963, 'Bill Russell'),
(1964, 'Oscar Robertson'), (1965, 'Bill Russell'), (1966, 'Wilt Chamberlain'), (1967, 'Wilt Chamberlain'),
(1968, 'Wilt Chamberlain'), (1969, 'Wes Unseld'), (1970, 'Willis Reed'), (1971, 'Lew Alcindor'),
(1972, 'Kareem Abdul-Jabbar'), (1973, 'Dave Cowens'), (19704, 'Kareem Abdul-Jabbar'), (1975, 'Bob McAdoo'),
(1976, 'Kareem Abdul-Jabbar'), (1977, 'Kareem Abdul-Jabbar'), (1978, 'Bill Walton'), (1979, 'Moses Malone'),
(1980, 'Kareem Abdul-Jabbar'), (1981, 'Julius Erving'), (1982, 'Moses Malone'), (1983, 'Moses Malone'),
(1984, 'Larry Bird'), (1985, 'Larry Bird'), (1986, 'Larry Bird'), (1987, 'Magic Johnson'),
(1988, 'Michael Jordan'), (1989, 'Magic Johnson'), (1990, 'Magic Johnson'), (1991, 'Michael Jordan'),
(1992, 'Michael Jordan'), (1993, 'Charles Barkley'), (1994, 'Hakeem Olajuwon'), (1995, 'David Robinson'),
(1996, 'Michael Jordan'), (1997, 'Karl Malone'), (1998, 'Michael Jordan'), (1999, 'Karl Malone'),
(2000, 'Shaquille O\'Neal'), (2001, 'Allen Iverson'), (2002, 'Tim Duncan'), (2003, 'Tim Duncan'),
(2004, 'Kevin Garnett'), (2005, 'Steve Nash'), (2006, 'Steve Nash'), (2007, 'Dirk Nowitzki'),
(2008, 'Kobe Bryant'), (2009, 'LeBron James'), (2010, 'LeBron James'), (2011, 'Derrick Rose'),
(2012, 'LeBron James'), (2013, 'LeBron James'), (2014, 'Kevin Durant'), (2015, 'Stephen Curry'),
(2016, 'Stephen Curry')]
mvp_stats = pd.concat([data[(data['Player'] == x[1]) & (data['Year']==x[0])] for x in mvp], axis=0)
mvp_stats
mvp_pred = pd.DataFrame(index=mvp_stats.loc[:, 'Player'].values, data={'Real': mvp_stats.loc[:, 'Pos'].values,
'Predicted':encoder.inverse_transform(model.predict(Xnorm[mvp_stats.index, :]))})
mvp_pred
"""
Explanation: The model gets four of the five right. It's even more interesting that the one it gets wrong, Anthony Davis, can play both the PF and C positions, and that in the last season he played more as a Power Forward than as a Center, as the model predicts:
New Orleans Pelicans Depth Chart - 2016-17.
Predicting the positions of the NBA MVP
We will now use the model to predict the positions of all the NBA MVPs since the creation of the award in 1956.
End of explanation
"""
curry2017 = data[(data['Player'] == 'Stephen Curry') & (data['Year']==2017)]
pettit1956 = data[(data['Player'] == 'Bob Pettit') & (data['Year']==1956)]
time_travel_curry = pd.concat([curry2017 for year in range(1956, 2018)], axis=0)
time_travel_curry['Year'] = range(1956, 2018)
X = time_travel_curry.drop(['Player', 'Pos', 'G', 'GS', 'MP'], axis=1).as_matrix()
y = time_travel_curry['Pos'].as_matrix()
y_cat = encoder.transform(y)
Xnorm = scaler.transform(X)
time_travel_curry_pred = pd.DataFrame(index=time_travel_curry.loc[:, 'Year'].values,
data={'Real': time_travel_curry.loc[:, 'Pos'].values,
'Predicted':encoder.inverse_transform(model.predict(Xnorm))})
time_travel_pettit = pd.concat([pettit1956 for year in range(1956, 2018)], axis=0)
time_travel_pettit['Year'] = range(1956, 2018)
X = time_travel_pettit.drop(['Player', 'Pos', 'G', 'GS', 'MP'], axis=1).as_matrix()
y = time_travel_pettit['Pos'].as_matrix()
y_cat = encoder.transform(y)
Xnorm = scaler.transform(X)
time_travel_pettit_pred = pd.DataFrame(index=time_travel_pettit.loc[:, 'Year'].values,
data={'Real': time_travel_pettit.loc[:, 'Pos'].values,
'Predicted':encoder.inverse_transform(model.predict(Xnorm))})
pd.concat([time_travel_curry_pred,time_travel_pettit_pred],axis=1,keys=['Stephen Curry','Bob Pettit'])
"""
Explanation: The model gets most of the players right, and the errors are always for a contiguous position (it is interesting that the model gets this right without having been provided with any information about the distances between the labels).
Does the year matter?
The definitions of a forward or a center are always changing: in the very recent years, there is, for example, a trend towards having scoring point guards (as Stephen Curry) and forwards that direct the game instead of the guard (as Lebron James).
Also, the physical requirements are increasing, and a height that in the 50's could characterize you as a center will make you a forward today.
We will follow the first and last MVP's, Stephen Curry and Bob Pettit, and see where our model puts them in different years in the NBA history.
End of explanation
"""
magic = data[(data['Player'] == 'Magic Johnson')]
jordan = data[(data['Player'] == 'Michael Jordan')]
# Magic
X = magic.drop(['Player', 'Pos', 'G', 'GS', 'MP'], axis=1).as_matrix()
y = magic['Pos'].as_matrix()
y_cat = encoder.transform(y)
Xnorm = scaler.transform(X)
magic_pred = pd.DataFrame(index=magic.loc[:, 'Age'].values,
data={'Real': magic.loc[:, 'Pos'].values,
'Predicted':encoder.inverse_transform(model.predict(Xnorm))})
# Jordan
X = jordan.drop(['Player', 'Pos', 'G', 'GS', 'MP'], axis=1).as_matrix()
y = jordan['Pos'].as_matrix()
y_cat = encoder.transform(y)
Xnorm = scaler.transform(X)
jordan_pred = pd.DataFrame(index=jordan.loc[:, 'Age'].values,
data={'Real': jordan.loc[:, 'Pos'].values,
'Predicted':encoder.inverse_transform(model.predict(Xnorm))})
pd.concat([magic_pred,jordan_pred],axis=1,keys=['Magic Johnson','Michael Jordan'])
"""
Explanation: Curry is labeled as a point guard (his real position) from 1973 until today, and as a shooting guard before that. Perhaps because of his height (191 cm), or perhaps because he is too much of a scorer. Bob Pettit is labeled as a center until 1967, and as a power forward after that (he played both roles, but nowadays he would struggle to play as a center, and would certainly be a forward, perhaps even a small forward).
Changing positions
Many players go towards more interior roles with age, as they lose velocity. We will follow two cases, Magic Johnson, and Michael Jordan. Both of them retired, and returned years later with more interior roles.
End of explanation
"""
first_team_stats
multiplier = np.arange(0.8,1.2,0.02)
growing_predicted = []
for p in first_team_stats.iterrows():
    growing = pd.concat([p[1].to_frame().T for _ in multiplier], axis=0)
    growing['height'] = growing['height'] * multiplier
    growing['weight'] = growing['weight'] * (multiplier ** 3)  # weight scales roughly with volume, i.e. height cubed
    X = growing.drop(['Player', 'Pos', 'G', 'GS', 'MP'], axis=1).to_numpy()  # .as_matrix() was removed in newer pandas
    y = growing['Pos'].to_numpy()
    y_cat = encoder.transform(y)
    Xnorm = scaler.transform(X)
    growing_predicted.append(pd.DataFrame(index=multiplier,
                                          data={'height': growing.loc[:, 'height'].values,
                                                'Real': growing.loc[:, 'Pos'].values,
                                                'Predicted': encoder.inverse_transform(model.predict(Xnorm))}))
pd.concat(growing_predicted,axis=1,keys=first_team_stats['Player'])
"""
Explanation: The model is able to detect Jordan's conversion into a forward at the end of his career, but not Magic's return as a power forward. Also, in his rookie season Magic is classified as a small forward instead of a shooting guard (Magic was clearly an outlier in the data: a 205 cm point guard who could easily play all five positions. It is even surprising that he is properly labeled as a point guard during most of his career).
How important are height and weight?
A concern we had before training the model was that it would use height and weight as the main discriminators, and that it would mislabel players such as Magic Johnson (a 205 cm point guard) or Charles Barkley (a 196 cm power forward). Somewhat surprisingly, it works properly on these two players.
We will again use the 2017 All-NBA First Team and play with the players' heights and weights. Keeping all other statistics constant, we will vary height and weight and observe how the predicted positions change.
End of explanation
"""
|
hetaodie/hetaodie.github.io | assets/media/uda-ml/code/boston_housing/.Trash-0/files/boston_housing-zh.ipynb | mit | # Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print("Boston housing dataset has {} data points with {} variables each.".format(*data.shape))
"""
Explanation: Machine Learning Nanodegree
Model Evaluation and Validation
Project: Predicting Boston Housing Prices
Welcome to the first hands-on project of the Machine Learning Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to complete the project. You will not need to modify any of the provided code unless explicitly asked to. Sections that begin with "Implementation" indicate that the code block that follows requires additional functionality from you. Each section comes with detailed instructions, and the parts that must be implemented are marked with "TODO". Please read all the hints carefully!
In addition to implementing code, you must answer some questions related to the project and your implementation. Each question you need to answer is titled "Question X". Read each question carefully and provide a complete answer in the text box that begins with "Answer:". Your project will be graded based on both your answers and your implementation.
Please note: Code and Markdown cells can be run with the Shift + Enter shortcut, and Markdown cells can be edited by double-clicking them.
Getting Started
In this project, you will use data on homes in the suburbs of Boston, Massachusetts, to train and test a model and evaluate its performance and predictive power. A model fit to this data can then be used to make certain predictions about a home, in particular its monetary value. A model like this is very valuable for the day-to-day work of people such as real estate agents.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and contains 506 data points covering 14 features of homes in the Boston suburbs. For this project, the following preprocessing has been applied to the original dataset:
16 data points with a 'MEDV' value of 50.0 have been removed, as they likely contain missing or censored values.
1 data point with an 'RM' value of 8.78 has been removed, as it is an outlier.
The features essential to this project are 'RM', 'LSTAT', 'PTRATIO', and 'MEDV'. The remaining, irrelevant features have been removed.
The feature 'MEDV' has been scaled up to account for 35 years of market inflation.
Run the code below to load the Boston housing dataset, along with some of the Python libraries this project requires. If the size of the dataset is returned, the dataset loaded successfully.
End of explanation
"""
# TODO: Minimum price of the data
minimum_price = None
# TODO: Maximum price of the data
maximum_price = None
# TODO: Mean price of the data
mean_price = None
# TODO: Median price of the data
median_price = None
# TODO: Standard deviation of prices of the data
std_price = None
# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${}".format(minimum_price))
print("Maximum price: ${}".format(maximum_price))
print("Mean price: ${}".format(mean_price))
print("Median price ${}".format(median_price))
print("Standard deviation of prices: ${}".format(std_price))
"""
Explanation: Data Exploration
In the first section of this project, you will make a cursory investigation of the Boston housing data. Familiarizing yourself with the data up front will help you better understand and justify your results.
Since the ultimate goal of this project is to build a model that predicts house values, we need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', is the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
Your first coding implementation is to calculate descriptive statistics about the Boston housing prices. numpy has already been imported for you; use this library to perform the necessary calculations. These statistics will be important later on when analyzing your model's predictions.
In the code block below, you will need to implement the following:
Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.
Store each calculation in its corresponding variable.
End of explanation
"""
# TODO: Import 'r2_score'
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = None
# Return the score
return score
"""
Explanation: Question 1 - Feature Observation
As mentioned earlier, we are using three features from the Boston housing data: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
'RM' is the average number of rooms among homes in the neighborhood.
'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
'PTRATIO' is the ratio of students to teachers in the primary and secondary schools of the neighborhood.
Using your intuition, for each of the three features above, would an increase in that feature's value lead to an increase or a decrease in the value of 'MEDV'? Justify your answer.
Hint: This problem can be phrased using examples like these:
Would you expect a home with an 'RM' value (number of rooms) of 6 to be worth more or less than a home with an 'RM' value of 7?
Would you expect a neighborhood with an 'LSTAT' value (percentage of lower-class residents) of 15 to have higher or lower home prices than a neighborhood with an 'LSTAT' value of 20?
Would you expect a neighborhood with a 'PTRATIO' value (ratio of students to teachers) of 10 to have higher or lower home prices than a neighborhood with a 'PTRATIO' value of 15?
Answer:
Developing a Model
In the second section of this project, you will develop the tools and techniques necessary for a model to make predictions. Being able to make accurate evaluations of each model's performance helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance during training and testing. This is typically done by defining some metric based on a type of error or goodness of fit. For this project, you will quantify your model's performance by calculating the coefficient of determination, R<sup>2</sup>. The coefficient of determination is a commonly used statistic in regression analysis and often serves as a measure of how good a model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, capturing the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than always predicting the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable can be explained by the features using this model. A model's R<sup>2</sup> can also be negative, in which case its predictions are worse than simply predicting the mean.
For the performance_metric function in the code block below, you will need to implement the following:
Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
Assign the performance score to the score variable.
End of explanation
"""
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
"""
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Run the code below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
"""
# TODO: Import 'train_test_split'
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = (None, None, None, None)
# Success
print("Training and testing split was successful.")
"""
Explanation: Would you consider this model to have successfully captured the variation of the target variable?
Why or why not?
Hint: The R2 score is the proportion of the variance in the dependent variable that is predictable from the independent variable(s). In other words:
An R2 score of 0 means the dependent variable cannot be predicted from the independent variable(s).
An R2 score of 1 means the dependent variable can be predicted from the independent variable(s).
An R2 score between 0 and 1 indicates the extent to which the dependent variable is predictable.
An R2 score of 0.4 means that 40% of the variance in Y is predictable from X.
Answer:
Implementation: Shuffle and Split Data
In this implementation, you will take the Boston housing dataset and split it into training and testing subsets. Typically, the data is also shuffled during this process, to remove any bias due to the ordering of the dataset.
In the code block below, you will need to implement the following:
Use train_test_split from sklearn.cross_validation to shuffle and split features and prices into training and testing sets.
Split the data so that 80% is used for training and 20% for testing.
Set a value for the random_state of train_test_split; this ensures consistent results.
Assign the splits to X_train, X_test, y_train, and y_test.
End of explanation
"""
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
"""
Explanation: Question 3 - Training and Testing
What is the benefit of splitting a dataset into training and testing subsets at some ratio for a learning algorithm?
Hint: Think about how the data split relates to overfitting and underfitting.
Answer:
Analyzing Model Performance
In the third section of this project, you will examine the learning and testing performance of several models on various subsets of the training data. Additionally, you will focus on one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be helpful in the analysis process, as it can reveal behavior that may not be apparent from the results alone.
Learning Curves
The code region below produces four graphs, showing the performance of a decision tree model at different maximum depths. Each graph visualizes how the learning curves of the model change for both training and testing as the size of the training set increases. Note that the shaded region of a learning curve denotes the uncertainty of that curve (its standard deviation). The model's performance on the training and testing sets is scored using the coefficient of determination, R<sup>2</sup>.
Run the code below and use the graphs to answer the question.
End of explanation
"""
vs.ModelComplexity(X_train, y_train)
"""
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state the model's maximum depth.
What happens to the score of the training curve as more training points are added?
What benefit does adding training points bring to the model?
Hint: Do the learning curves eventually converge to a particular score? Generally speaking, the more data, the better. But if your training and testing curves converge at a particular score that already exceeds your benchmark, is more data necessary?
Think about the pros and cons of adding training points based on whether the training and testing curves are converging.
Answer:
Complexity Curves
The code below produces a graph for a decision tree model that has been trained and validated on the training data at different maximum depths. The graph contains two complexity curves, one for training and one for validation. Similar to the learning curves, the shaded regions denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code below and use this graph to answer the two questions that follow (Q5 and Q6).
"""
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
# sklearn version 0.18: ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None)
    # sklearn version 0.17: ShuffleSplit(n, n_iter=10, test_size=0.1, train_size=None, random_state=None)
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = None
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = None
# TODO: Create the grid search cv object --> GridSearchCV()
# Make sure to include the right parameters in the object:
# (estimator, param_grid, scoring, cv) which have values 'regressor', 'params', 'scoring_fnc', and 'cv_sets' respectively.
grid = None
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
"""
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does it suffer from high variance or high bias?
What about when the model is trained with a maximum depth of 10? Do the graphs above support your conclusions?
Hint: High bias is a sign of underfitting (the model is not complex enough to pick up the subtle variations in the data), while high variance is a sign of overfitting (the model fits the training data too closely and fails to generalize). Think about which of the two models (depth 1 and depth 10) corresponds to bias and which to variance.
Answer:
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best predicts unseen data?
What intuition led you to this answer?
Hint: Look at the graph from Question 5 and inspect the validation scores of the model at different depths. Does the score keep increasing as the depth increases? When do we get the best score without over-complicating the model? Remember, Occam's Razor states that "among competing hypotheses, we should select the one with the fewest assumptions."
Answer:
Evaluating Model Performance
In this final section of the project, you will construct a model and use the optimized model from fit_model to make a prediction on the client's feature set.
Question 7 - Grid Search
What is the grid search technique?
How can it be used to optimize a learning algorithm?
Hint: When explaining the grid search technique, be sure to explain why it is used, what the "grid" represents, and what the goal of the method is. You can also give an example of how this method can be used to optimize a model's parameters.
Answer:
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique?
What benefits does this technique bring to grid search when optimizing a model?
Hint: When explaining the k-fold cross-validation technique, be sure to explain what "k" represents, how the dataset is split into training and testing sets according to "k", and how many times the procedure is run.
When describing the benefits k-fold cross-validation brings to grid search, think about the main drawback of grid search, namely its dependence on particular subsets of the data for training or testing, and how k-fold cross-validation mitigates this. You can refer to the documentation.
Answer:
Implementation: Fitting a Model
In your final implementation, you will bring everything together and train a model using the decision tree algorithm. To make sure you produce an optimized model, you will train the model using grid search, optimizing the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as the number of questions the decision tree is allowed to ask about the data before making a prediction. Decision trees belong to the class of supervised learning algorithms.
In addition, you will find that your implementation uses ShuffleSplit() as an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the k-fold cross-validation technique you described in Question 8, this type of cross-validation is just as useful! The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each set, 20% ('test_size') of the data will be used as the validation set. While completing your implementation, think about how this method differs from and resembles the k-fold cross-validation technique.
Please note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.
For the fit_model function in the code block below, you will need to implement the following:
Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
Assign this object to the 'regressor' variable.
Create a dictionary for 'max_depth' with depths from 1 to 10, and assign it to the 'params' variable.
Use make_scorer from sklearn.metrics to create a scoring function object.
Pass the performance_metric function as a parameter to that object.
Assign this scoring function to the 'scoring_fnc' variable.
Use GridSearchCV from sklearn.grid_search to create a grid search object.
Pass 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
Assign the GridSearchCV object to the 'grid' variable.
End of explanation
"""
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
"""
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can be used to make predictions on new input data. With a decision tree regressor, the model has learned which questions to ask about the input data and returns a prediction for the target variable. You can use these predictions to gain information about data whose target variable value is unknown, such as data the model was not trained on.
Question 9 - Optimal Model
What is the maximum depth of the optimal model? Does this answer match your guess in Question 6?
Run the code region below to fit the decision tree regressor to the training data and obtain the optimized model.
End of explanation
"""
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
"""
Explanation: Hint: The answer comes from the output of the code above.
Answer:
Question 10 - Predicting Selling Prices
Imagine that you are a real estate agent in the Boston area looking to use this model to help your clients price the homes they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell their home at?
Do these prices seem reasonable given the values of the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to justify your answer. Of the three clients, client 3 has the biggest house, in an area with the best public schools and the lowest poverty level, while client 2 has the smallest house, in a neighborhood with a relatively high poverty level and fairly average public schools.
Run the code below to use your optimized model to predict the value of each client's home.
End of explanation
"""
vs.PredictTrials(features, prices, fit_model, client_data)
"""
Explanation: Answer:
Sensitivity
An optimal model is not necessarily a robust model. Sometimes a model is too complex or too simple to generalize well to new data. Sometimes a model uses a learning algorithm that is not appropriate for the structure of the given data. Other times, the data itself may be too noisy or contain too few samples for the model to adequately capture the target variable. In these cases, the model is said to underfit.
Run the fit_model function below ten times with different training and testing sets to see how the model's predictions for a specific client vary with the data it is trained on.
End of explanation
"""
|
nwjs/chromium.src | third_party/tflite_support/src/tensorflow_lite_support/tools/Build_TFLite_Support_Targets.ipynb | bsd-3-clause | # Create folders
!mkdir -p '/android/sdk'
# Download and move android SDK tools to specific folders
!wget -q 'https://dl.google.com/android/repository/tools_r25.2.5-linux.zip'
!unzip 'tools_r25.2.5-linux.zip'
!mv '/content/tools' '/android/sdk'
# Copy paste the folder
!cp -r /android/sdk/tools /android/android-sdk-linux
# Download NDK, unzip and move contents
!wget 'https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip'
!unzip 'android-ndk-r19c-linux-x86_64.zip'
!mv /content/android-ndk-r19c /content/ndk
!mv '/content/ndk' '/android'
# Copy paste the folder
!cp -r /android/ndk /android/android-ndk-r19c
# Remove .zip files
!rm 'tools_r25.2.5-linux.zip'
!rm 'android-ndk-r19c-linux-x86_64.zip'
# Make android ndk executable to all users
!chmod -R go=u '/android'
# Set and view environment variables
%env PATH = /usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/android/sdk/tools:/android/sdk/platform-tools:/android/ndk
%env ANDROID_SDK_API_LEVEL=29
%env ANDROID_API_LEVEL=29
%env ANDROID_BUILD_TOOLS_VERSION=29.0.2
%env ANDROID_DEV_HOME=/android
%env ANDROID_NDK_API_LEVEL=21
%env ANDROID_NDK_FILENAME=android-ndk-r19c-linux-x86_64.zip
%env ANDROID_NDK_HOME=/android/ndk
%env ANDROID_NDK_URL=https://dl.google.com/android/repository/android-ndk-r19c-linux-x86_64.zip
%env ANDROID_SDK_FILENAME=tools_r25.2.5-linux.zip
%env ANDROID_SDK_HOME=/android/sdk
#%env ANDROID_HOME=/android/sdk
%env ANDROID_SDK_URL=https://dl.google.com/android/repository/tools_r25.2.5-linux.zip
#!echo $PATH
!export -p
# Install specific versions of sdk, tools etc.
!android update sdk --no-ui -a \
--filter tools,platform-tools,android-29,build-tools-29.0.2
"""
Explanation: Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Build TensorFlow Lite Support libraries with Bazel
Set up Android environment
End of explanation
"""
# Download Latest version of Bazelisk
!wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
# Make script executable
!chmod +x bazelisk-linux-amd64
# Adding to the path
!sudo mv bazelisk-linux-amd64 /usr/local/bin/bazel
# Extract bazel info
!bazel
# Clone TensorFlow Lite Support repository OR upload your custom folder to build
!git clone https://github.com/tensorflow/tflite-support.git
# Move into tflite-support folder
%cd /content/tflite-support/
!ls
"""
Explanation: Install Bazel with Bazelisk
End of explanation
"""
#@title Select library. { display-mode: "form" }
library = 'Support library' #@param ["Support library", "Task Vision library", "Task Text library", "Task Audio library","Metadata library","C++ image_classifier","C++ image_objector","C++ image_segmenter","C++ image_embedder","C++ nl_classifier","C++ bert_nl_classifier", "C++ bert_question_answerer", "C++ metadata_extractor"]
print('You selected:', library)
if library == 'Support library':
library = '//tensorflow_lite_support/java:tensorflowlite_support.aar'
elif library == 'Task Vision library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/vision:task-library-vision'
elif library == 'Task Text library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/text:task-library-text'
elif library == 'Task Audio library':
library = '//tensorflow_lite_support/java/src/java/org/tensorflow/lite/task/audio:task-library-audio'
elif library == 'Metadata library':
library = '//tensorflow_lite_support/metadata/java:tensorflow-lite-support-metadata-lib'
elif library == 'C++ image_classifier':
library = '//tensorflow_lite_support/cc/task/vision:image_classifier'
elif library == 'C++ image_objector':
library = '//tensorflow_lite_support/cc/task/vision:image_objector'
elif library == 'C++ image_segmenter':
library = '//tensorflow_lite_support/cc/task/vision:image_segmenter'
elif library == 'C++ image_embedder':
library = '//tensorflow_lite_support/cc/task/vision:image_embedder'
elif library == 'C++ nl_classifier':
library = '//tensorflow_lite_support/cc/task/text/nlclassifier:nl_classifier'
elif library == 'C++ bert_nl_classifier':
library = '//tensorflow_lite_support/cc/task/text/nlclassifier:bert_nl_classifier'
elif library == 'C++ bert_question_answerer':
library = '//tensorflow_lite_support/cc/task/text/qa:bert_question_answerer'
elif library == 'C++ metadata_extractor':
library = '//tensorflow_lite_support/metadata/cc:metadata_extractor'
#@title Select platform(s). { display-mode: "form" }
platforms = 'arm64-v8a,armeabi-v7a' #@param ["arm64-v8a,armeabi-v7a","x86", "x86_64", "arm64-v8a", "armeabi-v7a","x86,x86_64,arm64-v8a,armeabi-v7a"]
print('You selected:', platforms)
# Build library
!bazel build \
--fat_apk_cpu='{platforms}' \
'{library}'
"""
Explanation: Build .aar files
End of explanation
"""
|
keras-team/keras-io | examples/generative/ipynb/lstm_character_level_text_generation.ipynb | apache-2.0 | from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import random
import io
"""
Explanation: Character-level text generation with LSTM
Author: fchollet<br>
Date created: 2015/06/15<br>
Last modified: 2020/04/30<br>
Description: Generate text from Nietzsche's writings with a character-level LSTM.
Introduction
This example demonstrates how to use a LSTM model to generate
text character-by-character.
At least 20 epochs are required before the generated text
starts sounding locally coherent.
It is recommended to run this script on GPU, as recurrent
networks are quite computationally intensive.
If you try this script on new data, make sure your corpus
has at least ~100k characters. ~1M is better.
Setup
End of explanation
"""
path = keras.utils.get_file(
"nietzsche.txt", origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt"
)
with io.open(path, encoding="utf-8") as f:
text = f.read().lower()
text = text.replace("\n", " ") # We remove newlines chars for nicer display
print("Corpus length:", len(text))
chars = sorted(list(set(text)))
print("Total chars:", len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i : i + maxlen])
next_chars.append(text[i + maxlen])
print("Number of sequences:", len(sentences))
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)  # np.bool was removed in newer NumPy; plain bool works everywhere
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
"""
Explanation: Prepare the data
End of explanation
"""
model = keras.Sequential(
[
keras.Input(shape=(maxlen, len(chars))),
layers.LSTM(128),
layers.Dense(len(chars), activation="softmax"),
]
)
optimizer = keras.optimizers.RMSprop(learning_rate=0.01)
model.compile(loss="categorical_crossentropy", optimizer=optimizer)
"""
Explanation: Build the model: a single LSTM layer
End of explanation
"""
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype("float64")
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
"""
Explanation: Prepare the text sampling function
End of explanation
"""
epochs = 40
batch_size = 128
for epoch in range(epochs):
model.fit(x, y, batch_size=batch_size, epochs=1)
print()
print("Generating text after epoch: %d" % epoch)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print("...Diversity:", diversity)
generated = ""
sentence = text[start_index : start_index + maxlen]
print('...Generating with seed: "' + sentence + '"')
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.0
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
sentence = sentence[1:] + next_char
generated += next_char
print("...Generated: ", generated)
print()
"""
Explanation: Train the model
End of explanation
"""
|
CloudVLab/professional-services | examples/kubeflow-fairing-example/Fairing_Tensorflow_Keras.ipynb | apache-2.0 | import os
import logging
import tensorflow as tf
import fairing
import numpy as np
from datetime import datetime
from fairing.cloud import gcp
# Setting up google container repositories (GCR) for storing output containers
# You can use any docker container registry istead of GCR
# For local notebook, GCP_PROJECT should be set explicitly
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
GCP_Bucket = os.environ['GCP_BUCKET'] # e.g., 'gs://kubeflow-demo-g/'
# This is for local notebook instead of that in kubeflow cluster
# os.environ['GOOGLE_APPLICATION_CREDENTIALS']=
"""
Explanation: Train a TensorFlow or Keras model on GCP or Kubeflow from notebooks
This notebook introduces you to using Kubeflow Fairing to train models on Kubeflow on Google Kubernetes Engine (GKE) and on Google Cloud AI Platform Training. This notebook demonstrates how to:
Train a Keras model in a local notebook,
Use Kubeflow Fairing to train a Keras model remotely on a Kubeflow cluster,
Use Kubeflow Fairing to train a Keras model remotely on AI Platform Training,
Use Kubeflow Fairing to deploy a trained model to Kubeflow, and call the deployed endpoint for predictions.
You need Python 3.6 to use Kubeflow Fairing.
Setups
Pre-conditions
Deployed a kubeflow cluster through https://deploy.kubeflow.cloud/
Have the following environment variables ready:
PROJECT_ID # the project hosting the Kubeflow cluster or used for AI Platform training
DEPLOYMENT_NAME # the Kubeflow deployment name, the same as the cluster name after deployment
GCP_BUCKET # a Google Cloud Storage bucket
Create service account
bash
export SA_NAME = [service account name]
gcloud iam service-accounts create ${SA_NAME}
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \
--role 'roles/editor'
gcloud iam service-accounts keys create ~/key.json \
--iam-account ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
Authorize for Source Repository
bash
gcloud auth configure-docker
Update local kubeconfig (for submiting job to kubeflow cluster)
bash
export CLUSTER_NAME=${DEPLOYMENT_NAME} # this is the deployment name or the Kubernetes cluster name
export ZONE=us-central1-c
gcloud container clusters get-credentials ${CLUSTER_NAME} --region ${ZONE}
Set the environmental variable: GOOGLE_APPLICATION_CREDENTIALS
bash
export GOOGLE_APPLICATION_CREDENTIALS = ....
python
os.environ['GOOGLE_APPLICATION_CREDENTIALS']=...
Install the lastest version of fairing
python
pip install git+https://github.com/kubeflow/fairing@master
Please note that the configuration above is required for notebook services running outside the Kubeflow environment, and the examples demonstrated in this notebook have been fully tested on such a notebook service as well.
The environment variables, e.g. the service account, project, etc., should have been pre-configured while setting up the cluster.
End of explanation
"""
def gcs_copy(src_path, dst_path):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, dst_path], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
def gcs_download(src_path, file_name):
import subprocess
print(subprocess.run(['gsutil', 'cp', src_path, file_name], stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
class TensorflowModel(object):
def __init__(self):
self.model_file = "mnist_model.h5"
self.model = None
def build(self):
self.model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
self.model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(self.model.summary())
def save_model(self):
self.model.save(self.model_file)
gcs_copy(self.model_file, GCP_Bucket + self.model_file)
def train(self):
self.build()
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = [
# Interrupt training if `val_loss` stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
# Write TensorBoard logs to `./logs` directory
tf.keras.callbacks.TensorBoard(log_dir=GCP_Bucket + 'logs/'
+ datetime.now().date().__str__())
]
self.model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(x_test, y_test))
self.save_model()
    def predict(self, X):
        if not self.model:
            self.model = tf.keras.models.load_model(self.model_file)
        # Do any preprocessing here, then predict and return the result
        prediction = self.model.predict(X)  # Keras predict() takes the input array as its first argument, not a data= kwarg
        return prediction
"""
Explanation: Define the model logic
End of explanation
"""
TensorflowModel().train()
"""
Explanation: Train a Keras model in a notebook
End of explanation
"""
# In this demo, gsutil is used, so we build a custom base image with the Google Cloud SDK installed
base_image = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)
!docker build --build-arg PY_VERSION=3.6.4 . -t {base_image}
!docker push {base_image}
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
BASE_IMAGE = 'gcr.io/{}/fairing-predict-example:latest'.format(GCP_PROJECT)
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job-tf'.format(GCP_PROJECT)
"""
Explanation: Specify an image registry that will hold the images built by Fairing
End of explanation
"""
from fairing import TrainJob
from fairing.backends import GKEBackend
train_job = TrainJob(TensorflowModel, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GKEBackend())
train_job.submit()
"""
Explanation: Deploy the training job to the Kubeflow cluster
End of explanation
"""
fairing.config.set_builder(name='docker', registry=DOCKER_REGISTRY,
base_image=BASE_IMAGE, push=True)
fairing.config.set_deployer(name='tfjob', worker_count=1, ps_count=1)
run_fn = fairing.config.fn(TensorflowModel)
run_fn()
"""
Explanation: Deploy a distributed training job to the Kubeflow cluster
End of explanation
"""
from fairing import TrainJob
from fairing.backends import GCPManagedBackend
train_job = TrainJob(TensorflowModel, BASE_IMAGE, input_files=["requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=GCPManagedBackend())
train_job.submit()
"""
Explanation: Deploy the training job as a CMLE training job
Note that CMLE distributed training is not supported.
End of explanation
"""
# ! tensorboard --logdir=gs://kubeflow-demo-g/logs --host=localhost --port=8777
"""
Explanation: Inspect training process with tensorboard
End of explanation
"""
from fairing import PredictionEndpoint
from fairing.backends import KubeflowGKEBackend
# The mnist_model.h5 file was exported during the local training above
endpoint = PredictionEndpoint(TensorflowModel, BASE_IMAGE, input_files=['mnist_model.h5', "requirements.txt"],
docker_registry=DOCKER_REGISTRY, backend=KubeflowGKEBackend())
endpoint.create()
endpoint.delete()
"""
Explanation: Deploy the trained model to Kubeflow for predictions
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session11/Day2/FindingSourcesSolutions.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from matplotlib.ticker import MultipleLocator
%matplotlib notebook
def pixel_plot(pix, counts, fig=None, ax=None):
'''Make a pixelated 1D plot'''
if fig is None and ax is None:
fig, ax = plt.subplots()
ax.step(pix, counts,
where='post')
ax.set_xlabel('pixel number')
ax.set_ylabel('relative counts')
ax.xaxis.set_minor_locator(MultipleLocator(1))
ax.xaxis.set_major_locator(MultipleLocator(5))
fig.tight_layout()
return fig, ax
# Define your PSF function phi()
# It is sufficient to copy and paste from
# your introductionToBasicStellarPhotometry noteboook
def phi(x, mu, fwhm):
"""Evalute the 1d PSF N(mu, sigma^2) along x
Parameters
----------
x : array-like of shape (n_pixels,)
detector pixel number
mu : float
mean position of the 1D star
fwhm : float
Full-width half-maximum of the stellar profile on the detector
Returns
-------
flux : array-like of shape (n_pixels,)
Flux in each pixel of the input array
"""
sigmaPerFwhm = 2*np.sqrt(2*np.log(2))
sigma = fwhm/sigmaPerFwhm
flux = norm.pdf(x, mu, sigma)
return flux
# Define your image simulation function to
# It is sufficient to copy and paste from
# your introductionToBasicStellarPhotometry noteboook
# Note that the background S should now be supplied as
# an array of length (S) or a constant.
def simulate(x, mu, fwhm, S, F):
"""simulate a noisy stellar signal
Parameters
----------
x : array-like
detector pixel number
mu : float
mean position of the 1D star
fwhm : float
Full-width half-maximum of the stellar profile on the detector
S : float or array-like of len(x)
Sky background for each pixel
F : float
Total stellar flux
Returns
-------
noisy_counts : array-like (same shape as x)
the (noisy) number of counts in each pixel
"""
signal = F * phi(x=x, mu=mu, fwhm=fwhm) + S
noise = np.random.normal(loc=0, scale=np.sqrt(signal))
noisy_counts = signal + noise
return noisy_counts
"""
Explanation: Background Subtraction and Source Detection
Version 0.1
By Yusra AlSayyad (Princeton University)
Note: for portability, the examples in this notebook are one-dimensional and avoid using libraries. In practice on real astronomical images, I recommend using a library for astronomical image processing, e.g. AstroPy or the LSST Stack.
Background Estimation
A prerequisite to this notebook is the introductionToBasicStellarPhotometry.ipynb notebook. We're going to use the same single stellar simulation, but with increasingly complex backgrounds.
First, set up the simulation and the necessary imports
End of explanation
"""
# simulate the star
x = np.arange(100)
mu = 35
S = 100
fwhm = 5
F = 500
fig = plt.figure(figsize=(8,4))
ax = plt.subplot()
sim_star = simulate(x, mu=mu, fwhm=fwhm, S=S, F=F)
pixel_plot(x, sim_star, fig=fig, ax=ax)
# plot and inspect histogram
fig = plt.figure(figsize=(6,4))
plt.hist(sim_star, bins=20)
plt.xlabel('image counts')
plt.ylabel('num pixels')
S_estimate = np.median(sim_star)
plt.axvline(S_estimate, color='red')
plt.axvline(np.mean(sim_star), color='orange')
print('My background estimate = {:.4f}'.format(S_estimate))
print('The mean pixel count = {:.4f}'.format(np.mean(sim_star)))
# plot your background model over the "image"
fig, ax = pixel_plot(x, sim_star)
pixel_plot(x, np.repeat(S_estimate, len(x)), fig=fig, ax=ax)
"""
Explanation: Problem 1) Simple 1-D Background Estimation
Problem 1.1) Estimate the background as a constant offset (order = 0)
For this problem we will use a simulated star with a constant background offset of $S=100$.
Background estimation is typically done by inspecting the distribution of counts in pixel bins. First inspect the distribution of counts, and pick an estimator for the background that is robust to the star (i.e., one that minimizes the bias introduced by the star). Remember that we haven't done detection yet and don't know where the sources are.
End of explanation
"""
# Double check that your simulate function can take S optionally as array-like
# Create and plot the image with S = 3*x + 100
S = 3*x + 100
sim_star = simulate(x=x, mu=mu, fwhm=fwhm, S=S, F=F)
pixel_plot(x, sim_star)
# bin the image in 20-pixel bins
# complete
BIN_SIZE = 20
bins = np.arange(0, 100 + BIN_SIZE, BIN_SIZE)
bin_centers = 0.5 *(bins[0:-1] + bins[1:])
digitized = np.digitize(x, bins=bins)
bin_values = [np.median(sim_star[digitized == i]) for i in range(1, len(bins))]
# Fit the bin_values vs bin_centers with a 1st-order chebyshev polynomial
# Evaluate your model for the full image
# hint: look up np.polynomial.chebyshev.chebfit and np.polynomial.chebyshev.chebeval
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 1)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
# Replot the image:
fig, ax = pixel_plot(x, sim_star)
# binned values
ax.plot(bin_centers, bin_values, 'o')
# Overplot your background model:
ax.plot(x, bg, '-')
# Finally plot your background subtracted image:
fig, ax = pixel_plot(x, sim_star - bg)
"""
Explanation: Problem 1.2) Estimate the background as a ramp/line (order = 1)
Now let's simulate a slightly more complicated background, a linear ramp: $y = 3x + 100$. First simulate the same star with the new background. Then we're going to fit it using the following steps:
* Bin the image
* Use your robust estimator to estimate the background value per bin center
* Fit these bin center with a model
Common simple models that astronomers use are Chebyshev polynomials. Chebyshevs have some very nice properties that prevent ringing at the edges of the fit window. Another popular way to "model" the bin centers is non-parametrically via interpolation.
End of explanation
"""
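The bin-then-fit recipe above can be distilled into a self-contained sketch on a noiseless ramp (illustrative names; no star, so the result is easy to verify):

```python
import numpy as np

x = np.arange(100)
image = 3 * x + 100            # noiseless linear ramp for clarity

# Step 1: bin the image and take a robust estimate per bin
bin_size = 20
bins = np.arange(0, 100 + bin_size, bin_size)
bin_centers = 0.5 * (bins[:-1] + bins[1:])
digitized = np.digitize(x, bins=bins)
bin_values = [np.median(image[digitized == i]) for i in range(1, len(bins))]

# Step 2: fit the bin values with a 1st-order Chebyshev and evaluate on the grid
coeffs = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 1)
model = np.polynomial.chebyshev.chebval(x, coeffs)

residual = image - model
print(f"max |residual| = {np.abs(residual).max():.3f}")
```

The small constant residual (1.5 counts) comes from labeling each bin by its center (10, 30, ...) while the median of the ramp corresponds to the bin's midpoint pixel (9.5, 29.5, ...).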
FWHM_PER_SIGMA = 2*np.sqrt(2*np.log(2))  # FWHM = 2*sqrt(2 ln 2) * sigma, so sigma = fwhm / FWHM_PER_SIGMA
fwhm = 5
x = np.arange(100)
background = 1000*norm.pdf(x, 50, 18) + 100*norm.pdf(x, 20, fwhm/FWHM_PER_SIGMA) + 100*norm.pdf(x, 60, fwhm/FWHM_PER_SIGMA)
sim_star3 = simulate(x=x, mu=35, fwhm=fwhm, S=background, F=200)
fig, ax = pixel_plot(x, sim_star3)
"""
Explanation: Problem 1.3) Estimate a more realistic background (still in 1D)
Now repeat the exercise in problem 1.2 with a more complex background.
End of explanation
"""
BIN_SIZE = 10
bins = np.arange(0, 100 + BIN_SIZE, BIN_SIZE)
bin_centers = 0.5 *(bins[0:-1] + bins[1:])
print(bin_centers)
digitized = np.digitize(x, bins=bins)
bin_values = [np.median(sim_star3[digitized == i]) for i in range(1, len(bins))]
# overplot the binned estimates:
fig, ax = pixel_plot(x, sim_star3)
ax.plot(bin_centers, bin_values, 'o')
"""
Explanation: 1.3.1) Bin the image. Plot the bin centers. What bin size did you pick?
End of explanation
"""
fig, ax = pixel_plot(x, sim_star3)
ax.plot(bin_centers, bin_values, 'o')
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 2)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
ax.plot(x, bg, '-', label='order=2')
coefficients = np.polynomial.chebyshev.chebfit(bin_centers, bin_values, 3)
bg = np.polynomial.chebyshev.chebval(x, coefficients)
ax.plot(x, bg, '-', label='order=3')
ax.legend()
"""
Explanation: 1.3.2) Spatially model the binned estimates (bin_values vs bin_centers) as a chebyshev polynomial.
Evaluate your model on the image grid and overplot. (What degree/order did you pick?)
End of explanation
"""
# Plot the background subtracted image
fig, ax = pixel_plot(x, sim_star3 - bg)
"""
Explanation: 1.3.3) Subtract off the model and plot the "background-subtracted image."
End of explanation
"""
# set up simulation
x = np.arange(100)
mu = 35
S = 100
fwhm = 5
F = 300
fig = plt.figure(figsize=(8,4))
ax = plt.subplot()
sim_star = simulate(x, mu=mu, fwhm=fwhm, S=S, F=F)
# To simplify this pretend we know for sure the background = 100
# Plots the backround subtracted image
image = sim_star - 100
pixel_plot(x, image, fig=fig, ax=ax)
"""
Explanation: And now you can see that this problem is fairly unrealistic as far as background subtraction goes and should probably be treated with a deblender. Typically in images,
* For Chebyshev polynomials we use bin sizes of at least 128 pixels and orders of no more than 6. The spatial scale is controlled by the order, and bin size is less important. In fact you could probably use a bin size of 1 if you really wanted to.
* For interpolation, the spatial scale is controlled by the bin size. We usually choose bins >= 256 pixels.
Problem 2) Finding Sources
Now that we have a background-subtracted image, let's look for sources. In the lecture we focused on the matched filter interpretation. Here we will go into the hypothesis testing and maximum likelihood interpretations.
Maximum likelihood interpretation:
Assume that we know there is a point source somewhere in this image. We want to find the pixel that has the maximum likelihood of having a point source centered on it. Recall from session 10, the probability for an individual observation $X_i$ is:
$$P(X_i) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(X_i - y_i)^2}{2\sigma_i^2}\right)$$
Here: $X_i$ is the pixel value of pixel $i$ in the image and $y_i$ is the model prediction for that pixel.
The model in this case is your simulate() function from the IntroductionToBasicStellarPhotometry.ipynb notebook: the PSF evaluated at a distance from the center, multiplied by the flux amplitude: $F \phi(x - x_{center}) + S$, where $F$ is the flux amplitude, $\phi$ is the PSF profile (a function of position), and $S$ is the background.
Plug it in:
$$P(X_i) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left(-\frac{(X_i - (F \phi_i(x_{center}) + S))^2}{2\sigma_i^2}\right)$$
Hypothesis test interpretation:
If I were teaching source detection to my non-scientist, college stats 101 students, I'd frame the problem like this:
Pretend you have an infinitely large population of pixels. Say I know definitively that this arbitrarily large population of pixels is drawn from $N(0, 100)$ (i.e., it has a variance of 100 and a standard deviation of 10). I have another sample of 13 pixels. I want to test the hypothesis that those 13 pixels were drawn from the $N(0, 100)$ population too.
Test the hypothesis that your subsample of 13 pixels was drawn from the larger sample.
* $H_0$: $\mu = 0$
* $H_A$: $\mu > 0$
$$z = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}} $$
$$z = \frac{\sum{x}/13 - 0}{10 /\sqrt{13}} $$
OK, if this is coming back now, let's replace this with our real estimator for PSF flux, which is a weighted mean of the pixels where the weights are the PSF $\phi_i$. Whenever I forget the formulas for weighted means, I consult the wikipedia page.
Now tweak it for a weighted mean (PSF flux):
$$ z = \frac{\sum{\phi_i x_i} - \mu} {\sqrt{ \sum{\phi_i^2 \sigma_i^2}}} $$
Where the denominator is from the variance estimate of a weighted mean. For constant $\sigma$ it reduces to $\sigma_{\bar{x}}^2 = \sigma^2 \sum{\phi^2_i}$, and for a constant $\phi$ this reduces to $\sigma_{\bar{x}}^2 = \sigma^2 /n$, the denominator in the simple mean example above. Replace $\mu=0$ again.
$$ z = \frac{\sum{\phi_i x_i}} {\sqrt{ \sum{\phi_i^2 \sigma_i^2}}} $$
Our detection map is just the numerator for each pixel! We deal with the denominator later when choosing the threshold, but we could just as easily divide the whole image by the denominator and have a z-score image!
2.0) Plot the problem image
End of explanation
"""
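The z-score image described above can be checked on pure noise (a hedged sketch with illustrative names, separate from the notebook's image): correlating Gaussian noise of known $\sigma$ with a normalized PSF kernel and dividing by $\sigma\sqrt{\sum\phi_i^2}$ should yield values distributed as $N(0, 1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0
noise = rng.normal(0.0, sigma, size=100_000)

# A toy 15-pixel Gaussian PSF kernel, normalized to sum to 1
xx = np.arange(-7, 8)
phi = np.exp(-0.5 * (xx / 2.0) ** 2)
phi /= phi.sum()

numerator = np.convolve(noise, phi, mode='same')       # the detection map
denominator = sigma * np.sqrt(np.sum(phi ** 2))        # sqrt(sum(phi^2 sigma^2))
z = numerator / denominator
print(f"std of z-score image = {z.std():.3f}")
```

The standard deviation of the z-score image comes out near 1, confirming that thresholding it at 5 really does correspond to a 5-sigma test at each pixel.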
xx = np.arange(-7, 8)
kernel = phi(xx, mu=0, fwhm=5)
pixel_plot(xx, kernel)
print(xx)
print(kernel)
"""
Explanation: 2.1) Make a kernel for the PSF.
Properties of kernels: They're centered at x=0 (which also means that they have an odd number of pixels) and sum up to 1. You can use your phi().
End of explanation
"""
import scipy.signal
size = len(kernel)//2
detection_image = scipy.signal.convolve(image, kernel, mode='same')
# mode='same' pads then clips the padding. This is the same as:
# size = len(kernel)//2
# scipy.signal.convolve(image, kernel)[size:-size]
# Note: pay attention to how scipy.signal.convolve handles the edges.
pixel_plot(x, detection_image)
print(len(scipy.signal.convolve(image, kernel, mode='full')))
print(len(scipy.signal.convolve(image, kernel, mode='same')))
print(len(scipy.signal.convolve(image, kernel, mode='valid')))
"""
Explanation: 2.2) Correlate the image with the PSF kernel,
and plot the result.
What are the tradeoffs when choosing the size of your PSF kernel? What happens if it's too big? What happens if it's too small?
hint: scipy.signal.convolve
End of explanation
"""
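The three edge-handling modes printed above are worth pinning down with a quick illustrative check (toy image and kernel, not the notebook's): for an image of length N and a kernel of length K, the output lengths are N+K-1, N, and N-K+1 respectively.

```python
import numpy as np
import scipy.signal

image = np.zeros(100)
image[50] = 1.0                  # a single delta-function "source"
kernel = np.ones(15) / 15.0      # a toy 15-pixel boxcar kernel

full = scipy.signal.convolve(image, kernel, mode='full')    # length N + K - 1
same = scipy.signal.convolve(image, kernel, mode='same')    # length N
valid = scipy.signal.convolve(image, kernel, mode='valid')  # length N - K + 1
print(len(full), len(same), len(valid))
```

mode='same' is convenient because the detection map stays aligned with the image pixel grid, at the cost of less trustworthy values within half a kernel width of the edges.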
# Using a robust estimator for the detection image standard deviation,
# Compute the 5 sigma threshold
N_SIGMA = 5
q1, q2 = np.percentile(detection_image, (30.85, 69.15))
std = q2 - q1  # these percentiles sit at -0.5 and +0.5 sigma for a normal, so their difference is 1 sigma
threshold_value = std * N_SIGMA
print('5 sigma threshold value is = {:.4f}'.format(threshold_value))
"""
Explanation: Answer to the question: Bigger PSF kernels = more accurate convolution, more pixels lost on the edges, and more expensive computation. Smaller kernels don't get close enough to zero on the edges, and the resulting correlated image can look "boxy".
2.3) Detect pixels
for which the null hypothesis that there's no source centered there is ruled out at the 5$\sigma$ level.
End of explanation
"""
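The robustness of the percentile-based sigma estimate can be demonstrated on toy data (illustrative numbers, separate from the notebook's image): for a normal distribution the 30.85th and 69.15th percentiles sit at $-0.5\sigma$ and $+0.5\sigma$, so their difference is one sigma, and a handful of bright "source" pixels barely perturbs it, unlike a plain standard deviation.

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 5.0, size=10_000)
noise[:50] += 1000.0             # 50 contaminating "source" pixels (0.5%)

q1, q2 = np.percentile(noise, (30.85, 69.15))
robust_std = q2 - q1             # percentile-based sigma estimate
plain_std = noise.std()          # naive estimate, wrecked by the outliers
print(f"robust std = {robust_std:.2f}, plain std = {plain_std:.2f}")
```

The robust estimate stays near the true value of 5 while the plain standard deviation is inflated by more than an order of magnitude.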
import scipy.stats

qq = scipy.stats.probplot(detection_image, dist="norm")
plt.plot(qq[0][0], qq[0][1])
plt.ylabel('value')
plt.xlabel('Normal quantiles')
# Just for fun to see what's going on:
fig, ax = pixel_plot(x, detection_image)
plt.axhline(threshold_value)
"""
Explanation: The noise estimate is a little high, but not bad for a first iteration. In future iterations we will mask the footprints detected in the initial round before recomputing.
In practice, we use a sigma-clipped RMS as the estimate of the standard deviation, in which we iteratively clip outliers until we have what looks like a normal distribution and then compute a plain std.
End of explanation
"""
# complete
import scipy.ndimage

growBy = fwhm
mask = detection_image > threshold_value
print(np.count_nonzero(mask))
dilated_mask = scipy.ndimage.binary_dilation(mask, iterations=growBy)
print(np.count_nonzero(dilated_mask))
fig, ax = pixel_plot(x[dilated_mask], image[dilated_mask])
# easy aperture flux:
np.sum(image[dilated_mask])
# Copied from solutions of previous notebook
from scipy.optimize import minimize
psf = phi(x[dilated_mask], mu=35, fwhm=5)
im = image[dilated_mask]
# minimize the square of the residuals to determine flux
def sum_res(A, flux=im, model=psf):
return sum((flux - A*model)**2)
sim_star = simulate(x, mu, fwhm, S, F)
psf_flux = minimize(sum_res, 300, args=(im, psf))
print("The PSF flux is {:.3f}".format(psf_flux.x[0]))
"""
Explanation: 2.4) Dilate footprint to provide a window or region for the point source.
We will use this window to compute the centroid and total flux of the star in the next two lessons. In the meantime, compute the flux like we did in introductionToStellarPhotometry assuming the input center.
End of explanation
"""
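The footprint-growing step deserves a tiny standalone illustration (toy mask, not the notebook's detection image): each iteration of binary dilation expands the True region by one pixel on each side, so `iterations=fwhm` grows a footprint by a full seeing width in each direction.

```python
import numpy as np
import scipy.ndimage

mask = np.zeros(20, dtype=bool)
mask[9:11] = True                # a 2-pixel detection footprint

# grow by 3 pixels on each side
grown = scipy.ndimage.binary_dilation(mask, iterations=3)
print(np.count_nonzero(mask), np.count_nonzero(grown))
```

The 2-pixel footprint becomes 2 + 2×3 = 8 pixels, capturing the faint wings of the PSF that fell below the detection threshold.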
abbr = 'NLP'
full_text = 'Natural Language Processing'
# Enter your code here:
print(f'{abbr} stands for {full_text}')
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Python Text Basics Assessment - Solutions
Welcome to your assessment! Complete the tasks described in bold below by typing the relevant code in the cells.
f-Strings
1. Print an f-string that displays NLP stands for Natural Language Processing using the variables provided.
End of explanation
"""
%%writefile contacts.txt
First_Name Last_Name, Title, Extension, Email
"""
Explanation: Files
2. Create a file in the current working directory called contacts.txt by running the cell below:
End of explanation
"""
# Write your code here:
with open('contacts.txt') as c:
fields = c.read()
# Run fields to see the contents of contacts.txt:
fields
"""
Explanation: 3. Open the file and use .read() to save the contents of the file to a string called fields. Make sure the file is closed at the end.
End of explanation
"""
# Perform import
import PyPDF2
# Open the file as a binary object
f = open('Business_Proposal.pdf','rb')
# Use PyPDF2 to read the text of the file
pdf_reader = PyPDF2.PdfFileReader(f)
# Get the text from page 2 (CHALLENGE: Do this in one step!)
page_two_text = pdf_reader.getPage(1).extractText()
# Close the file
f.close()
# Print the contents of page_two_text
print(page_two_text)
"""
Explanation: Working with PDF Files
4. Use PyPDF2 to open the file Business_Proposal.pdf. Extract the text of page 2.
End of explanation
"""
# Simple Solution:
with open('contacts.txt','a+') as c:
c.write(page_two_text)
c.seek(0)
print(c.read())
# CHALLENGE Solution (re-run the %%writefile cell above to obtain an unmodified contacts.txt file):
with open('contacts.txt','a+') as c:
c.write(page_two_text[8:])
c.seek(0)
print(c.read())
"""
Explanation: 5. Open the file contacts.txt in append mode. Add the text of page 2 from above to contacts.txt.
CHALLENGE: See if you can remove the word "AUTHORS:"
End of explanation
"""
import re
# Enter your regex pattern here. This may take several tries!
pattern = r'\w+@\w+\.\w{3}'  # escape the dot so it matches a literal '.'
re.findall(pattern, page_two_text)
"""
Explanation: Regular Expressions
6. Using the page_two_text variable created above, extract any email addresses that were contained in the file Business_Proposal.pdf.
End of explanation
"""
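The email pattern can be sanity-checked on made-up text (the addresses below are illustrative): note the escaped `\.` so the dot matches a literal period rather than any character.

```python
import re

pattern = r'\w+@\w+\.\w{3}'
text = "Contact alice@example.com or bob@testmail.org for details."
matches = re.findall(pattern, text)
print(matches)
```

Since the pattern contains no capture groups, `re.findall` returns the full matched addresses.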
#Load the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Default Variables
%matplotlib inline
plt.rcParams['figure.figsize'] = (16,9)
plt.style.use('fivethirtyeight')
pd.set_option('display.float_format', lambda x: '%.2f' % x)
#Load the dataset
df = pd.read_csv("data/loan_data.csv")
#View the first few rows of train
df.head()
#View the columns of the train dataset
df.columns
#View the data types of the train dataset
df.dtypes
#View the number of records in the data
df.shape
#View summary of raw data
df.describe()
"""
Explanation: Frame, Acquire & Refine
Raw Data
You are provided with the following data: loan_data.csv
This is the historical data that the bank has provided. It has the following columns
Application Attributes:
- years: Number of years the applicant has been employed
- ownership: Whether the applicant owns a house or not
- income: Annual income of the applicant
- age: Age of the applicant
Behavioural Attributes:
- grade: Credit grade of the applicant
Outcome Variable:
- amount : Amount of Loan provided to the applicant
- default : Whether the applicant has defaulted or not
- interest: Interest rate charged for the applicant
Frame the Problem
What are the features?
What is the target?
Discuss?
Acquire the Data
End of explanation
"""
# Find if df has missing values. Hint: There is a isnull() function
df.isnull().head()
"""
Explanation: Refine the Data
Let's check the dataset for quality and completeness:
1. Missing Values
2. Outliers
Check for Missing Values
End of explanation
"""
#let's see how many missing values are present
df.isnull().sum()
"""
Explanation: One consideration we check here is the number of observations with missing values for those columns that have missing values. If a column has too many missing values, it might make sense to drop the column.
End of explanation
"""
#Let's replace missing values with the median of the column
df.describe()
#there's a fillna function
df = df.fillna(df.median())
#Now, let's check if train has missing values or not
df.isnull().any()
"""
Explanation: So, we see that two columns have missing values: interest and years. Both columns are numeric. We have four options for dealing with these missing values.
Options to treat Missing Values
- REMOVE - drop the NaN rows
- IMPUTATION - Replace them with something??
- Mean
- Median
- Fixed Number - Domain Relevant
- High Number (999) - Issue with modelling
- BINNING - Make the variable categorical, and "missing" becomes its own category
- DOMAIN SPECIFIC* - Entry error, pipeline issue, etc.
End of explanation
"""
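Median imputation, the option used above, can be sketched on a toy frame (column names are illustrative, not the loan dataset's actual values):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'years':    [1.0, 3.0, np.nan, 7.0],
                   'interest': [10.0, np.nan, 12.0, 14.0]})

# fillna with the per-column median: years -> 3.0, interest -> 12.0
filled = df.fillna(df.median())
print(filled)
```

Each NaN is replaced by its own column's median, so the imputed values are robust to any outliers already present in the column.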
# Which variables are Categorical?
df.dtypes
# Create a Crosstab of those variables with another variable
pd.crosstab(df.default, df.grade)
# Create a Crosstab of those variables with another variable
pd.crosstab(df.default, df.ownership)
"""
Explanation: Check for Outlier Values
Let us check first the categorical variables
End of explanation
"""
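The crosstab check above can be illustrated on a toy frame (made-up data, not the loan dataset): an unexpected or malformed category value stands out immediately as its own column.

```python
import pandas as pd

toy = pd.DataFrame({'default': [0, 0, 1, 1, 0],
                    'grade':   ['A', 'B', 'A', 'ZZ', 'B']})
ct = pd.crosstab(toy.default, toy.grade)
print(ct)
```

Here the suspicious grade 'ZZ' appears as a column with a count of one, flagging it for investigation.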
# Describe the data set continuous values
df.describe()
"""
Explanation: Let us check outliers in the continuous variable
Plotting
Histogram
Box-Plot
Measuring
Z-score > 3
Modified Z-score > 3.5
where modified Z-score = 0.6745 * (x - x_median) / MAD
End of explanation
"""
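The modified Z-score test quoted above can be applied to a toy age column containing the dataset's 144 outlier (the other ages are made up for illustration):

```python
import numpy as np

ages = np.array([22, 25, 31, 35, 40, 44, 52, 144], dtype=float)

median = np.median(ages)
mad = np.median(np.abs(ages - median))       # median absolute deviation
modified_z = 0.6745 * (ages - median) / mad  # the formula from the text

outliers = ages[np.abs(modified_z) > 3.5]
print(outliers)
```

Because both the median and the MAD are insensitive to the extreme value, only 144 crosses the 3.5 threshold; the legitimate ages do not.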
# Make a histogram of age
df.age.hist(bins=100)
# Make a histogram of income
df.income.hist(bins=100)
# Make Histograms for all other variables
# Make a scatter of age and income
plt.scatter(df.age, df.income)
"""
Explanation: Clearly the age variable looks like it has an outlier - Age cannot be greater 100!
Also the income variable looks like it may also have an outlier.
End of explanation
"""
# Find the observation
df[df.age == 144]
df[df.age == 144].index
# Use drop to remove the observation inplace
df.drop(df[df.age == 144].index, axis=0, inplace=True)
# Find the shape of the df
df.shape
# Check again for outliers
df.describe()
# Save the new file as cleaned data
df.to_csv("data/loan_data_clean.csv", index=False)
#We are good to go to the next step
"""
Explanation: Find the observation which has age = 144 and remove it from the dataframe
End of explanation
"""
import torch
from torch.autograd import Variable
import torchvision
import torchvision.transforms as T
import random
import numpy as np
from scipy.ndimage.filters import gaussian_filter1d
import matplotlib.pyplot as plt
from cs231n.image_utils import SQUEEZENET_MEAN, SQUEEZENET_STD
from PIL import Image
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
"""
Explanation: Network Visualization (PyTorch)
In this notebook we will explore the use of image gradients for generating new images.
When training a model, we define a loss function which measures our current unhappiness with the model's performance; we then use backpropagation to compute the gradient of the loss with respect to the model parameters, and perform gradient descent on the model parameters to minimize the loss.
Here we will do something slightly different. We will start from a convolutional neural network model which has been pretrained to perform image classification on the ImageNet dataset. We will use this model to define a loss function which quantifies our current unhappiness with our image, then use backpropagation to compute the gradient of this loss with respect to the pixels of the image. We will then keep the model fixed, and perform gradient descent on the image to synthesize a new image which minimizes the loss.
In this notebook we will explore three techniques for image generation:
Saliency Maps: Saliency maps are a quick way to tell which part of the image influenced the classification decision made by the network.
Fooling Images: We can perturb an input image so that it appears the same to humans, but will be misclassified by the pretrained network.
Class Visualization: We can synthesize an image to maximize the classification score of a particular class; this can give us some sense of what the network is looking for when it classifies images of that class.
This notebook uses PyTorch; we have provided another notebook which explores the same concepts in TensorFlow. You only need to complete one of these two notebooks.
End of explanation
"""
def preprocess(img, size=224):
transform = T.Compose([
T.Scale(size),
T.ToTensor(),
T.Normalize(mean=SQUEEZENET_MEAN.tolist(),
std=SQUEEZENET_STD.tolist()),
T.Lambda(lambda x: x[None]),
])
return transform(img)
def deprocess(img, should_rescale=True):
transform = T.Compose([
T.Lambda(lambda x: x[0]),
T.Normalize(mean=[0, 0, 0], std=(1.0 / SQUEEZENET_STD).tolist()),
T.Normalize(mean=(-SQUEEZENET_MEAN).tolist(), std=[1, 1, 1]),
T.Lambda(rescale) if should_rescale else T.Lambda(lambda x: x),
T.ToPILImage(),
])
return transform(img)
def rescale(x):
low, high = x.min(), x.max()
x_rescaled = (x - low) / (high - low)
return x_rescaled
def blur_image(X, sigma=1):
X_np = X.cpu().clone().numpy()
X_np = gaussian_filter1d(X_np, sigma, axis=2)
X_np = gaussian_filter1d(X_np, sigma, axis=3)
X.copy_(torch.Tensor(X_np).type_as(X))
return X
"""
Explanation: Helper Functions
Our pretrained model was trained on images that had been preprocessed by subtracting the per-color mean and dividing by the per-color standard deviation. We define a few helper functions for performing and undoing this preprocessing. You don't need to do anything in this cell.
End of explanation
"""
# Download and load the pretrained SqueezeNet model.
model = torchvision.models.squeezenet1_1(pretrained=True)
# We don't want to train the model, so tell PyTorch not to compute gradients
# with respect to model parameters.
for param in model.parameters():
param.requires_grad = False
"""
Explanation: Pretrained Model
For all of our image generation experiments, we will start with a convolutional neural network which was pretrained to perform image classification on ImageNet. We can use any model here, but for the purposes of this assignment we will use SqueezeNet [1], which achieves accuracies comparable to AlexNet but with a significantly reduced parameter count and computational complexity.
Using SqueezeNet rather than AlexNet or VGG or ResNet means that we can easily perform all image generation experiments on CPU.
[1] Iandola et al, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5MB model size", arXiv 2016
End of explanation
"""
from cs231n.data_utils import load_imagenet_val
X, y, class_names = load_imagenet_val(num=5)
plt.figure(figsize=(12, 6))
for i in range(5):
plt.subplot(1, 5, i + 1)
plt.imshow(X[i])
plt.title(class_names[y[i]])
plt.axis('off')
plt.gcf().tight_layout()
"""
Explanation: Load some ImageNet images
We have provided a few example images from the validation set of the ImageNet ILSVRC 2012 Classification dataset. To download these images, change to cs231n/datasets/ and run get_imagenet_val.sh.
Since they come from the validation set, our pretrained model did not see these images during training.
Run the following cell to visualize some of these images, along with their ground-truth labels.
End of explanation
"""
# Example of using gather to select one entry from each row in PyTorch
def gather_example():
N, C = 4, 5
s = torch.randn(N, C)
y = torch.LongTensor([1, 2, 1, 3])
print(s)
print(y)
print(s.gather(1, y.view(-1, 1)).squeeze())
gather_example()
def compute_saliency_maps(X, y, model):
"""
Compute a class saliency map using the model for images X and labels y.
Input:
- X: Input images; Tensor of shape (N, 3, H, W)
- y: Labels for X; LongTensor of shape (N,)
- model: A pretrained CNN that will be used to compute the saliency map.
Returns:
- saliency: A Tensor of shape (N, H, W) giving the saliency maps for the input
images.
"""
# Make sure the model is in "test" mode
model.eval()
# Wrap the input tensors in Variables
X_var = Variable(X, requires_grad=True)
y_var = Variable(y)
saliency = None
##############################################################################
# TODO: Implement this function. Perform a forward and backward pass through #
# the model to compute the gradient of the correct class score with respect #
# to each input image. You first want to compute the loss over the correct #
# scores, and then compute the gradients with a backward pass. #
##############################################################################
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
return saliency
"""
Explanation: Saliency Maps
Using this pretrained model, we will compute class saliency maps as described in Section 3.1 of [2].
A saliency map tells us the degree to which each pixel in the image affects the classification score for that image. To compute it, we compute the gradient of the unnormalized score corresponding to the correct class (which is a scalar) with respect to the pixels of the image. If the image has shape (3, H, W) then this gradient will also have shape (3, H, W); for each pixel in the image, this gradient tells us the amount by which the classification score will change if the pixel changes by a small amount. To compute the saliency map, we take the absolute value of this gradient, then take the maximum value over the 3 input channels; the final saliency map thus has shape (H, W) and all entries are nonnegative.
[2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
Hint: PyTorch gather method
Recall in Assignment 1 you needed to select one element from each row of a matrix; if s is an numpy array of shape (N, C) and y is a numpy array of shape (N,) containing integers 0 <= y[i] < C, then s[np.arange(N), y] is a numpy array of shape (N,) which selects one element from each element in s using the indices in y.
In PyTorch you can perform the same operation using the gather() method. If s is a PyTorch Tensor or Variable of shape (N, C) and y is a PyTorch Tensor or Variable of shape (N,) containing longs in the range 0 <= y[i] < C, then
s.gather(1, y.view(-1, 1)).squeeze()
will be a PyTorch Tensor (or Variable) of shape (N,) containing one entry from each row of s, selected according to the indices in y.
Run the following cell to see an example.
You can also read the documentation for the gather method
and the squeeze method.
End of explanation
"""
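Before filling in the TODO, it can help to see the saliency idea in a setting where the gradient is known in closed form. The sketch below (a NumPy toy, not the graded PyTorch solution) uses a linear "model" $s_y(x) = W_y \cdot x$, whose correct-class score has gradient exactly $W_y$ with respect to the input; the saliency is its absolute value.

```python
import numpy as np

# Toy model: 2 classes, 2 input values. W is made up for illustration.
W = np.array([[1.0, -2.0],
              [0.5,  3.0]])

def saliency(x, y):
    # score_y(x) = W[y] @ x, so d(score_y)/dx = W[y]; saliency = |gradient|
    grad = W[y]
    return np.abs(grad)

print(saliency(np.array([0.3, 0.7]), y=1))
```

For a CNN the gradient is no longer the weight row itself, but the recipe is identical: forward pass to the correct-class score, backward pass to the input, absolute value, and (for images) a max over the three color channels.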
def show_saliency_maps(X, y):
# Convert X and y from numpy arrays to Torch Tensors
X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)
y_tensor = torch.LongTensor(y)
# Compute saliency maps for images in X
saliency = compute_saliency_maps(X_tensor, y_tensor, model)
# Convert the saliency map from Torch Tensor to numpy array and show images
# and saliency maps together.
saliency = saliency.numpy()
N = X.shape[0]
for i in range(N):
plt.subplot(2, N, i + 1)
plt.imshow(X[i])
plt.axis('off')
plt.title(class_names[y[i]])
plt.subplot(2, N, N + i + 1)
plt.imshow(saliency[i], cmap=plt.cm.hot)
plt.axis('off')
plt.gcf().set_size_inches(12, 5)
plt.show()
show_saliency_maps(X, y)
"""
Explanation: Once you have completed the implementation in the cell above, run the following to visualize some class saliency maps on our example images from the ImageNet validation set:
End of explanation
"""
def make_fooling_image(X, target_y, model):
"""
Generate a fooling image that is close to X, but that the model classifies
as target_y.
Inputs:
- X: Input image; Tensor of shape (1, 3, 224, 224)
- target_y: An integer in the range [0, 1000)
- model: A pretrained CNN
Returns:
- X_fooling: An image that is close to X, but that is classifed as target_y
by the model.
"""
# Initialize our fooling image to the input image, and wrap it in a Variable.
X_fooling = X.clone()
X_fooling_var = Variable(X_fooling, requires_grad=True)
learning_rate = 1
##############################################################################
# TODO: Generate a fooling image X_fooling that the model will classify as #
# the class target_y. You should perform gradient ascent on the score of the #
# target class, stopping when the model is fooled. #
# When computing an update step, first normalize the gradient: #
# dX = learning_rate * g / ||g||_2 #
# #
# You should write a training loop. #
# #
# HINT: For most examples, you should be able to generate a fooling image #
# in fewer than 100 iterations of gradient ascent. #
# You can print your progress over iterations to check your algorithm. #
##############################################################################
pass
##############################################################################
# END OF YOUR CODE #
##############################################################################
return X_fooling
"""
Explanation: Fooling Images
We can also use image gradients to generate "fooling images" as discussed in [3]. Given an image and a target class, we can perform gradient ascent over the image to maximize the target class, stopping when the network classifies the image as the target class. Implement the following function to generate fooling images.
[3] Szegedy et al, "Intriguing properties of neural networks", ICLR 2014
End of explanation
"""
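The normalized update step named in the TODO comment, $dX = \alpha \, g / \lVert g \rVert_2$, is easy to verify in isolation. The sketch below (NumPy for portability; the function name is illustrative) shows that every step has L2 length equal to the learning rate, regardless of the raw gradient's magnitude.

```python
import numpy as np

def fooling_step(x, grad, learning_rate=1.0):
    """One gradient-ascent step along the normalized gradient."""
    g_norm = np.linalg.norm(grad)
    return x + learning_rate * grad / g_norm

x = np.zeros(4)
grad = np.array([3.0, 0.0, 4.0, 0.0])   # ||grad||_2 = 5
x_new = fooling_step(x, grad, learning_rate=1.0)
print(x_new)
```

Normalizing keeps the perturbation growing at a controlled, fixed rate per iteration, which is why the image can be fooled while remaining visually indistinguishable from the original.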
idx = 0
target_y = 6
X_tensor = torch.cat([preprocess(Image.fromarray(x)) for x in X], dim=0)
X_fooling = make_fooling_image(X_tensor[idx:idx+1], target_y, model)
scores = model(Variable(X_fooling))
assert target_y == scores.data.max(1)[1][0, 0], 'The model is not fooled!'
"""
Explanation: Run the following cell to generate a fooling image:
End of explanation
"""
X_fooling_np = deprocess(X_fooling.clone())
X_fooling_np = np.asarray(X_fooling_np).astype(np.uint8)
plt.subplot(1, 4, 1)
plt.imshow(X[idx])
plt.title(class_names[y[idx]])
plt.axis('off')
plt.subplot(1, 4, 2)
plt.imshow(X_fooling_np)
plt.title(class_names[target_y])
plt.axis('off')
plt.subplot(1, 4, 3)
X_pre = preprocess(Image.fromarray(X[idx]))
diff = np.asarray(deprocess(X_fooling - X_pre, should_rescale=False))
plt.imshow(diff)
plt.title('Difference')
plt.axis('off')
plt.subplot(1, 4, 4)
diff = np.asarray(deprocess(10 * (X_fooling - X_pre), should_rescale=False))
plt.imshow(diff)
plt.title('Magnified difference (10x)')
plt.axis('off')
plt.gcf().set_size_inches(12, 5)
plt.show()
"""
Explanation: After generating a fooling image, run the following cell to visualize the original image, the fooling image, as well as the difference between them.
End of explanation
"""
def jitter(X, ox, oy):
"""
Helper function to randomly jitter an image.
Inputs
- X: PyTorch Tensor of shape (N, C, H, W)
- ox, oy: Integers giving number of pixels to jitter along W and H axes
Returns: A new PyTorch Tensor of shape (N, C, H, W)
"""
if ox != 0:
left = X[:, :, :, :-ox]
right = X[:, :, :, -ox:]
X = torch.cat([right, left], dim=3)
if oy != 0:
top = X[:, :, :-oy]
bottom = X[:, :, -oy:]
X = torch.cat([bottom, top], dim=2)
return X
def create_class_visualization(target_y, model, dtype, **kwargs):
"""
Generate an image to maximize the score of target_y under a pretrained model.
Inputs:
- target_y: Integer in the range [0, 1000) giving the index of the class
- model: A pretrained CNN that will be used to generate the image
- dtype: Torch datatype to use for computations
Keyword arguments:
- l2_reg: Strength of L2 regularization on the image
- learning_rate: How big of a step to take
- num_iterations: How many iterations to use
- blur_every: How often to blur the image as an implicit regularizer
    - max_jitter: How much to jitter the image as an implicit regularizer
- show_every: How often to show the intermediate result
"""
model.type(dtype)
l2_reg = kwargs.pop('l2_reg', 1e-3)
learning_rate = kwargs.pop('learning_rate', 25)
num_iterations = kwargs.pop('num_iterations', 100)
blur_every = kwargs.pop('blur_every', 10)
max_jitter = kwargs.pop('max_jitter', 16)
show_every = kwargs.pop('show_every', 25)
# Randomly initialize the image as a PyTorch Tensor, and also wrap it in
# a PyTorch Variable.
img = torch.randn(1, 3, 224, 224).mul_(1.0).type(dtype)
img_var = Variable(img, requires_grad=True)
for t in range(num_iterations):
# Randomly jitter the image a bit; this gives slightly nicer results
ox, oy = random.randint(0, max_jitter), random.randint(0, max_jitter)
img.copy_(jitter(img, ox, oy))
########################################################################
# TODO: Use the model to compute the gradient of the score for the #
# class target_y with respect to the pixels of the image, and make a #
# gradient step on the image using the learning rate. Don't forget the #
# L2 regularization term! #
# Be very careful about the signs of elements in your code. #
########################################################################
pass
########################################################################
# END OF YOUR CODE #
########################################################################
# Undo the random jitter
img.copy_(jitter(img, -ox, -oy))
# As regularizer, clamp and periodically blur the image
for c in range(3):
lo = float(-SQUEEZENET_MEAN[c] / SQUEEZENET_STD[c])
hi = float((1.0 - SQUEEZENET_MEAN[c]) / SQUEEZENET_STD[c])
img[:, c].clamp_(min=lo, max=hi)
if t % blur_every == 0:
blur_image(img, sigma=0.5)
# Periodically show the image
if t == 0 or (t + 1) % show_every == 0 or t == num_iterations - 1:
plt.imshow(deprocess(img.clone().cpu()))
class_name = class_names[target_y]
plt.title('%s\nIteration %d / %d' % (class_name, t + 1, num_iterations))
plt.gcf().set_size_inches(4, 4)
plt.axis('off')
plt.show()
return deprocess(img.cpu())
"""
Explanation: Class visualization
By starting with a random noise image and performing gradient ascent on a target class, we can generate an image that the network will recognize as the target class. This idea was first presented in [2]; [3] extended this idea by suggesting several regularization techniques that can improve the quality of the generated image.
Concretely, let $I$ be an image and let $y$ be a target class. Let $s_y(I)$ be the score that a convolutional network assigns to the image $I$ for class $y$; note that these are raw unnormalized scores, not class probabilities. We wish to generate an image $I^*$ that achieves a high score for the class $y$ by solving the problem
$$
I^* = \arg\max_I s_y(I) - R(I)
$$
where $R$ is a (possibly implicit) regularizer (note the sign of $R(I)$ in the argmax: we want to minimize this regularization term). We can solve this optimization problem using gradient ascent, computing gradients with respect to the generated image. We will use (explicit) L2 regularization of the form
$$
R(I) = \lambda \|I\|_2^2
$$
and implicit regularization as suggested by [3] by periodically blurring the generated image. We can solve this problem using gradient ascent on the generated image.
In the cell below, complete the implementation of the create_class_visualization function.
[2] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. "Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps", ICLR Workshop 2014.
[3] Yosinski et al, "Understanding Neural Networks Through Deep Visualization", ICML 2015 Deep Learning Workshop
End of explanation
"""
dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor # Uncomment this to use GPU
model.type(dtype)
target_y = 76 # Tarantula
# target_y = 78 # Tick
# target_y = 187 # Yorkshire Terrier
# target_y = 683 # Oboe
# target_y = 366 # Gorilla
# target_y = 604 # Hourglass
out = create_class_visualization(target_y, model, dtype)
"""
Explanation: Once you have completed the implementation in the cell above, run the following cell to generate an image of a Tarantula:
End of explanation
"""
# target_y = 78 # Tick
# target_y = 187 # Yorkshire Terrier
# target_y = 683 # Oboe
# target_y = 366 # Gorilla
# target_y = 604 # Hourglass
target_y = np.random.randint(1000)
print(class_names[target_y])
X = create_class_visualization(target_y, model, dtype)
"""
Explanation: Try out your class visualization on other classes! You should also feel free to play with various hyperparameters to try and improve the quality of the generated image, but this is not required.
End of explanation
"""
|
skkandrach/foundations-homework | data-databases/Homeowrk_3.ipynb | mit | from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
"""
Explanation: Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1: Basic scraping
I've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.
End of explanation
"""
type(document)
h3_tag = document.find('h3')
h3_tag
h3_tag.string
h3_tags = []
h3_tags = document.find_all('h3')
for item in h3_tags:
print(item.string)
print(len(h3_tags))
"""
Explanation: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
End of explanation
"""
a_tag = document.find('a', {'class': 'tel'})
print(a_tag)
"""
Explanation: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
End of explanation
"""
w_tag = document.find_all('td', {'class': 'wname'})
#print(w_tag)
for item in w_tag:
print(item.string)
"""
Explanation: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
End of explanation
"""
widgets = []
entire_widgets = document.find_all('tr', {'class':"winfo"})
for item in entire_widgets:
dictionaries = {}
partno_tag = item.find('td',{'class':'partno'})
wname_tag = item.find('td',{'class':'wname'})
price_tag = item.find('td',{'class':'price'})
quantity_tag = item.find('td',{'class':'quantity'})
dictionaries['partno']= partno_tag.string
dictionaries['wname']= wname_tag.string
dictionaries['price']= price_tag.string
dictionaries['quantity']= quantity_tag.string
widgets.append(dictionaries)
print(widgets)
print(widgets[5]['partno'])
"""
Explanation: Problem set #2: Widget dictionaries
For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
[{'partno': 'C1-9476',
'price': '$2.70',
'quantity': u'512',
'wname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': '$9.36',
'quantity': '967',
'wname': u'Widget For Furtiveness'},
...several items omitted...
{'partno': '5B-941/F',
'price': '$13.26',
'quantity': '919',
'wname': 'Widget For Cinema'}]
And this expression:
widgets[5]['partno']
... should evaluate to:
LH-74/O
End of explanation
"""
widgets = []
entire_widgets = document.find_all('tr', {'class':"winfo"})
for item in entire_widgets:
dictionaries = {}
partno_tag = item.find('td',{'class':'partno'})
wname_tag = item.find('td',{'class':'wname'})
price_tag = item.find('td',{'class':'price'})
quantity_tag = item.find('td',{'class':'quantity'})
for price_tag_item in price_tag:
price= price_tag_item[1:] #getting rid of the dollar sign
for quantity_tag_item in quantity_tag:
quantity= quantity_tag_item
dictionaries['partno']= partno_tag.string
dictionaries['wname']= wname_tag.string
dictionaries['price']= float(price)
dictionaries['quantity']= int(quantity_tag_item)
widgets.append(dictionaries)
widgets
"""
Explanation: In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:
[{'partno': 'C1-9476',
'price': 2.7,
'quantity': 512,
'widgetname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': 9.36,
'quantity': 967,
'widgetname': 'Widget For Furtiveness'},
... some items omitted ...
{'partno': '5B-941/F',
'price': 13.26,
'quantity': 919,
'widgetname': 'Widget For Cinema'}]
(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)
End of explanation
"""
total_widget_list= []
for item in widgets:
total_widget_list.append(item['quantity'])
sum(total_widget_list)
"""
Explanation: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output: 7928
End of explanation
"""
for item in widgets:
if item['price'] > 9.30:
print(item['wname'])
"""
Explanation: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output:
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
End of explanation
"""
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
"""
Explanation: Problem set #3: Sibling rivalries
In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:
Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):
End of explanation
"""
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
"""
Explanation: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
End of explanation
"""
for item in document.find_all('h3'):
if item.string == "Hallowed widgets":
hallowed_widgets_table = item.find_next_sibling('table', {'class': 'widgetlist'})
#print(type(hallowed_widgets_table))
td_list = hallowed_widgets_table.find_all('td', {'class': 'partno'})
for td in td_list:
print(td.string)
"""
Explanation: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output:
MZ-556/B
QV-730
T1-9731
5B-941/F
End of explanation
"""
category_counts = {}
all_categories = document.find_all('h3')
for item in all_categories:
category = item.string
widget_table = item.find_next_sibling('table', {'class': 'widgetlist'})
widget_quantity = widget_table.find_all('tr', {'class':'winfo'})
category_counts[category]= len(widget_quantity)
category_counts
"""
Explanation: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:
{'Forensic Widgets': 3,
'Hallowed widgets': 4,
'Mood widgets': 2,
'Wondrous widgets': 3}
End of explanation
"""
|
FedericoMuciaccia/SistemiComplessi | src/Adiacenza, grafo e grado.ipynb | mit | import geopy
from geopy import distance
import math
import itertools
import pandas
import numpy
import networkx
from matplotlib import pyplot
%matplotlib inline
"""
Explanation: Import all the required libraries
End of explanation
"""
colosseo = (41.890173, 12.492331)
raccordo = [(41.914456, 12.615807),(41.990672, 12.502714),(41.793883, 12.511297),(41.812566, 12.396628),(41.956277, 12.384611)]
raggi = []
def geodesicDistance(A, B=colosseo):
return geopy.distance.vincenty(A, B).meters
raggioTerra = 6372795
def euclidDistance(A, B=colosseo):
latitudine1 = math.radians(A[0])
latitudine2 = math.radians(B[0])
longitudine1 = math.radians(A[1])
longitudine2 = math.radians(B[1])
x1 = raggioTerra*math.sin(math.pi-latitudine1)*math.cos(longitudine1)
y1 = raggioTerra*math.sin(math.pi-latitudine1)*math.sin(longitudine1)
z1 = raggioTerra*math.cos(math.pi-latitudine1)
x2 = raggioTerra*math.sin(math.pi-latitudine2)*math.cos(longitudine2)
y2 = raggioTerra*math.sin(math.pi-latitudine2)*math.sin(longitudine2)
z2 = raggioTerra*math.cos(math.pi-latitudine2)
return math.sqrt((x1-x2)**2+(y1-y2)**2+(z1-z2)**2)
raggi = map(geodesicDistance, raccordo)
print raggi
raggi1= []
raggi1 = map(euclidDistance, raccordo)
print raggi1
raggiomedioGeo = 0
raggiomedioEuclid = 0
for i in raggi:
raggiomedioGeo += i
for i in raggi1:
raggiomedioEuclid += i
raggiomedioGeo /= len(raggi)
raggiomedioEuclid /= len(raggi1)
print raggiomedioGeo
print raggiomedioEuclid
"""
Explanation: Computing the adjacency matrix
Compute the mean radius that defines Rome within the ring road (raccordo anulare)
NB: verify that the Euclidean distance does not cause too many problems
End of explanation
"""
dataframe = pandas.read_csv("/home/protoss/Documenti/Siscomp_datas/data/cell_towers.csv")
#dataframe = pandas.read_csv("/home/protoss/Documenti/SistemiComplessi/data/cell_towers_diff-2016012100.csv")
#dataframe
criterioMCC = dataframe.mcc == 222
criterioMinsamples = dataframe.samples > 1
italydoitcleaner = dataframe[criterioMCC & criterioMinsamples]
italydoitcleaner
italydoitcleaner = italydoitcleaner.reset_index(drop=True)
italydoitcleaner.drop(italydoitcleaner.columns[[1, 3, 5, 10, 11, 12, 13]], axis = 1, inplace=True)
#italydoitcleaner
"""
Explanation: Populate the dataframe and do a first coarse filtering pass
End of explanation
"""
# select the rows matching criteria on some columns,
# pick the desired columns and return a numpy array of the values
coordinate = dataframe[criterioMCC & criterioMinsamples][['lat', 'lon']].values
%time distanza = numpy.array(map(geodesicDistance, coordinate), dtype = int)
raggiomedioGeo = 12000
italydoitcleaner['distance'] = distanza
criterioRaccordo = italydoitcleaner.distance < raggiomedioGeo
romaCell = italydoitcleaner[criterioRaccordo]
romaCell = romaCell.reset_index(drop=True)
romaCell.to_csv("../../data/Roma_towers.csv", index= False)
criterioTim = romaCell.net == 1
criterioWind = romaCell.net == 88
criterioVoda = romaCell.net == 10
criterioTre = romaCell.net == 99
timCell = romaCell[criterioTim]
timCell = timCell.reset_index(drop=True)
timCell.to_csv("../../data/Tim_towers.csv", index= False)
windCell = romaCell[criterioWind]
windCell = windCell.reset_index(drop=True)
windCell.to_csv("../../data/Wind_towers.csv", index= False)
vodaCell = romaCell[criterioVoda]
vodaCell = vodaCell.reset_index(drop=True)
vodaCell.to_csv("../../data/Vodafone_towers.csv", index= False)
treCell = romaCell[criterioTre]
treCell = treCell.reset_index(drop=True)
treCell.to_csv("../../data/Tre_towers.csv", index= False)
# select the rows matching criteria on some columns,
# and return a numpy array of the desired values
coordinate = dataframe[criterioMCC & criterioMinsamples][['lat', 'lon']].values
%time distanza = numpy.array(map(euclidDistance, coordinate), dtype=int)
raggiomedioEuclid = 12000
italydoitcleaner['distance'] = distanza
criterioRaccordo = italydoitcleaner.distance < raggiomedioEuclid
romaCell = italydoitcleaner[criterioRaccordo]
romaCell = romaCell.reset_index(drop=True)
romaCell.to_csv("../../data/Roma_towersEuc.csv", index= False)
criterioTim = romaCell.net == 1
criterioWind = romaCell.net == 88
criterioVoda = romaCell.net == 10
criterioTre = romaCell.net == 99
timCell = romaCell[criterioTim]
timCell = timCell.reset_index(drop=True)
timCell.to_csv("../../data/Tim_towersEuc.csv", index= False)
windCell = romaCell[criterioWind]
windCell = windCell.reset_index(drop=True)
windCell.to_csv("../../data/Wind_towersEuc.csv", index= False)
vodaCell = romaCell[criterioVoda]
vodaCell = vodaCell.reset_index(drop=True)
vodaCell.to_csv("../../data/Vodafone_towersEuc.csv", index= False)
treCell = romaCell[criterioTre]
treCell = treCell.reset_index(drop=True)
treCell.to_csv("../../data/Tre_towersEuc.csv", index= False)
"""
Explanation: Select the antennas in Rome and write dedicated .csv files
End of explanation
"""
# define the function that computes the adjacency matrix
def matriceSupEuclid(datiCoordinate, datiRaggi):
a = numpy.zeros((numdati,numdati), dtype=int)
for i in xrange(numdati):
for j in xrange(numdati-i-1):
sommaraggi = datiRaggi[i] + datiRaggi[j+i+1]
# equivalent to an if
a[i,j+i+1] = a[j+i+1,i] = (euclidDistance(datiCoordinate[i], datiCoordinate[j+i+1]) <= 0.8*sommaraggi)
return a
# warning: very slow!
def matriceSupGeodetic(datiCoordinate, datiRaggi):
a = numpy.zeros((numdati,numdati))
for i in xrange(numdati):
for j in xrange(numdati-i-1):
if geodesicDistance(datiCoordinate[i], datiCoordinate[j+i+1]) <= datiRaggi[i] + datiRaggi[j+i+1]:
a[i,j+i+1] = 1
a[j+i+1,i] = 1
return a
gestore = ["Roma", "Tim", "Vodafone", "Wind", "Tre"]
for aziende in gestore:
dataframe = pandas.read_csv("../../data/{0}_towers.csv".format(aziende))
coordinate = dataframe[['lat', 'lon']].values
raggio = dataframe['range'].values
# loop that clamps every radius below 500 metres to a minimum of 500 metres
# for i in range(len(raggio)):
# if(raggio[i] < 500):
# raggio[i] = 500
numdati = raggio.size
#%time adiacenzaGeo = matriceSupGeodetic(coordinate, raggio)
%time adiacenzaEuclid = matriceSupEuclid(coordinate, raggio)
numpy.savetxt(("/home/protoss/Documenti/Siscomp_datas/data/AdiacenzaEuclidea_{0}.csv".format(aziende)),adiacenzaEuclid, fmt='%d',delimiter=',',newline='\n')
"""
Explanation: Take the Rome antennas and build the adjacency matrix
End of explanation
"""
#for azienda in gestore:
#italydoitcleaner['distanze'] = distanza
#romaCell.to_csv("../data/Roma_towers.csv")
adiacenzaRoma = numpy.genfromtxt("/home/protoss/Documenti/Siscomp_datas/data/AdiacenzaEuclidea_Roma.csv",delimiter=',',dtype='int')
adiacenzaTim = numpy.genfromtxt("/home/protoss/Documenti/Siscomp_datas/data/AdiacenzaEuclidea_Tim.csv",delimiter=',',dtype='int')
adiacenzaVoda = numpy.genfromtxt("/home/protoss/Documenti/Siscomp_datas/data/AdiacenzaEuclidea_Vodafone.csv",delimiter=',',dtype='int')
adiacenzaWind = numpy.genfromtxt("/home/protoss/Documenti/Siscomp_datas/data/AdiacenzaEuclidea_Wind.csv",delimiter=',',dtype='int')
adiacenzaTre = numpy.genfromtxt("/home/protoss/Documenti/Siscomp_datas/data/AdiacenzaEuclidea_Tre.csv",delimiter=',',dtype='int')
%time grafoRoma = networkx.Graph(adiacenzaRoma)
%time grafoTim = networkx.Graph(adiacenzaTim)
%time grafoVoda = networkx.Graph(adiacenzaVoda)
%time grafoWind = networkx.Graph(adiacenzaWind)
%time grafoTre = networkx.Graph(adiacenzaTre)
gradoRoma = grafoRoma.degree().values()
numpy.savetxt("../../data/DistrGrado_Roma",gradoRoma,fmt='%d',newline='\n')
istoGradoRoma = networkx.degree_histogram(grafoRoma)
#numpy.savetxt("../../data/IstoGrado_Roma",istoGradoRoma,fmt='%d',newline='\n')
romaCell["degree"] = gradoRoma
romaCell.to_csv("../../data/Roma_towers.csv", index= False)
gradoTim = grafoTim.degree().values()
numpy.savetxt("../../data/DistrGrado_Tim",gradoTim,fmt='%d',newline='\n')
istoGradoTim = networkx.degree_histogram(grafoTim)
#numpy.savetxt("../../data/IstoGrado_Tim",istoGradoTim,fmt='%d',newline='\n')
timCell["degree"] = gradoTim
timCell.to_csv("../../data/Tim_towers.csv", index= False)
gradoVoda = grafoVoda.degree().values()
numpy.savetxt("../../data/DistrGrado_Vodafone",gradoVoda,fmt='%d',newline='\n')
istoGradoVoda = networkx.degree_histogram(grafoVoda)
#numpy.savetxt("../../data/IstoGrado_Voda",istoGradoVoda,fmt='%d',newline='\n')
vodaCell["degree"] = gradoVoda
vodaCell.to_csv("../../data/Vodafone_towers.csv", index= False)
gradoWind = grafoWind.degree().values()
numpy.savetxt("../../data/DistrGrado_Wind",gradoWind,fmt='%d',newline='\n')
istoGradoWind = networkx.degree_histogram(grafoWind)
#numpy.savetxt("../../data/IstoGrado_Wind",istoGradoWind,fmt='%d',newline='\n')
windCell["degree"] = gradoWind
windCell.to_csv("../../data/Wind_towers.csv", index= False)
gradoTre = grafoTre.degree().values()
numpy.savetxt("../../data/DistrGrado_Tre",gradoTre,fmt='%d',newline='\n')
istoGradoTre = networkx.degree_histogram(grafoTre)
#numpy.savetxt("../../data/IstoGrado_Tre",istoGradoTre,fmt='%d',newline='\n')
treCell["degree"] = gradoTre
treCell.to_csv("../../data/Tre_towers.csv", index= False)
"""
Explanation: Build the graph and compute the degree distribution
End of explanation
"""
gestore = ["Tim", "Vodafone", "Wind", "Tre"]
def topologyNetx(gestore):
adiacenza = numpy.genfromtxt("/home/protoss/Documenti/Siscomp_datas/data/AdiacenzaEuclidea_{0}.csv".format(gestore),delimiter=',',dtype='int')
grafo = networkx.Graph(adiacenza)
c = networkx.average_clustering(grafo)
d = networkx.diameter(grafo)
l = networkx.average_shortest_path_length(grafo)
return c, d, l
for compagnia in gestore:
print compagnia
topo = %time topologyNetx(compagnia)
print topo, "\n"
"""
Explanation: Initial topology with networkx (very slow)
End of explanation
"""
import seaborn  # used below; not imported in this excerpt

colori = ['#4d4d4d', '#004184','#ff3300','#ff8000','#018ECC']
paletta = seaborn.color_palette(palette = colori)
seaborn.palplot(paletta)
paletta = seaborn.color_palette(palette = 'muted')
seaborn.palplot(paletta)
paletta = seaborn.color_palette(palette = 'bright')
seaborn.palplot(paletta)
paletta = seaborn.color_palette(palette = 'pastel')
seaborn.palplot(paletta)
paletta = seaborn.color_palette(palette = 'dark')
seaborn.palplot(paletta)
paletta = seaborn.color_palette(palette = 'colorblind')
seaborn.palplot(paletta)
paletta = seaborn.color_palette
print paletta
"""
Explanation: NB: antenna counts
* TIM - 1756
* Vodafone - 1771
* Wind - 2365
* 3 - 1395
Total antennas: 6571
TODO:
Get the coordinate array ✔
Build the distance array ✔
Add the distance column to the dataframe ✔
Select the rows whose distance lies within the mean radius ✔
Build a new dataframe ✔
Exclude all nodes with only 1 sample ✔
Compute P(k) ✔
Log binning ✔
MAKE BETTER PLOTS ✔
Fit P(k)
Variation of D under random or targeted removal ✔
Variation of the GC under random or targeted removal ✔
Dig deeper into the percolation-threshold conditions (see the professor's lectures and the papers)
Barabási and Albert state that the relative giant-cluster behaviour is independent of network size, not only for scale-free (fractal) networks but also for exponential ones! (are those fractal too?) Verify this by comparing the GC behaviour of the full network with that of the individual carriers' networks
Build Barabási and Erdős graphs and add those model graphs to the attack and failure plots for comparison
NB: the giant cluster is the cluster that scales with N.
E.g., if the giant cluster comprises N/10 of the network, then doubling or halving the network must still leave it with 1/10 of the total nodes. Likewise if it is N/100 or N/0.9
Reading list (lecture material on percolation, attacks, epidemics):
http://www.nature.com/nature/journal/v406/n6794/pdf/406378a0.pdf
http://arxiv.org/pdf/cond-mat/0010317.pdf
http://arxiv.org/pdf/cond-mat/0007048.pdf
http://arxiv.org/pdf/cond-mat/0010251.pdf
Other possibly useful material:
http://www.renyi.hu/~p_erdos/1959-11.pdf (Erdos e Renyi)
http://arxiv.org/pdf/cond-mat/0106096.pdf (Stat mec scale free network)
http://arxiv.org/pdf/cond-mat/9910332.pdf
http://arxiv.org/pdf/cond-mat/9907068.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.8276&rep=rep1&type=pdf
Federico notes that the distribution of ranges seems to follow a sort of Zipf law. NOTE WELL: OUR DATA ARE NOT OFFICIAL; they were built by people walking around, so the probability of finding a new antenna might be similar to the probability of finding a new word. But I think that is beside the point, because we are only looking at the lengths of the words. What law governs the probability of word lengths?
The first attempt was to build the adjacency matrix by brute force. With a sample of only 50 nodes it took just a few microseconds, so we tried to build the adjacency matrix of the 7000 antennas inside the ring road, and noticed the computation took very long; scaling up proportionally, we estimated 2.5 hours of computing time. The first thing we fixed was, obviously, a loop that computes only the upper half of the matrix, halving the computation time.
The first thing we thought of was to block-diagonalize the matrix, or to run a very low-level loop setting 0 for all elements belonging to antennas whose ΔLatitude and/or ΔLongitude exceed the maximum range in the data sample. The problem is that antenna ranges tend to be large, some reaching 10 km (with the Rome radius at 11 km) (and many samples too), so there was no way to reduce the computation.
The only other idea we had was to avoid the expensive geodesic distance computed with the Vincenty method. The first step was to use the great-circle method; the second was to treat the selected portion of Rome as a flat disc, computing the Euclidean distance between geographic coordinates and converting it to metres. And it takes MUCH less time, $\sim$10 times less, giving an
estimated computing time of about 10 minutes instead of an hour and a half.
TODO: look into parallelization
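The brute-force double loop over the upper triangle can also be replaced by NumPy broadcasting, computing all pairwise chord distances at once. A hedged sketch, under these assumptions: the function name `adjacency_euclid` is hypothetical, it reuses the same 3-D Cartesian flat-sphere conversion as `euclidDistance`, and it uses the plain sum of radii as the overlap threshold (the notebook's `matriceSupEuclid` scales it by 0.8):

```python
import numpy as np

def adjacency_euclid(coords, radii, earth_radius=6372795.0):
    """Adjacency matrix: 1 if two antennas' coverage discs overlap.

    coords: (N, 2) array of (lat, lon) in degrees; radii: (N,) in metres.
    Vectorized chord distance on the sphere instead of a double loop.
    """
    lat = np.radians(coords[:, 0])
    lon = np.radians(coords[:, 1])
    # 3-D Cartesian points on the sphere (same convention as euclidDistance)
    x = earth_radius * np.sin(np.pi - lat) * np.cos(lon)
    y = earth_radius * np.sin(np.pi - lat) * np.sin(lon)
    z = earth_radius * np.cos(np.pi - lat)
    pts = np.stack([x, y, z], axis=1)            # (N, 3)
    diff = pts[:, None, :] - pts[None, :, :]     # (N, N, 3) pairwise deltas
    dist = np.sqrt((diff ** 2).sum(axis=2))      # (N, N) distances in metres
    adj = (dist <= radii[:, None] + radii[None, :]).astype(int)
    np.fill_diagonal(adj, 0)                     # no self-loops
    return adj

# Tiny example: two antennas ~830 m apart with 1 km radii, one far away.
coords = np.array([[41.9, 12.50], [41.9, 12.51], [41.0, 13.50]])
radii = np.array([1000.0, 1000.0, 10.0])
adj = adjacency_euclid(coords, radii)
```

Because the (N, N) distance matrix is built in one shot, the Python-level cost drops from N²/2 function calls to a handful of array operations.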
Various notes on computation times
Preliminary test with 50 data points
with Vincenty
$\sim$45 ms
with great circles
$\sim$25 ms
with Euclidean
$\sim$5 ms
Test with 50 data points
CPU times: user 32 ms, sys: 0 ns, total: 32 ms
Wall time: 31.8 ms
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 4.3 ms
CPU times: user 32 ms, sys: 0 ns, total: 32 ms
Wall time: 33.6 ms
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 4.2 ms
CPU times: user 32 ms, sys: 0 ns, total: 32 ms
Wall time: 31.2 ms
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 4.24 ms
CPU times: user 32 ms, sys: 0 ns, total: 32 ms
Wall time: 31 ms
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 4.29 ms
Test with 100 data points
CPU times: user 132 ms, sys: 0 ns, total: 132 ms
Wall time: 133 ms
CPU times: user 12 ms, sys: 16 ms, total: 28 ms
Wall time: 21.5 ms
CPU times: user 124 ms, sys: 0 ns, total: 124 ms
Wall time: 126 ms
CPU times: user 20 ms, sys: 0 ns, total: 20 ms
Wall time: 16.6 ms
CPU times: user 132 ms, sys: 0 ns, total: 132 ms
Wall time: 126 ms
CPU times: user 16 ms, sys: 8 ms, total: 24 ms
Wall time: 21.9 ms
CPU times: user 128 ms, sys: 0 ns, total: 128 ms
Wall time: 127 ms
CPU times: user 16 ms, sys: 0 ns, total: 16 ms
Wall time: 16.8 ms
with 500
CPU times: user 3.28 s, sys: 0 ns, total: 3.28 s
Wall time: 3.27 s
CPU times: user 404 ms, sys: 0 ns, total: 404 ms
Wall time: 403 ms
CPU times: user 3.26 s, sys: 20 ms, total: 3.28 s
Wall time: 3.23 s
CPU times: user 404 ms, sys: 0 ns, total: 404 ms
Wall time: 401 ms
with 1000
CPU times: user 12.6 s, sys: 32 ms, total: 12.6 s
Wall time: 12.5 s
CPU times: user 1.62 s, sys: 16 ms, total: 1.64 s
Wall time: 1.62 s
CPU times: user 12.5 s, sys: 48 ms, total: 12.5 s
Wall time: 12.5 s
CPU times: user 1.62 s, sys: 16 ms, total: 1.64 s
Wall time: 1.62 s
with 2000
CPU times: user 49.7 s, sys: 160 ms, total: 49.9 s
Wall time: 49.6 s
CPU times: user 6.47 s, sys: 40 ms, total: 6.51 s
Wall time: 6.44 s
CPU times: user 51.2 s, sys: 232 ms, total: 51.4 s
Wall time: 51.1 s
CPU times: user 6.67 s, sys: 24 ms, total: 6.7 s
Wall time: 6.65 s
Geodesic distance
Estimated computing time with $\sim$ 7000 data points: $\sim$ 620 s $\sim$ 10 minutes
Euclidean distance
Estimated computing time with $\sim$ 7000 data points: $\sim$ 80 s $\sim$ 1.3 minutes
End of explanation
"""
|
ghvn7777/ghvn7777.github.io | content/fluent_python/2_2_list_split.ipynb | apache-2.0 | l = list(range(10))
l
l[2:5] = 100 # TypeError: when the assignment target is a slice, the right-hand side must be an iterable, even if it holds a single item
l[2:5] = [100]
l
"""
Explanation: Slicing
To evaluate seq[start:stop:step], Python calls seq.__getitem__(slice(start, stop, step)).
Multidimensional slicing
The [ ] operator can also take multiple comma-separated indices or slices. In NumPy, for example, a[i, j] fetches an element of a two-dimensional numpy.ndarray and a[m:n, k:l] takes a two-dimensional slice. The __getitem__ and __setitem__ special methods that handle the [ ] operator simply receive the indices of a[i, j] as a tuple; in other words, Python evaluates a[i, j] by calling a.__getitem__((i, j)).
Python treats the ellipsis (three dots) as a token in its own right; NumPy uses ... as a shortcut when slicing multidimensional arrays, e.g. for a four-dimensional array a, a[i, ...] is shorthand for a[i, :, :, :].
Assigning to slices
End of explanation
"""
l = [1, 2, 3]
l * 5
"""
Explanation: Using + and * with lists
To concatenate several copies of the same list, just multiply the list by an integer:
End of explanation
"""
# build a list with three elements, each of which is a list of three items
board = [[' '] * 3 for i in range(3)]
board
board[1][2] = 'X'
board
"""
Explanation: Using * to build a list of lists
If we want to initialize a list containing a certain number of nested lists, a list comprehension is the best fit; for example, the following represents a tic-tac-toe board as a list of 3 lists of length 3:
End of explanation
"""
weir_board = [['_']* 3] * 3 # these are 3 references to the same list
weir_board
weir_board[1][2] = 'O'
weir_board
"""
Explanation: The approach above is appealing and is a standard idiom. Note, however, that if the a in an a*n expression holds references to other mutable objects, the result may surprise you: trying to initialize a list of lists with my_list = [[]] * 3 actually produces a list whose three elements are three references to the same inner list.
Below is another way to create a list of three elements, each a list of three items; it looks concise, but it is wrong:
End of explanation
"""
row = [' '] * 3
board = []
for i in range(3):
board.append(row)
board
board[1][2] = '0'
board
"""
Explanation: The program above is essentially equivalent to:
End of explanation
"""
l = [1, 2, 3]
id(l)
l *= 2
l
id(l)
t = (1, 2, 3)
id(t)
t *= 2
id(t)
"""
Explanation: Augmented assignment with lists
The behaviour of += and *= differs greatly depending on the first operand. To simplify the discussion we focus on +=, but the same ideas apply to *=. What makes += work is __iadd__ (for "in-place addition"). If __iadd__ is not implemented, however, Python falls back to __add__, and a += b has the same effect as a = a + b: a + b is evaluated first, producing a new object, which is then bound to a. In other words, the name a may end up bound to a new object at a different memory address; whether the variable stays bound to the original object depends entirely on whether __iadd__ is available.
In general, mutable sequences implement __iadd__, so += is an in-place addition, whereas immutable sequences simply do not support in-place modification, so implementing that method makes no sense for them.
The same idea applies to *=, which is implemented via __imul__; both methods are discussed in Chapter 13.
The following example shows the effect of *= on mutable and immutable sequences:
End of explanation
"""
t = (1, 2, [30, 40])
t[2] += [50, 60]
t
"""
Explanation: Note: repeated concatenation of immutable sequences is inefficient, because the interpreter must copy the entire original contents to a new memory location before appending.
A fun += example
End of explanation
"""
import dis
t = (1, 2, [3, 4])
dis.dis('a[2] += [5, 6]')
"""
Explanation: Notice that although an exception was raised, the value of t was changed anyway. Let's look at the disassembly:
End of explanation
"""
x = [3, 4]
x.__iadd__([5, 6])
t[2] = x # this step raises an error: tuples are immutable
"""
Explanation: The first instructions initialize the variables. Look at INPLACE_ADD at offset 17: it performs the += operation, and this step succeeds. Then STORE_SUBSCR at offset 19 executes t[2] = [3, 4, 5, 6], which fails because the tuple is immutable. But since the list's += operation calls __iadd__, which does not change the memory address, the list's contents were already modified.
So the code above is equivalent to:
End of explanation
"""
fruits = ['grape', 'raspberry', 'apple', 'banana']
sorted(fruits)
fruits
sorted(fruits, reverse=True)
sorted(fruits, key=len)
sorted(fruits, key=len, reverse=True)
fruits
fruits.sort()
fruits
"""
Explanation: list.sort and the sorted() built-in
list.sort sorts the original list in place, without producing a copy, and returns None to remind us that it changed the target list rather than creating a new one. This is an important Python API convention: functions or methods that change an object in place return None. The same convention is used by random.shuffle().
By contrast, the built-in sorted() creates and returns a new list. It actually accepts any iterable, including generators, and whatever iterable you give it to sort, it always returns a newly created list. Both list.sort() and sorted() take two optional arguments:
reverse: if True, items are returned in descending order; the default is False.
key: a one-argument function returning the sort key for each element. For example, when sorting strings, key=str.lower gives a case-insensitive sort and key=len sorts by length. The default is the identity function, i.e. elements are compared by their own values.
Note: the key argument can also be used with the min() and max() built-ins, and some standard-library functions support it too (e.g. itertools.groupby() and heapq.nlargest()).
The following examples show how these sorting functions are used:
End of explanation
"""
import bisect
import sys
HAYSTACK = [1, 4, 5, 6, 8, 12, 15, 20, 21, 23, 23, 26, 29, 30]
NEEDLES = [0, 1, 2, 5, 8, 10, 22, 23, 29, 30, 31]
ROW_FMT = '{0:2d} @ {1:2d} {2}{0:<2d}' # well-chosen format: each number takes two characters, so the columns stay aligned
def demo(bisect_fn):
for needle in reversed(NEEDLES):
position = bisect_fn(HAYSTACK, needle)
offset = position * ' |' # build separators proportional to the offset
print(ROW_FMT.format(needle, position, offset))
def main(args=None):
if args == 'left':
bisect_fn = bisect.bisect_left
else:
bisect_fn = bisect.bisect
print('DEMO:', bisect_fn.__name__)
print('haystack ->', ' '.join('%2d' % n for n in HAYSTACK))
demo(bisect_fn)
main()
main('left')
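A small self-contained sketch of where bisect places the insertion point around duplicate values:

```python
import bisect

haystack = [1, 2, 2, 2, 3]
# bisect (an alias of bisect_right): insertion point AFTER the run of equal items
assert bisect.bisect(haystack, 2) == 4
# bisect_left: insertion point BEFORE the run of equal items
assert bisect.bisect_left(haystack, 2) == 1
# with no equal item present, both variants agree
assert bisect.bisect(haystack, 0) == bisect.bisect_left(haystack, 0) == 0
```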
"""
Explanation: Once a list is sorted, it can be searched very efficiently. Fortunately, the bisect module of the Python standard library already provides a standard binary search. Let's go over its basic features, including the handy bisect.insort() function, which we can use to make sure that a sorted list stays sorted.
Managing ordered sequences with bisect
bisect mainly offers two functions — bisect() and insort() — that use binary search to quickly find and insert items in any sorted sequence.
Searching with bisect
bisect(haystack, needle) does a binary search for needle in haystack — which must be a sorted sequence — to locate the index at which needle can be inserted while keeping haystack in ascending order; in other words, every item to the left of that position is <= needle. You could use the result of bisect(haystack, needle) as the index argument to haystack.insert(index, needle), but insort() does both steps, and is faster.
End of explanation
"""
def grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'):
i = bisect.bisect(breakpoints, score)
return grades[i]
[grade(score) for score in [33, 99, 77, 70, 89, 90, 100]]
"""
Explanation: The behavior of bisect can be fine-tuned in two ways. First, a pair of optional arguments, lo and hi, narrows the search range; lo defaults to 0 and hi to the len() of the sequence.
Second, bisect() is actually an alias for bisect_right(), and there is a sister function called bisect_left(). As seen above, when the needle has no equal in the list, the two behave identically; but when equal items are present, bisect() inserts to the right of the last equal item, while bisect_left() inserts to the left of the first equal item.
An interesting use of bisect is table lookup by numeric values — for example, converting test scores to letter grades:
End of explanation
"""
import bisect
import random
SIZE = 7
random.seed(1729)
# insertion sort, Python style — insort keeps the list ordered
my_list = []
for i in range(SIZE):
new_item = random.randrange(SIZE * 2)
bisect.insort(my_list, new_item)
print("%2d ->" % new_item, my_list)
"""
Explanation: This snippet comes from the documentation of the bisect module. When searching long lists of numbers, using bisect instead of the index method makes lookups much faster.
Inserting with bisect.insort
insort(seq, item) inserts item into seq so that seq remains sorted in ascending order.
End of explanation
"""
from array import array
from random import random
floats = array('d', (random() for i in range(10 ** 7))) # 'd' typecode: double-precision floats
floats[-1]
fp = open('ipynb_floats.bin', 'wb')
floats.tofile(fp)
fp.close()
floats2 = array('d')
fp = open('ipynb_floats.bin', 'rb')
floats2.fromfile(fp, 10 ** 7)
fp.close()
floats2[-1]
floats == floats2
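A minimal standalone sketch of how an array enforces its typecode — with 'b' (signed char), items must be integers in the range -128..127:

```python
from array import array

bytes_arr = array('b', [-128, 0, 127])   # 'b' = signed char, one byte per item
assert bytes_arr.itemsize == 1

try:
    bytes_arr.append(128)                # out of range for a signed char
except OverflowError:
    overflowed = True
assert overflowed

try:
    bytes_arr.append(1.5)                # not an integer at all
except TypeError:
    wrong_type = True
assert wrong_type
```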
"""
Explanation: Like bisect, insort takes optional lo and hi arguments to search a sub-sequence, and there is also an insort_left variant that uses bisect_left to find the insertion point.
When a list is not the answer
The list type is flexible, but depending on specific requirements there are better options. For example, to store ten million floating-point values, an array is much more efficient, because an array does not hold float objects but only their packed machine values, just like an array in C. On the other hand, if you are constantly adding and removing items at the ends of a list, using it as a stack or a queue, a deque (double-ended queue) works faster.
Arrays
If a list only contains numbers, array.array is more efficient than a list: it supports all the mutable sequence operations (including .pop, .insert, and .extend), plus additional methods for fast saving and loading, such as .frombytes and .tofile.
A Python array is as lean as a C array. When you create an array, you provide a typecode letter that determines the underlying C type used to store each item. For example, b is the typecode for signed char; if you create an array('b'), each item is stored in a single byte and interpreted as an integer from -128 to 127. For large sequences of numbers this saves a lot of memory, and Python will not let you put any number into the array that does not match its type.
The example below shows creating an array of 10 million floats, saving it to a file, and loading it back into an array.
End of explanation
"""
floats = array(floats.typecode, sorted(floats))
floats[0:5]
"""
Explanation: array.tofile() and array.fromfile() are easy to use and very fast: as it turns out, array.fromfile loads this data in about 0.1 s — roughly 60 times faster than reading the numbers from a text file.
Note: Another fast and more flexible way of saving numeric data is pickle: pickle.dump saves an array of floats almost as fast as array.tofile(), but pickle handles almost all built-in types, including complex numbers, and can even handle instances of user-defined classes (if they are not too complicated).
For the special arrays of numbers used to represent binary images, such as raster images, the bytes and bytearray types involved are covered in Chapter 4.
Note: As of Python 3.5, array has no in-place sort method like list.sort(). To sort an array, use the sorted() function to rebuild it: a = array.array(a.typecode, sorted(a)).
End of explanation
"""
numbers = array('h', [-2, -1, 0, 1, 2]) # 'h' is the typecode for short integers
memv = memoryview(numbers)
len(memv)
memv[0]
memv_oct = memv.cast('B') # cast memv to unsigned bytes
memv_oct.tolist()
memv_oct[5] = 4 # little-endian byte order, so memv_oct[5] is the high-order byte of the 0 item
numbers
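A short standalone sketch of the zero-copy property: slicing a memoryview creates another view over the same buffer, so writes go through to the original array:

```python
from array import array

numbers = array('h', [-2, -1, 0, 1, 2])
memv = memoryview(numbers)
slice_view = memv[2:4]        # no bytes are copied here
slice_view[0] = 99            # writes through to the underlying array
assert numbers.tolist() == [-2, -1, 99, 1, 2]
```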
"""
Explanation: Memory views
memoryview is a built-in class that lets you handle different slices of the same array without copying its contents. Essentially, a memoryview is a generalized, math-free NumPy array: it allows memory to be shared between data structures without copying it first, and the data can be in any format — e.g. PIL images, SQLite databases, NumPy arrays, and so on. This capability is very important when handling large data sets.
memoryview.cast, using a notation similar to the array module's typecodes, lets you read and write the same chunk of memory as a different type, without moving any bytes around. memoryview.cast returns yet another memoryview object wrapping the same memory, much like a type cast in C.
Changing one byte of the array to change the value of one of its items:
End of explanation
"""
import numpy as np
a = np.arange(12)
a
type(a)
a.shape
a.shape = 3, 4
a
a[:1]
a.T
floats = np.array([random() for i in range(10 ** 7)])
floats.shape
floats[-3:]
floats *= .5
floats[-3:]
from time import perf_counter as pc # import a high-resolution timer (available since Python 3.3)
t0 = pc(); floats /= 3; pc() - t0 # dividing every element by 3 takes only about 20 ms
"""
Explanation: As shown, memoryview lets us read and write the data as another type, which is quite handy; Chapter 4 includes an example that uses memoryview and struct to manipulate binary sequences.
At this point, the natural question is how to do serious number-crunching with arrays, and the answer is NumPy and SciPy.
NumPy and SciPy
NumPy and SciPy implement array and matrix operations, and they are the reason Python became a mainstream language for scientific computing applications.
SciPy is a package, built on top of NumPy, that offers many scientific computing algorithms, including linear algebra, numerical calculus, and statistics. SciPy is fast and reliable, making it a great package for scientific computing.
End of explanation
"""
from collections import deque
dq = deque(range(10), maxlen=10)
dq
dq.rotate(3) # with n > 0, items are taken from the right end and prepended on the left; with n < 0, items are taken from the left and appended on the right
dq
dq.rotate(-4)
dq
dq.appendleft(-1) # append on the left end
dq
dq.extend([11, 22, 33]) # append three items on the right; the three leftmost items are discarded
dq
dq.extendleft([10, 20, 30, 40])
dq # note how dq.extendleft(iter) works: it appends each item of the iterable to the left of the deque, one by one, so the items end up in reverse order
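A common bounded-deque pattern — keeping only the last N items seen — sketched on its own:

```python
from collections import deque

last_three = deque(maxlen=3)
for item in range(6):
    last_three.append(item)   # once full, the oldest item falls off the left end
assert list(last_three) == [3, 4, 5]

last_three.appendleft(99)     # appending on the left discards from the right
assert list(last_three) == [99, 3, 4]
```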
"""
Explanation: Deques and other queues
The .append() and .pop() methods let you use a list as a queue: combining .append() with .pop(0) gives FIFO behavior. But inserting and removing at the left end of a list is costly, because the entire list must be shifted in memory.
collections.deque (double-ended queue) is a thread-safe double-ended queue designed for fast inserting and removing from both ends. It is also the best choice when you need to keep only the "last few items seen", because a deque can be created with a bounded size: when a bounded deque is full, adding a new item on one end discards an item from the opposite end. Below are some typical operations on a double-ended queue:
End of explanation
"""
|
tkurfurst/deep-learning | autoencoder/Simple_Autoencoder_Solution.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
"""
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
"""
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
"""
# Size of the encoding layer (the hidden layer)
encoding_dim = 32
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
"""
# Create the session
sess = tf.Session()
"""
Explanation: Training
End of explanation
"""
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
"""
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation
"""
|
ardiya/siamesenetwork-tensorflow | Similar image retrieval.ipynb | mit | img_placeholder = tf.placeholder(tf.float32, [None, 28, 28, 1], name='img')
net = mnist_model(img_placeholder, reuse=False)
"""
Explanation: Create the siamese net feature extraction model
End of explanation
"""
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ckpt = tf.train.get_checkpoint_state("model")
saver.restore(sess, "model/model.ckpt")
train_feat = sess.run(net, feed_dict={img_placeholder:train_images[:10000]})
"""
Explanation: Restore from the checkpoint and compute the features for all the training data
End of explanation
"""
#generate new random test image
idx = np.random.randint(0, len(test_images))
im = test_images[idx]
#show the test image
show_image(idx, test_images)
print("This is image from id:", idx)
#run the test image through the network to get the test features
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ckpt = tf.train.get_checkpoint_state("model")
saver.restore(sess, "model/model.ckpt")
search_feat = sess.run(net, feed_dict={img_placeholder:[im]})
#calculate the cosine similarity and sort
dist = cdist(train_feat, search_feat, 'cosine')
rank = np.argsort(dist.ravel())
#show the top n similar image from train data
n = 7
show_image(rank[:n], train_images)
print("retrieved ids:", rank[:n])
"""
Explanation: Searching the training set for images similar to a test image, based on the siamese features
End of explanation
"""
|
openexp/OpenEXP | notebooks/N170 Emotiv Exploratory.ipynb | mit | from mne import Epochs, find_events, set_eeg_reference, read_epochs, viz, combine_evoked
from time import time, strftime, gmtime
from collections import OrderedDict
from glob import glob
from collections import OrderedDict
from mne import create_info, concatenate_raws
from mne.io import RawArray
from mne.channels import read_montage
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import sys, os
sys.path.append(os.path.join(os.path.dirname(os.getcwd()), 'app','utils','jupyter'))
import utils
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: Emotiv EEG Data Visualization Options
A short overview of what's possible with the MNE library and what we should focus on including into the first iteration of the BrainWaves app.
End of explanation
"""
epochs = read_epochs('/home/dano/BrainWaves Workspaces/Dano Nelson Faces Houses/Data/Subash_2-epo.fif')
face_epochs = epochs['Face']
house_epochs = epochs['House']
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mne.datasets import sample
from mne import read_evokeds
house_evoked = house_epochs.average()
face_evoked = face_epochs.average()
"""
Explanation: This is a dataset that has been collected and cleaned in the app. It is comprised of two 2-minute runs of the Faces Houses experiment and, although it doesn't show a classic N170, there is some interesting event-related potential information in the data. It actually looks like there may be P300s.
End of explanation
"""
# Computation
conditions = OrderedDict({key: [value] for (key, value) in epochs.event_id.items()})
# Output
X, y = utils.plot_conditions(epochs, ch_ind=4, conditions=conditions,
ci=97.5, n_boot=1000, title='', diff_waveform=None)
"""
Explanation: The way I see it, the students have to demonstrate that they can make three types of comparisons on this screen: comparing between conditions (faces and houses, etc.), comparing between channels, and comparing between subjects. This gets a bit complicated because the comparisons between channels and subjects should really be based on the difference between conditions that they've found by comparing between conditions.
I've broken up the different visualizations into these different 'jobs'. I'm not sure which order these tasks should necessarily be completed, but I'm putting them in the order that makes most sense for how I would explore a dataset.
Job 1: Comparing Between Conditions
Averaged Epoch Difference Plot
This is our standard, which is good at showing differences between conditions with statistical significance indicated by the colored background confidence interval.
However, it kind of sucks because it only displays one channel at a time
End of explanation
"""
face_epochs.plot_image(title="Faces T7",picks=[3])
house_epochs.plot_image(title="Houses T7",picks=[3])
"""
Explanation: Epochs Spectrogram
This is the first new interesting viz we could consider using. The colors correspond to the amplitude of the EEG, with red being positive and blue being negative. The epochs are all lined up in rows on the y axis and ERPs should be visible as vertical colored bands.
Unfortunately, here we also can only view one plot at a time
End of explanation
"""
face_evoked.plot_joint(times="peaks", title="Faces");
house_evoked.plot_joint(times="peaks", title="Houses");
"""
Explanation: Joint Plot
This is a very cool plot that I just discovered. Essentially, it has all the potential comparisons we could make in one plot. The colored lines show each channels average epoch stacked on top of each other ('Butterfly plot'). In addition, there is an algorithm that will automatically find peaks (3 by default) in the data, mark those times, and display a topomap of the EEG at that point in time.
I think this could be a great plot to use, but it may be a little overwhelming without much instruction.
End of explanation
"""
viz.plot_compare_evokeds([house_evoked, face_evoked]);
"""
Explanation: Evoked power comparison
This is an interesting plot that could be used for comparing directly between conditions. It averages over all channels to compute global field power and plots the difference between conditions
Will have to spend more time figuring out how to add legend. A nice example is here: https://martinos.org/mne/stable/auto_tutorials/plot_visualize_evoked.html
End of explanation
"""
diff_evoked = combine_evoked([face_evoked, house_evoked], weights=[1, -1])
house_evoked.plot_topo();
face_evoked.plot_topo();
diff_evoked.plot_topo();
"""
Explanation: Job 2: Comparing Between Channels
These should help students pick out which channels are showing the most powerful ERPs and interesting effects
Topographic Map of Averaged Epochs
I think this is a fun one that will correspond nicely with our head diagram. All the averaged epochs are spaced out like the electrodes are.
Unfortunately, there isn't a convenient way to plot both conditions together on the same map. The best I can do is plot the difference epochs
End of explanation
"""
house_evoked.plot_image(titles="Houses", time_unit="s");
face_evoked.plot_image(titles="Faces", time_unit="s");
"""
Explanation: Averaged Epochs Spectrogram
I think this is the easiest way to pick out which channels contained an ERP. However, it's a little bit more complex than a spatial map. Not sure how to go from channel index to channel name, though (I think it's possible, though)
End of explanation
"""
# set time instants in seconds (from 100 to 500 ms in steps of 25 ms)
times = np.arange(0.1, 0.5, 0.025)
# average over a 25 ms window to stabilize topographies
house_evoked.plot_topomap(times, title='Houses', ch_type='eeg', average=0.025, time_unit='s');
face_evoked.plot_topomap(times, title='Faces', ch_type='eeg', average=0.025, time_unit='s');
"""
Explanation: Topomap Series
This is a decent way to show what's happening at the different electrode locations during the epoch. However, I think it's prone to displaying a lot of information that students may misinterpret. This kind of viz will also be nearly useless for the Muse because of its low channel count
End of explanation
"""
|
SunPower/pvfactors | docs/tutorials/Run_full_parallel_simulations.ipynb | bsd-3-clause | # Import external libraries
import os
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import pandas as pd
import warnings
# Settings
%matplotlib inline
np.set_printoptions(precision=3, linewidth=300)
warnings.filterwarnings('ignore')
# Paths
LOCAL_DIR = os.getcwd()
DATA_DIR = os.path.join(LOCAL_DIR, 'data')
filepath = os.path.join(DATA_DIR, 'test_df_inputs_MET_clearsky_tucson.csv')
"""
Explanation: Run full simulations in parallel
In this section, we will learn how to:
run full timeseries simulations in parallel (with multiprocessing) using the run_parallel_engine() function
Note: for a better understanding, it might help to read the previous tutorial section on running full timeseries simulations sequentially before going through the following
Imports and settings
End of explanation
"""
def export_data(fp):
tz = 'US/Arizona'
df = pd.read_csv(fp, index_col=0)
df.index = pd.DatetimeIndex(df.index).tz_convert(tz)
return df
df = export_data(filepath)
df_inputs = df.iloc[:48, :]
# Plot the data
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
df_inputs[['dni', 'dhi']].plot(ax=ax1)
df_inputs[['solar_zenith', 'solar_azimuth']].plot(ax=ax2)
df_inputs[['surface_tilt', 'surface_azimuth']].plot(ax=ax3)
plt.show()
# Use a fixed albedo
albedo = 0.2
"""
Explanation: Get timeseries inputs
End of explanation
"""
pvarray_parameters = {
'n_pvrows': 3, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'gcr': 0.4, # ground coverage ratio
'rho_front_pvrow': 0.01, # pv row front surface reflectivity
'rho_back_pvrow': 0.03 # pv row back surface reflectivity
}
"""
Explanation: Prepare PV array parameters
End of explanation
"""
# Choose the number of workers
n_processes = 3
# import function to run simulations in parallel
from pvfactors.run import run_parallel_engine
# import the report building class for the simulation run
from pvfactors.report import ExampleReportBuilder
# run simulations in parallel mode
report = run_parallel_engine(ExampleReportBuilder, pvarray_parameters, df_inputs.index,
df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo, n_processes=n_processes)
# make a dataframe out of the report
df_report = pd.DataFrame(report, index=df_inputs.index)
df_report.iloc[6:11, :]
f, ax = plt.subplots(1, 2, figsize=(10, 3))
df_report[['qinc_front', 'qinc_back']].plot(ax=ax[0])
df_report[['iso_front', 'iso_back']].plot(ax=ax[1])
plt.show()
"""
Explanation: Run simulations in parallel with run_parallel_engine()
Running full mode timeseries simulations in parallel is done using the run_parallel_engine().
In the previous tutorial section on running timeseries simulations, we showed that a function needed to be passed in order to build a report out of the timeseries simulation.
For the parallel mode, it will not be very different but we will need to pass a class (or an object) instead. The reason is that python multiprocessing uses pickling to run different processes, but python functions cannot be pickled, so a class or an object with the necessary methods needs to be passed instead in order to build a report.
An example of a report building class is provided in the report.py module of the pvfactors package.
End of explanation
"""
class NewReportBuilder(object):
"""A class is required to build reports when running calculations with
multiprocessing because of python constraints"""
@staticmethod
def build(pvarray):
# Return back side qinc of rightmost PV row
return {'total_inc_back': pvarray.ts_pvrows[1].back.get_param_weighted('qinc').tolist()}
@staticmethod
def merge(reports):
"""Works for dictionary reports"""
report = reports[0]
# Merge other reports
keys_report = list(reports[0].keys())
for other_report in reports[1:]:
for key in keys_report:
report[key] += other_report[key]
return report
# run simulations in parallel mode using the new reporting class
new_report = run_parallel_engine(NewReportBuilder, pvarray_parameters, df_inputs.index,
df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo, n_processes=n_processes)
# make a dataframe out of the report
df_new_report = pd.DataFrame(new_report, index=df_inputs.index)
f, ax = plt.subplots(figsize=(5, 3))
df_new_report.plot(ax=ax)
plt.show()
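The merge logic in NewReportBuilder above can be exercised on plain dictionaries, independent of pvfactors — a minimal sketch with hypothetical per-process reports:

```python
# Hypothetical per-process reports, shaped like NewReportBuilder.build's output
reports = [
    {'total_inc_back': [1.0, 2.0]},
    {'total_inc_back': [3.0]},
    {'total_inc_back': [4.0, 5.0]},
]

merged = reports[0]
for other in reports[1:]:
    for key in merged:
        merged[key] += other[key]   # list concatenation keeps timestamps in order

assert merged == {'total_inc_back': [1.0, 2.0, 3.0, 4.0, 5.0]}
```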
"""
Explanation: The results above are consistent with running the simulations without parallel model (this is also tested in the package).
Building a report for parallel mode
For parallel simulations, a class (or object) that builds the report needs to be specified, otherwise nothing will be returned by the simulation.
Here is an example of a report building class that will return the total incident irradiance ('qinc') on the back surface of the rightmost PV row. A good way to get started building the reporting class is to use the example provided in the report.py module of the pvfactors package.
Another important action of the class is to merge the different reports resulting from the parallel simulations: since the users decide how the reports are built, the users are also responsible for specifying how to merge the reports after a parallel run.
The static method that builds the reports needs to be named build(report, pvarray).
And the static method that merges the reports needs to be named merge(reports).
End of explanation
"""
|
bakanchevn/DBCourseMirea2017 | Неделя 2/Задание в классе/Лабораторная 2-1-Решение.ipynb | gpl-3.0 | %%sql
SELECT t.name
FROM tracks t
INNER JOIN genres g
ON t.genreid = g.genreid
INNER JOIN media_types m
ON m.mediatypeid = t.mediatypeid
WHERE g.name = 'Rock'
AND m.name LIKE 'MPEG%'
ORDER BY t.bytes desc
limit 10
"""
Explanation: Task 1
List the 10 largest tracks (by size) in the Rock genre and the MPEG media format.
End of explanation
"""
%%sql
SELECT distinct ar.name, t.name, a.title
FROM tracks t
INNER JOIN albums a
ON a.albumid = t.albumid
INNER JOIN artists ar
ON a.artistid = ar.artistid
INNER JOIN invoice_items i
ON i.trackid = t.trackid
INNER JOIN invoices ii
on ii.invoiceid = i.invoiceid
INNER JOIN customers c
ON ii.customerid = c.customerid
INNER JOIN genres g
ON g.genreid = t.genreid
WHERE c.company like '%Microsoft%'
AND g.name = 'Rock'
"""
Explanation: Task 2
List the names of all artists, their songs, and the titles of their albums for all Rock tracks purchased by Microsoft employees.
End of explanation
"""
%%sql
WITH A
AS
(
select g.genreid, g.name, m.mediatypeid, m.name as m_name, i.unitprice
from tracks t
inner join media_types m
on t.mediatypeid = m.mediatypeid
inner join genres g
on g.genreid = t.genreid
inner join invoice_items i
on i.trackid = t.trackid
)
select name, m_name, avg(unitprice) as avg_unitprice, count(*) as total_cnt
from A
where not exists
(
select *
from A Inn
where A.genreid = Inn.genreid
and A.mediatypeid = Inn.mediatypeid
and unitprice <= 1.5)
group by genreid, mediatypeid, name, m_name
"""
Explanation: Task 3
For each (genre, media type) pair, output the average track price and the total count, showing only the pairs in which every track costs more than $1.50.
End of explanation
"""
%%sql
WITH A
AS
(
select c.company, count(i.invoiceid) as cnt
from customers c
inner join invoices i
on i.customerid = c.customerid
where company is not null
group by company
)
SELECT *
FROM A
where cnt in (select min(cnt) from A union select max(cnt) from A )
"""
Explanation: Task 4
List the companies that placed the maximum and the minimum number of orders.
End of explanation
"""
%%sql
select c.company, count(*) as cnt
from tracks t
inner join genres g
on t.genreid = g.genreid
inner join invoice_items ii
on t.trackid = ii.trackid
inner join invoices i
on i.invoiceid = ii.invoiceid
inner join customers c
on c.customerid = i.customerid
where g.name = 'Pop'
and c.company is not null
group by c.company
"""
Explanation: Task 5
For each company, output the total number of Pop-genre tracks purchased.
End of explanation
"""
%%sql
Select avg(sb) as avg_sb
from
(
select al.title, sum(bytes) as sb
from albums al
inner join tracks t
on al.albumid = t.albumid
group by al.title
)
"""
Explanation: Task 6
Output the average album size in bytes.
End of explanation
"""
|
angelmtenor/data-science-keras | machine_translation.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import helper
import keras
helper.info_gpu()
np.random.seed(9)
%matplotlib inline
%load_ext autoreload
%autoreload 2
"""
Explanation: Machine Translation
Recurrent Neural Network that accepts English text as input and returns the French translation
Natural Language Processing
This notebook is based on the Natural Language Processing capstone project of the Udacity's Artificial Intelligence Nanodegree.
The dataset is a reduced sentence set taken from WMT. The small_vocab_en file contains English sentences with their French translations in the small_vocab_fr file. The punctuations have been delimited using spaces already, and all the text have been converted to lowercase.
End of explanation
"""
with open('data/small_vocab_en', "r") as f:
english_sentences = f.read().split('\n')
with open('data/small_vocab_fr', "r") as f:
french_sentences = f.read().split('\n')
print("Number of sentences: {}\n".format(len(english_sentences)))
for i in range(2):
print("sample {}:".format(i))
print("{} \n{} \n".format(english_sentences[i], french_sentences[i]))
import collections
words = dict()
words["English"] = [word for sentence in english_sentences for word in sentence.split()]
words["French"] = [word for sentence in french_sentences for word in sentence.split()]
for key, value in words.items():
print("{}: {} words, {} unique words".format(key,
len(value), len(collections.Counter(value))))
"""
Explanation: 1. Load and prepare the data
End of explanation
"""
from keras.preprocessing.text import Tokenizer
def tokenize(x):
"""
:param x: List of sentences/strings to be tokenized
:return: Tuple of (tokenized x data, tokenizer used to tokenize x)
"""
tokenizer = Tokenizer()
tokenizer.fit_on_texts(x)
tokens = tokenizer.texts_to_sequences(x)
return tokens, tokenizer
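For intuition only, here is a plain-Python sketch that mimics what word-id tokenization produces. Note this is a simplification: ids are assigned in first-seen order here, whereas Keras's Tokenizer ranks words by frequency; in both cases ids start at 1, with 0 reserved for padding.

```python
def tokenize_plain(sentences):
    # Simplified stand-in for Keras's Tokenizer, for illustration only
    word_to_id = {}
    tokens = []
    for sentence in sentences:
        ids = []
        for word in sentence.lower().split():
            if word not in word_to_id:
                word_to_id[word] = len(word_to_id) + 1  # ids start at 1
            ids.append(word_to_id[word])
        tokens.append(ids)
    return tokens, word_to_id

tokens, vocab = tokenize_plain(['new jersey is new', 'jersey is'])
assert tokens == [[1, 2, 3, 1], [2, 3]]
assert vocab == {'new': 1, 'jersey': 2, 'is': 3}
```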
"""
Explanation: Tokenize
Low complexity word to numerical word ids
End of explanation
"""
from keras.preprocessing.sequence import pad_sequences
def pad(x, length=None):
"""
:param x: List of sequences.
:param length: Length to pad the sequence to. If None, longest sequence length in x.
:return: Padded numpy array of sequences
"""
return pad_sequences(x, maxlen=length, padding='post')
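For intuition, the same post-padding behavior sketched in plain Python, without Keras (a simplified stand-in, not the library implementation):

```python
def pad_plain(sequences, length=None):
    # Pad each sequence with trailing zeros up to `length`
    # (defaults to the longest sequence), mirroring padding='post'
    if length is None:
        length = max(len(seq) for seq in sequences)
    return [list(seq) + [0] * (length - len(seq)) for seq in sequences]

assert pad_plain([[1, 2, 3], [4]]) == [[1, 2, 3], [4, 0, 0]]
assert pad_plain([[1], [2, 2]], length=4) == [[1, 0, 0, 0], [2, 2, 0, 0]]
```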
"""
Explanation: Padding
When batching the sequence of word ids together, each sequence needs to be the same length. Since sentences are dynamic in length, we can add padding to the end of the sequences to make them the same length.
End of explanation
"""
def preprocess(x, y, length=None):
"""
:param x: Feature List of sentences
:param y: Label List of sentences
:return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
"""
preprocess_x, x_tk = tokenize(x)
preprocess_y, y_tk = tokenize(y)
preprocess_x = pad(preprocess_x, length)
preprocess_y = pad(preprocess_y, length)
# Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dims
preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
return preprocess_x, preprocess_y, x_tk, y_tk
x, y, x_tk, y_tk = preprocess(english_sentences, french_sentences)
print('Data Preprocessed')
"""
Explanation: Preprocess pipeline
End of explanation
"""
# Only the 10 last translations will be predicted
x_train, y_train = x[:-10], y[:-10]
x_test, y_test = x[-10:-1], y[-10:-1] # last sentence removed
test_english_sentences, test_french_sentences = english_sentences[-10:], french_sentences[-10:]
"""
Explanation: Split the data into training and test sets
End of explanation
"""
def logits_to_text(logits, tokenizer, show_pad=True):
"""
Turn logits from a neural network into text using the tokenizer
:param logits: Logits from a neural network
:param tokenizer: Keras Tokenizer fit on the labels
:return: String that represents the text of the logits
"""
index_to_words = {id: word for word, id in tokenizer.word_index.items()}
index_to_words[0] = '<PAD>' if show_pad else ''
return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
"""
Explanation: Ids Back to Text
The function logits_to_text will bridge the gap between the logits from the neural network to the French translation.
End of explanation
"""
from keras.models import Sequential
from keras.layers import GRU, Dense, TimeDistributed, LSTM, Bidirectional, RepeatVector
from keras.layers.embeddings import Embedding
from keras.layers.core import Dropout
from keras.losses import sparse_categorical_crossentropy
def rnn_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build a model with embedding, encoder-decoder, and bidirectional RNN
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
learning_rate = 0.01
model = Sequential()
vector_size = english_vocab_size // 10
model.add(
Embedding(
english_vocab_size, vector_size, input_shape=input_shape[1:], mask_zero=False))
model.add(Bidirectional(GRU(output_sequence_length)))
model.add(Dense(128, activation='relu'))
model.add(RepeatVector(output_sequence_length))
model.add(Bidirectional(GRU(128, return_sequences=True)))
model.add(TimeDistributed(Dense(french_vocab_size, activation="softmax")))
print(model.summary())
model.compile(
loss=sparse_categorical_crossentropy,
        optimizer=keras.optimizers.Adam(learning_rate),
metrics=['accuracy'])
return model
# +1 so that index 0 (the padding token) fits inside the vocabulary size
model = rnn_model(x_train.shape, y_train.shape[1], len(x_tk.word_index) + 1, len(y_tk.word_index) + 1)
"""
Explanation: 2. Recurrent neural network
Model that incorporates encoder-decoder, embedding and bidirectional RNNs:
- An embedding is a vector representation of the word that is close to similar words in $n$-dimensional space, where the $n$ represents the size of the embedding vectors
- The encoder creates a matrix representation of the sentence
- The decoder takes this matrix as input and predicts the translation as output
End of explanation
"""
print('Training...')
callbacks = [keras.callbacks.EarlyStopping(monitor='val_acc', patience=3, verbose=1)]
%time history = model.fit(x_train, y_train, batch_size=1024, epochs=50, verbose=0, \
validation_split=0.2, callbacks=callbacks)
helper.show_training(history)
"""
Explanation: Train the model
End of explanation
"""
score = model.evaluate(x_test, y_test, verbose=0)
print("Test Accuracy: {:.2f}\n".format(score[1]))
y = model.predict(x_test)
for idx, value in enumerate(y):
print('Sample: {}'.format(test_english_sentences[idx]))
print('Actual: {}'.format(test_french_sentences[idx]))
print('Predicted: {}\n'.format(logits_to_text(value, y_tk, show_pad=False)))
"""
Explanation: Evaluate the model
End of explanation
"""
google/eng-edu | ml/cc/prework/ko/creating_and_manipulating_tensors.ipynb | apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2017 Google LLC.
End of explanation
"""
from __future__ import print_function
import tensorflow as tf
"""
Explanation: # Creating and Manipulating Tensors
Learning Objectives:
* Initialize and assign TensorFlow Variables
* Create and manipulate tensors
* Refresh your knowledge of addition and multiplication in linear algebra (consult an introduction to matrix addition and matrix multiplication if these topics are new to you)
* Familiarize yourself with basic TensorFlow math and array operations
End of explanation
"""
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create another six-element vector. Each element in the vector will be
# initialized to 1. The first argument is the shape of the tensor (more
# on shapes below).
ones = tf.ones([6], dtype=tf.int32)
# Add the two vectors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
# Create a session to run the default graph.
with tf.Session() as sess:
print(just_beyond_primes.eval())
"""
Explanation: ## Vector Addition
You can perform many typical mathematical operations on tensors (TF API). The following code creates
and manipulates two vectors (1-D tensors), each with exactly six elements.
End of explanation
"""
with tf.Graph().as_default():
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
with tf.Session() as sess:
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.eval())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.eval())
"""
Explanation: ### Tensor Shapes
Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as a list, with the ith element representing the size along dimension i. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions).
For more information, refer to the TensorFlow documentation.
A few basic examples:
End of explanation
"""
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create a constant scalar with value 1.
ones = tf.constant(1, dtype=tf.int32)
# Add the two tensors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
with tf.Session() as sess:
print(just_beyond_primes.eval())
"""
Explanation: ### Broadcasting
In mathematics, you can only perform element-wise operations (e.g. add and equals) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible. TensorFlow supports broadcasting (a concept borrowed from NumPy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting:
If an operand requires a size [6] tensor, a size [1] or a size [] tensor can serve as an operand.
If an operation requires a size [4, 6] tensor, any of the following sizes can serve as an operand:
[1, 6]
[6]
[]
If an operation requires a size [3, 5, 6] tensor, any of the following sizes can serve as an operand:
[1, 5, 6]
[3, 1, 6]
[3, 5, 1]
[1, 1, 1]
[5, 6]
[1, 6]
[6]
[1]
[]
NOTE: When a tensor is broadcast, its entries are conceptually copied. (They are not actually copied for performance reasons; broadcasting was invented as a performance optimization.)
The full ruleset is well described in the easy-to-read NumPy broadcasting documentation.
The following code performs the same tensor addition as before, but using broadcasting:
End of explanation
"""
with tf.Graph().as_default():
# Create a matrix (2-d tensor) with 3 rows and 4 columns.
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# Create a matrix with 4 rows and 2 columns.
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`.
# The resulting matrix will have 3 rows and 2 columns.
matrix_multiply_result = tf.matmul(x, y)
with tf.Session() as sess:
print(matrix_multiply_result.eval())
"""
Explanation: ## Matrix Multiplication
In linear algebra, when multiplying two matrices, the number of columns of the first matrix must
equal the number of rows of the second matrix.
It is valid to multiply a 3x4 matrix by a 4x2 matrix. This will result in a 3x2 matrix.
It is invalid to multiply a 4x2 matrix by a 3x4 matrix.
End of explanation
"""
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 2x8 matrix.
reshaped_2x8_matrix = tf.reshape(matrix, [2,8])
# Reshape the 8x2 matrix into a 4x4 matrix
reshaped_4x4_matrix = tf.reshape(matrix, [4,4])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.eval())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.eval())
"""
Explanation: ## Tensor Reshaping
With tensor addition and matrix multiplication each imposing constraints
on operands, TensorFlow programmers frequently need to reshape tensors.
You can use the tf.reshape method to reshape a tensor.
For example, you can reshape an 8x2 tensor into a 2x8 tensor or a 4x4 tensor.
End of explanation
"""
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 3-D 2x2x4 tensor.
reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4])
# Reshape the 8x2 matrix into a 1-D 16-element tensor.
one_dimensional_vector = tf.reshape(matrix, [16])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.eval())
print("1-D vector:")
print(one_dimensional_vector.eval())
"""
Explanation: You can also use tf.reshape to change the number of dimensions (the 'rank') of a tensor.
For example, you could reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor.
End of explanation
"""
# Write your code for Task 1 here.
"""
Explanation: ### Exercise #1: Reshape two tensors in order to multiply them.
The following two vectors are incompatible for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
Reshape these vectors into operands compatible with matrix multiplication.
Then invoke a matrix multiplication operation on the reshaped tensors.
End of explanation
"""
with tf.Graph().as_default(), tf.Session() as sess:
# Task: Reshape two tensors in order to multiply them
# Here are the original operands, which are incompatible
# for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
# We need to reshape at least one of these operands so that
# the number of columns in the first operand equals the number
# of rows in the second operand.
# Reshape vector "a" into a 2-D 2x3 matrix:
reshaped_a = tf.reshape(a, [2,3])
# Reshape vector "b" into a 2-D 3x1 matrix:
reshaped_b = tf.reshape(b, [3,1])
# The number of columns in the first matrix now equals
# the number of rows in the second matrix. Therefore, you
    # can matrix multiply the two operands.
c = tf.matmul(reshaped_a, reshaped_b)
print(c.eval())
# An alternate approach: [6,1] x [1, 3] -> [6,3]
"""
Explanation: ### Solution
Click below for a solution.
End of explanation
"""
g = tf.Graph()
with g.as_default():
# Create a variable with the initial value 3.
v = tf.Variable([3])
# Create a variable of shape [1], with a random initial value,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
"""
Explanation: ## Variables, Initialization and Assignment
So far, all the operations we performed were on static values (tf.constant); calling eval() always returned the same result. TensorFlow allows you to define Variable objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution):
End of explanation
"""
with g.as_default():
with tf.Session() as sess:
try:
v.eval()
except tf.errors.FailedPreconditionError as e:
print("Caught expected error: ", e)
"""
Explanation: One peculiarity of TensorFlow is that variable initialization is not automatic. For example, the following block will cause an error:
End of explanation
"""
with g.as_default():
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
sess.run(initialization)
# Now, variables can be accessed normally, and have values assigned to them.
print(v.eval())
print(w.eval())
"""
Explanation: The easiest way to initialize variables is to call global_variables_initializer. Note the use of Session.run(), which is roughly equivalent to eval().
End of explanation
"""
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# These three prints will print the same value.
print(w.eval())
print(w.eval())
print(w.eval())
"""
Explanation: Once initialized, variables maintain their value within the same session. However, when starting a new session, you will need to re-initialize them:
End of explanation
"""
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# This should print the variable's initial value.
print(v.eval())
assignment = tf.assign(v, [7])
# The variable has not been changed yet!
print(v.eval())
# Execute the assignment op.
sess.run(assignment)
# Now the variable is updated.
print(v.eval())
"""
Explanation: To change the value of a variable, use the assign op. Note that simply creating the assign op does not execute it. As with initialization, you have to run the assignment op to update the variable value:
End of explanation
"""
# Write your code for Task 2 here.
"""
Explanation: There are many more topics about variables that we did not cover here, such as loading and saving. To learn more, see the TensorFlow documentation.
### Exercise #2: Simulate 10 rolls of two dice.
Create a dice simulation, which generates a 10x3 2-D tensor in which:
Columns 1 and 2 each hold one throw of a single die.
Column 3 holds the sum of columns 1 and 2 on the same row.
For example, the first row might hold the following values:
Column 1 holds 4
Column 2 holds 3
Column 3 holds 7
Consult the TensorFlow documentation to solve this task.
End of explanation
"""
with tf.Graph().as_default(), tf.Session() as sess:
# Task 2: Simulate 10 throws of two dice. Store the results
# in a 10x3 matrix.
# We're going to place dice throws inside two separate
# 10x1 matrices. We could have placed dice throws inside
# a single 10x2 matrix, but adding different columns of
# the same matrix is tricky. We also could have placed
# dice throws inside two 1-D tensors (vectors); doing so
# would require transposing the result.
dice1 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
dice2 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
# We may add dice1 and dice2 since they share the same shape
# and size.
dice_sum = tf.add(dice1, dice2)
# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(
values=[dice1, dice2, dice_sum], axis=1)
# The variables haven't been initialized within the graph yet,
# so let's remedy that.
sess.run(tf.global_variables_initializer())
print(resulting_matrix.eval())
"""
Explanation: ### Solution
Click below for a solution.
End of explanation
"""
r31415smith/intro_python | @Crash+Course+v0.63.ipynb | lgpl-3.0
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
"""
Explanation: A Crash Course in Python for Scientists
Rick Muller, Sandia National Laboratories
version 0.62, Updated Dec 15, 2016 by Ryan Smith, Cal State East Bay
version 0.63, Updated Oct 2017 by Ryan Smith, Cal State East Bay
Using Python 3.5.2 | Anaconda 4.1.1
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
If skipping to other sections, it's a good idea to run this first:
(If you're using an IPython (jupyter) notebook, or otherwise using notebook file, you hit shift-Enter to evaluate a cell.)
End of explanation
"""
2+2
(50-5*6)/4
"""
Explanation: Table of Contents
0. Preliminary
0.1 Why Python?
0.2 What You Need to Install
1. Python Overview
1.1 Using Python as a Calculator
1.2 Strings
1.3 Lists
1.4 Iteration, Indentation, and Blocks
1.5 Slicing
1.6 Booleans and Truth Testing
1.7 Code Example: The Fibonacci Sequence
1.8 Functions
1.9 Recursion and Factorials
1.10 Two More Data Structures: Tuples and Dictionaries
1.11 Plotting with Matplotlib
1.12 Conclusion of the Python Overview
2. Numpy and Scipy
2.1 Making vectors and matrices
2.2 Linspace, matrix functions, and plotting
2.3 Matrix operations
2.4 Matrix Solvers
2.5 Example: Finite Differences
2.6 One-Dimensional Harmonic Oscillator using Finite Difference
2.7 Special Functions
2.8 Least squares fitting
2.9 Monte Carlo, random numbers, and computing $\pi$
2.10 Numerical Integration
2.11 Fast Fourier Transform and Signal Processing
3. Intermediate Python
3.1 Parsing data output
3.2 Reading in data files
3.3 More Sophisticated String Formatting and Processing
3.4 Optional arguments of a function
3.5 List Comprehensions and Generators
3.6 Factory Functions
3.7 Serialization: Save it for later
3.8 Functional programming
3.9 Object Oriented Programming
4. Speeding Python: Timeit, Profiling, Cython, SWIG, and PyPy
4.1 Timeit
4.2 Profiling
4.3 Other Ways to Speed Python
4.4 Fun: Finding Primes
5. References
6. Acknowledgements
0. Preliminary
0.1 Why Python?
Python is the programming language of choice for many scientists to a large degree because it offers a great deal of power to analyze and model scientific data with relatively little overhead in terms of learning, installation or development time. It is a language you can pick up in a weekend, and use for the rest of your life.
The Python Tutorial is a great place to start getting a feel for the language. To complement this material, I taught a Python Short Course years ago to a group of computational chemists during a time that I was worried the field was moving too much in the direction of using canned software rather than developing one's own methods. I wanted to focus on what working scientists needed to be more productive: parsing output of other programs, building simple models, experimenting with object oriented programming, extending the language with C, and simple GUIs.
I'm trying to do something very similar here, to cut to the chase and focus on what scientists need. In the last year or so, the IPython Project has put together a notebook interface that I have found incredibly valuable. A large number of people have released very good IPython Notebooks that I have taken a huge amount of pleasure reading through. Some ones that I particularly like include:
Rob Johansson's excellent notebooks, including Scientific Computing with Python and Computational Quantum Physics with QuTiP lectures;
XKCD style graphs in matplotlib;
A collection of Notebooks for using IPython effectively
A gallery of interesting IPython Notebooks
I find IPython notebooks an easy way both to get important work done in my everyday job, as well as to communicate what I've done, how I've done it, and why it matters to my coworkers. I find myself endlessly sweeping the IPython subreddit hoping someone will post a new notebook. In the interest of putting more notebooks out into the wild for other people to use and enjoy, I thought I would try to recreate some of what I was trying to get across in the original Python Short Course, updated by 15 years of Python, Numpy, Scipy, Matplotlib, and IPython development, as well as my own experience in using Python almost every day of this time.
IPython notebooks are now called Jupyter notebooks.
0.2 What You Need to Install
There are two branches of current releases in Python: the older-syntax Python 2, and the newer-syntax Python 3. This schizophrenia is largely intentional: when it became clear that some non-backwards-compatible changes to the language were necessary, the Python dev-team decided to go through a five-year (or so) transition, during which the new language features would be introduced and the old language was still actively maintained, to make such a transition as easy as possible. We're now (2016) past the halfway point, and people are moving to python 3.
These notes are written with Python 3 in mind.
If you are new to python, try installing Anaconda Python 3.5 (supported by Continuum) and you will automatically have all libraries installed with your distribution. These notes assume you have a Python distribution that includes:
Python version 3;
Numpy, the core numerical extensions for linear algebra and multidimensional arrays;
Scipy, additional libraries for scientific programming;
Matplotlib, excellent plotting and graphing libraries;
IPython, with the additional libraries required for the notebook interface.
Here are some other options for various ways to run python:
Continuum supports a bundled, multiplatform Python package called Anaconda
Entought Python Distribution, also known as EPD. You can either purchase a license to use EPD, or there is also a free version that you can download and install.
Linux Most distributions have an installation manager. Redhat has yum, Ubuntu has apt-get. To my knowledge, all of these packages should be available through those installers.
Mac I use Macports, which has up-to-date versions of all of these packages.
Windows The PythonXY package has everything you need: install the package, then go to Start > PythonXY > Command Prompts > IPython notebook server.
Cloud This notebook is currently running on the IPython notebook viewer, which allows the notebook to be viewed but not run interactively.
1. Python Overview
This is a quick introduction to Python. There are lots of other places to learn the language more thoroughly. I have collected a list of useful links, including ones to other learning resources, at the end of this notebook. If you want a little more depth, Python Tutorial is a great place to start, as is Zed Shaw's Learn Python the Hard Way.
The lessons that follow make use of the IPython notebooks. There's a good introduction to notebooks in the IPython notebook documentation that even has a nice video on how to use the notebooks. You should probably also flip through the IPython tutorial in your copious free time.
Briefly, notebooks have code cells (that are generally followed by result cells) and text cells. The text cells are the stuff that you're reading now. The code cells start with "In []:" with some number generally in the brackets. If you put your cursor in the code cell and hit Shift-Enter, the code will run in the Python interpreter and the result will print out in the output cell. You can then change things around and see whether you understand what's going on. If you need to know more, see the IPython notebook documentation or the IPython tutorial.
1.1 Using Python as a Calculator
Many of the things I used to use a calculator for, I now use Python for:
End of explanation
"""
2**3
7/33
"""
Explanation: If you want to raise to an exponent, "^" is reserved for something else, so we use ** instead
End of explanation
"""
from math import sqrt
sqrt(81)
"""
Explanation: There used to be gotchas in division in python 2, like C or Fortran integer division, where division truncates the remainder and returns an integer. In version 3, Python returns a floating point number. If for some reason you are using Python 2, you can fix this by importing the module from the future features:
from __future__ import division
In the last few lines, we have sped by a lot of things that we should stop for a moment and explore a little more fully. We've seen, however briefly, two different data types: integers, also known as whole numbers to the non-programming world, and floating point numbers, also known (incorrectly) as decimal numbers to the rest of the world.
We've also seen the first instance of an import statement. Python has a huge number of libraries included with the distribution. To keep things simple, most of these variables and functions are not accessible from a normal Python interactive session. Instead, you have to import the name. For example, there is a numpy module containing many useful functions. To access, say, the square root function, you can either first import the sqrt function from the math library:
End of explanation
"""
np.sqrt(81)
"""
Explanation: I prefer doing things with numpy, though. We already imported numpy as np in the first command box above, so we can just call it. I prefer things this way so that code is always easily readable: you will know immediately from which library a function is being called.
End of explanation
"""
width = 20
length = 30
area = length*width
area
"""
Explanation: You can define variables using the equals (=) sign:
End of explanation
"""
volume
"""
Explanation: If you try to access a variable that you haven't yet defined, you get an error:
End of explanation
"""
depth = 10
volume = area*depth
volume
"""
Explanation: and you need to define it:
End of explanation
"""
np.exp(-1)
"""
Explanation: A few examples of common functions. Look up numpy library -- it has most mathematical functions you could want.
End of explanation
"""
return = 0
"""
Explanation: You can name a variable almost anything you want. It needs to start with an alphabetical character or "_", can contain alphanumeric charcters plus underscores ("_"). Certain words, however, are reserved for the language:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Trying to define a variable using one of these will result in a syntax error:
End of explanation
"""
'Hello, World!'
"""
Explanation: The Python Tutorial has more on using Python as an interactive shell. The IPython tutorial makes a nice complement to this, since IPython has a much more sophisticated interactive shell.
1.2 Strings
Strings are lists of printable characters, and can be defined using either single quotes
End of explanation
"""
"Hello, World!"
"""
Explanation: or double quotes
End of explanation
"""
"He's a Rebel"
'She asked, "How are you today?"'
"""
Explanation: But not both at the same time, unless you want one of the symbols to be part of the string.
End of explanation
"""
greeting = "Hello, World!"
"""
Explanation: Just like the other two data objects we're familiar with (ints and floats), you can assign a string to a variable
End of explanation
"""
print(greeting)
"""
Explanation: The print statement is often used for printing character strings:
End of explanation
"""
print("The area is ",area)
"""
Explanation: But it can also print data types other than strings:
End of explanation
"""
statement = "Hello," + "World!"
print(statement)
"""
Explanation: In the above snippet, the number 600 (stored in the variable "area") is converted into a string before being printed out.
You can use the + operator to concatenate strings together:
End of explanation
"""
statement = "Hello, " + "World!"
print(statement)
"""
Explanation: Don't forget the space between the strings, if you want one there.
End of explanation
"""
print( "This " + "is " + "a " + "longer " + "statement.")
"""
Explanation: You can use + to concatenate multiple strings in a single statement:
End of explanation
"""
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
"""
Explanation: If you have a lot of words to concatenate together, there are other, more efficient ways to do this. But this is fine for linking a few strings together.
1.3 Lists
Very often in a programming language, one wants to keep a group of similar items together. Python does this using a data type called lists.
End of explanation
"""
days_of_the_week[2]
"""
Explanation: You can access members of the list using the index of that item:
End of explanation
"""
days_of_the_week[-1]
"""
Explanation: Python lists, like C, but unlike Fortran, use 0 as the index of the first element of a list. Thus, in this example, the 0 element is "Sunday", 1 is "Monday", and so on. If you need to access the nth element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element:
End of explanation
"""
languages = ["Fortran","C","C++"]
languages.append("Python")
print(languages)
"""
Explanation: You can add additional items to the list using the .append() command:
End of explanation
"""
list(range(10))
"""
Explanation: The range() command is a convenient way to make sequential lists of numbers:
End of explanation
"""
list(range(2,8))
"""
Explanation: Note that range(n) starts at 0 and gives the sequential list of integers less than n. If you want to start at a different number, use range(start,stop)
End of explanation
"""
evens = list(range(0,20,2))
evens
evens[3]
"""
Explanation: The lists created above with range have a step of 1 between elements. You can also give a fixed step size via a third command:
End of explanation
"""
["Today",7,99.3,""]
"""
Explanation: Lists do not have to hold the same data type. For example,
End of explanation
"""
help(len)
len(evens)
"""
Explanation: However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use tuples, which we will learn about below.
You can find out how long a list is using the len() command:
End of explanation
"""
for day in days_of_the_week:
print(day)
"""
Explanation: 1.4 Iteration, Indentation, and Blocks
One of the most useful things you can do with lists is to iterate through them, i.e. to go through each element one at a time. To do this in Python, we use the for statement:
End of explanation
"""
for day in days_of_the_week:
statement = "Today is " + day
print(statement)
"""
Explanation: This code snippet goes through each element of the list called days_of_the_week and assigns it to the variable day. It then executes everything in the indented block (in this case only one line of code, the print statement) using those variable assignments. When the program has gone through every element of the list, it exits the block.
(Almost) every programming language defines blocks of code in some way. In Fortran, one uses END statements (ENDDO, ENDIF, etc.) to define code blocks. In C, C++, and Perl, one uses curly braces {} to define these blocks.
Python uses a colon (":"), followed by indentation level to define code blocks. Everything at a higher level of indentation is taken to be in the same block. In the above example the block was only a single line, but we could have had longer blocks as well:
End of explanation
"""
for i in range(20):
print("The square of ",i," is ",i*i)
"""
Explanation: The range() command is particularly useful with the for statement to execute loops of a specified length:
End of explanation
"""
for letter in "Sunday":
print(letter)
"""
Explanation: 1.5 Slicing
Lists and strings have something in common that you might not suspect: they can both be treated as sequences. You already know that you can iterate through the elements of a list. You can also iterate through the letters in a string:
End of explanation
"""
days_of_the_week[0]
"""
Explanation: This is only occasionally useful. Slightly more useful is the slicing operation, which you can also use on any sequence. We already know that we can use indexing to get the first element of a list:
End of explanation
"""
days_of_the_week[0:2]
"""
Explanation: If we want the list containing the first two elements of a list, we can do this via
End of explanation
"""
days_of_the_week[:2]
"""
Explanation: or simply
End of explanation
"""
days_of_the_week[-2:]
"""
Explanation: If we want the last items of the list, we can do this with negative slicing:
End of explanation
"""
workdays = days_of_the_week[1:5]
print(workdays)
"""
Explanation: which is somewhat logically consistent with negative indices accessing the last elements of the list.
You can do:
End of explanation
"""
day = "Saturday"
abbreviation = day[:3]
print(abbreviation)
"""
Explanation: Since strings are sequences, you can also do this to them:
End of explanation
"""
numbers = list(range(0,21))
evens = numbers[6::2]
evens
"""
Explanation: If we really want to get fancy, we can pass a third element into the slice, which specifies a step length (just like a third argument to the range() function specifies the step):
End of explanation
"""
if day == "Sunday":
print("Sleep in")
else:
print("Go to work")
"""
Explanation: Note that in this example we omitted the stop argument, so that the slice started at 6, went to the end of the list, and took every second element, generating the list of even numbers up to 20 (the last element in the original list).
1.6 Booleans and Truth Testing
We have now learned a few data types. We have integers and floating point numbers, strings, and lists to contain them. We have also learned about lists, a container that can hold any data type. We have learned to print things out, and to iterate over items in lists. We will now learn about boolean variables that can be either True or False.
We invariably need some concept of conditions in programming to control branching behavior, to allow a program to react differently to different situations. If it's Monday, I'll go to work, but if it's Sunday, I'll sleep in. To do this in Python, we use a combination of boolean variables, which evaluate to either True or False, and if statements, that control branching based on boolean values.
For example:
End of explanation
"""
day == "Sunday"
"""
Explanation: (Quick quiz: why did the snippet print "Go to work" here? What is the variable "day" set to?)
Let's take the snippet apart to see what happened. First, note the statement
End of explanation
"""
1 == 2
50 == 2*25
3 < 3.14159
1 == 1.0
1 != 0
1 <= 2
1 >= 1
"""
Explanation: If we evaluate it by itself, as we just did, we see that it returns a boolean value, False. The "==" operator performs equality testing. If the two items are equal, it returns True, otherwise it returns False. In this case, it is comparing two variables, the string "Sunday", and whatever is stored in the variable "day", which, in this case, is the other string "Saturday". Since the two strings are not equal to each other, the truth test has the false value.
The if statement that contains the truth test is followed by a code block (a colon followed by an indented block of code). If the boolean is true, it executes the code in that block. Since it is false in the above example, we don't see that code executed.
The first block of code is followed by an else statement, which is executed if nothing else in the above if statement is true. Since the value was false, this code is executed, which is why we see "Go to work".
Try setting the day equal to "Sunday" and then running the above if/else statement. Did it work as you thought it would?
You can compare any data types in Python:
End of explanation
"""
1 is 1.0
"""
Explanation: We see a few other boolean operators here, all of which which should be self-explanatory. Less than, equality, non-equality, and so on.
Particularly interesting is the 1 == 1.0 test, which is true, since even though the two objects are different data types (integer and floating point number), they have the same value. There is another boolean operator is, that tests whether two objects are the same object:
End of explanation
"""
type(1)
type(1.0)
"""
Explanation: Why is 1 not the same object as 1.0? They are distinct objects, with different data types. You can check the data type:
End of explanation
"""
[1,2,3] == [1,2,4]
"""
Explanation: We can do boolean tests on lists as well:
End of explanation
"""
hours = 5
0 < hours < 24
"""
Explanation: Finally, note that you can also string multiple comparisons together, which can result in very intuitive tests:
End of explanation
"""
if day == "Sunday":
print ("Sleep in")
elif day == "Saturday":
print ("Do chores")
else:
print ("Go to work")
"""
Explanation: If statements can have elif parts ("else if"), in addition to if/else parts. For example:
End of explanation
"""
for day in days_of_the_week:
statement = "On " + day + ":"
print (statement)
if day == "Sunday":
print (" Sleep in")
elif day == "Saturday":
print (" Do chores")
else:
print (" Go to work")
"""
Explanation: Of course we can combine if statements with for loops, to make a snippet that is almost interesting:
End of explanation
"""
bool(1)
bool(0)
bool(["This "," is "," a "," list"])
"""
Explanation: This is something of an advanced topic, but ordinary data types have boolean values associated with them, and, indeed, in early versions of Python there was not a separate boolean object. Essentially, anything that was a 0 value (the integer or floating point 0, an empty string "", or an empty list []) was False, and everything else was true. You can see the boolean value of any data object using the bool() function.
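These boolean values make conditionals on containers read naturally — a small sketch:

```python
items = []
if items:                 # an empty list is "falsy", so this branch is skipped
    print("the list has elements")
else:
    print("the list is empty")

name = ""
if not name:              # empty strings are falsy too
    name = "anonymous"
print(name)
```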
End of explanation
"""
n = 10
sequence = [0,1]
for i in range(2,n): # This is going to be a problem if we ever set n <= 2!
sequence.append(sequence[i-1]+sequence[i-2])
print (sequence)
"""
Explanation: 1.7 Code Example: The Fibonacci Sequence
The Fibonacci sequence is a sequence in math that starts with 0 and 1, and then each successive entry is the sum of the previous two. Thus, the sequence goes 0,1,1,2,3,5,8,13,21,34,55,89,...
A very common exercise in programming books is to compute the Fibonacci sequence up to some number n. First I'll show the code, then I'll discuss what it is doing.
End of explanation
"""
def fibonacci(sequence_length):
"Return the Fibonacci sequence of length *sequence_length*"
sequence = [0,1]
if sequence_length < 1:
print("Fibonacci sequence only defined for length 1 or greater")
return
if 0 < sequence_length < 3:
return sequence[:sequence_length]
for i in range(2,sequence_length):
sequence.append(sequence[i-1]+sequence[i-2])
return sequence
"""
Explanation: Let's go through this line by line. First, we define the variable n, and set it to the integer 10. n is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called sequence, and initialize it to the list with the integers 0 and 1 in it, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements.
We then have a for loop over the list of integers from 2 (the next element of the list) to n (the length of the sequence). After the colon, we see a hash tag "#", and then a comment that if we had set n to some number less than 2 we would have a problem. Comments in Python start with #, and are good ways to make notes to yourself or to a user of your code explaining why you did what you did. Better than the comment here would be to test to make sure the value of n is valid, and to complain if it isn't; we'll try this later.
In the body of the loop, we append to the list an integer equal to the sum of the two previous elements of the list.
After exiting the loop (ending the indentation) we then print out the whole list. That's it!
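The same sequence can also be built with a while loop, growing the list until it reaches the desired length — an equivalent sketch:

```python
n = 10
sequence = [0, 1]
while len(sequence) < n:
    # append the sum of the last two elements
    sequence.append(sequence[-1] + sequence[-2])
print(sequence)
```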
1.8 Functions
We might want to use the Fibonacci snippet with different sequence lengths. We could cut an paste the code into another cell, changing the value of n, but it's easier and more useful to make a function out of the code. We do this with the def statement in Python:
End of explanation
"""
fibonacci(2)
fibonacci(12)
"""
Explanation: We can now call fibonacci() for different sequence_lengths:
End of explanation
"""
help(fibonacci)
"""
Explanation: We've introduced several new features here. First, note that the function itself is defined as a code block (a colon followed by an indented block). This is the standard way that Python delimits things. Next, note that the first line of the function is a single string. This is called a docstring, and is a special kind of comment that is often available to people using the function through the python command line:
End of explanation
"""
from math import factorial
help(factorial)
"""
Explanation: If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function.
Next, note that rather than putting a comment in about what input values lead to errors, we have some testing of these values, followed by a warning if the value is invalid, and some conditional code to handle special cases.
1.9 Recursion and Factorials
Functions can also call themselves, something that is often called recursion. We're going to experiment with recursion by computing the factorial function. The factorial is defined for a positive integer n as
$$ n! = n(n-1)(n-2)\cdots 1 $$
First, note that we don't need to write a function at all, since this is a function built into the standard math library. Let's use the help function to find out about it:
End of explanation
"""
factorial(20)
"""
Explanation: This is clearly what we want.
End of explanation
"""
def fact(n):
if n <= 0:
return 1
return n*fact(n-1)
fact(20)
"""
Explanation: However, if we did want to write a function ourselves, we could do recursively by noting that
$$ n! = n(n-1)!$$
The program then looks something like:
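For comparison, here is an iterative version of the factorial; it returns the same values without the stack of nested function calls that recursion builds up:

```python
def fact_iter(n):
    "Iterative factorial: multiply result by each integer from 2 up to n."
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(fact_iter(20))
```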
End of explanation
"""
t = (1,2,'hi',9.0)
t
"""
Explanation: Recursion can be very elegant, and can lead to very simple programs.
1.10 Two More Data Structures: Tuples and Dictionaries
Before we end the Python overview, I wanted to touch on two more data structures that are very useful (and thus very common) in Python programs.
A tuple is a sequence object like a list or a string. It's constructed by grouping a sequence of objects together with commas, either without brackets, or with parentheses:
End of explanation
"""
t[1]
"""
Explanation: Tuples are like lists, in that you can access the elements using indices:
End of explanation
"""
t.append(7)  # raises AttributeError: tuples have no append method
t[1] = 77    # raises TypeError: tuples don't support item assignment
"""
Explanation: However, tuples are immutable, you can't append to them or change the elements of them:
End of explanation
"""
('Bob',0.0,21.0)
"""
Explanation: Tuples are useful anytime you want to group different pieces of data together in an object, but don't want to create a full-fledged class (see below) for them. For example, let's say you want the Cartesian coordinates of some objects in your program. Tuples are a good way to do this:
End of explanation
"""
positions = [
('Bob',0.0,21.0),
('Cat',2.5,13.1),
('Dog',33.0,1.2)
]
"""
Explanation: Again, it's not a necessary distinction, but one way to distinguish tuples and lists is that tuples are a collection of different things, here a name, and x and y coordinates, whereas a list is a collection of similar things, like if we wanted a list of those coordinates:
End of explanation
"""
def minmax(objects):
minx = 1e20 # These are set to really big numbers
miny = 1e20
for obj in objects:
name,x,y = obj
if x < minx:
minx = x
if y < miny:
miny = y
return minx,miny
x,y = minmax(positions)
print(x,y)
"""
Explanation: Tuples can be used when functions return more than one value. Say we wanted to compute the smallest x- and y-coordinates of the above list of objects. We could write:
End of explanation
"""
x,y = 1,2
y,x = x,y
x,y
"""
Explanation: Here we did two things with tuples you haven't seen before. First, we unpacked an object into a set of named variables using tuple assignment:
>>> name,x,y = obj
We also returned multiple values (minx,miny), which were then assigned to two other variables (x,y), again by tuple assignment. This makes what would have been complicated code in C++ rather simple.
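Tuple unpacking also works directly in the header of a for loop, which often reads better than indexing into each tuple:

```python
positions = [
    ('Bob', 0.0, 21.0),
    ('Cat', 2.5, 13.1),
    ('Dog', 33.0, 1.2)
]
# each tuple is unpacked into three named variables as we loop
for name, x, y in positions:
    print(name, "is at", (x, y))
```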
Tuple assignment is also a convenient way to swap variables:
End of explanation
"""
mylist = [1,2,9,21]
"""
Explanation: Dictionaries are an object called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects:
End of explanation
"""
ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print("Rick's age is ",ages["Rick"])
"""
Explanation: The index in a dictionary is called the key, and the corresponding dictionary entry is the value. A dictionary can use (almost) anything as the key. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}:
End of explanation
"""
dict(Rick=46,Bob=86,Fred=21)
"""
Explanation: There's also a convenient way to create dictionaries without having to quote the keys.
End of explanation
"""
len(t)
len(ages)
"""
Explanation: Notice that in modern Python (3.7 and later), dictionaries preserve the order in which keys were inserted; in older versions the ordering was arbitrary, so you should not rely on any particular ordering across Python versions.
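A few other common dictionary operations — membership testing, lookup with a default, and iterating over key/value pairs:

```python
ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print("Bob" in ages)        # membership tests look at the keys
print(ages.get("Anna", 0))  # .get() returns a default instead of raising KeyError
for name, age in ages.items():
    print(name, "is", age)  # .items() yields (key, value) pairs
```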
The len() command works on both tuples and dictionaries:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: 1.11 Plotting with Matplotlib
We can generally understand trends in data by using a plotting program to chart it. Python has a wonderful plotting library called Matplotlib. The Jupyter notebook interface we are using for these notes has that functionality built in.
First off, it is important to import the library. We did this at the very beginning of this whole jupyter notebook, but here it is in case you've jumped straight here without running the first code line:
End of explanation
"""
fibs = fibonacci(10)
"""
Explanation: The %matplotlib inline command makes it so plots are within this notebook. To plot to a separate window, use instead:
%matplotlib qt
As an example of plotting, we have looked at two different functions, the Fibonacci function, and the factorial function, both of which grow faster than polynomially. Which one grows the fastest? Let's plot them. First, let's generate the Fibonacci sequence of length 10:
End of explanation
"""
facts = []
for i in range(10):
facts.append(factorial(i))
"""
Explanation: Next, let's generate the first 10 factorials.
End of explanation
"""
plt.plot(facts,'-ob',label="factorial")
plt.plot(fibs,'-dg',label="Fibonacci")
plt.xlabel("n")
plt.legend()
"""
Explanation: Now we use the Matplotlib function plot to compare the two.
End of explanation
"""
plt.semilogy(facts,label="factorial")
plt.semilogy(fibs,label="Fibonacci")
plt.xlabel("n")
plt.legend()
"""
Explanation: The factorial function grows much faster. In fact, you can't even see the Fibonacci sequence on this scale. It's not entirely surprising: a function that multiplies by n at each step is bound to outgrow one where each term is only about 1.6 times the previous one (the golden ratio).
Let's plot these on a semilog plot so we can see them both a little more clearly:
End of explanation
"""
import this
"""
Explanation: There are many more things you can do with Matplotlib. We'll be looking at some of them in the sections to come. In the meantime, if you want an idea of the different things you can do, look at the Matplotlib Gallery. Rob Johansson's IPython notebook Introduction to Matplotlib is also particularly good.
1.12 Conclusion of the Python Overview
There is, of course, much more to the language than we've covered here. I've tried to keep this brief enough so that you can jump in and start using Python to simplify your life and work. My own experience in learning new things is that the information doesn't "stick" unless you try and use it for something in real life.
You will no doubt need to learn more as you go. I've listed several other good references, including the Python Tutorial and Learn Python the Hard Way. Additionally, now is a good time to start familiarizing yourself with the Python Documentation, and, in particular, the Python Language Reference.
Tim Peters, one of the earliest and most prolific Python contributors, wrote the "Zen of Python", which can be accessed via the "import this" command:
End of explanation
"""
import numpy as np
"""
Explanation: No matter how experienced a programmer you are, these are words to meditate on.
2. Numpy and Scipy
Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there a guide to Numpy for Matlab users just for you.)
First off, it is important to import the library. Again, we did this at the very beginning of this whole jupyter notebook, but here it is in case you've jumped straight here without running the first code line:
End of explanation
"""
np.array([1,2,3,4,5,6])
"""
Explanation: 2.1 Making vectors and matrices
Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. You can create vectors from lists using the array command:
End of explanation
"""
np.array([1,2,3,4,5,6],'d')
np.array([1,2,3,4,5,6],'D')
np.array([1,2,3,4,5,6],'i')
"""
Explanation: You can pass in a second argument to array that gives the numeric type. There are a number of types listed here that your matrix can be. Some of these are aliased to single character codes. The most common ones are 'd' (double precision floating point number), 'D' (double precision complex number), and 'i' (int32). Thus,
End of explanation
"""
np.array([[0,1],[1,0]],'d')
"""
Explanation: To build matrices, you can either use the array command with lists of lists:
End of explanation
"""
np.zeros((3,3),'d')
"""
Explanation: You can also form empty (zero) matrices of arbitrary shape (including plain vectors), using the zeros command:
End of explanation
"""
np.zeros(3,'d')
np.zeros((1,3),'d')
"""
Explanation: The first argument is a tuple containing the shape of the matrix, and the second is the data type argument, which follows the same conventions as in the array command. Thus, you can make row vectors:
End of explanation
"""
np.zeros((3,1),'d')
"""
Explanation: or column vectors:
End of explanation
"""
np.identity(4,'d')
"""
Explanation: There's also an identity command that behaves as you'd expect:
End of explanation
"""
np.linspace(0,1)
"""
Explanation: as well as a ones command.
2.2 Linspace, matrix functions, and plotting
The linspace command makes a linear array of points from a starting to an ending value.
End of explanation
"""
np.linspace(0,1,11)
"""
Explanation: If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
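A related function is np.arange, which takes a step size rather than a number of points, and excludes the endpoint (with a float step, rounding can make the exact length surprising, so linspace is usually safer for plotting grids):

```python
import numpy as np
a = np.arange(0, 10, 2)    # start, stop (excluded), step: 0, 2, 4, 6, 8
c = np.linspace(0, 10, 6)  # start, stop (included), number of points: 0, 2, 4, 6, 8, 10
print(a)
print(c)
```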
End of explanation
"""
x = np.linspace(0,2*np.pi)
np.sin(x)
"""
Explanation: linspace is an easy way to make coordinates for plotting. Functions in the numpy library can act on an entire vector (or even a matrix) of points at once. Thus,
End of explanation
"""
plt.plot(x,np.sin(x))
plt.show()
"""
Explanation: In conjunction with matplotlib, this is a nice way to plot things:
End of explanation
"""
0.125*np.identity(3,'d')
"""
Explanation: 2.3 Matrix operations
Matrix objects act sensibly when multiplied by scalars:
End of explanation
"""
np.identity(2,'d') + np.array([[1,1],[1,2]])
"""
Explanation: as well as when you add two matrices together. (However, the matrices have to be the same shape.)
End of explanation
"""
np.identity(2)*np.ones((2,2))
"""
Explanation: Something that confuses Matlab users is that the times (*) operator gives element-wise multiplication rather than matrix multiplication:
End of explanation
"""
np.dot(np.identity(2),np.ones((2,2)))
"""
Explanation: To get matrix multiplication, you need the dot command:
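In Python 3.5 and later there is also the @ operator, which performs the same matrix multiplication with less typing:

```python
import numpy as np
A = np.identity(2)
B = np.ones((2, 2))
print(A @ B)           # equivalent to np.dot(A, B)
print(np.dot(A, B))
```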
End of explanation
"""
v = np.array([3,4],'d')
np.sqrt(np.dot(v,v))
"""
Explanation: dot can also do dot products (duh!):
End of explanation
"""
m = np.array([[1,2],[3,4]])
m.T
"""
Explanation: as well as matrix-vector products.
There are determinant, inverse, and transpose functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object:
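The determinant and inverse live in the np.linalg submodule — a quick sketch:

```python
import numpy as np
m = np.array([[1., 2.], [3., 4.]])
print(np.linalg.det(m))       # determinant: 1*4 - 2*3 = -2
minv = np.linalg.inv(m)
print(minv)
print(np.dot(m, minv))        # recovers the identity, up to rounding error
```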
End of explanation
"""
np.diag([1,2,3,4,5])
"""
Explanation: There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
End of explanation
"""
A = np.array([[1,1,1],[0,2,5],[2,5,-1]])
b = np.array([6,-4,27])
np.linalg.solve(A,b)
"""
Explanation: We'll find this useful later on.
2.4 Matrix Solvers
You can solve systems of linear equations using the solve command in the linear algebra toolbox of the numpy library:
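It's good practice to verify the answer by substituting it back in — multiplying A by the solution should reproduce b:

```python
import numpy as np
A = np.array([[1, 1, 1], [0, 2, 5], [2, 5, -1]])
b = np.array([6, -4, 27])
x = np.linalg.solve(A, b)
print(x)                              # the solution vector
print(np.allclose(np.dot(A, x), b))   # True: A x reproduces b
```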
End of explanation
"""
A = np.array([[13,-4],[-4,7]],'d')
np.linalg.eigvalsh(A)
np.linalg.eigh(A)
"""
Explanation: There are a number of routines to compute eigenvalues and eigenvectors
eigvals returns the eigenvalues of a matrix
eigvalsh returns the eigenvalues of a Hermitian matrix
eig returns the eigenvalues and eigenvectors of a matrix
eigh returns the eigenvalues and eigenvectors of a Hermitian matrix.
End of explanation
"""
def nderiv(y,x):
    "Finite difference derivative of y with respect to x"
    n = len(y)
    d = np.zeros(n,'d') # assume double
    # Use centered differences for the interior points, one-sided differences for the ends
    for i in range(1,n-1):
        d[i] = (y[i+1]-y[i-1])/(x[i+1]-x[i-1])
    d[0] = (y[1]-y[0])/(x[1]-x[0])
    d[n-1] = (y[n-1]-y[n-2])/(x[n-1]-x[n-2])
    return d
"""
Explanation: 2.5 Example: Finite Differences
Now that we have these tools in our toolbox, we can start to do some cool stuff with it. Many of the equations we want to solve in Physics involve differential equations. We want to be able to compute the derivative of functions:
$$ y' = \frac{y(x+h)-y(x)}{h} $$
by discretizing the function $y(x)$ on an evenly spaced set of points $x_0, x_1, \dots, x_n$, yielding $y_0, y_1, \dots, y_n$. Using the discretization, we can approximate the derivative by
$$ y_i' \approx \frac{y_{i+1}-y_{i-1}}{x_{i+1}-x_{i-1}} $$
We can write a derivative function in Python via
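Numpy also ships np.gradient, which computes the same centered-difference derivative (with one-sided differences at the ends) in a single vectorized call:

```python
import numpy as np
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(x)
dy = np.gradient(y, x)   # centered differences inside, one-sided at the boundaries
# the numerical derivative should track cos(x) closely
print(np.max(np.abs(dy - np.cos(x))))
```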
End of explanation
"""
x = np.linspace(0,2*np.pi)
dsin = nderiv(np.sin(x),x)
plt.plot(x,dsin,label='numerical')
plt.plot(x,np.cos(x),label='analytical')
plt.title("Comparison of numerical and analytical derivatives of sin(x)")
plt.legend()
"""
Explanation: Let's see whether this works for our sin example from above:
End of explanation
"""
def Laplacian(x):
h = x[1]-x[0] # assume uniformly spaced points
n = len(x)
M = -2*np.identity(n,'d')
for i in range(1,n):
M[i,i-1] = M[i-1,i] = 1
return M/h**2
x = np.linspace(-3,3)
m = 1.0
ohm = 1.0
T = (-0.5/m)*Laplacian(x)
V = 0.5*(ohm**2)*(x**2)
H = T + np.diag(V)
E,U = np.linalg.eigh(H)
h = x[1]-x[0]
# Plot the Harmonic potential
plt.plot(x,V,color='k')
for i in range(4):
# For each of the first few solutions, plot the energy level:
plt.axhline(y=E[i],color='k',ls=":")
# as well as the eigenfunction, displaced by the energy level so they don't
# all pile up on each other:
plt.plot(x,-U[:,i]/np.sqrt(h)+E[i])
plt.title("Eigenfunctions of the Quantum Harmonic Oscillator")
plt.xlabel("Displacement (bohr)")
plt.ylabel("Energy (hartree)")
"""
Explanation: Pretty close!
2.6 One-Dimensional Harmonic Oscillator using Finite Difference
Now that we've convinced ourselves that finite differences aren't a terrible approximation, let's see if we can use this to solve the one-dimensional harmonic oscillator.
We want to solve the time-independent Schrodinger equation
$$ -\frac{\hbar^2}{2m}\frac{\partial^2\psi(x)}{\partial x^2} + V(x)\psi(x) = E\psi(x)$$
for $\psi(x)$ when $V(x)=\frac{1}{2}m\omega^2x^2$ is the harmonic oscillator potential. We're going to use the standard trick to transform the differential equation into a matrix equation by multiplying both sides by $\psi^*(x)$ and integrating over $x$. This yields
$$ -\frac{\hbar^2}{2m}\int\psi(x)\frac{\partial^2}{\partial x^2}\psi(x)dx + \int\psi(x)V(x)\psi(x)dx = E$$
We will again use the finite difference approximation. The finite difference formula for the second derivative is
$$ y_i'' \approx \frac{y_{i+1}-2y_i+y_{i-1}}{h^2} $$ where $h = x_{i+1}-x_i$ is the (uniform) grid spacing.
We can think of the first term in the Schrodinger equation as the overlap of the wave function $\psi(x)$ with the second derivative of the wave function $\frac{\partial^2}{\partial x^2}\psi(x)$. Given the above expression for the second derivative, we can see if we take the overlap of the states $y_1,\dots,y_n$ with the second derivative, we will only have three points where the overlap is nonzero, at $y_{i-1}$, $y_i$, and $y_{i+1}$. In matrix form, this leads to the tridiagonal Laplacian matrix, which has -2's along the main diagonal, and 1's along the diagonals just above and below it.
The second term leads to a diagonal matrix with $V(x_i)$ on the diagonal elements. Putting all of these pieces together, we get:
End of explanation
"""
from numpy.polynomial.hermite import Hermite
def ho_evec(x,n,m,ohm):
    vec = [0]*9   # Hermite coefficient list: a 1 in slot n selects H_n (works for n <= 8)
    vec[n] = 1
Hn = Hermite(vec)
return (1/np.sqrt(2**n*factorial(n)))*pow(m*ohm/np.pi,0.25)*np.exp(-0.5*m*ohm*x**2)*Hn(x*np.sqrt(m*ohm))
"""
Explanation: We've made a couple of hacks here to get the orbitals the way we want them. First, I inserted a -1 factor before the wave functions, to fix the phase of the lowest state. The phase (sign) of a quantum wave function doesn't hold any information, only the square of the wave function does, so this doesn't really change anything.
But the eigenfunctions as we generate them aren't properly normalized. The reason is that finite difference isn't a real basis in the quantum mechanical sense. It's a basis of Dirac δ functions at each point; we interpret the space between the points as being "filled" by the wave function, but the finite difference basis only has the solution being at the points themselves. We can fix this by dividing the eigenfunctions of our finite difference Hamiltonian by the square root of the spacing, and this gives properly normalized functions.
2.7 Special Functions
The solutions to the Harmonic Oscillator are supposed to be Hermite polynomials. The Wikipedia page has the HO states given by
$$\psi_n(x) = \frac{1}{\sqrt{2^n n!}}
\left(\frac{m\omega}{\pi\hbar}\right)^{1/4}
\exp\left(-\frac{m\omega x^2}{2\hbar}\right)
H_n\left(\sqrt{\frac{m\omega}{\hbar}}x\right)$$
Let's see whether they look like those. There are some special functions in the Numpy library, and some more in Scipy. Hermite Polynomials are in Numpy:
End of explanation
"""
plt.plot(x,ho_evec(x,0,1,1),label="Analytic")
plt.plot(x,-U[:,0]/np.sqrt(h),label="Numeric")
plt.xlabel('x (bohr)')
plt.ylabel(r'$\psi(x)$')
plt.title("Comparison of numeric and analytic solutions to the Harmonic Oscillator")
plt.legend()
"""
Explanation: Let's compare the first function to our solution.
End of explanation
"""
phase_correction = [-1,1,1,-1,-1,1]
for i in range(6):
plt.subplot(2,3,i+1)
plt.plot(x,ho_evec(x,i,1,1),label="Analytic")
plt.plot(x,phase_correction[i]*U[:,i]/np.sqrt(h),label="Numeric")
"""
Explanation: The agreement is almost exact.
We can use the subplot command to put multiple comparisons in different panes on a single plot (run %matplotlib qt on a separate line first to plot in a separate window):
End of explanation
"""
from scipy.special import airy,jn,eval_chebyt,eval_legendre
plt.subplot(2,2,1)
x = np.linspace(-1,1)
Ai,Aip,Bi,Bip = airy(x)
plt.plot(x,Ai)
plt.plot(x,Aip)
plt.plot(x,Bi)
plt.plot(x,Bip)
plt.title("Airy functions")
plt.subplot(2,2,2)
x = np.linspace(0,10)
for i in range(4):
plt.plot(x,jn(i,x))
plt.title("Bessel functions")
plt.subplot(2,2,3)
x = np.linspace(-1,1)
for i in range(6):
plt.plot(x,eval_chebyt(i,x))
plt.title("Chebyshev polynomials of the first kind")
plt.subplot(2,2,4)
x = np.linspace(-1,1)
for i in range(6):
plt.plot(x,eval_legendre(i,x))
plt.title("Legendre polynomials")
# plt.tight_layout()
plt.show()
"""
Explanation: Other than phase errors (which I've corrected with a little hack: can you find it?), the agreement is pretty good, although it gets worse the higher in energy we get, in part because we used only 50 points.
The Scipy module has many more special functions:
End of explanation
"""
raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18"""
"""
Explanation: As well as Jacobi, Laguerre, Hermite polynomials, Hypergeometric functions, and many others. There's a full listing at the Scipy Special Functions Page.
2.8 Least squares fitting
Very often we deal with some data that we want to fit to some sort of expected behavior. Say we have the following:
End of explanation
"""
data = []
for line in raw_data.splitlines():
words = line.split(',')
data.append(list(map(float,words)))
data = np.array(data)
plt.title("Raw Data")
plt.xlabel("Distance")
plt.plot(data[:,0],data[:,1],'bo')
"""
Explanation: There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
End of explanation
"""
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo')
"""
Explanation: Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
End of explanation
"""
params = np.polyfit(data[:,0],np.log(data[:,1]),1)
a = params[0] # the coefficient of x**1
logA = params[1] # the coefficient of x**0
# plot if curious:
# plt.plot(data[:,0],np.log(data[:,1]),'bo')
# plt.plot(data[:,0],data[:,0]*a+logA,'r')
"""
Explanation: For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
$$ y = Ae^{ax} $$
$$ \log(y) = ax + \log(A) $$
Thus, if we fit the log of the data versus x, we should get a straight line with slope $a$, and an intercept that gives the constant $A$.
There's a numpy function called polyfit that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1)
End of explanation
"""
x = np.linspace(1,45)
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo',label='data')
plt.semilogy(x,np.exp(logA)*np.exp(a*x),'r-',label='fit')
plt.legend()
"""
Explanation: Let's see whether this curve fits the data.
End of explanation
"""
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646"""
gdata = []
for line in gauss_data.splitlines():
words = line.split(',')
gdata.append(list(map(float,words)))
gdata = np.array(gdata)
plt.plot(gdata[:,0],gdata[:,1],'bo')
"""
Explanation: If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data:
End of explanation
"""
def gauss(x,A,a): return A*np.exp(a*x**2)
"""
Explanation: This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
First define a general Gaussian function to fit to.
End of explanation
"""
from scipy.optimize import curve_fit
params,conv = curve_fit(gauss,gdata[:,0],gdata[:,1])
x = np.linspace(-1,1)
plt.plot(gdata[:,0],gdata[:,1],'bo')
A,a = params
plt.plot(x,gauss(x,A,a),'r-')
"""
Explanation: Now fit to it using curve_fit:
End of explanation
"""
# from random import random
rands = []
for i in range(100):
rands.append(np.random.random())
"""
Explanation: The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
2.9 Monte Carlo, random numbers, and computing $\pi$
Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The random() function from the numpy library gives pseudorandom numbers uniformly distributed between 0 and 1:
End of explanation
"""
rands = np.random.rand(100)
plt.plot(rands,'o')
"""
Explanation: Or, more elegantly:
End of explanation
"""
mu, sigma = 0, 0.1 # mean and standard deviation
s=np.random.normal(mu, sigma,1000)
"""
Explanation: The legacy np.random functions use the Mersenne Twister algorithm, a highly regarded pseudorandom number generator (newer numpy code favors np.random.default_rng(), which uses the PCG64 generator). There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution:
It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions:
End of explanation
"""
count, bins, ignored = plt.hist(s, 30, density=True)  # "normed" was renamed to "density" in newer Matplotlib
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mu)**2 / (2 * sigma**2) ),
linewidth=2, color='r')
"""
Explanation: We can check the distribution by using the histogram feature, as shown on the help page for numpy.random.normal:
End of explanation
"""
npts = 5000
xs = 2*np.random.rand(npts)-1
ys = 2*np.random.rand(npts)-1
r = xs**2+ys**2
ninside = (r<1).sum()
plt.figure(figsize=(6,6)) # make the figure square
plt.title("Approximation to pi = %f" % (4*ninside/float(npts)))
plt.plot(xs[r<1],ys[r<1],'b.')
plt.plot(xs[r>1],ys[r>1],'r.')
plt.figure(figsize=(8,6)) # change the figsize back to standard size for the rest of the notebook
"""
Explanation: Here's an interesting use of random numbers: compute $\pi$ by taking random numbers as x and y coordinates, and counting how many of them were in the unit circle. For example:
End of explanation
"""
n = 100
total = 0
for k in range(n):
total += pow(-1,k)/(2*k+1.0)
print(4*total)
"""
Explanation: The idea behind the program is that the ratio of the area of the unit circle to the square that inscribes it is $\pi/4$, so by counting the fraction of the random points in the square that are inside the circle, we get increasingly good estimates to $\pi$.
The above code uses some higher level Numpy tricks to compute the radius of each point in a single line, to count how many radii are below one in a single line, and to filter the x,y points based on their radii. To be honest, I rarely write code like this: I find some of these Numpy tricks a little too cute to remember them, and I'm more likely to use a list comprehension (see below) to filter the points I want, since I can remember that.
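For reference, here is the same inside-the-circle count written with a list comprehension, which trades a little speed for readability:

```python
import numpy as np
npts = 5000
pts = 2*np.random.rand(npts, 2) - 1          # random points in the square [-1,1] x [-1,1]
# keep only the points that fall inside the unit circle
inside = [(x, y) for x, y in pts if x**2 + y**2 < 1]
pi_estimate = 4*len(inside)/npts
print(pi_estimate)
```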
As methods of computing $\pi$ go, this is among the worst. A much better method is to use Leibniz's expansion of arctan(1):
$$\frac{\pi}{4} = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}$$
End of explanation
"""
def f(x): return np.exp(-x)
x = np.linspace(0,10)
plt.plot(x,np.exp(-x))
"""
Explanation: If you're interested in another great method, check out Ramanujan's method. This converges so fast you really need arbitrary precision math to display enough decimal places. You can do this with the Python decimal module, if you're interested.
2.10 Numerical Integration
Integration can be hard, and sometimes it's easier to work out a definite integral using an approximation. For example, suppose we wanted to figure out the integral:
$$\int_0^\infty\exp(-x)dx$$
(It turns out that this is equal to 1, as you can work out easily with a pencil :) )
End of explanation
"""
from scipy.integrate import quad
quad(f,0,np.inf)
"""
Explanation: Scipy has a numerical integration routine quad (since sometimes numerical integration is called quadrature), that we can use for this:
End of explanation
"""
from scipy.fftpack import fft,fftfreq
npts = 4000
nplot = int(npts/10)
t = np.linspace(0,120,npts)
def sig(t): return 50*np.sin(2*np.pi*2.0*t) + 20*np.sin(2*np.pi*5.0*t) + 10*np.sin(2*np.pi*8.0*t) + 2*np.random.rand(npts)
Vsignal = sig(t)
FFT = abs(fft(Vsignal))
freqs = fftfreq(npts, t[1]-t[0])
FFT_plot = FFT[0:int(len(freqs)/2)]
freqs_plot = freqs[0:int(len(freqs)/2)]
plt.subplot(211)
plt.plot(t[:nplot], Vsignal[:nplot])
plt.xlabel ('time (s)')
plt.ylabel ('voltage\nmeasured (V)')
plt.subplot(212)
plt.semilogy(freqs_plot,FFT_plot**2,'-')
plt.xlabel ('frequency (Hz)')
plt.ylabel ('power\nspectrum (a.u.)')
plt.ylim([1e-1,np.max(FFT_plot**2)])
plt.tight_layout()
"""
Explanation: The first number in the tuple is the result, the second number is an estimate of the absolute error in the result.
There are also 2d and 3d numerical integrators in Scipy. See the docs for more information.
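For example, scipy.integrate.dblquad handles the 2d case; here's a sketch integrating $e^{-x-y}$ over the first quadrant, where the exact answer is 1. Note that dblquad's integrand takes the inner variable first:

```python
from scipy.integrate import dblquad
import numpy as np

# Integrate exp(-x-y) for x in [0, inf), y in [0, inf); exact value is 1.
# The integrand's first argument is the inner (y) variable.
result, error = dblquad(lambda y, x: np.exp(-x - y),
                        0, np.inf,           # outer (x) limits
                        lambda x: 0,         # inner (y) lower limit
                        lambda x: np.inf)    # inner (y) upper limit
print(result)
```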
2.11 Fast Fourier Transform and Signal Processing
Very often we want to use FFT techniques to help obtain the signal from noisy data. Scipy has several different options for this.
End of explanation
"""
csv = """\
1, -6095.12544083, 0.03686, 1391.5
2, -6095.25762870, 0.00732, 10468.0
3, -6095.26325979, 0.00233, 11963.5
4, -6095.26428124, 0.00109, 13331.9
5, -6095.26463203, 0.00057, 14710.8
6, -6095.26477615, 0.00043, 20211.1
7, -6095.26482624, 0.00015, 21726.1
8, -6095.26483584, 0.00021, 24890.5
9, -6095.26484405, 0.00005, 26448.7
10, -6095.26484599, 0.00003, 27258.1
11, -6095.26484676, 0.00003, 28155.3
12, -6095.26484693, 0.00002, 28981.7
13, -6095.26484693, 0.00002, 28981.7"""
csv
"""
Explanation: There are additional signal processing routines in Scipy (e.g. splines, filtering) that you can read about here.
3. Intermediate Python
3.1 Parsing data output
As more and more of our day-to-day work is being done on and through computers, we increasingly have output that one program writes, often in a text file, that we need to analyze in one way or another, and potentially feed that output into another file.
Suppose we have the following output in CSV (comma separated values) format, a format popularized by spreadsheet programs like Microsoft Excel and increasingly used as a data interchange format in big data applications. How would we parse that?
End of explanation
"""
lines = csv.splitlines()
lines
"""
Explanation: This is a giant string. If we use splitlines(), we get a list in which each line of the original is a separate string:
End of explanation
"""
lines[4].split(",")
"""
Explanation: Splitting is a big concept in text processing. We used splitlines() here, and next we'll use the more general .split(",") function below to split each line into comma-delimited words.
We now want to do three things:
Skip over the lines that don't carry any information
Break apart each line that does carry information and grab the pieces we want
Turn the resulting data into something that we can plot.
To break apart each line, we will use .split(","). Let's see what it does to one of the lines:
End of explanation
"""
help("".split)
"""
Explanation: What does split() do?
End of explanation
"""
for line in lines:
# do something with each line
words = line.split(",")
"""
Explanation: Since the data is now in a list of lines, we can iterate over it, splitting up data where we see a comma:
End of explanation
"""
data = []
for line in csv.splitlines()[2:]:
words = line.split(',')
data.append(list(map(float,words)))
data = np.array(data)
data
"""
Explanation: We need to add these results at each step to a list:
End of explanation
"""
plt.plot(data[:,0],data[:,1],'-o')
plt.xlabel('step')
plt.ylabel('Energy (hartrees)')
plt.title('Convergence of NWChem geometry optimization for Si cluster\n')
"""
Explanation: Let's examine what we just did: first, we used a for loop to iterate over each line. However, we skipped the first two (the lines[2:] only takes the lines starting from index 2), since lines[0] contained the title information, and lines[1] contained underscores. Similarly, [:5] instead would take the first five lines.
We pass the comma string "," into the split function, so that it breaks to a new word every time it sees a comma. Next, to simplify things a bit, we use the map() command to apply a single function (float()) to each item of a list, wrapping the result in list() to get a list back. Finally, we turn the list of lists into a numpy array.
End of explanation
"""
energies = data[:,1]
minE = np.min(energies)
energies_eV = 27.211*(energies-minE)
plt.plot(data[:,0],energies_eV,'-o')
plt.xlabel('step')
plt.ylabel('Energy (eV)')
plt.title('Convergence of NWChem geometry optimization for Si cluster')
"""
Explanation: Hartrees (what most quantum chemistry programs use by default) are really stupid units. We really want this in kcal/mol or eV or something we use. So let's quickly replot this in terms of eV above the minimum energy, which will give us a much more useful plot:
End of explanation
"""
filename= 'DS0004.csv'
data = np.genfromtxt(filename,delimiter=',',skip_header=17 )
x_values = data[:,0]
y_values = data[:,1]
plt.plot(x_values, y_values)
"""
Explanation: The real value in a language like Python is that it makes it easy to take additional steps to analyze data in this fashion, which means you are thinking more about your data, and are more likely to see important patterns.
3.2 Reading in data files
Let's take a look at a perhaps easier approach to a common problem -- you have a data file with some header info and comma-delimited values and you want the data so you can start doing stuff with it. Let's use numpy's genfromtxt()
End of explanation
"""
print("I have 3 errands to run")
"""
Explanation: That was easy! Why didn't we only learn that? Because not every data set is "nice" like that. Better to have some tools for when things aren't working how you'd like them to be. That being said, much data coming from scientific equipment and computational tools can be cast into a format that can be read in through genfromtxt(). For larger data sets, the library pandas might be helpful.
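If you'd rather avoid Numpy for simple files, the standard library's csv module handles the comma splitting (including quoted fields) for you. A minimal sketch, with an in-memory file standing in for a real one:

```python
import csv
import io

# io.StringIO lets us treat a string as a file; with a real file you
# would pass open(filename) to csv.reader instead.
text = "1,-6095.12544083,0.03686\n2,-6095.25762870,0.00732\n"
rows = [[float(word) for word in row] for row in csv.reader(io.StringIO(text))]
print(rows)
```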
3.3 More Sophisticated String Formatting and Processing
Strings are a big deal in most modern languages, and hopefully the previous sections helped underscore how versatile Python's string processing techniques are. We will continue this topic in this section.
We can print out lines in Python using the print command.
End of explanation
"""
"I have 3 errands to run"
"""
Explanation: In IPython we don't even need the print command, since it will display the last expression not assigned to a variable.
End of explanation
"""
a,b,c = 1,2,3
print("The variables are ",1,2,3)
"""
Explanation: print even converts some arguments to strings for us:
End of explanation
"""
print("Pi as a decimal = %d" % np.pi)
print("Pi as a float = %f" % np.pi)
print("Pi with 4 decimal places = %.4f" % np.pi)
print("Pi with overall fixed length of 10 spaces, with 6 decimal places = %10.6f" % np.pi)
print("Pi as in exponential format = %e" % np.pi)
"""
Explanation: As versatile as this is, you typically need more freedom over the data you print out. For example, what if we want to print a bunch of data to exactly 4 decimal places? We can do this using formatted strings.
Formatted strings share a syntax with the C printf statement. We make a string that has some funny format characters in it, and then pass a bunch of variables into the string that fill out those characters in different ways.
For example,
End of explanation
"""
print("The variables specified earlier are %d, %d, and %d" % (a,b,c))
"""
Explanation: We use a percent sign in two different ways here. First, the format character itself starts with a percent sign. %d or %i are for integers, %f is for floats, %e is for numbers in exponential formats. All of the numbers can take number immediately after the percent that specifies the total spaces used to print the number. Formats with a decimal can take an additional number after a dot . to specify the number of decimal places to print.
The other use of the percent sign is after the string, to pipe a set of variables in. You can pass in multiple variables (if your formatting string supports it) by putting a tuple after the percent. Thus,
End of explanation
"""
form_letter = """\
%s
Dear %s,
We regret to inform you that your product did not
ship today due to %s.
We hope to remedy this as soon as possible.
From,
Your Supplier
"""
print(form_letter % ("July 1, 2016","Valued Customer Bob","alien attack"))
"""
Explanation: This is a simple formatting structure that will satisfy most of your string formatting needs. More information on different format symbols is available in the string formatting part of the standard docs.
It's worth noting that Python has since gained more sophisticated string formatting methods (str.format and, in Python 3.6+, f-strings), but the percent style remains popular due to its simplicity and its similarity to C formatting strings.
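For reference, the newer brace-based alternatives cover the same ground:

```python
import numpy as np

# Brace-based formatting: {:.4f} plays the role of %.4f.
print("Pi with 4 decimal places = {:.4f}".format(np.pi))
# f-strings (Python 3.6+) evaluate the expression inside the braces directly.
print(f"Pi in exponential format = {np.pi:e}")
```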
Recall we discussed multiline strings. We can put format characters in these as well, and fill them with the percent sign as before.
End of explanation
"""
form_letter = """\
%(date)s
Dear %(customer)s,
We regret to inform you that your product did not
ship today due to %(lame_excuse)s.
We hope to remedy this as soon as possible.
From,
Your Supplier
"""
print(form_letter % {"date" : "July 1, 2016","customer":"Valued Customer Bob","lame_excuse":"alien attack"})
"""
Explanation: The problem with a long block of text like this is that it's often hard to keep track of what all of the variables are supposed to stand for. There's an alternate format where you can pass a dictionary into the formatted string, and give a little bit more information to the formatted string itself. This method looks like:
End of explanation
"""
nwchem_format = """
start %(jobname)s
title "%(thetitle)s"
charge %(charge)d
geometry units angstroms print xyz autosym
%(geometry)s
end
basis
* library 6-31G**
end
dft
xc %(dft_functional)s
mult %(multiplicity)d
end
task dft %(jobtype)s
"""
"""
Explanation: By providing a little bit more information, you're less likely to make mistakes, like referring to your customer as "alien attack".
As a scientist, you're less likely to be sending bulk mailings to a bunch of customers. But these are great methods for generating and submitting lots of similar runs, say scanning a bunch of different structures to find the optimal configuration for something.
For example, you can use the following template for NWChem input files:
End of explanation
"""
oxygen_xy_coords = [(0,0),(0,0.1),(0.1,0),(0.1,0.1)]
charge = 0
multiplicity = 1
dft_functional = "b3lyp"
jobtype = "optimize"
geometry_template = """\
O %f %f 0.0
H 0.0 1.0 0.0
H 1.0 0.0 0.0"""
for i,xy in enumerate(oxygen_xy_coords):
thetitle = "Water run #%d" % i
jobname = "h2o-%d" % i
geometry = geometry_template % xy
print("---------")
print(nwchem_format % dict(thetitle=thetitle,charge=charge,jobname=jobname,jobtype=jobtype,
geometry=geometry,dft_functional=dft_functional,multiplicity=multiplicity))
"""
Explanation: If you want to submit a sequence of runs to a computer somewhere, it's pretty easy to put together a little script, maybe even with some more string formatting in it:
End of explanation
"""
def my_enumerate(seq):
l = []
for i in range(len(seq)):
l.append((i,seq[i]))
return l
my_enumerate(oxygen_xy_coords)
"""
Explanation: This is a very bad geometry for a water molecule, and it would be silly to run so many geometry optimizations of structures that are guaranteed to converge to the same single geometry, but you get the idea of how you can run vast numbers of simulations with a technique like this.
We used the enumerate function to loop over both the indices and the items of a sequence, which is valuable when you want a clean way of getting both. enumerate is roughly equivalent to:
End of explanation
"""
np.linspace(0,1)
"""
Explanation: Note that enumerate uses a generator (see below), so it doesn't have to create a big list, which makes it faster for really long sequences.
3.4 Optional arguments of a function
You will recall that the linspace function can take either two arguments (for the starting and ending points):
End of explanation
"""
np.linspace(0,1,5)
"""
Explanation: or it can take three arguments, for the starting point, the ending point, and the number of points:
End of explanation
"""
np.linspace(0,1,5,endpoint=False)
"""
Explanation: You can also pass in keywords to exclude the endpoint:
End of explanation
"""
def my_linspace(start,end):
npoints = 50
v = []
d = (end-start)/float(npoints-1)
for i in range(npoints):
v.append(start + i*d)
return v
my_linspace(0,1)
"""
Explanation: Right now, we only know how to specify functions that have a fixed number of arguments. We'll learn how to do the more general cases here.
If we're defining a simple version of linspace, we would start with:
End of explanation
"""
def my_linspace(start,end,npoints = 50):
v = []
d = (end-start)/float(npoints-1)
for i in range(npoints):
v.append(start + i*d)
return v
"""
Explanation: We can add an optional argument by specifying a default value in the argument list:
End of explanation
"""
my_linspace(0,1)
"""
Explanation: This gives exactly the same result if we don't specify anything:
End of explanation
"""
my_linspace(0,1,5)
"""
Explanation: But also let's us override the default value with a third argument:
End of explanation
"""
def my_linspace(start,end,npoints=50,**kwargs):
endpoint = kwargs.get('endpoint',True)
v = []
if endpoint:
d = (end-start)/float(npoints-1)
else:
d = (end-start)/float(npoints)
for i in range(npoints):
v.append(start + i*d)
return v
my_linspace(0,1,5,endpoint=False)
"""
Explanation: We can accept arbitrary keyword arguments by putting a **kwargs catch-all into the function definition:
End of explanation
"""
def my_range(*args):
start = 0
step = 1
if len(args) == 1:
end = args[0]
elif len(args) == 2:
start,end = args
elif len(args) == 3:
start,end,step = args
else:
raise Exception("Unable to parse arguments")
v = []
value = start
while True:
v.append(value)
value += step
if value > end: break
return v
"""
Explanation: What the keyword argument construction does is to take any additional keyword arguments (i.e. arguments specified by name, like "endpoint=False"), and stick them into a dictionary called "kwargs" (you can call it anything you like, but it has to be preceded by two stars). You can then grab items out of the dictionary using the get command, which also lets you specify a default value. I realize it takes a little getting used to, but it is a common construction in Python code, and you should be able to recognize it.
There's an analogous *args that dumps any additional arguments into a list called "args". Think about the range function: it can take one (the endpoint), two (starting and ending points), or three (starting, ending, and step) arguments. How would we define this?
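A common companion trick is forwarding: a function can capture whatever it is given with *args/**kwargs and pass everything along unchanged. A minimal sketch, using a hypothetical forward helper with two built-in functions:

```python
def forward(f, *args, **kwargs):
    # Collect any positional and keyword arguments and hand them to f.
    return f(*args, **kwargs)

print(forward(pow, 2, 5))           # same as pow(2, 5)
print(forward(int, "ff", base=16))  # same as int("ff", base=16)
```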
End of explanation
"""
my_range()
"""
Explanation: Note that we have used a couple of new things here: a break statement, which lets us exit a loop when some condition is met, and a raise statement, which signals an error and, if uncaught, stops the program with an error message. For example:
End of explanation
"""
evens1 = [2*i for i in range(10)]
print(evens1)
"""
Explanation: 3.5 List Comprehensions and Generators
List comprehensions are a streamlined way to make lists. They look something like a list definition, with some logic thrown in. For example:
End of explanation
"""
odds = [i for i in range(20) if i%2==1]
odds
"""
Explanation: You can also put some boolean testing into the construct:
End of explanation
"""
def evens_below(n):
for i in range(n):
if i%2 == 0:
yield i
return
for i in evens_below(9):
print(i)
"""
Explanation: Here i%2 is the remainder when i is divided by 2, so that i%2==1 is true if the number is odd. Even though this is a relatively new addition to the language, it is now fairly common since it's so convenient.
iterators are a way of making virtual sequence objects. Consider a nested loop structure like:
for i in range(1000000):
    for j in range(1000000):
In Python 2, range built a real list of 1,000,000 integers inside the main loop, just to loop over them one at a time. We don't need any of the extra things a list gives us, like slicing or random access; we just need to go through the numbers one at a time.
iterators are a way around this. In Python 2, the xrange function was the iterator version of range: a counter that is stepped through in sequence without ever building the full list. In Python 3, range itself works this way, so the loop above is already efficient, and the same idea dramatically speeds up code whenever a big throwaway list can be replaced by an iterator.
We can define our own iterators using the yield statement:
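Under the hood, a for loop just calls next() on the iterator until it raises StopIteration; you can step through one by hand (a sketch, repeating the definition so it stands alone):

```python
def evens_below(n):
    for i in range(n):
        if i % 2 == 0:
            yield i  # execution pauses here between next() calls

g = evens_below(9)
print(next(g))  # 0
print(next(g))  # 2
print(list(g))  # the remaining values: [4, 6, 8]
```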
End of explanation
"""
list(evens_below(9))
"""
Explanation: We can always turn an iterator into a list using the list command:
End of explanation
"""
evens_gen = (i for i in range(9) if i%2==0)
for i in evens_gen:
print(i)
"""
Explanation: There's a special syntax called a generator expression that looks a lot like a list comprehension:
End of explanation
"""
def gauss(x,A,a,x0):
return A*np.exp(-a*(x-x0)**2)
"""
Explanation: 3.6 Factory Functions
A factory function is a function that returns a function. They have the fancy name lexical closure, which makes you sound really intelligent in front of your CS friends. But, despite the arcane names, factory functions can play a very practical role.
Suppose you want the Gaussian function centered at 0.5, with height 99 and width 1.0. You could write a general function.
End of explanation
"""
def gauss_maker(A,a,x0):
def f(x):
return A*np.exp(-a*(x-x0)**2)
return f
x = np.linspace(0,1)
g = gauss_maker(99.0,20,0.5)
plt.plot(x,g(x))
"""
Explanation: But what if you need a function with only one argument, like f(x) rather than f(x,y,z,...)? You can do this with Factory Functions:
End of explanation
"""
# Data in a json format:
json_data = """\
{
"a": [1,2,3],
"b": [4,5,6],
"greeting" : "Hello"
}"""
import json
loaded_json=json.loads(json_data)
loaded_json
"""
Explanation: Everything in Python is an object, including functions. This means that functions can be returned by other functions. (They can also be passed into other functions, which is also useful, but a topic for another discussion.) In the gauss_maker example, the g function that is output "remembers" the A, a, x0 values it was constructed with, since they're all stored in the local memory space (this is what the lexical closure really refers to) of that function.
Factories are one of the more important of the Software Design Patterns, which are a set of guidelines to follow to make high-quality, portable, readable, stable software. It's beyond the scope of the current work to go more into either factories or design patterns, but I thought I would mention them for people interested in software design.
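One standard-library relative worth knowing is functools.partial, which freezes some arguments of an existing function instead of building a closure by hand:

```python
from functools import partial
import math

def gauss(x, A, a, x0):
    return A*math.exp(-a*(x - x0)**2)

# Freeze A, a, and x0, leaving a one-argument function of x alone.
g = partial(gauss, A=99.0, a=20.0, x0=0.5)
print(g(0.5))  # the peak value, 99.0
```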
3.7 Serialization: Save it for later
Serialization refers to the process of outputting data (and occasionally functions) to a database or a regular file, for the purpose of using it later on. In the very early days of programming languages, this was normally done in regular text files. Python is excellent at text processing, and you probably already know enough to get started with this.
When accessing large amounts of data became important, people developed database software based around the Structured Query Language (SQL) standard. I'm not going to cover SQL here, but, if you're interested, I recommend using the sqlite3 module in the Python standard library.
As data interchange became important, the eXtensible Markup Language (XML) has emerged. XML makes data formats that are easy to write parsers for, greatly simplifying the ambiguity that sometimes arises in the process. Again, I'm not going to cover XML here, but if you're interested in learning more, look into Element Trees, now part of the Python standard library.
Python has a very general serialization format called pickle that can turn any Python object, even a function or a class, into a representation that can be written to a file and read in later. But, again, I'm not going to talk about this, since I rarely use it myself. Again, the standard library documentation for pickle is the place to go.
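For completeness, the pickle round trip looks like this (a minimal sketch; dumps/loads work on in-memory bytes, while dump/load take file objects):

```python
import pickle

data = {"a": [1, 2, 3], "greeting": "Hello"}
blob = pickle.dumps(data)      # serialize to a bytes object
restored = pickle.loads(blob)  # reconstruct an equal object
print(restored == data)        # True
```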
What I am going to talk about is a relatively recent format call JavaScript Object Notation (JSON) that has become very popular over the past few years. There's a module in the standard library for encoding and decoding JSON formats. The reason I like JSON so much is that it looks almost like Python, so that, unlike the other options, you can look at your data and edit it, use it in another program, etc.
Here's a little example:
End of explanation
"""
json.dumps({"a":[1,2,3],"b":[9,10,11],"greeting":"Hola"})
"""
Explanation: Your data sits in something that looks like a Python dictionary, and in a single line of code, you can load it into a Python dictionary for use later.
In the same way, you can, with a single line of code, put a bunch of variables into a dictionary, and then output to a file using json:
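To go all the way to a file on disk, json.dump and json.load take file objects rather than strings (a sketch using the system temporary directory):

```python
import json
import os
import tempfile

data = {"a": [1, 2, 3], "b": [9, 10, 11], "greeting": "Hola"}
fname = os.path.join(tempfile.gettempdir(), "example.json")
with open(fname, "w") as f:
    json.dump(data, f)       # write JSON text to the file
with open(fname) as f:
    restored = json.load(f)  # parse it back into a dictionary
print(restored == data)      # True
```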
End of explanation
"""
from operator import add, mul
add(1,2)
mul(3,4)
"""
Explanation: 3.8 Functional programming
Functional programming is a very broad subject. The idea is to have a series of functions, each of which generates a new data structure from an input, without changing the input structure at all. By not modifying the input structure (something that is called not having side effects), many guarantees can be made about how independent the processes are, which can help parallelization and guarantees of program accuracy. There is a Python Functional Programming HOWTO in the standard docs that goes into more details on functional programming. I just wanted to touch on a few of the most important ideas here.
There is an operator module that has function versions of most of the Python operators. For example:
End of explanation
"""
def doubler(x): return 2*x
doubler(17)
"""
Explanation: These are useful building blocks for functional programming.
The lambda operator allows us to build anonymous functions, which are simply functions that aren't defined by a normal def statement with a name. For example, a function that doubles the input is:
End of explanation
"""
lambda x: 2*x
"""
Explanation: We could also write this as:
End of explanation
"""
another_doubler = lambda x: 2*x
another_doubler(19)
"""
Explanation: And assign it to a function separately:
End of explanation
"""
list(map(float,'1 2 3 4 5'.split()))
"""
Explanation: lambda is particularly convenient (as we'll see below) in passing simple functions as arguments to other functions.
map is a way to repeatedly apply a function to a list:
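filter is map's companion: it keeps only the items for which a function returns True, which pairs naturally with lambda:

```python
nums = [1, 2, 3, 4, 5, 6]
# Keep only the even numbers.
evens = list(filter(lambda x: x % 2 == 0, nums))
print(evens)  # [2, 4, 6]
```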
End of explanation
"""
sum([1,2,3,4,5])
"""
Explanation: reduce repeatedly applies a function of two arguments cumulatively to the items of a list, reducing it to a single value. Python already has a built-in sum function that is one such reduction:
End of explanation
"""
from functools import reduce
def prod(l): return reduce(mul,l)
prod([1,2,3,4,5])
"""
Explanation: We can use reduce to define an analogous prod function:
End of explanation
"""
mystring = "Hi there"
"""
Explanation: 3.9 Object Oriented Programming
We've seen a lot of examples of objects in Python. We create a string object with quote marks:
End of explanation
"""
mystring.split()
mystring.startswith('Hi')
len(mystring)
"""
Explanation: and we have a bunch of methods we can use on the object:
End of explanation
"""
class Schrod1d:
"""\
Schrod1d: Solver for the one-dimensional Schrodinger equation.
"""
def __init__(self,V,start=0,end=1,npts=50,**kwargs):
m = kwargs.get('m',1.0)
self.x = np.linspace(start,end,npts)
self.Vx = V(self.x)
self.H = (-0.5/m)*self.laplacian() + np.diag(self.Vx)
return
def plot(self,*args,**kwargs):
titlestring = kwargs.get('titlestring',"Eigenfunctions of the 1d Potential")
xstring = kwargs.get('xstring',"Displacement (bohr)")
ystring = kwargs.get('ystring',"Energy (hartree)")
if not args:
args = [3]
x = self.x
E,U = np.linalg.eigh(self.H)
h = x[1]-x[0]
# Plot the Potential
plt.plot(x,self.Vx,color='k')
for i in range(*args):
# For each of the first few solutions, plot the energy level:
plt.axhline(y=E[i],color='k',ls=":")
# as well as the eigenfunction, displaced by the energy level so they don't
# all pile up on each other:
plt.plot(x,U[:,i]/np.sqrt(h)+E[i])
plt.title(titlestring)
plt.xlabel(xstring)
plt.ylabel(ystring)
return
def laplacian(self):
x = self.x
h = x[1]-x[0] # assume uniformly spaced points
n = len(x)
M = -2*np.identity(n,'d')
for i in range(1,n):
M[i,i-1] = M[i-1,i] = 1
return M/h**2
"""
Explanation: Object oriented programming simply gives you the tools to define objects and methods for yourself. It's useful anytime you want to keep some data (like the characters in the string) tightly coupled to the functions that act on the data (length, split, startswith, etc.).
As an example, we're going to bundle the functions we did to make the 1d harmonic oscillator eigenfunctions with arbitrary potentials, so we can pass in a function defining that potential, some additional specifications, and get out something that can plot the orbitals, as well as do other things with them, if desired.
End of explanation
"""
square_well = Schrod1d(lambda x: 0*x,m=10)
square_well.plot(4,titlestring="Square Well Potential")
"""
Explanation: The __init__() method specifies what happens when the object is created. The self argument is the object itself, and we don't pass it in explicitly. The only required argument is the function that defines the QM potential. We can also specify additional arguments that define the numerical grid that we're going to use for the calculation.
For example, to do an infinite square well potential, we have a function that is 0 everywhere. We don't have to specify the barriers, since we'll only define the potential in the well, which means that it can't be defined anywhere else.
End of explanation
"""
ho = Schrod1d(lambda x: x**2,start=-3,end=3)
ho.plot(6,titlestring="Harmonic Oscillator")
"""
Explanation: We can similarly redefine the Harmonic Oscillator potential.
End of explanation
"""
def finite_well(x,V_left=1,V_well=0,V_right=1,d_left=10,d_well=10,d_right=10):
V = np.zeros(x.size,'d')
for i in range(x.size):
if x[i] < d_left:
V[i] = V_left
elif x[i] > (d_left+d_well):
V[i] = V_right
else:
V[i] = V_well
return V
fw = Schrod1d(finite_well,start=0,end=30,npts=100)
fw.plot()
"""
Explanation: Let's define a finite well potential:
End of explanation
"""
def triangular(x,F=30): return F*x
tw = Schrod1d(triangular,m=10)
tw.plot()
"""
Explanation: A triangular well:
End of explanation
"""
def tri_finite(x): return finite_well(x)+triangular(x,F=0.025)
tfw = Schrod1d(tri_finite,start=0,end=30,npts=100)
tfw.plot()
"""
Explanation: Or we can combine the two, making something like a semiconductor quantum well with a top gate:
End of explanation
"""
%timeit factorial(20)
"""
Explanation: There's a lot of philosophy behind object oriented programming. Since I'm trying to focus on just the basics here, I won't go into them, but the internet is full of lots of resources on OO programming and theory. The best of this is contained in the Design Patterns book, which I highly recommend.
4. Speeding Python: Timeit, Profiling, Cython, SWIG, and PyPy
The first rule of speeding up your code is not to do it at all. As Donald Knuth said:
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
The second rule of speeding up your code is to only do it if you really think you need to do it. Python has two tools to help with this process: a timing program called timeit, and a very good code profiler. We will discuss both of these tools in this section, as well as techniques to use to speed up your code once you know it's too slow.
4.1 Timeit
timeit helps determine which of two similar routines is faster. Recall that some time ago we wrote a factorial routine, but also pointed out that Python had its own routine built into the math module. Is there any difference in the speed of the two? timeit helps us determine this. For example, timeit tells how long each method takes:
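Outside IPython, the same measurement is available from the standard library's timeit module (a sketch; absolute times vary from machine to machine):

```python
import timeit

# Total wall time, in seconds, for 100000 calls; the setup string
# runs once and is not included in the timing.
t = timeit.timeit("factorial(20)",
                  setup="from math import factorial",
                  number=100000)
print("seconds for 100000 calls:", t)
```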
End of explanation
"""
%timeit fact(20)
"""
Explanation: The little % sign that we have in front of the timeit call is an example of an IPython magic function, which we don't have time to go into here, but it's just some little extra mojo that IPython adds to the functions to make it run better in the IPython environment. You can read more about it in the IPython tutorial.
In any case, the timeit function runs the statement many times over a few loops and tells us that it took about 583 ns, on average, to compute 20!. In contrast:
End of explanation
"""
def evens(n):
"Return a list of even numbers below n"
l = []
for x in range(n):
if x % 2 == 0:
l.append(x)
return l
"""
Explanation: the factorial function we wrote is about a factor of 10 slower. This is because the built-in factorial function is written in C code and called from Python, and the version we wrote is written in plain old Python. A Python program has a lot of stuff in it that make it nice to interact with, but all that friendliness slows down the code. In contrast, the C code is less friendly but more efficient. If you want speed with as little effort as possible, write your code in an easy to program language like Python, but dump the slow parts into a faster language like C, and call it from Python. We'll go through some tricks to do this in this section.
4.2 Profiling
Profiling complements what timeit does by splitting the overall timing into the time spent in each function. It can give us a better understanding of what our program is really spending its time on.
Suppose we want to create a list of even numbers. Our first effort yields this:
End of explanation
"""
import cProfile
cProfile.run('evens(100000)')
"""
Explanation: Is this code fast enough? We find out by running the Python profiler on a longer run:
End of explanation
"""
def evens2(n):
"Return a list of even numbers below n"
return [x for x in range(n) if x % 2 == 0]
import cProfile
cProfile.run('evens2(100000)')
"""
Explanation: This looks okay, 0.05 seconds isn't a huge amount of time, but looking at the profiling shows that the append function is taking almost 20% of the time. Can we do better? Let's try a list comprehension.
End of explanation
"""
def evens3(n):
"Return a list of even numbers below n"
return [x for x in range(n) if x % 2 == 0]
import cProfile
cProfile.run('evens3(100000)')
"""
Explanation: By removing a small part of the code using a list comprehension, we've doubled the overall speed of the code!
It seems like range is still taking some time. In Python 2 the next step would have been to swap in the xrange generator; in Python 3, range is already a lazy sequence object, so evens3 below is identical to evens2 and there is little left to squeeze out here:
End of explanation
"""
def primes(n):
"""\
From python cookbook, returns a list of prime numbers from 2 to < n
>>> primes(2)
[2]
>>> primes(10)
[2, 3, 5, 7]
"""
if n==2: return [2]
elif n<2: return []
s=list(range(3,n+2,2))
mroot = n ** 0.5
half=(n+1)/2-1
i=0
m=3
while m <= mroot:
if s[i]:
j=int((m*m-3)/2)
s[j]=0
while j<half:
s[j]=0
j+=m
i=i+1
m=2*i+3
return [2]+[x for x in s if x]
number_to_try = 1000000
list_of_primes = primes(number_to_try)
print(list_of_primes[10001])
"""
Explanation: This is where profiling can be useful. Trivial changes, guided by the profile, made the code substantially faster, and we wouldn't have thought to look in these places had we not had access to easy profiling. Imagine what you would find in more complicated programs.
4.3 Other Ways to Speed Python
When we compared the fact and factorial functions, above, we noted that C routines are often faster because they're more streamlined. Once we've determined that one routine is a bottleneck for the performance of a program, we can replace it with a faster version by writing it in C. This is called extending Python, and there's a good section in the standard documents. This can be a tedious process if you have many different routines to convert. Fortunately, there are several other options.
Swig (the simplified wrapper and interface generator) is a method to generate binding not only for Python but also for Matlab, Perl, Ruby, and other scripting languages. Swig can scan the header files of a C project and generate Python binding for it. Using Swig is substantially easier than writing the routines in C.
Cython is a C-extension language. You can start by compiling a Python routine into a shared object libraries that can be imported into faster versions of the routines. You can then add additional static typing and make other restrictions to further speed the code. Cython is generally easier than using Swig.
PyPy is the easiest way of obtaining fast code. PyPy compiles Python to a subset of the Python language called RPython that can be efficiently compiled and optimized. Over a wide range of tests, PyPy is roughly 6 times faster than the standard Python Distribution.
4.4 Fun: Finding Primes
Project Euler is a site where programming puzzles are posed that might have interested Euler. Problem 7 asks the question:
By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.
What is the 10,001st prime number?
To solve this we need a very long list of prime numbers. First we'll make a function that uses the Sieve of Eratosthenes to generate all the primes less than n.
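The cookbook version used in this section is terse; for comparison, here is a more transparent boolean-array sieve. simple_sieve is our own illustrative helper, not the cookbook routine (it mirrors the cookbook's convention of returning [2] for n=2):

```python
def simple_sieve(n):
    """Primes from 2 to < n with a plain boolean sieve (clearer, a bit slower)."""
    if n < 3:
        return [2] if n == 2 else []
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # cross off every multiple of p, starting at p*p
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```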
End of explanation
"""
cProfile.run('primes(1000000)')
"""
Explanation: You might think that Python is a bad choice for something like this, but, in terms of time, it really doesn't take long:
End of explanation
"""
|
acuzzio/GridQuantumPropagator | Scripts/Final_cube_analysis.ipynb | gpl-3.0 | import quantumpropagator as qp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib ipympl
plt.rcParams.update({'font.size': 8})
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from ipywidgets import interact,fixed #, interactive, fixed, interact_manual
import ipywidgets as widgets
from matplotlib import cm
import pickle
# name_data_file = '/home/alessio/n-Propagation/newExtrapolated_allCorrection.pickle'
name_data_file = '/home/alessio/n-Propagation/newExtrapolated_gammaExtrExag.pickle'
# dataDict = np.load('/home/alessio/n-Propagation/datanewoneWithNACnow.npy')[()]
# # name_data_file = '/home/alessio/n-Propagation/NAC_2_1_little_exagerated.pickle'
with open(name_data_file, "rb") as input_file:
data = pickle.load(input_file)
%load_ext Cython
# data.keys()
# name_data_file2 = 'NAC_2_1_little_exagerated.pickle'
# with open(name_data_file2, "rb") as input_file:
# data2 = pickle.load(input_file)
# name_data_file3 = 'newExtrapolated_gammaExtrExag.pickle'
# with open(name_data_file3, "rb") as input_file:
# data3 = pickle.load(input_file)
# pot = data['potCube']
# pot2= data2['potCube']
# pot3 = data3['potCube']
# np.all(pot == pot3)
"""
Explanation: Final cube analysis
End of explanation
"""
pot = data['potCube']
data['potCube'].shape
print(pot.shape)
qp.find_numpy_index_minumum(pot), pot[29, 28, 55, 0]
%matplotlib ipympl
pot_difference_AU = pot[15:-15,15:-15,30:-30,2] - pot[15:-15,15:-15,30:-30,3]
#phiC, gamC, theC, phiD, gamD, theD = (27,25,85, 4, 7, 24)
#pot_difference_AU = pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,2] - pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,3]
pot_difference = qp.fromHartoEv(pot_difference_AU)
print(qp.find_numpy_index_minumum(pot_difference))
b = pd.Series(pot_difference.flatten())
b.describe()
b.hist(bins=100)
plt.close('all')
phiC, gamC, theC, phiD, gamD, theD = (27,26,85, 2, 2, 24)
pot_difference_AU = pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,2] - pot[phiC-phiD:phiC+phiD,gamC-gamD:gamC+gamD,theC-theD:theC+theD,3]
pot_difference = qp.fromHartoEv(pot_difference_AU)
print(qp.find_numpy_index_minumum(pot_difference))
b = pd.Series(pot_difference.flatten())
b.describe()
b.hist(bins=100)
%matplotlib ipympl
dp = 7
dg = 7
dt = 20
mask = pot_difference[22-dp:22+dp,22-dg:22+dg,110-dt:110+dt]
c = pd.Series(mask.flatten())
c.describe()
#c.hist()
diff_0_1_all = pot[:,:,:,1]-pot[:,:,:,0]
diff_2_3_all = pot[:,:,:,3]-pot[:,:,:,2]
diff_0_1 = np.zeros_like(diff_0_1_all) + 999
diff_0_1[15:-15,15:-15,30:-30] = diff_0_1_all[15:-15,15:-15,30:-30]
diff_2_3 = np.zeros_like(diff_2_3_all) + 999
diff_2_3[15:-15,15:-15,30:-30] = diff_2_3_all[15:-15,15:-15,30:-30]
save_pot_diff = True
dictio = {}
a = 0
if save_pot_diff:
filename = '/home/alessio/IMPORTANTS/VISUALIZE_ENERGY_DIFFERENCE/PotDiff{:04}.h5'.format(a)
# I reput the zeros out.
dictio['diff'] = qp.fromHartoEv(diff_0_1)
dictio['lab'] = 'Diff 0 1'
qp.writeH5fileDict(filename, dictio)
a = a + 1
if save_pot_diff:
filename = '/home/alessio/IMPORTANTS/VISUALIZE_ENERGY_DIFFERENCE/PotDiff{:04}.h5'.format(a)
# I reput the zeros out.
dictio['diff'] = qp.fromHartoEv(diff_2_3)
dictio['lab'] = 'Diff 2 3'
qp.writeH5fileDict(filename, dictio)
a = 0
for i in range(8):
a = a + 1
thisState = np.zeros_like(diff_0_1_all) + 999
thisState[15:-15,15:-15,30:-30] = pot[15:-15,15:-15,30:-30,i]
dictio = {}
if save_pot_diff:
filename = '/home/alessio/IMPORTANTS/VISUALIZE_ENERGY_DIFFERENCE/Energy{:04}.h5'.format(a)
dictio['diff'] = qp.fromHartoEv(thisState)
dictio['lab'] = i
qp.writeH5fileDict(filename, dictio)
"""
Explanation: VISUALIZE POTENTIAL DIFFERENCE PART
End of explanation
"""
from quantumpropagator import fromLabelsToFloats, labTranformA
phis_ext = labTranformA(data['phis'])
gams_ext = labTranformA(data['gams'])
thes_ext = labTranformA(data['thes'])
phiV_ext, gamV_ext, theV_ext = fromLabelsToFloats(data)
# take step
dphi = phis_ext[0] - phis_ext[1]
dgam = gams_ext[0] - gams_ext[1]
dthe = thes_ext[0] - thes_ext[1]
# take range
range_phi = phis_ext[-1] - phis_ext[0]
range_gam = gams_ext[-1] - gams_ext[0]
range_the = thes_ext[-1] - thes_ext[0]
phis = phis_ext[15:-15]
gams = gams_ext[15:-15]
thes = thes_ext[30:-30]
phiV = phiV_ext[15:-15]
gamV = gamV_ext[15:-15]
theV = theV_ext[30:-30]
header = ' Labels extr. internal extr. dq range\n'
string = 'Phi -> {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}\nGam -> {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}\nThe -> {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f} {:8.4f}'
out = (header + string).format(phiV_ext[-1],phiV_ext[0],phis_ext[-1],phis_ext[0],dphi,range_phi,
gamV_ext[-1],gamV_ext[0],gams_ext[-1],gams_ext[0],dgam,range_gam,
theV_ext[-1],theV_ext[0],thes_ext[-1],thes_ext[0],dthe,range_the)
print(out)
"""
Explanation: Coordinates
End of explanation
"""
nacs = data['smoCube']
# take out zeros
NACS = nacs[15:-15,15:-15,30:-30]
# select the two states
print(NACS.shape, nacs.shape)
pL, gL, tL, sL, dL, coorL = NACS.shape
#%%time
n=10
makeGraph = True
states_to_consider = 2
if makeGraph:
for s1 in range(states_to_consider):
for s2 in range(s1):
a = np.abs(NACS[:,:,:,s1,s2,0].flatten())
binZ = [0.0000000000000001, 0.0000001, 0.000001, 0.00001,0.0001,0.001,0.01,0.1]
# thing here is the integer where I plot the bar (x position)
thing = np.arange(len(binZ)-1)
label_names = [ '{}'.format(x) for x in binZ ]
counts, bins = np.histogram(a,bins=binZ)
fig, ax0 = plt.subplots(1,1)
ax0.bar(thing,counts)
plt.xticks(thing,label_names)
plt.title('Nacs values between states {} {}'.format(s1,s2))
for xy in zip(thing, counts):
ax0.annotate('{}'.format(xy[1]), xy=xy)
cart = 0
s1 = 5
s2 = 4
p = 22
g=5
t=77
elem = np.abs(NACS[p,g,t,s1,s2,cart])
neighbors = np.abs(np.array([NACS[p+1,g,t,s1,s2,cart],
NACS[p-1,g,t,s1,s2,cart],
NACS[p,g+1,t,s1,s2,cart],
NACS[p,g-1,t,s1,s2,cart],
NACS[p,g,t+1,s1,s2,cart],
NACS[p,g,t-1,s1,s2,cart]]))
lol = neighbors - elem
differences = np.amin(lol)
print('{} {} {} {}'.format(elem, neighbors, lol, differences))
print('States({},{}) -> Cube({:2},{:2},{:2}): {:5.3e}'.format(s1,s2,p,g,t,differences))
NACS
cart = 0
for s1 in range(sL):
for s2 in range(s1):
#for p in qp.log_progress(range(pL),every=1,size=(pL)):
for p in range(1,pL-1):
for g in range(1,gL-1):
for t in range(1,tL-1):
elem = np.abs(NACS[p,g,t,s1,s2,cart])
neighbors = np.abs(np.array([NACS[p+1,g,t,s1,s2,cart],
NACS[p-1,g,t,s1,s2,cart],
NACS[p,g+1,t,s1,s2,cart],
NACS[p,g-1,t,s1,s2,cart],
NACS[p,g,t+1,s1,s2,cart],
NACS[p,g,t-1,s1,s2,cart]]))
differences = neighbors - elem
#print('{} {} {}'.format(elem, neighbors, differences))
if np.all(differences > 0.0001):
print('States({},{}) -> Cube({:2},{:2},{:2}): {:5.3e}'.format(s1,s2,p,g,t,differences.min()))
"""
Explanation: NACS ANALYSIS
End of explanation
"""
# AAA is the plane at which I want to study the "only" point
AAA = NACS[10,:,:,1,2,2]
gam_That, the_That = np.unravel_index(AAA.argmin(), AAA.shape)
10, gam_That, the_That
phis[10],gams[gam_That],thes[the_That]
"""
Explanation: NACS visualization
End of explanation
"""
%%cython --annotate --compile-args=-fopenmp --link-args=-fopenmp --force
### #%%cython
### #%%cython --annotate
import numpy as np
cimport numpy as np
cimport cython
from cython.parallel import prange
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
@cython.nonecheck(False)
cdef void neigh(double [:,:,:,:] nacs_2_1, double [:,:,:,::1] bigger_2_1_nacs):
cdef:
int pL = 25, gL = 26, tL = 100,coorL=3
int p1,g1,t1,p2,g2,t2,coor,tuplL,pg
double thresh
thresh = 0.0001
tuplL = pL*gL
for coor in range(coorL):
#for pg in prange(tuplL, nogil=True, schedule='dynamic',num_threads=16):
for pg in range(tuplL):
p1 = pg // gL
g1 = pg % gL
for t1 in range(tL):
# for p2 in range(pL):
# for g2 in range(gL):
# for t2 in range(tL):
# if abs(nacs_2_1[p1,g1,t1,coor]) < 0.000001:
# bigger_2_1_nacs[p1,g1,t1,coor] = nacs_2_1[p1,g1,t1,coor]*100
# elif abs(nacs_2_1[p1,g1,t1,coor]) < 0.00001:
bigger_2_1_nacs[p1,g1,t1,coor] = nacs_2_1[p1,g1,t1,coor]*100
# return(bigger_2_1_nacs)
def neighbor(nacs_2_1, bigger_2_1_nacs):
    neigh(nacs_2_1, bigger_2_1_nacs)  # fills bigger_2_1_nacs in place (neigh returns void)
    return np.asarray(bigger_2_1_nacs)
print('done')
%%time
state1 = 2
state2 = 1
nacs_2_1 = NACS[:,:,:,state1,state2,:]
nacs_other = NACS[:,:,:,state2,state1,:]
print(np.all(nacs_2_1 == -nacs_other))
print(nacs_2_1.shape)
bigger_2_1_nacs = np.zeros_like(nacs_2_1)
neighbor(nacs_2_1,bigger_2_1_nacs)
saveFile = False
dictio = {}
a=0
if saveFile:
for coord in range(3):
filename = '/home/alessio/k-nokick/IMPORTANTS/VISUALIZE_NACS/newNacSmoother/Nac{:04}.h5'.format(a)
# I reput the zeros out.
external = np.pad(bigger_2_1_nacs[:,:,:,coord], ((15,15),(15,15),(30,30)), 'constant')
dictio['NACS'] = external
dictio['state1'] = state1
dictio['state2'] = state2
dictio['cart'] = coord
qp.writeH5fileDict(filename, dictio)
a += 1
filename = '/home/alessio/k-nokick/IMPORTANTS/VISUALIZE_NACS/newNacSmoother/Nac{:04}.h5'.format(a)
dictio['NACS'] = np.pad(nacs_2_1[:,:,:,coord], ((15,15),(15,15),(30,30)), 'constant')
dictio['state1'] = state1
dictio['state2'] = state2
dictio['cart'] = coord
qp.writeH5fileDict(filename, dictio)
a += 1
# PUT TRUE IF YOU WANT TO EXAGERATE AND CHANGE THE NACS
do_it = False
nacs_2_1 = nacs[:,:,:,1,2,:]
bigger_2_1_nacs = np.empty_like(nacs_2_1)
#print(bigger_2_1_nacs.shape)
pL,gL,tL,coorL = bigger_2_1_nacs.shape
for p in qp.log_progress(range(pL), every=1, size=pL):
for g in range(gL):
for t in range(tL):
for coor in range(coorL):
elem = nacs_2_1[p,g,t,coor]
if np.abs(elem) > 0.0001:
first = 2
secon = 4
# proximity(6)
bigger_2_1_nacs[p+1,g,t,coor] = elem/first
bigger_2_1_nacs[p-1,g,t,coor] = elem/first
bigger_2_1_nacs[p,g+1,t,coor] = elem/first
bigger_2_1_nacs[p,g-1,t,coor] = elem/first
bigger_2_1_nacs[p,g,t+1,coor] = elem/first
bigger_2_1_nacs[p,g,t-1,coor] = elem/first
# Corners (8)
bigger_2_1_nacs[p+1,g+1,t+1,coor] = elem/secon # 000
bigger_2_1_nacs[p+1,g+1,t-1,coor] = elem/secon
bigger_2_1_nacs[p+1,g-1,t+1,coor] = elem/secon
bigger_2_1_nacs[p+1,g-1,t-1,coor] = elem/secon # 011
bigger_2_1_nacs[p-1,g+1,t+1,coor] = elem/secon # 000
bigger_2_1_nacs[p-1,g+1,t-1,coor] = elem/secon
bigger_2_1_nacs[p-1,g-1,t+1,coor] = elem/secon
bigger_2_1_nacs[p-1,g-1,t-1,coor] = elem/secon # 011
# Half sides (12)
bigger_2_1_nacs[p+1,g,t+1,coor] = elem/secon
bigger_2_1_nacs[p+1,g,t-1,coor] = elem/secon
bigger_2_1_nacs[p-1,g,t+1,coor] = elem/secon
bigger_2_1_nacs[p-1,g,t-1,coor] = elem/secon
bigger_2_1_nacs[p+1,g+1,t,coor] = elem/secon
bigger_2_1_nacs[p+1,g-1,t,coor] = elem/secon
bigger_2_1_nacs[p-1,g+1,t,coor] = elem/secon
bigger_2_1_nacs[p-1,g-1,t,coor] = elem/secon
bigger_2_1_nacs[p,g+1,t+1,coor] = elem/secon
bigger_2_1_nacs[p,g+1,t-1,coor] = elem/secon
bigger_2_1_nacs[p,g-1,t+1,coor] = elem/secon
bigger_2_1_nacs[p,g-1,t-1,coor] = elem/secon
# 2 distant (6)
bigger_2_1_nacs[p+2,g,t,coor] = elem/secon
bigger_2_1_nacs[p-2,g,t,coor] = elem/secon
bigger_2_1_nacs[p,g+2,t,coor] = elem/secon
bigger_2_1_nacs[p,g-2,t,coor] = elem/secon
bigger_2_1_nacs[p,g,t+2,coor] = elem/secon
bigger_2_1_nacs[p,g,t-2,coor] = elem/secon
#print('{} {} {} {} {}'.format(p,g,t,coor,elem))
else:
bigger_2_1_nacs[p,g,t,coor] = elem
if do_it:
data_new = data
name_data_file_new = 'NAC_2_1_little_exagerated.pickle'
print(data_new.keys())
nacs[:,:,:,1,2,:] = bigger_2_1_nacs
nacs[:,:,:,2,1,:] = -bigger_2_1_nacs
data_new['smoCube'] = nacs
pickle.dump( data_new, open( name_data_file_new, "wb" ) )
"""
Explanation: here we try to make interpolated "unified" NAC values for S_2 - S_1
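The explicit neighbour-by-neighbour assignments in the cell above can be sketched much more compactly with np.roll. spread_peaks is a hypothetical helper, not a line-for-line equivalent of the loop: it keeps, per cell, the strongest attenuated contribution from the 3x3x3 neighbourhood, skips the 2-away shell, and wraps at the edges (harmless while the margins are zero):

```python
import numpy as np

def spread_peaks(field, thresh=1e-4, near=2.0, far=4.0):
    """Smear strong values onto their 3x3x3 neighbourhood (sketch).

    Cells with |value| > thresh contribute value/near to the 6 face
    neighbours and value/far to the rest of the cube; each cell keeps
    the contribution of largest magnitude.
    """
    out = field.copy()
    strong = np.where(np.abs(field) > thresh, field, 0.0)
    for dp in (-1, 0, 1):
        for dg in (-1, 0, 1):
            for dt in (-1, 0, 1):
                if dp == dg == dt == 0:
                    continue
                # face neighbours get /near, edges and corners /far
                factor = near if abs(dp) + abs(dg) + abs(dt) == 1 else far
                shifted = np.roll(strong, (dp, dg, dt), axis=(0, 1, 2)) / factor
                out = np.where(np.abs(shifted) > np.abs(out), shifted, out)
    return out
```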
End of explanation
"""
dipo = data['dipCUBE']
DIPO = dipo[15:-15,15:-15,30:-30]
dipo.shape, DIPO.shape
plt.close('all')
def do3dplot(xs,ys,zss):
'with mesh function'
fig = plt.figure(figsize=(9,9))
ax = fig.add_subplot(111, projection='3d')
X,Y = np.meshgrid(ys,xs)
#ax.set_zlim(-1, 1)
#ax.scatter(X, Y, zss)
ax.plot_surface(X, Y, zss,cmap=cm.coolwarm, linewidth=1, antialiased=False)
fig.canvas.layout.height = '800px'
fig.tight_layout()
def visualize_this_thing(thing,state1,state2,cart,kind,dim):
along = ['X','Y','Z']
print('DIPOLE between state ({},{}) along {} - Doing cut in {} with value ({:8.4f},{:8.4f}) - shape: {}'.format(state1,
state2,
along[cart],
kind,
dimV[kind][dim],
dims[kind][dim],
thing.shape))
if kind == 'Phi':
pot = thing[dim,:,:,cart,state1,state2]
print('Looking at DIPOLE with indexes [{},:,:,{},{},{}]'.format(dim,cart,state1,state2))
do3dplot(gams,thes,pot)
elif kind == 'Gam':
print('Looking at DIPOLE with indexes [:,{},:,{},{},{}]'.format(dim,cart,state1,state2))
pot = thing[:,dim,:,cart,state1,state2]
do3dplot(phis,thes,pot)
elif kind == 'The':
print('Looking at DIPOLE with indexes [:,:,{},{},{},{}]'.format(dim,cart,state1,state2))
pot = thing[:,:,dim,cart,state1,state2]
do3dplot(phis,gams,pot)
dimV = { 'Phi': phiV, 'Gam': gamV, 'The': theV } # real values
dims = { 'Phi': phis, 'Gam': gams, 'The': thes } # for labels
kinds = ['Phi','Gam','The']
def fun_pot2D(kind,state1, state2, cart,dim):
visualize_this_thing(DIPO, state1, state2, cart, kind, dim)
def nested(kinds):
dimensionV = dimV[kinds]
interact(fun_pot2D, kind=fixed(kinds),
state1 = widgets.IntSlider(min=0,max=7,step=1,value=0,continuous_update=False),
state2 = widgets.IntSlider(min=0,max=7,step=1,value=1,continuous_update=False),
cart = widgets.IntSlider(min=0,max=2,step=1,value=2,continuous_update=False),
dim = widgets.IntSlider(min=0,max=(len(dimensionV)-1),step=1,value=0,continuous_update=False))
interact(nested, kinds = ['Gam','Phi','The']);
import ipyvolume as ipv
def do3dplot2(xs,ys,zss):
X,Y = np.meshgrid(ys,xs)
ipv.figure()
ipv.plot_surface(X, zss, Y, color="orange")
ipv.plot_wireframe(X, zss, Y, color="red")
ipv.show()
"""
Explanation: DIPOLES visualization
End of explanation
"""
pot = data['potCube'] - data['potCube'].min()
A = pot
# find the minimum index having the shape
phi_min, gam_min, the_min, state_min = np.unravel_index(A.argmin(), A.shape)
phi_min, gam_min, the_min, state_min
"""
Explanation: Minimum geometry is found by getting the minimum on the ground state potential
End of explanation
"""
nacs.shape
B = np.abs(nacs[:,:,:,1,0,:])  # |NAC| between S1 and S0; axes are (phi, gam, the, cart)
phi_ci, gam_ci, the_ci, cart_ci = np.unravel_index(B.argmax(), B.shape)
np.unravel_index(B.argmax(), B.shape)
phis_ext[16],gams_ext[15],thes_ext[112]
phi_ci, gam_ci, the_ci = [16,15,112]
"""
Explanation: CI geometry by taking the maximum NAC value between 0 and 1
End of explanation
"""
# I start by making a cube of 0.
# boolean that creates (and overwrite) the file
create_mask_file = False
ZERO = np.zeros_like(pot[:,:,:,0])
print(ZERO.shape)
region_a = np.zeros_like(ZERO)
region_b = np.zeros_like(ZERO)
region_c = np.zeros_like(ZERO)
# for pi, p in qp.log_progress(enumerate(phis_n),every=1,size=(len(phis_n))):
# p_linea_a = 18
# g_linea_S = 30
# t_linea_b = 112
p_linea_a, g_linea_S, t_linea_b = 18, 30, 112
m_coeff,q_coeff = 1.7,75
for p,phi in qp.log_progress(enumerate(phis_ext),every=1,size=(len(phis_ext))):
for g,gam in enumerate(gams_ext):
lineValue_theta = m_coeff * g + q_coeff
for t,the in enumerate(thes_ext):
if p > p_linea_a:
region_a[p,g,t] = 1
if p <= p_linea_a:
if t > lineValue_theta:
region_c[p,g,t] = 1
else:
region_b[p,g,t] = 1
# if t > t_linea_b and g < g_linea_S:
# region_c[p,g,t] = 1
# else:
# region_b[p,g,t] = 1
# to paint cubes on the verge I make zero the sides values
if p==0 or g == 0 or t == 0 or p == len(phis_ext)-1 or g == len(gams_ext)-1 or t == len(thes_ext)-1:
region_a[p,g,t] = 0
region_b[p,g,t] = 0
region_c[p,g,t] = 0
regions = [{'label' : 'FC', 'cube': region_a},{'label' : 'reactants', 'cube': region_b},{'label' : 'products', 'cube': region_c}]
if create_mask_file:
print('I created the regions pickle file')
pickle.dump(regions, open('regions.pickle', "wb" ) )
else:
qp.warning("file region NOT written, check the variable 'create_mask_file' if you want to write region file")
"""
Explanation: Product/reactant catcher
Here I want to generate the cubes of 1 and 0 to catch different regions of my cube.
So, now I want to do this. I want to create several cubes with different regions (basically cubes of 1s and 0s): A is the FC region, B is the reactant region and C is the product region.
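The triple loop below can also be vectorised with np.indices. A sketch, with the grid sizes and cut values taken from this section (edge zeroing omitted):

```python
import numpy as np

# grid sizes and cut parameters as used in this section
n_phi, n_gam, n_the = 55, 56, 160
p_linea_a, m_coeff, q_coeff = 18, 1.7, 75

p, g, t = np.indices((n_phi, n_gam, n_the))
region_a = p > p_linea_a
theta_line = m_coeff * g + q_coeff              # gamma-dependent theta cut
region_c = (p <= p_linea_a) & (t > theta_line)
region_b = (p <= p_linea_a) & (t <= theta_line)

# every grid point falls in exactly one region
assert np.all(region_a.astype(int) + region_b + region_c == 1)
```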
End of explanation
"""
# I start by making a cube of 0.
# boolean that creates (and overwrite) the file
create_adv_mask_files = False
ZERO = np.zeros_like(pot[:,:,:,0])
ones = np.ones_like(pot[:,:,:,0])
print(ZERO.shape)
mask_CI = np.zeros_like(ZERO)
mask_AFC = np.zeros_like(ZERO)
# 18, 30, 112
#p_cube1, p_cube2, g_cube1, g_cube2, t_cube1, t_cube2 = 12, 29, 15, 32, 82, 142
p_CI_cube1, p_CI_cube2, g_CI_cube1, g_CI_cube2, t_CI_cube1, t_CI_cube2 = 12, 29, 15, 32, 82, 142
p_AFC_cube1, p_AFC_cube2, g_AFC_cube1, g_AFC_cube2, t_AFC_cube1, t_AFC_cube2 = 0, 54, 0, 55, 85, 159
for p,phi in qp.log_progress(enumerate(phis_ext),every=1,size=(len(phis_ext))):
for g,gam in enumerate(gams_ext):
for t,the in enumerate(thes_ext):
if p > p_CI_cube1 and p < p_CI_cube2 and g > g_CI_cube1 and g < g_CI_cube2 and t > t_CI_cube1 and t < t_CI_cube2:
mask_CI[p,g,t] = 1
if p > p_AFC_cube1 and p < p_AFC_cube2 and g > g_AFC_cube1 and g < g_AFC_cube2 and t > t_AFC_cube1 and t < t_AFC_cube2:
mask_AFC[p,g,t] = 1
# to paint cubes on the verge I make zero the sides values
if p==0 or g == 0 or t == 0 or p == len(phis_ext)-1 or g == len(gams_ext)-1 or t == len(thes_ext)-1:
ones[p,g,t] = 0
mask_AFC[p,g,t] = 0
masks_only_one = [{'label' : 'All', 'cube': ones, 'states':[0,1,2,3,4,5,6,7], 'show' : False}]
masks = [
{'label' : 'All', 'cube': ones, 'states':[0,1,2,3,4,5,6,7], 'show' : False},
{'label' : 'AfterFC', 'cube': mask_AFC, 'states':[0,1,2,3,4,5,6,7], 'show' : True},
{'label' : 'CI', 'cube': mask_CI, 'states':[0,1,2,3,4,5,6,7], 'show' : True},
]
# {'label' : 'Mask CI', 'cube': mask_a, 'states':[0,1]}]
#masks = [{'label' : 'Mask', 'cube': ones, 'states':[0,1,2,3,4,5,6,7]}]
if create_adv_mask_files:
print('I created the regions pickle file')
pickle.dump(masks_only_one, open('advanced_masks_onlyONE.pickle', "wb" ))
pickle.dump(masks, open('advanced_masks.pickle', "wb" ))
else:
qp.warning("file advanced_masks NOT written, check the variable 'create_adv_mask_files' if you want to write region file")
"""
Explanation: HERE THE REGIONS FOR ADVANCED MASKS
End of explanation
"""
# dipo_min = dipo[phi_min, gam_min, the_min]
# dipo_ci = dipo[phi_ci, gam_ci, the_ci]
# difference_dipo = dipo_ci - dipo_min
# for i in range(8):
# permanent = difference_dipo[:,i,i]
# print('S_{} -> {}'.format(i,permanent))
# dipo_min[:,1,2],dipo_min[:,0,1],dipo_min[:,0,6],dipo_min[:,0,3],dipo_min[:,0,2],dipo_min[:,0,7]
"""
Explanation: Here I check the direction of the permanent dipoles.
End of explanation
"""
# npy = '/home/alessio/Desktop/NAC_CORRECTION_NOVEMBER2018/dataprova.npy'
# dictio = np.load(npy)[()]
# dictio.keys()
# NACS2 = dictio['nacCUBE']
# NACS2.shape
# def do3dplot(xs,ys,zss):
# 'with mesh function'
# fig = plt.figure(figsize=(9,9))
# ax = fig.add_subplot(111, projection='3d')
# X,Y = np.meshgrid(ys,xs)
# #ax.set_zlim(-1, 1)
# #ax.scatter(X, Y, zss)
# ax.plot_wireframe(X, Y, zss)
# fig.tight_layout()
# def visualize_this_thing(thing,state1,state2,cart,kind,dim):
# print(thing.shape)
# print('\nWARNING, this is not fully correct!!! Not SMO and not really what you think\n')
# along = ['Phi','Gam','The']
# print('NAC between state ({},{}) along {}\nDoing cut in {} with value ({:8.4f},{:8.4f})'.format(state1,
# state2,
# along[cart],
# kind,
# dimV[kind][dim],
# dims[kind][dim]))
# if kind == 'Phi':
# pot = thing[dim,:,:,state1,state2,0,cart]
# print('\nLooking at SMO with indexes [{},:,:,{},{},{}]'.format(dim, state1,state2,cart))
# do3dplot(gams,thes,pot)
# elif kind == 'Gam':
# print('\nLooking at SMO with indexes [:,{},:,{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,dim,:,state1,state2,0,cart]
# do3dplot(phis,thes,pot)
# elif kind == 'The':
# print('\nLooking at SMO with indexes [:,:,{},{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,:,dim,state1,state2,0,cart]
# do3dplot(phis,gams,pot)
# dimV = { 'Phi': phiV, 'Gam': gamV, 'The': theV } # real values
# dims = { 'Phi': phis, 'Gam': gams, 'The': thes } # for labels
# kinds = ['Phi','Gam','The']
# def fun_pot2D(kind,state1, state2, cart,dim):
# visualize_this_thing(NACS2, state1, state2, cart, kind, dim)
# def nested(kinds):
# dimensionV = dimV[kinds]
# interact(fun_pot2D, kind=fixed(kinds), state1 = widgets.IntSlider(min=0,max=7,step=1,value=0), state2 = widgets.IntSlider(min=0,max=7,step=1,value=0), cart = widgets.IntSlider(min=0,max=2,step=1,value=0), dim = widgets.IntSlider(min=0,max=(len(dimensionV)-1),step=1,value=0))
# interact(nested, kinds = ['Phi','Gam','The']);
"""
Explanation: temporary cells for the last sign correction
End of explanation
"""
# print(data.keys())
# data_new = data
# nacs_new = data['smoCube']
# NACS_new = nacs_new[15:-15,15:-15,30:-30]
# print(NACS_new.shape,nacs_new.shape)
# phi_ext_000_000 = 29
# phi_prev = 28
# phi_next = 30
# new_nacs = np.copy(nacs)
# for g in range(56):
# for t in range(160):
# not_correct = nacs[phi_ext_000_000,g,t]
# correct_prev = nacs[phi_prev ,g,t]
# correct_next = nacs[phi_next ,g,t]
# #if np.linalg.norm(not_correct) > 0.001:
# # print('{} {}\nThis {} \nMiddle {}\n After {}'.format(g,t,correct_prev[:,:,1], not_correct[:,:,1],correct_next[:,:,1]))
# for state1 in range(8):
# for state2 in range(8):
# for cart in range(3):
# value_prev = correct_prev[state1,state2,cart]
# value_this = not_correct [state1,state2,cart]
# value_next = correct_next[state1,state2,cart]
# average = (value_prev + value_next)/2
# if np.sign(average) == np.sign(value_this):
# new_value = value_this
# else:
# new_value = -value_this
# new_nacs[phi_ext_000_000,g,t,state1,state2,cart] = new_value
# def do3dplot(xs,ys,zss):
# 'with mesh function'
# fig = plt.figure(figsize=(9,9))
# ax = fig.add_subplot(111, projection='3d')
# X,Y = np.meshgrid(ys,xs)
# #ax.set_zlim(-1, 1)
# #ax.scatter(X, Y, zss)
# ax.plot_wireframe(X, Y, zss)
# fig.tight_layout()
# def visualize_this_thing(thing,state1,state2,cart,kind,dim):
# print(thing.shape)
# along = ['Phi','Gam','The']
# print('NAC between state ({},{}) along {}\nDoing cut in {} with value ({:8.4f},{:8.4f})'.format(state1,
# state2,
# along[cart],
# kind,
# dimV[kind][dim],
# dims[kind][dim]))
# if kind == 'Phi':
# pot = thing[dim,:,:,state1,state2,cart]
# print('\nLooking at SMO with indexes [{},:,:,{},{},{}]'.format(dim, state1,state2,cart))
# do3dplot(gams_ext,thes_ext,pot)
# elif kind == 'Gam':
# print('\nLooking at SMO with indexes [:,{},:,{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,dim,:,state1,state2,cart]
# do3dplot(phis_ext,thes_ext,pot)
# elif kind == 'The':
# print('\nLooking at SMO with indexes [:,:,{},{},{},{}]'.format(dim, state1,state2,cart))
# pot = thing[:,:,dim,state1,state2,cart]
# do3dplot(phis_ext,gams_ext,pot)
# dimV = { 'Phi': phiV_ext, 'Gam': gamV_ext, 'The': theV_ext } # real values
# dims = { 'Phi': phis_ext, 'Gam': gams_ext, 'The': thes_ext } # for labels
# kinds = ['Phi','Gam','The']
# def fun_pot2D(kind,state1, state2, cart,dim):
# visualize_this_thing(new_nacs, state1, state2, cart, kind, dim)
# def nested(kinds):
# dimensionV = dimV[kinds]
# interact(fun_pot2D, kind=fixed(kinds), state1 = widgets.IntSlider(min=0,max=7,step=1,value=0), state2 = widgets.IntSlider(min=0,max=7,step=1,value=0), cart = widgets.IntSlider(min=0,max=2,step=1,value=0), dim = widgets.IntSlider(min=0,max=(len(dimensionV)-1),step=1,value=0))
# interact(nested, kinds = ['Phi','Gam','The']);
"""
Explanation: sign flipper on extrapolated SMO cube
these cells were used to correct the NAC sign on the main plane... it was still flipping
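The commented sign-fixing loop above (flip an entry when its sign disagrees with the average of the two neighbouring phi planes) can be written in a vectorised form. fix_plane_sign is a hypothetical helper operating on whole planes, while the original loop works per state and cartesian component:

```python
import numpy as np

def fix_plane_sign(prev_plane, plane, next_plane):
    """Flip entries whose sign disagrees with the average of the
    two neighbouring planes."""
    average = 0.5 * (prev_plane + next_plane)
    flip = np.sign(average) != np.sign(plane)
    return np.where(flip, -plane, plane)
```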
End of explanation
"""
#name_data_file_new = 'newExtrapolated_allCorrectionSECOND.pickle'
#data_new.keys()
# data_new['smoCube'] = new_nacs
# pickle.dump( data_new, open( name_data_file_new, "wb" ) )
"""
Explanation: Things regarding writing down the Pickle file
End of explanation
"""
folder = '.'
a=0
saveFile = False
for state1 in range(8):
for state2 in range(state1):
for cart in range(3):
dictio = {}
cartL = ['X','Y','Z']
a += 1
print('Nacs ({},{}) along {} -> {:04}'.format(state1,state2,cartL[cart],a))
if saveFile:
filename = 'Nac{:04}.h5'.format(a)
dictio['NACS'] = nacs[:,:,:,state1,state2,cart]
dictio['state1'] = state1
dictio['state2'] = state2
dictio['cart'] = cart
qp.writeH5fileDict(filename, dictio)
"""
Explanation: these cells are used to visualize the dipoles/NACs in 3D space
End of explanation
"""
# lol2 is the new function to be added
phi_index = 16
theta_index = 81
state_index = 0
lol = pot[phi_index,:,theta_index,state_index]
num = 15
constant = 0.001
lol2 = np.zeros_like(lol)
for i in range(num):
lol2[i] = constant * (i-num)**2
#print('{} {} {}'.format(i,num,i-num))
fig = plt.figure()
plt.title('Gamma wall')
plt.xlabel('Gamma')
plt.ylabel('Energy')
plt.plot(lol)
plt.plot(lol2+lol);
newpot = np.zeros_like(pot)
for p in range(55):
for t in range(160):
for s in range(8):
newpot[p,:,t,s] = pot[p,:,t,s] + lol2
do_it = False
if do_it:
data_new = data
name_data_file_new = 'newExtrapolated_gammaExtrExag.pickle'
data_new.keys()
data_new['potCube'] = newpot
pickle.dump( data_new, open( name_data_file_new, "wb" ) )
else:
qp.warning('Here it is set to false, new file is NOT created')
fig = plt.figure()
phi = 20
the = 100
plt.plot(pot[phi,:,the,1])
plt.plot(newpot[phi,:,the,1]);
"""
Explanation: these cells add a quadratic wall to the potential at the extrapolated gamma values
End of explanation
"""
|
tritemio/multispot_paper | out_notebooks/usALEX-5samples-PR-raw-dir_ex_aa-fit-out-AexAem-17d.ipynb | mit | ph_sel_name = "AexAem"
data_id = "17d"
# ph_sel_name = "all-ph"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:38:07 2017
Duration: 10 seconds.
usALEX-5samples - Template
This notebook is executed as part of the 8-spot paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
"""
Explanation: Data folder:
End of explanation
"""
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Check that the folder exists:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
file_list
## Selection for POLIMI 2012-12-6 dataset
# file_list.pop(2)
# file_list = file_list[1:-2]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for P.E. 2012-12-6 dataset
# file_list.pop(1)
# file_list = file_list[:-1]
# display(file_list)
# labels = ['22d', '27d', '17d', '12d', '7d']
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'AexAem': Ph_sel(Aex='Aem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
"""
Explanation: List of data files in data_dir:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the alternation period, and the donor and acceptor excitation windows:
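In essence, these windows select photons by the phase of their timestamp within the alternation cycle. A plain-numpy sketch with synthetic timestamps (window values copied from the cell above; note that D_ON wraps past the period boundary because its start is larger than its stop):

```python
import numpy as np

alex_period = 4000
D_ON, A_ON = (2850, 580), (900, 2580)   # the windows defined above

rng = np.random.default_rng(1)
ph_times = np.sort(rng.integers(0, 10_000_000, size=100_000))
phase = ph_times % alex_period          # position within the alternation cycle

d_mask = (phase >= D_ON[0]) | (phase < D_ON[1])   # wrap-around window
a_mask = (phase >= A_ON[0]) & (phase < A_ON[1])
print(d_mask.sum(), a_mask.sum())
```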
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurement duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using automatic threshold:
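Conceptually, bg.exp_fit estimates a Poisson background rate from the inter-photon delays longer than a threshold (the tail_min_us parameter). The essence in plain numpy, on synthetic data — this is not the FRETBursts implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2000.0                      # background counts per second (made up)
delays = rng.exponential(1.0 / true_rate, size=50_000)

tail_min = 0.2e-3                       # only delays longer than this are "pure" background
tail = delays[delays > tail_min]
# MLE for a left-truncated exponential: rate = 1 / mean(excess delay)
fitted_rate = 1.0 / (tail.mean() - tail_min)
print("fitted background rate: %.0f counts/s" % fitted_rate)
```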
End of explanation
"""
from mpl_toolkits.axes_grid1 import AxesGrid
import lmfit
print('lmfit version:', lmfit.__version__)
assert d.dir_ex == 0
assert d.leakage == 0
d.burst_search(m=10, F=6, ph_sel=ph_sel)
print(d.ph_sel, d.num_bursts)
ds_sa = d.select_bursts(select_bursts.naa, th1=30)
ds_sa.num_bursts
"""
Explanation: Burst search and selection
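Under the hood, d.burst_search(m=10, F=6) flags photons whose local rate (m consecutive photons) exceeds F times the background rate. A plain-numpy sketch of that sliding-window idea, on synthetic data — not the FRETBursts implementation:

```python
import numpy as np

def photons_in_bursts(t, m=10, F=6.0, bg_rate=100.0):
    """Mark photons whose local rate (m photons per window) exceeds F * bg_rate."""
    width = t[m - 1:] - t[:len(t) - m + 1]   # time spanned by each m-photon window
    hot = width < m / (F * bg_rate)          # True where the window is "bursty"
    mask = np.zeros(len(t), dtype=bool)
    for i in np.flatnonzero(hot):
        mask[i:i + m] = True                 # photons i .. i+m-1 belong to a burst
    return mask

# synthetic data: sparse background plus one dense burst near t = 0.505 s
bg = np.arange(0.0, 1.0, 0.01)
burst = 0.505 + np.arange(50) * 1e-5
t = np.sort(np.concatenate([bg, burst]))
mask = photons_in_bursts(t)
```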
End of explanation
"""
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
ds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)
ds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)
ds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)
ds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)
ds_st = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
ds_sas.num_bursts
dx = ds_sas0
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas2
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
dx = ds_sas3
size = dx.na[0] + dx.nd[0]
s_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)
s_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])
plot(s_ax, s_hist, '-o', alpha=0.5)
plt.title('(nd + na) for A-only population using different S cutoff');
dx = ds_sa
alex_jointplot(dx);
dplot(ds_sa, hist_S)
"""
Explanation: Preliminary selection and plots
End of explanation
"""
dx = ds_sa
bin_width = 0.03
bandwidth = 0.03
bins = np.r_[-0.2 : 1 : bin_width]
x_kde = np.arange(bins.min(), bins.max(), 0.0002)
## Weights
weights = None
## Histogram fit
fitter_g = mfit.MultiFitter(dx.S)
fitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_g.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_hist_orig = fitter_g.hist_pdf
S_2peaks = fitter_g.params.loc[0, 'p1_center']
dir_ex_S2p = S_2peaks/(1 - S_2peaks)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p)
## KDE
fitter_g.calc_kde(bandwidth=bandwidth)
fitter_g.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak = fitter_g.kde_max_pos[0]
dir_ex_S_kde = S_peak/(1 - S_peak)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=True)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak*100));
## 2-Asym-Gaussian
fitter_ag = mfit.MultiFitter(dx.S)
fitter_ag.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_ag.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.1, p2_center=0.4))
#print(fitter_ag.fit_obj[0].model.fit_report())
S_2peaks_a = fitter_ag.params.loc[0, 'p1_center']
dir_ex_S2pa = S_2peaks_a/(1 - S_2peaks_a)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2pa)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_g, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))
mfit.plot_mfit(fitter_ag, ax=ax[1])
ax[1].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_a*100));
"""
Explanation: A-direct excitation fitting
To extract the A-direct excitation coefficient we need to fit the
S values for the A-only population.
The S value for the A-only population is fitted with different methods:
- Histogram fit with 2 Gaussians or with 2 asymmetric Gaussians
(an asymmetric Gaussian has right- and left-side of the peak
decreasing according to different sigmas).
- KDE maximum
In the following we apply these methods using different selection
or weighting schemes to reduce the amount of the FRET population and make
fitting of the A-only population easier.
Even selection
Here A-only and FRET population are evenly selected.
End of explanation
"""
dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)
fitter = bext.bursts_fitter(dx, 'S')
fitter.fit_histogram(model = mfit.factory_gaussian(center=0.1))
S_1peaks_th = fitter.params.loc[0, 'center']
dir_ex_S1p = S_1peaks_th/(1 - S_1peaks_th)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S1p)
mfit.plot_mfit(fitter)
plt.xlim(-0.1, 0.6)
"""
Explanation: Zero threshold on nd
Select bursts with:
$$n_d \le 0$$.
End of explanation
"""
dx = ds_sa
## Weights
weights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'], fitter_g.params.loc[0, 'p2_sigma'])
weights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0
## Histogram fit
fitter_w1 = mfit.MultiFitter(dx.S)
fitter_w1.weights = [weights]
fitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w1.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w1 = fitter_w1.params.loc[0, 'p1_center']
dir_ex_S2p_w1 = S_2peaks_w1/(1 - S_2peaks_w1)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w1)
## KDE
fitter_w1.calc_kde(bandwidth=bandwidth)
fitter_w1.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w1 = fitter_w1.kde_max_pos[0]
dir_ex_S_kde_w1 = S_peak_w1/(1 - S_peak_w1)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w1)
def plot_weights(x, weights, ax):
ax2 = ax.twinx()
x_sort = x.argsort()
ax2.plot(x[x_sort], weights[x_sort], color='k', lw=4, alpha=0.4)
ax2.set_ylabel('Weights');
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w1, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w1*100))
mfit.plot_mfit(fitter_w1, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w1*100));
"""
Explanation: Selection 1
Bursts are weighted using $w = f(S)$, where the function $f(S)$ is a
Gaussian fitted to the $S$ histogram of the FRET population.
End of explanation
"""
## Weights
sizes = dx.nd[0] + dx.na[0] #- dir_ex_S_kde_w3*dx.naa[0]
weights = dx.naa[0] - abs(sizes)
weights[weights < 0] = 0
## Histogram
fitter_w4 = mfit.MultiFitter(dx.S)
fitter_w4.weights = [weights]
fitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w4.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w4 = fitter_w4.params.loc[0, 'p1_center']
dir_ex_S2p_w4 = S_2peaks_w4/(1 - S_2peaks_w4)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w4)
## KDE
fitter_w4.calc_kde(bandwidth=bandwidth)
fitter_w4.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w4 = fitter_w4.kde_max_pos[0]
dir_ex_S_kde_w4 = S_peak_w4/(1 - S_peak_w4)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w4)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(fitter_w4, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[0])
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w4*100))
mfit.plot_mfit(fitter_w4, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
#plot_weights(dx.S[0], weights, ax=ax[1])
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w4*100));
"""
Explanation: Selection 2
Bursts are here weighted using weights $w$:
$$w = n_{aa} - |n_a + n_d|$$
End of explanation
"""
mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30
ds_saw = d.select_bursts_mask_apply([mask])
print(ds_saw.num_bursts)
dx = ds_saw
## Weights
weights = None
## 2-Gaussians
fitter_w5 = mfit.MultiFitter(dx.S)
fitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))
S_2peaks_w5 = fitter_w5.params.loc[0, 'p1_center']
dir_ex_S2p_w5 = S_2peaks_w5/(1 - S_2peaks_w5)
print('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w5)
## KDE
fitter_w5.calc_kde(bandwidth=bandwidth)
fitter_w5.find_kde_max(x_kde, xmin=0, xmax=0.15)
S_peak_w5 = fitter_w5.kde_max_pos[0]
S_2peaks_w5_fiterr = fitter_w5.fit_res[0].params['p1_center'].stderr
dir_ex_S_kde_w5 = S_peak_w5/(1 - S_peak_w5)
print('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w5)
## 2-Asym-Gaussians
fitter_w5a = mfit.MultiFitter(dx.S)
fitter_w5a.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])
fitter_w5a.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.05, p2_center=0.3))
S_2peaks_w5a = fitter_w5a.params.loc[0, 'p1_center']
dir_ex_S2p_w5a = S_2peaks_w5a/(1 - S_2peaks_w5a)
#print(fitter_w5a.fit_obj[0].model.fit_report(min_correl=0.5))
print('Fitted direct excitation (na/naa) [2-Asym-Gauss]:', dir_ex_S2p_w5a)
fig, ax = plt.subplots(1, 3, figsize=(19, 4.5))
mfit.plot_mfit(fitter_w5, ax=ax[0])
mfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)
ax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5*100))
mfit.plot_mfit(fitter_w5, ax=ax[1], plot_model=False, plot_kde=True)
mfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)
ax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w5*100));
mfit.plot_mfit(fitter_w5a, ax=ax[2])
mfit.plot_mfit(fitter_g, ax=ax[2], plot_model=False, plot_kde=False)
ax[2].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5a*100));
"""
Explanation: Selection 3
Bursts are here selected according to:
$$n_{aa} - |n_a + n_d| > 30$$
End of explanation
"""
sample = data_id
n_bursts_aa = ds_sas.num_bursts[0]
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '
'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '
'S_2peaks_w5 S_2peaks_w5_fiterr\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-dir_ex_aa-fit-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
|
brockk/clintrials | tutorials/EffTox.ipynb | gpl-3.0 | import numpy as np
from scipy.stats import norm
from clintrials.dosefinding.efftox import EffTox, LpNormCurve, scale_doses
real_doses = [7.5, 15, 30, 45]
dose_indices = range(1, len(real_doses)+1)
trial_size = 30
cohort_size = 3
first_dose = 3
prior_tox_probs = (0.025, 0.05, 0.1, 0.25)
prior_eff_probs = (0.2, 0.3, 0.5, 0.6)
tox_cutoff = 0.40
eff_cutoff = 0.45
tox_certainty = 0.05
eff_certainty = 0.05 # The original Matchpoint implementation. Later changed to 0.03
"""
Explanation: EffTox - General Usage
This IPython notebook illustrates the use of the EffTox class in the clintrials package.
The design was first published:
Thall & Cook in 2004, Dose-Finding Based on Efficacy-Toxicity Trade-Offs, Biometrics vol 60, issue 3.
Follow-up publications were:
Thall, Cook & Estey, 2006, Adaptive dose selection using efficacy-toxicity trade-offs: illustrations and practical considerations
Cook, 2006, Efficacy-Toxicity trade-offs based on L-p norms Technical Report UTMDABTR-003-06 Bivariate binary model.
End of explanation
"""
mu_t_mean, mu_t_sd = -5.4317, 2.7643
beta_t_mean, beta_t_sd = 3.1761, 2.7703
mu_e_mean, mu_e_sd = -0.8442, 1.9786
beta_e_1_mean, beta_e_1_sd = 1.9857, 1.9820
beta_e_2_mean, beta_e_2_sd = 0, 0.2
psi_mean, psi_sd = 0, 1
efftox_priors = [
norm(loc=mu_t_mean, scale=mu_t_sd),
norm(loc=beta_t_mean, scale=beta_t_sd),
norm(loc=mu_e_mean, scale=mu_e_sd),
norm(loc=beta_e_1_mean, scale=beta_e_1_sd),
norm(loc=beta_e_2_mean, scale=beta_e_2_sd),
norm(loc=psi_mean, scale=psi_sd),
]
"""
Explanation: The priors below correspond to ESS=1.3 and were calculated using v4.0.12 of the EffTox software, available at https://biostatistics.mdanderson.org/softwaredownload/SingleSoftware.aspx?Software_Id=2
Ideally, clintrials should include the algorithm to calculate priors but I have not written it yet. KB
End of explanation
"""
hinge_points = [(0.4, 0), (1, 0.7), (0.5, 0.4)]
metric = LpNormCurve(hinge_points[0][0], hinge_points[1][1], hinge_points[2][0], hinge_points[2][1])
"""
Explanation: The metric is the object that calculates the utility of a $(\pi_E, \pi_T)$ pair. The hinge points are the three elicited points of equal utility used to identify the curve, as explained in Thall & Cook, 2004.
End of explanation
"""
et = EffTox(real_doses, efftox_priors, tox_cutoff, eff_cutoff, tox_certainty, eff_certainty, metric, trial_size,
first_dose)
"""
Explanation: Finally, the EffTox instance is created:
End of explanation
"""
cases = [(3, 1, 1), (3, 0, 1), (3, 1, 0)]
"""
Explanation: The EffTox object does all the work. For instance, consider the scenario where three patients are given dose-level 3:
The first patient has both toxicity and efficacy. This is represented by the tuple (3, 1, 1)
The second has efficacy without toxicity. This is represented by the tuple (3, 0, 1)
The third has toxicity without efficacy. This is represented by the tuple (3, 1, 0)
Putting that information into a list:
End of explanation
"""
np.random.seed(123)
et.reset()
next_dose = et.update(cases, n=10**6)
next_dose
"""
Explanation: and updating the model with 1,000,000 points in the posterior integral:
End of explanation
"""
et.prob_eff
"""
Explanation: We can take a look at the posterior beliefs on the efficacy probabilities at each dose:
End of explanation
"""
et.prob_tox
"""
Explanation: and the toxicity probabilities:
End of explanation
"""
et.prob_acc_eff
et.prob_acc_tox
"""
Explanation: and the probabilities that each dose satisfies the criteria for admissibility, i.e. it is probably efficacious and probably tolerable:
End of explanation
"""
et.admissable_set()
"""
Explanation: We see that all doses admissable.
For confirmation:
End of explanation
"""
cases = [
(3, 0, 0), (3, 0, 1), (3, 1, 0),
(4, 1, 1), (4, 1, 1), (4, 0, 0),
(3, 1, 0), (3, 1, 1), (3, 0, 1),
(2, 0, 0), (2, 0, 0), (2, 0, 1),
]
et.update(cases, n=10**6)
"""
Explanation: Of course, there are only three observations in the trial. Not many at all. Let's add some data for twelve more patients.
The EffTox class remembers the cases it has already seen and adds to those when you call update.
End of explanation
"""
et.utility
"""
Explanation: Confirmation that dose-level 3 is the dose with highest utility:
End of explanation
"""
et.size()
"""
Explanation: The trial has more data in it now:
End of explanation
"""
et.tabulate()
"""
Explanation: Notice that the sample size is 15 and not 12? This is because calls to the update method are cumulative, i.e. the EffTox class also includes the original three outcomes. To make an instance of EffTox forget what it knows and start afresh, use et.reset()
We have now explored more doses:
End of explanation
"""
et.prob_eff
"""
Explanation: so we have a bit more faith in posterior beliefs:
End of explanation
"""
import ggplot
%matplotlib inline
et.plot_posterior_utility_density()
"""
Explanation: If ggplot is available, we can look at estimates of the posterior densities of utility, efficacy probability and toxicity probability.
For instance:
End of explanation
"""
et.posterior_params(n=10**6)
"""
Explanation: We see that dose-level 3 has the highest utility, quite comfortably.
Posterior mean estimates of the parameter vector $\theta$ are available:
End of explanation
"""
|
karlstroetmann/Algorithms | Python/Chapter-07/ListMap.ipynb | gpl-2.0 | class ListNode:
def __init__(self, key, value):
self.mKey = key
self.mValue = value
self.mNextPtr = None
"""
Explanation: Implementing Maps as Lists of Key-Value-Pairs
The class ListNode implements a node of a <em style="color:blue">linked list</em> of
key-value pairs. Every node has three member variables:
- mKey stores the key,
- mValue stores the value associated with this key, and
- mNextPtr stores a reference to the next node. If there is no next node, then
mNextPtr is None.
Objects of class ListNode are used to represent linked lists.
The constructor of the class ListNode creates a single node that stores the given key and its associated value.
End of explanation
"""
def find(self, key):
ptr = self
while True:
if ptr.mKey == key:
return ptr.mValue
if ptr.mNextPtr != None:
ptr = ptr.mNextPtr
else:
return
ListNode.find = find
del find
"""
Explanation: Given a key, the method find traverses the given list until it finds a node that stores the given key. In this case, it returns the associated value. Otherwise, None is returned.
End of explanation
"""
def insert(self, key, value):
while True:
if self.mKey == key:
self.mValue = value
return False
elif self.mNextPtr != None:
self = self.mNextPtr
else:
self.mNextPtr = ListNode(key, value)
return True
ListNode.insert = insert
del insert
"""
Explanation: Given the first node of a linked list $L$, the function $L.\texttt{insert}(k, v)$ inserts the key-value pair $(k, v)$ into the list $L$. If there is already a key value pair in $L$ that has the same key, then the old value is overwritten. It returns a boolean that is true if a new node has been allocated.
End of explanation
"""
def delete(self, key):
previous = None
ptr = self
while True:
if ptr.mKey == key:
if previous == None:
return ptr.mNextPtr, True
else:
previous.mNextPtr = ptr.mNextPtr
return self, True
elif ptr.mNextPtr != None:
previous = ptr
ptr = ptr.mNextPtr
else:
return self, False
ListNode.delete = delete
del delete
"""
Explanation: Given the first node of a linked list $L$, the function $L.\texttt{delete}(k)$ deletes the first key-value pair of the form $(k, v)$ from the list $L$. If there is no such pair, the list $L$ is unchanged. It returns a pair such that:
- The first component of this pair is a pointer to the changed list.
If the list becomes empty, the first component is None.
- The second component is a Boolean that is True if a node has been deleted.
End of explanation
"""
def toString(self):
if self.mNextPtr != None:
return f'{self.mKey} ↦ {self.mValue}, ' + self.mNextPtr.__str__()
else:
return f'{self.mKey} ↦ {self.mValue}'
ListNode.__str__ = toString
del toString
"""
Explanation: Given the first node of a linked list $L$, the function $L.\texttt{toString}()$ returns a string representing $L$.
End of explanation
"""
class ListMap:
def __init__(self):
self.mPtr = None
def find(self, key):
if self.mPtr != None:
return self.mPtr.find(key)
def insert(self, key, value):
if self.mPtr != None:
return self.mPtr.insert(key, value)
else:
self.mPtr = ListNode(key, value)
return True
def delete(self, key):
if self.mPtr != None:
self.mPtr, flag = self.mPtr.delete(key)
return flag
return False
def __iter__(self):
return MapIterator(self.mPtr)
def __str__(self):
if self.mPtr != None:
return '{' + self.mPtr.__str__() + '}'
else:
return '{}'
def __repr__(self):
return self.__str__()
"""
Explanation: The class ListMap implements a map using a linked list of key-value pairs. Basically, it is a wrapper for the class
ListNode. Furthermore, an object of type ListMap is iterable.
End of explanation
"""
class MapIterator:
def __init__(self, ptr):
self.mPtr = ptr
def __next__(self):
if self.mPtr == None:
raise StopIteration
key = self.mPtr.mKey
value = self.mPtr.mValue
self.mPtr = self.mPtr.mNextPtr
return key, value
def main(n = 100):
S = ListMap()
for i in range(2, n + 1):
S.insert(i, True)
for i in range(2, n // 2 + 1):
for j in range(i, n // i + 1):
S.delete(i * j)
print([p for p, _ in S]) # iterates over ListMap
print(S.find(83))
print(S.find(99))
main()
"""
Explanation: A MapIterator is an iterator that iterates over the elements of a linked list. It maintains a pointer mPtr that points to the next element.
It is implemented via the function __next__. This function either returns the next key-vale pair or, if there are no more key-value pairs left, then it raises a
StopIteration exception.
If the __iter__ method of a class $C$ returns an iterator, then we can use a
for-loop to iterate over the elements contained in class $C$.
End of explanation
"""
|
phobson/paramnormal | docs/tutorial/overview.ipynb | mit | %matplotlib inline
import numpy as np
from scipy import stats
"""
Explanation: Why paramnormal ?
The currect state of the ecosystem
Both in numpy and scipy.stats and in the field of statistics in general, you can refer to the location (loc) and scale (scale) parameters of a distribution. Roughly speaking, they refer to the position and spread of the distribution, respectively. For normal distribtions loc refers the mean (symbolized as $\mu$) and scale refers to the standard deviation (a.k.a. $\sigma$).
The main problem that paramnormal is trying to solve is that sometimes, creating a probability distribution using these parameters (and others) in scipy.stats can be confusing. Also the parameters in numpy.random can be inconsistently named (admittedly, just a minor inconvenience).
End of explanation
"""
np.random.seed(0)
mu = 0
sigma = 1
N = 3
np.random.lognormal(mean=mu, sigma=sigma, size=N)
"""
Explanation: Consider the lognormal distribution.
In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable $X$ is log-normally distributed, then $Y = \ln(X)$ has a normal distribution. Likewise, if $Y$ has a normal distribution, then $X = \exp(Y)$ has a log-normal distribution. (from wikipedia)
In numpy, you specify the "mean" and "sigma" of the underlying normal distribution. A lot of scientific programmers know what that would mean. But mean and standard_deviation, loc and scale, or mu and sigma would have been better choices.
Still, generating random numbers is pretty straight-forward:
End of explanation
"""
np.random.seed(0)
stats.lognorm(sigma, loc=0, scale=np.exp(mu)).rvs(size=N)
"""
Explanation: In scipy, you need an additional shape parameter (s), plus the usual loc and scale. Aside from the mystery behind what s might be, that seems straight-forward enough.
Except it's not.
That shape parameter is actually the standard deviation ($\sigma$) of the underlying normal distribution. The scale should be set to the exponentiated location parameter of the raw distribution ($e ^ \mu$). Finally, loc actually refers to a sort of offset that can be applied to the entire distribution. In other words, you can translate the distribution up and down to e.g., negative values.
In my field (civil/environmental engineering) variables that are often assumed to be lognormally distributed (e.g., pollutant concentration) can never have values less than or equal to zero. So in that sense, the loc parameter in scipy's lognormal distribution nearly always should be set to zero.
With that out of the way, recreating the three numbers above in scipy is done as follows:
End of explanation
"""
import paramnormal
np.random.seed(0)
paramnormal.lognormal(mu=mu, sigma=sigma).rvs(size=N)
"""
Explanation: A new challenger appears
paramnormal really just hopes to take away some of this friction. Consider the following:
End of explanation
"""
np.random.seed(0)
paramnormal.lognormal(μ=mu, σ=sigma).rvs(size=N)
"""
Explanation: Hopefully that's much more readable and straight-forward.
Greek-letter support
Tom Augspurger added a lovely little decorator to let you use greek letters in the function signature.
End of explanation
"""
for d in paramnormal.dist.__all__:
print(d)
"""
Explanation: Other distributions
As of now, we provide a convenient interface for the following distributions in scipy.stats:
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/competitions/2016/td2a_eco_competition_comparer_classifieurs.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.datasource import download_data
download_data("ensae_competition_2016.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/competitions/2016_ENSAE_2A/")
%matplotlib inline
"""
Explanation: 2A.ml - 2016 - Competition - Preparing the data
A competition was proposed as part of the Python for a Data Scientist course at ENSAE. This notebook makes it easier to get started with the data and shows how to compare several classifiers using the ROC curve.
End of explanation
"""
import pandas as p
import numpy as np
df = p.read_csv('./ensae_competition_train.txt', header=[0,1], sep="\t", index_col=0)
#### Gender dummies
df['X2'] = df['X2'].applymap(str)
gender_dummies = p.get_dummies(df['X2'] )
### education dummies
df['X3'] = df['X3'].applymap(str)
educ_dummies = p.get_dummies(df['X3'] )
#### marriage dummies
df['X4'] = df['X4'].applymap(str)
mariage_dummies = p.get_dummies(df['X4'] )
### We also drop the table's column MultiIndex
df.columns = df.columns.droplevel(0)
#### then merge the 3 tables together
data = df.join(gender_dummies).join(educ_dummies).join(mariage_dummies)
data.rename(columns = {'default payment next month' : "Y"}, inplace = True)
data = data.drop(['SEX','EDUCATION','MARRIAGE'],1)
# Rebalance classes: keep all defaults (Y == 1) and an equal-size random sample of non-defaults
data_resample = p.concat([data[data['Y']==1], data[data['Y']==0].sample(len(data[data['Y']==1]))])
data.head(n=2)
# The unbalanced alternatives are kept commented out; the balanced
# resample built above is what is actually used below.
# Y = data['Y']
Y = data_resample['Y']
# X = data.drop('Y', 1)
# X = data[["SEX_1", "AGE", "MARRIAGE_0", 'PAY_0']]
X = data_resample.drop('Y',1)
X.columns
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)
"""
Explanation: Data
End of explanation
"""
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import SGDClassifier, Perceptron
#type_classifier = LogisticRegression
#type_classifier = svm.SVC
type_classifier = GradientBoostingClassifier
#type_classifier = RandomForestClassifier
#type_classifier = Perceptron
clf = type_classifier()
#clf = SGDClassifier(loss="hinge", penalty="l2")
clf = clf.fit(X_train, Y_train.ravel())
# Confusion matrix
%matplotlib inline
from sklearn.metrics import confusion_matrix
for x,y in [ (X_train, Y_train), (X_test, Y_test) ]:
yp = clf.predict(x)
cm = confusion_matrix(y.ravel(), yp.ravel())
print(cm.transpose())
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
plt.matshow(cm.transpose())
plt.title('Confusion matrix sur Test')
plt.colorbar()
plt.ylabel('Predicted label')
plt.xlabel('True label')
(cm.transpose()[0,0]+cm.transpose()[1,1])/ (cm[0].sum()+cm[1].sum())
"""
Explanation: Choosing the classifier
End of explanation
"""
from sklearn.metrics import roc_curve, auc
probas = clf.predict_proba(X_test)
probas
rep = [ ]
yt = Y_test.ravel()
for i in range(probas.shape[0]):
p0,p1 = probas[i,:]
exp = yt[i]
if p0 > p1 :
if exp == 0 :
            # correct answer, true positive (tp)
rep.append ( (1, p0) )
else :
            # wrong answer, false positive (fp)
rep.append( (0, p0) )
else :
if exp == 0 :
            # wrong answer, false negative (fn)
rep.append ( (0, p1) )
else :
            # correct answer, true negative (tn)
rep.append( (1, p1) )
mat_rep = np.array(rep)
print("AUC : Taux de bonnes réponses" , sum(mat_rep[:,0]) / len(mat_rep[:,0]))
"""
Explanation: Computing the AUC criterion
End of explanation
"""
fpr, tpr, thresholds = roc_curve(mat_rep[:,0], mat_rep[:, 1])
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate (or precision)')
plt.title('ROC')
plt.legend(loc="lower right")
"""
Explanation: All the criteria are detailed there. Beware of the orientation of the confusion matrix: it varies from one article to another.
ROC curve
End of explanation
"""
tp=0
fp=0
fn=0
tn=0
for i in range(len(probas[:,0])):
if (probas[i,0] >= 0.5 and yt[i] == 0):
tp+=1
elif (probas[i,0] >= 0.5 and yt[i] == 1):
fp+=1
elif (probas[i,0] <= 0.5 and yt[i] == 0):
fn+=1
else:
tn+=1
print("On retrouve la matrice de confusion :\n", "TP : ", tp, "FP : ", fp, "\n",
" FN : ", fn, "TN : ", tn)
print("Precision : TP / (TP + FP) = ", tp/(tp+fp))
print("Recall : TP / (TP + FN) = ", tp/(tp+fn))
precision = tp/(tp+fp)
recall = tp/(tp+fn)
print("F1 Score : T2 * P * R / (P + R) = ", 2 * precision * recall / (precision + recall) )
print("False Positive rate : FP / (FP + FN) = ", fp/(fp+tn))
from sklearn.metrics import precision_recall_curve
precision, recall, _ = precision_recall_curve(Y_test.ravel(), yp.ravel())
lw = 2
plt.plot(recall, precision, lw=lw, color='navy', label='Precision-Recall curve')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision-Recall')
plt.legend(loc="lower left")
"""
Explanation: At the top right, TPR and FPR are both 1 (it suffices to always predict positive = no default = Y_hat = 0); at the bottom left, TPR and FPR are both 0, because it suffices to always predict the negative situation (or the default, Y_hat = 1).
Another commonly tracked metric compares Precision with Recall (= TPR). It is much the same trade-off. It should remind you of the trade-off between the type I error rate and the power of a statistical test.
Precision-Recall, F1 Score
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/sandbox-2/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50 km (Equator)-100 km or 0.1-0.5 degrees, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (scheme and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
metpy/MetPy | dev/_downloads/8532b75251585046a16f04a9afaef079/Advanced_Sounding.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, SkewT
from metpy.units import units
"""
Explanation: Advanced Sounding
Plot a sounding using MetPy with more advanced features.
Beyond just plotting data, this uses calculations from metpy.calc to find the lifted
condensation level (LCL) and the profile of a surface-based parcel. The area between the
ambient profile and the parcel profile is colored as well.
End of explanation
"""
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'), how='all'
).reset_index(drop=True)
"""
Explanation: Upper air data can be obtained using the siphon package, but for this example we will use
some of MetPy's sample data.
End of explanation
"""
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
"""
Explanation: We will pull the data out of the example dataset into individual variables and
assign units.
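For reference, the conversion that `wind_components` performs follows the meteorological convention: the direction is the bearing the wind blows *from*, so the eastward (`u`) and northward (`v`) components each pick up a minus sign. Below is a plain-NumPy sketch of that math (not MetPy's implementation, and without the unit handling that MetPy provides):

```python
import numpy as np

def wind_components(speed, direction_deg):
    """u (eastward) and v (northward) components from wind speed and
    the direction the wind blows FROM, in degrees."""
    rad = np.deg2rad(direction_deg)
    return -speed * np.sin(rad), -speed * np.cos(rad)

# A 10-knot southerly (blowing from the south, toward the north):
u, v = wind_components(10.0, 180.0)  # u ~ 0, v ~ 10
```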
End of explanation
"""
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 115, 100)
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot.
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Calculate LCL height and plot as black dot. Because `p`'s first value is
# ~1000 mb and its last value is ~250 mb, the `0` index is selected for
# `p`, `T`, and `Td` to lift the parcel from the surface. If `p` was inverted,
# i.e. start from low value, 250 mb, to a high value, 1000 mb, the `-1` index
# should be selected.
lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])
skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')
# Calculate full parcel profile and add to plot as black line
prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
skew.plot(p, prof, 'k', linewidth=2)
# Shade areas of CAPE and CIN
skew.shade_cin(p, T, prof, Td)
skew.shade_cape(p, T, prof)
# An example of a slanted line at constant T -- in this case the 0
# isotherm
skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Show the plot
plt.show()
"""
Explanation: Create a new figure. The dimensions here give a good aspect ratio.
End of explanation
"""
|
wmvanvliet/neuroscience_tutorials | eeg-bci/2. Frequency analysis.ipynb | bsd-2-clause | %pylab inline
"""
Explanation: 2. Frequency analysis
This tutorial covers basic frequency analysis of the EEG signal. The recording that is used is of a subject performing the SSVEP (steady-state visual evoked potential) paradigm. In simplest terms: when we look at a light that is flashing on and off at a certain frequency $f$, the neurons in our visual cortex will resonate at the same frequency (plus the harmonics $1f$, $2f$, $3f$, ...). This phenomenon can be used to create a brain-computer interface. Different options are presented on the screen, each flashing at a different frequency. By determining the frequency at which the visual cortex is resonating, the option that is attended can be distinguished from the rest.
In the current case, the subject is a witness in a cluedo murder mystery. He will try to communicate the murderer, weapon and location. The options available for selection, along with their corresponding frequency are as follows:
Murderer:
Colonel Mustard (8Hz)
Miss Scarlett (10Hz)
Professor Plum (12Hz)
Reverend Green (15Hz)
Weapon:
Axe (8Hz)
Poison (10Hz)
Revolver (12Hz)
Rope (15Hz)
Location:
Billiard Room (8Hz)
Dining Room (10Hz)
Kitchen (12Hz)
Library (15Hz)
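The harmonic content mentioned above is easy to see even in the stimulus itself: an on/off flicker is a square wave, not a pure sine, so its spectrum contains energy at the base frequency plus harmonics (for a 50% duty cycle, the odd ones). A quick NumPy sketch for an 8 Hz flicker sampled at 128 Hz (all variable names here are my own, not part of the tutorial data):

```python
import numpy as np

fs = 128.0                                      # same sample rate as the EEG
t = np.arange(int(4 * fs)) / fs                 # 4 seconds of samples
flicker = ((t * 8) % 1.0 < 0.5).astype(float)   # 8 Hz on/off square wave
spectrum = np.abs(np.fft.rfft(flicker - flicker.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
base_freq = freqs[np.argmax(spectrum)]          # strongest component: 8 Hz
third_harmonic = spectrum[freqs == 24.0][0]     # energy at 3f as well
```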
End of explanation
"""
import scipy.io
m = scipy.io.loadmat('data/tutorial2-01.mat')
print(m.keys())
"""
Explanation: The data is already located on the virtual server you are talking to right now. The data used in this tutorial is EEG data that has been bandpass filtered with a 3rd order Butterworth filter with a passband of 1.0-40 Hz. When performing analysis on other data, you might have to filter it yourself. Bandpass filtering is covered in the 3rd tutorial.
Using scipy.io.loadmat to load the Matlab file and printing the variables stored within:
End of explanation
"""
murder_EEG = m['Murder_filteredEEG']
weapon_EEG = m['Weapon_filteredEEG']
room_EEG = m['Room_filteredEEG']
print('Shape of murder_EEG:', murder_EEG.shape)
print('Shape of weapon_EEG:', weapon_EEG.shape)
print('Shape of room_EEG:', room_EEG.shape)
"""
Explanation: The three variables of interest are Murder_filteredEEG, Weapon_filteredEEG and Room_filteredEEG:
End of explanation
"""
nchannels, nsamples = murder_EEG.shape
sample_rate = 128.0
print('Duration of recordings:', (nsamples / sample_rate), 'seconds')
"""
Explanation: The data is recorded using the Emotiv EPOC device. It has 14 channels and a sample rate of 128Hz:
End of explanation
"""
from matplotlib.mlab import psd
psd?
"""
Explanation: The subject was looking for 20 seconds at one of the four selection options. If we plot the PSD (power spectral density) of the channels on the visual cortex, there should be a distinctive peak at the frequency of the attended option.
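To make "a distinctive peak" concrete before touching the real EEG: even a simple periodogram (squared FFT magnitude) of a noisy synthetic oscillation shows a clear peak at the driving frequency. This sketch uses plain NumPy rather than the matplotlib helper used below, and the synthetic signal is an assumption for illustration only:

```python
import numpy as np

fs = 128.0
t = np.arange(int(20 * fs)) / fs                       # 20 s, as in the recording
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
power = np.abs(np.fft.rfft(sig)) ** 2 / (fs * t.size)  # periodogram
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak_freq = freqs[np.argmax(power[1:]) + 1]            # skip the DC bin
```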
The algorithm to calculate a PSD is provided by the matplotlib toolbox as matplotlib.mlab.psd:
End of explanation
"""
# These channels are roughly in the area covering the visual cortex
channels_of_interest = [0, 1, 2, 11, 12, 13]
# Create a new figure
figure(figsize=(10, 4.5))
# Draw the PSD of each channel of interest
for i, ch in enumerate(channels_of_interest):
    # The main plot is divided into 6 subplots: one for each channel
    # They are laid out in a grid: 2 rows and 3 columns
    subplot(2, 3, i+1)
    # Calculate the PSD (using the psd function imported from matplotlib.mlab above)
    (murder_PSD, freqs) = psd(murder_EEG[ch,:], NFFT=nsamples, Fs=sample_rate)
# Plot the PSD
plot(freqs, murder_PSD)
# Each graph should have the same scale, otherwise we cannot compare them
xlim(5, 20)
ylim(0, 50)
# Add some decoration
grid(True)
xlabel('Frequency (Hz)')
ylabel('PSD (dB)')
title('Channel %d' % ch)
# Instruct the plotting engine to use 'tight layout', otherwise labels are drawn on top of each other
tight_layout()
"""
Explanation: Below is some code that calculates the PSD for some channels of the murderer EEG block:
End of explanation
"""
def ssvep_classifier(EEG, frequencies, sample_rate):
'''
SSVEP classifier based on Canonical Correlation Analysis (CCA).
Given a list of frequencies, this function will return the frequency
that is present the most strongly in some given EEG signal.
'''
nchannels, nsamples = EEG.shape
# The matrix X is our EEG signal, but we transpose it so
# observations (=samples) are on the rows and variables (=channels)
# are on the columns.
X = EEG.T
# Keep track of the score for each frequency
scores = []
# Calculate a timeline in seconds, so we can construct
# sine/cosine waves with the correct frequencies
time = arange(nsamples) / float(sample_rate)
# Calculate sines/cosines for all possible frequencies
for frequency in frequencies:
# Calculate this part only once
y = 2 * pi * frequency * time
# Construct the matrix Y containing the base frequency and 2 harmonics
Y = vstack((sin(y), cos(y), sin(2*y), cos(2*y), sin(3*y), cos(3*y))).T
# Center the variables (= remove the mean)
X = X - tile(mean(X, axis=0), (nsamples, 1))
Y = Y - tile(mean(Y, axis=0), (nsamples, 1))
# Perform Q-R decomposition. We only care about the Q part, ignore R
QX,_ = linalg.qr(X)
QY,_ = linalg.qr(Y)
# Compute canonical correlations through SVD
_, D, _ = linalg.svd(QX.T.dot(QY))
# Note the highest (= first) coefficient as the final score for this frequency
scores.append(D[0])
# Return the frequency with the highest score
return frequencies[ argmax(scores) ]
"""
Explanation: The EEG recording made by the Emotiv EPOC is noisy: there are many peaks. Still, the peak at 10 Hz is the strongest, as can be observed in channels 0 and 13. Therefore the conclusion is that the murderer is Miss Scarlett.
Automatic classification
A common approach to automatically determining the frequency that is present most strongly is Canonical Correlation Analysis (CCA) [1]. The idea is that an EEG recording of someone looking at an SSVEP stimulus with frequency $f$ will be highly correlated with a generated sine wave of the same frequency, more so than with a generated sine wave of a different frequency. This holds under the condition that the EEG signal and the generated sine are in phase (i.e. the waveforms 'line up'). This condition is problematic, as we have no guarantee that it will hold. To counter the phase shift, we correlate the EEG signal with both a sine and a cosine wave, following the same logic employed by the Fourier transform: if the sine wave correlates poorly, the cosine will correlate strongly and vice versa, as long as the frequencies match.
Simple correlation analysis compares only two vectors, let's call them $x$ and $y$: for example, a single EEG channel and an artificial signal of the same length. In our case we have multiple EEG channels, so the vector $x$ becomes a matrix $X$ with samples along the rows and channels in the columns. We also test the signal against multiple sine and cosine waves, generated at frequency $f$ and some of its harmonics ($2f$ and $3f$ in this case), so $y$ becomes a matrix $Y$ with the generated samples in the rows and each column containing either a sine or a cosine at one of these frequencies.
The CCA procedure can be seen as a way to calculate the correlation between two sets of vectors simultaneously, a set of vectors being the same as a matrix. It does so by calculating a vector $u_0$, a linear combination of the columns of $X$, and a vector $v_0$, a linear combination of the columns of $Y$, in such a way that the correlation coefficient $\rho$ between $u_0$ and $v_0$ is maximal. It then goes on to calculate further vectors $u_1$ and $v_1$ that are uncorrelated with $u_0$ and $v_0$ and maximally correlated with each other. In our case, however, we are only interested in the correlation between $u_0$ and $v_0$.
The task of our classifier is what we did manually before: out of the frequencies of all the stimuli on the screen, pick the one that best corresponds to what is present in the EEG signal. For each candidate frequency, we compute a score by performing CCA between the EEG signal and a matrix of generated sine/cosine waves at that frequency and its harmonics. Finally, we pick the frequency with the highest score.
[1] Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs.
Lin, Zhonglin / Zhang, Changshui / Wu, Wei / Gao, Xiaorong, IEEE transactions on bio-medical engineering, 53 (12 Pt 2), p.2610-2614, Dec 2006, http://www.ncbi.nlm.nih.gov/pubmed/17152442
Below is the CCA classifier implemented as a Python function:
End of explanation
"""
print('Murderer frequency: %d Hz.' % ssvep_classifier(murder_EEG, [8, 10, 12, 15], 128))
print('Weapon frequency: %d Hz.' % ssvep_classifier(weapon_EEG, [8, 10, 12, 15], 128))
print('Room frequency: %d Hz.' % ssvep_classifier(room_EEG, [8, 10, 12, 15], 128))
"""
Explanation: Putting the classifier to work, let's solve the murder completely:
End of explanation
"""
|
enchantner/python-zero | lesson_10/Slides.ipynb | mit | import logging
import sys
logger = logging.getLogger(__file__) # the logger is identified by its name
logger.setLevel(logging.DEBUG) # global logging level (WARNING by default)
fh = logging.FileHandler('test.log') # handler that writes to a file; there is also RotatingFileHandler
fh.setLevel(logging.DEBUG) # set the logging level of this particular handler
ch = logging.StreamHandler() # handler that writes to stderr (by default)
ch.setLevel(logging.ERROR) # log only ERROR and CRITICAL messages
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
) # set the format
fh.setFormatter(formatter) # different handlers can use different formatters
ch.setFormatter(formatter)
logger.addHandler(fh) # attach the handlers
logger.addHandler(ch)
logger.info("We're in a jungle")
logger.warning("Oh no! It's a snake!")
logger.error("Nowhere to run!")
logger.critical("It bit be!")
logger.debug("Ha, it's just a small Python")
"""
Explanation: Questions
The parser.parse_known_args() function returns two values. What do they contain?
What are the two main kinds of built packages in Python?
Where does Python look for a module when we write "import foobar"?
What is MANIFEST.in for, and how do you enable support for it?
What is .gitignore for?
How do we build the section tree in Sphinx?
Logging
Log output can be sent to one or several destinations at once
Different severity levels:
DEBUG
INFO
WARNING
ERROR
CRITICAL
Advanced formatting options
http://docs.python-guide.org/en/latest/writing/logging/
End of explanation
"""
# test_code.py
import unittest
class TestStringMethods(unittest.TestCase):
def test_upper(self):
self.assertEqual('foo'.upper(), 'FOO')
def test_isupper(self):
self.assertTrue('FOO'.isupper())
self.assertFalse('Foo'.isupper())
def test_split(self):
s = 'hello world'
self.assertEqual(s.split(), ['hello', 'world'])
# check that s.split fails when the separator is not a string
with self.assertRaises(TypeError):
s.split(2)
if __name__ == '__main__':
unittest.main()
"""
Explanation: "Сконфигурить сразу все"
logging.basicConfig()
https://docs.python.org/3/library/logging.html#logging.basicConfig
Тестирование
Про виды тестирования: http://www.protesting.ru/testing/testtypes.html
По сути, практически любой тест состоит из кода, который сравнивает ожидаемое поведение с реальным
Концепция TDD - сначала пишем тесты, описывающие поведение, потом код
Главные модули:
unittest (https://docs.python.org/3/library/unittest.html )
mock (внутри unittest, https://docs.python.org/3/library/unittest.mock.html )
End of explanation
"""
# my_code.py
def read_file(fname):
with open(fname, "r") as f:
for line in f:
yield line
def do_cool_stuff(filename):
s = 0
for line in read_file(filename):
s += int(line.strip())
return s
# test_my_code.py
import unittest
from unittest.mock import MagicMock
import my_code
class TestMyCode(unittest.TestCase):
def test_cool_stuff(self):
        my_code.read_file = MagicMock()
        my_code.read_file.return_value = iter(["1", "2", "3"])
        self.assertEqual(
            my_code.do_cool_stuff("fakefile"),
            6
        )
if __name__ == '__main__':
unittest.main()
"""
Explanation: setUp() and tearDown()
Run code before a test and after it
Exercise
Generate a file 'input.txt' with 100 random numbers from -1000 to 1000 (one per line). Write a program that reads this file and finds the 10 largest numbers in it without using sorted() (bonus points for using, for example, the heapq module). Try to handle errors (missing file, non-numeric content, empty file, etc.). Think about how to structure the program so that it is easy to test, and write a test for it in a neighboring file, preferably with at least four methods in the test case.
Dynamic object substitution: mocking
https://docs.python.org/3/library/unittest.mock.html
To test something functionally, you have to isolate it
This is especially true for external calls: requests to websites, database access, and so on
Suppose we have code like this
End of explanation
"""
import pytest
import my_code

def test_load_list_extended(mocker):
    # the 'mocker' fixture is provided by the pytest-mock plugin
    my_code.read_file = mocker.MagicMock()
    my_code.read_file.return_value = iter(['1', '2', '3'])
    expected = 6
    assert my_code.do_cool_stuff('some_file') == expected
"""
Explanation: Pytest
https://docs.pytest.org/en/latest/
End of explanation
"""
|
JWarmenhoven/DBDA-python | Notebooks/Chapter 9.ipynb | mit | import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
from IPython.display import Image
from matplotlib import gridspec
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
%load_ext watermark
%watermark -p pandas,numpy,pymc3,matplotlib,seaborn
"""
Explanation: Chapter 9 - Hierarchical Models
9.2.4 - Example: Therapeutic touch
Shrinkage
9.5.1 - Example: Baseball batting abilities by position (subjects within categories)
End of explanation
"""
df = pd.read_csv('data/TherapeuticTouchData.csv', dtype={'s':'category'})
df.info()
df.head()
"""
Explanation: 9.2.4 - Example: Therapeutic touch
End of explanation
"""
df_proportions = df.groupby('s')['y'].apply(lambda x: x.sum()/len(x))
ax = sns.distplot(df_proportions, bins=8, kde=False, color='gray')
ax.set(xlabel='Proportion Correct', ylabel='# Practitioners')
sns.despine(ax=ax);
"""
Explanation: Figure 9.9
End of explanation
"""
Image('images/fig9_7.png', width=200)
practitioner_idx = df.s.cat.codes.values
practitioner_codes = df.s.cat.categories
n_practitioners = practitioner_codes.size
with pm.Model() as hierarchical_model:
omega = pm.Beta('omega', 1., 1.)
kappa_minus2 = pm.Gamma('kappa_minus2', 0.01, 0.01)
kappa = pm.Deterministic('kappa', kappa_minus2 + 2)
theta = pm.Beta('theta', alpha=omega*(kappa-2)+1, beta=(1-omega)*(kappa-2)+1, shape=n_practitioners)
y = pm.Bernoulli('y', theta[practitioner_idx], observed=df.y)
pm.model_to_graphviz(hierarchical_model)
with hierarchical_model:
trace = pm.sample(5000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace, ['omega','kappa', 'theta']);
pm.summary(trace)
# Note that theta is indexed starting with 0 and not 1, as is the case in Kruschke (2015).
"""
Explanation: Model (Kruschke, 2015)
End of explanation
"""
plt.figure(figsize=(10,12))
# Define gridspec
gs = gridspec.GridSpec(4, 6)
ax1 = plt.subplot(gs[0,:3])
ax2 = plt.subplot(gs[0,3:])
ax3 = plt.subplot(gs[1,:2])
ax4 = plt.subplot(gs[1,2:4])
ax5 = plt.subplot(gs[1,4:6])
ax6 = plt.subplot(gs[2,:2])
ax7 = plt.subplot(gs[2,2:4])
ax8 = plt.subplot(gs[2,4:6])
ax9 = plt.subplot(gs[3,:2])
ax10 = plt.subplot(gs[3,2:4])
ax11 = plt.subplot(gs[3,4:6])
# thetas and theta pairs to plot
thetas = (0, 13, 27)
theta_pairs = ((0,13),(0,27),(13,27))
font_d = {'size':14}
# kappa & omega posterior plots
for var, ax in zip(['kappa', 'omega'], [ax1, ax2]):
pm.plot_posterior(trace[var], point_estimate='mode', ax=ax, color=color, round_to=2)
ax.set_xlabel('$\{}$'.format(var), fontdict={'size':20, 'weight':'bold'})
ax1.set(xlim=(0,500))
# theta posterior plots
for var, ax in zip(thetas,[ax3, ax7, ax11]):
pm.plot_posterior(trace['theta'][:,var], point_estimate='mode', ax=ax, color=color)
ax.set_xlabel('theta[{}]'.format(var), fontdict=font_d)
# theta scatter plots
for var, ax in zip(theta_pairs,[ax6, ax9, ax10]):
ax.scatter(trace['theta'][::10,var[0]], trace['theta'][::10,var[1]], alpha=0.75, color=color, facecolor='none')
ax.plot([0, 1], [0, 1], ':k', transform=ax.transAxes, alpha=0.5)
ax.set_xlabel('theta[{}]'.format(var[0]), fontdict=font_d)
ax.set_ylabel('theta[{}]'.format(var[1]), fontdict=font_d)
ax.set(xlim=(0,1), ylim=(0,1), aspect='equal')
# theta posterior differences plots
for var, ax in zip(theta_pairs,[ax4, ax5, ax8]):
pm.plot_posterior(trace['theta'][:,var[0]]-trace['theta'][:,var[1]], point_estimate='mode', ax=ax, color=color)
ax.set_xlabel('theta[{}] - theta[{}]'.format(*var), fontdict=font_d)
plt.tight_layout()
"""
Explanation: Figure 9.10 - Marginal posterior distributions
End of explanation
"""
with pm.Model() as unpooled_model:
theta = pm.Beta('theta', 1, 1, shape=n_practitioners)
y = pm.Bernoulli('y', theta[practitioner_idx], observed=df.y)
pm.model_to_graphviz(unpooled_model)
with unpooled_model:
unpooled_trace = pm.sample(5000, cores=4)
"""
Explanation: Shrinkage
Let's create a model with just the theta estimations per practitioner, without the influence of a higher level distribution. Then we can compare the theta values with the hierarchical model above.
End of explanation
"""
df_shrinkage = (pd.concat([pm.summary(unpooled_trace).iloc[:,0],
pm.summary(trace).iloc[3:,0]],
axis=1)
.reset_index())
df_shrinkage.columns = ['theta', 'unpooled', 'hierarchical']
df_shrinkage = pd.melt(df_shrinkage, 'theta', ['unpooled', 'hierarchical'], var_name='Model')
df_shrinkage.head()
"""
Explanation: Here we concatenate the trace results (thetas) from both models into a dataframe. Next we shape the data into a format that we can use with Seaborn's pointplot.
End of explanation
"""
plt.figure(figsize=(10,9))
plt.scatter(1, pm.summary(trace).iloc[0,0], s=100, c='r', marker='x', zorder=999, label='Group mean')
sns.pointplot(x='Model', y='value', hue='theta', data=df_shrinkage);
"""
Explanation: The below plot shows that the theta estimates on practitioner level are pulled towards the group mean of the hierarchical model.
End of explanation
"""
df2 = pd.read_csv('data/BattingAverage.csv', usecols=[0,1,2,3], dtype={'PriPos':'category'})
df2.info()
"""
Explanation: 9.5.1 - Example: Baseball batting abilities by position
End of explanation
"""
df2['BatAv'] = df2.Hits.divide(df2.AtBats)
df2.head(10)
# Batting average by primary field positions calculated from the data
df2.groupby('PriPos')[['Hits','AtBats']].sum().pipe(lambda x: x.Hits/x.AtBats)
"""
Explanation: The DataFrame contains records for 948 players in the 2012 regular season of Major League Baseball.
- One record per player
- 9 primary field positions
End of explanation
"""
Image('images/fig9_13.png', width=300)
pripos_idx = df2.PriPos.cat.codes.values
pripos_codes = df2.PriPos.cat.categories
n_pripos = pripos_codes.size
# df2 contains one entry per player
n_players = df2.index.size
with pm.Model() as hierarchical_model2:
# Hyper parameters
omega = pm.Beta('omega', 1, 1)
kappa_minus2 = pm.Gamma('kappa_minus2', 0.01, 0.01)
kappa = pm.Deterministic('kappa', kappa_minus2 + 2)
# Parameters for categories (Primary field positions)
omega_c = pm.Beta('omega_c',
omega*(kappa-2)+1, (1-omega)*(kappa-2)+1,
shape = n_pripos)
kappa_c_minus2 = pm.Gamma('kappa_c_minus2',
0.01, 0.01,
shape = n_pripos)
kappa_c = pm.Deterministic('kappa_c', kappa_c_minus2 + 2)
# Parameter for individual players
theta = pm.Beta('theta',
omega_c[pripos_idx]*(kappa_c[pripos_idx]-2)+1,
(1-omega_c[pripos_idx])*(kappa_c[pripos_idx]-2)+1,
shape = n_players)
y2 = pm.Binomial('y2', n=df2.AtBats.values, p=theta, observed=df2.Hits)
pm.model_to_graphviz(hierarchical_model2)
with hierarchical_model2:
trace2 = pm.sample(3000, cores=4)
pm.traceplot(trace2, ['omega', 'kappa', 'omega_c', 'kappa_c']);
"""
Explanation: Model (Kruschke, 2015)
End of explanation
"""
pm.plot_posterior(trace2['omega'], point_estimate='mode', color=color)
plt.title('Overall', fontdict={'fontsize':16, 'fontweight':'bold'})
plt.xlabel('omega', fontdict={'fontsize':14});
"""
Explanation: Figure 9.17
Posterior distribution of hyper parameter omega after sampling.
End of explanation
"""
fig, axes = plt.subplots(3,3, figsize=(14,8))
for i, ax in enumerate(axes.T.flatten()):
pm.plot_posterior(trace2['omega_c'][:,i], ax=ax, point_estimate='mode', color=color)
ax.set_title(pripos_codes[i], fontdict={'fontsize':16, 'fontweight':'bold'})
ax.set_xlabel('omega_c__{}'.format(i), fontdict={'fontsize':14})
ax.set_xlim(0.10,0.30)
plt.tight_layout(h_pad=3)
"""
Explanation: Posterior distributions of the omega_c parameters after sampling.
End of explanation
"""
|
Vvkmnn/books | ThinkBayes/08_Observer_Bias.ipynb | gpl-3.0 | def BiasPmf(pmf):
new_pmf = pmf.Copy()
for x, p in pmf.Items():
new_pmf.Mult(x, x)
new_pmf.Normalize()
return new_pmf
"""
Explanation: Observer Bias
The Red Line problem
In Massachusetts, the Red Line is a subway that connects Cambridge and
Boston. When I was working in Cambridge I took the Red Line from Kendall
Square to South Station and caught the commuter rail to Needham. During
rush hour Red Line trains run every 7–8 minutes, on average.
When I arrived at the station, I could estimate the time until the next
train based on the number of passengers on the platform. If there were
only a few people, I inferred that I just missed a train and expected to
wait about 7 minutes. If there were more passengers, I expected the
train to arrive sooner. But if there were a large number of passengers,
I suspected that trains were not running on schedule, so I would go back
to the street level and get a taxi.
While I was waiting for trains, I thought about how Bayesian estimation
could help predict my wait time and decide when I should give up and
take a taxi. This chapter presents the analysis I came up with.
This chapter is based on a project by Brendan Ritter and Kai Austin, who
took a class with me at Olin College. The code in this chapter is
available from http://thinkbayes.com/redline.py. The code I used to
collect data is in http://thinkbayes.com/redline_data.py. For more
information see Section [download].
The model
[fig.redline0]
Before we get to the analysis, we have to make some modeling decisions.
First, I will treat passenger arrivals as a Poisson process, which means
I assume that passengers are equally likely to arrive at any time, and
that they arrive at an unknown rate, $\lambda$, measured in passengers
per minute. Since I observe passengers during a short period of time,
and at the same time every day, I assume that $\lambda$ is constant.
On the other hand, the arrival process for trains is not Poisson. Trains
to Boston are supposed to leave from the end of the line (Alewife
station) every 7–8 minutes during peak times, but by the time they get
to Kendall Square, the time between trains varies between 3 and 12
minutes.
To gather data on the time between trains, I wrote a script that
downloads real-time data from
http://www.mbta.com/rider_tools/developers/, selects south-bound
trains arriving at Kendall square, and records their arrival times in a
database. I ran the script from 4pm to 6pm every weekday for 5 days, and
recorded about 15 arrivals per day. Then I computed the time between
consecutive arrivals; the distribution of these gaps is shown in
Figure [fig.redline0], labeled z.
If you stood on the platform from 4pm to 6pm and recorded the time
between trains, this is the distribution you would see. But if you
arrive at some random time (without regard to the train schedule) you
would see a different distribution. The average time between trains, as
seen by a random passenger, is substantially higher than the true
average.
Why? Because a passenger is more like to arrive during a large interval
than a small one. Consider a simple example: suppose that the time
between trains is either 5 minutes or 10 minutes with equal probability.
In that case the average time between trains is 7.5 minutes.
But a passenger is more likely to arrive during a 10 minute gap than a 5
minute gap; in fact, twice as likely. If we surveyed arriving
passengers, we would find that 2/3 of them arrived during a 10 minute
gap, and only 1/3 during a 5 minute gap. So the average time between
trains, as seen by an arriving passenger, is 8.33 minutes.
This kind of observer bias appears in many contexts.
Students think that classes are bigger than they are because more of
them are in the big classes. Airline passengers think that planes are
fuller than they are because more of them are on full flights.
In each case, values from the actual distribution are oversampled in
proportion to their value. In the Red Line example, a gap that is twice
as big is twice as likely to be observed.
So given the actual distribution of gaps, we can compute the
distribution of gaps as seen by passengers. BiasPmf does
this computation:
End of explanation
"""
def PmfOfWaitTime(pmf_zb):
metapmf = thinkbayes.Pmf()
for gap, prob in pmf_zb.Items():
uniform = MakeUniformPmf(0, gap)
metapmf.Set(uniform, prob)
pmf_y = thinkbayes.MakeMixture(metapmf)
return pmf_y
"""
Explanation: pmf is the actual distribution; new_pmf is the biased
distribution. Inside the loop, we multiply the probability of each
value, x, by the likelihood it will be observed, which is
proportional to x. Then we normalize the result.
Figure [fig.redline0] shows the actual distribution of gaps, labeled
z, and the distribution of gaps seen by passengers, labeled
zb for “z biased”.
Wait times
[fig.redline2]
Wait time, which I call y, is the time between the arrival
of a passenger and the next arrival of a train. Elapsed time, which I
call x, is the time between the arrival of the previous
train and the arrival of a passenger. I chose these definitions so that
zb = x + y.
Given the distribution of zb, we can compute the
distribution of y. I’ll start with a simple case and then
generalize. Suppose, as in the previous example, that zb is
either 5 minutes with probability 1/3, or 10 minutes with probability
2/3.
If we arrive at a random time during a 5 minute gap, y is
uniform from 0 to 5 minutes. If we arrive during a 10 minute gap,
y is uniform from 0 to 10. So the overall distribution is a
mixture of uniform distributions weighted according to the probability
of each gap.
The following function takes the distribution of zb and
computes the distribution of y:
End of explanation
"""
def MakeUniformPmf(low, high):
pmf = thinkbayes.Pmf()
for x in MakeRange(low=low, high=high):
pmf.Set(x, 1)
pmf.Normalize()
return pmf
"""
Explanation: PmfOfWaitTime makes a meta-Pmf that maps from each uniform
distribution to its probability. Then it uses MakeMixture,
which we saw in Section [mixture], to compute the mixture.
PmfOfWaitTime also uses MakeUniformPmf,
defined here:
End of explanation
"""
def MakeRange(low, high, skip=10):
return range(low, high+skip, skip)
"""
Explanation: low and high are the range of the uniform
distribution, (both ends included). Finally, MakeUniformPmf
uses MakeRange, defined here:
End of explanation
"""
class WaitTimeCalculator(object):
def __init__(self, pmf_z):
self.pmf_z = pmf_z
        self.pmf_zb = BiasPmf(pmf_z)
        self.pmf_y = PmfOfWaitTime(self.pmf_zb)
self.pmf_x = self.pmf_y
"""
Explanation: MakeRange defines a set of possible values for wait time
(expressed in seconds). By default it divides the range into 10 second
intervals.
To encapsulate the process of computing these distributions, I created a
class called WaitTimeCalculator:
End of explanation
"""
x = zp - y
"""
Explanation: The parameter, pmf_z, is the unbiased distribution of z.
pmf_zb is the biased distribution of gap time, as seen by passengers.
pmf_y is the distribution of wait time. pmf_x is the distribution of
elapsed time, which is the same as the distribution of wait time. To see
why, remember that for a particular value of zp, the
distribution of y is uniform from 0 to zp.
Also
End of explanation
"""
wtc = WaitTimeCalculator(pmf_z)
"""
Explanation: So the distribution of x is also uniform from 0 to
zp.
Figure [fig.redline2] shows the distribution of z,
zb, and y based on the data I collected from
the Red Line web site.
To present these distributions, I am switching from Pmfs to Cdfs. Most
people are more familiar with Pmfs, but I think Cdfs are easier to
interpret, once you get used to them. And if you want to plot several
distributions on the same axes, Cdfs are the way to go.
The mean of z is 7.8 minutes. The mean of zb
is 8.8 minutes, about 13% higher. The mean of y is 4.4,
half the mean of zb.
As an aside, the Red Line schedule reports that trains run every 9
minutes during peak times. This is close to the average of
zb, but higher than the average of z. I
exchanged email with a representative of the MBTA, who confirmed that
the reported time between trains is deliberately conservative in order
to account for variability.
Predicting wait times
[fig.redline3]
Let’s get back to the motivating question: suppose that when I arrive at
the platform I see 10 people waiting. How long should I expect to wait
until the next train arrives?
As always, let’s start with the easiest version of the problem and work
our way up. Suppose we are given the actual distribution of
z, and we know that the passenger arrival rate, $\lambda$,
is 2 passengers per minute.
In that case we can:
Use the distribution of z to compute the prior
distribution of zp, the time between trains as seen by
a passenger.
Then we can use the number of passengers to estimate the
distribution of x, the elapsed time since the last
train.
Finally, we use the relation y = zp - x to get the
distribution of y.
The first step is to create a WaitTimeCalculator that
encapsulates the distributions of zp, x, and
y, prior to taking into account the number of passengers.
End of explanation
"""
ete = ElapsedTimeEstimator(wtc,
lam=2.0/60,
num_passengers=15)
"""
Explanation: pmf_z is the given distribution of gap times.
The next step is to make an ElapsedTimeEstimator (defined
below), which encapsulates the posterior distribution of x
and the predictive distribution of y.
End of explanation
"""
class ElapsedTimeEstimator(object):
def __init__(self, wtc, lam, num_passengers):
self.prior_x = Elapsed(wtc.pmf_x)
self.post_x = self.prior_x.Copy()
self.post_x.Update((lam, num_passengers))
self.pmf_y = PredictWaitTime(wtc.pmf_zb, self.post_x)
"""
Explanation: The parameters are the WaitTimeCalculator, the passenger
arrival rate, lam (expressed in passengers per second), and
the observed number of passengers, let’s say 15.
Here is the definition of ElapsedTimeEstimator:
End of explanation
"""
class Elapsed(thinkbayes.Suite):
def Likelihood(self, data, hypo):
x = hypo
lam, k = data
like = thinkbayes.EvalPoissonPmf(k, lam * x)
return like
"""
Explanation: prior_x and posterior_x are the prior and posterior distributions of
elapsed time. pmf_y is the predictive distribution of wait time.
ElapsedTimeEstimator uses Elapsed and
PredictWaitTime, defined below.
Elapsed is a Suite that represents the hypothetical
distribution of x. The prior distribution of x
comes straight from the WaitTimeCalculator. Then we use the
data, which consists of the arrival rate, lam, and the
number of passengers on the platform, to compute the posterior
distribution.
Here’s the definition of Elapsed:
End of explanation
"""
def PredictWaitTime(pmf_zb, pmf_x):
pmf_y = pmf_zb - pmf_x
RemoveNegatives(pmf_y)
return pmf_y
"""
Explanation: As always, Likelihood takes a hypothesis and data, and
computes the likelihood of the data under the hypothesis. In this case
hypo is the elapsed time since the last train and
data is a tuple of lam and the number of
passengers.
The likelihood of the data is the probability of getting k
arrivals in x time, given arrival rate lam. We
compute that using the PMF of the Poisson distribution.
Finally, here’s the definition of PredictWaitTime:
End of explanation
"""
pmf_y = pmf_zb - pmf_x
"""
Explanation: pmf_zb is the distribution of gaps between trains; pmf_x is the
distribution of elapsed time, based on the observed number of
passengers. Since y = zb - x, we can compute
End of explanation
"""
def RemoveNegatives(pmf):
for val in pmf.Values():
if val < 0:
pmf.Remove(val)
pmf.Normalize()
"""
Explanation: The subtraction operator invokes Pmf.__sub__, which enumerates all
pairs of zb and x, computes the differences,
and adds the results to pmf_y.
The resulting Pmf includes some negative values, which we know are
impossible. For example, if you arrive during a gap of 5 minutes, you
can’t wait more than 5 minutes. RemoveNegatives removes the
impossible values from the distribution and renormalizes.
End of explanation
"""
class ArrivalRate(thinkbayes.Suite):
def Likelihood(self, data, hypo):
lam = hypo
y, k = data
like = thinkbayes.EvalPoissonPmf(k, lam * y)
return like
"""
Explanation: Figure [fig.redline3] shows the results. The prior distribution of
x is the same as the distribution of y in
Figure [fig.redline2]. The posterior distribution of x
shows that, after seeing 15 passengers on the platform, we believe that
the time since the last train is probably 5-10 minutes. The predictive
distribution of y indicates that we expect the next train
in less than 5 minutes, with about 80% confidence.
Estimating the arrival rate
[fig.redline1]
The analysis so far has been based on the assumption that we know (1)
the distribution of gaps and (2) the passenger arrival rate. Now we are
ready to relax the second assumption.
Suppose that you just moved to Boston, so you don’t know much about the
passenger arrival rate on the Red Line. After a few days of commuting,
you could make a guess, at least qualitatively. With a little more
effort, you could estimate $\lambda$ quantitatively.
Each day when you arrive at the platform, you should note the time and
the number of passengers waiting (if the platform is too big, you could
choose a sample area). Then you should record your wait time and the
number of new arrivals while you are waiting.
After five days, you might have data like this:
k1 y k2
-- --- --
17 4.6 9
22 1.0 0
23 1.4 4
18 5.4 12
4 5.8 11
where k1 is the number of passengers waiting when you
arrive, y is your wait time in minutes, and k2
is the number of passengers who arrive while you are waiting.
Over the course of one week, you waited 18 minutes and saw 36 passengers
arrive, so you would estimate that the arrival rate is 2 passengers per
minute. For practical purposes that estimate is good enough, but for the
sake of completeness I will compute a posterior distribution for
$\lambda$ and show how to use that distribution in the rest of the
analysis.
ArrivalRate is a Suite that represents
hypotheses about $\lambda$. As always, Likelihood takes a
hypothesis and data, and computes the likelihood of the data under the
hypothesis.
In this case the hypothesis is a value of $\lambda$. The data is a pair,
y, k, where y is a wait time and
k is the number of passengers that arrived.
End of explanation
"""
class ArrivalRateEstimator(object):
def __init__(self, passenger_data):
low, high = 0, 5
n = 51
hypos = numpy.linspace(low, high, n) / 60
self.prior_lam = ArrivalRate(hypos)
self.post_lam = self.prior_lam.Copy()
for k1, y, k2 in passenger_data:
self.post_lam.Update((y, k2))
"""
Explanation: This Likelihood might look familiar; it is almost identical
to Elapsed.Likelihood in Section [elapsed]. The difference
is that in Elapsed.Likelihood the hypothesis is
x, the elapsed time; in ArrivalRate.Likelihood
the hypothesis is lam, the arrival rate. But in both cases
the likelihood is the probability of seeing k arrivals in
some period of time, given lam.
ArrivalRateEstimator encapsulates the process of estimating
$\lambda$. The parameter, passenger_data, is a list of k1, y,
k2 tuples, as in the table above.
End of explanation
"""
import thinkbayes

n = 220
# gap_times holds the observed gap times, loaded elsewhere.
cdf_z = thinkbayes.MakeCdfFromList(gap_times)
sample_z = cdf_z.Sample(n)
pmf_z = thinkbayes.MakePmfFromList(sample_z)
"""
Explanation: __init__ builds hypos, which is a sequence of
hypothetical values for lam, then builds the prior
distribution, prior_lam. The for loop updates the prior
with data, yielding the posterior distribution, post_lam.
Figure [fig.redline1] shows the prior and posterior distributions. As
expected, the mean and median of the posterior are near the observed
rate, 2 passengers per minute. But the spread of the posterior
distribution captures our uncertainty about $\lambda$ based on a small
sample.
Incorporating uncertainty
[fig.redline4]
Whenever there is uncertainty about one of the inputs to an analysis, we
can take it into account by a process like this:
Implement the analysis based on a deterministic value of the
uncertain parameter (in this case $\lambda$).
Compute the distribution of the uncertain parameter.
Run the analysis for each value of the parameter, and generate a set
of predictive distributions.
Compute a mixture of the predictive distributions, using the weights
from the distribution of the parameter.
We have already done steps (1) and (2). I wrote a class called
WaitMixtureEstimator to handle steps (3) and (4).
class WaitMixtureEstimator(object):

    def __init__(self, wtc, are, num_passengers=15):
        self.metapmf = thinkbayes.Pmf()

        for lam, prob in sorted(are.post_lam.Items()):
            ete = ElapsedTimeEstimator(wtc, lam, num_passengers)
            self.metapmf.Set(ete.pmf_y, prob)

        self.mixture = thinkbayes.MakeMixture(self.metapmf)
wtc is the WaitTimeCalculator that contains
the distribution of zb. are is the
ArrivalRateEstimator that contains the distribution of
lam.
The first line makes a meta-Pmf that maps from each possible
distribution of y to its probability. For each value of
lam, we use ElapsedTimeEstimator to compute
the corresponding distribution of y and store it in the
Meta-Pmf. Then we use MakeMixture to compute the mixture.
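The mixture step can be sketched with plain dicts standing in for thinkbayes.Pmf objects (`make_mixture` is a hypothetical helper, not the thinkbayes API):

```python
# A minimal sketch of what MakeMixture does: a weighted sum of pmfs.
def make_mixture(metapmf):
    """metapmf is a list of (pmf, weight) pairs; each pmf maps value -> prob."""
    mixture = {}
    for pmf, weight in metapmf:
        for value, prob in pmf.items():
            mixture[value] = mixture.get(value, 0.0) + weight * prob
    return mixture

# Two hypothetical wait-time distributions, weighted equally:
pmf_a = {5: 0.5, 10: 0.5}
pmf_b = {10: 1.0}
mix = make_mixture([(pmf_a, 0.5), (pmf_b, 0.5)])
print(mix)  # {5: 0.25, 10: 0.75}
```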
Figure [fig.redline4] shows the results. The shaded lines in the
background are the distributions of y for each value of
lam, with line thickness that represents likelihood. The
dark line is the mixture of these distributions.
In this case we could get a very similar result using a single point
estimate of lam. So it was not necessary, for practical
purposes, to include the uncertainty of the estimate.
In general, it is important to include variability if the system
response is non-linear; that is, if small changes in the input can cause
big changes in the output. In this case, posterior variability in
lam is small and the system response is approximately
linear for small perturbations.
Decision analysis
[fig.redline5]
At this point we can use the number of passengers on the platform to
predict the distribution of wait times. Now let’s get to the second part
of the question: when should I stop waiting for the train and go catch a
taxi?
Remember that in the original scenario, I am trying to get to South
Station to catch the commuter rail. Suppose I leave the office with
enough time that I can wait 15 minutes and still make my connection at
South Station.
In that case I would like to know the probability that y
exceeds 15 minutes as a function of num_passengers. It is easy enough
to use the analysis from Section [elapsed] and run it for a range of
num_passengers.
But there’s a problem. The analysis is sensitive to the frequency of
long delays, and because long delays are rare, it is hard to estimate their
frequency.
I only have data from one week, and the longest delay I observed was 15
minutes. So I can’t estimate the frequency of longer delays accurately.
However, I can use previous observations to make at least a coarse
estimate. When I commuted by Red Line for a year, I saw three long
delays caused by a signaling problem, a power outage, and “police
activity” at another stop. So I estimate that there are about 3 major
delays per year.
But remember that my observations are biased. I am more likely to
observe long delays because they affect a large number of passengers. So
we should treat my observations as a sample of zb rather
than z. Here’s how we can do that.
During my year of commuting, I took the Red Line home about 220 times.
So I take the observed gap times, gap_times, generate a sample of 220
gaps, and compute their Pmf:
End of explanation
"""
cdf_zp = BiasPmf(pmf_z).MakeCdf()
sample_zb = cdf_zp.Sample(n) + [1800, 2400, 3000]
"""
Explanation: Next I bias pmf_z to get the distribution of zb, draw a
sample, and then add in delays of 30, 40, and 50 minutes (expressed in
seconds):
End of explanation
"""
pdf_zb = thinkbayes.EstimatedPdf(sample_zb)
xs = MakeRange(low=60)
pmf_zb = pdf_zb.MakePmf(xs)
"""
Explanation: Cdf.Sample is more efficient than Pmf.Sample,
so it is usually faster to convert a Pmf to a Cdf before sampling.
Next I use the sample of zb to estimate a Pdf using KDE,
and then convert the Pdf to a Pmf:
End of explanation
"""
pmf_z = UnbiasPmf(pmf_zb)
wtc = WaitTimeCalculator(pmf_z)
"""
Explanation: Finally I unbias the distribution of zb to get the
distribution of z, which I use to create the
WaitTimeCalculator:
End of explanation
"""
def ProbLongWait(num_passengers, minutes):
    # wtc and lam are the globals computed above.
    ete = ElapsedTimeEstimator(wtc, lam, num_passengers)
    cdf_y = ete.pmf_y.MakeCdf()
    prob = 1 - cdf_y.Prob(minutes * 60)
    return prob
"""
Explanation: This process is complicated, but all of the steps are operations we have
seen before. Now we are ready to compute the probability of a long wait.
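The survival probability computed here can be sketched with a toy pmf (plain dicts with wait times in seconds, standing in for the thinkbayes objects):

```python
# P(y > threshold) is just the total probability above the threshold,
# i.e. 1 - CDF(threshold).
def prob_exceeds(pmf, threshold):
    return sum(p for y, p in pmf.items() if y > threshold)

# Hypothetical wait-time distribution: 5, 10, and 20 minutes, in seconds.
pmf_y = {300: 0.5, 600: 0.3, 1200: 0.2}
print(prob_exceeds(pmf_y, 15 * 60))  # 0.2: only the 20-minute wait exceeds 15
```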
End of explanation
"""
|
yaoxx151/UCSB_Boot_Camp_copy | Day06_GraphAlgorithms2/notebooks/Fun with graphs 1.ipynb | cc0-1.0 | %matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
from IPython.display import Image
n = 10
m = 20
rgraph1 = nx.gnm_random_graph(n,m)
print "Nodes: ", rgraph1.nodes()
print "Edges: ", rgraph1.edges()
if nx.is_connected(rgraph1):
    print "Graph is connected"
else:
    print "Graph is not connected"
print "Diameter of graph is ", nx.diameter(rgraph1)
nx.draw(rgraph1)
plt.draw()
elarge=[(u,v) for (u,v) in rgraph1.edges() if u + v >= 9]
esmall=[(u,v) for (u,v) in rgraph1.edges() if u + v < 9]
pos=nx.spring_layout(rgraph1) # positions for all nodes
# nodes
nx.draw_networkx_nodes(rgraph1,pos,node_size=700)
# edges
nx.draw_networkx_edges(rgraph1, pos, edgelist=elarge,
                       width=6, edge_color='r')
nx.draw_networkx_edges(rgraph1, pos, edgelist=esmall,
                       width=6, alpha=0.5, edge_color='b', style='dashed')
# labels
nx.draw_networkx_labels(rgraph1,pos,font_size=20,)
plt.axis('off')
plt.savefig("data/weighted_graph.png") # save as png
plt.show() # display
"""
Explanation: <h1> Basic operations on graphs </h1>
End of explanation
"""
T = nx.dfs_tree(rgraph1,0)
print "DFS Tree edges : ", T.edges()
T = nx.bfs_tree(rgraph1, 0)
print "BFS Tree edges : ", T.edges()
"""
Explanation: <h1> Graph Traversal BFS, DFS </h1>
End of explanation
"""
|
McIntyre-Lab/ipython-demo | pickle.ipynb | gpl-2.0 | from IPython.display import YouTubeVideo
YouTubeVideo('yYey8ntlK_E', width=800, height=500)
"""
Explanation: Using Stream Serialization (Pickling)
In Python, pickling and unpickling serialize a Python object to a file and read it back. Basically, pickling takes the hunk of memory an object is sitting in and writes it out to disk.
End of explanation
"""
import cPickle as pk
"""
Explanation: There are two implementations of pickle in python:
pickle
cPickle
The cPickle implementation is faster.
I like using pickles in two situations:
When passing complex objects between python programs
For debugging programs that take a long time to run.
Say we have a program that runs for 15 min, then produces an error. We could fire up the python debugger and dig into the problem, but each time we change something we have to re-run the entire program. Instead lets use pickling to help us out.
First lets make some helper functions.
End of explanation
"""
def pickleDict(objDict, fname):
    """ Pickle a dictionary of "var name": values for debugging.

    Arguments:
    :param dict objDict: A dictionary where each key is the name of a
        variable and the value is the object itself.
    :param str fname: File name of the pickle.
    """
    with open(fname, 'wb') as FH:
        pk.dump(objDict, FH)
"""
Explanation: This first function takes a dictionary and pickles it.
End of explanation
"""
def unPickleDict(fname):
    """ Unpickle a dictionary of "var name": values for debugging.

    Arguments:
    :param str fname: File name of the pickle.

    Returns:
    :rtype: dict
    :returns: A dictionary where each key is the name of a
        variable and the value is the object itself.
    """
    with open(fname, 'rb') as FH:
        objDict = pk.load(FH)
    return objDict
"""
Explanation: This second function unpickles a file and returns the dictionary.
End of explanation
"""
# Make up some data
my_cool_flags = [0, 0, 0, 0, 0, 1]
my_big_data = ['one', 'two', 'three']
my_list_o_genes = ['Sxl', 'dsx', 'fru']
# Put all of these in a dictionary
objDict = {'my_cool_flags': my_cool_flags,
           'my_big_data': my_big_data,
           'my_list_o_genes': my_list_o_genes}
# We can pickle that
pickleDict(objDict, '/tmp/mypickle.pkl')
"""
Explanation: OK, now let's say our program is erroring out in a specific function. We know that the function takes three objects.
my_cool_flags
my_big_data
my_list_o_genes
I can create a dictionary where the key is my object name and the value is my object.
End of explanation
"""
objDict = unPickleDict('/tmp/mypickle.pkl')
print objDict
"""
Explanation: Now in a separate ipython environment you can unpickle your objects, import your broken function and go to town.
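A side note for Python 3 users: cPickle no longer exists as a separate module; the standard pickle module uses the fast C implementation automatically when it is available. The same round trip in Python 3 might look like this (the temporary-file path is just an example):

```python
import os
import pickle
import tempfile

objDict = {'my_cool_flags': [0, 0, 0, 0, 0, 1],
           'my_list_o_genes': ['Sxl', 'dsx', 'fru']}

fname = os.path.join(tempfile.gettempdir(), 'mypickle.pkl')

# Write the dictionary out...
with open(fname, 'wb') as fh:
    pickle.dump(objDict, fh)

# ...and read it back in.
with open(fname, 'rb') as fh:
    restored = pickle.load(fh)

print(restored == objDict)  # True
```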
End of explanation
"""
|
tmylk/gensim | docs/notebooks/WMD_tutorial.ipynb | gpl-3.0 | from time import time
start_nb = time()
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')
sentence_obama = 'Obama speaks to the media in Illinois'
sentence_president = 'The president greets the press in Chicago'
sentence_obama = sentence_obama.lower().split()
sentence_president = sentence_president.lower().split()
"""
Explanation: Finding similar documents with Word2Vec and WMD
Word Mover's Distance is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. For example, in a blog post OpenTable used WMD on restaurant reviews. Using this approach, they were able to mine different aspects of the reviews. In part 2 of this tutorial, we show how you can use Gensim's WmdSimilarity to do something similar to what OpenTable did. Part 1 shows how you can compute the WMD distance between two documents using wmdistance. Part 1 is optional if you want to use WmdSimilarity, but it is also useful in its own right.
First, however, we go through the basics of what WMD is.
Word Mover's Distance basics
WMD is a method that allows us to assess the "distance" between two documents in a meaningful way, even when they have no words in common. It uses word2vec [4] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in k-nearest neighbors classification [3].
WMD is illustrated below for two very similar sentences (illustration taken from Vlad Niculae's blog). The sentences have no words in common, but by matching the relevant words, WMD is able to accurately measure the (dis)similarity between the two sentences. The method also uses the bag-of-words representation of the documents (simply put, the words' frequencies in the documents), noted as $d$ in the figure below. The intuition behind the method is that we find the minimum "traveling distance" between documents, in other words the most efficient way to "move" the distribution of document 1 to the distribution of document 2.
<img src='https://vene.ro/images/wmd-obama.png' height='600' width='600'>
This method was introduced in the article "From Word Embeddings To Document Distances" by Matt Kusner et al. (link to PDF). It is inspired by the "Earth Mover's Distance", and employs a solver of the "transportation problem".
In this tutorial, we will learn how to use Gensim's WMD functionality, which consists of the wmdistance method for distance computation, and the WmdSimilarity class for corpus based similarity queries.
Note:
If you use this software, please consider citing [1], [2] and [3].
Running this notebook
You can download this iPython Notebook, and run it on your own computer, provided you have installed Gensim, PyEMD, NLTK, and downloaded the necessary data.
The notebook was run on an Ubuntu machine with an Intel core i7-4770 CPU 3.40GHz (8 cores) and 32 GB memory. Running the entire notebook on this machine takes about 3 minutes.
Part 1: Computing the Word Mover's Distance
To use WMD, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will start by downloading some pre-trained word2vec embeddings. Download the GoogleNews-vectors-negative300.bin.gz embeddings here (warning: 1.5 GB, file is not needed for part 2). Training your own embeddings can be beneficial, but to simplify this tutorial, we will be using pre-trained embeddings at first.
Let's take some sentences to compute the distance between.
End of explanation
"""
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
"""
Explanation: These sentences have very similar content, and as such the WMD should be low. Before we compute the WMD, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
End of explanation
"""
start = time()
import os
from gensim.models import Word2Vec
if not os.path.exists('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz'):
    raise ValueError("SKIP: You need to download the google news model")
model = Word2Vec.load_word2vec_format('/data/w2v_googlenews/GoogleNews-vectors-negative300.bin.gz', binary=True)
print('Cell took %.2f seconds to run.' % (time() - start))
"""
Explanation: Now, as mentioned earlier, we will be using some downloaded pre-trained embeddings. We load these into a Gensim Word2Vec model class. Note that the embeddings we have chosen here require a lot of memory.
End of explanation
"""
distance = model.wmdistance(sentence_obama, sentence_president)
print 'distance = %.4f' % distance
"""
Explanation: So let's compute WMD using the wmdistance method.
End of explanation
"""
sentence_orange = 'Oranges are my favorite fruit'
sentence_orange = sentence_orange.lower().split()
sentence_orange = [w for w in sentence_orange if w not in stop_words]
distance = model.wmdistance(sentence_obama, sentence_orange)
print 'distance = %.4f' % distance
"""
Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the distance is larger.
End of explanation
"""
# Normalizing word2vec vectors.
start = time()
model.init_sims(replace=True) # Normalizes the vectors in the word2vec class.
distance = model.wmdistance(sentence_obama, sentence_president) # Compute WMD as normal.
print 'Cell took %.2f seconds to run.' %(time() - start)
"""
Explanation: Normalizing word2vec vectors
When using the wmdistance method, it is beneficial to normalize the word2vec vectors first, so they all have equal length. To do this, simply call model.init_sims(replace=True) and Gensim will take care of that for you.
Usually, one measures the distance between two word2vec vectors using the cosine distance (see cosine similarity), which measures the angle between vectors. WMD, on the other hand, uses the Euclidean distance. The Euclidean distance between two vectors might be large because their lengths differ, but the cosine distance is small because the angle between them is small; we can mitigate some of this by normalizing the vectors.
Note that normalizing the vectors can take some time, especially if you have a large vocabulary and/or large vectors.
Usage is illustrated in the example below. It just so happens that the vectors we have downloaded are already normalized, so it won't do any difference in this case.
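The reason normalization helps: for unit-length vectors, squared Euclidean distance and cosine similarity carry the same information, since $\|u - v\|^2 = 2 - 2\cos(u, v)$. A quick check in plain Python (an illustration only, not Gensim code):

```python
import math

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

u = normalize([3.0, 4.0])
v = normalize([1.0, 1.0])

cos = sum(a * b for a, b in zip(u, v))               # cosine similarity
euclid_sq = sum((a - b) ** 2 for a, b in zip(u, v))  # squared Euclidean distance

# For unit vectors the identity ||u - v||^2 = 2 - 2*cos holds.
print(abs(euclid_sq - (2 - 2 * cos)) < 1e-12)  # True
```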
End of explanation
"""
# Pre-processing a document.
from nltk import word_tokenize
download('punkt') # Download data for tokenizer.
def preprocess(doc):
    doc = doc.lower()  # Lower the text.
    doc = word_tokenize(doc)  # Split into words.
    doc = [w for w in doc if w not in stop_words]  # Remove stopwords.
    doc = [w for w in doc if w.isalpha()]  # Remove numbers and punctuation.
    return doc
start = time()
import json
# Business IDs of the restaurants.
ids = ['4bEjOyTaDG24SY5TxsaUNQ', '2e2e7WgqU1BnpxmQL5jbfw', 'zt1TpTuJ6y9n551sw9TaEg',
       'Xhg93cMdemu5pAMkDoEdtQ', 'sIyHTizqAiGu12XMLX3N3g', 'YNQgak-ZLtYJQxlDwN-qIg']
w2v_corpus = [] # Documents to train word2vec on (all 6 restaurants).
wmd_corpus = [] # Documents to run queries against (only one restaurant).
documents = [] # wmd_corpus, with no pre-processing (so we can see the original documents).
with open('/data/yelp_academic_dataset_review.json') as data_file:
    for line in data_file:
        json_line = json.loads(line)
        if json_line['business_id'] not in ids:
            # Not one of the 6 restaurants.
            continue

        # Pre-process document.
        text = json_line['text']  # Extract text from JSON object.
        text = preprocess(text)

        # Add to corpus for training Word2Vec.
        w2v_corpus.append(text)

        if json_line['business_id'] == ids[0]:
            # Add to corpus for similarity queries.
            wmd_corpus.append(text)
            documents.append(json_line['text'])
print 'Cell took %.2f seconds to run.' %(time() - start)
"""
Explanation: Part 2: Similarity queries using WmdSimilarity
You can use WMD to get the most similar documents to a query, using the WmdSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial.
Important note:
WMD is a measure of distance. The similarities in WmdSimilarity are simply the negative distance. Be careful not to confuse distances and similarities. Two similar documents will have a high similarity score and a small distance; two very different documents will have low similarity score, and a large distance.
Yelp data
Let's try similarity queries using some real world data. For that we'll be using Yelp reviews, available at http://www.yelp.com/dataset_challenge. Specifically, we will be using reviews of a single restaurant, namely the Mon Ami Gabi.
To get the Yelp data, you need to register by name and email address. The data is 775 MB.
This time around, we are going to train the Word2Vec embeddings on the data ourselves. One restaurant is not enough to train Word2Vec properly, so we use 6 restaurants for that, but only run queries against one of them. In addition to the Mon Ami Gabi, mentioned above, we will be using:
Earl of Sandwich.
Wicked Spoon.
Serendipity 3.
Bacchanal Buffet.
The Buffet.
The restaurants we chose were those with the highest number of reviews in the Yelp dataset. Incidentally, they all are on the Las Vegas Boulevard. The corpus we trained Word2Vec on has 18957 documents (reviews), and the corpus we used for WmdSimilarity has 4137 documents.
Below a JSON file with Yelp reviews is read line by line, the text is extracted, tokenized, and stopwords and punctuation are removed.
End of explanation
"""
from matplotlib import pyplot as plt
%matplotlib inline
# Document lengths.
lens = [len(doc) for doc in wmd_corpus]
# Plot.
plt.rc('figure', figsize=(8,6))
plt.rc('font', size=14)
plt.rc('lines', linewidth=2)
plt.rc('axes', color_cycle=('#377eb8', '#e41a1c', '#4daf4a',
                            '#984ea3', '#ff7f00', '#ffff33'))
# Histogram.
plt.hist(lens, bins=20)
plt.hold(True)
# Average length.
avg_len = sum(lens) / float(len(lens))
plt.axvline(avg_len, color='#e41a1c')
plt.hold(False)
plt.title('Histogram of document lengths.')
plt.xlabel('Length')
plt.text(100, 800, 'mean = %.2f' % avg_len)
plt.show()
"""
Explanation: Below is a plot with a histogram of document lengths and includes the average document length as well. Note that these are the pre-processed documents, meaning stopwords are removed, punctuation is removed, etc. Document lengths have a high impact on the running time of WMD, so when comparing running times with this experiment, the number of documents in query corpus (about 4000) and the length of the documents (about 62 words on average) should be taken into account.
End of explanation
"""
# Train Word2Vec on all the restaurants.
model = Word2Vec(w2v_corpus, workers=3, size=100)
# Initialize WmdSimilarity.
from gensim.similarities import WmdSimilarity
num_best = 10
instance = WmdSimilarity(wmd_corpus, model, num_best=10)
"""
Explanation: Now we want to initialize the similarity class with a corpus and a word2vec model (which provides the embeddings and the wmdistance method itself).
End of explanation
"""
start = time()
sent = 'Very good, you should seat outdoor.'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print 'Cell took %.2f seconds to run.' %(time() - start)
"""
Explanation: The num_best parameter decides how many results the queries return. Now let's try making a query. The output is a list of indices and similarities of documents in the corpus, sorted by similarity.
Note that the output format is slightly different when num_best is None (i.e. not assigned). In this case, you get an array of similarities, corresponding to each of the documents in the corpus.
The query below is taken directly from one of the reviews in the corpus. Let's see if there are other reviews that are similar to this one.
End of explanation
"""
# Print the query and the retrieved documents, together with their similarities.
print 'Query:'
print sent
for i in range(num_best):
    print
    print 'sim = %.4f' % sims[i][1]
    print documents[sims[i][0]]
"""
Explanation: The query and the most similar documents, together with the similarities, are printed below. We see that the retrieved documents are discussing the same thing as the query, although using different words. The query talks about getting a seat "outdoor", while the results talk about sitting "outside", and one of them says the restaurant has a "nice view".
End of explanation
"""
start = time()
sent = 'I felt that the prices were extremely reasonable for the Strip'
query = preprocess(sent)
sims = instance[query] # A query is simply a "look-up" in the similarity class.
print 'Query:'
print sent
for i in range(num_best):
    print
    print 'sim = %.4f' % sims[i][1]
    print documents[sims[i][0]]
print '\nCell took %.2f seconds to run.' %(time() - start)
"""
Explanation: Let's try a different query, also taken directly from one of the reviews in the corpus.
End of explanation
"""
print 'Notebook took %.2f seconds to run.' %(time() - start_nb)
"""
Explanation: This time around, the results are more straight forward; the retrieved documents basically contain the same words as the query.
WmdSimilarity normalizes the word embeddings by default (using init_sims(), as explained before), but you can overwrite this behaviour by calling WmdSimilarity with normalize_w2v_and_replace=False.
End of explanation
"""
|
cmshobe/landlab | notebooks/tutorials/grid_object_demo/grid_object_demo.ipynb | mit | import numpy as np
from landlab import RasterModelGrid, VoronoiDelaunayGrid, HexModelGrid
smg = RasterModelGrid((3, 4), 1.)  # a square-cell raster, 3 rows x 4 columns, unit spacing
rmg = RasterModelGrid((3, 4), xy_spacing=(1., 2.)) # a rectangular-cell raster
hmg = HexModelGrid(shape=(3, 4))
# ^a hexagonal grid with 3 rows, 4 columns from the base row, & node spacing of 1.
x = np.random.rand(100) * 100.
y = np.random.rand(100) * 100.
vmg = VoronoiDelaunayGrid(x, y)
# ^a Voronoi-cell grid with 100 randomly positioned nodes within a 100.x100. square
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
What happens when you create a grid object?
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
Landlab supports a range of grid types. These include both rasters (with both square and rectangular cells), and a range of structured and unstructured grids based around the interlocking polygons and triangles of a Voronoi-Delaunay tesselation (radial, hexagonal, and irregular grids).
Here, we look at some of the features of both of these types.
We can create grid objects with the following lines of code.
End of explanation
"""
# what are the y-coordinates of the pair of nodes in the middle of our 3-by-4 grid?
# the IDs of these nodes are 5 and 6, so:
smg.y_of_node[[5, 6]]
"""
Explanation: All these various ModelGrid objects contains various data items (known as attributes). These include, for example:
* number of nodes and links in the grid
* x and y coordinates of each each node
* starting ("tail") and ending ("head") node IDs of each link
* IDs of links that are active
* IDs of core nodes
* etc.
From here on we'll focus on the square raster grid as its geometry is a bit easier to think through, but all of the following applies to all grid types.
Understanding the topology of Landlab grids
All grids consist of two interlocked sets of points joined by lines outlining areas. If we define data on the points we call nodes, then they are joined by links, which outline patches. Each node within the interior of the grid lies at the geometric center of the area of a cell. The cell's edges are faces, and the endpoints of the faces---which are also vertices of the cells---are corners.
Note that this kind of scheme requires one set of features to be "dominant" over the other; i.e., either not every node has a cell, or not every link is crossed by a face. Both cannot be true, because one or other set of features has to define the edge of the grid. Landlab assumes that the node set is primary, so there are always more nodes than corners; more links than faces; and more patches than cells.
Each of these sets of "elements" has its own set of IDs. These IDs are what allow us to index the various Landlab fields, which store spatial data. Each feature is ordered by x, then y. The origin is always at the bottom left node, unless you choose to move it (grid.move_origin)... except in the specific case of a radial grid, where logic and symmetry dictates it must be the central node.
Whenever Landlab needs to order something rotationally (angles; elements around a different element type), it does so following the standard mathematical convention of counterclockwise from east. We'll see this in practical terms a bit later in this tutorial.
The final thing to know is that links and faces have directions. This lets us record fluxes on the grid by associating them with, and mapping them onto, the links (or, much less commonly, the faces). All lines point into the upper right half-space. So, on our raster, this means the horizontal links point east and the vertical links point north.
So, for reference, our raster grid looks like this:
NODES: LINKS: PATCHES:
8 ----- 9 ---- 10 ---- 11 * -14-->* -15-->* -16-->* * ----- * ----- * ----- *
| | | | ^ ^ ^ ^ | | | |
| | | | 10 11 12 13 | 3 | 4 | 5 |
| | | | | | | | | | | |
4 ----- 5 ----- 6 ----- 7 * --7-->* --8-->* --9-->* * ----- * ----- * ----- *
| | | | ^ ^ ^ ^ | | | |
| | | | 3 4 5 6 | 0 | 1 | 2 |
| | | | | | | | | | | |
0 ----- 1 ----- 2 ----- 3 * --0-->* --1-->* --2-->* * ----- * ----- * ----- *
CELLS: FACES: CORNERS:
* ----- * ----- * ----- * * ----- * ----- * ----- * * ----- * ----- * ----- *
| | | | | | | | | | | |
| . ----- . ----- . | | . --5-->. --6-->. | | 3 ----- 4 ----- 5 |
| | | | | | ^ ^ ^ | | | | | |
* --| 0 | 1 |-- * * --2 3 4-- * * --| | |-- *
| | | | | | | | | | | | | | |
| . ----- . ----- . | | . --0-->. --1-->. | | 0 ----- 1 ----- 2 |
| | | | | | | | | | | |
* ----- * ----- * ----- * * ----- * ----- * ----- * * ----- * ----- * ----- *
Recording and indexing the values at elements
Landlab lets you record values at any element you want. In practice, the most useful places to store data is on the primary elements of nodes, links, and patches, with the nodes being most useful for scalar values (e.g, elevations) and the links for fluxes with direction to them (e.g., velocity or discharge).
In order to maintain compatibility across data types, all landlab data are stored in number-of-elements-long arrays. This includes both user-defined data and the properties of the nodes within the grid. This means that these arrays can be immediately indexed by their element ID. For example:
End of explanation
"""
# what are the x-coordinates of nodes in the middle row?
smg.x_of_node.reshape(smg.shape)[1, :]
"""
Explanation: If you're working with a raster, you can always reshape the value arrays back into two dimensions so you can take Numpy-style slices through it:
End of explanation
"""
smg.add_zeros('elevation', at='node', clobber=True)
# ^Creates a new field of zero data associated with nodes
smg.at_node['elevation'] # Note the use of dictionary syntax
"""
Explanation: This same data storage pattern is what underlies the Landlab data fields, which are simply one dimensional, number-of-elements-long arrays that store user defined spatial data across the grid, attached to the grid itself.
End of explanation
"""
smg.add_ones('slope', at='link', clobber=True)
# ^Creates a new array of data associated with links
smg.at_link['slope']
"""
Explanation: Or, equivalently, at links:
End of explanation
"""
smg.number_of_nodes
smg.number_of_links
"""
Explanation: The Landlab components use fields to share spatial information among themselves. See the fields and components tutorials for more information.
Getting this information from the grid object
All of this topological information is recorded within our grid objects, and can be used to work with data arrays that are defined over the grid. The grid records the numbers of each element, their positions, and their relationships with one another. Let's take a look at some of this information for the raster:
End of explanation
"""
for i in range(smg.number_of_nodes):
    print(i, smg.x_of_node[i], smg.y_of_node[i])
"""
Explanation: The grid contains its geometric information too. Let's look at the (x,y) coordinates of the nodes:
End of explanation
"""
for i in range(smg.number_of_links):
    print('Link', i, ': node', smg.node_at_link_tail[i], '===> node',
          smg.node_at_link_head[i])
"""
Explanation: Link connectivity and direction is described by specifying the starting ("tail") and ending ("head") node IDs for each link (to remember this, think of an arrow: TAIL ===> HEAD).
End of explanation
"""
smg.core_nodes
smg.active_links
# let's demonstrate the auto-updating of boundary conditions:
smg.status_at_node[smg.nodes_at_bottom_edge] = smg.BC_NODE_IS_CLOSED
smg.active_links # the links connected to the bottom edge nodes are now inactive
"""
Explanation: Boundary conditions are likewise defined on these elements (see also the full boundary conditions tutorial). Landlab is clever enough to ensure that the boundary conditions recorded on, say, the links get updated when you redefine the conditions on, say, the nodes.
Nodes can be core, fixed value, fixed gradient, or closed (flux into or out of node is forbidden). Links can be active (can carry flux), fixed (always carries the same flux; joined to a fixed gradient node) or inactive (forbidden from carrying flux).
Note that this boundary coding does not mean that a particular boundary condition is automatically enforced. It's up to the user to take advantage of these codes. For example, if you are writing a model that calculates flow velocity on links but wish the velocity to be zero at inactive links, you the programmer must ensure this, for instance by including a line like my_velocity[grid.inactive_links] = 0.0, or alternatively my_velocity[grid.active_links] = ...<something>....
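That enforcement step might look like the following pure-NumPy sketch, with a made-up link array in place of a real grid's data:

```python
import numpy as np

# Hypothetical flux on 7 links, of which links 2 and 5 are inactive.
my_velocity = np.array([1.0, -0.5, 2.0, 0.3, -1.2, 0.8, 0.1])
inactive_links = np.array([2, 5])  # stands in for grid.inactive_links

# Enforce the "no flux on inactive links" convention by hand.
my_velocity[inactive_links] = 0.0
print(my_velocity)  # links 2 and 5 are now zero
```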
Information on boundary coding is available from the grid:
End of explanation
"""
smg.links_at_node[5]
smg.links_at_node.shape
"""
Explanation: Element connectivity
Importantly, we can also find out which elements are connected to which other elements. This allows us to do computationally vital operations involving mapping values defined at one element onto another, e.g., the net flux at a node; the mean slope at a patch; the node value at a cell.
In cases where these relationships are one-to-many (e.g., links_at_node, nodes_at_patch), the shape of the resulting arrays is always (number_of_elements, max-number-of-connected-elements-across-grid). For example, on a raster, links_at_node is (nnodes, 4), because the cells are always square. On an irregular Voronoi-cell grid, links_at_node will be (nnodes, X) where X is the number of sides of the side-iest cell, and nodes_at_patch will be (npatches, 3) because all the patches are Delaunay triangles. And so on.
Let's take a look. Remember, Landlab orders things counterclockwise from east, so for a raster the order will be EAST, NORTH, WEST, SOUTH.
End of explanation
"""
smg.links_at_node[8]
smg.patches_at_node
smg.nodes_at_patch
"""
Explanation: Undefined directions get recorded as -1:
End of explanation
"""
smg.node_at_cell # shape is (n_cells, )
smg.cell_at_node # shape is (n_nodes, ) with -1s as needed
"""
Explanation: Where element-to-element mapping is one-to-one, you get simple, one dimensional arrays:
End of explanation
"""
smg.link_dirs_at_node # all links; positive points INTO the node; zero where no link
# prove there are zeros where links are missing:
np.all((smg.link_dirs_at_node == 0) == (smg.links_at_node == -1))
smg.active_link_dirs_at_node # in this one, inactive links get zero too
"""
Explanation: A bit of thought reveals that things get more complicated for links and faces, because they have direction. You'll need a convenient way to record whether a given flux (which is positive if it goes with the link's inherent direction, and negative if against) actually is travelling into or out of a given node. The grid provides link_dirs_at_node and active_link_dirs_at_node to help with this:
End of explanation
"""
fluxes_at_node = smg.at_link['slope'][smg.links_at_node]
# ^...remember we defined the slope field as ones, above
fluxes_into_node = fluxes_at_node * smg.active_link_dirs_at_node
flux_div_at_node = fluxes_into_node.sum(axis=1)
print(flux_div_at_node[smg.core_nodes])
"""
Explanation: Multiply the fluxes indexed by links_at_node and sum by axis=1 to have a very convenient way to calculate flux divergences at nodes:
End of explanation
"""
|
UCBerkeleySETI/breakthrough | SDR/stations/.ipynb_checkpoints/sdr_stations-checkpoint.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import csv
from collections import OrderedDict
from FMstations import *
raw_data = np.genfromtxt("stationsdata.csv", delimiter = ",", dtype = None)
MINFREQ = 87900000
MAXFREQ = 107900000
FREQ_BIN = raw_data[0][4]
INTERVAL = 10
TOTAL_TIME = 900
SCAN = TOTAL_TIME / INTERVAL
DATAPOINTS = (MAXFREQ - MINFREQ) / FREQ_BIN
"""
Explanation: Radio Frequency Interference: Identifying Nearby Sources
In the world of radio astronomy, observations of signals of extraterrestrial origin are often confounded by human-made sources. Many electronic sources, such as radio stations, cell phone towers, WiFi, even GPS satellites and your personal microwave oven, can interfere with radio astronomy. This tutorial combines usage of hardware with some code to identify nearby interference sources. This analysis is part of the larger practice of filtering out human-made and Earth-based radio signals to better identify potential signals of interest from space.
For my experiment, I used a Realtek RTL2838 dongle as my hardware receiver. While I specifically looked for nearby radio stations within the limited scope of the hardware, the code can be applied with little modification, depending on the hardware used, to processing and identifying a wide range of other sources.
Ultimately, this script produces two outputs:
1. A plot showing signal sources as peaks, which are marked according to a given dictionary of the sources' names. The plot is a spectrum of signal power against frequency
2. A dictionary matching the peaking frequencies with their sources
Familiarizing yourself with the properties of human-made sources makes it easier to manually dismiss uninteresting signals.
From the Command Line: rtl_power
Before diving into python code, we first need to generate a file to store in the data from the dongle. The command to do that is rtl_power:
rtl_power min:max:bin filename.ext
min: initial frequency
max: terminal frequency
bin: frequency interval
The file extension I used is csv. There are additional parameters for further options; example:
rtl_power 87M:108M:1k -g 20 -i 10 -e 15m logfile.csv
Here, I set the gain to 20 and the runtime to 15 minutes, taking data in 10-second intervals. All the data is stored in a csv file logfile.csv. With our data in hand, open a python file with your favorite editor and begin the processing.
Modules to Import and Global Variables
Several modules are important for the basic functioning of this script:
1. Numpy: essential for computing and number crunching
2. Matplotlib.pyplot: useful for plotting
3. csv: used to generate the raw data, which is stored as a csv file, into a workable text file. The appropriate modules, depending on the format of the raw data file and the format of the desired data conversion, should be used here
4. OrderedDict from collections: outputs any dictionary with the entries in the order that you entered them. This will be useful later for ordering peaks, but is useful in general when dealing with dictionaries
End of explanation
"""
def total_freq(samp_rate, line):
"""total frequency band of the data
samp_rate: sample rate used to collect the data
freq_bin: difference in frequency between each data point
line: number of iterations to create the band;
each sampling may sample over only a part of the desired bandwidth
"""
if samp_rate < (MAXFREQ - MINFREQ):
allfreq = []
i = 0
while i < line:
freqband = np.arange(MINFREQ + (samp_rate * i), MINFREQ + (samp_rate * (i+1)), FREQ_BIN)
allfreq = np.append(allfreq, freqband)
i += 1
return allfreq
else:
return np.arange(MINFREQ, MAXFREQ, FREQ_BIN)
TotalBand = total_freq(raw_data[0][3] - raw_data[0][2], 8)
"""
Explanation: FMstations is a separate .py file storing a dictionary to be used in a later function. While dictionaries can be included in the executing script without functional issues, their potential large size warrants storage in a separate file.
stationsdata.csv is the file containing the raw data, an array of lines. Each line is an array itself, consisting of a number of power values corresponding to the sampling rate. Specifically for my experiment, the sampling rate is 2.5 MHz and each eight lines constitute a scan of the bandwidth of interest, the 20 MHz band between 87.9 and 107.9 MHz. U.S. radio stations are allocated broadcast frequencies in that range, and these values should be changed according to the experiment at hand. Data is collected every 10 seconds for every 610.35 Hz (raw_data[0][4]) frequency difference in that range over a total of 900 seconds.
The raw data file consists of 720 lines because the scan is performed 90 times, the total number of seconds divided by the scan interval. Each line unfortunately comes with six initial outputs, such as the date when the data was taken, that are of no interest for data processing and plotting. Those outputs have to be truncated.
Processing Raw Data
First, we have to identify the frequency range we are interested in and have it be consistent with the type and number of data in the file for plotting. There are DATAPOINTS number of data points (32776 for my experiment) for each scan, with each line having SAMP_RATE / FREQ_BIN number of data points (4107).
End of explanation
"""
def power_total(data, line, trunc):
"""returns an array of arrays; all power values from the scans
data: input raw data; usually an array of arrays
line: number of lines of data corresponding to a full scan over the bandwidth
trunc: truncate number of, if any, of unneeded data, such as time, at the start of each line
"""
P_tot = []
j = 0
while j < SCAN:
P_band = []
i = 0
while i < line:
P_band = np.append(P_band, data[j*line+i].tolist()[trunc:])
i += 1
P_tot.append(P_band)
j += 1
return P_tot
total_data = power_total(raw_data, 8, 6)
"""
Explanation: There are many ways to go about processing the raw data values to correspond with our frequency range. Since my data came in the form of an array of arrays, that's the form I initially kept it in, though with the first several unneeded elements truncated from each line. The following code does just that.
End of explanation
"""
def power_avg(data, xax=0, yax=1):
"""returns the time-average power values of each index-sharing set of numbers
data: input data; usually an array of arrays
"""
indices_arr = np.swapaxes(data, xax, yax)
y = []
for index in indices_arr:
y.append(np.average(index))
return y
avg_data = power_avg(total_data)
"""
Explanation: Unlike the raw data, however, each inner array now consists of a full scan of the relevant band (as opposed to each line in the raw data corresponding to only an eighth of the band). For my data, there are now only 90 inner arrays, each a scan at a specific time, down from 720 lines in the raw data.
Having this result is useful for, as an example, plotting multiple graphs to see the time-evolution of the signal. For our immediate purposes, we look at the time-averaged values to produce a single plot.
End of explanation
"""
plt.figure(figsize=(15,5))
plt.title('Time-Averaged Radio Spectrum over 90 Scans')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.plot(TotalBand, avg_data)
plt.show()
"""
Explanation: By swapping the axes of the array, I can put all the index-sharing values in the same inner array. For example, the first value of each of the scans are now in the first inner array and the second values of all the scans are in the second inner array. Averaging the values in each of those arrays would now be a simple task.
We now have a single array of values ripe for plotting.
Spectrum Improvement and Plotting
Let's see a visual of our plot of the average power values against our frequency band.
End of explanation
"""
def Flatten(spec, flatter, n):
"""flattens the noise level of our original plot
    spec: input spectrum to be flattened
    flatter: an array derived from the input spectrum, used to divide the spectrum by
n: half the number of a set of points that the median is taken over
"""
MED = []
i = 0
for x in flatter:
if i < n:
MED.append(np.median(flatter[0 : i + n + 1]))
i += 1
elif i >= (len(spec) - (n + 1)):
MED.append(np.median(flatter[i - n :]))
i += 1
else:
MED.append(np.median(flatter[i - n : i + n + 1]))
i += 1
Spectrum = np.array(spec) / np.array(MED) * np.average(spec)
return Spectrum
"""
Explanation: We have a functional plot, but it's riddled with issues common to signal processing. We need to flatten the noise floor, remove the noisy peaks, and smooth out the graph so that we can systematically identify the peaks, which correspond to the frequencies at which signals are received. Power is in units of dBm, decibels referenced to one milliwatt. Online resources have more in-depth explanations of the bel unit and of power representation in signal processing.
To flatten the noise floor, we have the liberty of choosing the agent for our task: we divide the plot by a fitted curve, or by any set of values that can accomplish the task.
End of explanation
"""
FlatSpec = Flatten(avg_data, avg_data[24582:28679] * 8, 10)
plt.figure(figsize=(15,5))
plt.title('Time-Averaged Radio Flattened Spectrum over 90 Scans')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.plot(TotalBand, FlatSpec)
plt.show()
"""
Explanation: Fortunately for my data, the seventh arc seen above is devoid of peaks, so it's convenient to divide the entire plot by that arc. The Flatten function takes the median over a certain number of points to remove, for instance, the few noisy peaks seen on the arc. The fitted arc is then applied to flatten the noise floor.
End of explanation
"""
def Reduce(spec, n):
"""takes the median of a set of points, removing noise-produced peaks and much noise
n: half the number of a set of points that the median is taken over
"""
R = []
i = 0
for x in spec:
if i < n:
R.append(np.median(spec[0 : i + n + 1]))
i += 1
        elif i >= (len(spec) - (n + 1)):
R.append(np.median(spec[i - n :]))
i += 1
else:
R.append(np.median(spec[i - n : i + n + 1]))
i += 1
return R
ReduSpec = Reduce(FlatSpec, 10)
plt.figure(figsize=(15,5))
plt.title('Noise-Reduced, Time-Averaged Radio Spectrum')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.plot(TotalBand, ReduSpec)
plt.show()
"""
Explanation: The plot, though it looks nicer, is still pretty rough due to all the noise. This can be solved simply by applying a median filter across the data; noisy peaks confined to a single frequency bin are thus filtered out.
End of explanation
"""
plt.figure(figsize=(15,5))
plt.title('Second Highest Peak Zoomed-In')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.ylim(-43,-42)
plt.plot(TotalBand[14000:14200], ReduSpec[14000:14200])
plt.show()
"""
Explanation: This is seemingly our desired plot, but upon closer inspection...
End of explanation
"""
def smooth(spec, win_len, window, beta = 20):
"""smooths a signal with kernel window of win_len number of inputs
    spec: input data spectrum to be smoothed
win_len: the size of the kernel window used to smooth the input
window: type of kernel; e.g. 'blackman'
"""
if window == 'kaiser':
w = eval('np.'+window+'(win_len, beta)')
elif window == 'flat':
        w = np.ones(win_len, 'd')
else:
w = eval('np.'+window+'(win_len)')
s = np.r_[spec[win_len-1 : 0 : -1], spec, spec[-1 : -win_len : -1]]
y = np.convolve(w/w.sum(), s, mode='valid')
return y[(int(win_len / 2) - 1) : (int(-win_len / 2))]
"""
Explanation: Despite the plot looking great from afar, it's still rather rugged even at the peaks upon closer inspection. The roughness can make it difficult to mathematically identify peaks. What we need is smoothing. The process of smoothing one function with another (convolution) is rooted in mathematical rigor, so we won't go into depth here. Essentially, a smoothing function, traditionally called the kernel, is 'applied' and moved along our spectrum. There are various ways to implement convolution, and thankfully numpy has built-in functions to make the job simpler.
End of explanation
"""
SmooSpec = smooth(ReduSpec, 150, 'kaiser')
plt.figure(figsize=(15,5))
plt.title('Smoothed, Time-Averaged Radio Spectrum')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.plot(TotalBand, SmooSpec)
plt.show()
plt.figure(figsize=(15,5))
plt.title('Second Highest Peak Zoomed-In')
plt.xlabel('MHz')
plt.ylabel('Power (dBm)')
plt.ylim(-43.5,-42.5)
plt.plot(TotalBand[14000:14200], SmooSpec[14000:14200])
plt.show()
"""
Explanation: The code evaluates based on which type of smoothing we're using. Numpy has a number of built-in window functions. For my plots, I used the kaiser kernel, which is derived from Bessel functions. If flat smoothing is chosen, our plot is smoothed with a simple moving average. The result is a much smoother graph, with the side effect of reduced power at every point.
End of explanation
"""
def get_peaks(spec, threshold, xreducer, rounder=2):
"""identifies the peaks of a plot. Returns an array of 2 lists:
1. the indices of the frequencies corresponding to the peaks;
2. said frequencies, divided by xreducer for simpler units and rounded to rounder decimals
spec: input data spectrum
threshold: only data above which are taken into account to ignore the noise level
"""
Peaks = []
spec = spec.tolist()
    for i in np.arange(1, len(spec) - 1):
if spec[i] > threshold and spec[i] > spec[i-1] and spec[i] > spec[i+1]:
Peaks.append(spec[i])
else:
continue
Ordered_Indices = []
while True:
if np.array(Peaks).tolist() == []:
Ordered_Freq = [(x * FREQ_BIN + MINFREQ) for x in Ordered_Indices]
Reduced_Freq = np.around((np.array(Ordered_Freq) / xreducer), rounder)
return [Ordered_Indices, Reduced_Freq.tolist()]
elif len(Peaks) == 1:
Ordered_Indices.append(spec.index(Peaks[0]))
Peaks = np.delete(Peaks, 0)
else:
Ordered_Indices.append(spec.index(np.amax(Peaks)))
Peaks = np.delete(Peaks, np.array(Peaks).tolist().index(np.amax(Peaks)))
"""
Explanation: We finally have our smoothed plot and can systematically identify characteristics with numpy without dealing with much of the interference due to rough data patterns.
Identifying and Marking Peaks and their Frequencies
We can isolate the peak frequencies with simple methods.
End of explanation
"""
def mark_peaks(src_dict, spec, threshold, line, title, xreducer, error=.01, bound1='left', bound2='bottom', rot=90):
"""returns both a plot and a dictionary
plot: shows the stations next to the marked peaks
dictionary: matches the relevant peak frequencies with the corresponding station(s)
src_dict: input dictionary of frequencies and stations from which the results are selected from
spec: input spectrum data
    threshold: only data above which are taken into account, to ignore the noise level
title: title for the plot
xreducer: the values of the x-axis divided by which to simpler units
error: within which the obtained frequencies are acceptable as equivalent to that of a station
remaining parameters: used the adjust the markings and labels of the plots
"""
stations = []
peakfreqs = []
stations_i = []
peaker = get_peaks(spec, threshold, xreducer)
p0 = peaker[0]
p1 = peaker[1]
for i in np.arange(len(p1)):
if p1[i] in src_dict.keys():
stations.append(src_dict[p1[i]])
peakfreqs.append(p1[i])
stations_i.append(p0[i])
else:
for x in np.arange(p1[i]-error, p1[i]+error, error):
if x in src_dict.keys():
stations.append(src_dict[x])
peakfreqs.append(p1[i])
stations_i.append(p0[i])
else:
continue
peaks = [spec[y] for y in stations_i]
plt.figure(figsize=(15,5))
plt.title(title)
plt.xlabel('Frequency (MHz)')
plt.ylabel('Reduced Power (dBm)')
yoffset = (np.amax(spec) - np.amin(spec)) / 4
plt.ylim(np.amin(spec) - yoffset, np.amax(spec) + yoffset)
    plt.plot(np.array(total_freq(raw_data[0][3] - raw_data[0][2], line)) / 1000000, spec)  # total_freq expects the sample rate too
plt.scatter(peakfreqs, peaks, marker = 'o', color = 'r', s = 40)
text_bounds = {'ha':bound1, 'va':bound2}
for i in np.arange(len(peakfreqs)):
plt.text(peakfreqs[i], peaks[i] + (yoffset / 10), stations[i], text_bounds, rotation=rot)
plt.savefig('stations_peaks.pdf')
plt.show()
stations_dict = OrderedDict()
for i in np.arange(len(stations)):
stations_dict[peakfreqs[i]] = stations[i]
return stations_dict
"""
Explanation: The function iterates over all the values in the spectrum and stores the frequency (and its associated index) of each peak, i.e. a value above the threshold that is also a local maximum, in arrays. The indices will be useful for matching the frequencies with their peak locations. The majority of the function is ordering the frequencies in descending order of peak intensity, by iterating over the array of peaks and determining each one's corresponding peak value. The ordered list is then rounded (to two decimals by default, set by rounder) and is in units of MHz, or in whatever unit is determined by xreducer.
With a list of peaks, we can use them to physically mark their sources on our plot, provided a dictionary of the sources.
End of explanation
"""
print(mark_peaks(BAFMRS, SmooSpec, -54, 8, 'Bay Area FM Radio Stations', 1000000))
"""
Explanation: It's a rather long function, but a lot of it is making adjustments to the final plot. The function requires an input Python dictionary, so that dictionary either has to be from an established source online or be manually made. For my code, I made a dictionary of all Bay Area FM Radio Stations (BAFMRS) with their corresponding frequencies as the dictionary keys. For example, an entry in the dictionary is '88.5: KQED' with 88.5 MHz being the transmitting frequency and KQED being the name of the station.
A radio station signal comes in the form of a primary analog signal (almost all the peaks you see on the plot) and two digital signals on the side; only the stronger signals, such as the one corresponding to the leftmost peak, visibly show the digital parts. This function naturally filters out almost all digital peaks because their frequencies do not correspond to any of that in a dictionary of radio station signals. Unfortunately, it's not easy to feasibly filter multiple stations broadcasting the same frequency with this level of coding. Naturally, however, there won't be many of these cases at any given location.
The final plot, all processed and labeled:
End of explanation
"""
def waterfall(data, line, flat1, flat2, n, win_len, window, title, axesrange, gridshape='auto'):
"""returns a waterfall grid consisting off all the spectra from the input
data: an array of arrays
line: number of arrays that make up one full scan of the band
flat1, flat2: boundaries of the data points from each array used for flattening the spectrum
n: half the number used to take medians of the spectra for noise-reducing
win_len: size of kernel window for smoothing
window: type of kernel
title: title of grid
axesrange: boundaries of the values of the grid
"""
wf = []
i = 1
while i <= len(data):
flatter = data[-i][flat1:flat2]
flatspec = Flatten(data[-i], np.array(flatter.tolist() * line), n)
reduspec = Reduce(flatspec, n)
smoospec = smooth(reduspec, win_len, window)
wf.append(smoospec)
i += 1
fig = plt.figure(figsize=(15, 5))
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel('Frequency (MHz)')
ax.set_ylabel('Time (s)')
plt.imshow(wf, extent=axesrange)
ax.set_aspect(gridshape)
plt.colorbar(orientation='vertical')
plt.show()
waterfall(total_data, 8, 24582, 28679, 10, 150, 'kaiser', 'Radio Spectra Waterfall', [87.9, 107.9, 0, 900])
"""
Explanation: We got our final result. This particular approach is useful for identifying nearby signal sources for any purpose. Another approach uses not the average power values, but all the power values from all the scans. Visualizing the time-evolution can be useful for other purposes, such as noting the fluctuations of the peaks' strengths or finding sudden peaks arising at certain moments. The "waterfall" plot displays, in this case, intensity (power) with color codes and has time instead on the y-axis.
End of explanation
"""
|
mckinziebrandon/DeepChatModels | notebooks/ubuntu_reformat.ipynb | mit | import numpy as np
import os.path
import pdb
import pandas as pd
from pprint import pprint
#DATA_DIR = '/home/brandon/terabyte/Datasets/ubuntu_dialogue_corpus/'
DATA_DIR = '/home/brandon/ubuntu_dialogue_corpus/src/' # sample/'
TRAIN_PATH = DATA_DIR + 'train.csv'
VALID_PATH = DATA_DIR + 'valid.csv'
TEST_PATH = DATA_DIR + 'test.csv'
def get_training():
"""Returns dataframe data from train.csv """
# First, we need to load the data directly into a dataframe from the train.csv file.
df_train = pd.read_csv(TRAIN_PATH)
# Remove all examples with label = 0. (why would i want to train on false examples?)
df_train = df_train.loc[df_train['Label'] == 1.0]
# Don't care about the pandas indices in the df, so remove them.
df_train = df_train.reset_index(drop=True)
df_train = df_train[df_train.columns[:2]]
return df_train
def get_validation():
"""Returns data from valid.csv """
# First, we need to load the data directly into a dataframe from the train.csv file.
df_valid = pd.read_csv(VALID_PATH)
first_two_cols = df_valid.columns[:2]
df_valid = df_valid[first_two_cols]
df_valid.columns = ['Context', 'Utterance']
return df_valid
df_train = get_training()
df_valid = get_validation()
"""
Explanation: Reformatting Ubuntu Dialogue Corpus for Chatbot Model
Dataset Description
How to run this notebook:
1. Download the data from the github repository
2. CD to the downloaded directory and run ./generate.sh -t -l. I downloaded 1 mill. train samples with p = 1.0.
3. Change the value of DATA_DIR below to your project path root.
I chose to copy the first 10k lines of data into a "sample" directory so I could quickly play around with the data, and then use the full dataset when done.
Random facts from paper:
* 2-way (dyadic) conversation, as opposed to multi-participant.
Load the Data
End of explanation
"""
# Now get all of the data in a single string and make a 'vocabulary' (unique words).
import nltk, re, pprint
from nltk import word_tokenize
import pdb
def print_single_turn(turn: str):
as_list_of_utters = turn.split('__eou__')[:-1]
for idx_utter, utter in enumerate(as_list_of_utters):
print("\t>>>", utter)
def print_conversation(df, index=0):
"""Display the ith conversation in nice format."""
# Get the row identified by 'index'.
context_entry = df['Context'].values[index]
target = df['Utterance'].values[index]
# Split returns a blank last entry, so don't store.
turns = context_entry.split('__eot__')[:-1]
print('--------------------- CONTEXT ------------------- ')
for idx_turn, turn in enumerate(turns):
print("\nUser {}: ".format(idx_turn % 2))
print_single_turn(turn)
print('\n--------------------- RESPONSE ------------------- ')
print("\nUser {}: ".format(len(turns) % 2))
print_single_turn(target)
def get_user_arrays(df):
"""Returns two arrays of every other turn.
Specifically:
    the returned lists have one entry per turn taken by that user, pooled across all conversations;
    each entry is a single turn as a string, with the __eou__ markers stripped out.
"""
userOne = []
userTwo = []
contexts = df['Context'].values
targets = df['Utterance'].values
assert(len(contexts) == len(targets))
for i in range(len(contexts)):
# combined SINGLE CONVERSATION ENTRY of multiple turns each with multiple utterances.
list_of_turns = contexts[i].lower().split('__eot__')[:-1] + [targets[i].lower()]
# make sure even number of entries
if len(list_of_turns) % 2 != 0:
list_of_turns = list_of_turns[:-1]
# strip out the __eou__ occurences (leading space bc otherwise would result in two spaces)
new_list_of_turns = []
for turn in list_of_turns:
utter_list = turn.lower().split(" __eou__")
#if len(utter_list) > 3:
# utter_list = utter_list[:3]
new_list_of_turns.append("".join(utter_list))
#list_of_turns = [re.sub(' __eou__', '', t) for t in list_of_turns]
userOneThisConvo = new_list_of_turns[0::2]
userTwoThisConvo = new_list_of_turns[1::2]
userOne += userOneThisConvo
userTwo += userTwoThisConvo
assert(len(userOne) == len(userTwo))
return userOne, userTwo
def save_to_file(fname, arr):
with open(DATA_DIR+fname,"w") as f:
for line in arr:
f.write(line + "\n")
"""
Explanation: Functions for Visualization and Reformatting
End of explanation
"""
df_train.describe()
pd.options.display.max_colwidth = 500
df_train.head(2)
print_conversation(df_train, 3)
"""
Explanation: Training Data
At a Glance
End of explanation
"""
#df_merged = pd.DataFrame(df_train['Context'].map(str) + df_train['Utterance'])
userOne, userTwo = get_user_arrays(df_train)
df_turns = pd.DataFrame({'UserOne': userOne, 'UserTwo': userTwo})
df_turns.head(200)
"""
Explanation: Turn-Based DataFrame
End of explanation
"""
userOne[0]
def get_sentences(userOne, userTwo):
encoder = []
decoder = []
assert(len(userOne) == len(userTwo))
for i in range(len(userOne)):
one = nltk.sent_tokenize(userOne[i])
one = [s for s in one if s != '.']
two = nltk.sent_tokenize(userTwo[i])
two = [s for s in two if s != '.']
combine = one + two
assert(len(combine) == len(one) + len(two))
if len(combine) % 2 != 0:
combine = combine[:-1]
enc = combine[0::2]
dec = combine[1::2]
assert(len(enc) == len(dec))
encoder.append(enc)
decoder.append(dec)
return encoder, decoder
encoder, decoder = get_sentences(userOne, userTwo)
print('done')
encoder = [nltk.word_tokenize(s[0]) for s in encoder]
decoder = [nltk.word_tokenize(s[0]) for s in decoder]
max_enc_len = max([len(s) for s in encoder])
max_dec_len = max([len(s) for s in decoder])
print(max_enc_len)
print(max_dec_len)
"""
Explanation: Sentence-Based DataFrame
End of explanation
"""
encoder_lengths = [len(s) for s in encoder]
decoder_lengths = [len(s) for s in decoder]
df_lengths = pd.DataFrame({'EncoderSentLength': encoder_lengths, 'DecoderSentLengths': decoder_lengths})
df_lengths.describe()
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = 9, 5
fig, axes = plt.subplots(nrows=1, ncols=2)
plt.subplot(1, 2, 1)
plt.hist(encoder_lengths)
plt.subplot(1, 2, 2)
plt.hist(decoder_lengths, color='b')
plt.tight_layout()
plt.show()
save_to_file("train_from.txt", userOne)
save_to_file("train_to.txt", userTwo)
"""
Explanation: Analyzing Sentence Lengths
End of explanation
"""
print("df_valid has", len(df_valid), "rows.")
df_valid.head()
userOne, userTwo = get_user_arrays(df_valid)
save_to_file("valid_from.txt", userOne)
save_to_file("valid_to.txt", userTwo)
print('done')
"""
Explanation: Validation Data
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
userOne, userTwo = get_user_arrays(df_train)
# Regular expressions used to tokenize.
_WORD_SPLIT = re.compile(b"([.,!?\"':;)(])")
_DIGIT_RE = re.compile(br"\d")
lengths = np.array([len(t.strip().split()) for t in userOne])
max_ind = lengths.argmax()
print(max(lengths), "at", max_ind)
print("Sentence:\n", userOne[max_ind])
import matplotlib.pyplot as plt
plt.hist(sorted(lengths)[:-20])
n_under_100 = sum([1 if l < 100 else 0 for l in lengths])
print(n_under_100, "out of", len(lengths), "({}\%)".format(float(n_under_100)/len(lengths)))
df_lengths = pd.DataFrame(lengths)
df_lengths.describe()
"""
Explanation: Visualization
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# Number of gradient descent steps, each over a batch_size amount of data.
vocab_size = 40000
# Uniform chance of guessing any word.
loss_random_guess = np.log(float(vocab_size))
print("Loss for uniformly random guessing is", loss_random_guess)
sent_length = [5, 10, 25]
# Outputs correct target x percent of the time.
pred_accuracy = np.arange(1, 100)   # start at 1 to avoid dividing by zero
plt.plot(pred_accuracy, [1./p for p in pred_accuracy])
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = 10, 8
def _sample(logits, t):
res = logits / t
res = np.exp(res) / np.sum(np.exp(res))
return res
N = 100
x = np.arange(N)
before = np.array([1.0+i**2 for i in range(N)])
before /= before.sum()
plt.plot(x, before, 'b--', label='before')
after = _sample(before, 0.1)
plt.plot(x, after, 'g--', label='temp=0.01')
after = _sample(before, 0.2)
print(after.argmax())
plt.plot(x, after, 'r--', label='temp=0.001')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
np.info(plt.plot)
"""
Explanation: Relationship between Accuracy, Loss, and Others
$$
\text{Loss} = - \frac{1}{N} \sum_{i = 1}^{N} \ln\left(p_{target_i}\right)
$$
End of explanation
"""
|
KshitijT/fundamentals_of_interferometry | 1_Radio_Science/1_6_synchrotron_emission.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
"""
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.5 Black body radiation
Next: 1.7 Line emission
Section status: <span style="background-color:orange"> </span>
Import standard modules:
End of explanation
"""
from IPython.display import Image
HTML('../style/code_toggle.html')
"""
Explanation: Import section specific modules:
End of explanation
"""
Image(filename='figures/drawing.png', width=300)
"""
Explanation: 1.6.1 Synchrotron Emission:
Synchrotron emission is one of the most commonly encountered forms of radiation from astronomical radio sources. This type of radiation originates from relativistic particles being accelerated in a magnetic field.
The mechanism by which synchrotron emission occurs depends fundamentally on special relativistic effects. We won't delve into the details here. Instead we will try to explain (in a rather hand wavy way) some of the underlying physics. As we have seen in $\S$ 1.2.1 ➞,
<span style="background-color:cyan"> LB:RF:this is the original link but I don't think it points to the right place. Add a reference to where this is discussed and link to that. See also comment in previous section about where the Larmor formula is first introduced</span>
an accelerating charge emits radiation. The acceleration is a result of the charge moving through an ambient magnetic field. The non-relativistic Larmor formula for the radiated power is:
$$P= \frac{2}{3}\frac{q^{2}a^{2}}{c^{3}}$$
If the acceleration is a result of a magnetic field $B$, we get:
$$P=\frac{2}{3}\frac{q^{2}}{c^{3}}\frac{v_{\perp}^{2}B^{2}q^{2}}{m^{2}} $$
where $v_{\perp}$ is the component of velocity of the particle perpendicular to the magnetic field, $m$ is the mass of the charged particle, $q$ is its charge and $a$ is its acceleration. This is essentially cyclotron radiation. Relativistic effects (i.e. as $v_\perp \rightarrow c$) modify this to:
$$P = \gamma^{2} \frac{2}{3}\frac{q^{2}}{c^{3}}\frac{v_{\perp}^{2}B^{2}q^{2}}{m^{2}c^{2}} = \gamma^{2} \frac{2}{3}\frac{q^{4}}{c^{3}}\frac{v_{\perp}^{2}B^{2}}{m^{2}c^{2}} $$
where $$\gamma = \frac{1}{\sqrt{1-v^{2}/c^{2}}} = \frac{E}{mc^{2}} $$
<span style="background-color:yellow"> LB:IC: This is a very unusual form for the relativistic version of Larmor's formula. I suggest clarifying the derivation. </span>
is a measure of the energy of the particle. Non-relativistic particles have $\gamma \sim 1$ whereas relativistic and ultra-relativistic particles typically have $\gamma \sim 100$ and $\gamma \geq 1000$ respectively. Since $v_{\perp}= v \sin\alpha$, with $\alpha$ being the angle between the magnetic field and the velocity of the particle, the radiated power can be written as:
$$P=\gamma^{2} \frac{2}{3}\frac{q^{4}}{c^{3}}\frac{v^{2}B^{2}\sin^{2}\alpha}{m^{2}c^{2}} $$
From this equation it can be seen that the total power radiated by the particle depends on the strength of the magnetic field and that the higher the energy of the particle, the more power it radiates.
In analogy with the non-relativistic case, there is a frequency of gyration. This refers to the path the charged particle follows while being accelerated in a magnetic field. The figure below illustrates the idea.
End of explanation
"""
Image(filename='figures/cygnusA.png')
"""
Explanation: Figure 1.6.1 Example path of a charged particle accelerated in a magnetic field
The frequency of gyration in the non-relativistic case is simply
$$\omega = \frac{qB}{mc} $$
For synchrotron radiation, this gets modified to
$$\omega_{G}= \frac{qB}{\gamma mc} $$
since, in the relativistic case, the mass is modified to $m \rightarrow \gamma m$.
In the non-relativistic case (i.e. cyclotron radiation) the frequency of gyration corresponds to the frequency of the emitted radiation. If this were also the case for synchrotron radiation then, for magnetic fields typically found in galaxies (a few micro-Gauss or so), the resultant frequency would be less than one Hertz! Fortunately, relativistic beaming and Doppler effects come into play, increasing the frequency of the observed radiation by a factor of about $\gamma^{3}$. This brings the radiation into the radio regime. This frequency, also known as the 'critical frequency', is where most of the emission takes place. It is given by
$$\nu_{c} \propto \gamma^{3}\nu_{G} \propto E^{2}$$
<span style="background-color:yellow"> LB:IC: The last sentence is not clear. Why is it called the critical frequency? How does it come about? </span>
So far we have discussed a single particle emitting synchrotron radiation. However, what we really want to know is what happens in the case of an ensemble of radiating particles. Since, in an (approximately) uniform magnetic field, the synchrotron emission depends only on the magnetic field and the energy of the particle, all we need is the distribution function of the particles. Denoting the distribution function of the particles as $N(E)$ (i.e. the number of particles at energy $E$ per unit volume per solid angle), the spectrum resulting from an ensemble of particles is:
$$ \epsilon(E) dE = N(E) P(E) dE $$
<span style="background-color:yellow"> LB:IC: Clarify what is $P(E)$. How does the spectrum come about? </span>
The usual assumption made about the distribution $N(E)$ (based also on the observed cosmic ray distribution) is that of a power law, i.e.
$$N(E)dE=E^{-\alpha}dE $$
Plugging in this and remembering that $P(E) \propto \gamma^{2} \propto E^{2}$, we get
$$ \epsilon(E) dE \propto E^{2-\alpha} dE $$
Shifting to the frequency domain
$$\epsilon(\nu) \propto \nu^{(1-\alpha)/2} $$
The usual value for $\alpha$ is 5/2 and since flux $S_{\nu} \propto \epsilon_{\nu}$
$$S_{\nu} \propto \nu^{-0.75} $$
This shows that the synchrotron flux is also a power law, if the underlying distribution of particles is a power law.
<span style="background-color:yellow"> LB:IC: The term spectral index is used below without being properly introduced. Introduce the notion of a spectral index here. </span>
This is approximately valid for a 'fresh' collection of radiating particles. However, as mentioned above, the higher energy particles lose energy through radiation much faster than the lower energy particles. This means that the distribution of particles gets steeper over time at higher frequencies (which is where the contribution from the high energy particles comes in). As we will see below, this steepening of the spectral index is a typical feature of older plasma in astrophysical scenarios.
1.6.2 Sources of Synchrotron Emission:
So where do we actually see synchrotron emission? As mentioned above, the prerequisites are magnetic fields and relativistic particles. These conditions are satisfied in a variety of situations. Prime examples are the lobes of radio galaxies. The lobes contain relativistic plasma in magnetic fields of strength ~ $\mu$G. It is believed that these plasmas and magnetic fields ultimately originate from the activity in the center of radio galaxies where a supermassive black hole resides. The figure below shows a radio image of one of the closest powerful radio galaxies, Cygnus A.
End of explanation
"""
# Data taken from Steenbrugge et al.,2010, MNRAS
freq=(151.0,327.5,1345.0,4525.0,8514.9,14650.0)
flux_L=(4746,2752.7,749.8,189.4,83.4,40.5)
flux_H=(115.7,176.4,69.3,45.2,20.8,13.4)
fig,ax = plt.subplots()
ax.loglog(freq,flux_L,'bo--',label='Lobe Flux')
ax.loglog(freq,flux_H,'g*-',label='Hotspot Flux')
ax.legend()
ax.set_xlabel("Frequency (MHz)")
ax.set_ylabel("Flux (Jy)")
"""
Explanation: Figure 1.6.2 Cygnus A: Example of Synchroton Emission
The jets, which carry relativistic charged particles or plasma originating from the centre of the host galaxy (marked as 'core' in the figure), collide with the surrounding medium at the places labelled as "hotspots" in the figure. The plasma responsible for the radio emission (the lobes) tends to stream backward from the hotspots. As a result we can expect the youngest plasma to reside in and around the hotspots. On the other hand, we can expect the plasma closest to the core to be the oldest. But is there a way to verify this?
Well, the non-thermal nature of the emission can be verified by measuring the spectrum of the radio emission. A value close to -0.7 suggests, by the reasoning given above, that the radiation results from a synchrotron emission mechanism. The plots below show the spectrum of the lobes of Cygnus A within a frequency range of 150 MHz to 14.65 GHz.
<span style="background-color:cyan"> LB:RF: Add proper citation. </span>
End of explanation
"""
|
HaFl/ufldl-tutorial-python | Linear_Regression.ipynb | mit |
import time

import numpy as np
import scipy.optimize
import matplotlib.pyplot as plt

data_original = np.loadtxt('stanford_dl_ex/ex1/housing.data')
data = np.insert(data_original, 0, 1, axis=1)  # prepend a column of ones for the intercept term
np.random.shuffle(data)
"""
Explanation: Load and preprocess the data.
End of explanation
"""
train_X = data[:400, :-1]
train_y = data[:400, -1]
test_X = data[400:, :-1]
test_y = data[400:, -1]
m, n = train_X.shape
"""
Explanation: Create train & test sets.
End of explanation
"""
def cost_function(theta, X, y):
squared_errors = (X.dot(theta) - y) ** 2
J = 0.5 * squared_errors.sum()
return J
def gradient(theta, X, y):
errors = X.dot(theta) - y
return errors.dot(X)
"""
Explanation: Define the cost function and how to compute the gradient.<br>
Both are needed for the subsequent optimization procedure.
End of explanation
"""
J_history = []
t0 = time.time()
res = scipy.optimize.minimize(
fun=cost_function,
x0=np.random.rand(n),
args=(train_X, train_y),
method='bfgs',
jac=gradient,
options={'maxiter': 200, 'disp': True},
callback=lambda x: J_history.append(cost_function(x, train_X, train_y)),
)
t1 = time.time()
print('Optimization took {s} seconds'.format(s=t1 - t0))
optimal_theta = res.x
"""
Explanation: Run a timed optimization and store the iteration values of the cost function (for latter investigation).
End of explanation
"""
plt.plot(J_history, marker='o')
plt.xlabel('Iterations')
plt.ylabel('J(theta)')
"""
Explanation: It's always interesting to take a more detailed look at the optimization results.
End of explanation
"""
for dataset, (X, y) in (
('train', (train_X, train_y)),
('test', (test_X, test_y)),
):
actual_prices = y
predicted_prices = X.dot(optimal_theta)
print(
'RMS {dataset} error: {error}'.format(
dataset=dataset,
error=np.sqrt(np.mean((predicted_prices - actual_prices) ** 2))
)
)
"""
Explanation: Now compute the Root Mean Square Error on both the train and the test set and hopefully they are similar to each other.
End of explanation
"""
plt.figure(figsize=(10, 8))
plt.scatter(np.arange(test_y.size), sorted(test_y), c='b', edgecolor='None', alpha=0.5, label='actual')
plt.scatter(np.arange(test_y.size), sorted(test_X.dot(optimal_theta)), c='g', edgecolor='None', alpha=0.5, label='predicted')
plt.legend(loc='upper left')
plt.ylabel('House price ($1000s)')
plt.xlabel('House #')
"""
Explanation: Finally, let's have a more intuitive look at the predictions.
End of explanation
"""
|
Hash--/documents | notebooks/Fusion_Basics/Fusion Cross Sections and Reaction Rates.ipynb | mit |
%pylab inline
"""
Plot the Reaction rates in m^3 s^-1 as a function of
E, the energy in keV of the incident particle
[the first ion of the reaction label]
Data taken from NRL Formulary 2013.
"""
E, DD, DT, DH = loadtxt('reaction_rates_vs_energy_incident_particle.txt',
skiprows=1, unpack=True)
cm3_2_m3 = 1e-6
fig, ax = plt.subplots(num=1)
ax.loglog(E, cm3_2_m3*DD, E, cm3_2_m3*DT, E, cm3_2_m3*DH, lw=3)
ax.grid(True)
ax.grid(True, which='minor')
ax.set_xlabel('Temperature [keV]', fontsize=16)
ax.set_ylabel(r'Reaction Rates $\left<\sigma v\right>$ [$m^3.s^{-1}$]', fontsize=16)
ax.tick_params(labelsize=14)
ax.set_xlim(min(E), max(E))
ax.legend(('D-D', 'D-T', 'D-He$^3$'), loc='best', fontsize=18)
def kev2deg(kev):
# 1 eV is 11606 K
return kev * 1e3 * 11606
def deg2kev(deg):
return deg / (1e3 * 11606)
secax = ax.secondary_xaxis('top', functions=(kev2deg, deg2kev))
secax.set_xlabel('Temperature [K]', fontsize=16)
secax.tick_params(labelsize=14)
tight_layout()
savefig('Fusion_Reactivity.png', dpi=150)
"""
Explanation: Reaction Rates
End of explanation
"""
def cross_section(E, A):
"""
The total cross section in barns (1 barns=1e-24 cm^2) as a function of E,
the energy in keV of the incident particle.
Formula from NRL Formulary 2013.
"""
sigma_T = (A[4]+((A[3]-A[2]*E)**2+1)**(-1) * A[1])/(E*(exp(A[0]/sqrt(E))-1))
return(sigma_T)
A_DD_a = [46.097, 372, 4.36e-4, 1.220, 0]
A_DD_b = [47.88, 482, 3.08e-4, 1.177, 0]
A_DT = [45.95, 50200, 1.368e-2, 1.076, 409]
A_DHe3 = [89.27, 25900, 3.98e-3, 1.297, 647]
A_TT = [38.39, 448, 1.02e-3, 2.09, 0]
A_THe3 = [123.1, 11250, 0, 0, 0]
E = logspace(0, 3, 501)
barns2SI = 1e-24 * 1e-4 # in m^2
sigma_DD = barns2SI*(cross_section(E, A_DD_a) + cross_section(E, A_DD_b))
sigma_DT = barns2SI*cross_section(E, A_DT)
sigma_DHe3 = barns2SI*cross_section(E, A_DHe3)
figure(num=2)
loglog(E, sigma_DD, E, sigma_DT, E, sigma_DHe3, lw=3)
grid()
xlabel('Deuteron Energy [keV]', fontsize=16)
ylabel(r'Cross-section $\sigma$ [$m^2$]', fontsize=16)
legend(('D-D', 'D-T', 'D-He$^3$'), loc='best', fontsize=18)
ylim([1e-32, 2e-27])
xlim(min(E), max(E))
xticks(fontsize=14)
yticks(fontsize=14)
tight_layout()
savefig('Fusion_cross-section.png', dpi=150)
"""
Explanation: Cross Sections
Plot the total cross section in m^2 for various species vs incident energy in keV
End of explanation
"""
|
mathinmse/mathinmse.github.io | Lecture-23-Cahn-Hilliard.ipynb | mit |
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed
def idealSolution(GA, GB, XB, temperature):
"""
Computes the free energy of solution for an ideal binary mixture.
Parameters
----------
GA : float
The partial molar Gibbs free energy of pure A in Joules.
GB : float
The partial molar Gibbs free energy of pure B in Joules.
XB : ndarray
The mol fraction of component B as an array.
temperature : float
The temperature.
Returns
-------
G : ndarray
An array of the Gibbs free energy having the same shape as `XB`.
Examples
--------
    >>> XB = np.linspace(0.01, 0.99, 5)
    >>> G = idealSolution(0.0, 0.0, XB, 1.0)
"""
return (1.0-XB)*GA+XB*GB+8.314*temperature*((1-XB)*np.log(1-XB)+XB*np.log(XB))
def myfig(temperature):
"""
This function produces a plot of the Gibbs free energy of mixing for an ideal solution.
"""
GA = 1.0
GB = 500.0
XB = np.linspace(0.01,0.99,50)
temperatureSpace = np.linspace(1.0,100.0,10)
y = idealSolution(GA,GB,XB,temperature)
greySolutionLines = [idealSolution(GA,GB,XB,greyT) for greyT in temperatureSpace]
fig, axes = plt.subplots(figsize=(10,8))
for greyLine in greySolutionLines:
axes.plot(XB, greyLine, 'black', alpha=0.9)
axes.plot(XB, y, 'r', label=r"$G_A X_A + G_B X_B + RT(X_A \ln X_A + X_B \ln X_B)$", linewidth=4)
axes.legend()
axes.grid(True, linestyle='dotted')
axes.set_ylabel(r"$G_{soln}$")
axes.set_xlabel(r"$X_B$")
# Location for annotations can always be done by extents instead of absolute values.
axes.annotate(r'$G_A$='+str(GA)+'\n'+r'$G_B$='+str(GB),xy=(0,200), size='large')
plt.show()
return
interact(myfig, temperature=(1.0,100.0,1.0));
"""
Explanation: Lecture 23: The Equilbrium and Kinetic Properties of a Non-Uniform System
Reading and Reference
Free Energy of a Nonuniform System. I. Interfacial Free Energy, Journal of chemical Physics, v28, n2, p258-267 (1958)
Essential Mathematical Methods for Physicists, H. Weber and G. Arfken, Academic Press, 2003 (Chapter 5)
Calculus of Variations, L. Elsgolc, Dover Publications, 2007 (Chapters 1 and 2)
Thermodynamics in Materials Science, R. DeHoff, Taylor and Francis 2006. (Chapters 8, 9 and 14)
What to Learn?
A Taylor's Series can be used to introduce correction terms to the bulk free energy.
The energy functional can be minimized using variational principles
The equations of motion can be developed from simple kinetic postulates
Solutions to these equations show capillarity effects
What to do?
Develop a model for the thermodynamics of an inhomogeneous system.
Derive the equation of motion for the phase seperation.
Solve the kinetic equations and show the microstructural evolution.
Introduction
Cahn and Hilliard’s paper from 1958 appears in Journal of Chemical Physics v. 28, n. 2, p. 258-267. They generalize the free energy of a system with composition gradients. They do so using a Taylor expansion and develop a free energy functional and solve the resulting differential equation. Then a paper in 1961 titled "On Spinodal Decomposition" outlines the differential equation for the time rate of change of the composition in a spinodally decomposing system.
An Example of a Spinodially Decomposing Structure
Lecture Outline
The Free Energy of Mixing
Beyond the Bulk: Energy Correction Terms
Minimizing the Total Energy
The Surface Energy and Concentration Profile of a Non-Uniform System
Spinodal Decomposition
The Free Energy of Mixing
The formation of a solution can be thought of as a sequence of steps:
Compute the free energy of the unmixed state for an amount of pure A and pure B
Allow both A and B to form a chemical solution
Compute the energy change upon mixing and add this change to the energy of the initial state
The Free Energy of Mixing
To understand the energetics of a non-uniform system we need a model for a solution where the free energy of solution (e.g. Gibbs or Helmholz) is a function of composition. This is most often represented as a free energy density (energy/volume). We will start by describing the ideal solution where the mixing process results in an entropy change alone without any contribution from the enthalpy.
Recall from thermodynamics that energy is an extensive quantity and that the Gibbs free energy is defined as:
$$
G = H - TS
$$
If we want to describe the isothermal change between states 1 and 2, we can write the following:
$$
G_2 - G_1 = H_2 - H_1 - T(S_2-S_1)
$$
Resulting in:
$$
\Delta G_{1 \rightarrow 2} = \Delta H_{1 \rightarrow 2} - T \Delta S_{1 \rightarrow 2}
$$
We will use this formula to describe the change from unmixed (state 1) to mixed (state 2) in a thermodynamic solution.
No Preference for Chemical Surroundings
In an ideal solution the enthalpy change (or internal energy change) is zero.
The entropy arises from mixing effects only.
Stirling's Formula is used to approximate terms due to the energy change on mixing:
$$
\Delta G_{\mathrm{mix, \, id}} = RT(X_A \ln X_A + X_B \ln X_B)
$$
The free energy for an ideal solution can therefore be written:
\begin{align}
G_{\mathrm{ideal}} &= G_{\mathrm{unmixed}} + \Delta G_{\mathrm{mix, \, id}} \\
&= X_A G_A + X_B G_B + RT(X_A \ln X_A + X_B \ln X_B)
\end{align}
End of explanation
"""
def regularSolution(GA, GB, XB, omega, temperature):
return omega*(1.0-XB)*XB+(1.0-XB)*GA+XB*GB+8.314*temperature*((1.0-XB)*np.log(1.0-XB)+XB*np.log(XB))
def myfig2(omega, temperature):
"""
This function produces a plot of the Gibbs free energy of mixing for a regular solution.
"""
GA = 1.0
GB = 1.0
XB = np.linspace(0.01,0.99,50)
temperatureSpace = np.linspace(1.0,200.0,10)
y = regularSolution(GA, GB, XB, omega, temperature)
greySolutionLines = [regularSolution(GA, GB, XB, omega, greyT) for greyT in temperatureSpace]
fig2, axes2 = plt.subplots(figsize=(14,9))
for greyLine in greySolutionLines:
axes2.plot(XB, greyLine, 'black', alpha=0.9)
axes2.plot(XB, y, 'r', label=r"$G_{soln}$", linewidth=4)
# Location for annotations can always be done by extents instead of absolute values.
axes2.annotate('GA='+str(GA)+'\n'+'GB='+str(GB),xy=(0,400), fontsize=20)
axes2.set_ylabel(r"$G_{soln}$", fontsize=15)
axes2.set_xlabel(r"$X_B$", fontsize=15)
axes2.legend(loc="upper right", fontsize=15)
axes2.xaxis.set_tick_params(labelsize=15)
axes2.yaxis.set_tick_params(labelsize=15)
plt.show()
return
interact(myfig2, omega=(0.0,5000.0,1.0), temperature=(1.0,200.0,1.0));
"""
Explanation: Correcting the Ideal Solution for Local Chemical Effects
In general, the free energy of solution includes both enthalpic and entropic terms
The previous treatment of the ideal solution neglects any contribution from the enthalpy.
Before mixing - there are only A-A and B-B bonds and NO A-B bonds.
After mixing the number of A-B bonds is estimated from statistical and structural considerations to produce a model of the excess enthalpy
As outlined above, it is possible to have both enthalpy and entropy of mixing effects when forming a solution. A more general approach would include the possibility that there may be enthalpy changes upon mixing. Two types of arguments can be made: mathematical and physical.
A simple mathematical argument for the enthalpy of solution is based on the fact that the functions of mixing have the property that their values must pass through zero at the pure end member compositions. A simple function that captures this requirement would have a form:
$$
\Delta H_{mix} = \Omega X_A X_B
$$
where $\Omega$ is a single adjustable parameter. As is pointed out by DeHoff, simpler functions than this are not possible. A physical argument for this form is known as the quasichemical model for solutions. A summary of the quasichemical model and the probability argument for finding like and unlike bonds in a random solution is given by DeHoff (and other texts on thermodynamics), but the important points are as follows:
The heat of mixing of a non-ideal solution, called the regular solution is proportional:
to the number of unlike bonds, and
includes a parameter that scales with the difference in energy between like and unlike bonds.
<img src="./images/Enthalpy-Nonzero.png",width=1200>
$$
\Delta H_{\mathrm{mix}} = \Omega(\epsilon)X_A X_B
$$
DIY: Exploration of Bond Types
Simulate three different types of solutions: clustered, random and ordered. Compute the fraction of bond types as a function of mole fraction solute (e.g. $X_B$) for each type. Justify the form of the enthalpy of mixing based on your calculations.
The regular solution model is then written as:
\begin{align}
G_{\mathrm{regular}} = X_A G_A + X_B G_B &+ \Omega(\epsilon)X_A X_B \\
&+ RT(X_A \ln X_A + X_B \ln X_B)
\end{align}
End of explanation
"""
def regularSolution(GA, GB, XB, omega, temperature):
return omega*(1.0-XB)*XB+(1.0-XB)*GA+XB*GB+8.314*temperature*((1-XB)*np.log(1-XB)+XB*np.log(XB))
def simplifiedSolution(XB, W):
return (1.0-XB)**2*XB**2*W
def myfig3(omega, W, temperature):
"""
This function ...
"""
GA = 1.0
GB = 1.0
XB = np.linspace(0.01,0.99,50)
temperatureSpace = np.linspace(1.0,100.0,10)
y1 = regularSolution(GA, GB, XB, omega, temperature)
greySolutionLines = [regularSolution(GA, GB, XB, omega, greyT) for greyT in temperatureSpace]
wSpace = np.linspace(0.01,100.0,10)
y2 = simplifiedSolution(XB, W)
greyWLines = [simplifiedSolution(XB, greyW) for greyW in wSpace]
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(14,8))
plt.tight_layout(pad=5.0)
for greyLine in greySolutionLines:
axes[0].plot(XB, greyLine, 'black', alpha=0.9)
axes[0].plot(XB, y1, 'r', label=r"Regular Solution", linewidth=4)
axes[0].annotate('GA='+str(GA)+'\n'+'GB='+str(GB),xy=(0,40), fontsize=15)
for greyLine in greyWLines:
axes[1].plot(XB, greyLine, 'black', alpha=0.9)
axes[1].plot(XB, y2, 'g', label=r"$W \phi^2 (1-\phi)^2$", linewidth=4)
axes[1].set_ylim(0.0,4.0)
for ax in axes:
ax.legend(loc="upper right", fontsize=15)
ax.set_ylabel(r"$G_{soln}$", fontsize=20)
ax.xaxis.set_tick_params(labelsize=15)
ax.yaxis.set_tick_params(labelsize=15)
axes[0].set_xlabel(r"$X_B$", fontsize=20)
axes[1].set_xlabel(r"$\phi$", fontsize=20)
plt.show()
return
interact(myfig3, omega=(0.0,1000.0,1.0), W=(0.0,100.0,1.0), temperature=(1.0,100.0,1.0));
"""
Explanation: A Small Simplification
Although the regular solution can approximate more kinds of chemical solutions, all the effects we wish to show are produced by a simple function that replaces the regular solution model:
$$
f(\phi) = W \phi^2 (1-\phi)^2
$$
I switch to $\phi$ as a reminder of this simplification. A plot of this function and the regular solution are shown side-by-side below:
End of explanation
"""
def phiPlots():
"""
    Plot two concentration profiles that share the same average composition.
"""
t = np.linspace(0,10,100)
y1 = np.cos(t)
y2 = np.cos(2*t)
y3 = 0*t
fig, axes = plt.subplots(figsize=(14,9))
axes.plot(t, y3, 'g--', label="Average")
axes.plot(t, y1, 'r', label="Profile 1")
axes.plot(t, y2, 'b', label="Profile 2")
axes.set_xlabel(r"$t$", fontsize=15)
axes.set_ylabel(r"$c(x)$", fontsize=15)
axes.legend(fontsize=15)
axes.xaxis.set_tick_params(labelsize=15)
axes.yaxis.set_tick_params(labelsize=15)
plt.show()
return
phiPlots()
"""
Explanation: Beyond the Bulk: Energy Correction Terms
Diffusive length scales are on the order of microns.
Precipitation and phase seperation length scales are on the order of nanometers.
At nanometer length scales the gradient energy becomes comparable to the bulk free energy and must be accounted for.
Cahn and Hilliard's insight was that a non-uniform system's total energy should depend on the average values of the order parameter as well as the spatial gradients within the order parameter. One viewpoint is that these gradient terms are "correction terms" needed when the order parameter gradients are very large.
Practically, this insight impacts the way scientists think of interfaces and phase transitions in areas primarily related to chemical solutions, magnetic domains and ferroelectric materials. The development of a free energy functional that includes bulk and gradient terms permits a unified treatment of the bulk and interface regions in a material. This, in turn, allows new understanding of both non-uniform and heterogenerous systems. To understand the origin of the energy correction terms, the series expansion and the independence of the order parameter and its gradients are first discussed.
Review: A Taylor Series Expansion
In multiple independent variables the Taylor's Series is:
\begin{align}
f(x,y) & = f(a,b) + (x-a)\frac{\partial f}{\partial x} + (y-b)\frac{\partial f}{\partial y}\\
& + \frac{1}{2!} \left[ (x-a)^2 \frac{\partial^2 f}{\partial x^2} + 2(x-a)(y-b) \frac{\partial^2 f}{\partial x \partial y} + (y-b)^2 \frac{\partial^2 f}{\partial y^2} \right] \\
& + \; ...
\end{align}
The Independence of $\phi$, $\nabla \phi$ and $\nabla^2 \phi$
The total energy of the system can depend on the concentration and local variations
This phase space can be sampled by permitting $\phi(x)$ and its gradients to vary independently
Consider that the average composition of a system is independent of the wavelengths of concentration variations.
The Taylor's series above is written assuming that the variables $x$ and $y$ are independent. More than two independent variables can be treated similarly. When we write the free energy of a non-uniform system we will postulate that the concentration and its gradients are independent quantities. Help establish that this is a reasonable assumption, consider the following example.
An intuitive argument is that in a conservative field a particle's instantaneous energy can be determined by its position and velocity. It is possible to choose a particle's potential energy and kinetic energy by setting the position and velocity at an instant in time. These two quantities can be chosen independently to return any desired value of the system's total energy. In an effort to extend this analogy to chemical systems, below I plot three functions. The first is the average composition. The other two are functions that have the same average, but have different gradients and second derivatives.
End of explanation
"""
import sympy as sp
sp.init_session(quiet=True)
W, epsilon = symbols('W epsilon', real=True)
phi = Function('phi')
functionalForm = W*phi(x)**2*(1-phi(x))**2 + epsilon*(phi(x).diff(x))**2
ele = sp.euler_equations(functionalForm, phi(x), x)
ele
delFdelPhi = (ele[0].lhs).simplify()
delFdelPhi
firstTermsFactored = sp.factor(4*W*phi(x)**3 - 6*W*phi(x)**2 + 2*W*phi(x))
firstTermsFactored
"""
Explanation: The Free Energy of Our System
If the temperature and pressure are our process variables, then we can use the Gibbs free energy per unit volume. The total energy of the system is then found by integrating the Gibbs free energy density over the volume of the system as in the integral below. Furthermore we assume that the order parameter (the composition proxy) and powers of the derivatives of order parameter all contribute to the free energy and are independent:
$$
F = \int_V f_v(\phi, \nabla \phi, \nabla^2 \phi, ...) \delta V
$$
It is possible to expand the integrand explicitly in powers of the independent parameters using a Taylor's series formalism (DeHoff explains this in Chapter 14 of his thermodynamics text, also), in a shorthand notation we write an equivalent statement:
$$
f_v = f_v^0 + L \nabla \phi + K_1 \nabla^2 \phi + K_2 (\nabla \phi)^2 + \; ...
$$
with
$$
L = \left( \frac{\partial f_v}{\partial (\nabla \phi)} \right)
$$
and other similar terms as per the Taylor's Series expansion above treating $\phi$ and all higher order derivatives as independent parameters in the free energy space. These extra terms can be viewed as "correction" terms in the approximation of the free energy density in the vicinity of the average alloy composition.
The Free Energy Functional
Three arguments and manipulations are made to arrive at the desired functional:
The sign of the gradient should not affect the total energy;
The energy should be invariant with respect to inversion symmetry;
The energy should be invariant with respect to four fold rotations about a principal axis.
$$
F = \int_V f_v + K (\nabla \phi)^2 \delta V
$$
We keep the lowest order, nonzero correction term in the gradient of the order parameter. The above assumptions can be relaxed for different applications. We can now proceed to find the function $\phi(x)$ that minimizes the integral.
A Result from the Calculus of Variations
The main purpose of the CoV is to find the function $y$ minimizing (or making extreme) the integral:
$$
I(y) = \int_{x_0}^{x_1} F(x,y,y')\, dx
$$
One application is a minimum path problem: a straight line connects two points in the plane.
One important result is the Euler-Lagrange equation:
$$
\frac{\partial F}{\partial y} - \frac{d}{dx} \frac{\partial F}{\partial y'} = 0
$$
A functional is a "function of functions". To find the function that makes the integral stationary (most often a minimum) we will need to apply the Euler Lagrange result from the calculus of variations.
If you examine a series of nearby functions to the extreme function, $y(x)$, it can be demonstrated that the Euler-Lagrange equation is the only possibility for making $I(y)$ stationary.
Using the functional that includes the gradient correction term, we can write a differential equation where the solution is the minimizing function.
DIY: Use The Euler Lagrange Equation
Using a path length integral, demonstrate that the shortest distance between two points on the plane is a straight line.
The PDE to Determine Profiles and Kinetic Evolution of a Non-Uniform System
$$
F(\phi,\phi') = W \phi^2 (1-\phi)^2 + \epsilon (\nabla \phi)^2
$$
Applying the Euler-Lagrange equation to our functional we get:
\begin{align}
\frac{\delta F}{\delta \phi} & = \frac{\partial F}{\partial \phi} - \frac{d}{dx} \frac{\partial F}{\partial \nabla \phi} = 0 \\
&= 2 W \phi \left(\phi - 1\right) \left(2 \phi - 1\right) - 2 \epsilon \nabla^2 \phi = 0
\end{align}
recall that $\phi(x,t)$ and this equation implies equilibrium.
End of explanation
"""
import sympy as sp
sp.init_session(quiet=True)
sp.dsolve(sp.diff(f(x),x)*(1/(f(x)*(1-f(x))))-k,f(x),hint='lie_group')
"""
Explanation: Solving the ODE Explicitly
Taking a first integral of the Euler-Lagrange equation (using the far-field conditions $\phi \rightarrow 0, 1$ and $\nabla \phi \rightarrow 0$) reduces it to the first-order ODE:
$$
\frac{d \phi}{d x} \frac{1}{\phi(1-\phi)} - \sqrt{\frac{W}{\epsilon}} = 0
$$
End of explanation
"""
def phiSolution(W, epsilon):
"""
    Plot the equilibrium interface profile phi(x) for the given W and epsilon.
"""
x = np.linspace(-10,10,100)
y = 0.5*(1.0 + np.tanh((np.sqrt(W/epsilon))*(x/2.0)))
fig, axes = plt.subplots(figsize=(14,9))
axes.plot(x, y, 'r', label=r"$\phi(x)$")
axes.set_xlabel(r"$x$", fontsize=20)
axes.set_ylabel(r"$\phi(x)$", fontsize=20)
axes.xaxis.set_tick_params(labelsize=15)
axes.yaxis.set_tick_params(labelsize=15)
axes.legend(fontsize=20)
plt.show()
return
interact(phiSolution, W=(0.01,10,0.1), epsilon=(0.01,10,0.1));
"""
Explanation: and after some exciting manipulation, a solution is:
$$
\phi(x) = \frac{1}{2}\left(1 + \tanh\left(\sqrt{\frac{W}{\epsilon}}\,\frac{x}{2}\right)\right)
$$
Contributions from the bulk free energy (through W) and the gradient correction term (through $\epsilon$) shape the profile of the concentration, order, etc. at the interface.
End of explanation
"""
%%HTML
<video width="600" height="600" controls> <source src="./images/Cahn-Hilliard.mp4" type="video/mp4">
</video>
from fipy import *
from IPython.display import clear_output
import time
nx = ny = 100
mesh = Grid2D(nx=nx, ny=ny, dx=0.5, dy=0.5)
phi = CellVariable(name=r"$\phi$", mesh=mesh)
psi = CellVariable(name=r"$\psi$", mesh=mesh)
noise = GaussianNoiseVariable(mesh=mesh,mean=0.5,variance=0.01).value
phi[:] = noise
viewer = Viewer(vars=phi)
D = a = epsilon = 1.
dfdphi = a**2 * 2 * phi * (1 - phi) * (1 - 2 * phi)
dfdphi_ = a**2 * 2 * (1 - phi) * (1 - 2 * phi)
d2fdphi2 = a**2 * 2 * (1 - 6 * phi * (1 - phi))
eq1 = (TransientTerm(var=phi) == DiffusionTerm(coeff=D, var=psi))
eq2 = (ImplicitSourceTerm(coeff=1., var=psi)
== ImplicitSourceTerm(coeff=-d2fdphi2, var=phi) - d2fdphi2 * phi + dfdphi
- DiffusionTerm(coeff=epsilon**2, var=phi))
eq3 = (ImplicitSourceTerm(coeff=1., var=psi)
== ImplicitSourceTerm(coeff=dfdphi_, var=phi)
- DiffusionTerm(coeff=epsilon**2, var=phi))
eq = eq1 & eq3
dexp = -3
elapsed = 0.
duration = 100.0
# Run the model.
while elapsed < duration:
dt = min(100, numerix.exp(dexp))
elapsed += dt
dexp += 0.01
eq.solve(dt=dt)
viewer.plot()
clear_output(wait=True)
display(viewer)
"""
Explanation: Solving the PDE Using Relaxation
With the bulk free energy and the gradient energy contributions conceptually justified, it is now necessary to identify the equations of motion. In the non-conserved case:
$$
\frac{\partial \phi}{\partial t} = -M \frac{\delta F}{\delta \phi}
$$
and for a conserved order parameter the equations of motion are derived from:
$$
\frac{\partial \phi}{\partial t} = \nabla \cdot D \nabla \frac{\delta F}{\delta \phi}
$$
There are other choices, but these are the simplest choices that guarantee a free energy decrease with time.
When writing the equations of motion, things can get messy. It is cleaner, therefore, to write the bulk (non-gradient) term as $A(\phi)$. This gives:
$$
\frac{\delta F}{\delta \phi} = A(\phi) - \epsilon \frac{d^{2}}{d x^{2}} \phi(x)
$$
with
$$
\nabla \cdot D \nabla \frac{\delta F}{\delta \phi} = \nabla \cdot D \left( \frac{\partial A}{\partial \phi} \nabla \phi(x) - \epsilon \frac{d^{3}}{d x^{3}} \phi(x) \right)
$$
By distributing the divergence and diffusion coefficient, we arrive at:
$$
\frac{\partial \phi}{\partial t} = \nabla \cdot D \frac{\partial A}{\partial \phi} \nabla \phi(x) - D \epsilon \nabla^4 \phi(x)
$$
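For intuition, a minimal explicit-Euler sketch of the non-conserved (Allen-Cahn) relaxation in one dimension, with made-up parameter values and clamped-end boundary conditions (a toy discretization, not the FiPy solve used above):

```python
import numpy as np

# d(phi)/dt = -M * ( 2 W phi (phi - 1)(2 phi - 1) - 2 eps phi_xx )
W, eps, M = 1.0, 1.0, 1.0
n, dx, dt = 200, 0.25, 0.01         # dt below the explicit stability limit dx**2 / (4 eps M)
x = (np.arange(n) - n // 2) * dx
phi = 0.5 * (1.0 + np.tanh(x))      # start near a diffuse interface

for _ in range(500):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] + phi[:-2] - 2.0 * phi[1:-1]) / dx**2
    dfdphi = 2.0 * W * phi * (phi - 1.0) * (2.0 * phi - 1.0)
    phi += -M * (dfdphi - 2.0 * eps * lap) * dt
    phi[0], phi[-1] = 0.0, 1.0      # hold the ends at the two bulk phases

print(phi.min(), phi.max())
```

The order parameter stays bounded between the two wells while the interface relaxes toward the equilibrium tanh profile.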
Plots of the Progress of a Spinodal Decomposition Simulation
End of explanation
"""
|
wy1iu/sphereface | tools/caffe-sphereface/examples/02-fine-tuning.ipynb | mit | caffe_root = '../' # this file should be run from {caffe_root}/examples (otherwise change this line)
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
caffe.set_device(0)
caffe.set_mode_gpu()
import numpy as np
from pylab import *
%matplotlib inline
import tempfile
# Helper function for deprocessing preprocessed images, e.g., for display.
def deprocess_net_image(image):
image = image.copy() # don't modify destructively
image = image[::-1] # BGR -> RGB
image = image.transpose(1, 2, 0) # CHW -> HWC
image += [123, 117, 104] # (approximately) undo mean subtraction
# clamp values in [0, 255]
image[image < 0], image[image > 255] = 0, 255
# round and cast from float32 to uint8
image = np.round(image)
image = np.require(image, dtype=np.uint8)
return image
"""
Explanation: Fine-tuning a Pretrained Network for Style Recognition
In this example, we'll explore a common approach that is particularly useful in real-world applications: take a pre-trained Caffe network and fine-tune the parameters on your custom data.
The advantage of this approach is that, since pre-trained networks are learned on a large set of images, the intermediate layers capture the "semantics" of the general visual appearance. Think of it as a very powerful generic visual feature that you can treat as a black box. On top of that, only a relatively small amount of data is needed for good performance on the target task.
First, we will need to prepare the data. This involves the following parts:
(1) Get the ImageNet ilsvrc pretrained model with the provided shell scripts.
(2) Download a subset of the overall Flickr style dataset for this demo.
(3) Compile the downloaded Flickr dataset into a database that Caffe can then consume.
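The "generic feature plus small classifier" idea can be sketched in plain NumPy with synthetic stand-ins for the pretrained activations (the features, dimensions, and training loop here are all hypothetical, not Caffe's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend `features` are fc7 activations from a frozen, pretrained net;
# we only train a small linear (softmax) classifier on top of them.
n, d, k = 200, 64, 5
centers = rng.normal(size=(k, d))
labels = rng.integers(0, k, size=n)
features = centers[labels] + 0.1 * rng.normal(size=(n, d))

W = np.zeros((d, k))
for _ in range(200):                # plain gradient descent on the softmax loss
    logits = features @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n), labels] -= 1.0
    W -= 0.1 * features.T @ p / n

acc = (np.argmax(features @ W, axis=1) == labels).mean()
print(acc)
```

Because the pretrained features already separate the classes well, a tiny linear classifier on a small dataset suffices, which is exactly the premise of fine-tuning.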
End of explanation
"""
# Download just a small subset of the data for this exercise.
# (2000 of 80K images, 5 of 20 labels.)
# To download the entire dataset, set `full_dataset = True`.
full_dataset = False
if full_dataset:
NUM_STYLE_IMAGES = NUM_STYLE_LABELS = -1
else:
NUM_STYLE_IMAGES = 2000
NUM_STYLE_LABELS = 5
# This downloads the ilsvrc auxiliary data (mean file, etc),
# and a subset of 2000 images for the style recognition task.
import os
os.chdir(caffe_root) # run scripts from caffe root
!data/ilsvrc12/get_ilsvrc_aux.sh
!scripts/download_model_binary.py models/bvlc_reference_caffenet
!python examples/finetune_flickr_style/assemble_data.py \
--workers=-1 --seed=1701 \
--images=$NUM_STYLE_IMAGES --label=$NUM_STYLE_LABELS
# back to examples
os.chdir('examples')
"""
Explanation: 1. Setup and dataset download
Download data required for this exercise.
get_ilsvrc_aux.sh to download the ImageNet data mean, labels, etc.
download_model_binary.py to download the pretrained reference model
finetune_flickr_style/assemble_data.py downloads the style training and testing data
We'll download just a small subset of the full dataset for this exercise: just 2000 of the 80K images, from 5 of the 20 style categories. (To download the full dataset, set full_dataset = True in the cell below.)
End of explanation
"""
import os
weights = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')
assert os.path.exists(weights)
"""
Explanation: Define weights, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists.
End of explanation
"""
# Load ImageNet labels to imagenet_labels
imagenet_label_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
imagenet_labels = list(np.loadtxt(imagenet_label_file, str, delimiter='\t'))
assert len(imagenet_labels) == 1000
print 'Loaded ImageNet labels:\n', '\n'.join(imagenet_labels[:10] + ['...'])
# Load style labels to style_labels
style_label_file = caffe_root + 'examples/finetune_flickr_style/style_names.txt'
style_labels = list(np.loadtxt(style_label_file, str, delimiter='\n'))
if NUM_STYLE_LABELS > 0:
style_labels = style_labels[:NUM_STYLE_LABELS]
print '\nLoaded style labels:\n', ', '.join(style_labels)
"""
Explanation: Load the 1000 ImageNet labels from ilsvrc12/synset_words.txt, and the 5 style labels from finetune_flickr_style/style_names.txt.
End of explanation
"""
from caffe import layers as L
from caffe import params as P
weight_param = dict(lr_mult=1, decay_mult=1)
bias_param = dict(lr_mult=2, decay_mult=0)
learned_param = [weight_param, bias_param]
frozen_param = [dict(lr_mult=0)] * 2
def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,
param=learned_param,
weight_filler=dict(type='gaussian', std=0.01),
bias_filler=dict(type='constant', value=0.1)):
conv = L.Convolution(bottom, kernel_size=ks, stride=stride,
num_output=nout, pad=pad, group=group,
param=param, weight_filler=weight_filler,
bias_filler=bias_filler)
return conv, L.ReLU(conv, in_place=True)
def fc_relu(bottom, nout, param=learned_param,
weight_filler=dict(type='gaussian', std=0.005),
bias_filler=dict(type='constant', value=0.1)):
fc = L.InnerProduct(bottom, num_output=nout, param=param,
weight_filler=weight_filler,
bias_filler=bias_filler)
return fc, L.ReLU(fc, in_place=True)
def max_pool(bottom, ks, stride=1):
return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)
def caffenet(data, label=None, train=True, num_classes=1000,
classifier_name='fc8', learn_all=False):
"""Returns a NetSpec specifying CaffeNet, following the original proto text
specification (./models/bvlc_reference_caffenet/train_val.prototxt)."""
n = caffe.NetSpec()
n.data = data
param = learned_param if learn_all else frozen_param
n.conv1, n.relu1 = conv_relu(n.data, 11, 96, stride=4, param=param)
n.pool1 = max_pool(n.relu1, 3, stride=2)
n.norm1 = L.LRN(n.pool1, local_size=5, alpha=1e-4, beta=0.75)
n.conv2, n.relu2 = conv_relu(n.norm1, 5, 256, pad=2, group=2, param=param)
n.pool2 = max_pool(n.relu2, 3, stride=2)
n.norm2 = L.LRN(n.pool2, local_size=5, alpha=1e-4, beta=0.75)
n.conv3, n.relu3 = conv_relu(n.norm2, 3, 384, pad=1, param=param)
n.conv4, n.relu4 = conv_relu(n.relu3, 3, 384, pad=1, group=2, param=param)
n.conv5, n.relu5 = conv_relu(n.relu4, 3, 256, pad=1, group=2, param=param)
n.pool5 = max_pool(n.relu5, 3, stride=2)
n.fc6, n.relu6 = fc_relu(n.pool5, 4096, param=param)
if train:
n.drop6 = fc7input = L.Dropout(n.relu6, in_place=True)
else:
fc7input = n.relu6
n.fc7, n.relu7 = fc_relu(fc7input, 4096, param=param)
if train:
n.drop7 = fc8input = L.Dropout(n.relu7, in_place=True)
else:
fc8input = n.relu7
# always learn fc8 (param=learned_param)
fc8 = L.InnerProduct(fc8input, num_output=num_classes, param=learned_param)
# give fc8 the name specified by argument `classifier_name`
n.__setattr__(classifier_name, fc8)
if not train:
n.probs = L.Softmax(fc8)
if label is not None:
n.label = label
n.loss = L.SoftmaxWithLoss(fc8, n.label)
n.acc = L.Accuracy(fc8, n.label)
# write the net to a temporary file and return its filename
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(n.to_proto()))
return f.name
"""
Explanation: 2. Defining and running the nets
We'll start by defining caffenet, a function which initializes the CaffeNet architecture (a minor variant on AlexNet), taking arguments specifying the data and number of output classes.
End of explanation
"""
dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
imagenet_net_filename = caffenet(data=dummy_data, train=False)
imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)
"""
Explanation: Now, let's create a CaffeNet that takes unlabeled "dummy data" as input, allowing us to set its input images externally and see what ImageNet classes it predicts.
End of explanation
"""
def style_net(train=True, learn_all=False, subset=None):
if subset is None:
subset = 'train' if train else 'test'
source = caffe_root + 'data/flickr_style/%s.txt' % subset
transform_param = dict(mirror=train, crop_size=227,
mean_file=caffe_root + 'data/ilsvrc12/imagenet_mean.binaryproto')
style_data, style_label = L.ImageData(
transform_param=transform_param, source=source,
batch_size=50, new_height=256, new_width=256, ntop=2)
return caffenet(data=style_data, label=style_label, train=train,
num_classes=NUM_STYLE_LABELS,
classifier_name='fc8_flickr',
learn_all=learn_all)
"""
Explanation: Define a function style_net which calls caffenet on data from the Flickr style dataset.
The new network will also have the CaffeNet architecture, with differences in the input and output:
the input is the Flickr style data we downloaded, provided by an ImageData layer
the output is a distribution over 20 classes rather than the original 1000 ImageNet classes
the classification layer is renamed from fc8 to fc8_flickr to tell Caffe not to load the original classifier (fc8) weights from the ImageNet-pretrained model
End of explanation
"""
untrained_style_net = caffe.Net(style_net(train=False, subset='train'),
weights, caffe.TEST)
untrained_style_net.forward()
style_data_batch = untrained_style_net.blobs['data'].data.copy()
style_label_batch = np.array(untrained_style_net.blobs['label'].data, dtype=np.int32)
"""
Explanation: Use the style_net function defined above to initialize untrained_style_net, a CaffeNet with input images from the style dataset and weights from the pretrained ImageNet model.
Call forward on untrained_style_net to get a batch of style training data.
End of explanation
"""
def disp_preds(net, image, labels, k=5, name='ImageNet'):
input_blob = net.blobs['data']
net.blobs['data'].data[0, ...] = image
probs = net.forward(start='conv1')['probs'][0]
top_k = (-probs).argsort()[:k]
print 'top %d predicted %s labels =' % (k, name)
print '\n'.join('\t(%d) %5.2f%% %s' % (i+1, 100*probs[p], labels[p])
for i, p in enumerate(top_k))
def disp_imagenet_preds(net, image):
disp_preds(net, image, imagenet_labels, name='ImageNet')
def disp_style_preds(net, image):
disp_preds(net, image, style_labels, name='style')
batch_index = 8
image = style_data_batch[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[style_label_batch[batch_index]]
disp_imagenet_preds(imagenet_net, image)
"""
Explanation: Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through imagenet_net, the ImageNet-pretrained network to view its top 5 predicted classes from the 1000 ImageNet classes.
Below we chose an image where the network's predictions happen to be reasonable, as the image is of a beach, and "sandbar" and "seashore" both happen to be ImageNet-1000 categories. For other images, the predictions won't be this good, sometimes due to the network actually failing to recognize the object(s) present in the image, but perhaps even more often due to the fact that not all images contain an object from the (somewhat arbitrarily chosen) 1000 ImageNet categories. Modify the batch_index variable by changing its default setting of 8 to another value from 0-49 (since the batch size is 50) to see predictions for other images in the batch. (To go beyond this batch of 50 images, first rerun the above cell to load a fresh batch of data into style_net.)
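The top-k selection used by `disp_preds` boils down to an argsort on negated probabilities; a standalone sketch with a made-up probability vector:

```python
import numpy as np

probs = np.array([0.05, 0.4, 0.1, 0.3, 0.15])   # hypothetical class probabilities
k = 3
top_k = (-probs).argsort()[:k]                  # indices of the k largest entries
print(top_k, probs[top_k])
```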
End of explanation
"""
disp_style_preds(untrained_style_net, image)
"""
Explanation: We can also look at untrained_style_net's predictions, but we won't see anything interesting as its classifier hasn't been trained yet.
In fact, since we zero-initialized the classifier (see caffenet definition -- no weight_filler is passed to the final InnerProduct layer), the softmax inputs should be all zero and we should therefore see a predicted probability of 1/N for each label (for N labels). Since we set N = 5, we get a predicted probability of 20% for each class.
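The uniform-prediction claim is easy to verify directly: the softmax of an all-zero logit vector is exactly $1/N$ per class.

```python
import numpy as np

logits = np.zeros(5)    # zero-initialized classifier => all-zero inputs to softmax
probs = np.exp(logits) / np.exp(logits).sum()
print(probs)
```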
End of explanation
"""
diff = untrained_style_net.blobs['fc7'].data[0] - imagenet_net.blobs['fc7'].data[0]
error = (diff ** 2).sum()
assert error < 1e-8
"""
Explanation: We can also verify that the activations in layer fc7 immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the conv1 through fc7 layers.
End of explanation
"""
del untrained_style_net
"""
Explanation: Delete untrained_style_net to save memory. (Hang on to imagenet_net as we'll use it again later.)
End of explanation
"""
from caffe.proto import caffe_pb2
def solver(train_net_path, test_net_path=None, base_lr=0.001):
s = caffe_pb2.SolverParameter()
# Specify locations of the train and (maybe) test networks.
s.train_net = train_net_path
if test_net_path is not None:
s.test_net.append(test_net_path)
s.test_interval = 1000 # Test after every 1000 training iterations.
s.test_iter.append(100) # Test on 100 batches each time we test.
# The number of iterations over which to average the gradient.
# Effectively boosts the training batch size by the given factor, without
# affecting memory utilization.
s.iter_size = 1
s.max_iter = 100000 # # of times to update the net (training iterations)
# Solve using the stochastic gradient descent (SGD) algorithm.
# Other choices include 'Adam' and 'RMSProp'.
s.type = 'SGD'
# Set the initial learning rate for SGD.
s.base_lr = base_lr
# Set `lr_policy` to define how the learning rate changes during training.
# Here, we 'step' the learning rate by multiplying it by a factor `gamma`
# every `stepsize` iterations.
s.lr_policy = 'step'
s.gamma = 0.1
s.stepsize = 20000
# Set other SGD hyperparameters. Setting a non-zero `momentum` takes a
# weighted average of the current gradient and previous gradients to make
# learning more stable. L2 weight decay regularizes learning, to help prevent
# the model from overfitting.
s.momentum = 0.9
s.weight_decay = 5e-4
# Display the current training loss and accuracy every 1000 iterations.
s.display = 1000
# Snapshots are files used to store networks we've trained. Here, we'll
# snapshot every 10K iterations -- ten times during training.
s.snapshot = 10000
s.snapshot_prefix = caffe_root + 'models/finetune_flickr_style/finetune_flickr_style'
# Train on the GPU. Using the CPU to train large networks is very slow.
s.solver_mode = caffe_pb2.SolverParameter.GPU
# Write the solver to a temporary file and return its filename.
with tempfile.NamedTemporaryFile(delete=False) as f:
f.write(str(s))
return f.name
"""
Explanation: 3. Training the style classifier
Now, we'll define a function solver to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and "snapshotting" -- see the inline comments for explanations of what they mean. You may want to play with some of the learning parameters to see if you can improve on the results here!
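The 'step' policy configured below follows `lr(iter) = base_lr * gamma ** (iter // stepsize)`; a small standalone sketch of the resulting schedule:

```python
# Sketch of Caffe's 'step' learning-rate policy:
base_lr, gamma, stepsize = 0.001, 0.1, 20000

def step_lr(it):
    return base_lr * gamma ** (it // stepsize)

for it in (0, 19999, 20000, 40000, 99999):
    print(it, step_lr(it))
```

With `max_iter = 100000` the learning rate is therefore dropped by a factor of ten four times over the course of training.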
End of explanation
"""
def run_solvers(niter, solvers, disp_interval=10):
"""Run solvers for niter iterations,
returning the loss and accuracy recorded each iteration.
`solvers` is a list of (name, solver) tuples."""
blobs = ('loss', 'acc')
loss, acc = ({name: np.zeros(niter) for name, _ in solvers}
for _ in blobs)
for it in range(niter):
for name, s in solvers:
s.step(1) # run a single SGD step in Caffe
loss[name][it], acc[name][it] = (s.net.blobs[b].data.copy()
for b in blobs)
if it % disp_interval == 0 or it + 1 == niter:
loss_disp = '; '.join('%s: loss=%.3f, acc=%2d%%' %
(n, loss[n][it], np.round(100*acc[n][it]))
for n, _ in solvers)
print '%3d) %s' % (it, loss_disp)
# Save the learned weights from both nets.
weight_dir = tempfile.mkdtemp()
weights = {}
for name, s in solvers:
filename = 'weights.%s.caffemodel' % name
weights[name] = os.path.join(weight_dir, filename)
s.net.save(weights[name])
return loss, acc, weights
"""
Explanation: Now we'll invoke the solver to train the style net's classification layer.
For the record, if you want to train the network using only the command line tool, this is the command:
<code>
build/tools/caffe train \
-solver models/finetune_flickr_style/solver.prototxt \
-weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \
-gpu 0
</code>
However, we will train using Python in this example.
We'll first define run_solvers, a function that takes a list of solvers and steps each one in a round robin manner, recording the accuracy and loss values each iteration. At the end, the learned weights are saved to a file.
End of explanation
"""
niter = 200 # number of iterations to train
# Reset style_solver as before.
style_solver_filename = solver(style_net(train=True))
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(weights)
# For reference, we also create a solver that isn't initialized from
# the pretrained ImageNet weights.
scratch_style_solver_filename = solver(style_net(train=True))
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained', style_solver),
('scratch', scratch_style_solver)]
loss, acc, weights = run_solvers(niter, solvers)
print 'Done.'
train_loss, scratch_train_loss = loss['pretrained'], loss['scratch']
train_acc, scratch_train_acc = acc['pretrained'], acc['scratch']
style_weights, scratch_style_weights = weights['pretrained'], weights['scratch']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
"""
Explanation: Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (style_solver) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the copy_from method), and the other (scratch_style_solver) will start from a randomly initialized net.
During training, we should see that the ImageNet pretrained net is learning faster and attaining better accuracies than the scratch net.
End of explanation
"""
plot(np.vstack([train_loss, scratch_train_loss]).T)
xlabel('Iteration #')
ylabel('Loss')
plot(np.vstack([train_acc, scratch_train_acc]).T)
xlabel('Iteration #')
ylabel('Accuracy')
"""
Explanation: Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer.
End of explanation
"""
def eval_style_net(weights, test_iters=10):
test_net = caffe.Net(style_net(train=False), weights, caffe.TEST)
accuracy = 0
for it in xrange(test_iters):
accuracy += test_net.forward()['acc']
accuracy /= test_iters
return test_net, accuracy
test_net, accuracy = eval_style_net(style_weights)
print 'Accuracy, trained from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights)
print 'Accuracy, trained from random initialization: %3.1f%%' % (100*scratch_accuracy, )
"""
Explanation: Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to be much better than the one from training from scratch. Let's see.
End of explanation
"""
end_to_end_net = style_net(train=True, learn_all=True)
# Set base_lr to 1e-3, the same as last time when learning only the classifier.
# You may want to play around with different values of this or other
# optimization parameters when fine-tuning. For example, if learning diverges
# (e.g., the loss gets very large or goes to infinity/NaN), you should try
# decreasing base_lr (e.g., to 1e-4, then 1e-5, etc., until you find a value
# for which learning does not diverge).
base_lr = 0.001
style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(style_weights)
scratch_style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
scratch_style_solver.net.copy_from(scratch_style_weights)
print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained, end-to-end', style_solver),
('scratch, end-to-end', scratch_style_solver)]
_, _, finetuned_weights = run_solvers(niter, solvers)
print 'Done.'
style_weights_ft = finetuned_weights['pretrained, end-to-end']
scratch_style_weights_ft = finetuned_weights['scratch, end-to-end']
# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
"""
Explanation: 4. End-to-end finetuning for style
Finally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights "end-to-end" by turning on learning in all layers of the network, starting from the RGB conv1 filters directly applied to the input image. We pass the argument learn_all=True to the style_net function defined earlier in this notebook, which tells the function to apply a positive (non-zero) lr_mult value for all parameters. Under the default, learn_all=False, all parameters in the pretrained layers (conv1 through fc7) are frozen (lr_mult = 0), and we learn only the classifier layer fc8_flickr.
Note that both networks start at roughly the accuracy achieved at the end of the previous training session, and improve significantly with end-to-end training. To be more scientific, we'd also want to follow the same additional training procedure without the end-to-end training, to ensure that our results aren't better simply because we trained for twice as long. Feel free to try this yourself!
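The `lr_mult` mechanics can be sketched in a few lines (layer names and gradients here are illustrative, not Caffe's actual update code):

```python
import numpy as np

# Per-parameter learning rates in Caffe: effective_lr = base_lr * lr_mult,
# so lr_mult = 0 freezes a layer and lr_mult > 0 lets it train.
base_lr = 0.001
layers = {'conv1': 0.0, 'fc7': 0.0, 'fc8_flickr': 1.0}   # classifier-only setup
weights = {name: np.ones(3) for name in layers}
grads = {name: np.full(3, 2.0) for name in layers}

for name, lr_mult in layers.items():
    weights[name] -= base_lr * lr_mult * grads[name]

print(weights['conv1'][0], weights['fc8_flickr'][0])
```

Setting every `lr_mult` positive, as `learn_all=True` does, simply makes all layers participate in the same update rule.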
End of explanation
"""
test_net, accuracy = eval_style_net(style_weights_ft)
print 'Accuracy, finetuned from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights_ft)
print 'Accuracy, finetuned from random initialization: %3.1f%%' % (100*scratch_accuracy, )
"""
Explanation: Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or randomly initialized weights).
End of explanation
"""
plt.imshow(deprocess_net_image(image))
disp_style_preds(test_net, image)
"""
Explanation: We'll first look back at the image we started with and check our end-to-end trained model's predictions.
End of explanation
"""
batch_index = 1
image = test_net.blobs['data'].data[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[int(test_net.blobs['label'].data[batch_index])]
disp_style_preds(test_net, image)
"""
Explanation: Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time.
Finally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it.
End of explanation
"""
disp_style_preds(scratch_test_net, image)
"""
Explanation: We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (Pastel), but is much less confident in its prediction than the pretrained net.
End of explanation
"""
disp_imagenet_preds(imagenet_net, image)
"""
Explanation: Of course, we can again look at the ImageNet model's predictions for the above image:
End of explanation
"""
|
jtwhite79/pyemu | verification/Freyberg/verify_null_space_proj.ipynb | bsd-3-clause | %matplotlib inline
import os
import shutil
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
"""
Explanation: Verify pyEMU null-space projection with the Freyberg problem
End of explanation
"""
mc = pyemu.MonteCarlo(jco="freyberg.jcb",verbose=False,forecasts=[])
mc.drop_prior_information()
jco_ord = mc.jco.get(mc.pst.obs_names,mc.pst.par_names)
ord_base = "freyberg_ord"
jco_ord.to_binary(ord_base + ".jco")
mc.pst.control_data.parsaverun = ' '
mc.pst.write(ord_base+".pst")
nsing = 5
"""
Explanation: Instantiate the pyemu object and drop the prior information. Then reorder the Jacobian and save it as binary. This is needed because the PEST utilities require strict ordering between the control file and the Jacobian.
End of explanation
"""
# setup the dirs to hold all this stuff
par_dir = "prior_par_draws"
proj_dir = "proj_par_draws"
parfile_base = os.path.join(par_dir,"draw_")
projparfile_base = os.path.join(proj_dir,"draw_")
if os.path.exists(par_dir):
shutil.rmtree(par_dir)
os.mkdir(par_dir)
if os.path.exists(proj_dir):
shutil.rmtree(proj_dir)
os.mkdir(proj_dir)
mc = pyemu.MonteCarlo(jco=ord_base+".jco")
# make some draws
mc.draw(10)
#for i in range(10):
# mc.parensemble.iloc[i,:] = i+1
#write them to files
mc.parensemble.index = [str(i+1) for i in range(mc.parensemble.shape[0])]
mc.parensemble.to_parfiles(parfile_base)
mc.parensemble.shape
"""
Explanation: Draw some vectors from the prior and write the vectors to par files
End of explanation
"""
exe = os.path.join("pnulpar.exe")
args = [ord_base+".pst","y",str(nsing),"y","pnulpar_qhalfx.mat",parfile_base,projparfile_base]
in_file = os.path.join("misc","pnulpar.in")
with open(in_file,'w') as f:
f.write('\n'.join(args)+'\n')
os.system(exe + ' <'+in_file)
pnul_en = pyemu.ParameterEnsemble(mc.pst)
parfiles =[os.path.join(proj_dir,f) for f in os.listdir(proj_dir) if f.endswith(".par")]
pnul_en.read_parfiles(parfiles)
pnul_en.loc[:,"fname"] = pnul_en.index
pnul_en.index = pnul_en.fname.apply(lambda x:str(int(x.split('.')[0].split('_')[-1])))
f = pnul_en.pop("fname")
pnul_en.sort_index(axis=1,inplace=True)
pnul_en.sort_index(axis=0,inplace=True)
pnul_en
"""
Explanation: Run pnulpar
End of explanation
"""
print(mc.parensemble.istransformed)
mc.parensemble._transform()
en = mc.project_parensemble(nsing=nsing,inplace=False)
print(mc.parensemble.istransformed)
#en._back_transform()
en.sort_index(axis=1,inplace=True)
en.sort_index(axis=0,inplace=True)
en
#pnul_en.sort(inplace=True)
#en.sort(inplace=True)
diff = 100.0 * np.abs(pnul_en - en) / en
#diff[diff<1.0] = np.NaN
dmax = diff.max(axis=0)
dmax.sort_index(ascending=False,inplace=True)
dmax.plot(figsize=(10,10))
diff
en.loc[:,"wf6_2"]
pnul_en.loc[:,"wf6_2"]
"""
Explanation: Now for pyemu
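Before the pyemu call, here is a minimal NumPy sketch of what a null-space projection does, with a made-up Jacobian (pnulpar and pyemu additionally weight by the prior covariance via Q^(1/2), which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a Jacobian: more parameters than informed directions.
J = rng.normal(size=(8, 12))
nsing = 5

# Split parameter space with the SVD: the first nsing right singular vectors
# span the solution space; the remainder spans the (approximate) null space.
_, _, Vt = np.linalg.svd(J, full_matrices=True)
V2 = Vt[nsing:].T

delta = rng.normal(size=12)          # deviation of a prior draw from the base parameters
proj = V2 @ (V2.T @ delta)           # keep only the null-space component

# The projected deviation is (numerically) invisible to the first nsing directions.
print(np.abs(Vt[:nsing] @ proj).max())
```

Adding `proj` back onto the calibrated parameters perturbs only directions the observations cannot see, which is the point of null-space Monte Carlo.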
End of explanation
"""
|
calroc/joypy | docs/4. Replacing Functions in the Dictionary.ipynb | gpl-3.0 | from notebook_preamble import D, J, V
"""
Explanation: Preamble
End of explanation
"""
V('[23 18] average')
"""
Explanation: A long trace
End of explanation
"""
J('[sum] help')
J('[size] help')
"""
Explanation: Replacing sum and size with "compiled" versions.
Both sum and size are catamorphisms: each converts a sequence to a single value.
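For readers unfamiliar with the stack representation: Joy stacks are cons lists built from nested tuples. A hypothetical minimal version of the helpers used here:

```python
# A Joy stack is a cons list of nested tuples: (top, (next, (..., ())))
def list_to_stack(items, stack=()):
    for item in reversed(items):
        stack = item, stack
    return stack

def iter_stack(stack):
    while stack:
        item, stack = stack
        yield item

s = list_to_stack([23, 18])
print(list(iter_stack(s)), sum(iter_stack(s)))
```

A catamorphism over such a stack is then just an ordinary fold over `iter_stack`, which is exactly what the "compiled" size and sum below do.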
End of explanation
"""
from joy.library import SimpleFunctionWrapper, primitives
from joy.utils.stack import iter_stack
@SimpleFunctionWrapper
def size(stack):
'''Return the size of the sequence on the stack.'''
sequence, stack = stack
n = 0
for _ in iter_stack(sequence):
n += 1
return n, stack
sum_ = next(p for p in primitives if p.name == 'sum')
"""
Explanation: We can use "compiled" versions (they're not really compiled in this case, they're hand-written in Python) to speed up evaluation and make the trace more readable. The sum function is already in the library. It gets shadowed by the definition version above during initialize().
End of explanation
"""
old_sum, D['sum'] = D['sum'], sum_
old_size, D['size'] = D['size'], size
"""
Explanation: Now we replace them old versions in the dictionary with the new versions and re-evaluate the expression.
End of explanation
"""
V('[23 18] average')
"""
Explanation: You can see that size and sum now execute in a single step.
End of explanation
"""
|
uqyge/combustionML | ode/ode_mlp_good.ipynb | mit | '''
keras mlp regression
'''
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
%matplotlib inline
"""
Explanation: Deep Learning for Simulating a PDE (the Laplace Equation)
$$
\nabla^2 \phi(x,y) = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 0, \quad \forall (x,y) \in D,
$$
with the following BCs:
$\phi(x,y) = 0$ for $(x,y) \in \{(x,y) \in D \mid x=0,\ x=1,\ \text{or}\ y=0\}$
$\phi(x,y) = \sin \pi x$ for $(x,y) \in \{(x,y) \in D \mid y=1\}$
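As a sanity check on this boundary-value problem, the separable solution $\phi = \sin(\pi x)\,\sinh(\pi y)/\sinh(\pi)$ (the `analytic_solution` used below) should annihilate a discrete 5-point Laplacian up to truncation error:

```python
import numpy as np

# Check the analytic solution against the 5-point Laplacian (interior points only).
n = 41
x = np.linspace(0, 1, n)
y = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, y)
phi = np.sin(np.pi * X) * np.sinh(np.pi * Y) / np.sinh(np.pi)

h = x[1] - x[0]
lap = (phi[1:-1, 2:] + phi[1:-1, :-2] + phi[2:, 1:-1] + phi[:-2, 1:-1]
       - 4.0 * phi[1:-1, 1:-1]) / h**2
print(np.abs(lap).max())
```

The maximum residual shrinks as $O(h^2)$, as expected for a harmonic function under the second-order stencil.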
Load all the libraries
End of explanation
"""
nx = 20
ny = 20
x_space = np.linspace(0, 1, nx)
y_space = np.linspace(0, 1, ny)
def analytic_solution(x):
return (1 / (np.exp(np.pi) - np.exp(-np.pi))) * \
np.sin(np.pi * x[0]) * (np.exp(np.pi * x[1]) - np.exp(-np.pi * x[1]))
x_input = np.zeros((ny,nx,2))
surface = np.zeros((ny, nx))
for i, x in enumerate(x_space):
for j, y in enumerate(y_space):
surface[i][j] = analytic_solution([x, y])
x_input[i][j] = [x, y]
x_input = x_input.reshape(-1, x_input.shape[-1])
y_anal = surface.reshape(-1,1)
print('generate data from analytic solution')
###
fig = plt.figure()
ax = fig.gca(projection='3d')
X, Y = np.meshgrid(x_space, y_space)
surf = ax.plot_surface(X, Y, surface, rstride=1, cstride=1, cmap=cm.viridis,
linewidth=0, antialiased=False)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_zlim(0, 2)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.colorbar(surf)
"""
Explanation: Generate data samples from the analytical solution
End of explanation
"""
batch_size = 32
epochs = 400
vsplit = 0.
print('Building model...')
model = Sequential()
model.add(Dense(20, input_shape=(2,)))
model.add(Activation('relu'))
model.add(Dropout(0.))
model.add(Dense(400))
model.add(Activation('relu'))
model.add(Dropout(0.))
model.add(Dense(20))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
#model.add(Activation('linear'))
model.compile(loss='mse',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_input, y_anal,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_split=vsplit)
# score = model.evaluate(x_test, y_test,
# batch_size=batch_size, verbose=1)
# print('Test score:', score[0])
# print('Test accuracy:', score[1])
"""
Explanation: Build an MLP emulator
A 3-hidden-layer MLP:
20 × 400 × 20
End of explanation
"""
if(bool(vsplit)):
# summarize history for accuracy
fig = plt.figure()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# summarize history for loss
fig = plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
"""
Explanation: Learning Curves
End of explanation
"""
x_test = x_input+0.0
x_test_space=x_test.reshape(ny,nx,2)[0,:,1]
y_test_space=x_test.reshape(ny,nx,2)[:,0,0]
surface_predict = model.predict(x_test).reshape(ny, nx)
"""
Explanation: ML inferencing
End of explanation
"""
###
fig = plt.figure()
ax = fig.gca(projection='3d')
X, Y = np.meshgrid(x_test_space, y_test_space)
surf_pdt = ax.plot_surface(X, Y, surface_predict, rstride=1, cstride=1, cmap=cm.viridis,
linewidth=0, antialiased=False)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_zlim(0, 2)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.colorbar(surf_pdt)
fig = plt.figure()
ax = fig.gca(projection='3d')
X, Y = np.meshgrid(x_space, y_space)
surf = ax.plot_surface(X, Y, surface, rstride=1, cstride=1, cmap=cm.viridis,
linewidth=0, antialiased=False)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_zlim(0, 2)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.colorbar(surf)
"""
Explanation: Compare to the analytical solution
End of explanation
"""
|
dwhswenson/annotated_trajectories | examples/annotation_example.ipynb | lgpl-2.1 | from __future__ import print_function
import openpathsampling as paths
from annotated_trajectories import AnnotatedTrajectory, Annotation, plot_annotated
"""
Explanation: Annotated Trajectory Example
This example shows how to annotate a trajectory (and save the annotations) using the annotated_trajectories package, which supplements OpenPathSampling.
First you'll need to import the two packages (after installing them, of course).
End of explanation
"""
from openpathsampling.tests.test_helpers import make_1d_traj
traj = make_1d_traj([-1, 1, 4, 3, 6, 11, 22, 33, 23, 101, 205, 35, 45])
# to get a real trajectory:
# from openpathsampling.engines.openmm.tools import ops_load_trajectory
# traj = ops_load_trajectory("name_of_file.xtc", top="topology.pdb") # can also be a .gro
"""
Explanation: Now I'm going to create some fake data:
End of explanation
"""
storage = paths.Storage("output.nc", "w")
"""
Explanation: Next I'll open the file. You'll only do this once, and then add all of your annotations for each trajectory into the open file.
End of explanation
"""
annotations = [
Annotation(state="1-digit", begin=1, end=4),
Annotation(state="2-digit", begin=6, end=8),
Annotation(state="3-digit", begin=10, end=10),
Annotation(state="2-digit", begin=11, end=12)
]
a_traj = AnnotatedTrajectory(trajectory=traj, annotations=annotations)
"""
Explanation: Annotating trajectories
Now we get to the core. For each trajectory, you can choose state names, and you create a list of annotations for those states. Each annotation includes the state name, the first frame in the state, and the final frame in the state (first and final, named begin and end, are included in the state). Remember that, in Python, the first frame is 0.
Once you've made your annotations, you assign them to your trajectory by putting them both into an AnnotatedTrajectory object.
End of explanation
"""
storage.tag['my_file_name'] = a_traj
"""
Explanation: Note that I worry more about incorrectly identifying something as in the state when it actually is not, than missing any frame that could be in the state. There's always some room for optimization here, but you should err on the side of ensuring that your labels actually identify that state. Allow false negatives; don't allow false positives.
Next, you save the trajectory to the file using the tag attribute of the storage. This will save both the trajectory and all its annotations to the file.
In the future, we hope to avoid use of the tag store. However, for now I recommend using something like the file name of the trajectory as the string for the tag. It must be unique.
End of explanation
"""
storage.sync()
storage.close()
"""
Explanation: Repeat the steps in the last two cells for each trajectory. When you're done, you can run:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def ln_x(snapshot):
import math
return math.log10(abs(snapshot.xyz[0][0]))
cv = paths.CoordinateFunctionCV("log(x)", ln_x)
state_1 = paths.CVDefinedVolume(cv, 0.0, 1.0)
state_2 = paths.CVDefinedVolume(cv, 1.0, 2.0)
state_3 = paths.CVDefinedVolume(cv, 2.0, 3.0)
names_to_states = {
'1-digit': state_1,
'2-digit': state_2,
'3-digit': state_3,
}
names_to_colors = {
'1-digit': 'b',
'2-digit': 'c',
'3-digit': 'r'
}
plot_annotated(a_traj, cv, names_to_states, names_to_colors, dt=0.1)
plt.xlim(xmax=1.5)
plt.ylim(ymin=-0.1)
"""
Explanation: Plotting annotations
End of explanation
"""
# only difference is that I claim frame 5 (x=11) is in the 1-digit state
bad_annotations = [
Annotation(state="1-digit", begin=1, end=5),
Annotation(state="2-digit", begin=6, end=8),
Annotation(state="3-digit", begin=10, end=10),
Annotation(state="2-digit", begin=11, end=12)
]
bad_traj = AnnotatedTrajectory(traj, bad_annotations)
plot_annotated(bad_traj, cv, names_to_states, names_to_colors, dt=0.1)
plt.xlim(xmax=1.5)
plt.ylim(ymin=-0.1)
"""
Explanation: Checking for conflicts
Now I'm going to label one of my frames in a way that conflicts with my state definition. This means I'll have a false positive: the state definition says this is in the state when the annotation says it isn't (and specifically, when the annotation says it is in a different state).
Note that this still isn't a sufficient check: a more complicated test would ensure that, if the state gives a false positive where there is no annotation, that one of the nearest annotations (forward or backward) is of the same state. This has not been implemented yet, but might be added in the future.
End of explanation
"""
(results, conflicts) = bad_traj.validate_states(names_to_states)
"""
Explanation: Note how the value at $t=0.5$ conflicts: the annotation (the line) says that this is in the blue (1-digit) state, whereas the state definition (the points) says that this is in the cyan (2-digit) state.
End of explanation
"""
print(results['1-digit'])
print(results['2-digit'])
print(results['3-digit'])
print(bad_traj.trajectory.index(results['1-digit'].false_negative[0]))
"""
Explanation: The results object will count something as a false positive if the state identifies it, but the annotation doesn't. Not all false positives are bad: if the state identified by the annotation is smaller than the state definition, then the state definition will catch some frames that the annotation left marked as not in any state.
It will count something as a false negative if the annotation identifies it, but the state doesn't. Not all false negatives are bad: this merely indicates that the state definitions may claim a frame is in no state, even though the annotation says it is in a state. In fact, we typically want this: a reasonable number of false negatives is necessary for your state to have a decent flux in TIS.
End of explanation
"""
print(conflicts)
"""
Explanation: While false positives and false negatives are not inherently bad, conflicts are. A frame is said to be in conflict if the state definition assigns a different state than the annotation. The conflicts dictionary tells which frame numbers are in conflict, organized by which state definition volume they are in.
End of explanation
"""
bad_traj.get_label_for_frame(5)
"""
Explanation: You can identify which state the annotations claim using get_label_for_frame:
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/09_sequence/text_classification.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
"""
Explanation: <h1> Text Classification using TensorFlow/Keras on AI Platform </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for AI Platform using BigQuery
<li> Creating a text classification model using the Estimator API with a Keras model
<li> Training on Cloud AI Platform
<li> Rerun with pre-trained embedding
</ol>
End of explanation
"""
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.6'
if 'COLAB_GPU' in os.environ: # this is always set on Colab, the value is 0 or 1 depending on whether a GPU is attached
from google.colab import auth
auth.authenticate_user()
# download "sidecar files" since on Colab, this notebook will be on Drive
!rm -rf txtclsmodel
!git clone --depth 1 https://github.com/GoogleCloudPlatform/training-data-analyst
!mv training-data-analyst/courses/machine_learning/deepdive/09_sequence/txtclsmodel/ .
!rm -rf training-data-analyst
# downgrade TensorFlow to the version this notebook has been tested with
#!pip install --upgrade tensorflow==$TFVERSION
import tensorflow as tf
print(tf.__version__)
"""
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
End of explanation
"""
%load_ext google.cloud.bigquery
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
"""
Explanation: We will look at the titles of articles and figure out whether the article came from the New York Times, TechCrunch or GitHub.
We will use hacker news as our data source. It is an aggregator that displays tech related headlines from various sources.
Creating Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 10
"""
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
"""
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
query="""
SELECT source, LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title FROM
(SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
title
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
)
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
"""
df = bq.query(query + " LIMIT 5").to_dataframe()
df.head()
"""
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for AI Platform.
End of explanation
"""
traindf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) > 0").to_dataframe()
evaldf = bq.query(query + " AND ABS(MOD(FARM_FINGERPRINT(title), 4)) = 0").to_dataframe()
"""
Explanation: For ML training, we will need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset).
A simple, repeatable way to do this is to use the hash of a well-distributed column in our data (See https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning).
End of explanation
"""
traindf['source'].value_counts()
evaldf['source'].value_counts()
"""
Explanation: Below we can see that roughly 75% of the data is used for training, and 25% for evaluation.
We can also see that within each dataset, the classes are roughly balanced.
End of explanation
"""
import os, shutil
DATADIR='data/txtcls'
shutil.rmtree(DATADIR, ignore_errors=True)
os.makedirs(DATADIR)
traindf.to_csv( os.path.join(DATADIR,'train.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
evaldf.to_csv( os.path.join(DATADIR,'eval.tsv'), header=False, index=False, encoding='utf-8', sep='\t')
!head -3 data/txtcls/train.tsv
!wc -l data/txtcls/*.tsv
"""
Explanation: Finally we will save our data, which is currently in-memory, to disk.
End of explanation
"""
%%bash
pip install google-cloud-storage
rm -rf txtcls_trained
gcloud ai-platform local train \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
-- \
--output_dir=${PWD}/txtcls_trained \
--train_data_path=${PWD}/data/txtcls/train.tsv \
--eval_data_path=${PWD}/data/txtcls/eval.tsv \
--num_epochs=0.1
"""
Explanation: TensorFlow/Keras Code
Please explore the code in this <a href="txtclsmodel/trainer">directory</a>: model.py contains the TensorFlow model and task.py parses command line arguments and launches off the training job.
In particular look for the following:
tf.keras.preprocessing.text.Tokenizer.fit_on_texts() to generate a mapping from our word vocabulary to integers
tf.keras.preprocessing.text.Tokenizer.texts_to_sequences() to encode our sentences into a sequence of their respective word-integers
tf.keras.preprocessing.sequence.pad_sequences() to pad all sequences to be the same length
The embedding layer in the Keras model takes care of one-hot encoding these integers and learning a dense embedding representation from them.
Finally we pass the embedded text representation through a CNN model pictured below
<img src=images/txtcls_model.png width=25%>
Run Locally (optional step)
Let's make sure the code compiles by running locally for a fraction of an epoch.
This may not work if you don't have all the packages installed locally for gcloud (such as in Colab).
This is an optional step; move on to training on the cloud.
End of explanation
"""
%%bash
gsutil cp data/txtcls/*.tsv gs://${BUCKET}/txtcls/
%%bash
OUTDIR=gs://${BUCKET}/txtcls/trained_fromscratch
JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S)
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/txtclsmodel/trainer \
--job-dir=$OUTDIR \
--scale-tier=BASIC_GPU \
--runtime-version 2.3 \
--python-version 3.7 \
-- \
--output_dir=$OUTDIR \
--train_data_path=gs://${BUCKET}/txtcls/train.tsv \
--eval_data_path=gs://${BUCKET}/txtcls/eval.tsv \
--num_epochs=5
"""
Explanation: Train on the Cloud
Let's first copy our training data to the cloud:
End of explanation
"""
!gcloud ai-platform jobs describe txtcls_190209_224828
"""
Explanation: Change the job name appropriately. View the job in the console, and wait until the job is complete.
End of explanation
"""
!gsutil cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/text_classification/glove.6B.200d.txt gs://$BUCKET/txtcls/
"""
Explanation: Results
What accuracy did you get? You should see around 80%.
Rerun with Pre-trained Embedding
We will use the popular GloVe embedding which is trained on Wikipedia as well as various news sources like the New York Times.
You can read more about Glove at the project homepage: https://nlp.stanford.edu/projects/glove/
You can download the embedding files directly from the stanford.edu site, but we've rehosted it in a GCS bucket for faster download speed.
End of explanation
"""
|
NLeSC/noodles | notebooks/control_your_flow.ipynb | apache-2.0 | sentence = 'the quick brown fox jumps over the lazy dog'
reverse = []
def reverse_word(word):
return word[::-1]
for word in sentence.split():
reverse.append(reverse_word(word))
result = ' '.join(reverse)
print(result)
"""
Explanation: Advanced: Control your flow
Here we dive a bit deeper into advanced flow control in Noodles, starting with a recap of for-loops and moving on to conditional evaluation of workflows and standard algorithms. This chapter will also go a bit deeper into the territory of functional programming. Specifically, we will see how to program sequential loops using only functions and recursion.
If you are new to the concepts of recursion, here is some nice material to start with:
Recap: for loops
In the Translating Poetry tutorial we saw how we could create parallel for loops in Noodles. To recap, let's reverse the words in a sentence. Assume you have the following for-loop in Python:
End of explanation
"""
reverse = [reverse_word(word) for word in sentence.split()]
result = ' '.join(reverse)
print(result)
"""
Explanation: There is a pattern to this code that is better written as:
End of explanation
"""
import noodles
@noodles.schedule
def reverse_word(word):
return word[::-1]
@noodles.schedule
def make_sentence(words):
return ' '.join(words)
reverse_words = noodles.gather_all(
reverse_word(word) for word in sentence.split())
workflow = make_sentence(reverse_words)
from noodles.tutorial import display_workflows
noodles.tutorial.display_workflows(prefix='control', quick_brown_fox=workflow)
"""
Explanation: This last version can be translated to Noodles. Assume for some reason we want to schedule the reverse_word function (it takes forever to run on a single core!). Because reverse_words becomes a promise, the line with ' '.join(reverse) also has to be captured in a scheduled function.
End of explanation
"""
from noodles.tutorial import display_text
def factorial(x):
if x == 0:
return 1
else:
return factorial(x - 1) * x
display_text('100! = {}'.format(factorial(100)))
"""
Explanation: This example shows how we can do loops in parallel. There are cases where we will need to do loops in a serialised manner. For example, if we are handling a very large data set and all of the computation does not fit in memory when done in parallel.
There are hybrid divide and conquer approaches that can be implemented in Noodles. We then chunk all the work in blocks that can be executed in parallel, and stop when the first chunk gives us reason to. Divide-and-conquer can be implemented using a combination of the two looping strategies (parallel and sequential).
Sequential loops are made using recursion techniques.
Recursion
Sequential loops can be made in Noodles using recursion. Comes the obligatory factorial function example:
End of explanation
"""
try:
    display_text('10000! = {}'.format(factorial(10000)))
except RecursionError as e:
display_text(e)
"""
Explanation: There is a problem with such a recursive algorithm when numbers get too high.
End of explanation
"""
@noodles.schedule
def factorial(x, acc=1):
if x == 0:
return acc
else:
return factorial(x - 1, acc * x)
result = noodles.run_single(factorial(10000))
display_text('10000! = {}'.format(result))
"""
Explanation: Yikes! Let's head on and translate the program to Noodles. Suppose we make factorial a scheduled function: we cannot multiply a promise with a number just like that (at least not in the current version of Noodles). We change the function slightly with a second argument that keeps count. This also makes the factorial function tail-recursive.
End of explanation
"""
import numpy
import time
from copy import copy
gobble_size = 10000000
@noodles.schedule(call_by_ref=['gobble'])
def mul(x, y, gobble):
return x*y
@noodles.schedule(call_by_ref=['gobble'])
def factorial(x, gobble):
time.sleep(0.1)
if x == 0:
return 1
else:
return mul(factorial(x - 1, copy(gobble)), x, gobble)
gobble = numpy.zeros(gobble_size)
result = noodles.run_single(factorial(50, gobble))
"""
Explanation: Yeah! Noodles runs the tail-recursive function iteratively! This is actually very important. We'll do a little experiment. Start your system monitor (plotting a graph of your memory usage) and run the following snippets. We let every function call to factorial gobble up some memory and to be able to measure the effect of that we insert a small sleep. Fair warning: With the current setting of gobble_size and running 50 loops, the first version will take about 4GB of memory. Just change the size so that a measurable fraction of your RAM is taken up by the process and you can see the result.
End of explanation
"""
@noodles.schedule(call_by_ref=['gobble'])
def factorial_tr(x, acc=1, gobble=None):
time.sleep(0.1)
if x == 0:
return acc
else:
return factorial_tr(x - 1, mul(acc, x, gobble), copy(gobble))
gobble = numpy.zeros(gobble_size)
result = noodles.run_single(factorial_tr(50, gobble=gobble))
"""
Explanation: We passed the gobble argument by reference. This prevents Noodles from copying the array when creating the workflow. If you have functions that take large arrays as input and you don't change the value of the array in between calls this is a sensible thing to do.
On my machine, running only 10 loops, this gives the following result:
Try to understand why this happens. We have reserved a NumPy array with gobble_size ($10^7$) floating points of 8 bytes each. The total size of this array is $8 \times 10^7$ bytes, or $80\ MB$. In each recursive call to factorial the array is copied, so in total this will use $10 \cdot 80\ MB = 800\ MB$ of memory!
The next version is tail-recursive. This should barely make a dent in your memory usage!
End of explanation
"""
display_workflows(
prefix='control',
factorial_one=noodles.unwrap(factorial)(10, '<memory gobble>'))
"""
Explanation: Now, the factorial function is still recursive. However, since returning a call to the factorial function is the last thing we do, the intermediate results can be safely thrown away. We'll have in memory the original reference to gobble and one version in the Noodles run-time for the last time factorial returned a workflow where copy(gobble) was one of the arguments.
In total this gives a memory consumption of $160\ MB$ (plus a little extra for the Python run-time itself). We see peaks that reach over $250\ MB$ in the graph: this is where gobble is being copied, after which the garbage collector deletes the old array.
Try to understand why this happens. In the first case the function returns a new workflow to be evaluated. This workflow has two nodes:
End of explanation
"""
display_workflows(
prefix='control',
tail_recursive_factorial=noodles.unwrap(factorial_tr)(10, gobble='<memory gobble>'))
"""
Explanation: To evaluate this workflow, Noodles first runs the top node factorial(9, '<memory gobble>'). When the answer for this function is obtained it is inserted into the slot for mul(-, 10). Until the entire workflow is evaluated, the <memory gobble> remains in memory. Before this happens the factorial function is called which copies the gobble and creates a new workflow! We can write this out by expanding our algorithm symbolically $f(x) = x \cdot f(x-1)$:
$$\begin{align}
f(10) &= 10 \cdot f(9)\\
&= 10 \cdot (9 \cdot f(8))\\
&= 10 \cdot (9 \cdot (8 \cdot f(7)))\\
&\dots\\
&= 10 \cdot (9 \cdot (8 \cdot (7 \cdot (6 \cdot (5 \cdot (4 \cdot (3 \cdot (2 \cdot 1))))))))\\
&= 10 \cdot (9 \cdot (8 \cdot (7 \cdot (6 \cdot (5 \cdot (4 \cdot (3 \cdot 2)))))))\\
&= 10 \cdot (9 \cdot (8 \cdot (7 \cdot (6 \cdot (5 \cdot (4 \cdot 6))))))\\
&\dots
\end{align}$$
Now for the tail-recursive version, the workflow looks a bit different:
End of explanation
"""
@noodles.schedule
def method_one(x):
pass
@noodles.schedule
def method_two(x):
pass
@noodles.schedule
def what_to_do(x):
if condition(x):
return method_one(x)
else:
return method_two(x)
"""
Explanation: First the mul(1, 10, '<memory gobble>') is evaluated. Its result is inserted into the empty slot in the call to factorial_tr. This call returns a new workflow with a new copy of <memory gobble>. This time however, the old workflow can be safely deleted. Again, it helps to look at the algorithm symbolically, given
$f(x, a) = f(x-1, x \cdot a)$:
$$\begin{align}
f(10, 1) &= f(9, (10 \cdot 1))\\
&= f(9, 10)\\
&= f(8, (9 \cdot 10))\\
&= f(8, 90)\\
&= f(7, (8 \cdot 90))\\
&\dots
\end{align}$$
Conditional evaluation
But Python has more statements for flow control! The conditional execution of code is regulated through the if statement. You may want to make the exection of parts of your workflow conditional based on intermediate results. One such instance may look like this:
End of explanation
"""
for n in range(2, 10):
for x in range(2, n):
if n % x == 0:
print(n, 'equals', x, '*', n//x)
break
else:
# loop fell through without finding a factor
print(n, 'is a prime number')
"""
Explanation: We've put the if-statement inside the scheduled function what_to_do. This returns a new workflow depending on the value of x. We can no longer get a nice single graph picture of the workflow, because the workflow doesn't exist! (there is no spoon ...) We can work through a small example from the Python tutorial: computing prime numbers.
End of explanation
"""
@noodles.schedule
def divides(n, x):
return n % x == 0
"""
Explanation: The core computation in this example is the n % x == 0 bit. So we start by creating a scheduled function that does that.
End of explanation
"""
@noodles.schedule
def compress(lst):
"""Takes a list of pairs, returns a list of
first elements of those pairs for which the
    second element is truthy."""
return [a for a, b in lst if b]
"""
Explanation: Noodles can parallelize the inner loop, but this gives a problem: how do we know when to stop? There is no way to get it both ways.
First, we'll see how to do the parallel solution. We'll compute the divides(n, x) function for the values of n and x and then filter out those where divides gave False. This last step is done using the compress function.
End of explanation
"""
?filter
"""
Explanation: Using the compress function we can write the Noodlified parallel version of the filter function. We'll call it p_filter for parallel filter.
End of explanation
"""
def p_filter(f, lst):
return compress(noodles.gather_all(
noodles.gather(x, f(x)) for x in lst))
def find_factors(n):
return p_filter(lambda x: divides(n, x), range(2, n))
display_workflows(prefix='control', factors=find_factors(5))
"""
Explanation: Using the generic p_filter function we then write the function find_factors that finds all integer factors of a number in parallel. Both p_filter and find_factors won't be scheduled functions. Rather, together they build the workflow that solves our problem.
End of explanation
"""
result = noodles.run_parallel(
noodles.gather_all(noodles.gather(n, find_factors(n))
for n in range(2, 10)),
n_threads=4)
for n, factors in result:
if factors:
print(n, 'equals', ', '.join(
'{}*{}'.format(x, n//x) for x in factors))
else:
print(n, 'is prime')
"""
Explanation: No we can run this workflow for all the numbers we like.
End of explanation
"""
def find_first(f, lst):
if not lst:
return None
elif f(lst[0]):
return lst[0]
else:
return find_first(f, lst[1:])
"""
Explanation: Few! We managed, but if all we wanted to do is find primes, we did way too much work; we also found all factors of the numbers. We had to write some boiler plate code. Argh, this tutorial was supposed to be on flow control! We move on to the sequential version. Wait, I hear you think, we were using Noodles to do things in parallel!?? Why make an effort to do sequential work? Well, we'll need it to implement the divide-and-conquer strategy, among other things. Noodles is not only a framework for parallel programming, but it also works concurrent. In the context of a larger workflow we may still want to make decision steps on a sequential basis, while another component of the workflow is happily churning out numbers.
Find-first
Previously we saw the definition of a Noodlified filter function. How can we write a find_first that stops after finding a first match? If we look at the workflow that p_filter produces, we see that all predicates are already present in the workflow and will be computed concurrently. We now write a sequential version. We may achieve sequential looping through recursion like this:
End of explanation
"""
@noodles.schedule
def find_first_helper(f, lst, first):
if first:
return lst[0]
elif len(lst) == 1:
return None
else:
return find_first_helper(f, lst[1:], f(lst[1]))
def find_first(f, lst):
return find_first_helper(f, lst, f(lst[0]))
noodles.run_single(find_first(lambda x: divides(77, x), range(2, 63)))
"""
Explanation: However, if f is a scheduled function f(lst[0]) will give a promise, and this routine will fail.
End of explanation
"""
%%writefile test-tail-recursion.py
import numpy
import noodles
import time
from copy import copy
@noodles.schedule(call_by_ref=['gobble'])
def factorial_tr(x, acc=1, gobble=None):
time.sleep(0.1)
if x == 0:
return acc
else:
return factorial_tr(x - 1, acc * x, copy(gobble))
gobble_size = 10000000
gobble = numpy.zeros(gobble_size)
result = noodles.run_single(factorial_tr(10, gobble=gobble))
%%writefile test-recursion.py
import numpy
import noodles
import time
from copy import copy
@noodles.schedule(call_by_ref=['gobble'])
def mul(x, y, gobble):
return x*y
@noodles.schedule(call_by_ref=['gobble'])
def factorial(x, gobble):
time.sleep(0.1)
if numpy.all(x == 0):
return numpy.ones_like(x)
else:
return mul(factorial(x - 1, copy(gobble)), x, gobble)
gobble_size = 10000000
gobble = numpy.zeros(gobble_size)
result = noodles.run_single(factorial(10, gobble))
!pip install matplotlib
!pip install memory_profiler
%%bash
rm mprofile_*.dat
mprof run -T 0.001 python ./test-tail-recursion.py
mprof run -T 0.001 python ./test-recursion.py
from pathlib import Path
from matplotlib import pyplot as plt
import numpy
plt.rcParams['font.family'] = 'serif'
def read_mprof(filename):
lines = list(open(filename, 'r'))
cmd = filter(lambda l: l[:3] == 'CMD', lines)
mem = filter(lambda l: l[:3] == 'MEM', lines)
data = numpy.array([list(map(float, l.split()[1:])) for l in mem])
data[:,1] -= data[0,1]
data[:,0] *= 1024**2
return cmd, data
def plot_mprof(filename):
cmd, data = read_mprof(filename)
if 'tail' in next(cmd):
figname = 'tail-recursion'
else:
figname = 'recursion'
plt.plot(data[:,1], data[:,0] / 1e6)
plt.xlabel('time (s)')
plt.ylabel('memory usage (MB)')
plt.title(figname)
plt.savefig('control-' + figname + '-raw.svg', bbox_inches='tight')
plt.show()
files = list(Path('.').glob('mprofile_*.dat'))
for f in files:
plot_mprof(f)
plt.close()
"""
Explanation: That works. Now suppose the input list is somewhat harder to compute; every element is the result of a workflow.
Appendix: creating memory profile plots
End of explanation
"""
|
diegocavalca/Studies | phd-thesis/nilmtk/loading_data_into_memory.ipynb | cc0-1.0 | from nilmtk import DataSet
iawe = DataSet('/data/iawe.h5')
elec = iawe.buildings[1].elec
elec
"""
Explanation: Loading data into memory
Loading API is central to a lot of nilmtk operations and provides a great deal of flexibility. Let's look at ways in which we can load data from a NILMTK DataStore into memory. To see the full range of possible queries, we'll use the iAWE data set (whose HDF5 file can be downloaded here).
The load function returns a generator of DataFrames loaded from the DataStore based on the conditions specified. If no conditions are specified, then all data from all the columns is loaded. (If you have not come across Python generators, it might be worth reading this quick guide to Python generators.)
NOTE: If you are on Windows, remember to escape the backslashes, use forward slashes, or use raw strings when passing paths in Python, e.g. any one of the following would work:
python
iawe = DataSet('c:\\data\\iawe.h5')
iawe = DataSet('c:/data/iawe.h5')
iawe = DataSet(r'c:\data\iawe.h5')
End of explanation
"""
fridge = elec['fridge']
fridge.available_columns()
"""
Explanation: Let us see what measurements we have for the fridge:
End of explanation
"""
df = next(fridge.load())
df.head()
"""
Explanation: Loading data
Load all columns (default)
End of explanation
"""
series = next(fridge.power_series())
series.head()
"""
Explanation: Load a single column of power data
Use fridge.power_series() which returns a generator of 1-dimensional pandas.Series objects, each containing power data using the most 'sensible' AC type:
End of explanation
"""
series = next(fridge.power_series(ac_type='reactive'))
series.head()
"""
Explanation: or, to get reactive power:
End of explanation
"""
df = next(fridge.load(physical_quantity='power', ac_type='reactive'))
df.head()
"""
Explanation: Specify physical_quantity or AC type
End of explanation
"""
df = next(fridge.load(physical_quantity='voltage'))
df.head()
df = next(fridge.load(physical_quantity = 'power'))
df.head()
"""
Explanation: To load voltage data:
End of explanation
"""
df = next(fridge.load(ac_type='active'))
df.head()
"""
Explanation: Loading by specifying AC type
End of explanation
"""
# resample to minutely (i.e. with a sample period of 60 secs)
df = next(fridge.load(ac_type='active', sample_period=60))
df.head()
"""
Explanation: Loading by resampling to a specified period
End of explanation
"""
|
miykael/nipype_tutorial | notebooks/example_2ndlevel.ipynb | bsd-3-clause | from nilearn import plotting
%matplotlib inline
from os.path import join as opj
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.interfaces.spm import (OneSampleTTestDesign, EstimateModel,
EstimateContrast, Threshold)
from nipype.interfaces.utility import IdentityInterface
from nipype import Workflow, Node
from nipype.interfaces.fsl import Info
from nipype.algorithms.misc import Gunzip
"""
Explanation: Example 4: 2nd-level Analysis
Last but not least, the 2nd-level analysis. After we removed left-handed subjects and normalized all subject data into template space, we can now do the group analysis. To show the flexibility of Nipype, we will run the group analysis on data with two different smoothing kernels (fwhm = [4, 8]) and two different normalizations (ANTs and SPM).
This example will also directly include thresholding of the output, as well as some visualization.
Let's start!
Group Analysis with SPM
Let's first run the group analysis with the SPM normalized data.
Imports (SPM12)
First, we need to import all the modules we later want to use.
End of explanation
"""
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# Smoothing widths used during preprocessing
fwhm = [4, 8]
# Which contrasts to use for the 2nd-level analysis
contrast_list = ['con_0001', 'con_0002', 'con_0003', 'con_0004', 'con_0005', 'con_0006', 'con_0007']
mask = "/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_brainmask.nii.gz"
"""
Explanation: Experiment parameters (SPM12)
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.
End of explanation
"""
# Gunzip - unzip the mask image
gunzip = Node(Gunzip(in_file=mask), name="gunzip")
# OneSampleTTestDesign - creates one sample T-Test Design
onesamplettestdes = Node(OneSampleTTestDesign(),
name="onesampttestdes")
# EstimateModel - estimates the model
level2estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level2estimate")
# EstimateContrast - estimates group contrast
level2conestimate = Node(EstimateContrast(group_contrast=True),
name="level2conestimate")
cont1 = ['Group', 'T', ['mean'], [1]]
level2conestimate.inputs.contrasts = [cont1]
# Threshold - thresholds contrasts
level2thresh = Node(Threshold(contrast_index=1,
use_topo_fdr=True,
use_fwe_correction=False,
extent_threshold=0,
height_threshold=0.005,
height_threshold_type='p-value',
extent_fdr_p_threshold=0.05),
name="level2thresh")
"""
Explanation: Specify Nodes (SPM12)
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
End of explanation
"""
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['contrast_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('contrast_id', contrast_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'cons': opj(output_dir, 'norm_spm', 'sub-*_fwhm{fwhm_id}',
'w{contrast_id}.nii')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_contrast_id_', '')]
subjFolders = [('%s_fwhm_id_%s' % (con, f), 'spm_%s_fwhm%s' % (con, f))
for f in fwhm
for con in contrast_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
"""
Explanation: Specify input & output stream (SPM12)
Specify where the input data can be found & where and how to save the output data.
End of explanation
"""
# Initiation of the 2nd-level analysis workflow
l2analysis = Workflow(name='spm_l2analysis')
l2analysis.base_dir = opj(experiment_dir, working_dir)
# Connect up the 2nd-level analysis components
l2analysis.connect([(infosource, selectfiles, [('contrast_id', 'contrast_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, onesamplettestdes, [('cons', 'in_files')]),
(gunzip, onesamplettestdes, [('out_file',
'explicit_mask_file')]),
(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')]),
(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
]),
(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')]),
])
"""
Explanation: Specify Workflow (SPM12)
Create a workflow and connect the interface nodes and the I/O stream to each other.
End of explanation
"""
# Create 2nd-level analysis output graph
l2analysis.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(l2analysis.base_dir, 'spm_l2analysis', 'graph.png'))
"""
Explanation: Visualize the workflow (SPM12)
It always helps to visualize your workflow.
End of explanation
"""
l2analysis.run('MultiProc', plugin_args={'n_procs': 4})
"""
Explanation: Run the Workflow (SPM12)
Now that everything is ready, we can run the 2nd-level analysis workflow. Change n_procs to the number of jobs/cores you want to use.
End of explanation
"""
# Change the SelectFiles template and recreate the node
templates = {'cons': opj(output_dir, 'norm_ants', 'sub-*_fwhm{fwhm_id}',
'{contrast_id}_trans.nii')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Change the substitution parameters for the datasink
substitutions = [('_contrast_id_', '')]
subjFolders = [('%s_fwhm_id_%s' % (con, f), 'ants_%s_fwhm%s' % (con, f))
for f in fwhm
for con in contrast_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
"""
Explanation: Group Analysis with ANTs
Now to run the same group analysis, but on the ANTs normalized images, we just need to change a few parameters:
End of explanation
"""
# Initiation of the 2nd-level analysis workflow
l2analysis = Workflow(name='ants_l2analysis')
l2analysis.base_dir = opj(experiment_dir, working_dir)
# Connect up the 2nd-level analysis components
l2analysis.connect([(infosource, selectfiles, [('contrast_id', 'contrast_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, onesamplettestdes, [('cons', 'in_files')]),
(gunzip, onesamplettestdes, [('out_file',
'explicit_mask_file')]),
(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')]),
(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
]),
(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')]),
])
"""
Explanation: Now, we just have to recreate the workflow.
End of explanation
"""
l2analysis.run('MultiProc', plugin_args={'n_procs': 4})
"""
Explanation: And we can run it!
End of explanation
"""
from nilearn.plotting import plot_stat_map
%matplotlib inline
anatimg = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0001_fwhm4/spmT_0001_thr.nii', title='ants fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0001_fwhm4/spmT_0001_thr.nii', title='spm fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0001_fwhm8/spmT_0001_thr.nii', title='ants fwhm=8', dim=1,
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0001_fwhm8/spmT_0001_thr.nii', title='spm fwhm=8',
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
"""
Explanation: Visualize results
Now we create a lot of outputs, but what do they look like? And what was the influence of the different smoothing kernels and normalizations?
Keep in mind that the group analysis was only done on N=7 subjects, and that we chose a voxel-wise threshold of p<0.005. Nonetheless, we corrected for multiple comparisons with a cluster-wise FDR threshold of p<0.05.
So let's first look at the contrast average:
End of explanation
"""
from nilearn.plotting import plot_stat_map
anatimg = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0005_fwhm4/spmT_0001_thr.nii', title='ants fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0005_fwhm4/spmT_0001_thr.nii', title='spm fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0005_fwhm8/spmT_0001_thr.nii', title='ants fwhm=8', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0005_fwhm8/spmT_0001_thr.nii', title='spm fwhm=8', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
"""
Explanation: The results are roughly what you would expect: the peaks are at more or less the same places for the two normalization approaches, and wider smoothing yields bigger clusters at the cost of sensitivity to smaller ones.
Now, let's look at another contrast -- Finger > others. Since we removed left-handed subjects, the activation is seen on the left side of the brain.
End of explanation
"""
from nilearn.plotting import plot_glass_brain
plot_glass_brain(
'/output/datasink/2ndLevel/spm_con_0005_fwhm4/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='spm_fwhm4');
plot_glass_brain(
'/output/datasink/2ndLevel/ants_con_0005_fwhm4/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='ants_fwhm4');
plot_glass_brain(
'/output/datasink/2ndLevel/spm_con_0005_fwhm8/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='spm_fwhm8');
plot_glass_brain(
'/output/datasink/2ndLevel/ants_con_0005_fwhm8/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='ants_fwhm8');
"""
Explanation: Now, let's see the results using the glass brain plotting method.
End of explanation
"""
|
tensorflow/hub | examples/colab/tf_hub_delf_module.ipynb | apache-2.0 | # Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
!pip install scikit-image
from absl import logging
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image, ImageOps
from scipy.spatial import cKDTree
from skimage.feature import plot_matches
from skimage.measure import ransac
from skimage.transform import AffineTransform
from six import BytesIO
import tensorflow as tf
import tensorflow_hub as hub
from six.moves.urllib.request import urlopen
"""
Explanation: How to match images using DELF and TensorFlow Hub
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf_hub_delf_module"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf_hub_delf_module.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/delf/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
TensorFlow Hub (TF-Hub) is a platform to share machine learning expertise packaged in reusable resources, notably pre-trained modules.
In this colab, we will use a module that packages the DELF neural network and logic for processing images to identify keypoints and their descriptors. The weights of the neural network were trained on images of landmarks as described in this paper.
Setup
End of explanation
"""
#@title Choose images
images = "Bridge of Sighs" #@param ["Bridge of Sighs", "Golden Gate", "Acropolis", "Eiffel tower"]
if images == "Bridge of Sighs":
# from: https://commons.wikimedia.org/wiki/File:Bridge_of_Sighs,_Oxford.jpg
# by: N.H. Fischer
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/2/28/Bridge_of_Sighs%2C_Oxford.jpg'
# from https://commons.wikimedia.org/wiki/File:The_Bridge_of_Sighs_and_Sheldonian_Theatre,_Oxford.jpg
# by: Matthew Hoser
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/c3/The_Bridge_of_Sighs_and_Sheldonian_Theatre%2C_Oxford.jpg'
elif images == "Golden Gate":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/1/1e/Golden_gate2.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/3/3e/GoldenGateBridge.jpg'
elif images == "Acropolis":
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/c/ce/2006_01_21_Ath%C3%A8nes_Parth%C3%A9non.JPG'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/5/5c/ACROPOLIS_1969_-_panoramio_-_jean_melis.jpg'
else:
IMAGE_1_URL = 'https://upload.wikimedia.org/wikipedia/commons/d/d8/Eiffel_Tower%2C_November_15%2C_2011.jpg'
IMAGE_2_URL = 'https://upload.wikimedia.org/wikipedia/commons/a/a8/Eiffel_Tower_from_immediately_beside_it%2C_Paris_May_2008.jpg'
"""
Explanation: The data
In the next cell, we specify the URLs of two images we would like to process with DELF in order to match and compare them.
End of explanation
"""
def download_and_resize(name, url, new_width=256, new_height=256):
path = tf.keras.utils.get_file(url.split('/')[-1], url)
image = Image.open(path)
image = ImageOps.fit(image, (new_width, new_height), Image.ANTIALIAS)
return image
image1 = download_and_resize('image_1.jpg', IMAGE_1_URL)
image2 = download_and_resize('image_2.jpg', IMAGE_2_URL)
plt.subplot(1,2,1)
plt.imshow(image1)
plt.subplot(1,2,2)
plt.imshow(image2)
"""
Explanation: Download, resize, save and display the images.
End of explanation
"""
delf = hub.load('https://tfhub.dev/google/delf/1').signatures['default']
def run_delf(image):
np_image = np.array(image)
float_image = tf.image.convert_image_dtype(np_image, tf.float32)
return delf(
image=float_image,
score_threshold=tf.constant(100.0),
image_scales=tf.constant([0.25, 0.3536, 0.5, 0.7071, 1.0, 1.4142, 2.0]),
max_feature_num=tf.constant(1000))
result1 = run_delf(image1)
result2 = run_delf(image2)
"""
Explanation: Apply the DELF module to the data
The DELF module takes an image as input and will describe noteworthy points with vectors. The following cell contains the core of this colab's logic.
End of explanation
"""
#@title TensorFlow is not needed for this post-processing and visualization
def match_images(image1, image2, result1, result2):
distance_threshold = 0.8
# Read features.
num_features_1 = result1['locations'].shape[0]
print("Loaded image 1's %d features" % num_features_1)
num_features_2 = result2['locations'].shape[0]
print("Loaded image 2's %d features" % num_features_2)
# Find nearest-neighbor matches using a KD tree.
d1_tree = cKDTree(result1['descriptors'])
_, indices = d1_tree.query(
result2['descriptors'],
distance_upper_bound=distance_threshold)
# Select feature locations for putative matches.
locations_2_to_use = np.array([
result2['locations'][i,]
for i in range(num_features_2)
if indices[i] != num_features_1
])
locations_1_to_use = np.array([
result1['locations'][indices[i],]
for i in range(num_features_2)
if indices[i] != num_features_1
])
# Perform geometric verification using RANSAC.
_, inliers = ransac(
(locations_1_to_use, locations_2_to_use),
AffineTransform,
min_samples=3,
residual_threshold=20,
max_trials=1000)
print('Found %d inliers' % sum(inliers))
# Visualize correspondences.
_, ax = plt.subplots()
inlier_idxs = np.nonzero(inliers)[0]
plot_matches(
ax,
image1,
image2,
locations_1_to_use,
locations_2_to_use,
np.column_stack((inlier_idxs, inlier_idxs)),
matches_color='b')
ax.axis('off')
ax.set_title('DELF correspondences')
match_images(image1, image2, result1, result2)
"""
Explanation: Use the locations and descriptor vectors to match the images
End of explanation
"""
|
oditorium/blog | iPython/CurveFitting.ipynb | agpl-3.0 | import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
"""
Explanation: iPython Cookbook - Curve Fitting
End of explanation
"""
a,b,c=(1,2,1)
def func0 (x,a,b,c):
    return a*np.exp(-b*x)+c
"""
Explanation: Generating some data to play with
We first generate some data to play with. So in the first step we choose a functional relationship $func_0$ between our $x$ and $y$ that is parameterised by three parameters $a,b,c$...
End of explanation
"""
xmin,xmax = (0,1)
N = 500
xvals = np.random.uniform(xmin, xmax, N)
yvals0 = func0(xvals,a,b,c)
plt.plot(xvals, yvals0, '+')
plt.show()
"""
Explanation: ...we now choose the range over which we want to evaluate the function, $x_{min}\ldots x_{max}$, and the number of points we want to generate, $N$, and compute the undisturbed curve $yvals_0$ over a random selection of $x$ coordinates $xvals$.
End of explanation
"""
sig = 0.05
err = sig * np.random.standard_normal(N)
yvals = yvals0 + err
plt.plot(xvals, yvals, '+')
plt.show()
"""
Explanation: In a second step we generate the error term $err$ and add it to the previously computed values, storing the results in $yvals$
End of explanation
"""
# import the curve fitting module and standard imports
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# choose the function to be fitted...
def func (x,a,b,c):
    return a*np.exp(-b*x)+c
# ...and provide initial estimates for the parameters
a0,b0,c0 = (0.5,0.5,0.5)
"""
Explanation: Fitting the curve
We now import the relevant module and choose the functional relationship $func$ between our $x$ and $y$, parameterised by the three parameters $a,b,c$, for the curve we want to fit. We here use the same relationship as above, but obviously we can fit any curve we want (e.g. linear, quadratic, etc.). We also have to provide an initial guess for the parameters, here $a_0,b_0,c_0$.
End of explanation
"""
# execute the curve fit...
coeffs, fiterr = curve_fit(func, xvals, yvals, p0=(a0,b0,c0))
# ...and plot the results
print ("a=%s, b=%s, c=%s" % (coeffs[0], coeffs[1], coeffs[2]))
plt.plot(xvals,yvals, '+')
plt.plot(xvals,func(xvals,*coeffs),'r.')
plt.show()
"""
Explanation: We now execute the curve fit and plot the results. The coefficients will be in the tuple $coeffs=(a,b,c)$ and $fiterr$ will contain the estimated covariance matrix of the fitted parameters. If $fiterr$ is NaN (or the algorithm terminates with too many iterations), the fit did not converge and we need to adapt the initial values (see below)
End of explanation
"""
# manually fit the curve to obtain a viable set of starting parameters
at,bt,ct = (0.5,0.5,0.5)
plt.plot(xvals,yvals, '+')
plt.plot(xvals,func(xvals,at,bt,ct), 'g.')
print ("a=%s, b=%s, c=%s" % (at,bt,ct))
"""
Explanation: In order to find the initial values we might want to plot the function with some initial parameters $a_t,b_t,c_t$ to allow for a rough manual fit
End of explanation
"""
import sys
print(sys.version)
"""
Explanation: Licence and version
(c) Stefan Loesch / oditorium 2014; all rights reserved
(license)
End of explanation
"""
|
ethen8181/machine-learning | projects/kaggle_rossman_store_sales/rossman_deep_learning.ipynb | mit | from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[3])
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import torch
import numpy as np
import pandas as pd
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,pyarrow,torch,fastai
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Rossman-Deep-Learning-Modeling" data-toc-modified-id="Rossman-Deep-Learning-Modeling-1"><span class="toc-item-num">1 </span>Rossman Deep Learning Modeling</a></span><ul class="toc-item"><li><span><a href="#Embeddings-for-Categorical-Variables" data-toc-modified-id="Embeddings-for-Categorical-Variables-1.1"><span class="toc-item-num">1.1 </span>Embeddings for Categorical Variables</a></span></li><li><span><a href="#Data-Preparation" data-toc-modified-id="Data-Preparation-1.2"><span class="toc-item-num">1.2 </span>Data Preparation</a></span></li><li><span><a href="#Model-Training" data-toc-modified-id="Model-Training-1.3"><span class="toc-item-num">1.3 </span>Model Training</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
data_dir = 'cleaned_data'
path_train = os.path.join(data_dir, 'train_clean.parquet')
path_test = os.path.join(data_dir, 'test_clean.parquet')
engine = 'pyarrow'
df_train = pd.read_parquet(path_train, engine)
df_test = pd.read_parquet(path_test, engine)
print('train dimension: ', df_train.shape)
print('test dimension: ', df_test.shape)
df_train.head()
cat_names = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday',
'CompetitionMonthsOpen', 'Promo2Weeks', 'StoreType', 'Assortment',
'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
'State', 'Week', 'Events']
cont_names = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC',
'Min_TemperatureC', 'Max_Humidity', 'Mean_Humidity', 'Min_Humidity',
'Max_Wind_SpeedKm_h', 'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend',
'trend_DE', 'Promo', 'SchoolHoliday', 'AfterSchoolHoliday',
'AfterStateHoliday', 'AfterPromo', 'BeforeSchoolHoliday',
'BeforeStateHoliday', 'BeforePromo']
dep_var = 'Sales'
"""
Explanation: Rossman Deep Learning Modeling
The success of deep learning is oftentimes mentioned in domains such as computer vision and natural language processing; another use-case that is also powerful but receives far less attention is applying deep learning to tabular data. By tabular data, we are referring to data that we usually put in a dataframe or a relational database, which is one of the most commonly encountered types of data in the industry.
Embeddings for Categorical Variables
One key technique to make the most out of deep learning for tabular data is to use embeddings for our categorical variables. This approach allows relationships between categories to be captured. E.g. given a categorical feature with high cardinality (the number of distinct categories is large), it often works best to embed the categories into a lower dimensional numeric space; the embeddings might be able to capture zip codes that are geographically near each other without us needing to explicitly tell it so. Similarly, for a feature such as day of week, the embedding might be able to capture that Saturday and Sunday have similar behavior, and maybe that Friday behaves like an average of a weekend day and a weekday. By converting our raw categories into embeddings, our goal/hope is that these embeddings can capture richer, more complex relationships that will ultimately improve the performance of our models.
For instance, a 4-dimensional version of an embedding for day of week could look like:
Sunday [.8, .2, .1, .1]
Monday [.1, .2, .9, .9]
Tuesday [.2, .1, .9, .8]
Here, Monday and Tuesday are fairly similar, yet they are both quite different from Sunday. In practice, our neural network would learn the best representations for each category while it is training, and we can experiment with the number of dimensions that are allowed to capture these rich relationships.
People have shared use-case/success stories of leveraging embeddings, e.g.
Instacart has embeddings for its stores, groceries, and customers.
Pinterest has embeddings for its pins.
Another interesting thing about embeddings is that once we train them, we can leverage them in other scenarios, e.g. using these learned embeddings as features for our tree-based models.
Data Preparation
End of explanation
"""
df_train = df_train[df_train[dep_var] != 0].reset_index(drop=True)
"""
Explanation: Here, we will remove all records where the store had zero sales / was closed (feel free to experiment with keeping the zero-sales records and see if it improves performance).
We also perform a train/validation split. The validation split will be used in our hyper-parameter tuning process and for early stopping. Notice that because this is a time series application, where we are trying to predict different stores' daily sales, it's important not to perform a random train/test split, but instead to divide the training and validation sets based on time/date.
End of explanation
"""
df_test['Date'].min(), df_test['Date'].max()
# the minimum date of the test set is larger than the maximum date of the
# training set
df_train['Date'].min(), df_train['Date'].max()
"""
Explanation: We print out the min/max time stamp of the training and test set to confirm that the two sets don't overlap.
End of explanation
"""
mask = df_train['Date'] == df_train['Date'].iloc[len(df_test)]
cut = df_train.loc[mask, 'Date'].index.max()
# fastai expects a collection of int for specifying which index belongs
# to the validation set
valid_idx = range(cut)
valid_idx
"""
Explanation: Our training data is already sorted by date in decreasing order, hence we can create the validation set by checking how big our test set is and selecting the top-N observations, giving a validation set of similar size to our test set. Here we're saying similar size and not exact size, because we make sure that all the records from the same date fall entirely under either the training or the validation set.
End of explanation
"""
df_train.loc[(cut - 2):(cut + 1)]
df_train = df_train[cat_names + cont_names + [dep_var]]
df_test = df_test[cat_names + cont_names + ['Id']]
print('train dimension: ', df_train.shape)
print('test dimension: ', df_test.shape)
df_train.head()
"""
Explanation: Here, we print out the dataframe where we'll be doing the train/validation cut to illustrate the point, this is technically not required for the rest of the pipeline. Notice in the dataframe that we've printed out, the last record's date, 2015-06-18 is different from the rest. This means that all records including/after the date 2015-06-19 will become our validation set.
End of explanation
"""
from fastai.tabular import DatasetType
from fastai.tabular import defaults, tabular_learner, exp_rmspe, TabularList
from fastai.tabular import Categorify, Normalize, FillMissing, FloatList
"""
Explanation: Model Training
End of explanation
"""
procs = [FillMissing, Categorify, Normalize]
# regression
data = (TabularList
.from_df(df_train, path=data_dir, cat_names=cat_names,
cont_names=cont_names, procs=procs)
.split_by_idx(valid_idx)
.label_from_df(cols=dep_var, label_cls=FloatList, log=True)
.add_test(TabularList.from_df(df_test, path=data_dir,
cat_names=cat_names, cont_names=cont_names))
.databunch())
"""
Explanation: fastai will automatically fit a regression model when the dependent variable is a float, but not when it's an int. So in order to apply regression we need to tell fastai it is a float type, hence the argument label_cls=FloatList when creating the DataBunch that is required for training the model.
The procs variable lists fastai's preprocessing steps, the transformations that will be applied to our variables. Here we transform all categorical variables into categories (a unique numeric id that represents the original category). We also replace missing values for continuous variables with the median column value (apart from imputing the missing values with the median, it will also create a new column that indicates whether the original value was missing) and normalize them (similar to sklearn's StandardScaler).
End of explanation
"""
max_log_y = np.log(np.max(df_train[dep_var]) * 1.2)
y_range = torch.tensor([0, max_log_y], device=defaults.device)
"""
Explanation: We can specify the capping for our prediction, ensuring that it won't be a negative value and it won't go beyond 1.2 times the maximum sales value we see in the dataset.
End of explanation
"""
learn = tabular_learner(data, layers=[1000, 500], ps=[0.001, 0.01], emb_drop=0.04,
y_range=y_range, metrics=exp_rmspe)
learn.model
"""
Explanation: We'll now use all this information to create a fastai TabularModel. Here we define a fixed model with 2 hidden layers and guard against overfitting with regularization: dropout, whose per-layer probability is set with the ps argument (the more commonly seen kind), and embedding (input) dropout, set with the emb_drop argument.
End of explanation
"""
learn.model.n_emb + learn.model.n_cont
# training time shown here is for a 8 core cpu
learn.fit_one_cycle(6, 1e-3, wd=0.2)
"""
Explanation: Printing out the model architecture, we first see a list of Embedding layers, one for each categorical variable. Recall that the shape of an Embedding layer is (number of distinct categories, embedding dimension). When we created our fastai learner, we didn't pass the emb_szs argument (which lets us set the embedding size for each categorical variable), so the embedding sizes were determined algorithmically. E.g. our first embedding is for the Store feature: it has 1116 categories and 81 is the corresponding embedding size that was chosen.
Moving to the first Linear layer, we can see it accepts an input of size equal to the sum of all embedding sizes plus the number of continuous variables, showing that they are concatenated together before moving to the next stage in the network.
End of explanation
"""
test_preds = learn.get_preds(ds_type=DatasetType.Test)
test_preds[:5]
# we logged our label, remember to exponentiate it back to the original scale
df_test[dep_var] = np.exp(test_preds[0].numpy().ravel())
df_test[['Id', dep_var]] = df_test[['Id', dep_var]].astype('int')
submission_dir = 'submission'
if not os.path.isdir(submission_dir):
os.makedirs(submission_dir, exist_ok=True)
submission_path = os.path.join(submission_dir, 'rossmann_submission_fastai.csv')
df_test[['Id', dep_var]].to_csv(submission_path, index=False)
df_test[['Id', dep_var]].head()
"""
Explanation: We can leverage the get_preds method to return the predictions and targets for a given dataset type. For the test set, we're only interested in the predictions.
End of explanation
"""
|
azogue/esiosdata | notebooks/esiosdata - Factura electricidad con datos enerPI.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
from glob import glob
import matplotlib.pyplot as plt
import os
import pandas as pd
import requests
from esiosdata import FacturaElec
from esiosdata.prettyprinting import *
# enerPI JSON API
ip_enerpi = '192.168.1.44'
t0, tf = '2016-11-01', '2016-12-24'
url = 'http://{}/enerpi/api/consumption/from/{}/to/{}'.format(ip_enerpi, t0, tf)
print(url)
r = requests.get(url)
if r.ok:
data = r.json()
data_consumo = pd.DataFrame(pd.Series(data, name='kWh')).sort_index().reset_index()
data_consumo.index = data_consumo['index'].apply(lambda x: pd.Timestamp.fromtimestamp(float(x) / 1000.))
data_consumo.drop('index', axis=1, inplace=True)
print_ok(data_consumo.head())
print_ok(data_consumo.tail())
else:
print_err(r)
# Total consumption over the interval:
c_tot = round(data_consumo.kWh.round(3).sum(), 3)
print_ok(c_tot)
# Plot daily consumption in kWh
data_consumo.kWh.resample('D').sum().plot(figsize=(16, 9));
"""
Explanation: Computing the electricity bill
Retrieval of hourly electricity consumption data from the consumption meter (enerPI) via JSON.
For other data sources, build a pandas.Series with an hourly index holding the consumption in kWh.
Generation of the electricity bill with esiosdata.FacturaElec
Simulation of a change of electricity tariff for the same consumption
Some plots of the daily consumption patterns
End of explanation
"""
factura = FacturaElec(consumo=data_consumo.kWh)
print_info(factura)
factura.tipo_peaje = 2
print_cyan(factura)
factura.tipo_peaje = 3
print_magenta(factura)
"""
Explanation: Electricity bill from hourly data
End of explanation
"""
path_csv = os.path.expanduser('~/Desktop/')
df_csv = factura.generacion_csv_oficial_consumo_horario(path_csv)
print_ok(df_csv.tail())
file_path = glob(path_csv + '*.csv')[0]
with open(file_path, 'r') as f:
print_magenta(f.read()[:500])
"""
Explanation: EXPORT TO CSV
To import into https://facturaluz2.cnmc.es/facturaluz2.html
End of explanation
"""
print_ok('Daily consumption')
factura.plot_consumo_diario()
plt.show()
print_ok('Weekly consumption pattern')
factura.plot_patron_semanal_consumo()
plt.show()
"""
Explanation: Plots for the billed period
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.3/tutorials/atm_passbands.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: Atmospheres & Passbands
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: And we'll add a single light curve dataset to expose all the passband-dependent options.
End of explanation
"""
b['atm']
b['atm@primary']
b['atm@primary'].description
b['atm@primary'].choices
"""
Explanation: Relevant Parameters
An 'atm' parameter exists for each of the components in the system (for each set of compute options) and defines which atmosphere table should be used.
By default, these are set to 'ck2004' (Castelli-Kurucz) but can be set to 'blackbody' as well as 'extern_atmx' and 'extern_planckint' (which are included primarily for direct comparison with PHOEBE legacy).
End of explanation
"""
b['ld_func@primary']
b['atm@primary'] = 'blackbody'
print(b.run_checks())
b['ld_mode@primary'] = 'manual'
b['ld_func@primary'] = 'logarithmic'
print(b.run_checks())
"""
Explanation: Note that if you change the value of 'atm' to anything other than 'ck2004', the corresponding 'ld_func' will need to be changed to something other than 'interp' (warnings and errors will be raised to remind you of this).
End of explanation
"""
b['passband']
"""
Explanation: A 'passband' parameter exists for each passband-dependent-dataset (i.e. not meshes or orbits, but light curves and radial velocities). This parameter dictates which passband should be used for the computation of all intensities.
End of explanation
"""
print(b['passband'].choices)
"""
Explanation: The available choices will include both locally installed passbands as well as passbands currently available from the online PHOEBE repository. If you choose an online-passband, it will be downloaded and installed locally as soon as required by b.run_compute.
End of explanation
"""
print(phoebe.list_installed_passbands())
"""
Explanation: To see your current locally-installed passbands, call phoebe.list_installed_passbands().
End of explanation
"""
print(phoebe.list_passband_directories())
"""
Explanation: These installed passbands can be in any of a number of directories, which can be accessed via phoebe.list_passband_directories().
The first entry is the global location - this is where passbands can be stored by a server-admin to be available to all PHOEBE-users on that machine.
The second entry is the local location - this is where individual users can store passbands and where PHOEBE will download and install passbands (by default).
End of explanation
"""
print(phoebe.list_online_passbands())
"""
Explanation: To see the passbands available from the online repository, call phoebe.list_online_passbands().
End of explanation
"""
phoebe.download_passband('Cousins:R')
print(phoebe.list_installed_passbands())
"""
Explanation: Lastly, to manually download and install one of these online passbands, you can do so explicitly via phoebe.download_passband or by visiting tables.phoebe-project.org. See also the tutorial on updating passbands.
Note that this isn't necessary unless you want to explicitly download passbands before needed by run_compute (perhaps if you're expecting to have unreliable network connection in the future and want to ensure you have all needed passbands).
End of explanation
"""
|
DawesLab/LabNotebooks | 1D Numerov Schrodinger Solver.ipynb | mit | import numpy as np
from scipy.linalg import eigh, inv
import matplotlib.pyplot as plt
%matplotlib inline
N = 1000
x, dx = np.linspace(-1,1,N,retstep=True)
#dx = dx*0.1
# Finite square well
V_0 = np.zeros(N)
V_0[:] = 450
V_0[int(N/2 - N/6):int(N/2+N/6)] = 0
plt.plot(x,V_0)
plt.ylim(V_0.min() - 0.1*V_0.max(), V_0.max()*1.1)  # V_0, not V: V isn't defined until later
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m) * inv(B).dot(A) + V  # matrix product B^-1 A (elementwise * would be wrong)
energy, evecs = eigh(H,eigvals=(0,20))
E0 = energy[0] # ground state energy
states = [evecs[:,i] for i in range(20)]
plt.plot(energy,".")
plt.fill_between(range(21),E0,E0+V_0.max(), color='c', alpha=0.25) # Shade the bound states
for i,state in enumerate(states[0:17]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*2000 + energy[i])
plt.title("Finite square well")
#plt.fill_between(x,0,V,color='k',alpha=0.1) # shade in the potential well
"""
Explanation: A numerical 1D Schrödinger solution
Revised from initial work in comp phys class.
Based on: "TANG_DONGJIAO thesis.pdf"
Would be good to reconcile these two and publish to http://www.compadre.org/picup/
TODO: check agreement with theory
End of explanation
"""
# Harmonic oscillator potential
V_0 = 250*x**2
plt.plot(x,V_0)
plt.ylim(-50,400)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m) * inv(B).dot(A) + V  # matrix product B^-1 A (elementwise * would be wrong)
energy, evecs = eigh(H,eigvals=(0,30))
E0 = energy[0]
states = [evecs[:,i] for i in range(30)]
plt.plot(energy,".")
plt.fill_between(range(31),E0,E0+V_0.max(), color='c', alpha=0.25)
"""
Explanation: SHO
End of explanation
"""
for i,state in enumerate(states[0:8]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*1000 + energy[i])
plt.title("Harmonic oscillator")
plt.ylim(E0,E0+100)
plt.fill_between(x,E0,E0+V_0,color='k',alpha=0.1)
"""
Explanation: The bound states (below the cutoff) are clearly linear in energy (as expected), then above that we see the ∞-well solutions.
End of explanation
"""
N = 1000
x, dx = np.linspace(-1,1,N,retstep=True)
V_0 = np.zeros(N)
# periodic wells
V_0[:] = 1000
L = N/12 # well width (in grid points)
S = N/10 # spacing between wells (period)
a = N/4  # offset of the first well
for i in range(5):
V_0[int(i*S+a):int(i*S+a+L)] = 0
plt.plot(x,V_0)
plt.ylim(-50,3050)
plt.xlim(-1.1,1.1)
Alower = np.diag(np.ones(N)[:-1],k=-1)
Aupper = np.diag(np.ones(N)[:-1],k=1)
Amid = np.diag(-2*np.ones(N),k=0)
A = 1/dx**2 * (Alower + Amid + Aupper)
Blower = np.diag(np.ones(N)[:-1],k=-1)
Bupper = np.diag(np.ones(N)[:-1],k=1)
Bmid = np.diag(10*np.ones(N),k=0)
B = 1/12 * (Blower + Bmid + Bupper)
V = np.diag(V_0)
hbar=1
m=0.5
H = -(hbar**2)/(2*m) * inv(B).dot(A) + V  # matrix product B^-1 A (elementwise * would be wrong)
energy, evecs = eigh(H,eigvals=(0,30))
E0 = energy[0]
states = [evecs[:,i] for i in range(30)]
plt.plot(energy,".")
plt.fill_between(range(31),E0,E0+V_0.max(), color='c', alpha=0.25)
plt.figure(figsize=(16,6))
for i,state in enumerate(states[0:15]):
# Make these plot at the height for a cool figure!
plt.plot(x,state**2*3000 + energy[i])
plt.fill_between(x,E0,E0+V_0,color='k',alpha=0.1)
#plt.plot(E0+V_0) TODO
plt.title("Bandgaps in periodic structure")
"""
Explanation: Periodic wells:
End of explanation
"""
for i,state in enumerate(states[0:5]):
plt.subplot(5,1,i+1)
plt.plot(x, state**2)
for i,state in enumerate(states[20:25]):
plt.subplot(5,1,i+1)
plt.plot(x, state**2)
plt.figure(figsize=(10,3))
plt.plot(x,states[24]**2)
plt.plot(x,states[20]**2)
"""
Explanation: Bandgaps!
For Students: explore the symmetry of these states.
Q: Are there five degenerate states because each state has the particle in only one well?
Q: Why does each cluster of states start to have a slope in the E vs. # graph?
End of explanation
"""
|
hoenir/GestaltAppreciation | analysis/.ipynb_checkpoints/AppreciationNumerosity-checkpoint.ipynb | gpl-3.0 | import pandas as pd
from pandas import DataFrame
from psychopy import data, core, gui, misc
import numpy as np
import seaborn as sns
#from ggplot import *
from scipy import stats
import statsmodels.formula.api as smf
import statsmodels.api as sm
from __future__ import division
from pivottablejs import pivot_ui
%pylab inline
#plotting params
from matplotlib import rcParams
rcParams['font.family'] = 'ubuntu'
sns.set(style="whitegrid", color_codes=True)
sns.set_context("talk")
"""
Explanation: Analysis Gestalt appreciation & numerosity estimation experiment
Researchers: Elise Berbiers (MA student), Rebecca Chamberlain, Johan Wagemans, Sander Van de Cruys
Replication & extension of experiment 1 in:
Topolinski, S., Erle, T. M., & Reber, R. (2015). Necker’s smile: Immediate affective consequences of early perceptual processes. Cognition, 140, 1–13.
Notes:
All errorbars are 95% CIs
Only a 100 milliseconds exposure duration was used, instead of the two exposure durations (25 ms and 100 ms) in Topolinski, Erle, & Reber (2015). Note that Topolinski, Erle, & Reber found the effect for both exposure durations.
30 coherent "Gestalt" images, 30 scrambled versions, 23 inverted Gestalt images (7 images don't have a singular canonical orientation).
On the construction of the scrambled stimuli Topolinski & Strack write:
"First, we developed and tested a set of pictorial stimuli that were useful for intuitive judgments because they were so degraded that they could only rarely be visually recognized (Bower et al., 1990). We used 30 black-and-white drawings of everyday objects randomly chosen from the inventory by Snodgrass and Vanderwart (1980), with the only constraint being that depicted objects were visually not too simple (e.g., a circle). Following Volz and von Cramon (2006), these stimuli were visually degraded by a filter that masked the black picture on the white background by increasing the white pixels by 75%. These pictures were the object condition (Volz & von Cramon, 2006) since they depicted visually degraded real objects. Then, these pictures were divided into nine equal rectangles (3x3); and these rectangles were randomly rotated within the picture (Volz & von Cramon, 2006; cf., Bower et al., 1990; Wippich, 1994). Thus, these pictures contained the same pixel information as in the object condition and even contained local collinearities (Volz & von Cramon, 2006), but the picture as a whole depicted a physically impossible and thus meaningless object. These pictures were used in the nonobject condition."
Interestingly, there is no effect of condition on density, but there is one on hull. This is probably due to the stimulus generation: the fact that the scrambled are made by scrambling 'tiles' of the source (which may keep the nearest dots together, and density measures the average distance to the nearest dots).
I agree we should address the zygomaticus data. Because the participants liked the inverted condition as much as the gestalt I think this is fairly easily addressed - we would predict the same pattern of results for inverted vs. scrambled as gestalt vs. scrambled (i.e. more positive affect). The authors state in the paper, 'We interpret this differential finding in the way that a success in early Gestalt integration immediately evokes an increase in positive affectivity’ (p.6). I think we could argue the same for the inverted condition, but it is low level grouping/clustering rather than higher-level gestalt that is producing the positive affect. Compact/dense figures are more easily processed, hence liked?
I also have some doubts on the way I analyzed it. Basically, I've done a simple anova of condition on ratings (as in Topolinski), where scramble is significant. Then, to look at the low-level measures, I computed mean rating for each image, and added variables for the extra measures of this image. In this case, an anova of condition on ratings is not significant. If I add hull as predictor, then scramble becomes significant (0.045) and hull is too. If I add density (but not hull), there's no scramble effect, but there is a density effect. Finally, including all three leaves nothing significant.
At first glance I think the first approach (adding mean hull/density etc. for the mean image ratings) is more appropriate, but I agree we should check this with the lab. However, I’m not sure if adding them all in to the same ANCOVA is the right way to go. Aren’t there dependencies between some the measures (i.e. don’t we run into collinearity problems?). Should it then be three separate ANCOVAs with the image statistics as covariates?
If I now use the full dataset to add the low-level measures, so I just add the columns of density/hull to every existing trial, then scramble stays significant (note that it already was significant in this dataset), and additionally scramble*hull interaction and main effect of density become significant. So it seems these low-level measures are not sufficient/appropriate to account for the scramble-effect.
Overall I think there is a good story here but it’s worth presenting to the lab and checking:
How to analyse the effect of the image statistics
How to present the numerosity effects so they are in line with the story
End of explanation
"""
# get data file names
files = gui.fileOpenDlg("../data")
dfs = []
for filename in files:
#print(filename)
df = pd.read_table(filename, sep=",") #only Appreciation trials 85
dfs.append(df)
df = pd.concat(dfs, ignore_index=True)
len(df)
dfApp = df[pd.notnull(df.images)]
dfNum = df[pd.notnull(df.imgA)]
print('liking trials:', len(dfApp))
print('numerosity trials:', len(dfNum))
"""
Explanation: Importing data
End of explanation
"""
dfApp = dfApp.rename(columns={'rating.response': 'rating'})
dfApp = dfApp.rename(columns={'rating.rt': 'rt'})
#add var for img
dfApp.loc[:,"img"]= dfApp.images.str.extract("(\d+)", expand=False).astype(int)
print(dfApp.columns)
dfApp.to_csv("dataLiking.csv")
dfApp = pd.read_csv("dataLiking.csv")
"""
Explanation: Appreciation experiment
Relevant vars:
condition
rating.response
rating.rt
End of explanation
"""
print(len(dfApp['participant'].unique()))
len(dfApp)
dfApp.groupby(["condition"]).rating.describe()
dfApp.groupby(["condition"]).rt.describe()
sns.distplot(dfApp.groupby(['participant']).rating.mean(), bins=20);
"""
Explanation: Descriptive stats
End of explanation
"""
#sns.violinplot(x="condition", y="rating.response", data=dfApp);
sns.pointplot(x="condition", y="rating", unit="participant", data=dfApp);
sns.violinplot(x="condition", y="rating", unit="participant", data=dfApp);
# GLM test
model = smf.glm(formula="rating ~ condition", data=dfApp)
results = model.fit()
print(results.summary())
"""
Explanation: Does gestaltness influence appreciation?
End of explanation
"""
# compute diff score to correlate with PNS score
print(len(dfApp['participant'].unique()))
def diffScore(df):
gestaltm = df[df.condition=='gestalt'].rating.mean()
scrambledm = df[df.condition=='scrambled'].rating.mean()
diff= gestaltm - scrambledm
#df['id'].iloc[0]
dfout = pd.DataFrame(data=[(gestaltm, scrambledm, diff)],\
columns=['gestaltm', 'scrambledm', 'diff'])
return dfout
dfdiff = dfApp.groupby('participant').apply(diffScore)
dfdiff = dfdiff.reset_index()
# add PNS scores
dfPNS = pd.read_table("ScoringPNS.csv", sep=",")
dfPNS = dfPNS.iloc[4:,:]
dfPNS["participant"] = pd.to_numeric(dfPNS["participant"])
dfmerged= pd.merge(dfdiff, dfPNS, how='outer', on='participant')
dfmerged.head()
"""
Explanation: Interim conclusion:
scrambling effect is significant, in expected direction (less liked).
Inversion does not seem to matter, meaning it probably is not an (implicitly processed) "familiar Gestalt" effect.
Note: no item effect included. Add it? (but then: incomplete data, because of the inverted condition)
Correlation between individual difference scores (appreciation of Gestalt minus scrambled patterns) and PNS scores
End of explanation
"""
sns.jointplot(x="Totaalscore", y="diff", data=dfmerged, kind="reg");
dfmerged["diff"].mean()
"""
Explanation: Does higher PNS (personal need for structure) imply higher relative appreciation of Gestalt vs scrambled patterns?
End of explanation
"""
dfNum = dfNum.rename(columns={'numResp.corr': 'acc','numResp.rt': 'rt', 'numResp.Alocation': 'Alocation' })
#add var for img
dfNum.loc[:,"img"]= dfNum.imgA.str.extract("(\d+)", expand=False).astype(int)
dfNum.to_csv('dfNum.csv', sep='\t')
dfNum = pd.read_csv("dfNum.csv", sep='\t')
print dfNum.columns
print dfNum.dtypes
"""
Explanation: Interim conclusion
Nonsign correlation
Trend in direction opposite to expected: higher PNS -> less liking of Gestalt vs scrambled.
This outcome is what would be expected if the "Gestalt-effect" is not actually caused by coherent Gestalt, but by more basic image feature not related to PNS?
Numerosity experiment
2-IFC: which has the most dots (first or second image)
Note that all comparison images have exactly the same numbers of dots in reality
Hence any deviation from .50 is caused by the difference in configuration
Relevant vars:
acc: percentage choice for first image
rt
condition
End of explanation
"""
sns.pointplot(x="condition", y="acc",unit="participant", data=dfNum);
#sns.axlabel("Condition", "Percentage higher")
#sns.stripplot(x="condition", y="acc",unit="participant", data=dfNum, jitter=True);
model = smf.glm(formula="acc ~ condition", data=dfNum, family=sm.families.Binomial())
results = model.fit()
print(results.summary())
dfimnum = dfNum.groupby(['imgA','imgB','condition'])['acc'].mean()
dfimnum= dfimnum.reset_index()
dfimnum.head()
"""
Explanation: Does Gestaltness (and inversion) influence numerosity estimation?
End of explanation
"""
ndf = dfNum.pivot_table(index=['participant','img'], columns=['condition'], values='acc')
ndf = ndf.reset_index()
adf = dfApp.pivot_table(index=['participant','img'], columns=['condition'], values='rating')
adf = adf.reset_index()
adf["diffGestalt"] = adf.gestalt - adf.scrambled
adf["diffInv"] = adf.inverted - adf.scrambled
andf = pd.merge(ndf, adf, how='outer', on=['participant', 'img'])
andf = pd.merge(andf, lldf, how='left', on='img')  # NB: lldf (low-level image measures) is built in a later cell; run that cell first
andf.head()
import scipy.stats as stats
stats.ttest_1samp(andf[andf.diffInv.notnull()].diffInv, 0, axis=0)
stats.ttest_1samp(andf.diffGestalt, 0, axis=0)
sns.jointplot(x="hullDiff", y="diffGestalt", data=andf, kind="reg");
model = smf.glm(formula="diffGestalt ~ hullDiff", data=andf)
results = model.fit()
print(results.summary())
sns.pointplot(x="Gestalt-scrambled", y="diffGestalt", unit="img", data=andf);
sns.pointplot(x="Inverted-scrambled", y="diffInv", unit="img", data=andf);
"""
Explanation: Notes
The numerosity exp might be nice to add, because it is consistent with the liking results. Basically, people noticed the equal number of dots for the comparison Gestalt vs inverted (it's around 50%, where it theoretically always should be because the compared images always contained an equal amount of dots). In the other comparisons you see they (substantially) overestimate the dots in the scrambled condition (or underestimate the Gestalt/inverted condition).
Does numerosity predict liking?
For those that overestimated the number of dots in the scrambled condition, did they like the scrambled images less?
Make combined dataset:
End of explanation
"""
# prerequisites
import os, re
import matplotlib.pyplot as plt
from skimage.data import data_dir
from skimage.util import img_as_ubyte
from skimage.morphology import skeletonize, convex_hull_image, disk
from skimage import io
from skimage.measure import label, regionprops
from skimage.filters.rank import entropy
from skimage.feature import peak_local_max
def plot_comparison(original, filtered, filter_name):
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(8, 4), sharex=True,
sharey=True)
ax1.imshow(original, cmap=plt.cm.gray)
ax1.set_title('original')
ax1.axis('off')
ax1.set_adjustable('box-forced')
ax2.imshow(filtered, cmap=plt.cm.gray)
ax2.set_title(filter_name)
ax2.axis('off')
ax2.set_adjustable('box-forced')
"""
Explanation: Checking for low-level structural measures of the test images
Using scikit-image:
Stéfan van der Walt, Johannes L. Schönberger, Juan Nunez-Iglesias, François Boulogne, Joshua D. Warner, Neil Yager, Emmanuelle Gouillart, Tony Yu and the scikit-image contributors. scikit-image: Image processing in Python. PeerJ 2:e453 (2014) http://dx.doi.org/10.7717/peerj.453
End of explanation
"""
from itertools import repeat
def distance(p1,p2):
"""Euclidean distance between two points."""
x1,y1 = p1
x2,y2 = p2
return np.hypot(x2 - x1, y2 - y1)
# get data file names
files = gui.fileOpenDlg("../ExpCode/images/")#, allowed="bmp files (*.bmp)|*.bmp")
d = []
for filename in files:
#print(filename)
stim = io.imread(filename, as_grey=True)
stim = img_as_ubyte(io.imread(filename, as_grey=True))
#fig, ax = plt.subplots()
#ax.imshow(stim, cmap=plt.cm.gray)
#compute convex hull for this img
chull = convex_hull_image(stim == 0)
#plot_comparison(stim, chull, 'convex hull')
label_img = label(chull)
regions_img = regionprops(label_img)
region = regions_img[0]
hull= float(region.convex_area)/(chull.shape[0]*chull.shape[1])
#print "percentage pixels of convex hull image: ", hull
#compute density for this img
stim = np.invert(stim) # peaks (dots) should be white for peak detection
# extract peaks aka dots
coords = peak_local_max(stim, min_distance=1)
#print coords
#print len(coords)
#print len(stim[stim>.5])
#compute density as mean of distances to 3 closest neighbors
density = 0
for p1 in coords:
dists = [distance(*pair) for pair in zip(repeat(p1),coords)]
sorted_dists = np.sort(dists)[1:6] # distances to the 5 nearest neighbours (index 0 is the zero self-distance)
av_dist = np.mean(sorted_dists)
density += av_dist
#print "dists: ", len(dists)
#print "sorted: ", sorted_dists
#print "average: ", av_dist
density /= len(coords)
d.append({'images': filename.split("GestaltAppreciation/ExpCode/")[1] ,'hull': hull, 'density': 1/density})
#print(d)
stims = pd.DataFrame(d)
# make dataset where row is image
dfim = dfApp.groupby(['images','condition'])['rating'].mean()
dfim= dfim.reset_index()
dfim.head()
dfmerged= pd.merge(dfim, stims, how='outer', on='images')
dfmerged.head()
dfmerged.to_csv("dataLikingWithLowlevel.csv")
dfApp= pd.merge(dfApp, stims, how='outer', on='images')
# GLM test
model = smf.glm(formula="rating ~ condition * hull * density", data=dfApp)
results = model.fit()
print(results.summary())
"""
Explanation: Compute measures: convex hull area and density
Convex hull area
Convex hull docs: putting an elastic band around all elements to form the smallest possible convex polygon
Intuitively the area depends on how much the points are spread out.
One possible measure of clustering/compactness
Density
For each dot: the mean distance to its 5 nearest neighbours (the code sorts all pairwise distances and takes indices 1–5), averaged over all dots; the stored density is the inverse of this mean.
End of explanation
"""
dfnummerged= pd.merge(dfimnum, stims, how='left', left_on='imgA', right_on='images')
dfnummerged = dfnummerged.rename(columns={'hull': 'Ahull','density': 'Adensity', 'images': 'imageA' })
dfnummerged= pd.merge(dfnummerged, stims, how='left', left_on='imgB', right_on='images')
dfnummerged = dfnummerged.rename(columns={'hull': 'Bhull','density': 'Bdensity', 'images': 'imageB' })
dfnummerged["hullDiff"]= dfnummerged.Ahull-dfnummerged.Bhull
dfnummerged["densDiff"]= dfnummerged.Adensity-dfnummerged.Bdensity
dfnummerged["img"]= dfnummerged['imgA'].str.extract("(\d+)", expand=False).astype(int)
dfnummerged.tail()
lldf = dfnummerged[dfnummerged.condition == "Gestalt-scrambled"]
lldf = lldf[['img','hullDiff', 'densDiff']]
lldf.head()
model = smf.glm(formula="acc ~ hullDiff*condition", data=dfnummerged, family=sm.families.Binomial())
results = model.fit()
print(results.summary())
"""
Explanation: Do low-level measures influence on numerosity judgments?
End of explanation
"""
g = sns.distplot(dfmerged[dfmerged.condition=="inverted"].hull, bins=15, label="inverted");
sns.distplot(dfmerged[dfmerged.condition=="scrambled"].hull, bins=15, label="scrambled");
sns.distplot(dfmerged[dfmerged.condition=="gestalt"].hull, bins=15, label="Gestalt");
plt.legend();
g.set(xlabel='Convex hull area', ylabel='Frequency');
"""
Explanation: How is convex hull area distributed? (over images)
End of explanation
"""
g = sns.pointplot(x="condition", y="hull", data=dfmerged, linestyles=["-"]);
g.set(xlabel='Condition', ylabel='Average convex hull area');
# GLM test
model = smf.glm(formula="hull ~ condition", data=dfmerged)
#model = smf.ols(formula="rating ~ hull", data=dfmerged)
results = model.fit()
print(results.summary())
"""
Explanation: Does convex hull area differ across conditions?
End of explanation
"""
sns.set_style("whitegrid",{"axes.grid": False})
sns.set_context("talk")
g = sns.lmplot(x="hull", y="rating", hue="condition",\
data=dfmerged, x_estimator=np.mean, palette="Set1");
g.set(xlabel='Convex hull area', ylabel='Rating');
sns.set_style("whitegrid",{"axes.grid": False})
# GLM test
model = smf.glm(formula="rating ~ condition * hull", data=dfmerged)
#model = smf.ols(formula="rating ~ hull", data=dfmerged)
results = model.fit()
print(results.summary())
"""
Explanation: Interim conclusion
As expected, convex hull area is significantly higher in the scrambled condition (compared to Gestalt or inverted)
Inversion does not matter (obviously). It is not exactly the same as Gestalt because some images were not inverted (see notes at top)
Are ratings dependent on convex hull area, in the different conditions?
End of explanation
"""
sns.distplot(dfmerged[dfmerged.condition=="inverted"].density, bins=15, label="inverted");
sns.distplot(dfmerged[dfmerged.condition=="scrambled"].density, bins=15, label="scrambled");
sns.distplot(dfmerged[dfmerged.condition=="gestalt"].density, bins=15, label="Gestalt");
plt.legend();
g.set(xlabel='Density', ylabel='Frequency');
"""
Explanation: Interim conclusion:
If we add hull area to the model (and interaction with condition) for DV rating:
Hull significantly influences ratings
Interestingly the main scrambling effect also becomes significant here
Interaction scramblingxhull is also (marginally) significant. As can be seen in the graph, this is because within the scrambling condition, hull does not seem to influence ratings much. This is only the case for the inverted and the Gestalt condition, specifically, both show a (similar) negative correlation.
Together this seems to indicate that ratings are determined by low-level structural differences rather than high-level Gestalt or coherence.
How is density distributed? (over images)
End of explanation
"""
sns.pointplot(x="condition", y="density", data=dfmerged, linestyles=["-"]);
g.set(xlabel='Condition', ylabel='Average density');
# GLM test
model = smf.glm(formula="density ~ condition", data=dfmerged)
#model = smf.ols(formula="rating ~ hull", data=dfmerged)
results = model.fit()
print(results.summary())
"""
Explanation: Does density differ across conditions?
End of explanation
"""
sns.set_style("whitegrid",{"axes.grid": False})
sns.set_context("talk")
g= sns.lmplot(x="density", y="rating",\
hue="condition", data=dfmerged, x_estimator=np.mean, palette="Set1");
g.set(xlabel='Density', ylabel='Rating');
# GLM test
model = smf.glm(formula="rating ~ condition * density", data=dfmerged)
#model = smf.ols(formula="rating ~ hull", data=dfmerged)
results = model.fit()
print(results.summary())
# GLM test
model = smf.glm(formula="rating ~ condition * hull * density", data=dfmerged)
#model = smf.ols(formula="rating ~ hull", data=dfmerged)
results = model.fit()
print(results.summary())
"""
Explanation: Interim conclusion
Density differs in the three conditions.
Inverted images should have the same density as Gestalt images, but because some images were not inverted this is not the case (see notes at top)
Are ratings dependent on density, in the different conditions?
End of explanation
"""
sns.jointplot(x="density", y="hull", data=dfmerged, kind="reg")
"""
Explanation: Interim conclusion:
If we add density to the model (and interaction with condition) for DV rating:
Density significantly influences ratings (higher density -> higher rating)
No other significant relations.
Is there a correlation between density and convex hull area? (Yes, a negative one, as expected)
End of explanation
"""
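The correlation in question can also be quantified directly. A minimal sketch using np.corrcoef on synthetic stand-ins for the hull and density columns (the real dfmerged frame is assumed, so the numbers here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for dfmerged['hull'] and dfmerged['density']:
# density is built to fall as convex hull area grows, plus noise
hull = rng.uniform(1000, 5000, size=200)
density = 50_000 / hull + rng.normal(0, 2, size=200)

# Pearson correlation between the two image-level measures
r = np.corrcoef(hull, density)[0, 1]
print(f"r = {r:.3f}")   # negative, mirroring the jointplot
```

On the real data, the same one-liner applied to dfmerged['hull'] and dfmerged['density'] gives the observed negative coefficient.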
|
Quantiacs/quantiacs-python | sampleSystems/svm_tutorial.ipynb | mit | import quantiacsToolbox
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import svm
%matplotlib inline
%%html
<style>
table {float:left}
</style>
"""
Explanation: Quantiacs Toolbox Sample: Support Vector Machine
This tutorial will show you how to use svm with the Quantiacs Toolbox to predict the next day's trend.
We will use the closing prices of the last week (5 days) as features and the trend of the next day as the target value.
For each prediction, we will use one year (252 days) of lookback data.
End of explanation
"""
F_AD = pd.read_csv('./tickerData/F_AD.txt')
CLOSE = np.array(F_AD.loc[:252-1, [' CLOSE']])
plt.plot(CLOSE)
"""
Explanation: For developing and testing a strategy, we will use the raw data in the tickerData folder that has been downloaded via the Toolbox's loadData() function.
This is just a simple sample to show how svm works.
Extract the closing price of the Australian Dollar future (F_AD) for the past year:
End of explanation
"""
X = np.concatenate([CLOSE[i:i+5] for i in range(252-5)], axis=1).T
y = np.sign((CLOSE[5:] - CLOSE[5-1: -1]).T[0])
"""
Explanation: Now we can create samples.
Use the last 5 days' close price as features.
We will use a binary trend: y = 1 if price goes up, y = -1 if price goes down
For example, given the close price on 19900114:
| DATE | CLOSE |
| :--- |------ |
| 19900110 | 77580.0 |
| 19900111 | 77980.0 |
| 19900112 | 78050.0 |
| 19900113 | 77920.0 |
| 19900114 | 77770.0 |
| 19900115 | 78060.0 |
Corresponding sample should be
x = (77580.0, 77980.0, 78050.0, 77920.0, 77770.0)
y = 1
End of explanation
"""
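The windowing above can be sanity-checked on a tiny hypothetical close series (not real F_AD data) before fitting the SVM:

```python
import numpy as np

# Hypothetical close prices for 8 "days", one column as in the tutorial
CLOSE = np.arange(100.0, 108.0).reshape(-1, 1)
n, gap = CLOSE.shape[0], 5

# Same construction as above: each row of X holds 5 consecutive closes,
# y is the sign of the following day's change
X = np.concatenate([CLOSE[i:i + gap] for i in range(n - gap)], axis=1).T
y = np.sign((CLOSE[gap:] - CLOSE[gap - 1:-1]).T[0])

print(X.shape, y.shape)   # (3, 5) (3,)
print(X[0], y[0])         # first window is days 0-4; the price rose, so y = 1.0
```

Because this toy series rises every day, every label is 1; real price data produces a mix of +1 and -1 labels.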
clf = svm.SVC()
clf.fit(X, y)
clf.predict(CLOSE[-5:].T)
"""
Explanation: Use svm to learn and predict:
End of explanation
"""
F_AD.loc[251:252, ['DATE', ' CLOSE']]
"""
Explanation: 1 shows that the close price will go up tomorrow.
What is the real value?
End of explanation
"""
class myStrategy(object):
def myTradingSystem(self, DATE, OPEN, HIGH, LOW, CLOSE, VOL, OI, P, R, RINFO, exposure, equity, settings):
"""
For 4 lookback days and 3 markets, CLOSE is a numpy array looks like
[[ 12798. 11537.5 9010. ]
[ 12822. 11487.5 9020. ]
[ 12774. 11462.5 8940. ]
[ 12966. 11587.5 9220. ]]
"""
# define helper function
# use close price predict the trend of the next day
def predict(CLOSE, gap):
lookback = CLOSE.shape[0]
X = np.concatenate([CLOSE[i:i + gap] for i in range(lookback - gap)], axis=1).T
y = np.sign((CLOSE[gap:lookback] - CLOSE[gap - 1:lookback - 1]).T[0])
y[y==0] = 1
clf = svm.SVC()
clf.fit(X, y)
return clf.predict(CLOSE[-gap:].T)
nMarkets = len(settings['markets'])
gap = settings['gap']
pos = np.zeros((1, nMarkets), dtype='float')
for i in range(nMarkets):
try:
pos[0, i] = predict(CLOSE[:, i].reshape(-1, 1),
gap, )
# for NaN data set position to 0
except ValueError:
pos[0, i] = 0.
return pos, settings
def mySettings(self):
""" Define your trading system settings here """
settings = {}
# Futures Contracts
settings['markets'] = ['CASH', 'F_AD', 'F_BO', 'F_BP', 'F_C', 'F_CC', 'F_CD',
'F_CL', 'F_CT', 'F_DX', 'F_EC', 'F_ED', 'F_ES', 'F_FC', 'F_FV', 'F_GC',
'F_HG', 'F_HO', 'F_JY', 'F_KC', 'F_LB', 'F_LC', 'F_LN', 'F_MD', 'F_MP',
'F_NG', 'F_NQ', 'F_NR', 'F_O', 'F_OJ', 'F_PA', 'F_PL', 'F_RB', 'F_RU',
'F_S', 'F_SB', 'F_SF', 'F_SI', 'F_SM', 'F_TU', 'F_TY', 'F_US', 'F_W', 'F_XX',
'F_YM']
settings['lookback'] = 252
settings['budget'] = 10 ** 6
settings['slippage'] = 0.05
settings['gap'] = 5
return settings
result = quantiacsToolbox.runts(myStrategy)
"""
Explanation: Hooray! Our strategy successfully predicted the trend.
Now we use the Quantiacs Toolbox to run our strategy.
End of explanation
"""
|
IST256/learn-python | content/lessons/07-Files/Slides.ipynb | mit | x = input()
if x.find("rr")!= -1:
y = x[1:]
else:
y = x[:-1]
print(y)
"""
Explanation: IST256 Lesson 07
Files
Zybook Ch7
P4E Ch7
Links
Participation: https://poll.ist256.com
Zoom Chat!
Agenda
Go Over Homework H06
New Stuff
The importance of a persistence layer in programming.
How to read and write from files.
Techniques for reading a file a line at a time.
Using exception handling with files.
FEQT (Future Exam Questions Training) 1
What is the output of the following code when berry is input on line 1?
End of explanation
"""
x = input()
y = x.split()
w = ""
for z in y:
w = w + z[1]
print(w)
"""
Explanation: A. erry
B. berr
C. berry
D. bey
Vote Now: https://poll.ist256.com
FEQT (Future Exam Questions Training) 2
What is the output of the following code when mike is cold is input on line 1?
End of explanation
"""
x = input()
x = x + x
x = x.replace("o","i")
x = x[:5]
print(x)
"""
Explanation: A. iic
B. ike
C. mic
D. iso
Vote Now: https://poll.ist256.com
FEQT (Future Exam Questions Training) 3
What is the output of the following code when tony is input on line 1?
End of explanation
"""
# all at once
with open(filename, 'r') as handle:
contents = handle.read()
# a line at a time
with open(filename, 'r') as handle:
for line in handle.readlines():
do_something_with_line
"""
Explanation: A. tony
B. tiny
C. tinyt
D. tonyt
Vote Now: https://poll.ist256.com
Connect Activity
Which of the following is not an example of secondary (persistent) memory?
A. Flash Memory
B. Hard Disk Drive (HDD)
C. Random-Access Memory (RAM)
D. Solid State Disk (SSD)
Vote Now: https://poll.ist256.com
Files == Persistence
Files add a Persistence Layer to our computing environment where we can store our data after the program completes.
Think: Saving a game's progress or saving your work!
When our program Stores data, we open the file for writing.
When our program Reads data, we open the file for reading.
To read or write a file we must first open it, which gives us a special variable called a file handle.
We then use the file handle to read or write from the file.
The read() function reads from the file and the write() function writes to the file, both through the file handle.
Reading From a File
Two approaches... that's it!
End of explanation
"""
# write mode
with open(filename, 'w') as handle:
handle.write(something)
# append mode
with open(filename, 'a') as handle:
handle.write(something)
"""
Explanation: Writing a To File
End of explanation
"""
a = "savename.txt"
with open(a,'w') as b:
c = input("Enter your name: ")
b.write(c)
"""
Explanation: Watch Me Code 1
Let’s Write two programs.
Save a text message to a file.
Retrieve the text message from the file.
Check Yourself: Which line 1
Which line number creates the file handle?
End of explanation
"""
with open("sample.txt","r") as f:
for line in f.readlines():
print(line)
g = "done"
"""
Explanation: A. 1
B. 2
C. 3
D. 4
Vote Now: https://poll.ist256.com
Watch Me Code 2
Common patterns for reading and writing more than one item to a file.
- Input a series of grades, write them to a file one line at a time.
- Read in that file one line at a time, print average.
Check Yourself: Which line 2
On which line number does the file handle no longer exist?
End of explanation
"""
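Watch Me Code 2 itself is not shown on the slide; here is a minimal sketch of the grades pattern it describes (the file name grades.txt is ours, and the grades are hard-coded instead of coming from input()):

```python
import os
import tempfile

# Hard-coded grades; in the lecture demo these come from input()
grades = [90.0, 85.0, 100.0]
filename = os.path.join(tempfile.gettempdir(), "grades.txt")

# Write the grades to the file, one per line
with open(filename, 'w') as handle:
    for grade in grades:
        handle.write(f"{grade}\n")

# Read them back a line at a time and compute the average
total = count = 0
with open(filename, 'r') as handle:
    for line in handle.readlines():
        total += float(line)
        count += 1
print(f"Average: {total / count}")   # Average: 91.66666666666667
```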
try:
file = 'data.txt'
with open(file,'r') as f:
print( f.read() )
except FileNotFoundError:
print(f"{file} was not found!")
"""
Explanation: A. 1
B. 2
C. 3
D. 4
Vote Now: https://poll.ist256.com
Your Operating System and You
Files are stored in your secondary memory in folders.
When the python program is in the same folder as the file, no path is required.
When the file is in a different folder, a path is required.
Absolute paths point to a file starting at the root of the hard disk.
Relative paths point to a file starting at the current place on the hard disk.
Python Path Examples
<table style="font-size:1.0em;">
<thead><tr>
<th>What</th>
<th>Windows</th>
<th>Mac/Linux</th>
</tr></thead>
<tbody>
<tr>
<td><code> File in current folder </code></td>
<td> "file.txt" </td>
<td> "file.txt"</td>
</tr>
<tr>
<td><code> File up one folder from the current folder </code></td>
<td> "../file.txt"</td>
<td> "../file.txt"</td>
</tr>
<tr>
<td><code> File in a folder from the current folder </code></td>
<td> "folder1/file.txt" </td>
<td> "folder1/file.txt" </td>
</tr>
<tr>
<td><code> Absolute path to file in a folder</code></td>
<td> "C:/folder1/file.txt" </td>
<td> "/folder1/file.txt"</td>
</tr>
</tbody>
</table>
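The forms in the table can be checked with the standard library. A small sketch using posixpath explicitly, so the answers match the Mac/Linux column regardless of which OS runs it:

```python
import posixpath

# Relative forms from the table: current folder, parent, subfolder
print(posixpath.isabs("file.txt"))           # False
print(posixpath.isabs("../file.txt"))        # False
print(posixpath.isabs("folder1/file.txt"))   # False

# The absolute form starts at the root of the disk
print(posixpath.isabs("/folder1/file.txt"))  # True

# Joining a folder and a file name yields a relative path
print(posixpath.join("folder1", "file.txt")) # folder1/file.txt
```

On Windows, os.path (ntpath) applies the same idea to drive-letter paths like "C:/folder1/file.txt".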
Check Yourself: Path
- Is this path relative or absolute?
"/path/to/folder/file.txt"
A. Relative
B. Absolute
C. Neither
D. Not sure
Vote Now: https://poll.ist256.com
Handling Errors with Try…Except
I/O is the ideal use case for exception handling.
Don't assume you can read a file!
Use try… except!
End of explanation
"""
file = "a.txt"
with open(file,'w'):
file.write("Hello")
"""
Explanation: End-To-End Example (Pre-Recorded)
How Many Calories in that Beer?
Let's write a program to search a data file of 254 popular beers. Given the name of the beer the program will return the number of calories.
Watch this here:
https://youtu.be/s-1ToO0dJIs
End-To-End Example
A Better Spell Check
In this example, we create a better spell checker than the one from small group.
read words from a file
read text to check from a file.
Conclusion Activity : One Question Challenge
What is wrong with the following code:
End of explanation
"""
|
Upward-Spiral-Science/ugrad-data-design-team-0 | reveal/pdfs/nuis_methods.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
import ndmg.nuis.nuis as ndn # nuisance correction scripts
import matplotlib.pyplot as plt
import nibabel as nb
import scipy.fftpack as scifft
from sklearn.metrics import r2_score
L = 200
tr = 2 # the tr of our data
t = np.linspace(0, L-1, L) # generate time steps
stim_freq = .1
stim = np.sin(2*np.pi*stim_freq*t*tr) # generate our stimulus timeseries
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], stim[0:100])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus before corruption')
low_freq = [0.003,0.006]
low_freq_amp = [.2,.1]
sig = stim.copy()
# add low-frequency drift
for freq, amp in zip(low_freq, low_freq_amp):
sig += amp*np.sin(2*np.pi*freq*t*tr)
# so that we can use our fft code on it, we need an ndarray not
# a 1d array
sig = np.expand_dims(sig, axis=1)
bef_r2 = r2_score(stim, sig[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], sig[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after corruption, r^2 = ' + str(bef_r2))
corr = ndn().freq_filter(sig, tr)
aft_r2 = r2_score(stim, corr[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], corr[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after correction, r^2 = ' + str(aft_r2))
"""
Explanation: FFT/Linear Regression Nuisance Correction
created by Eric Bridgeford
Task
In an experimental setting, a number of nuisance variables can hamper the effectiveness of resting-state fMRI. Variables such as motion, scanner noise, and physiological drift, referred to as "nuisance" variables, can significantly deteriorate the quality of downstream inferences that researchers are able to make.
Our data observations at a given timestep can be described as follows:
\begin{align}
T_{observed} = L + Q + P + \epsilon
\end{align}
Where the observed signal $T \in \mathbb{R}^{t \times n}$ contains the desired latent neurological signal $L \in \mathbb{R}^{t \times n}$, as well as perturbations from $Q \in \mathbb{R}^{t \times n}$, a quadratic drift term produced by the scanner, $P \in \mathbb{R}^{t \times n}$, a low-frequency periodic drift, and $\epsilon \in \mathbb{R}^{t \times n}$, an error term that is a result of noise that we cannot account for.
Loss function
Here, we will make use of the sum of squared residuals (SSR) loss function. We note that our timeseries $T_{ldr}$ (low-frequency drift-removed; that is, our timeseries after removing low-frequency drift) can be modeled as:
\begin{align}
T_{ldr} = L + Q + \epsilon
\end{align}
where we assume that our quadratic $Q$ is the product between $X \in \mathbb{R}^{t \times 3}$ our quadratic function and $w \in \mathbb{R}^3$ the coeffients of our quadratic, such that:
\begin{align}
Q = Xw \
Q = \begin{bmatrix}
1 & 1 & 1 \
1 & 2 & 4 \
1 & 3 & 9 \
\vdots & \vdots & \vdots \
1 & t & t^2
\end{bmatrix} \begin{bmatrix}
w_0 \ w_1 \ w_2
\end{bmatrix}
\end{align}
We assume that our error $\epsilon$ is minimal, and rewrite our timeseries $T_{ldr}$:
\begin{align}
T_{ldr} = L + Xw \
L = T_{ldr} - Xw
\end{align}
Assuming that we want to fit our data maximally (and thereby remove the best-fit quadratic, which is a simple enough model that it will not regress signal explicitly), we can use SSR:
\begin{align}
L(w | X, T_{ldr}) = ||L||_2^2 = ||T_{ldr} - Xw||_2^2
\end{align}
Where we will penalize points that are not effectively fit by our model significantly due to squaring the error.
Statistical Goal
To estimate our quadratic drift, we want to find the coefficients for a quadratic function such that:
\begin{align}
\hat{w} = \textrm{argmin}_w L(w | X, T_{ldr})
\end{align}
Where $X \in \mathbb{R}^{t \times 3}$ is a quadratic function defined from $1:t$ with 3 coefficients; that is, a constant drift, linear drift, and quadratic drift. An example of $X$ is shown above.
Our FFT does not have a statistical goal.
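This least-squares estimate can be sketched directly with numpy on synthetic data; np.linalg.lstsq here stands in for whatever solver the ndmg implementation uses internally:

```python
import numpy as np

t = np.arange(200)
true_w = np.array([0.75, -0.01, 0.00005])   # constant, linear, quadratic drift

# Design matrix X = [1, t, t^2], as in the derivation above
X = np.column_stack((np.ones_like(t), t, t ** 2)).astype(float)

rng = np.random.default_rng(0)
latent = np.sin(2 * np.pi * 0.1 * t)        # stand-in for the latent signal L
T_ldr = latent + X @ true_w + rng.normal(0, 0.01, t.size)

# w_hat = argmin_w ||T_ldr - X w||_2^2
w_hat, *_ = np.linalg.lstsq(X, T_ldr, rcond=None)
detrended = T_ldr - X @ w_hat               # estimate of L (mean-normalized)
print(w_hat)                                # close to true_w
```

Subtracting the fitted quadratic also removes the constant term, which is why this step mean-normalizes the timeseries.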
Desiderata
With this algorithm, we have several critical desiderata that we are concerned with:
We want an algorithm that effectively models the nuisance variables in our timeseries
We want an algorithm that, unlike component correction, avoids removing task-dependent stimulus
We want an algorithm that will perform quickly under experimental settings, as fMRI processing is not cheap.
Approach
Algorithm
High Level Description
With this algorithm, we begin by removing physiologically-related low-frequency drift, which can be due to physiological effects, scanner effects, and a number of other factors. We use the FFT to estimate the Fourier components of our signal, remove signal that is well below the frequency range of actual brain response (signal under 0.01 Hz in particular), and then perform an inverse FFT to bring us back to the time domain. After correcting out low-frequency drift, we model our resulting timeseries with a quadratic, estimate our quadratic coefficients with least-squares regression, and finally remove our quadratic function from our data. In the process, this also mean-normalizes our data, which provides a favorable common axis for downstream inferences (our timeseries otherwise would be all over the place). We then replace and return our resulting nuisance-corrected fMRI timeseries.
Pseudocode
(pseudocode figure omitted)
Evaluation
Simulation Performance
To evaluate our new FFT code, we will generate examples from various combinations of sinusoids, with one goal sinusoid corrupted with various low-frequency sinusoids, and demonstrate that we are able to recover our original signal after passing low-frequency signal. Then, we will generate an example from a quadratic with a periodic function, and show that we are able to recover our periodic stimulus. We will evaluate our simulations with qualitative comparisons before and after correction, as well as $R^2$ values between the actual signal and the uncorrected vs corrected signal. Finally, we will construct a simulation combining a quadratic with various low frequency sinusoids and one goal sinusoid, and verify that we are able to recover our goal sinusoid after correction.
Real Data Performance
Real data will be analyzed using the discriminability statistic.
Simulations
Positive Illustrative Example
Frequency Filtering Check
First, we will demonstrate a positive example of our frequency filtering code. We will construct one "stimulus" sinusoid (approximately 3 Hz), and permute it with several low-frequency sinusoids below the cutoff point that our real data would have (less than 0.01 Hz).
End of explanation
"""
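Before the notebook's own checks below, here is a self-contained sketch of the highpass step, with numpy's real FFT standing in for ndn().freq_filter (whose internals are assumed here):

```python
import numpy as np

def highpass(ts, tr, cutoff=0.01):
    """Zero out Fourier components below `cutoff` Hz, then invert the FFT.
    A sketch of the frequency-filtering step; ndn().freq_filter is assumed
    to do something equivalent."""
    n = ts.shape[0]
    freqs = np.fft.rfftfreq(n, d=tr)      # frequency (Hz) of each component
    spectrum = np.fft.rfft(ts, axis=0)
    spectrum[freqs < cutoff] = 0          # drop the mean and low-frequency drift
    return np.fft.irfft(spectrum, n=n, axis=0)

tr = 2.0
t = np.arange(200)
stim = np.sin(2 * np.pi * 0.1 * t * tr)            # 0.1 Hz stimulus
drift = 0.2 * np.sin(2 * np.pi * 0.005 * t * tr)   # 0.005 Hz nuisance drift
corrected = highpass((stim + drift)[:, None], tr)
print(np.max(np.abs(corrected[:, 0] - stim)))      # tiny residual
```

Both frequencies here fall exactly on FFT bins (integer cycles over the 400 s window), so removal is essentially exact; real drift leaks across bins, as the negative example later shows.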
%matplotlib inline
import numpy as np
import ndmg.nuis.nuis as ndn # nuisance correction scripts
import matplotlib.pyplot as plt
import nibabel as nb
import scipy.fftpack as scifft
from sklearn.metrics import r2_score
L = 200
tr = 2 # the tr of our data
t = np.linspace(0, L-1, L) # generate time steps
stim_freq = .1
stim = np.sin(2*np.pi*stim_freq*t*tr) # generate our stimulus timeseries
# generate coefficients for linear model
w = np.array([.75, -.01, .00005])
w = np.expand_dims(w, 1)
const = np.ones((L,1))
lin = np.array(range(0, L))
quad = np.array(range(0, L))**2
# make a design matrix
R = np.column_stack((const, lin, quad))
sig = stim.copy()
sig = np.expand_dims(sig, 1)
sig += R.dot(w)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], stim[0:100])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus before corruption')
bef_r2 = r2_score(stim, sig[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], sig[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after corruption, r^2 = ' + str(bef_r2))
corr = ndn().linear_reg(sig)
aft_r2 = r2_score(stim, corr[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], corr[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after correction, r^2 = ' + str(aft_r2))
"""
Explanation: As we can see, we are able to do a good job of removing the noise, as our $R^2$ increases from around .9 to approximately .999, which indicates that our resulting signal very significantly resembles our actual signal.
Quadratic Drift Removal Check
Next, we will double check the effectiveness of our ability to remove quadratic drift with our linear regression model. We will have the same stimulus sinusoid as before, and we will produce quadratic drift coefficients to simulate quadratic drifting.
End of explanation
"""
%matplotlib inline
import numpy as np
import ndmg.nuis.nuis as ndn # nuisance correction scripts
import matplotlib.pyplot as plt
import nibabel as nb
import scipy.fftpack as scifft
from sklearn.metrics import r2_score
L = 200
tr = 2 # the tr of our data
t = np.linspace(0, L-1, L) # generate time steps
stim_freq = .1
stim = np.sin(2*np.pi*stim_freq*t*tr) # generate our stimulus timeseries
# generate coefficients for linear model
w = np.array([.75, -.01, .00005])
w = np.expand_dims(w, 1)
const = np.ones((L,1))
lin = np.array(range(0, L))
quad = np.array(range(0, L))**2
# make a design matrix
R = np.column_stack((const, lin, quad))
sig = stim.copy()
sig = np.expand_dims(sig, 1)
sig += R.dot(w)
low_freq = [0.003,0.006]
low_freq_amp = [.2,.1]
# add low-frequency drift
for freq, amp in zip(low_freq, low_freq_amp):
sig += np.expand_dims(amp*np.sin(2*np.pi*freq*t*tr), 1)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], stim[0:100])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus before corruption')
bef_r2 = r2_score(stim, sig[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], sig[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after corruption, r^2 = ' + str(bef_r2))
corr = ndn().freq_filter(sig, tr, 0.01)
corr = ndn().linear_reg(corr)
aft_r2 = r2_score(stim, corr[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], corr[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after correction, r^2 = ' + str(aft_r2))
"""
Explanation: Again, we can see that we are able to virtually perfectly fit the predicted response, as we increase our $R^2$ from .68 to .999.
Combined Check
Next, we will do a combined check using the same data from the previous two tests to verify that when we perform fft followed by quadratic detrending, we are again able to recover the expected sinusoid with similar accuracy.
End of explanation
"""
%matplotlib inline
import numpy as np
import ndmg.nuis.nuis as ndn # nuisance correction scripts
import matplotlib.pyplot as plt
import nibabel as nb
import scipy.fftpack as scifft
from sklearn.metrics import r2_score
L = 200
tr = 2 # the tr of our data
t = np.linspace(0, L-1, L) # generate time steps
stim_freq = .1
stim = np.sin(2*np.pi*stim_freq*t*tr) # generate our stimulus timeseries
low_freq = [.0095, .008]
low_freq_amp = [.2, .2]
sig = stim.copy()
sig = np.expand_dims(sig, 1)
# add low-frequency drift
for freq, amp in zip(low_freq, low_freq_amp):
sig += np.expand_dims(amp*np.sin(2*np.pi*freq*t*tr), 1)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], stim[0:100])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus before corruption')
bef_r2 = r2_score(stim, sig[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], sig[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after corruption, r^2 = ' + str(bef_r2))
corr = ndn().freq_filter(sig, tr, 0.01)
corr = ndn().linear_reg(corr)
aft_r2 = r2_score(stim, corr[:,0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot((t*tr)[0:100], corr[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after correction, r^2 = ' + str(aft_r2))
low_freq2 = [0.003,0.006]
low_freq_amp2 = [.2,.2]
sig2 = stim.copy()
# add low-frequency drift
for freq, amp in zip(low_freq2, low_freq_amp2):
sig2 += amp*np.sin(2*np.pi*freq*t*tr)
# so that we can use our fft code on it, we need an ndarray not
# a 1d array
sig2 = np.expand_dims(sig2, axis=1)
fig = plt.figure()
ax = fig.add_subplot(111)
l1, = ax.plot((t*tr)[0:100], sig[0:100,0])
l2, = ax.plot((t*tr)[0:100], sig2[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Comparing stimulus after corruption')
fig.legend((l1, l2), ('close', 'far'))
corr2 = ndn().freq_filter(sig2, tr)
fig = plt.figure()
ax = fig.add_subplot(111)
l1, = ax.plot((t*tr)[0:100], corr[0:100,0])
l2, = ax.plot((t*tr)[0:100], 2*corr2[0:100,0])
ax.set_xlabel('Timestep')
ax.set_ylabel('Response')
ax.set_title('Stimulus after correction')
fig.legend((l1, l2), ('close', 'far'))
"""
Explanation: As we can see, the recovered signal dips initially, but converges to approximately the true signal within a few timesteps. This illustrates a weakness of highpass filtering: it has difficulty passing signal content near the endpoints of the series properly.
Negative Illustrative Example
The highpass filter will struggle in particular when frequencies sit right near our cutoff; when the cutoff is well beyond the nuisance frequency, the FFT has a significant margin for mis-binning error. However, if a nuisance signal sits right near the cutoff and is mis-binned, we risk its sinusoids landing above our cutoff, so the corruption is not removed. We demonstrate this below:
End of explanation
"""
|
AllenDowney/ThinkBayes2 | soln/chap20.ipynb | mit | # If we're running on Colab, install libraries
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
"""
Explanation: Approximate Bayesian Computation
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
"""
import numpy as np
def calc_volume(diameter):
"""Converts a diameter to a volume."""
factor = 4 * np.pi / 3
return factor * (diameter/2.0)**3
"""
Explanation: This chapter introduces a method of last resort for the most complex problems, Approximate Bayesian Computation (ABC).
I say it is a last resort because it usually requires more computation than other methods, so if you can solve a problem any other way, you should.
However, for the examples in this chapter, ABC is not just easy to implement; it is also efficient.
The first example is my solution to a problem posed by a patient
with a kidney tumor.
I use data from a medical journal to model tumor growth, and use simulations to estimate the age of a tumor based on its size.
The second example is a model of cell counting, which has applications in biology, medicine, and zymurgy (beer-making).
Given a cell count from a diluted sample, we estimate the concentration of cells.
Finally, as an exercise, you'll have a chance to work on a fun sock-counting problem.
The Kidney Tumor Problem
I am a frequent reader and occasional contributor to the online
statistics forum at http://reddit.com/r/statistics.
In November 2011, I read the following message:
"I have Stage IV Kidney Cancer and am trying to determine if the cancer formed before I retired from the military. ... Given the dates of retirement and detection is it possible to determine when there was a 50/50 chance that I developed the disease? Is it possible to determine the probability on the retirement date? My tumor was 15.5 cm x 15 cm at detection. Grade II."
I contacted the author of the message to get more information; I
learned that veterans get different benefits if it is "more likely than not" that a tumor formed while they were in military service (among other considerations).
So I agreed to help him answer his question.
Because renal tumors grow slowly, and often do not cause symptoms, they are sometimes left untreated. As a result, doctors can observe the rate of growth for untreated tumors by comparing scans from the same patient at different times. Several papers have reported these growth rates.
For my analysis I used data from a paper by Zhang et al.
They report growth rates in two forms:
Volumetric doubling time, which is the time it would take for a tumor to double in size.
Reciprocal doubling time (RDT), which is the number of doublings per year.
The next section shows how we work with these growth rates.
Zhang et al, Distribution of Renal Tumor Growth Rates Determined
by Using Serial Volumetric CT Measurements, January 2009
Radiology, 250, 137-144.
https://pubs.rsna.org/doi/full/10.1148/radiol.2501071712
A Simple Growth Model
We'll start with a simple model of tumor growth based on two assumptions:
Tumors grow with a constant doubling time, and
They are roughly spherical in shape.
And I'll define two points in time:
t1 is when my correspondent retired.
t2 is when the tumor was detected.
The time between t1 and t2 was about 9.0 years.
As an example, let's assume that the diameter of the tumor was 1 cm at t1, and estimate its size at t2.
I'll use the following function to compute the volume of a sphere with a given diameter.
End of explanation
"""
d1 = 1
v1 = calc_volume(d1)
v1
"""
Explanation: Assuming that the tumor is spherical, we can compute its volume at t1.
End of explanation
"""
median_doubling_time = 811
rdt = 365 / median_doubling_time
rdt
"""
Explanation: The median volume doubling time reported by Zhang et al. is 811 days, which corresponds to an RDT of 0.45 doublings per year.
End of explanation
"""
interval = 9.0
doublings = interval * rdt
doublings
"""
Explanation: We can compute the number of doublings that would have happened in the interval between t1 and t2:
End of explanation
"""
v2 = v1 * 2**doublings
v2
"""
Explanation: Given v1 and the number of doublings, we can compute the volume at t2.
End of explanation
"""
def calc_diameter(volume):
"""Converts a volume to a diameter."""
factor = 3 / np.pi / 4
return 2 * (factor * volume)**(1/3)
"""
Explanation: The following function computes the diameter of a sphere with the given volume.
End of explanation
"""
d2 = calc_diameter(v2)
d2
"""
Explanation: So we can compute the diameter of the tumor at t2:
End of explanation
"""
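The chain above (volume at t1, number of doublings, volume and diameter at t2) can be wrapped in one helper; the function name is ours, not from the chapter:

```python
import numpy as np

def grow_diameter(d1, rdt, years):
    """Diameter after `years` of growth at `rdt` doublings per year,
    assuming a spherical tumor. (This helper is ours, not the chapter's.)"""
    v1 = 4 * np.pi / 3 * (d1 / 2) ** 3
    v2 = v1 * 2 ** (rdt * years)
    return 2 * (3 * v2 / (4 * np.pi)) ** (1 / 3)

# 1 cm tumor growing at the median rate for 9 years -> about 2.5 cm
print(grow_diameter(1, 365 / 811, 9.0))
```

Because volume scales with the cube of diameter, the diameter grows by a factor of 2**(doublings / 3), regardless of the starting size.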
# Data from the histogram in Figure 3
import numpy as np
from empiricaldist import Pmf
counts = [2, 29, 11, 6, 3, 1, 1]
rdts = np.arange(-1, 6) + 0.01
pmf_rdt = Pmf(counts, rdts)
pmf_rdt.normalize()
# Data from the scatter plot in Figure 4
rdts = [5.089, 3.572, 3.242, 2.642, 1.982, 1.847, 1.908, 1.798,
1.798, 1.761, 2.703, -0.416, 0.024, 0.869, 0.746, 0.257,
0.269, 0.086, 0.086, 1.321, 1.052, 1.076, 0.758, 0.587,
0.367, 0.416, 0.073, 0.538, 0.281, 0.122, -0.869, -1.431,
0.012, 0.037, -0.135, 0.122, 0.208, 0.245, 0.404, 0.648,
0.673, 0.673, 0.563, 0.391, 0.049, 0.538, 0.514, 0.404,
0.404, 0.33, -0.061, 0.538, 0.306]
rdt_sample = np.array(rdts)
len(rdt_sample)
"""
Explanation: If the diameter of the tumor was 1 cm at t1, and it grew at the median rate, the diameter would be about 2.5 cm at t2.
This example demonstrates the growth model, but it doesn't answer the question my correspondent posed.
A More General Model
Given the size of a tumor at time of diagnosis, we would like to know the distribution of its age.
To find it, we'll run simulations of tumor growth to get the distribution of size conditioned on age.
Then we'll compute the distribution of age conditioned on size.
The simulation starts with a small tumor and runs these steps:
Choose a value from the distribution of growth rates.
Compute the size of the tumor at the end of an interval.
Repeat until the tumor exceeds the maximum relevant size.
So the first thing we need is the distribution of growth rates.
Using the figures in the paper by Zhang et al., I created an array, rdt_sample, that contains estimated values of RDT for the 53 patients in the study.
Again, RDT stands for "reciprocal doubling time", which is in doublings per year.
So if rdt=1, a tumor would double in volume in one year.
If rdt=2, it would double twice; that is, the volume would quadruple.
And if rdt=-1, it would halve in volume.
End of explanation
"""
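The simulation below draws growth rates with pmf_rdt.choice(); here is a numpy-only stand-in for that kind of weighted draw, on a toy distribution rather than the real RDT sample:

```python
import numpy as np

rng = np.random.default_rng(17)

# A toy discrete RDT distribution (quantities and probabilities),
# standing in for the empiricaldist Pmf used in this chapter
qs = np.array([-1.0, 0.0, 1.0, 2.0])
ps = np.array([0.1, 0.4, 0.4, 0.1])

draws = rng.choice(qs, p=ps, size=10_000)
print(draws.mean())   # near the distribution's mean, 0.5
```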
from utils import kde_from_sample
qs = np.linspace(-2, 6, num=201)
pmf_rdt = kde_from_sample(rdt_sample, qs)
1 / pmf_rdt.median() * 365
"""
Explanation: We can use the sample of RDTs to estimate the PDF of the distribution.
End of explanation
"""
from utils import decorate
pmf_rdt.plot(label='rdts')
decorate(xlabel='Reciprocal doubling time (RDT)',
ylabel='PDF',
title='Distribution of growth rates')
"""
Explanation: Here's what it looks like.
End of explanation
"""
interval = 245 / 365 # year
min_diameter = 0.3 # cm
max_diameter = 20 # cm
"""
Explanation: In the next section we will use this distribution to simulate tumor growth.
Simulation
Now we're ready to run the simulations.
Starting with a small tumor, we'll simulate a series of intervals until the tumor reaches a maximum size.
At the beginning of each simulated interval, we'll choose a value from the distribution of growth rates and compute the size of the tumor at the end.
I chose an interval of 245 days (about 8 months) because that is the
median time between measurements in the data source.
For the initial diameter I chose 0.3 cm, because carcinomas smaller than that are less likely to be invasive and less likely to have the blood supply needed for rapid growth (see this page on carcinoma).
For the maximum diameter I chose 20 cm.
End of explanation
"""
v0 = calc_volume(min_diameter)
vmax = calc_volume(max_diameter)
v0, vmax
"""
Explanation: I'll use calc_volume to compute the initial and maximum volumes:
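calc_volume and calc_diameter come from the book's supporting code, which is not shown in this chapter. Assuming spherical tumors, a minimal sketch of them would be:

```python
import math

def calc_volume(diameter):
    # Volume of a sphere with the given diameter: (pi/6) * d**3.
    return math.pi / 6 * diameter ** 3

def calc_diameter(volume):
    # Diameter of a sphere with the given volume (inverse of calc_volume).
    return (6 * volume / math.pi) ** (1 / 3)
```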
End of explanation
"""
import pandas as pd
def simulate_growth(pmf_rdt):
"""Simulate the growth of a tumor."""
age = 0
volume = v0
res = []
while True:
res.append((age, volume))
if volume > vmax:
break
rdt = pmf_rdt.choice()
age += interval
doublings = rdt * interval
volume *= 2**doublings
columns = ['age', 'volume']
sim = pd.DataFrame(res, columns=columns)
sim['diameter'] = calc_diameter(sim['volume'])
return sim
"""
Explanation: The following function runs the simulation.
End of explanation
"""
np.random.seed(17)
sim = simulate_growth(pmf_rdt)
"""
Explanation: simulate_growth takes as a parameter a Pmf that represents the distribution of RDT.
It initializes the age and volume of the tumor, then runs a loop that simulates one interval at a time.
Each time through the loop, it checks the volume of the tumor and exits if it exceeds vmax.
Otherwise it chooses a value from pmf_rdt and updates age and volume. Since rdt is in doublings per year, we multiply by interval to compute the number of doublings during each interval.
At the end of the loop, simulate_growth puts the results in a DataFrame and computes the diameter that corresponds to each volume.
Here's how we call this function:
End of explanation
"""
sim.head(3)
"""
Explanation: Here are the results for the first few intervals:
End of explanation
"""
sim.tail(3)
"""
Explanation: And the last few intervals.
End of explanation
"""
np.random.seed(17)
sims = [simulate_growth(pmf_rdt) for _ in range(101)]
"""
Explanation: To show the results graphically, I'll run 101 simulations:
End of explanation
"""
import matplotlib.pyplot as plt
diameters = [4, 8, 16]
for diameter in diameters:
plt.axhline(diameter,
color='C5', linewidth=2, ls=':')
for sim in sims:
plt.plot(sim['age'], sim['diameter'],
color='C1', linewidth=0.5, alpha=0.5)
decorate(xlabel='Tumor age (years)',
ylabel='Diameter (cm, log scale)',
ylim=[0.2, 20],
yscale='log')
yticks = [0.2, 0.5, 1, 2, 5, 10, 20]
plt.yticks(yticks, yticks);
"""
Explanation: And plot the results.
End of explanation
"""
from scipy.interpolate import interp1d
def interpolate_ages(sims, diameter):
"""Estimate the age when each tumor reached a given size."""
ages = []
for sim in sims:
interp = interp1d(sim['diameter'], sim['age'])
age = interp(diameter)
ages.append(float(age))
return ages
"""
Explanation: In this figure, each thin, solid line shows the simulated growth of a tumor over time, with diameter on a log scale.
The dotted lines are at 4, 8, and 16 cm.
By reading across the dotted lines, you can get a sense of the distribution of age at each size.
For example, reading across the top line, we see that the age of a 16 cm tumor might be as low as 10 years or as high as 40 years, but it is most likely to be between 15 and 30.
To compute this distribution more precisely, we can interpolate the growth curves to see when each one passes through a given size.
The following function takes the results of the simulations and returns the age when each tumor reached a given diameter.
End of explanation
"""
from empiricaldist import Cdf
ages = interpolate_ages(sims, 15)
cdf = Cdf.from_seq(ages)
print(cdf.median(), cdf.credible_interval(0.9))
"""
Explanation: We can call this function like this:
End of explanation
"""
1 - cdf(9.0)
"""
Explanation: For a tumor 15 cm in diameter, the median age is about 22 years, the 90% credible interval is between 13 and 34 years, and the probability that it formed less than 9 years ago is less than 1%.
End of explanation
"""
for diameter in diameters:
ages = interpolate_ages(sims, diameter)
cdf = Cdf.from_seq(ages)
cdf.plot(label=f'{diameter} cm')
decorate(xlabel='Tumor age (years)',
ylabel='CDF')
"""
Explanation: But this result is based on two modeling decisions that are potentially problematic:
In the simulations, growth rate during each interval is independent of previous growth rates. In reality it is plausible that tumors that have grown quickly in the past are likely to grow quickly in the future. In other words, there is probably a serial correlation in growth rate.
To convert from linear measure to volume, we assume that tumors are approximately spherical.
In additional experiments, I implemented a simulation that chooses growth rates with serial correlation; the effect is that the fast-growing tumors grow faster and the slow-growing tumors grow slower.
Nevertheless, with moderate correlation (0.5), the probability that a 15 cm tumor is less than 9 years old is only about 1%.
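One way to implement serial correlation, sketched here with an assumed function name and approach (not the book's actual code), is to map a Gaussian AR(1) series through the empirical quantiles of the sample:

```python
import math
import random

def correlated_rdts(rdt_sample, n, rho=0.5, seed=17):
    # Sketch: a Gaussian AR(1) series is pushed through the empirical
    # quantiles of rdt_sample, so the marginal distribution roughly matches
    # the data while consecutive draws are correlated with strength rho.
    rng = random.Random(seed)
    xs = sorted(rdt_sample)
    z = rng.gauss(0, 1)
    out = []
    for _ in range(n):
        u = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
        idx = min(int(u * len(xs)), len(xs) - 1)    # empirical quantile lookup
        out.append(xs[idx])
        z = rho * z + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    return out
```

With rho=0 this reduces to independent draws; larger rho makes fast growers tend to stay fast.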
The assumption that tumors are spherical is probably fine for tumors up to a few centimeters, but not for a tumor with linear dimensions 15.5 x 15 cm.
If, as seems likely, a tumor this size is relatively flat, it might have the same volume as a 6 cm sphere.
But even with this smaller volume and correlation 0.5, the probability that this tumor is less than 9 years old is about 5%.
So even taking into account modeling errors, it is unlikely that such a large tumor could have formed after my correspondent retired from military service.
The following figure shows the distribution of ages for tumors with diameters 4, 8, and 16 cm.
End of explanation
"""
total_squares = 25
squares_counted = 5
yeast_counted = 49
"""
Explanation: Approximate Bayesian Computation
At this point you might wonder why this example is in a book about Bayesian statistics.
We never defined a prior distribution or did a Bayesian update.
Why not? Because we didn't have to.
Instead, we used simulations to compute ages and sizes for a collection of hypothetical tumors.
Then, implicitly, we used the simulation results to form a joint distribution of age and size.
If we select a column from the joint distribution, we get a distribution of size conditioned on age.
If we select a row, we get a distribution of age conditioned on size.
So this example is like the ones we saw in <<_Probability>>: if you have all of the data, you don't need Bayes's theorem; you can compute probabilities by counting.
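The row/column idea can be sketched with a toy joint distribution (the numbers below are made up for illustration; they are not the simulation output):

```python
# A toy joint distribution of (age, size); rows are ages in years,
# inner keys are diameters in cm, values are joint probabilities.
joint = {
    10: {4: 0.10, 8: 0.05, 16: 0.00},
    20: {4: 0.15, 8: 0.20, 16: 0.05},
    30: {4: 0.05, 8: 0.15, 16: 0.25},
}

def condition_on_age(joint, age):
    # Selecting a row gives the distribution of size conditioned on age.
    row = joint[age]
    total = sum(row.values())
    return {size: p / total for size, p in row.items()}

def condition_on_size(joint, size):
    # Selecting a column gives the distribution of age conditioned on size.
    col = {age: row[size] for age, row in joint.items()}
    total = sum(col.values())
    return {age: p / total for age, p in col.items()}
```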
This example is a first step toward Approximate Bayesian Computation (ABC).
The next example is a second step.
Counting Cells
This example comes from this blog post by Cameron Davidson-Pilon.
In it, he models the process biologists use to estimate the concentration of cells in a sample of liquid.
The example he presents is counting cells in a "yeast slurry", which is a mixture of yeast and water used in brewing beer.
There are two steps in the process:
First, the slurry is diluted until the concentration is low enough that it is practical to count cells.
Then a small sample is put on a hemocytometer, which is a specialized microscope slide that holds a fixed amount of liquid on a rectangular grid.
The cells and the grid are visible in a microscope, making it possible to count the cells accurately.
As an example, suppose we start with a yeast slurry with unknown concentration of cells.
Starting with a 1 mL sample, we dilute it by adding it to a shaker with 9 mL of water and mixing well.
Then we dilute it again, and then a third time.
Each dilution reduces the concentration by a factor of 10, so three dilutions reduces the concentration by a factor of 1000.
Then we add the diluted sample to the hemocytometer, which has a capacity of 0.0001 mL spread over a 5x5 grid.
Although the grid has 25 squares, it is standard practice to inspect only a few of them, say 5, and report the total number of cells in the inspected squares.
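Before modeling any of the errors, the nominal arithmetic of this process can be sketched as follows (the starting concentration of 2 billion cells/mL is an assumed example value):

```python
# Nominal (error-free) arithmetic for the dilute-and-count pipeline.
conc = 2e9                  # cells per mL (the unknown we want to estimate)
dilution = (1 / 10) ** 3    # three 1:10 dilutions -> factor of 1000
chamber_vol = 1e-4          # mL held by the hemocytometer
p_counted = 5 / 25          # fraction of grid squares inspected
expected_count = conc * dilution * chamber_vol * p_counted
expected_count              # about 40 cells
```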
This process is simple enough, but at every stage there are sources of error:
During the dilution process, liquids are measured using pipettes that introduce measurement error.
The amount of liquid in the hemocytometer might vary from the specification.
During each step of the sampling process, we might select more or less than the average number of cells, due to random variation.
Davidson-Pilon presents a PyMC model that describes these errors.
I'll start by replicating his model; then we'll adapt it for ABC.
Suppose there are 25 squares in the grid, we count 5 of them, and the total number of cells is 49.
End of explanation
"""
import pymc3 as pm
billion = 1e9
with pm.Model() as model:
yeast_conc = pm.Normal("yeast conc",
mu=2 * billion, sd=0.4 * billion)
shaker1_vol = pm.Normal("shaker1 vol",
mu=9.0, sd=0.05)
shaker2_vol = pm.Normal("shaker2 vol",
mu=9.0, sd=0.05)
shaker3_vol = pm.Normal("shaker3 vol",
mu=9.0, sd=0.05)
"""
Explanation: Here's the first part of the model, which defines the prior distribution of yeast_conc, which is the concentration of yeast we're trying to estimate.
shaker1_vol is the actual volume of water in the first shaker, which should be 9 mL, but might be higher or lower, with standard deviation 0.05 mL.
shaker2_vol and shaker3_vol are the volumes in the second and third shakers.
End of explanation
"""
with model:
yeast_slurry_vol = pm.Normal("yeast slurry vol",
mu=1.0, sd=0.01)
shaker1_to_shaker2_vol = pm.Normal("shaker1 to shaker2",
mu=1.0, sd=0.01)
shaker2_to_shaker3_vol = pm.Normal("shaker2 to shaker3",
mu=1.0, sd=0.01)
"""
Explanation: Now, the sample drawn from the yeast slurry is supposed to be 1 mL, but might be more or less.
And similarly for the sample from the first shaker and from the second shaker.
The following variables model these steps.
End of explanation
"""
with model:
dilution_shaker1 = (yeast_slurry_vol /
(yeast_slurry_vol + shaker1_vol))
dilution_shaker2 = (shaker1_to_shaker2_vol /
(shaker1_to_shaker2_vol + shaker2_vol))
dilution_shaker3 = (shaker2_to_shaker3_vol /
(shaker2_to_shaker3_vol + shaker3_vol))
final_dilution = (dilution_shaker1 *
dilution_shaker2 *
dilution_shaker3)
"""
Explanation: Given the actual volumes in the samples and in the shakers, we can compute the effective dilution, final_dilution, which should be 1000, but might be higher or lower.
End of explanation
"""
with model:
chamber_vol = pm.Gamma("chamber_vol",
mu=0.0001, sd=0.0001 / 20)
"""
Explanation: The next step is to place a sample from the third shaker in the chamber of the hemocytometer.
The capacity of the chamber should be 0.0001 mL, but might vary; to describe this variance, we'll use a gamma distribution, which ensures that we don't generate negative values.
End of explanation
"""
with model:
yeast_in_chamber = pm.Poisson("yeast in chamber",
mu=yeast_conc * final_dilution * chamber_vol)
"""
Explanation: On average, the number of cells in the chamber is the product of the actual concentration, final dilution, and chamber volume.
But the actual number might vary; we'll use a Poisson distribution to model this variance.
End of explanation
"""
with model:
count = pm.Binomial("count",
n=yeast_in_chamber,
p=squares_counted/total_squares,
observed=yeast_counted)
"""
Explanation: Finally, each cell in the chamber will be in one of the squares we count with probability p=squares_counted/total_squares.
So the actual count follows a binomial distribution.
End of explanation
"""
options = dict(return_inferencedata=False)
with model:
trace = pm.sample(1000, **options)
"""
Explanation: With the model specified, we can use sample to generate a sample from the posterior distribution.
End of explanation
"""
posterior_sample = trace['yeast conc'] / billion
cdf_pymc = Cdf.from_seq(posterior_sample)
print(cdf_pymc.mean(), cdf_pymc.credible_interval(0.9))
"""
Explanation: And we can use the sample to estimate the posterior distribution of yeast_conc and compute summary statistics.
End of explanation
"""
with model:
prior_sample = pm.sample_prior_predictive(10000)
"""
Explanation: The posterior mean is about 2.3 billion cells per mL, with a 90% credible interval from 1.8 and 2.7.
So far we've been following in Davidson-Pilon's footsteps.
And for this problem, the solution using MCMC is sufficient.
But it also provides an opportunity to demonstrate ABC.
Cell Counting with ABC
The fundamental idea of ABC is that we use the prior distribution to generate a sample of the parameters, and then simulate the system for each set of parameters in the sample.
In this case, since we already have a PyMC model, we can use sample_prior_predictive to do the sampling and the simulation.
End of explanation
"""
count = prior_sample['count']
print(count.mean())
"""
Explanation: The result is a dictionary that contains samples from the prior distribution of the parameters and the prior predictive distribution of count.
End of explanation
"""
mask = (count == 49)
mask.sum()
"""
Explanation: Now, to generate a sample from the posterior distribution, we'll select only the elements in the prior sample where the output of the simulation, count, matches the observed data, 49.
End of explanation
"""
posterior_sample2 = prior_sample['yeast conc'][mask] / billion
"""
Explanation: We can use mask to select the values of yeast_conc for the simulations that yield the observed data.
End of explanation
"""
cdf_abc = Cdf.from_seq(posterior_sample2)
print(cdf_abc.mean(), cdf_abc.credible_interval(0.9))
"""
Explanation: And we can use the posterior sample to estimate the CDF of the posterior distribution.
End of explanation
"""
cdf_pymc.plot(label='MCMC', ls=':')
cdf_abc.plot(label='ABC')
decorate(xlabel='Yeast concentration (cells/mL)',
ylabel='CDF',
title='Posterior distribution',
xlim=(1.4, 3.4))
"""
Explanation: The posterior mean and credible interval are similar to what we got with MCMC.
Here's what the distributions look like.
End of explanation
"""
n = prior_sample['yeast in chamber']
n.shape
"""
Explanation: The distributions are similar, but the results from ABC are noisier because the sample size is smaller.
When Do We Get to the Approximate Part?
The examples so far are similar to Approximate Bayesian Computation, but neither of them demonstrates all of the elements of ABC.
More generally, ABC is characterized by:
A prior distribution of parameters.
A simulation of the system that generates the data.
A criterion for when we should accept that the output of the simulation matches the data.
The kidney tumor example was atypical because we didn't represent the prior distribution of age explicitly.
Because the simulations generate a joint distribution of age and size, we were able to get the marginal posterior distribution of age directly from the results.
The yeast example is more typical because we represented the distribution of the parameters explicitly.
But we accepted only simulations where the output matches the data exactly.
The result is approximate in the sense that we have a sample from the posterior distribution rather than the posterior distribution itself.
But it is not approximate in the sense of Approximate Bayesian Computation, which typically accepts simulations where the output matches the data only approximately.
To show how that works, I will extend the yeast example with an approximate matching criterion.
In the previous section, we accepted a simulation if the output is precisely 49 and rejected it otherwise.
As a result, we got only a few hundred samples out of 10,000 simulations, so that's not very efficient.
We can make better use of the simulations if we give "partial credit" when the output is close to 49.
But how close? And how much credit?
One way to answer that is to back up to the second-to-last step of the simulation, where we know the number of cells in the chamber, and we use the binomial distribution to generate the final count.
If there are n cells in the chamber, each has a probability p of being counted, depending on whether it falls in one of the squares in the grid that get counted.
We can extract n from the prior sample, like this:
End of explanation
"""
p = squares_counted/total_squares
p
"""
Explanation: And compute p like this:
End of explanation
"""
from scipy.stats import binom
likelihood = binom(n, p).pmf(yeast_counted).flatten()
likelihood.shape
"""
Explanation: Now here's the idea: we'll use the binomial distribution to compute the likelihood of the data, yeast_counted, for each value of n and the fixed value of p.
End of explanation
"""
plt.plot(n*p, likelihood, '.', alpha=0.03, color='C2')
decorate(xlabel='Expected count (number of cells)',
ylabel='Likelihood')
"""
Explanation: When the expected count, n * p, is close to the actual count, likelihood is relatively high; when it is farther away, likelihood is lower.
The following is a scatter plot of these likelihoods versus the expected counts.
End of explanation
"""
qs = prior_sample['yeast conc'] / billion
ps = likelihood
posterior_pmf = Pmf(ps, qs)
"""
Explanation: We can't use these likelihoods to do a Bayesian update because they are incomplete; that is, each likelihood is the probability of the data given n, which is the result of a single simulation.
But we can use them to weight the results of the simulations.
Instead of requiring the output of the simulation to match the data exactly, we'll use the likelihoods to give partial credit when the output is close.
Here's how: I'll construct a Pmf that contains yeast concentrations as quantities and the likelihoods as unnormalized probabilities.
End of explanation
"""
posterior_pmf.sort_index(inplace=True)
posterior_pmf.normalize()
print(posterior_pmf.mean(), posterior_pmf.credible_interval(0.9))
"""
Explanation: In this Pmf, values of yeast_conc that yield outputs close to the data map to higher probabilities.
If we sort the quantities and normalize the probabilities, the result is an estimate of the posterior distribution.
End of explanation
"""
cdf_pymc.plot(label='MCMC', ls=':')
#cdf_abc.plot(label='ABC')
posterior_pmf.make_cdf().plot(label='ABC2')
decorate(xlabel='Yeast concentration (cells/mL)',
ylabel='CDF',
title='Posterior distribution',
xlim=(1.4, 3.4))
"""
Explanation: The posterior mean and credible interval are similar to the values we got from MCMC.
And here's what the posterior distributions look like.
End of explanation
"""
from scipy.stats import nbinom, beta
mu = 30
p = 0.8666666
r = mu * (1-p) / p
prior_n_socks = nbinom(r, 1-p)
prior_n_socks.mean(), prior_n_socks.std()
prior_prop_pair = beta(15, 2)
prior_prop_pair.mean()
qs = np.arange(90)
ps = prior_n_socks.pmf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
pmf.plot(label='prior', drawstyle='steps')
decorate(xlabel='Number of socks',
ylabel='PMF')
from utils import pmf_from_dist
qs = np.linspace(0, 1, 101)
pmf = pmf_from_dist(prior_prop_pair, qs)
pmf.plot(label='prior', color='C1')
decorate(xlabel='Proportion of socks in pairs',
ylabel='PDF')
"""
Explanation: The distributions are similar, but the results from MCMC are a little noisier.
In this example, ABC is more efficient than MCMC, requiring less computation to generate a better estimate of the posterior distribution.
But that's unusual; usually ABC requires a lot of computation.
For that reason, it is generally a method of last resort.
Summary
In this chapter we saw two examples of Approximate Bayesian Computation (ABC), based on simulations of tumor growth and cell counting.
The definitive elements of ABC are:
A prior distribution of parameters.
A simulation of the system that generates the data.
A criterion for when we should accept that the output of the simulation matches the data.
ABC is particularly useful when the system is too complex to model with tools like PyMC.
For example, it might involve a physical simulation based on differential equations.
In that case, each simulation might require substantial computation, and many simulations might be needed to estimate the posterior distribution.
Next, you'll have a chance to practice with one more example.
Exercises
Exercise: This exercise is based on a blog post by Rasmus Bååth, which is motivated by a tweet from Karl Broman, who wrote:
That the first 11 socks in the laundry are distinct suggests that there are a lot of socks.
Suppose you pull 11 socks out of the laundry and find that no two of them make a matched pair. Estimate the number of socks in the laundry.
To solve this problem, we'll use the model Bååth suggests, which is based on these assumptions:
The laundry contains some number of pairs of socks, n_pairs, plus some number of odd (unpaired) socks, n_odds.
The pairs of socks are different from each other and different from the unpaired socks; in other words, the number of socks of each type is either 1 or 2, never more.
We'll use the prior distributions Bååth suggests, which are:
The number of socks follows a negative binomial distribution with mean 30 and standard deviation 15.
The proportion of socks that are paired follows a beta distribution with parameters alpha=15 and beta=2.
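SciPy's nbinom(n, p) has mean n(1-p)/p and variance n(1-p)/p². As a sketch of where the constants in the code above come from (note the code stores p as the complement, 0.8666666, and passes 1-p to nbinom), the mean-30 / sd-15 specification maps to the parameters like this:

```python
mean, sd = 30, 15
var = sd ** 2
p = mean / var              # var / mean = 1 / p  ->  p = 2/15, about 0.133
n = mean * p / (1 - p)      # mean = n * (1 - p) / p  ->  n is about 4.615

# Verify against the negative binomial moment formulas.
check_mean = n * (1 - p) / p
check_var = n * (1 - p) / p ** 2
check_mean, check_var       # approximately (30, 225)
```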
In the notebook for this chapter, I'll define these priors. Then you can simulate the sampling process and use ABC to estimate the posterior distributions.
To get you started, I'll define the priors.
End of explanation
"""
n_socks = prior_n_socks.rvs()
prop_pairs = prior_prop_pair.rvs()
n_socks, prop_pairs
"""
Explanation: We can sample from the prior distributions like this:
End of explanation
"""
n_pairs = np.round(n_socks//2 * prop_pairs)
n_odds = n_socks - n_pairs*2
n_pairs, n_odds
"""
Explanation: And use the values to compute n_pairs and n_odds:
End of explanation
"""
# Solution
n_pairs = 9
n_odds = 5
socks = np.append(np.arange(n_pairs),
np.arange(n_pairs + n_odds))
print(socks)
# Solution
picked_socks = np.random.choice(socks, size=11, replace=False)
picked_socks
# Solution
values, counts = np.unique(picked_socks, return_counts=True)
values
# Solution
counts
# Solution
solo = np.sum(counts==1)
pairs = np.sum(counts==2)
solo, pairs
# Solution
def pick_socks(n_pairs, n_odds, n_pick):
socks = np.append(np.arange(n_pairs),
np.arange(n_pairs + n_odds))
picked_socks = np.random.choice(socks,
size=n_pick,
replace=False)
values, counts = np.unique(picked_socks,
return_counts=True)
pairs = np.sum(counts==2)
odds = np.sum(counts==1)
return pairs, odds
# Solution
pick_socks(n_pairs, n_odds, 11)
# Solution
data = (0, 11)
res = []
for i in range(10000):
n_socks = prior_n_socks.rvs()
if n_socks < 11:
continue
prop_pairs = prior_prop_pair.rvs()
n_pairs = np.round(n_socks//2 * prop_pairs)
n_odds = n_socks - n_pairs*2
result = pick_socks(n_pairs, n_odds, 11)
if result == data:
res.append((n_socks, n_pairs, n_odds))
len(res)
# Solution
columns = ['n_socks', 'n_pairs', 'n_odds']
results = pd.DataFrame(res, columns=columns)
results.head()
# Solution
qs = np.arange(15, 100)
posterior_n_socks = Pmf.from_seq(results['n_socks'])
print(posterior_n_socks.median(),
posterior_n_socks.credible_interval(0.9))
# Solution
posterior_n_socks.plot(label='posterior', drawstyle='steps')
decorate(xlabel='Number of socks',
ylabel='PMF')
"""
Explanation: Now you take it from there.
End of explanation
"""
|
maxalbert/paper-supplement-nanoparticle-sensing | notebooks/fig_9b_dependence_of_frequency_change_on_particle_size.ipynb | mit | import matplotlib.lines as mlines
import matplotlib.pyplot as plt
import pandas as pd
from style_helpers import style_cycle
%matplotlib inline
plt.style.use('style_sheets/custom_style.mplstyle')
"""
Explanation: Fig. 9(b): Dependence of Frequency Change $\Delta f$ on Particle Size
This notebook reproduces Fig. 9(b) in the paper, which shows how the frequency change $\Delta f$ of the first five eigenmodes depends on the particle size, with the separation between the bottom of the particle and the disc surface held constant at d = 30 nm.
End of explanation
"""
df = pd.read_csv('../data/eigenmode_info_data_frame.csv')
df = df.query('(has_particle == True) and (x == 0) and (y == 0) and '
'(d == 30) and (Hz == 8e4) and (Ms_particle == 1e6)')
df = df.sort_values('d_particle')
"""
Explanation: Read the data frame containing the eigenmode data and filter out the parameter values relevant for this plot.
End of explanation
"""
def plot_curve_for_eigenmode(ax, N, df, style_kwargs):
df_filtered = df.query('N == {}'.format(N)).sort_values('d_particle')
d_vals = df_filtered['d_particle']
freq_vals = df_filtered['freq_diff'] * 1e3 # freq in MHz
ax.plot(d_vals, freq_vals, label='N = {}'.format(N), **style_kwargs)
"""
Explanation: Define helper function to plot $\Delta f$ vs. particle size for a single eigenmode.
End of explanation
"""
fig, ax = plt.subplots(figsize=(6, 6))
for N, style_kwargs in zip([1, 2, 3, 4, 5], style_cycle):
plot_curve_for_eigenmode(ax, N, df, style_kwargs)
xmin, xmax = 7.5, 45
ymin, ymax = -50, 800
ax.plot([xmin, xmax], [0, 0], color='black', linestyle='--', linewidth=1)
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.set_xlabel('Particle diameter (nm)')
ax.set_ylabel(r'Frequency change $\Delta\,$f (MHz)')
ax.legend(loc='upper left')
"""
Explanation: Produce the plot for Fig. 9(b).
End of explanation
"""
|
ProjectQ-Framework/ProjectQ | examples/awsbraket.ipynb | apache-2.0 | from projectq import MainEngine
from projectq.backends import AWSBraketBackend
from projectq.ops import Measure, H, C, X, All
"""
Explanation: Running ProjectQ code on AWS Braket service provided devices
Compiling code for AWS Braket Service
In this tutorial we will see how to run code on some of the devices provided by the Amazon AWS Braket service. The AWS Braket devices supported are: the State Vector Simulator 'SV1', the Rigetti device 'Aspen-8' and the IonQ device 'IonQ'
You need to have a valid AWS account, have created an access key/secret key pair, and have activated the Braket service. As part of the activation of the service, a specific S3 bucket and folder associated with the service should be configured.
First we need to do the required imports. That includes the main compiler engine (MainEngine), the backend (AWSBraketBackend in this case) and the operations to be used in the circuit.
End of explanation
"""
creds = {
'AWS_ACCESS_KEY_ID': 'aws_access_key_id',
'AWS_SECRET_KEY': 'aws_secret_key',
} # replace with your Access key and Secret key
s3_folder = ['S3Bucket', 'S3Directory'] # replace with your S3 bucket and directory
device = 'SV1' # replace by the device you want to use
"""
Explanation: Prior to the instantiation of the backend we need to configure the credentials, the S3 storage folder and the device to be used (in the example the State Vector Simulator SV1)
End of explanation
"""
eng = MainEngine(AWSBraketBackend(use_hardware=False,
credentials=creds,
s3_folder=s3_folder,
num_runs=10,
interval=10))
"""
Explanation: Next we instantiate the engine with the AWSBraketBackend including the credentials and S3 configuration. By setting the 'use_hardware' parameter to False we indicate the use of the Simulator. In addition we set the number of times we want to run the circuit and the interval in seconds to ask for the results. For a complete list of parameters and descriptions, please check the documentation.
End of explanation
"""
# Allocate the required qubits
qureg = eng.allocate_qureg(3)
# Create the circuit. In this example a quantum teleportation algorithm that teleports the first qubit to the third one.
H | qureg[0]
H | qureg[1]
C(X) | (qureg[1], qureg[2])
C(X) | (qureg[0], qureg[1])
H | qureg[0]
C(X) | (qureg[1], qureg[2])
# At the end we measure the qubits to get the results; should be all-0 or all-1
All(Measure) | qureg
# And run the circuit
eng.flush()
"""
Explanation: We can now allocate the required qubits and create the circuit to be run. With the last instruction we ask the backend to run the circuit.
End of explanation
"""
# Obtain and print the probabilities of the states
prob_dict = eng.backend.get_probabilities(qureg)
print("Probabilities for each of the results: ", prob_dict)
"""
Explanation: The backend will automatically create the task and generate a unique identifier (the task Arn) that can be used to recover the status of the task and results later on.
Once the circuit is executed the indicated number of times, the results are stored in the S3 folder configured previously and can be recovered to obtain the probabilities of each of the states.
End of explanation
"""
# Set the Task Arn of the job to be retrieved and instantiate the engine with the AWSBraketBackend
task_arn = 'your_task_arn' # replace with the actual TaskArn you want to use
eng1 = MainEngine(AWSBraketBackend(retrieve_execution=task_arn, credentials=creds, num_retries=2, verbose=True))
# Configure the qubits to get the state probabilities
qureg1 = eng1.allocate_qureg(3)
# Ask the backend to retrieve the results
eng1.flush()
# Obtain and print the probabilities of the states
prob_dict1 = eng1.backend.get_probabilities(qureg1)
print("Probabilities ", prob_dict1)
"""
Explanation: Retrieve results from a previous execution
We can retrieve the results later on (from this job or from a previously executed one) using the task Arn provided when it was run. In addition, you have to remember the number of qubits involved in the job and the order you used. The latter is required since we need to set up a mapping for the qubits when retrieving results of a previously executed job.
To retrieve the results we need to configure the backend including the parameter 'retrieve_execution' set to the Task Arn of the job. To be able to get the probabilities of each state we need to configure the qubits and ask the backend to get the results.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
from projectq.libs.hist import histogram
histogram(eng1.backend, qureg1)
plt.show()
"""
Explanation: We can plot a histogram with the probabilities as well.
End of explanation
"""
|
NathanYee/ThinkBayes2 | code/chap02soln.ipynb | gpl-2.0 | from __future__ import print_function, division
% matplotlib inline
from thinkbayes2 import Hist, Pmf, Suite
"""
Explanation: Think Bayes: Chapter 2
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
pmf = Pmf()
for x in [1,2,3,4,5,6]:
pmf[x] = 1
pmf.Print()
"""
Explanation: The Pmf class
I'll start by making a Pmf that represents the outcome of a six-sided die. Initially there are 6 values with equal probability.
End of explanation
"""
pmf.Normalize()
"""
Explanation: To be true probabilities, they have to add up to 1. So we can normalize the Pmf:
End of explanation
"""
pmf.Print()
"""
Explanation: The return value from Normalize is the sum of the probabilities before normalizing.
End of explanation
"""
pmf = Pmf([1,2,3,4,5,6])
pmf.Print()
"""
Explanation: A faster way to make a Pmf is to provide a sequence of values. The constructor adds the values to the Pmf and then normalizes:
End of explanation
"""
pmf.Prob(1)
"""
Explanation: To extract a value from a Pmf, you can use Prob
End of explanation
"""
pmf[1]
"""
Explanation: Or you can use the bracket operator. Either way, if you ask for the probability of something that's not in the Pmf, the result is 0.
End of explanation
"""
pmf = Pmf()
pmf['Bowl 1'] = 0.5
pmf['Bowl 2'] = 0.5
pmf.Print()
"""
Explanation: The cookie problem
Here's a Pmf that represents the prior distribution.
End of explanation
"""
pmf.Mult('Bowl 1', 0.75)
pmf.Mult('Bowl 2', 0.5)
pmf.Print()
"""
Explanation: And we can update it using Mult
End of explanation
"""
pmf = Pmf(['Bowl 1', 'Bowl 2'])
pmf.Print()
"""
Explanation: Or here's the shorter way to construct the prior.
End of explanation
"""
pmf['Bowl 1'] *= 0.75
pmf['Bowl 2'] *= 0.5
pmf.Print()
"""
Explanation: And we can use *= for the update.
End of explanation
"""
pmf.Normalize()
pmf.Print()
"""
Explanation: Either way, we have to normalize the posterior distribution.
End of explanation
"""
class Cookie(Pmf):
    """A map from string bowl ID to probability."""

    def __init__(self, hypos):
        """Initialize self.

        hypos: sequence of string bowl IDs
        """
        Pmf.__init__(self)
        for hypo in hypos:
            self.Set(hypo, 1)
        self.Normalize()

    def Update(self, data):
        """Updates the PMF with new data.

        data: string cookie type
        """
        for hypo in self.Values():
            like = self.Likelihood(data, hypo)
            self.Mult(hypo, like)
        self.Normalize()

    mixes = {
        'Bowl 1': dict(vanilla=0.75, chocolate=0.25),
        'Bowl 2': dict(vanilla=0.5, chocolate=0.5),
    }

    def Likelihood(self, data, hypo):
        """The likelihood of the data under the hypothesis.

        data: string cookie type
        hypo: string bowl ID
        """
        mix = self.mixes[hypo]
        like = mix[data]
        return like
"""
Explanation: The Bayesian framework
Here's the same computation encapsulated in a class.
End of explanation
"""
pmf = Cookie(['Bowl 1', 'Bowl 2'])
pmf.Update('vanilla')
pmf.Print()
"""
Explanation: We can confirm that we get the same result.
End of explanation
"""
dataset = ['vanilla', 'chocolate', 'vanilla']
for data in dataset:
    pmf.Update(data)
pmf.Print()
"""
Explanation: But this implementation is more general; it can handle any sequence of data.
End of explanation
"""
class Monty(Pmf):
    """Map from string location of car to probability"""

    def __init__(self, hypos):
        """Initialize the distribution.

        hypos: sequence of hypotheses
        """
        Pmf.__init__(self)
        for hypo in hypos:
            self.Set(hypo, 1)
        self.Normalize()

    def Update(self, data):
        """Updates each hypothesis based on the data.

        data: any representation of the data
        """
        for hypo in self.Values():
            like = self.Likelihood(data, hypo)
            self.Mult(hypo, like)
        self.Normalize()

    def Likelihood(self, data, hypo):
        """Compute the likelihood of the data under the hypothesis.

        hypo: string name of the door where the prize is
        data: string name of the door Monty opened
        """
        if hypo == data:
            return 0
        elif hypo == 'A':
            return 0.5
        else:
            return 1
"""
Explanation: The Monty Hall problem
The Monty Hall problem might be the most contentious question in
the history of probability. The scenario is simple, but the correct
answer is so counterintuitive that many people just can't accept
it, and many smart people have embarrassed themselves not just by
getting it wrong but by arguing the wrong side, aggressively,
in public.
Monty Hall was the original host of the game show Let's Make a
Deal. The Monty Hall problem is based on one of the regular
games on the show. If you are on the show, here's what happens:
Monty shows you three closed doors and tells you that there is a
prize behind each door: one prize is a car, the other two are less
valuable prizes like peanut butter and fake finger nails. The
prizes are arranged at random.
The object of the game is to guess which door has the car. If
you guess right, you get to keep the car.
You pick a door, which we will call Door A. We'll call the
other doors B and C.
Before opening the door you chose, Monty increases the
suspense by opening either Door B or C, whichever does not
have the car. (If the car is actually behind Door A, Monty can
safely open B or C, so he chooses one at random.)
Then Monty offers you the option to stick with your original
choice or switch to the one remaining unopened door.
The question is, should you "stick" or "switch" or does it
make no difference?
Most people have the strong intuition that it makes no difference.
There are two doors left, they reason, so the chance that the car
is behind Door A is 50%.
But that is wrong. In fact, the chance of winning if you stick
with Door A is only 1/3; if you switch, your chances are 2/3.
Here's a class that solves the Monty Hall problem.
End of explanation
"""
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
"""
Explanation: And here's how we use it.
End of explanation
"""
class Monty(Suite):

    def Likelihood(self, data, hypo):
        if hypo == data:
            return 0
        elif hypo == 'A':
            return 0.5
        else:
            return 1
"""
Explanation: The Suite class
Most Bayesian updates look pretty much the same, especially the Update method. So we can encapsulate the framework in a class, Suite, and create new classes that extend it.
Child classes of Suite inherit Update and provide Likelihood. So here's the short version of Monty
End of explanation
"""
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
"""
Explanation: And it works.
End of explanation
"""
class M_and_M(Suite):
    """Map from hypothesis (A or B) to probability."""

    mix94 = dict(brown=30,
                 yellow=20,
                 red=20,
                 green=10,
                 orange=10,
                 tan=10,
                 blue=0)

    mix96 = dict(blue=24,
                 green=20,
                 orange=16,
                 yellow=14,
                 red=13,
                 brown=13,
                 tan=0)

    hypoA = dict(bag1=mix94, bag2=mix96)
    hypoB = dict(bag1=mix96, bag2=mix94)

    hypotheses = dict(A=hypoA, B=hypoB)

    def Likelihood(self, data, hypo):
        """Computes the likelihood of the data under the hypothesis.

        hypo: string hypothesis (A or B)
        data: tuple of string bag, string color
        """
        bag, color = data
        mix = self.hypotheses[hypo][bag]
        like = mix[color]
        return like
"""
Explanation: The M&M problem
M&Ms are small candy-coated chocolates that come in a variety of
colors. Mars, Inc., which makes M&Ms, changes the mixture of
colors from time to time.
In 1995, they introduced blue M&Ms. Before then, the color mix in
a bag of plain M&Ms was 30% Brown, 20% Yellow, 20% Red, 10%
Green, 10% Orange, 10% Tan. Afterward it was 24% Blue , 20%
Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown.
Suppose a friend of mine has two bags of M&Ms, and he tells me
that one is from 1994 and one from 1996. He won't tell me which is
which, but he gives me one M&M from each bag. One is yellow and
one is green. What is the probability that the yellow one came
from the 1994 bag?
Here's a solution:
End of explanation
"""
suite = M_and_M('AB')
suite.Update(('bag1', 'yellow'))
suite.Update(('bag2', 'green'))
suite.Print()
"""
Explanation: And here's an update:
End of explanation
"""
suite.Update(('bag1', 'blue'))
suite.Print()
"""
Explanation: Exercise: Suppose you draw another M&M from bag1 and it's blue. What can you conclude? Run the update to confirm your intuition.
End of explanation
"""
# Solution
# suite.Update(('bag2', 'blue'))
# throws ValueError: Normalize: total probability is zero.
"""
Explanation: Exercise: Now suppose you draw an M&M from bag2 and it's blue. What does that mean? Run the update to see what happens.
End of explanation
"""
# Solution
# Here's a Pmf with the prior probability that Elvis
# was an identical twin (taking the fact that he was a
# twin as background information)
pmf = Pmf(dict(fraternal=0.92, identical=0.08))
# Solution
# And here's the update. The data is that the other twin
# was also male, which has likelihood 1 if they were identical
# and only 0.5 if they were fraternal.
pmf['fraternal'] *= 0.5
pmf['identical'] *= 1
pmf.Normalize()
pmf.Print()
"""
Explanation: Exercises
Exercise: This one is from one of my favorite books, David MacKay's "Information Theory, Inference, and Learning Algorithms":
"Elvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin?"
To answer this one, you need some background information: According to the Wikipedia article on twins: ``Twins are estimated to be approximately 1.9% of the world population, with monozygotic twins making up 0.2% of the total---and 8% of all twins.''
End of explanation
"""
from sympy import symbols
p = symbols('p')
# Solution
# Here's the solution if Monty opens B.
pmf = Pmf('ABC')
pmf['A'] *= p
pmf['B'] *= 0
pmf['C'] *= 1
pmf.Normalize()
pmf['A'].simplify()
# Solution
# When p=0.5, the result is what we saw before
pmf['A'].evalf(subs={p:0.5})
# Solution
# When p=0.0, we know for sure that the prize is behind C
pmf['C'].evalf(subs={p:0.0})
# Solution
# And here's the solution if Monty opens C.
pmf = Pmf('ABC')
pmf['A'] *= 1-p
pmf['B'] *= 1
pmf['C'] *= 0
pmf.Normalize()
pmf['A'].simplify()
"""
Explanation: Exercise: Let's consider a more general version of the Monty Hall problem where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B with probability p and C with probability 1-p. If you choose A and Monty opens B, what is the probability that the car is behind A, in terms of p? What if Monty opens C?
Hint: you might want to use SymPy to do the algebra for you.
End of explanation
"""
# Solution
# In this case, we can't compute the likelihoods individually;
# we only know the ratio of one to the other. But that's enough.
# Two ways to proceed: we could include a variable in the computation,
# and we would see it drop out.
# Or we can use "unnormalized likelihoods", for want of a better term.
# Here's my solution.
pmf = Pmf(dict(smoker=15, nonsmoker=85))
pmf['smoker'] *= 13
pmf['nonsmoker'] *= 1
pmf.Normalize()
pmf.Print()
"""
Explanation: Exercise: According to the CDC, ``Compared to nonsmokers, men who smoke are about 23 times more likely to develop lung cancer and women who smoke are about 13 times more likely.'' Also, among adults in the U.S. in 2014:
Nearly 19 of every 100 adult men (18.8%) smoked cigarettes
Nearly 15 of every 100 adult women (14.8%) smoked cigarettes
If you learn that a woman has been diagnosed with lung cancer, and you know nothing else about her, what is the probability that she is a smoker?
End of explanation
"""
# Solution
# We'll need an object to keep track of the number of cookies in each bowl.
# I use a Hist object, defined in thinkbayes2:
bowl1 = Hist(dict(vanilla=30, chocolate=10))
bowl2 = Hist(dict(vanilla=20, chocolate=20))
bowl1.Print()
# Solution
# Now I'll make a Pmf that contains the two bowls, giving them equal probability.
pmf = Pmf([bowl1, bowl2])
pmf.Print()
# Solution
# Here's a likelihood function that takes `hypo`, which is one of
# the Hist objects that represents a bowl, and `data`, which is either
# 'vanilla' or 'chocolate'.
# `likelihood` computes the likelihood of the data under the hypothesis,
# and as a side effect, it removes one of the cookies from `hypo`
def likelihood(hypo, data):
    like = hypo[data] / hypo.Total()
    if like:
        hypo[data] -= 1
    return like
# Solution
# Now for the update. We have to loop through the hypotheses and
# compute the likelihood of the data under each hypothesis.
def update(pmf, data):
    for hypo in pmf:
        pmf[hypo] *= likelihood(hypo, data)
    return pmf.Normalize()
# Solution
# Here's the first update. The posterior probabilities are the
# same as what we got before, but notice that the number of cookies
# in each Hist has been updated.
update(pmf, 'vanilla')
pmf.Print()
# Solution
# So when we update again with a chocolate cookies, we get different
# likelihoods, and different posteriors.
update(pmf, 'chocolate')
pmf.Print()
# Solution
# If we get 10 more chocolate cookies, that eliminates Bowl 1 completely
for i in range(10):
    update(pmf, 'chocolate')
print(pmf[bowl1])
"""
Explanation: Exercise: In Section 2.3 I said that the solution to the cookie problem generalizes to the case where we draw multiple cookies with replacement.
But in the more likely scenario where we eat the cookies we draw, the likelihood of each draw depends on the previous draws.
Modify the solution in this chapter to handle selection without replacement. Hint: add instance variables to Cookie to represent the hypothetical state of the bowls, and modify Likelihood accordingly. You might want to define a Bowl object.
End of explanation
"""
# Source: mroberge/hydrofunctions, docs/notebooks/Selecting_Sites.ipynb (MIT license)
# First things first
import hydrofunctions as hf
"""
Explanation: Selecting Sites By Location
The National Water Information System (NWIS) makes data available for approximately 1.9 Million different locations in the US and Territories. Finding the data you need within this collection can be a challenge!
There are four methods for selecting sites by location:
Request data for a site or a list of sites
Request data for all sites in a state
Request data for all sites in a county or list of counties
Request data for all sites inside of bounding box
We'll give examples for each of these methods below.
The following examples request sites without specifying a time or parameter of interest. When time is not specified, the NWIS will return only the most recent reading for the site, even if that reading is fifty years old! If we don't specify a parameter of interest, NWIS will return all of the parameters measured at that site.
End of explanation
"""
hf.draw_map()
"""
Explanation: Requesting data for a site or a list of sites
Most USGS site names are between 8-11 digits long. You can use the draw_map() function
to create an interactive map with 8,000 active stream gages from the Gages-II dataset.
End of explanation
"""
Beetree = hf.NWIS('01581960')
Beetree.df()
"""
Explanation: Select a single site
End of explanation
"""
sites = ['01580000', '01585500', '01589330']
Baltimore = hf.NWIS(sites)
Baltimore.df()
"""
Explanation: Select a list of sites
End of explanation
"""
# Request data for all stations in Puerto Rico.
puerto_rico = hf.NWIS(stateCd='PR')
# List the names for all of the sites in PR
puerto_rico
"""
Explanation: Request data by state or territory
Use the two-letter state postal code to retrieve all of the stations
inside of a state. You can only request one state at a time. Lists are not accepted.
End of explanation
"""
# Mills, Iowa: 19129; Maui, Hawaii: 15009
counties = hf.NWIS(countyCd = ['19129', '15009'])
counties
"""
Explanation: Request data by county or list of counties
Use the five digit FIPS code for each county.
End of explanation
"""
# Request multiple sites using a bounding box
test = hf.NWIS(bBox=[-105.430, 39.655, -104, 39.863])
test
"""
Explanation: Request data using a bounding box
The coordinates for the bounding box should be in decimal degrees, with negative values for Western and Southern hemispheres.
Give the coordinates counter clockwise: West, South, East, North
End of explanation
"""
# Source: keir-rex/zipline, docs/notebooks/tutorial.ipynb (Apache-2.0 license)
!tail ../zipline/examples/buyapple.py
"""
Explanation: Zipline beginner tutorial
Basics
Zipline is an open-source algorithmic trading simulator written in Python.
The source can be found at: https://github.com/quantopian/zipline
Some benefits include:
Realistic: slippage, transaction costs, order delays.
Stream-based: Process each event individually, avoids look-ahead bias.
Batteries included: Common transforms (moving average) as well as common risk calculations (Sharpe).
Developed and continuously updated by Quantopian which provides an easy-to-use web-interface to Zipline, 10 years of minute-resolution historical US stock data, and live-trading capabilities. This tutorial is directed at users wishing to use Zipline without using Quantopian. If you instead want to get started on Quantopian, see here.
This tutorial assumes that you have zipline correctly installed, see the installation instructions if you haven't set up zipline yet.
Every zipline algorithm consists of two functions you have to define:
* initialize(context)
* handle_data(context, data)
Before the start of the algorithm, zipline calls the initialize() function and passes in a context variable. context is a persistent namespace for you to store variables you need to access from one algorithm iteration to the next.
After the algorithm has been initialized, zipline calls the handle_data() function once for each event. At every call, it passes the same context variable and an event-frame called data containing the current trading bar with open, high, low, and close (OHLC) prices as well as volume for each stock in your universe. For more information on these functions, see the relevant part of the Quantopian docs.
My first algorithm
Let's take a look at a very simple algorithm from the examples directory, buyapple.py:
End of explanation
"""
!run_algo.py --help
"""
Explanation: As you can see, we first have to import some functions we would like to use. All functions commonly used in your algorithm can be found in zipline.api. Here we are using order() which takes two arguments -- a security object, and a number specifying how many stocks you would like to order (if negative, order() will sell/short stocks). In this case we want to order 10 shares of Apple at each iteration. For more documentation on order(), see the Quantopian docs.
You don't have to use the symbol() function and could just pass in AAPL directly but it is good practice as this way your code will be Quantopian compatible.
Finally, the record() function allows you to save the value of a variable at each iteration. You provide it with a name for the variable together with the variable itself: varname=var. After the algorithm has finished running you will have access to each variable value you tracked with record() under the name you provided (we will see this further below). You also see how we can access the current price data of the AAPL stock in the data event frame (for more information see here).
Running the algorithm
To now test this algorithm on financial data, zipline provides two interfaces. A command-line interface and an IPython Notebook interface.
Command line interface
After you installed zipline you should be able to execute the following from your command line (e.g. cmd.exe on Windows, or the Terminal app on OSX):
End of explanation
"""
!run_algo.py -f ../zipline/examples/buyapple.py --start 2000-1-1 --end 2014-1-1 --symbols AAPL -o buyapple_out.pickle
"""
Explanation: Note that you have to omit the preceding '!' when you call run_algo.py, this is only required by the IPython Notebook in which this tutorial was written.
As you can see there are a couple of flags that specify where to find your algorithm (-f) as well as parameters specifying which stock data to load from Yahoo! finance (--symbols) and the time-range (--start and --end). Finally, you'll want to save the performance metrics of your algorithm so that you can analyze how it performed. This is done via the --output flag and will cause it to write the performance DataFrame in the pickle Python file format. Note that you can also define a configuration file with these parameters that you can then conveniently pass to the -c option so that you don't have to supply the command line args all the time (see the .conf files in the examples directory).
Thus, to execute our algorithm from above and save the results to buyapple_out.pickle we would call run_algo.py as follows:
End of explanation
"""
import pandas as pd
perf = pd.read_pickle('buyapple_out.pickle') # read in perf DataFrame
perf.head()
"""
Explanation: run_algo.py first outputs the algorithm contents. It then fetches historical price and volume data of Apple from Yahoo! finance in the desired time range, calls the initialize() function, and then streams the historical stock price day-by-day through handle_data(). After each call to handle_data() we instruct zipline to order 10 stocks of AAPL. After the call of the order() function, zipline enters the ordered stock and amount in the order book. After the handle_data() function has finished, zipline looks for any open orders and tries to fill them. If the trading volume is high enough for this stock, the order is executed after adding the commission and applying the slippage model which models the influence of your order on the stock price, so your algorithm will be charged more than just the stock price * 10. (Note, that you can also change the commission and slippage model that zipline uses, see the Quantopian docs for more information).
Note that there is also an analyze() function printed. run_algo.py will look for a file ending in _analyze.py with the same base name as the algorithm (so buyapple_analyze.py), or for an analyze() function directly in the script. If an analyze() function is found it will be called after the simulation has finished and passed the performance DataFrame. (The reason for allowing specification of an analyze() function in a separate file is that this way buyapple.py remains a valid Quantopian algorithm that you can copy & paste to the platform.)
Let's take a quick look at the performance DataFrame. For this, we use pandas from inside the IPython Notebook and print the first few rows. Note that zipline makes heavy use of pandas, especially for data input and output, so it's worth spending some time to learn it.
End of explanation
"""
%pylab inline
figsize(12, 12)
import matplotlib.pyplot as plt
ax1 = plt.subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('portfolio value')
ax2 = plt.subplot(212, sharex=ax1)
perf.AAPL.plot(ax=ax2)
ax2.set_ylabel('AAPL stock price')
"""
Explanation: As you can see, there is a row for each trading day, starting on the first business day of 2000. In the columns you can find various information about the state of your algorithm. The very first column AAPL was placed there by the record() function mentioned earlier and allows us to plot the price of apple. For example, we could easily examine now how our portfolio value changed over time compared to the AAPL stock price.
End of explanation
"""
import zipline
%%zipline --start 2000-1-1 --end 2014-1-1 --symbols AAPL -o perf_ipython
from zipline.api import symbol, order, record
def initialize(context):
    pass

def handle_data(context, data):
    order(symbol('AAPL'), 10)
    record(AAPL=data[symbol('AAPL')].price)
"""
Explanation: As you can see, our algorithm performance as assessed by the portfolio_value closely matches that of the AAPL stock price. This is not surprising as our algorithm only bought AAPL every chance it got.
IPython Notebook
The IPython Notebook is a very powerful browser-based interface to a Python interpreter (this tutorial was written in it). As it is already the de-facto interface for most quantitative researchers zipline provides an easy way to run your algorithm inside the Notebook without requiring you to use the CLI.
To use it you have to write your algorithm in a cell and let zipline know that it is supposed to run this algorithm. This is done via the %%zipline IPython magic command that is available after you import zipline from within the IPython Notebook. This magic takes the same arguments as the command line interface described above. Thus to run the algorithm from above with the same parameters we just have to execute the following cell after importing zipline to register the magic.
End of explanation
"""
perf_ipython.head()
"""
Explanation: Note that we did not have to specify an input file as above since the magic will use the contents of the cell and look for your algorithm functions there. Also, instead of defining an output file we are specifying a variable name with -o that will be created in the name space and contain the performance DataFrame we looked at above.
End of explanation
"""
import pytz
from datetime import datetime
from zipline.algorithm import TradingAlgorithm
from zipline.utils.factory import load_bars_from_yahoo
# Load data manually from Yahoo! finance
start = datetime(2000, 1, 1, 0, 0, 0, 0, pytz.utc)
end = datetime(2012, 1, 1, 0, 0, 0, 0, pytz.utc)
data = load_bars_from_yahoo(stocks=['AAPL'], start=start,
end=end)
# Define algorithm
from zipline.api import order, record, symbol

def initialize(context):
    pass

def handle_data(context, data):
    order(symbol('AAPL'), 10)
    record(AAPL=data[symbol('AAPL')].price)
# Create algorithm object passing in initialize and
# handle_data functions
algo_obj = TradingAlgorithm(initialize=initialize,
                            handle_data=handle_data)
# Run algorithm
perf_manual = algo_obj.run(data)
"""
Explanation: Manual (advanced)
If you are happy with either way above you can safely skip this passage. To provide a closer look at how zipline actually works it is instructive to see how we run an algorithm without any of the interfaces demonstrated above which hide the actual zipline API.
End of explanation
"""
%%zipline --start 2000-1-1 --end 2014-1-1 --symbols AAPL -o perf_dma
from zipline.api import order_target, record, symbol, history, add_history
import numpy as np
def initialize(context):
    # Register 2 histories that track daily prices,
    # one with a 100 window and one with a 300 day window
    add_history(100, '1d', 'price')
    add_history(300, '1d', 'price')
    context.i = 0

def handle_data(context, data):
    # Skip first 300 days to get full windows
    context.i += 1
    if context.i < 300:
        return

    # Compute averages
    # history() has to be called with the same params
    # from above and returns a pandas dataframe.
    short_mavg = history(100, '1d', 'price').mean()
    long_mavg = history(300, '1d', 'price').mean()

    # Trading logic
    if short_mavg[0] > long_mavg[0]:
        # order_target orders as many shares as needed to
        # achieve the desired number of shares.
        order_target(symbol('AAPL'), 100)
    elif short_mavg[0] < long_mavg[0]:
        order_target(symbol('AAPL'), 0)

    # Save values for later inspection
    record(AAPL=data[symbol('AAPL')].price,
           short_mavg=short_mavg[0],
           long_mavg=long_mavg[0])

def analyze(context, perf):
    fig = plt.figure()
    ax1 = fig.add_subplot(211)
    perf.portfolio_value.plot(ax=ax1)
    ax1.set_ylabel('portfolio value in $')

    ax2 = fig.add_subplot(212)
    perf['AAPL'].plot(ax=ax2)
    perf[['short_mavg', 'long_mavg']].plot(ax=ax2)

    perf_trans = perf.ix[[t != [] for t in perf.transactions]]
    buys = perf_trans.ix[[t[0]['amount'] > 0 for t in perf_trans.transactions]]
    sells = perf_trans.ix[
        [t[0]['amount'] < 0 for t in perf_trans.transactions]]
    ax2.plot(buys.index, perf.short_mavg.ix[buys.index],
             '^', markersize=10, color='m')
    ax2.plot(sells.index, perf.short_mavg.ix[sells.index],
             'v', markersize=10, color='k')
    ax2.set_ylabel('price in $')
    plt.legend(loc=0)
    plt.show()
"""
Explanation: As you can see, we again define the functions as above but we manually pass them to the TradingAlgorithm class which is the main zipline class for running algorithms. We also manually load the data using load_bars_from_yahoo() and pass it to the TradingAlgorithm.run() method which kicks off the backtest simulation.
Access to previous prices using history
Working example: Dual Moving Average Cross-Over
The Dual Moving Average (DMA) is a classic momentum strategy. It's probably not used by any serious trader anymore but is still very instructive. The basic idea is that we compute two rolling or moving averages (mavg) -- one with a longer window that is supposed to capture long-term trends and one shorter window that is supposed to capture short-term trends. Once the short-mavg crosses the long-mavg from below we assume that the stock price has upwards momentum and long the stock. If the short-mavg crosses from above we exit the positions as we assume the stock to go down further.
As we need to have access to previous prices to implement this strategy we need a new concept: History
history() is a convenience function that keeps a rolling window of data for you. The first argument is the number of bars you want to collect, the second argument is the unit (either '1d' for '1m' but note that you need to have minute-level data for using 1m). For a more detailed description history()'s features, see the Quantopian docs. While you can directly use the history() function on Quantopian, in zipline you have to register each history container you want to use with add_history() and pass it the same arguments as the history function below. Lets look at the strategy which should make this clear:
End of explanation
"""
# Source: phoebe-project/phoebe2-docs, 2.1/tutorials/20_21_xyz_uvw.ipynb (GPL-3.0 license)
!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: 2.0 - 2.1 Migration: xyz vs uvw coordinates
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
b = phoebe.default_binary()
b.add_dataset('orb', times=[0])
b.add_dataset('mesh', times=[0])
b.run_compute()
"""
Explanation: In this tutorial we will review the changes in the coordinate conventions used for plane-of-sky vs Roche coordinates, which applies to both orbit and mesh datasets.
End of explanation
"""
print(b.filter(context='model', kind='orb').qualifiers)
"""
Explanation: In PHOEBE 2.0, the orbit dataset had qualifiers x, y, z, vxs, vys, vzs which corresponded to the plane-of-sky coordinates (with z along the line-of-sight). In PHOEBE 2.1, these plane-of-sky coordinates are now denoted by u, v, w - with w along the line-of-sight and with the corresponding velocities: vus, vvs, vws.
End of explanation
"""
print(b.filter(context='model', kind='mesh').qualifiers)
"""
Explanation: In PHOEBE 2.0, the protomesh was exposed with x, y, z signifying Roche coordinates, but the mesh dataset (or pbmesh) used x, y, z for plane-of-sky coordinates. In PHOEBE 2.1 this ambiguity is gone: 'xyz' always signifies Roche coordinates and 'uvw' always plane-of-sky. Note that the concepts of the protomesh and pbmesh have also been removed and replaced with a more flexible mesh dataset.
End of explanation
"""
# Source: mne-tools/mne-tools.github.io, 0.18/_downloads/0f794e75f963d5793938890d6f3d2513/plot_receptive_field_mtrf.ipynb (BSD-3-Clause license)
# Authors: Chris Holdgraf <choldgraf@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
# Nicolas Barascud <nicolas.barascud@ens.fr>
#
# License: BSD (3-clause)
# sphinx_gallery_thumbnail_number = 3
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from os.path import join
import mne
from mne.decoding import ReceptiveField
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
"""
Explanation: Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al.'s mTRF toolbox in
MATLAB [1]_. We will show how the :class:`mne.decoding.ReceptiveField` class
can perform a similar function along with scikit-learn. We will first fit a
linear encoding model using the continuously-varying speech envelope to predict
activity of a 128 channel EEG system. Then, we will take the reverse approach
and try to predict the speech envelope from the EEG (known in the literature
as a decoding model, or simply stimulus reconstruction).
References
.. [1] Crosse, M. J., Di Liberto, G. M., Bednar, A. & Lalor, E. C. (2016).
The Multivariate Temporal Response Function (mTRF) Toolbox:
A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli.
Frontiers in Human Neuroscience 10, 604. doi:10.3389/fnhum.2016.00604
.. [2] Haufe, S., Meinecke, F., Goergen, K., Daehne, S., Haynes, J.-D.,
Blankertz, B., & Biessmann, F. (2014). On the interpretation of weight
vectors of linear models in multivariate neuroimaging. NeuroImage, 87,
96-110. doi:10.1016/j.neuroimage.2013.10.067
End of explanation
"""
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, 'speech_data.mat'))
raw = data['EEG'].T
speech = data['envelope'].T
sfreq = float(data['Fs'])
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, npad='auto')
raw = mne.filter.resample(raw, down=decim, npad='auto')
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.read_montage('biosemi128')
montage.selection = montage.selection[:128]
info = mne.create_info(montage.ch_names[:128], sfreq, 'eeg', montage=montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots()
lns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)
ln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
mne.viz.tight_layout()
"""
Explanation: Load the data from the publication
First we will load the data collected in [1]_. In this experiment subjects
listened to natural speech. Raw EEG and the speech stimulus are provided.
We will load these below, downsampling the data in order to speed up
computation since we know that our features are primarily low-frequency in
nature. Then we'll visualize both the EEG and speech envelope.
End of explanation
"""
# Define the delays that we will use in the receptive field
tmin, tmax = -.2, .4
# Initialize the model
rf = ReceptiveField(tmin, tmax, sfreq, feature_names=['envelope'],
estimator=1., scoring='corrcoef')
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:] # Outputs for the model
Y = Y.T
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
rf.fit(speech[train], Y[train])
scores[ii] = rf.score(speech[test], Y[test])
# coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)
# Plot mean prediction scores across all channels
fig, ax = plt.subplots()
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls='--', color='r')
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
mne.viz.tight_layout()
"""
Explanation: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
End of explanation
"""
# Plot mean coefficients across all time delays / channels (see Fig 1 in [1])
time_plot = 0.180 # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8))
max_coef = mean_coefs.max()
ax.pcolormesh(times, ix_chs, mean_coefs, cmap='RdBu_r',
vmin=-max_coef, vmax=max_coef, shading='gouraud')
ax.axvline(time_plot, ls='--', color='k', lw=2)
ax.set(xlabel='Delay (s)', ylabel='Channel', title="Mean Model\nCoefficients",
xlim=times[[0, -1]], ylim=[len(ix_chs) - 1, 0],
xticks=np.arange(tmin, tmax + .2, .2))
plt.setp(ax.get_xticklabels(), rotation=45)
mne.viz.tight_layout()
# Make a topographic map of coefficients for a given delay (see Fig 2C in [1])
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots()
mne.viz.plot_topomap(mean_coefs[:, ix_plot], pos=info, axes=ax, show=False,
vmin=-max_coef, vmax=max_coef)
ax.set(title="Topomap of model coefficients\nfor delay %s" % time_plot)
mne.viz.tight_layout()
"""
Explanation: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from [1]_.
End of explanation
"""
# We use the same lags as in [1]. Negative lags now index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -.2, 0.
# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section). We'll use a ridge regression estimator with an
# alpha value similar to [1].
sr = ReceptiveField(tmin, tmax, sfreq, feature_names=raw.ch_names,
estimator=1e4, scoring='corrcoef', patterns=True)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
sr.fit(Y[train], speech[train])
scores[ii] = sr.score(Y[test], speech[test])[0]
# coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
coefs[ii] = sr.coef_[0, :, :]
patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
"""
Explanation: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the
:class:mne.decoding.ReceptiveField class as we try to predict the stimulus
activity from the EEG data. This is known in the literature as a decoding, or
stimulus reconstruction model [1]_. A decoding model aims to find the
relationship between the speech signal and a time-delayed version of the EEG.
This can be useful as we exploit all of the available neural data in a
multivariate context, compared to the encoding case which treats each M/EEG
channel as an independent feature. Therefore, decoding models might provide a
better quality of fit (at the expense of not controlling for stimulus
covariance), especially for low SNR stimuli such as speech.
End of explanation
"""
y_pred = sr.predict(Y[test])
time = np.linspace(0, 5., int(5 * sfreq))
fig, ax = plt.subplots(figsize=(8, 4))
ln_env = ax.plot(time, speech[test][sr.valid_samples_][:int(5 * sfreq)],
                 color='grey', lw=2, ls='--')
ln_rec = ax.plot(time, y_pred[sr.valid_samples_][:int(5 * sfreq)], color='r', lw=2)
ax.legend([ln_env[0], ln_rec[0]], ['Envelope', 'Reconstruction'], frameon=False)
ax.set(title="Stimulus reconstruction")
ax.set_xlabel('Time (s)')
mne.viz.tight_layout()
"""
Explanation: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
End of explanation
"""
time_plot = (-.140, -.125) # To average between two timepoints.
ix_plot = np.arange(np.argmin(np.abs(time_plot[0] - times)),
np.argmin(np.abs(time_plot[1] - times)))
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(np.mean(mean_coefs[:, ix_plot], axis=1),
pos=info, axes=ax[0], show=False,
vmin=-max_coef, vmax=max_coef)
ax[0].set(title="Model coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.plot_topomap(np.mean(mean_patterns[:, ix_plot], axis=1),
pos=info, axes=ax[1],
show=False, vmin=-max_patterns, vmax=max_patterns)
ax[1].set(title="Inverse-transformed coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.tight_layout()
plt.show()
"""
Explanation: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5 from [1]_. The
decoding model weights reflect the channels that contribute most toward
reconstructing the stimulus signal, but are not directly interpretable in a
neurophysiological sense. Here we also look at the coefficients obtained
via an inversion procedure [2]_, which have a more straightforward
interpretation as their value (and sign) directly relates to the stimulus
signal's strength (and effect direction).
End of explanation
"""
GoogleCloudPlatform/ai-platform-samples | notebooks/samples/aihub/xgboost_regression/xgboost_regression.ipynb | apache-2.0

PROJECT_ID = "[your-project-id]" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
"""
Explanation: By deploying or using this software you agree to comply with the AI Hub Terms of Service and the Google APIs Terms of Service. To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
Overview
This notebook provides an example workflow of using the Distributed XGBoost ML container for training a regression ML model.
Dataset
The notebook uses the Boston housing price regression dataset. It contains 506 observations, each with 13 features describing a house in Boston and a corresponding house price, stored in a 506x14 table.
Objective
The goal of this notebook is to go through a common training workflow:
- Create a dataset
- Train an ML model using the AI Platform Training service
- Identify if the model was trained successfully by looking at the generated "Run Report"
- Deploy the model for serving using the AI Platform Prediction service
- Use the endpoint for online predictions
- Interactively inspect the deployed ML model with the What-If Tool
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
Cloud AI Platform
Cloud Storage
Learn about Cloud AI Platform
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or AI Platform Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install
Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs and Compute Engine APIs.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
Machine Learning Engine > AI Platform Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
REGION = 'us-central1' #@param {type:"string"}
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
You need to have a "workspace" bucket that will hold the dataset and the output from the ML Container. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. You may not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
! pip install witwidget
"""
Explanation: PIP Install Packages and dependencies
End of explanation
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import time
import pandas as pd
import tensorflow as tf
from IPython.core.display import HTML
from googleapiclient import discovery
import numpy as np
from sklearn.metrics import r2_score
"""
Explanation: Import libraries and define constants
End of explanation
"""
bh = tf.keras.datasets.boston_housing
(X_train, y_train), (X_eval, y_eval) = bh.load_data()
data_mean = X_train.mean(axis=0)
data_std = X_train.std(axis=0)
X_train = (X_train - data_mean) / data_std
X_eval = (X_eval - data_mean) / data_std
training = pd.DataFrame(X_train)
training.columns = ["f{}".format(c) for c in training.columns]
training['target'] = y_train
validation = pd.DataFrame(X_eval)
validation.columns = ["f{}".format(c) for c in validation.columns]
validation['target'] = y_eval
print('Training data head')
display(training.head())
training_data = os.path.join('gs://', BUCKET_NAME, 'data/train.csv')
validation_data = os.path.join('gs://', BUCKET_NAME, 'data/valid.csv')
print('Copy the data in bucket ...')
with tf.io.gfile.GFile(training_data, 'w') as f:
training.to_csv(f, index=False)
with tf.io.gfile.GFile(validation_data, 'w') as f:
validation.to_csv(f, index=False)
"""
Explanation: Create a dataset
End of explanation
"""
output_location = os.path.join('gs://', BUCKET_NAME, 'output')
job_name = "xgboost_regression_{}".format(time.strftime("%Y%m%d%H%M%S"))
!gcloud ai-platform jobs submit training $job_name \
--master-image-uri gcr.io/aihub-c2t-containers/kfp-components/trainer/dist_xgboost:latest \
--region $REGION \
--scale-tier CUSTOM \
--master-machine-type standard \
-- \
--output-location {output_location} \
--training-data {training_data} \
--validation-data {validation_data} \
--target-column target \
--data-type csv \
--objective reg:tweedie
"""
Explanation: Cloud training
Accelerator and distribution support
| GPU | Multi-GPU Node | TPU | Workers | Parameter Server |
|---|---|---|---|---|
| Yes | No | No | Yes | No |
To have distribution and/or accelerators to your AI Platform training call, use parameters similar to the examples as shown below.
bash
--master-machine-type standard_gpu \
--worker-machine-type standard_gpu \
--worker-count 2 \
AI Platform training
Distributed XGBoost ML container documentation.
AI Platform training documentation.
End of explanation
"""
if not tf.io.gfile.exists(os.path.join(output_location, 'report.html')):
raise RuntimeError('The file report.html was not found. Did the training job finish?')
with tf.io.gfile.GFile(os.path.join(output_location, 'report.html')) as f:
display(HTML(f.read()))
"""
Explanation: Local training snippet
Note that the training can also be done locally with Docker
bash
docker run \
-v /tmp:/tmp \
-it gcr.io/aihub-c2t-containers/kfp-components/trainer/dist_xgboost:latest \
--training-data /tmp/train.csv \
--validation-data /tmp/valid.csv \
--output-location /tmp/output \
--target-column target \
--data-type csv \
--objective reg:tweedie
Inspect the Run Report
The "Run Report" will help you identify if the model was successfully trained.
End of explanation
"""
#@markdown ---
model = 'xgboost_boston_housing' #@param {type:"string"}
version = 'v1' #@param {type:"string"}
#@markdown ---
"""
Explanation: Deployment parameters
End of explanation
"""
# the exact location of the model is in model_uri.txt
with tf.io.gfile.GFile(os.path.join(output_location, 'model_uri.txt')) as f:
model_uri = f.read().replace('/model.bst', '')
# create a model
! gcloud ai-platform models create $model --region $REGION
# create a version
! gcloud ai-platform versions create $version --region $REGION \
--model $model \
--runtime-version 1.15 \
--origin $model_uri \
--framework XGBOOST \
--project $PROJECT_ID
"""
Explanation: Deploy the model for serving
https://cloud.google.com/ai-platform/prediction/docs/deploying-models
End of explanation
"""
# format the data for serving
instances = validation.drop(columns='target').values.tolist()
validation_targets = validation['target']
display(instances[:2])
service = discovery.build('ml', 'v1')
name = 'projects/{project}/models/{model}/versions/{version}'.format(project=PROJECT_ID,
model=model,
version=version)
body = {'instances': instances}
response = service.projects().predict(name=name, body=body).execute()
if 'error' in response:
raise RuntimeError(response['error'])
predictions = response['predictions']
R2 = r2_score(validation_targets, predictions)
print('Coefficient of determination for the predictions: {}'.format(R2))
"""
Explanation: Use the endpoint for online predictions
End of explanation
"""
import witwidget
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
config_builder = WitConfigBuilder(examples=validation.values.tolist(),
feature_names=validation.columns.tolist())
config_builder.set_ai_platform_model(project=PROJECT_ID,
model=model,
version=version)
config_builder.set_model_type('regression')
config_builder.set_target_feature('target')
WitWidget(config_builder)
"""
Explanation: Inspect the ML model
What if tool home page
Installation
End of explanation
"""
# Delete model version resource
! gcloud ai-platform versions delete $version --quiet --model $model
# Delete model resource
! gcloud ai-platform models delete $model --quiet
# If training job is still running, cancel it
! gcloud ai-platform jobs cancel $job_name --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
End of explanation
"""
kuchaale/X-regression | examples/xarray_coupled_w_GLSAR_JRA55_analysis.ipynb | gpl-3.0

import supp_functions as fce
import xarray as xr
import pandas as pd
import statsmodels.api as sm
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Table of Contents
1. Import libraries
2. Data opening
3. Variable and period of analysis selection
4. Deseasonalizing
5. Regressor loading
6. Regression function
7. Regression calculation
8. Visualization
# Import libraries
End of explanation
"""
s_year = 1979
e_year = 2009
vari ='t'
in_dir = '~/'
in_netcdf = in_dir + 'jra55_tmp_1960_2009_zm.nc'
ds = xr.open_dataset(in_netcdf)
"""
Explanation: Data opening
End of explanation
"""
times = pd.date_range(str(s_year)+'-01-01', str(e_year)+'-12-31', name='time', freq = 'M')
ds_sel = ds.sel(time = times, method='ffill') #nearest
ds_sel = ds_sel[vari]
"""
Explanation: Variable and period of analysis selection
End of explanation
"""
climatology = ds_sel.groupby('time.month').mean('time')
anomalies = ds_sel.groupby('time.month') - climatology
"""
Explanation: Deseasonalizing
End of explanation
"""
global reg
solar = fce.open_reg_ccmi(in_dir+'solar_1947.nc', 'solar', 0, 1947, s_year, e_year)
solar /= 126.6
trend = np.linspace(-1, 1, solar.shape[0])
norm = 4
what_re = 'jra55'
what_sp = ''
i_year2 = 1947
i_year = 1960
what_re2 = 'HadISST'
saod = fce.open_reg_ccmi(in_dir+'sad_gm_50hPa_1949_2013.nc', 'sad', 0, 1949, s_year, e_year)
qbo1 = fce.open_reg_ccmi(in_dir+'qbo_'+what_re+what_sp+'_pc1.nc', 'index', norm, i_year, s_year, e_year)
qbo2 = fce.open_reg_ccmi(in_dir+'qbo_'+what_re+what_sp+'_pc2.nc', 'index', norm, i_year, s_year, e_year)
enso = fce.open_reg_ccmi(in_dir+'enso_'+what_re2+'_monthly_'+str(i_year2)+'_'+str(e_year)+'.nc', \
'enso', norm, i_year2, s_year, e_year)
print(trend.shape, solar.shape, saod.shape, enso.shape, qbo1.shape, qbo2.shape, anomalies.time.shape)
reg = np.column_stack((trend, solar, qbo1, qbo2, saod, enso))
"""
Explanation: Regressor loading
End of explanation
"""
def xr_regression(y):
X = sm.add_constant(reg, prepend=True) # regressor matrix
mod = sm.GLSAR(y.values, X, 2, missing = 'drop') # MLR analysis with AR2 modeling
res = mod.iterative_fit()
return xr.DataArray(res.params[1:])
"""
Explanation: Regression function
End of explanation
"""
stacked = anomalies.stack(allpoints = ['lev', 'lat']).squeeze()
stacked = stacked.reset_coords(drop=True)
coefs = stacked.groupby('allpoints').apply(xr_regression)
coefs_unstacked = coefs.unstack('allpoints')
"""
Explanation: Regression calculation
End of explanation
"""
%matplotlib inline
coefs_unstacked.isel(dim_0 = [1]).squeeze().plot.contourf(yincrease=False)#, vmin=-1, vmax=1, cmap=plt.cm.RdBu_r)
coefs_unstacked.isel(dim_0 = [1]).squeeze().plot.contour(yincrease=False, colors='k', add_colorbar=False, \
levels = [-0.5, -0.2,-0.1,0,0.1,0.2, 0.5])
plt.yscale('log')
"""
Explanation: Visualization
End of explanation
"""
lwcook/horsetail-matching | notebooks/Surrogates.ipynb | mit

from horsetailmatching import HorsetailMatching, UniformParameter
from horsetailmatching.demoproblems import TP2
from horsetailmatching.surrogates import PolySurrogate
import numpy as np
uparams = [UniformParameter(), UniformParameter()]
"""
Explanation: When we cannot afford to sample the quantity of interest many times at every design within an optimization, we can use surrogate models instead. Here we will show you how to use third party surrogates as well as the polynomial chaos surrogate provided with horsetail matching.
For the third party surrogates, we will use the effective-quadratures package [Seshadri, P. and Parks, G. (2017) Effective-Quadratures (EQ): Polynomials for Computational Engineering Studies, The Open Journal, http://dx.doi.org/10.21105/joss.0016], (also see http://www.effective-quadratures.org/). We will also use pyKriging [pyKriging 0.5281/zenodo.593877] (also see http://pykriging.com/).
The HorsetailMatching object can take a "surrogate" argument, which should be a function that takes an np.ndarray of values of the uncertain parameters of size (num_points, num_uncertainties) and an np.ndarray of the quantity of interest evaluated at these values of size (num_points), and returns a function that predicts the qoi at any value of the uncertainties. num_points is the number of points at which the surrogate is to be evaluated, and num_uncertainties is the number of uncertain parameters. The object also takes a "surrogate_points" argument, which is a list of points (values of u) at which horsetail matching calls the qoi function in order to fit the surrogate.
The following examples should make this more clear.
End of explanation
"""
thePoly = PolySurrogate(dimensions=len(uparams), order=4)
u_quadrature = thePoly.getQuadraturePoints()
def myPolynomialChaosSurrogate(u_quad, q_quad):
thePoly.train(q_quad)
return thePoly.predict
theHM = HorsetailMatching(TP2, uparams, surrogate=myPolynomialChaosSurrogate, surrogate_points=u_quadrature)
print('Metric evaluated with polynomial chaos surrogate: ', theHM.evalMetric([0, 1]))
theHM.surrogate = None
print('Metric evaluated with direct sampling: ', theHM.evalMetric([0, 1]))
"""
Explanation: Let's start with the built-in polynomial chaos surrogate. This finds the coefficients of a polynomial expansion by evaluating the inner product of the qoi function with each polynomial using Gaussian quadrature.
The polynomial chaos expansion used by the PolySurrogate class uses specific quadrature points over the uncertainty space to perform efficient integration, and so we must tell the HorsetailMatching object that these are the points at which to evaluate the quantity of interest when making the surrogate. This is done with the surrogate_points argument.
End of explanation
"""
from pyKriging.krige import kriging
from pyKriging.samplingplan import samplingplan
sp = samplingplan(2)
u_sampling = sp.optimallhc(25)
def myKrigingSurrogate(u_lhc, q_lhc):
krig = kriging(u_lhc, q_lhc)
krig.train()
return krig.predict
theHM.surrogate = myKrigingSurrogate
theHM.surrogate_points = u_sampling
print('Metric evaluated with kriging surrogate: ', theHM.evalMetric([0, 1]))
theHM.surrogate = None
print('Metric evaluated with direct sampling: ', theHM.evalMetric([0, 1]))
"""
Explanation: Next we use the pyKriging samplingplan function to give us 25 points found via Latin hypercube sampling at which to evaluate the metric to create the surrogate. Then we create a function in the form required by horsetail matching called myKrigingSurrogate, and pass this as the surrogate argument when making the horsetail matching object, along with the LHS points as the surrogate_points argument. Here we modify the already created horsetail matching object instead of making a new one.
End of explanation
"""
from equadratures import Polyreg
U1, U2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
u_tensor = np.vstack([U1.flatten(), U2.flatten()]).T
def myQuadraticSurrogate(u_tensor, q_tensor):
poly = Polyreg(np.mat(u_tensor), np.mat(q_tensor).T, 'quadratic')
def model(u):
return poly.testPolynomial(np.mat(u))
return model
theHM.surrogate = myQuadraticSurrogate
theHM.surrogate_points = u_tensor
print('Metric evaluated with quadratic surrogate: ', theHM.evalMetric([0, 1]))
theHM.surrogate = None
print('Metric evaluated with direct sampling: ', theHM.evalMetric([0, 1]))
"""
Explanation: Now we do a similar thing with the effective quadrature toolbox to make a quadratic polynomial surrogate.
End of explanation
"""
OSHI7/Learning1 | MatplotLib Pynotebooks/AnatomyOfMatplotlib-Part6-mpl_toolkits.ipynb | mit

from mpl_toolkits.mplot3d import Axes3D, axes3d
fig, ax = plt.subplots(1, 1, subplot_kw={'projection': '3d'})
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
"""
Explanation: mpl_toolkits
In addition to the core library of matplotlib, there are a few additional utilities that are set apart from matplotlib proper for some reason or another, but are often shipped with matplotlib.
Basemap - shipped separately from matplotlib due to size of mapping data that are included.
mplot3d - shipped with matplotlib to provide very simple, rudimentary 3D plots in the same style as matplotlib's 2D plots.
axes_grid1 - An enhanced SubplotAxes. Very Enhanced...
mplot3d
By taking advantage of matplotlib's z-order layering engine, mplot3d emulates 3D plotting by projecting 3D data into 2D space, layer by layer. While it isn't going to replace any of the true 3D plotting libraries anytime soon, its goal is to allow for matplotlib users to produce 3D plots with the same amount of simplicity as 2D plots are.
End of explanation
"""
from mpl_toolkits.axes_grid1 import AxesGrid
fig = plt.figure()
grid = AxesGrid(fig, 111, # similar to subplot(111)
nrows_ncols = (2, 2),
axes_pad = 0.2,
share_all=True,
label_mode = "L", # similar to "label_outer"
cbar_location = "right",
cbar_mode="single",
)
extent = (-3,4,-4,3)
for i in range(4):
im = grid[i].imshow(Z, extent=extent, interpolation="nearest")
grid.cbar_axes[0].colorbar(im)
plt.show()
"""
Explanation: axes_grid1
This module was originally intended as a collection of helper classes to ease the displaying of (possibly multiple) images with matplotlib. Some of the functionality has come to be useful for non-image plotting as well. Some classes deal with the sizing and positioning of multiple Axes relative to each other (ImageGrid, RGB Axes, and AxesDivider). The ParasiteAxes allow for the plotting of multiple datasets in the same axes, but each with its own x or y scale. Also, there is the AnchoredArtist that can be used to anchor particular artist objects in place.
One can get a sense of the neat things that can be done with this toolkit by browsing through its user guide linked above. There is one particular feature that is an absolute must-have for me -- automatic allocation of space for colorbars.
End of explanation
"""
%load http://matplotlib.org/mpl_examples/axes_grid/demo_parasite_axes2.py
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
if 1:
host = host_subplot(111, axes_class=AA.Axes)
plt.subplots_adjust(right=0.75)
par1 = host.twinx()
par2 = host.twinx()
offset = 60
new_fixed_axis = par2.get_grid_helper().new_fixed_axis
par2.axis["right"] = new_fixed_axis(loc="right",
axes=par2,
offset=(offset, 0))
par2.axis["right"].toggle(all=True)
host.set_xlim(0, 2)
host.set_ylim(0, 2)
host.set_xlabel("Distance")
host.set_ylabel("Density")
par1.set_ylabel("Temperature")
par2.set_ylabel("Velocity")
p1, = host.plot([0, 1, 2], [0, 1, 2], label="Density")
p2, = par1.plot([0, 1, 2], [0, 3, 2], label="Temperature")
p3, = par2.plot([0, 1, 2], [50, 30, 15], label="Velocity")
par1.set_ylim(0, 4)
par2.set_ylim(1, 65)
host.legend()
host.axis["left"].label.set_color(p1.get_color())
par1.axis["right"].label.set_color(p2.get_color())
par2.axis["right"].label.set_color(p3.get_color())
plt.draw()
plt.show()
#plt.savefig("Test")
"""
Explanation: This next feature is commonly requested on the mailing lists. The problem is that most people who request it don't quite know how to describe it. We call it "Parasite Axes".
End of explanation
"""
%load http://matplotlib.org/mpl_toolkits/axes_grid/examples/demo_floating_axes.py
from matplotlib.transforms import Affine2D
import mpl_toolkits.axisartist.floating_axes as floating_axes
import numpy as np
import mpl_toolkits.axisartist.angle_helper as angle_helper
from matplotlib.projections import PolarAxes
from mpl_toolkits.axisartist.grid_finder import FixedLocator, MaxNLocator, \
DictFormatter
def setup_axes1(fig, rect):
"""
A simple one.
"""
tr = Affine2D().scale(2, 1).rotate_deg(30)
grid_helper = floating_axes.GridHelperCurveLinear(tr, extremes=(0, 4, 0, 4))
ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
fig.add_subplot(ax1)
aux_ax = ax1.get_aux_axes(tr)
grid_helper.grid_finder.grid_locator1._nbins = 4
grid_helper.grid_finder.grid_locator2._nbins = 4
return ax1, aux_ax
def setup_axes2(fig, rect):
    """
    With custom locator and formatter.
    Note that the extreme values are swapped.
    """
    #tr_scale = Affine2D().scale(np.pi/180., 1.)
    tr = PolarAxes.PolarTransform()
    pi = np.pi
    angle_ticks = [(0, r"$0$"),
                   (.25*pi, r"$\frac{1}{4}\pi$"),
                   (.5*pi, r"$\frac{1}{2}\pi$")]
    grid_locator1 = FixedLocator([v for v, s in angle_ticks])
    tick_formatter1 = DictFormatter(dict(angle_ticks))
    grid_locator2 = MaxNLocator(2)
    grid_helper = floating_axes.GridHelperCurveLinear(tr,
                                                      extremes=(.5*pi, 0, 2, 1),
                                                      grid_locator1=grid_locator1,
                                                      grid_locator2=grid_locator2,
                                                      tick_formatter1=tick_formatter1,
                                                      tick_formatter2=None,
                                                      )
    ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
    fig.add_subplot(ax1)
    # create a parasite axes whose transData in RA, cz
    aux_ax = ax1.get_aux_axes(tr)
    aux_ax.patch = ax1.patch  # for aux_ax to have a clip path as in ax
    ax1.patch.zorder = 0.9    # but this has a side effect that the patch is
                              # drawn twice, and possibly over some other
                              # artists. So, we decrease the zorder a bit to
                              # prevent this.
    return ax1, aux_ax
def setup_axes3(fig, rect):
    """
    Sometimes, things like axis_direction need to be adjusted.
    """
    # rotate a bit for better orientation
    tr_rotate = Affine2D().translate(-95, 0)
    # scale degree to radians
    tr_scale = Affine2D().scale(np.pi/180., 1.)
    tr = tr_rotate + tr_scale + PolarAxes.PolarTransform()
    grid_locator1 = angle_helper.LocatorHMS(4)
    tick_formatter1 = angle_helper.FormatterHMS()
    grid_locator2 = MaxNLocator(3)
    ra0, ra1 = 8.*15, 14.*15
    cz0, cz1 = 0, 14000
    grid_helper = floating_axes.GridHelperCurveLinear(tr,
                                                      extremes=(ra0, ra1, cz0, cz1),
                                                      grid_locator1=grid_locator1,
                                                      grid_locator2=grid_locator2,
                                                      tick_formatter1=tick_formatter1,
                                                      tick_formatter2=None,
                                                      )
    ax1 = floating_axes.FloatingSubplot(fig, rect, grid_helper=grid_helper)
    fig.add_subplot(ax1)
    # adjust axis
    ax1.axis["left"].set_axis_direction("bottom")
    ax1.axis["right"].set_axis_direction("top")
    ax1.axis["bottom"].set_visible(False)
    ax1.axis["top"].set_axis_direction("bottom")
    ax1.axis["top"].toggle(ticklabels=True, label=True)
    ax1.axis["top"].major_ticklabels.set_axis_direction("top")
    ax1.axis["top"].label.set_axis_direction("top")
    ax1.axis["left"].label.set_text(r"cz [km$^{-1}$]")
    ax1.axis["top"].label.set_text(r"$\alpha_{1950}$")
    # create a parasite axes whose transData in RA, cz
    aux_ax = ax1.get_aux_axes(tr)
    aux_ax.patch = ax1.patch  # for aux_ax to have a clip path as in ax
    ax1.patch.zorder = 0.9    # but this has a side effect that the patch is
                              # drawn twice, and possibly over some other
                              # artists. So, we decrease the zorder a bit to
                              # prevent this.
    return ax1, aux_ax
if 1:
    import matplotlib.pyplot as plt

    fig = plt.figure(1, figsize=(8, 4))
    fig.subplots_adjust(wspace=0.3, left=0.05, right=0.95)

    ax1, aux_ax1 = setup_axes1(fig, 131)
    aux_ax1.bar([0, 1, 2, 3], [3, 2, 1, 3])
    #theta = np.random.rand(10) #*.5*np.pi
    #radius = np.random.rand(10) #+1.
    #aux_ax1.scatter(theta, radius)

    ax2, aux_ax2 = setup_axes2(fig, 132)
    theta = np.random.rand(10)*.5*np.pi
    radius = np.random.rand(10)+1.
    aux_ax2.scatter(theta, radius)

    ax3, aux_ax3 = setup_axes3(fig, 133)
    theta = (8 + np.random.rand(10)*(14-8))*15.  # in degrees
    radius = np.random.rand(10)*14000.
    aux_ax3.scatter(theta, radius)

    plt.show()
"""
Explanation: And finally, as a nice teaser of what else axes_grid1 can do...
End of explanation
"""
%matplotlib inline
import os
from pprint import pprint
import shutil
import subprocess
import urllib.request
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm
from matplotlib.patches import Rectangle
import openmc.data
"""
Explanation: In this notebook, we will go through the salient features of the openmc.data package in the Python API. This package enables inspection, analysis, and conversion of nuclear data from ACE files. Most importantly, the package provides a means to generate HDF5 nuclear data libraries that are used by the transport solver.
End of explanation
"""
openmc.data.atomic_mass('Fe54')
openmc.data.NATURAL_ABUNDANCE['H2']
openmc.data.atomic_weight('C')
"""
Explanation: Physical Data
Some very helpful physical data is available as part of openmc.data: atomic masses, natural abundances, and atomic weights.
End of explanation
"""
url = 'https://anl.box.com/shared/static/kxm7s57z3xgfbeq29h54n7q6js8rd11c.ace'
filename, headers = urllib.request.urlretrieve(url, 'gd157.ace')
# Load ACE data into object
gd157 = openmc.data.IncidentNeutron.from_ace('gd157.ace')
gd157
"""
Explanation: The IncidentNeutron class
The most useful class within the openmc.data API is IncidentNeutron, which stores continuous-energy incident neutron data. This class has factory methods from_ace, from_endf, and from_hdf5 which take a data file on disk and parse it into a hierarchy of classes in memory. To demonstrate this feature, we will download an ACE file (which can be produced with NJOY 2016) and then load it in using the IncidentNeutron.from_ace method.
End of explanation
"""
total = gd157[1]
total
"""
Explanation: Cross sections
From Python, it's easy to explore (and modify) the nuclear data. Let's start off by reading the total cross section. Reactions are indexed using their "MT" number -- a unique identifier for each reaction defined by the ENDF-6 format. The MT number for the total cross section is 1.
End of explanation
"""
total.xs
"""
Explanation: Cross sections for each reaction can be stored at multiple temperatures. To see what temperatures are available, we can look at the reaction's xs attribute.
End of explanation
"""
total.xs['294K'](1.0)
"""
Explanation: To find the cross section at a particular energy, 1 eV for example, simply get the cross section at the appropriate temperature and then call it as a function. Note that our nuclear data uses eV as the unit of energy.
End of explanation
"""
total.xs['294K']([1.0, 2.0, 3.0])
"""
Explanation: The xs attribute can also be called on an array of energies.
End of explanation
"""
gd157.energy
energies = gd157.energy['294K']
total_xs = total.xs['294K'](energies)
plt.loglog(energies, total_xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
"""
Explanation: A quick way to plot cross sections is to use the energy attribute of IncidentNeutron. This gives an array of all the energy values used in cross section interpolation for each temperature present.
End of explanation
"""
pprint(list(gd157.reactions.values())[:10])
"""
Explanation: Reaction Data
Most of the interesting data for an IncidentNeutron instance is contained within the reactions attribute, which is a dictionary mapping MT values to Reaction objects.
End of explanation
"""
n2n = gd157[16]
print('Threshold = {} eV'.format(n2n.xs['294K'].x[0]))
"""
Explanation: Let's suppose we want to look more closely at the (n,2n) reaction. This reaction has an energy threshold
End of explanation
"""
n2n.xs
xs = n2n.xs['294K']
plt.plot(xs.x, xs.y)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
plt.xlim((xs.x[0], xs.x[-1]))
"""
Explanation: The (n,2n) cross section, like all basic cross sections, is represented by the Tabulated1D class. The energy and cross section values in the table can be directly accessed with the x and y attributes. Using x and y has the nice benefit of automatically accounting for reaction thresholds.
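The threshold behavior can be sketched with plain NumPy. This is only an illustration with made-up numbers, not OpenMC's actual Tabulated1D implementation:

```python
import numpy as np

# Hypothetical (n,2n)-like table: zero cross section below the threshold energy.
energy = np.array([1.0e6, 2.0e6, 5.0e6, 1.0e7, 2.0e7])  # eV
xs = np.array([0.0, 0.05, 0.8, 1.5, 1.2])               # barns

def tabulated_xs(e):
    # Linear interpolation inside the table; constant padding outside it.
    return np.interp(e, energy, xs, left=0.0, right=xs[-1])

tabulated_xs(3.0e6)   # interpolates between the 2 MeV and 5 MeV grid points
tabulated_xs(5.0e5)   # below threshold: 0.0
```

Because the first table entry sits at the threshold with a zero value, energies below it naturally evaluate to zero.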
End of explanation
"""
n2n.products
neutron = n2n.products[0]
neutron.distribution
"""
Explanation: To get information on the energy and angle distribution of the neutrons emitted in the reaction, we need to look at the products attribute.
End of explanation
"""
dist = neutron.distribution[0]
dist.energy_out
"""
Explanation: We see that the neutrons emitted have a correlated angle-energy distribution. Let's look at the energy_out attribute to see what the outgoing energy distributions are.
End of explanation
"""
for e_in, e_out_dist in zip(dist.energy[::5], dist.energy_out[::5]):
    plt.semilogy(e_out_dist.x, e_out_dist.p, label='E={:.2f} MeV'.format(e_in/1e6))
plt.ylim(ymax=1e-6)
plt.legend()
plt.xlabel('Outgoing energy (eV)')
plt.ylabel('Probability/eV')
plt.show()
"""
Explanation: Here we see we have a tabulated outgoing energy distribution for each incoming energy. Note that the same probability distribution classes that we could use to create a source definition are also used within the openmc.data package. Let's plot every fifth distribution to get an idea of what they look like.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
cm = matplotlib.cm.Spectral_r
# Determine size of probability tables
urr = gd157.urr['294K']
n_energy = urr.table.shape[0]
n_band = urr.table.shape[2]
for i in range(n_energy):
    # Get bounds on energy
    if i > 0:
        e_left = urr.energy[i] - 0.5*(urr.energy[i] - urr.energy[i-1])
    else:
        e_left = urr.energy[i] - 0.5*(urr.energy[i+1] - urr.energy[i])
    if i < n_energy - 1:
        e_right = urr.energy[i] + 0.5*(urr.energy[i+1] - urr.energy[i])
    else:
        e_right = urr.energy[i] + 0.5*(urr.energy[i] - urr.energy[i-1])

    for j in range(n_band):
        # Determine maximum probability for a single band
        max_prob = np.diff(urr.table[i,0,:]).max()

        # Determine bottom of band
        if j > 0:
            xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1])
            value = (urr.table[i,0,j] - urr.table[i,0,j-1])/max_prob
        else:
            xs_bottom = urr.table[i,1,j] - 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j])
            value = urr.table[i,0,j]/max_prob

        # Determine top of band
        if j < n_band - 1:
            xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j+1] - urr.table[i,1,j])
        else:
            xs_top = urr.table[i,1,j] + 0.5*(urr.table[i,1,j] - urr.table[i,1,j-1])

        # Draw rectangle with appropriate color
        ax.add_patch(Rectangle((e_left, xs_bottom), e_right - e_left, xs_top - xs_bottom,
                               color=cm(value)))
# Overlay total cross section
ax.plot(gd157.energy['294K'], total.xs['294K'](gd157.energy['294K']), 'k')
# Make plot pretty and labeled
ax.set_xlim(1.0, 1.0e5)
ax.set_ylim(1e-1, 1e4)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Energy (eV)')
ax.set_ylabel('Cross section (b)')
"""
Explanation: Unresolved resonance probability tables
We can also look at unresolved resonance probability tables which are stored in a ProbabilityTables object. In the following example, we'll create a plot showing what the total cross section probability tables look like as a function of incoming energy.
End of explanation
"""
gd157.export_to_hdf5('gd157.h5', 'w')
"""
Explanation: Exporting HDF5 data
If you have an instance IncidentNeutron that was created from ACE or HDF5 data, you can easily write it to disk using the export_to_hdf5() method. This can be used to convert ACE to HDF5 or to take an existing data set and actually modify cross sections.
End of explanation
"""
gd157_reconstructed = openmc.data.IncidentNeutron.from_hdf5('gd157.h5')
np.all(gd157[16].xs['294K'].y == gd157_reconstructed[16].xs['294K'].y)
"""
Explanation: With few exceptions, the HDF5 file encodes the same data as the ACE file.
End of explanation
"""
h5file = h5py.File('gd157.h5', 'r')
main_group = h5file['Gd157/reactions']
for name, obj in sorted(list(main_group.items()))[:10]:
    if 'reaction_' in name:
        print('{}, {}'.format(name, obj.attrs['label'].decode()))
n2n_group = main_group['reaction_016']
pprint(list(n2n_group.values()))
"""
Explanation: And one of the best parts of using HDF5 is that it is a widely used format with lots of third-party support. You can use h5py, for example, to inspect the data.
End of explanation
"""
n2n_group['294K/xs'][()]
"""
Explanation: So we see that the hierarchy of data within the HDF5 mirrors the hierarchy of Python objects that we manipulated before.
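If you don't have the gd157.h5 file at hand, the same mirroring can be sketched with a small in-memory HDF5 file; the group names below are made up to mimic the layout:

```python
import h5py
import numpy as np

# A small in-memory HDF5 file (driver='core', nothing written to disk)
# that mimics the nuclide/reactions/temperature/xs layout.
f = h5py.File('demo.h5', 'w', driver='core', backing_store=False)
grp = f.create_group('Gd157/reactions/reaction_016/294K')
grp.create_dataset('xs', data=np.array([0.0, 0.1, 0.5]))

# visititems walks every group and dataset, mirroring the object hierarchy.
names = []
f.visititems(lambda name, obj: names.append(name))
names
```

The collected paths include every intermediate group as well as the leaf dataset, just like the reaction groups printed above.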
End of explanation
"""
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/Gd/157'
filename, headers = urllib.request.urlretrieve(url, 'gd157.endf')
# Load into memory
gd157_endf = openmc.data.IncidentNeutron.from_endf(filename)
gd157_endf
"""
Explanation: Working with ENDF files
In addition to being able to load ACE and HDF5 data, we can also load ENDF data directly into an IncidentNeutron instance using the from_endf() factory method. Let's download the ENDF/B-VII.1 evaluation for $^{157}$Gd and load it in:
End of explanation
"""
elastic = gd157_endf[2]
"""
Explanation: Just as before, we can get a reaction by indexing the object directly:
End of explanation
"""
elastic.xs
"""
Explanation: However, if we look at the cross section now, we see that it isn't represented as tabulated data anymore.
End of explanation
"""
elastic.xs['0K'](0.0253)
"""
Explanation: If you had Cython installed when you built/installed OpenMC, you should be able to evaluate resonant cross sections from ENDF data directly, i.e., OpenMC will reconstruct resonances behind the scenes for you.
End of explanation
"""
gd157_endf.resonances.ranges
"""
Explanation: When data is loaded from an ENDF file, there is also a special resonances attribute that contains resolved and unresolved resonance region data (from MF=2 in an ENDF file).
End of explanation
"""
[(r.energy_min, r.energy_max) for r in gd157_endf.resonances.ranges]
"""
Explanation: We see that $^{157}$Gd has a resolved resonance region represented in the Reich-Moore format as well as an unresolved resonance region. We can look at the min/max energy of each region by doing the following:
End of explanation
"""
# Create log-spaced array of energies
resolved = gd157_endf.resonances.resolved
energies = np.logspace(np.log10(resolved.energy_min),
                       np.log10(resolved.energy_max), 1000)
# Evaluate elastic scattering xs at energies
xs = elastic.xs['0K'](energies)
# Plot cross section vs energies
plt.loglog(energies, xs)
plt.xlabel('Energy (eV)')
plt.ylabel('Cross section (b)')
"""
Explanation: With knowledge of the energy bounds, let's create an array of energies over the entire resolved resonance range and plot the elastic scattering cross section.
End of explanation
"""
resolved.parameters.head(10)
"""
Explanation: Resonance ranges also have a useful parameters attribute that shows the energies and widths for resonances.
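Since parameters behaves like a pandas DataFrame, it can be filtered like any other. Here is a sketch on a made-up table; the column names and values are illustrative only, as the real ones depend on the resonance formalism:

```python
import pandas as pd

# Hypothetical resonance table; column names here are illustrative only.
params = pd.DataFrame({'energy': [1.1, 2.0, 16.8, 44.5],
                       'neutron_width': [2e-4, 5e-4, 1e-3, 3e-3]})

# Select resonances below 10 eV, just like any other DataFrame filter.
low_e = params[params['energy'] < 10]
low_e
```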
End of explanation
"""
gd157.add_elastic_0K_from_endf('gd157.endf')
"""
Explanation: Heavy-nuclide resonance scattering
OpenMC has two methods for accounting for resonance upscattering in heavy nuclides, DBRC and ARES. These methods rely on 0 K elastic scattering data being present. If you have an existing ACE/HDF5 dataset and you need to add 0 K elastic scattering data to it, this can be done using the IncidentNeutron.add_elastic_0K_from_endf() method. Let's do this with our original gd157 object that we instantiated from an ACE file.
End of explanation
"""
gd157[2].xs
"""
Explanation: Let's check to make sure that we have both the room temperature elastic scattering cross section as well as a 0K cross section.
End of explanation
"""
# Download ENDF file
url = 'https://t2.lanl.gov/nis/data/data/ENDFB-VII.1-neutron/H/2'
filename, headers = urllib.request.urlretrieve(url, 'h2.endf')
# Run NJOY to create deuterium data
h2 = openmc.data.IncidentNeutron.from_njoy('h2.endf', temperatures=[300., 400., 500.], stdout=True)
"""
Explanation: Generating data from NJOY
To run OpenMC in continuous-energy mode, you generally need to have ACE files already available that can be converted to OpenMC's native HDF5 format. If you don't already have suitable ACE files or need to generate new data, both the IncidentNeutron and ThermalScattering classes include from_njoy() methods that will run NJOY to generate ACE files and then read those files to create OpenMC class instances. The from_njoy() methods take as input the name of an ENDF file on disk. By default, it is assumed that you have an executable named njoy available on your path. This can be configured with the optional njoy_exec argument. Additionally, if you want to show the progress of NJOY as it is running, you can pass stdout=True.
Let's use IncidentNeutron.from_njoy() to run NJOY to create data for $^2$H using an ENDF file. We'll specify that we want data specifically at 300, 400, and 500 K.
End of explanation
"""
h2[2].xs
"""
Explanation: Now we can use our h2 object just as we did before.
End of explanation
"""
url = 'https://github.com/mit-crpg/WMP_Library/releases/download/v1.1/092238.h5'
filename, headers = urllib.request.urlretrieve(url, '092238.h5')
u238_multipole = openmc.data.WindowedMultipole.from_hdf5('092238.h5')
"""
Explanation: Note that 0 K elastic scattering data is automatically added when using from_njoy() so that resonance elastic scattering treatments can be used.
Windowed multipole
OpenMC can also be used with an experimental format called windowed multipole. Windowed multipole allows for analytic on-the-fly Doppler broadening of the resolved resonance range. Windowed multipole data can be downloaded with the openmc-get-multipole-data script. This data can be used in the transport solver, but it can also be used directly in the Python API.
End of explanation
"""
u238_multipole(1.0, 294)
"""
Explanation: The WindowedMultipole object can be called with energy and temperature values. Calling the object gives a tuple of 3 cross sections: elastic scattering, radiative capture, and fission.
End of explanation
"""
E = np.linspace(5, 25, 1000)
plt.semilogy(E, u238_multipole(E, 293.606)[1])
"""
Explanation: An array can be passed for the energy argument.
End of explanation
"""
E = np.linspace(6.1, 7.1, 1000)
plt.semilogy(E, u238_multipole(E, 0)[1])
plt.semilogy(E, u238_multipole(E, 900)[1])
"""
Explanation: The real advantage to multipole is that it can be used to generate cross sections at any temperature. For example, this plot shows the Doppler broadening of the 6.67 eV resonance between 0 K and 900 K.
End of explanation
"""
import numpy as np
"""
Explanation: Python - Essential libraries for data analysis
This notebook was originally created as a blog post by Raúl E. López Briega on Mi blog sobre Python. The content is under the BSD license.
In my previous article I gave a brief introduction to the world of Python; today I will detail some of the libraries that are essential for working with Python in the scientific community or in data analysis.
One of the great advantages that Python offers over other programming languages, besides being much easier to learn, is how large and prolific the community of developers around it is; that community has contributed a wide variety of first-class libraries that extend the functionality of the language. You can find a Python library for practically anything you can think of.
Some of the libraries that have become essential, and are by now almost part of the language itself, are the following:
Numpy
Numpy, short for Numerical Python, is the fundamental package for scientific computing in Python. Among other things, it provides:
A fast and efficient multidimensional <a href='https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)' target='_blank'>matrix</a> object, ndarray.
Functions for performing element-wise calculations and other mathematical operations on <a href='https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)' target='_blank'>matrices</a>.
Tools for reading and writing <a href='https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)' target='_blank'>matrix</a>-based data sets.
Linear algebra operations, Fourier transforms, and random number generation.
Integration tools for connecting [C](https://es.wikipedia.org/wiki/C_(lenguaje_de_programaci%C3%B3n)), C++ and Fortran with Python
Beyond the fast <a href='https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)' target='_blank'>matrix</a>-processing capabilities that Numpy adds to Python, one of its main purposes with respect to data analysis is the use of its data structures as containers for passing data between different algorithms. For numerical data, Numpy <a href='https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)' target='_blank'>matrices</a> are a much more efficient way of storing and manipulating data than any of the standard data structures built into Python. Likewise, libraries written in a low-level language, such as C, can operate on the data stored in a Numpy array without copying any data.
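A minimal illustration of the efficiency point: the same element-wise operation written with a pure-Python comprehension and with a single vectorized NumPy expression (timings, not shown here, typically favor NumPy by a wide margin):

```python
import numpy as np

data = list(range(1_000_000))

# Pure Python: an explicit comprehension over every element.
doubled_list = [x * 2 for x in data]

# NumPy: one vectorized expression over a contiguous, typed array.
arr = np.arange(1_000_000)
doubled_arr = arr * 2
```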
As a general naming convention, when we import the Numpy library into our Python program we usually write the following:
End of explanation
"""
# Creating a vector from a Python list
vector = np.array([1, 2, 3, 4])
vector
# To create a matrix, we simply pass a nested list to Numpy's array object
matriz = np.array([[1, 2],
                   [3, 4]])
matriz
# Both vectors and matrices are objects of type ndarray
type(vector), type(matriz)
# Numpy's ndarray objects have shape and size properties that show their dimensions.
print(vector.shape, vector.size)
print(matriz.shape, matriz.size)
"""
Explanation: Creating matrices in Numpy
There are several ways to create matrices in Numpy, for example from:
A Python list or tuple
Matrix-creation functions such as arange, linspace, etc.
Flat data files, such as .csv files
In Numpy, both vectors and matrices are created using the ndarray object
End of explanation
"""
# arange
# The arange function makes it easy to create matrices
x = np.arange(1, 11, 1)  # arguments: start, stop, step
x
# linspace
# linspace returns a vector with the number of samples we request, uniformly spaced.
np.linspace(1, 25, 25)  # arguments: start, stop, samples
# mgrid
# With mgrid we can create multidimensional arrays.
x, y = np.mgrid[0:5, 0:5]
x
y
# zeros and ones
# These functions let us create matrices of zeros or of ones.
np.zeros((3,3))
np.ones((3,3))
# random.randn
# This function generates a matrix of standard normally distributed numbers.
np.random.randn(5,5)
# diag
# Creates a matrix with the numbers we pass in on the diagonal.
np.diag([1,1,1])
"""
Explanation: Using functions to create matrices
End of explanation
"""
# matplotlib is generally imported as follows.
import matplotlib.pyplot as plt
"""
Explanation: Matplotlib
Matplotlib is the most popular Python library for visualizations and plots. Matplotlib can produce high-quality figures worthy of any scientific publication.
Some of the many advantages that Matplotlib offers include:
It is easy to learn.
It supports text, titles, and labels in $\LaTeX$ format.
It provides fine-grained control over every element of a figure, such as its size, its line styles, etc.
It lets us create high-quality plots and figures that can be saved in several formats, such as: PNG, PDF, SVG, EPS, and PGF.
Matplotlib integrates wonderfully with IPython (see below), which gives us a comfortable environment for visualizations and interactive data exploration.
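Saving a figure in one of the formats listed above can be sketched like this; the Agg backend is used here only so the sketch also runs without a display:

```python
import io
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])

# savefig infers the format from the file extension, or takes format= explicitly.
buf = io.BytesIO()
fig.savefig(buf, format='png')
```

In everyday use you would simply call `fig.savefig('figura.png')` or `fig.savefig('figura.pdf')` with a file name of your choice.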
Some plots with Matplotlib
End of explanation
"""
# Define our function.
def f(x):
    return np.exp(-x ** 2)

# Create a vector with the points we will pass to the function defined above.
x = np.linspace(-1, 5, num=30)

# Plot the function using matplotlib's plt object
plt.xlabel("$x$ axis")
plt.ylabel("$f(x)$")
plt.title("Function $f(x)$")
plt.grid(True)
fig = plt.plot(x, f(x), label="Function f(x)")
plt.legend()

# Scatter plot with matplotlib
N = 100
x1 = np.random.randn(N)  # create vector x
y1 = np.random.randn(N)  # create vector y
s = 50 + 50 * np.random.randn(N)  # variable to modify the marker size
c = np.random.randn(N)  # variable to modify the marker color
plt.scatter(x1, y1, s=s, c=c, cmap=plt.cm.Blues)
plt.grid(True)
plt.colorbar()
fig = plt.scatter(x1, y1)
"""
Explanation: Now we are going to plot the following function.
$$f(x) = e^{-x^2}$$
End of explanation
"""
x = np.linspace(0, 5, 10)  # set of points
y = x ** 2  # function

fig = plt.figure()

axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8])  # main axes
axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3])  # inset axes

# Main figure
axes1.plot(x, y, 'r')
axes1.set_xlabel('x')
axes1.set_ylabel('y')
axes1.set_title('OOP example')

# Inset
axes2.plot(y, x, 'g')
axes2.set_xlabel('y')
axes2.set_ylabel('x')
axes2.set_title('inset');

# Example with more than one subplot.
fig, axes = plt.subplots(nrows=1, ncols=2)
for ax in axes:
    ax.plot(x, y, 'r')
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_title('title')
fig.tight_layout()
"""
Explanation: Matplotlib's object-oriented interface
The main idea behind object-oriented programming is that functions and actions are applied to objects, and no object should hold global state (as is the case with the plt interface we just used). The real advantage of this approach becomes evident when we create more than one figure, or when a figure contains more than one subplot.
To use the object-oriented API we start in a way similar to the previous example, but instead of creating a new global plt instance, we store a reference to the newly created figure in the variable fig, and from it we create new axes using the add_axes method of the Figure instance:
End of explanation
"""
# Importing pandas
import pandas as pd
"""
Explanation: IPython
IPython promotes an execute-explore working environment, in contrast to the traditional edit-compile-run software development model. That is, the computational problem to be solved is seen more as a whole process of executing tasks, rather than the traditional model of producing an answer (output) to a question (input). IPython also provides tight integration with our operating system, allowing us to easily access all of our files from the same tool.
Some of IPython's outstanding features are:
Its powerful interactive <a href='https://es.wikipedia.org/wiki/Shell_(inform%C3%A1tica)' target='_blank'>shell</a>.
The Notebook, its web interface with support for code, text, mathematical expressions, inline plots, and multimedia.
Its support for interactive data visualization. IPython is fully integrated with matplotlib.
Its simple and flexible interface for working with parallel computing.
IPython is much more than a library; it is an entire working environment that makes working with Python enormously easier. The very pages of this blog are built with the help of the fantastic IPython Notebook. (To see the Notebook on which this article is based, visit the following link.)
For more information about IPython and some of its features, I also invite you to visit the article I wrote on my other blog.
Pandas
Pandas is an open source library that brings easy-to-use, high-performance data structures to Python, together with a large number of functions that are essential for data analysis. With the help of Pandas we can work with structured data in a faster and more expressive way.
Some of the outstanding things Pandas gives us are:
A fast and efficient DataFrame object for manipulating data, with integrated indexing;
Tools for reading and writing data between fast, efficient in-memory data structures, such as the DataFrame, and most of the well-known data-handling formats, such as: CSV and text files, Microsoft Excel files, SQL databases, and the HDF5 scientific format.
Intelligent data alignment and integrated handling of missing data; with these features we gain performance in computations between DataFrames and easy manipulation and sorting of our data set;
Flexibility to manipulate and reshape our data set, and ease in building pivot tables;
The ability to filter data and to add or remove columns in a highly expressive way;
Highly efficient merge and *join* operations on our data sets;
Hierarchical indexing that provides an intuitive way of working with high-dimensional data in a lower-dimensional data structure;
The ability to perform aggregate calculations or data transformations with the powerful group by engine, which lets us split-apply-combine our data sets;
It combines the high-performance matrix features of Numpy with the flexible data-manipulation capabilities of spreadsheets and relational databases (such as SQL);
A large number of time-series features, ideal for financial analysis;
All of its functions and data structures are optimized for high performance, with the critical parts of the code written in Cython or [C](https://es.wikipedia.org/wiki/C_(lenguaje_de_programaci%C3%B3n));
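The split-apply-combine engine mentioned above can be sketched with a small made-up table:

```python
import pandas as pd

sales = pd.DataFrame({'region': ['north', 'north', 'south', 'south'],
                      'amount': [10, 20, 5, 15]})

# Split by region, apply a sum to each group, combine into a new Series.
totals = sales.groupby('region')['amount'].sum()
totals
```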
Pandas data structures
End of explanation
"""
# Series are one-dimensional arrays similar to vectors, but with their own index.
# Creating a Series
serie = pd.Series([2, 4, -8, 3])
serie
# We can inspect both the indexes and the values of a Series.
print(serie.values)
print(serie.index)
# Creating a Series with our own indexes.
serie2 = pd.Series([2, 4, -8, 3], index=['d', 'b', 'a', 'c'])
serie2
# Accessing the data through the indexes
print(serie2['a'])
print(serie2[['b', 'c', 'd']])
print(serie2[serie2 > 0])
"""
Explanation: Series
End of explanation
"""
# The DataFrame is a tabular data structure similar to an Excel spreadsheet.
# It has both column and row indexes.
# Creating a DataFrame.
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'],
        'year' : [2000, 2001, 2002, 2001, 2002],
        'pop'  : [1.5, 1.7, 3.6, 2.4, 2.9]}
frame = pd.DataFrame(data)  # Creating a DataFrame from a dictionary
frame
# Creating a DataFrame from a file.
!cat 'dataset.csv'  # example csv file.
# Reading the dataset.csv file to create the DataFrame
frame2 = pd.read_csv('dataset.csv', header=0)
frame2
# Selecting a column as a Series
frame['state']
# Selecting a row as a Series.
frame.loc[1]
# Inspecting the columns
frame.columns
# Inspecting the indexes.
frame.index
"""
Explanation: DataFrame
End of explanation
"""
ph_sel_name = "None"
data_id = "27d"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:39:46 2017
Duration: 7 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
"""
Explanation: Load the leakage coefficient from disk:
End of explanation
"""
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
"""
Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk:
End of explanation
"""
gamma_fname = 'results/usALEX - gamma factor - all-ph.csv'
gamma = np.loadtxt(gamma_fname)
print('Gamma-factor:', gamma)
"""
Explanation: Load the gamma-factor ($\gamma$) from disk:
End of explanation
"""
d.leakage = leakage
d.dir_ex = dir_ex_aa
d.gamma = gamma
"""
Explanation: Update d with the correction coefficients:
End of explanation
"""
d.ph_times_t[0][:3], d.ph_times_t[0][-3:]#, d.det_t
print('First and last timestamps: {:10,} {:10,}'.format(d.ph_times_t[0][0], d.ph_times_t[0][-1]))
print('Total number of timestamps: {:10,}'.format(d.ph_times_t[0].size))
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the excitation (alternation) period, and the donor and acceptor excitation windows:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
print('D+A photons in D-excitation period: {:10,}'.format(d.D_ex[0].sum()))
print('D+A photons in A-excitation period: {:10,}'.format(d.A_ex[0].sum()))
"""
Explanation: If the plot looks good, we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurement info
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurement duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using an automatic threshold:
End of explanation
"""
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
"""
Explanation: Burst search and selection
End of explanation
"""
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])
plt.xlim(-0.3, 0.5)
print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100))
"""
Explanation: Donor Leakage fit
End of explanation
"""
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
"""
Explanation: Burst sizes
End of explanation
"""
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
"""
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
ds_fret.fit_E_m(weights='size')
"""
Explanation: Weighted mean of $E$ of each burst:
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
"""
Explanation: Gaussian fit (no weights):
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
"""
Explanation: Gaussian fit (using burst size as weights):
End of explanation
"""
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
"""
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
"""
Explanation: The maximum-likelihood fit for a Gaussian population is simply the sample mean and standard deviation:
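As a quick sanity check (a minimal sketch with synthetic numbers, unrelated to the burst data), the sample mean together with the ddof=0 standard deviation does maximize the Gaussian log-likelihood:

```python
import numpy as np

def gauss_loglike(x, mu, sigma):
    # Log-likelihood of the sample x under a Normal(mu, sigma) model.
    return (-0.5 * np.sum(((x - mu) / sigma) ** 2)
            - x.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

rng = np.random.default_rng(0)
x = rng.normal(0.75, 0.08, size=5000)   # synthetic stand-in for the S values

mu_ml, sigma_ml = x.mean(), x.std()     # np.std default ddof=0 is the ML estimate
ll_ml = gauss_loglike(x, mu_ml, sigma_ml)

# Perturbing either ML estimate can only lower the log-likelihood:
assert ll_ml > gauss_loglike(x, mu_ml + 0.01, sigma_ml)
assert ll_ml > gauss_loglike(x, mu_ml, sigma_ml * 1.05)
```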
End of explanation
"""
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
"""
Explanation: Computing the weighted mean and weighted standard deviation, we get:
End of explanation
"""
sample = data_id
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde nt_mean\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-E-corrected-all-ph.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb | apache-2.0 | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex AI Pipelines: Metrics visualization and run comparison using the KFP SDK
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/metrics_viz_run_compare_kfp.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This notebook shows how to use the Kubeflow Pipelines (KFP) SDK to build Vertex AI Pipelines that generate model metrics and metrics visualizations, and how to compare pipeline runs.
Datasets
This tutorial uses two of Scikit-learn's builtin datasets:
The Wine dataset, used to predict the origin of a wine.
The Iris dataset, used to predict the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
Objective
The steps performed include:
Generate ROC curve and confusion matrix visualizations for classification results
Write metrics
Compare metrics across pipeline runs
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex AI SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
! pip3 install $USER_FLAG kfp --upgrade
if os.getenv("IS_TESTING"):
! pip3 install --upgrade matplotlib $USER_FLAG
"""
Explanation: Install the latest GA version of KFP SDK library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
"""
Explanation: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
"""
Explanation: Service Account
If you don't know your service account, you can retrieve it with the gcloud command by executing the cell below.
End of explanation
"""
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
"""
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
PIPELINE_ROOT = "{}/pipeline_root/iris".format(BUCKET_NAME)
"""
Explanation: Vertex AI Pipelines constants
Setup up the following constants for Vertex AI Pipelines:
End of explanation
"""
from kfp.v2 import dsl
from kfp.v2.dsl import ClassificationMetrics, Metrics, Output, component
"""
Explanation: Additional imports.
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
"""
@component(
packages_to_install=["sklearn"],
base_image="python:3.9",
output_component_file="wine_classification_component.yaml",
)
def wine_classification(wmetrics: Output[ClassificationMetrics]):
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import cross_val_predict, train_test_split
X, y = load_wine(return_X_y=True)
# Binary classification problem for label 1.
y = y == 1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rfc = RandomForestClassifier(n_estimators=10, random_state=42)
rfc.fit(X_train, y_train)
y_scores = cross_val_predict(rfc, X_train, y_train, cv=3, method="predict_proba")
fpr, tpr, thresholds = roc_curve(
y_true=y_train, y_score=y_scores[:, 1], pos_label=True
)
wmetrics.log_roc_curve(fpr, tpr, thresholds)
"""
Explanation: Define pipeline components using scikit-learn
In this section, you define some Python function-based components that use scikit-learn to train some classifiers and produce evaluations that can be visualized.
Note the use of the @component() decorator in the definitions below. You can optionally set a list of packages for the component to install; the base image to use (the default is a Python 3.7 image); and the name of a component YAML file to generate, so that the component definition can be shared and reused.
Define wine_classification component
The first component shows how to visualize an ROC curve.
Note that the function definition includes an output called wmetrics, of type Output[ClassificationMetrics]. You can visualize the metrics in the Pipelines user interface in the Cloud Console.
To do this, this example uses the artifact's log_roc_curve() method. This method takes as input arrays with the false positive rates, true positive rates, and thresholds, as generated by the sklearn.metrics.roc_curve function.
When you evaluate the cell below, a task factory function called wine_classification is created, that is used to construct the pipeline definition. In addition, a component YAML file is created, which can be shared and loaded via file or URL to create the same task factory function.
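For intuition, the (FPR, TPR) arrays that roc_curve returns and log_roc_curve consumes can be built by hand; a minimal NumPy-only sketch with toy labels and scores (not the wine model):

```python
import numpy as np

def roc_points(y_true, scores):
    # Sort by descending score; each position corresponds to one threshold cut.
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(y_true)[order]
    tpr = np.cumsum(y) / y.sum()                   # true positive rate
    fpr = np.cumsum(1 - y) / (len(y) - y.sum())    # false positive rate
    return fpr, tpr

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
fpr, tpr = roc_points(y_true, scores)
print(fpr.tolist(), tpr.tolist())  # -> [0.0, 0.5, 0.5, 1.0] [0.5, 0.5, 1.0, 1.0]
```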
End of explanation
"""
@component(packages_to_install=["sklearn"], base_image="python:3.9")
def iris_sgdclassifier(
test_samples_fraction: float,
metricsc: Output[ClassificationMetrics],
):
from sklearn import datasets, model_selection
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix
iris_dataset = datasets.load_iris()
train_x, test_x, train_y, test_y = model_selection.train_test_split(
iris_dataset["data"],
iris_dataset["target"],
test_size=test_samples_fraction,
)
classifier = SGDClassifier()
classifier.fit(train_x, train_y)
predictions = model_selection.cross_val_predict(classifier, train_x, train_y, cv=3)
metricsc.log_confusion_matrix(
["Setosa", "Versicolour", "Virginica"],
confusion_matrix(
train_y, predictions
).tolist(), # .tolist() to convert np array to list.
)
"""
Explanation: Define iris_sgdclassifier component
The second component shows how to visualize a confusion matrix, in this case for a model trained using SGDClassifier.
As with the previous component, you create a metricsc output artifact of type Output[ClassificationMetrics]. Then, use the artifact's log_confusion_matrix method to visualize the confusion matrix results, as generated by the sklearn.metrics.confusion_matrix function.
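The nested-list matrix that log_confusion_matrix expects follows sklearn's convention (rows are true classes, columns are predicted classes); a tiny hand-rolled illustration with toy labels:

```python
import numpy as np

def confusion(y_true, y_pred, n_classes):
    # cm[i, j] counts samples whose true class is i and predicted class is j.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 0]
cm = confusion(y_true, y_pred, n_classes=3)
print(cm.tolist())  # -> [[2, 0, 0], [0, 1, 1], [0, 0, 2]]; the .tolist() form is what gets logged
```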
End of explanation
"""
@component(
packages_to_install=["sklearn"],
base_image="python:3.9",
)
def iris_logregression(
input_seed: int,
split_count: int,
metrics: Output[Metrics],
):
from sklearn import datasets, model_selection
from sklearn.linear_model import LogisticRegression
# Load digits dataset
iris = datasets.load_iris()
# Create feature matrix
X = iris.data
# Create target vector
y = iris.target
# test size
test_size = 0.20
# cross-validation settings
kfold = model_selection.KFold(
n_splits=split_count, random_state=input_seed, shuffle=True
)
# Model instance
model = LogisticRegression()
scoring = "accuracy"
results = model_selection.cross_val_score(model, X, y, cv=kfold, scoring=scoring)
print(f"results: {results}")
# split data
X_train, X_test, y_train, y_test = model_selection.train_test_split(
X, y, test_size=test_size, random_state=input_seed
)
# fit model
model.fit(X_train, y_train)
# accuracy on test set
result = model.score(X_test, y_test)
print(f"result: {result}")
metrics.log_metric("accuracy", (result * 100.0))
"""
Explanation: Define iris_logregression component
The third component also uses the "iris" dataset, but trains a LogisticRegression model. It logs model accuracy in the metrics output artifact.
End of explanation
"""
PIPELINE_NAME = "metrics-pipeline-v2"
@dsl.pipeline(
# Default pipeline root. You can override it when submitting the pipeline.
pipeline_root=PIPELINE_ROOT,
# A name for the pipeline.
name="metrics-pipeline-v2",
)
def pipeline(seed: int, splits: int):
wine_classification_op = wine_classification() # noqa: F841
iris_logregression_op = iris_logregression( # noqa: F841
input_seed=seed, split_count=splits
)
iris_sgdclassifier_op = iris_sgdclassifier(test_samples_fraction=0.3) # noqa: F841
"""
Explanation: Define the pipeline
Next, define a simple pipeline that uses the components that were created in the previous section.
End of explanation
"""
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="tabular classification_pipeline.json".replace(" ", "_"),
)
"""
Explanation: Compile the pipeline
Next, compile the pipeline.
End of explanation
"""
DISPLAY_NAME = "iris_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="tabular classification_pipeline.json".replace(" ", "_"),
job_id=f"tabular classification-v2{TIMESTAMP}-1".replace(" ", ""),
pipeline_root=PIPELINE_ROOT,
parameter_values={"seed": 7, "splits": 10},
)
job.run()
"""
Explanation: Run the pipeline
Next, run the pipeline.
End of explanation
"""
job = aip.PipelineJob(
display_name="iris_" + TIMESTAMP,
template_path="tabular classification_pipeline.json".replace(" ", "_"),
job_id=f"tabular classification-pipeline-v2{TIMESTAMP}-2".replace(" ", ""),
pipeline_root=PIPELINE_ROOT,
parameter_values={"seed": 5, "splits": 7},
)
job.run()
"""
Explanation: Click on the generated link to see your run in the Cloud Console.
In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them.
Comparing pipeline runs in the UI
Next, generate another pipeline run that uses a different seed and split for the iris_logregression step.
Submit the new pipeline run:
End of explanation
"""
pipeline_df = aip.get_pipeline_df(pipeline=PIPELINE_NAME)
print(pipeline_df.head(2))
"""
Explanation: When both pipeline runs have finished, compare their results by navigating to the pipeline runs list in the Cloud Console, selecting both of them, and clicking COMPARE at the top of the Console panel.
Compare the parameters and metrics of the pipelines run from their tracked metadata
Next, you use the Vertex AI SDK for Python to compare the parameters and metrics of the pipeline runs. Wait until the pipeline runs have finished to run the next cell.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.rcParams["figure.figsize"] = [15, 5]
pipeline_df["param.input:seed"] = pipeline_df["param.input:seed"].astype(np.float16)
pipeline_df["param.input:splits"] = pipeline_df["param.input:splits"].astype(np.float16)
ax = pd.plotting.parallel_coordinates(
pipeline_df.reset_index(level=0),
"run_name",
cols=["param.input:seed", "param.input:splits", "metric.accuracy"],
)
ax.set_yscale("symlog")
ax.legend(bbox_to_anchor=(1.0, 0.5))
"""
Explanation: Plot parallel coordinates of parameters and metrics
With the metric and parameters in a dataframe, you can perform further analysis to extract useful information. The following example compares data from each run using a parallel coordinate plot.
End of explanation
"""
try:
df = pd.DataFrame(pipeline_df["metric.confidenceMetrics"][0])
auc = np.trapz(df["recall"], df["falsePositiveRate"])
plt.plot(df["falsePositiveRate"], df["recall"], label="auc=" + str(auc))
plt.legend(loc=4)
plt.show()
except Exception as e:
print(e)
"""
Explanation: Plot ROC curve and calculate AUC number
In addition to basic metrics, you can extract complex metrics and perform further analysis using the get_pipeline_df method.
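The same trapezoidal-rule AUC can be checked on a synthetic ROC curve (the curve shape below is illustrative, not pipeline output):

```python
import numpy as np

# np.trapz was renamed to np.trapezoid in NumPy 2.0; support both.
trapezoid = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

fpr = np.linspace(0.0, 1.0, 101)
tpr = np.sqrt(fpr)             # a toy better-than-chance ROC curve
auc = trapezoid(tpr, fpr)      # integrate TPR over FPR

# The analytic area under sqrt(x) on [0, 1] is 2/3; the estimate lands close.
assert abs(auc - 2.0 / 3.0) < 0.01
print(auc)
```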
End of explanation
"""
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "tabular" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial -- Note: this is auto-generated and not all resources may be applicable for this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
MaxPoint/spylon | examples/02_SpylonExample-WithJar.ipynb | bsd-3-clause | !mkdir helloscala
%%file helloscala/hw.scala
object HelloScala
{
def sayHi(): String = "Hi! from scala"
def sum(x: Int, y: Int) = x + y
}
"""
Explanation: Let's make some simple examples with Scala
We'll make a very simple Scala object, compile it, and use it in the Python process.
End of explanation
"""
%%bash
cd helloscala
sbt package
import spylon
import spylon.spark as sp
c = sp.SparkConfiguration()
c._spark_home = "/path/to/spark-1.6.2-bin-hadoop2.6"
c.master = ["local[4]"]
"""
Explanation: We'll compile things with sbt (the Scala build tool)
This produces a jar that we can load with Spark.
End of explanation
"""
c.jars = ["./helloscala/target/scala-2.10/helloscala_2.10-0.1-SNAPSHOT.jar"]
(sc, sqlContext) = c.sql_context("MyApplicationName")
"""
Explanation: Add the jar we built previously so that we can import our Scala code
End of explanation
"""
from spylon.spark.utils import SparkJVMHelpers
helpers = SparkJVMHelpers(sc)
Hi = helpers.import_scala_object("HelloScala")
print Hi.__doc__
Hi.sayHi()
Hi.sum(4, 6)
"""
Explanation: Let's load our helpers and import the Scala object we just wrote
End of explanation
"""
|
dhuppenkothen/ShiftyLines | demos/ShiftyLines.ipynb | gpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("notebook", font_scale=2.5, rc={"axes.labelsize": 26})
sns.set_style("darkgrid")
plt.rc("font", size=24, family="serif", serif="Computer Sans")
plt.rc("text", usetex=True)
import cPickle as pickle
import numpy as np
import scipy.special
import shutil
from astropy import units
import astropy.constants as const
import emcee
import corner
import dnest4
"""
Explanation: Shifty Lines: Line Detection and Doppler Shift Estimation
We have some X-ray spectra that have absorption and emission lines in them. The original spectrum is seen through a stellar wind, which moves either toward or away from us, Doppler-shifting the absorbed lines. Not all lines will be absorbed; some may be at their original position. There may also be more than one redshift in the same spectrum.
There are various other complications: for example, we rarely have the complete spectrum, but separate intervals of it (due to issues with calibration). In principle, however, the other segments may give valuable information about any other given segment.
Simplest problem: estimating Doppler shift and line presence at the same time
Second-simplest problem: some lines are Doppler shifted, some are not
Full problem: estimating the number of redshifts and the lines belonging to each in the same model
Note: we are going to need to add some kind of OU process or spline to model the background.
End of explanation
"""
datadir = "../data/"
datafile = "8525_nodip_0.dat"
data = np.loadtxt(datadir+datafile)
wavelength_left = data[:,0]
wavelength_right = data[:,1]
wavelength_mid = data[:,0] + (data[:,1]-data[:,0])/2.
counts = data[:,2]
counts_err = data[:,3]
plt.figure(figsize=(16,4))
plt.plot(wavelength_mid, counts)
plt.gca().invert_xaxis()
plt.xlim(wavelength_mid[-1], wavelength_mid[0])
"""
Explanation: Let's first load our first example spectrum:
End of explanation
"""
siXIV = 1864.9995 * units.eV
siXIII = 2005.494 * units.eV
siXII = 1845.02 * units.eV
siXII_err = 0.07 * units.eV
siXI = 1827.51 * units.eV
siXI_err = 0.06 * units.eV
siX = 1808.39 * units.eV
siX_err = 0.05 * units.eV
siIX = 1789.57 * units.eV
siIX_err = 0.07 * units.eV
siVIII = 1772.01 * units.eV
siVIII_err = 0.09 * units.eV
siVII = 1756.68 * units.eV
siVII_err = 0.08 * units.eV
si_all = [siXIV, siXIII, siXII, siXI, siX, siIX, siVIII, siVII]
si_err_all = [0.0*units.eV, 0.0*units.eV, siXII_err, siXI_err,
siX_err, siIX_err, siVIII_err, siVII_err]
"""
Explanation: Next, we're going to need the lines we're interested in. Let's use the Silicon lines. Note that these are all in electron volts. However, the data are in Angstrom, which means I need to convert them.
I'm going to use astropy.units to do that:
End of explanation
"""
si_all_angstrom = [(const.h*const.c/s.to(units.Joule)).to(units.Angstrom)
for s in si_all]
si_err_all_angstrom = [(const.h*const.c/s.to(units.Joule)).to(units.Angstrom)
for s in si_err_all]
si_err_all_angstrom[0] = 0.0*units.Angstrom
si_err_all_angstrom[1] = 0.0*units.Angstrom
for s in si_all_angstrom:
print(s)
"""
Explanation: Now I can do the actual conversion:
End of explanation
"""
plt.figure(figsize=(16,4))
plt.errorbar(wavelength_mid, counts, yerr=counts_err, fmt="o")
plt.gca().invert_xaxis()
plt.xlim(wavelength_mid[-1], wavelength_mid[0])
for s in si_all_angstrom:
plt.vlines(s.value, np.min(counts), np.max(counts), lw=2)
"""
Explanation: Let's plot the lines onto the spectrum:
End of explanation
"""
maxind = wavelength_mid.searchsorted(7.1)
wnew_mid = wavelength_mid[:maxind]
wnew_left = wavelength_left[:maxind]
wnew_right = wavelength_right[:maxind]
cnew = counts[:maxind]
enew = counts_err[:maxind]
plt.figure(figsize=(16,4))
plt.errorbar(wnew_mid, cnew, yerr=enew, fmt="o")
plt.gca().invert_xaxis()
plt.xlim(wnew_mid[-1], wnew_mid[0])
for s in si_all_angstrom:
plt.vlines(s.value, np.min(counts), np.max(counts), lw=2)
"""
Explanation: We currently don't have the positions of the longer-wavelength lines, so we're going to cut the spectrum at 7.1 Angstrom:
End of explanation
"""
# the full spectrum in a format usable by ShiftyLines
np.savetxt(datadir+"8525_nodip_full.txt", np.array([wavelength_left, wavelength_right,
counts, counts_err]).T)
# the cut spectrum with the Si lines only
np.savetxt(datadir+"8525_nodip_cut.txt", np.array([wnew_left, wnew_right, cnew, enew]).T)
## convert from astropy.units to float
si_all_val = np.array([s.value for s in si_all_angstrom])
si_err_all_val = np.array([s.value for s in si_err_all_angstrom])
np.savetxt(datadir+"si_lines.txt", np.array(si_all_val))
"""
Explanation: We are going to save the spectrum as well as the line centers:
End of explanation
"""
# line energies in keV
al_ly_alpha = 1.72855 * 1000 * units.eV
mg_he_gamma = 1.65910 * 1000 * units.eV
mg_he_delta = 1.69606 * 1000 * units.eV
mg_ly_beta = 1.74474 * 1000 * units.eV
mg_ly_gamma = 1.84010 * 1000 * units.eV
mg_ly_delta = 1.88423 * 1000 * units.eV
fe_xxiv = 1.88494 * 1000 * units.eV
"""
Explanation: Adding some more lines for later
We have some more lines that will become important/interesting later, because
they'll be shifted by a different Doppler shift:
End of explanation
"""
other_lines_all = [al_ly_alpha, mg_he_gamma, mg_ly_beta, mg_ly_gamma, mg_ly_delta, fe_xxiv]
other_lines_all_angstrom = [(const.h*const.c/s.to(units.Joule)).to(units.Angstrom)
for s in other_lines_all]
other_lines_all_val = np.array([s.value for s in other_lines_all_angstrom])
for l in other_lines_all_angstrom:
print(str(l.value) + " " + str(l.unit))
"""
Explanation: The lines need to be converted to Angstroms
End of explanation
"""
plt.figure(figsize=(16,4))
plt.errorbar(wavelength_mid, counts, yerr=counts_err, fmt="o")
plt.gca().invert_xaxis()
plt.xlim(wavelength_mid[-1], wavelength_mid[0])
for s in si_all_angstrom:
plt.vlines(s.value, np.min(counts), np.max(counts), lw=2)
for l in other_lines_all_angstrom:
plt.vlines(l.value, np.min(counts), np.max(counts), lw=2)
"""
Explanation: What does the spectrum with the line centroids look like?
End of explanation
"""
# make extended array of lines
lines_extended = np.hstack([si_all_val, other_lines_all_val])
# save the lines
np.savetxt(datadir+"lines_extended.txt", np.array(lines_extended))
"""
Explanation: Let's save the extended list of lines to a file:
End of explanation
"""
np.random.seed(20160216)
"""
Explanation: Simulating Data
In order to test any methods we are creating, we are first going to produce some simulated data.
Set the seed so that the output simulations will always be the same:
End of explanation
"""
def gaussian_cdf(x, w0, width):
return 0.5*(1. + scipy.special.erf((x-w0)/(width*np.sqrt(2.))))
def spectral_line(wleft, wright, w0, amplitude, width):
"""
Use the CDF of a Gaussian distribution to define spectral
lines. We use the CDF to integrate over the energy bins,
rather than taking the mid-bin energy.
Parameters
----------
wleft: array
Left edges of the energy bins
wright: array
Right edges of the energy bins
w0: float
The centroid of the line
amplitude: float
The amplitude of the line
width: float
The width of the line
Returns
-------
line_flux: array
The array of line fluxes integrated over each bin
"""
line_flux = amplitude*(gaussian_cdf(wright, w0, width)-
gaussian_cdf(wleft, w0, width))
return line_flux
"""
Explanation: The spectral lines are modelled as simple Gaussians with an amplitude $A$, a width $\sigma$ and a position $\lambda_0$.
Because energy data comes naturally binned (the original channels detect photons between a certain minimum and maximum energy), we integrate over energy bins to get an accurate estimate of the flux in each energy bin. This also allows the use of uneven binning.
In order to integrate over the bins correctly, I also define the cumulative distribution function (CDF) of a Gaussian below, which is, in fact, the integral of the Gaussian function.
This also means that the amplitude is defined as the integrated area under the Gaussian rather than the height of the Gaussian, but this is closer to the physical quantities astronomers might be interested in (equivalent width) anyway.
End of explanation
"""
w0 = 6.6
amp = 0.01
width = 0.01
line_flux = spectral_line(wnew_left, wnew_right, w0, amp, width)
plt.plot(wnew_mid, line_flux)
"""
Explanation: A simple test:
End of explanation
"""
def fake_spectrum(wleft, wright, line_pos, logbkg=np.log(0.09), err=0.007, dshift=0.0,
sample_logamp=False, sample_logq=False, sample_signs=False,
logamp_hypermean=None, logamp_hypersigma=np.log(0.08), nzero=0,
logq_hypermean=np.log(500), logq_hypersigma=np.log(50)):
"""
Make a fake spectrum with emission/absorption lines.
The background is constant, though that should later become an OU process or
something similar.
NOTE: The amplitude *must not* fall below zero! I'm not entirely sure how to deal
with that yet!
Parameters
----------
wleft: np.ndarray
Left edges of the energy bins
wright: np.ndarray
Right edges of the energy bins
line_pos: np.ndarray
The positions of the line centroids
logbkg: float
The logarithm of the constant background level
err: float
The width of the Gaussian error distribution
dshift: float, default 0.0
The Doppler shift of the spectral lines.
sample_logamp: bool, default False
Sample all log-amplitudes? If False, the value set in
`logamp_hypermean` is used as the common log-amplitude
for all lines
sample_logq: bool, default False
Sample all log-q factors? If False, the value set in
`logq_hypermean` is used as the common log-q factor for all lines
sample_signs: bool, default False
Sample the sign of each line amplitude (i.e. whether the line is an
absorption or emission line)? If False, all lines will be absorption
lines
logamp_hypermean: {float | None}, default None
The mean of the Gaussian prior distribution on the log-amplitude. If None,
it is set to the same value as `logbkg`.
logamp_hypersigma: float, default np.log(0.08)
The width of the Gaussian prior distribution on the log-amplitude
nzero: int, default 0
The number of lines to set to zero amplitude
logq_hypermean: float, default np.log(500)
The mean of the Gaussian prior distribution on the log of the
q-factor, q=(line centroid wavelength)/(line width)
logq_hypersigma: float, default np.log(50)
The width of the Gaussian prior distribution on the log of the
q-factor
Returns
-------
model_flux: np.ndarray
The array of model line fluxes for each bin
fake_flux: np.ndarray
The array of fake fluxes (with errors) for each bin
"""
# number of lines
nlines = line_pos.shape[0]
# if no prior mean for the log-amplitude is given, default to the background level
if logamp_hypermean is None: logamp_hypermean = logbkg
# shift spectral lines
line_pos_shifted = line_pos*(1. + dshift)
# if sampling the amplitudes
if sample_logamp:
# sample line amplitudes
logamps = np.random.normal(logamp_hypermean, logamp_hypersigma, size=nlines)
else:
logamps = np.zeros(nlines)+logamp_hypermean
amps = np.exp(logamps)
if nzero > 0:
zero_ind = np.random.choice(np.arange(nlines), size=nzero)
for z in zero_ind:
amps[int(z)] = 0.0
if sample_signs:
# sample sign of the amplitudes
signs = np.random.choice([-1., 1.], p=[0.5, 0.5], size=nlines)
else:
# all lines are absorption lines
signs = -1.*np.ones(nlines)
# include signs in the amplitudes
amps *= signs
if sample_logq:
# widths of the lines
logq = np.random.normal(logq_hypermean, logq_hypersigma, size=nlines)
else:
logq = np.ones(nlines)*logq_hypermean
widths = line_pos_shifted/np.exp(logq)
model_flux = np.zeros_like(wleft) + np.exp(logbkg)
for si, a, w in zip(line_pos_shifted, amps, widths):
model_flux += spectral_line(wleft, wright, si, a, w)
fake_flux = model_flux + np.random.normal(0.0, err, size=model_flux.shape[0])
pars = {"wavelength_left": wleft, "wavelength_right": wright, "err":err,
"model_flux": model_flux, "fake_flux": fake_flux, "logbkg":logbkg,
"dshift": dshift, "line_pos": line_pos_shifted, "logamp": logamps,
"signs": signs, "logq": logq }
return pars
"""
Explanation: Simulating Spectra
In order to test our algorithm, we'd like to simulate some test data where we know the "ground truth" (i.e. the input parameters that made the spectrum).
Below is an (admittedly complicated) function that will simulate data for various test cases.
We'll address these test cases one by one below and simulate a spectrum for each.
End of explanation
"""
froot = "test_noshift1"
## set amplitude and q
logamp_mean = np.log(0.3)
logq_mean = np.log(600.)
# set Doppler shift
dshift = 0.0
# set background
logbkg = np.log(0.09)
# do not sample amplitudes or q-factors (all are the same!)
sample_logamp = False
sample_logq = False
# all lines are absorption lines
sample_signs = False
# error on the data points (will sample from a Gaussian distribution)
err = 0.007
# do not set any lines to zero!
nzero = 0
pars = fake_spectrum(wnew_left, wnew_right, si_all_val, logbkg=logbkg, err=err,
dshift=dshift, sample_logamp=sample_logamp, sample_logq=sample_logq,
logamp_hypermean=logamp_mean, logq_hypermean=logq_mean,
sample_signs=sample_signs, nzero=nzero)
model_flux = pars["model_flux"]
fake_flux = pars["fake_flux"]
fake_err = np.zeros_like(fake_flux) + pars["err"]
"""
Explanation: Test 1: A spectrum with no redshift and strong lines
As a first simple check, we simulate a spectrum with strong absorption lines at all available line positions. The amplitudes and widths ($q$-values) are the same for all lines. There is no Doppler shift.
We use the wavelength bins from the real data for generating the simulation:
End of explanation
"""
plt.figure(figsize=(14,6))
plt.errorbar(wnew_mid, fake_flux, yerr=fake_err, fmt="o", label="simulated flux", alpha=0.7)
plt.plot(wnew_mid, model_flux, label="simulated model", lw=3)
plt.xlim(wnew_mid[0], wnew_mid[-1])
plt.gca().invert_xaxis()
plt.legend(prop={"size":18})
plt.xlabel("Wavelength [Angstrom]")
plt.ylabel("Normalized Flux")
plt.savefig(datadir+froot+"_lc.png", format="png")
"""
Explanation: Let's plot the spectrum:
End of explanation
"""
# save the whole dictionary in a pickle file
f = open(datadir+froot+"_data.pkl", "wb")
pickle.dump(pars, f)
f.close()
# save the fake data in an ASCII file for input into ShiftyLines
np.savetxt(datadir+froot+".txt", np.array([wnew_left, wnew_right, fake_flux, fake_err]).T)
"""
Explanation: We're going to save the data and the parameters in a pickle file for later use. We'll also save the fake data itself in a way that I can easily input it into ShiftyLines.
End of explanation
"""
def move_dnest_output(froot, dnest_dir="./"):
shutil.move(dnest_dir+"posterior_sample.txt", froot+"_posterior_sample.txt")
shutil.move(dnest_dir+"sample.txt", froot+"_sample.txt")
shutil.move(dnest_dir+"sample_info.txt", froot+"_sample_info.txt")
shutil.move(dnest_dir+"weights.txt", froot+"_weights.txt")
shutil.move(dnest_dir+"levels.txt", froot+"_levels.txt")
return
move_dnest_output("../data/%s"%froot, "../code/")
"""
Explanation: Sampling the Model
If DNest4 and ShiftyLines are installed and compiled, you should now be able to run this model (from the ShiftyLines/code/ directory) with
>>> ./main -d ../data/test_noshift1.txt -t 8
The last option sets the number of threads to run; this will depend on your computer and how many CPUs you can keep busy with this.
In this run, we set the number of levels in the OPTIONS file to $100$, based on running the sampler until the likelihood change between two levels fell below $1$ or so.
Results
Here are the results of the initial simulation. For this run, we set the number of Doppler shifts to exactly $1$ and do not sample over redshifts. This means the model is equivalent to simple template fitting, but it makes a fairly good test.
First, we need to move the DNest4 output files so they don't get overwritten by subsequent runs. We'll write a function for that:
End of explanation
"""
# the pickle file with the data + parameters:
f = open(datadir+froot+"_data.pkl", "rb")
data = pickle.load(f)
f.close()
print("Keys in data dictionary: " + str(data.keys()))
# the posterior samples
sample = np.atleast_2d(np.loadtxt(datadir+froot+"_posterior_sample.txt"))
nsamples = sample.shape[0]
print("We have %i samples from the posterior."%nsamples)
"""
Explanation: We'll need to load the data and the posterior samples for plotting:
End of explanation
"""
# randomly pick some samples from the posterior to plot
s_ind = np.random.choice(np.arange(nsamples, dtype=int), size=20)
# the middle of the wavelength bins for plotting
wmid = data["wavelength_left"] + (data["wavelength_right"] - data["wavelength_left"])/2.
# the error on the data
yerr = np.zeros_like(wmid) + data['err']
plt.figure(figsize=(14,6))
plt.errorbar(wmid, data["fake_flux"], yerr=yerr, fmt="o")
for i in s_ind:
plt.plot(wmid, sample[i,-wmid.shape[0]:], lw=2, alpha=0.7)
plt.xlim(wmid[0], wmid[-1])
plt.gca().invert_xaxis()
plt.xlabel("Wavelength [Angstrom]")
plt.ylabel("Normalized Flux")
plt.tight_layout()
plt.savefig(datadir+froot+"_samples.png", format="png")
"""
Explanation: First plot: a random set of realizations from the posterior overplotted on the data:
End of explanation
"""
fig = plt.figure(figsize=(12,9))
# Plot a histogram and kernel density estimate
ax = fig.add_subplot(111)
sns.distplot(sample[:,0], hist_kws={"histtype":"stepfilled"}, ax=ax)
_, ymax = ax.get_ylim()
ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax.set_xlabel("Normalized Background Flux")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(datadir+froot+"_bkg.png", format="png")
"""
Explanation: What's the posterior distribution over the constant background?
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(14,6))
sns.distplot(sample[:,1], hist_kws={"histtype":"stepfilled"}, ax=ax1)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax1.set_xlabel(r"OU time scale $\tau$")
ax1.set_ylabel(r"$N_{\mathrm{samples}}$")
sns.distplot(sample[:,2], hist_kws={"histtype":"stepfilled"}, ax=ax2)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax2.set_xlabel(r"OU amplitude")
ax2.set_ylabel(r"$N_{\mathrm{samples}}$")
fig.tight_layout()
plt.savefig(datadir+froot+"_ou.png", format="png")
"""
Explanation: What does the OU process modelling the variable background look like?
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(14,6))
sns.distplot(np.log(sample[:,5]), hist_kws={"histtype":"stepfilled"}, ax=ax1)
_, ymax = ax1.get_ylim()
ax1.vlines(data["logamp"], 0, ymax, lw=3, color="black")
ax1.set_xlabel(r"$\mu_{\mathrm{\log{A}}}$")
ax1.set_ylabel(r"$N_{\mathrm{samples}}$")
ax1.set_title("Location parameter of the amplitude prior")
sns.distplot(sample[:,6], hist_kws={"histtype":"stepfilled"}, ax=ax2)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax2.set_xlabel(r"$\sigma_{\mathrm{\log{A}}}$")
ax2.set_ylabel(r"$N_{\mathrm{samples}}$")
ax2.set_title("Scale parameter of the amplitude prior")
fig.tight_layout()
plt.savefig(datadir+froot+"_logamp_prior.png", format="png")
"""
Explanation: Now we can look at the hyperparameters for the log-amplitude and log-q priors:
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(14,6))
sns.distplot(sample[:,7], hist_kws={"histtype":"stepfilled"}, ax=ax1)
_, ymax = ax1.get_ylim()
ax1.vlines(data["logq"], 0, ymax, lw=3, color="black")
ax1.set_xlabel(r"$\mu_{\mathrm{\log{q}}}$")
ax1.set_ylabel(r"$N_{\mathrm{samples}}$")
ax1.set_title(r"Location parameter of the $q$ prior")
sns.distplot(sample[:,8], hist_kws={"histtype":"stepfilled"}, ax=ax2)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax2.set_xlabel(r"$\sigma_{\mathrm{\log{q}}}$")
ax2.set_ylabel(r"$N_{\mathrm{samples}}$")
ax2.set_title(r"Scale parameter of the $q$ prior")
fig.tight_layout()
plt.savefig(datadir+froot+"_logq_prior.png", format="png")
"""
Explanation: Let's do the same for the width:
End of explanation
"""
fig = plt.figure(figsize=(12,9))
# Plot a histogram and kernel density estimate
ax = fig.add_subplot(111)
sns.distplot(sample[:,9], hist_kws={"histtype":"stepfilled"}, ax=ax)
ax.set_xlabel("Threshold parameter")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(datadir+froot+"_pp.png", format="png")
"""
Explanation: The next parameter (pp) in the model is the threshold determining the sign of the amplitude (i.e. whether a line is an absorption or an emission line). The sign is sampled as a random variable between $0$ and $1$; the threshold sets the boundary below which a line will become an absorption line. Above the threshold, the sign will be flipped to return an emission line.
For a spectrum with mostly absorption lines, pp should be quite high, close to $1$. For a spectrum with mostly emission lines, pp should be close to $0$.
End of explanation
"""
fig = plt.figure(figsize=(12,9))
plt.locator_params(axis = 'x', nbins = 6)
# Plot a historgram and kernel density estimate
ax = fig.add_subplot(111)
sns.distplot(sample[:,11], hist_kws={"histtype":"stepfilled"}, ax=ax)
plt.xticks(rotation=45)
_, ymax = ax.get_ylim()
ax.vlines(data["dshift"], 0, ymax, lw=3, color="black")
ax.set_xlabel(r"Doppler shift $d=v/c$")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(datadir+froot+"_dshift.png", format="png")
"""
Explanation: Hmmm, the model doesn't seem to care much about that? Funny!
The Doppler shift is next!
End of explanation
"""
nlines = 8
ncolumns = 3
nrows = int(nlines/ncolumns)+1
fig = plt.figure(figsize=(ncolumns*4, nrows*4))
plt.locator_params(axis = 'x', nbins = 6)
# log-amplitudes
for i in range(8):
ax = plt.subplot(nrows, ncolumns, i+1)
sns.distplot(sample[:,12+i], hist_kws={"histtype":"stepfilled"}, ax=ax)
#ax.hist(sample[:,12+i], histtype="stepfilled", alpha=0.7)
plt.locator_params(axis = 'x', nbins = 6)
xlabels = ax.get_xticklabels()
for l in xlabels:
l.set_rotation(45)
_, ymax = ax.get_ylim()
ax.vlines(data["logamp"][i], 0, ymax, lw=3, color="black")
ax.set_xlabel(r"$\log{A}$")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(datadir+froot+"_logamp.png", format="png")
"""
Explanation: What is the posterior on all the line amplitudes and widths? Let's try overplotting them all:
End of explanation
"""
fig = plt.figure(figsize=(ncolumns*4,nrows*4))
plt.locator_params(axis = 'x', nbins = 6)
# log-amplitudes
for i in range(8):
ax = plt.subplot(nrows, ncolumns, i+1)
sns.distplot(sample[:,20+i], hist_kws={"histtype":"stepfilled"}, ax=ax)
#ax.hist(sample[:,20+i], histtype="stepfilled", alpha=0.7)
plt.locator_params(axis = 'x', nbins = 6)
xlabels = ax.get_xticklabels()
for l in xlabels:
l.set_rotation(45)
_, ymax = ax.get_ylim()
ax.vlines(data["logq"][i], 0, ymax, lw=3, color="black")
ax.set_xlabel(r"$\log{q}$")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(datadir+froot+"_logq.png", format="png")
"""
Explanation: Looks like it samples amplitudes correctly! Let's make the same Figure for log-q:
End of explanation
"""
fig = plt.figure(figsize=(12,9))
plt.locator_params(axis = 'x', nbins = 6)
# Plot a histogram and kernel density estimate
ax = fig.add_subplot(111)
for i in range(8):
sns.distplot(sample[:,28+i], hist_kws={"histtype":"stepfilled", "alpha":0.6}, ax=ax)
plt.xticks(rotation=45)
ax.set_xlabel(r"Emission/absorption line sign")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(datadir+froot+"_signs.png", format="png")
"""
Explanation: Final thing, just to be sure: the signs of the amplitudes!
End of explanation
"""
def plot_samples(data, sample, fout, close=True):
"""
FIRST PLOT: SPECTRUM + SAMPLES FROM THE POSTERIOR
"""
# number of posterior samples
nsamples = sample.shape[0]
# randomly pick some samples from the posterior to plot
s_ind = np.random.choice(np.arange(nsamples, dtype=int), size=20)
# the middle of the wavelength bins for plotting
wmid = data["wavelength_left"] + (data["wavelength_right"] - data["wavelength_left"])/2.
# the error on the data
yerr = np.zeros_like(wmid) + data['err']
plt.figure(figsize=(14,6))
plt.errorbar(wmid, data["fake_flux"], yerr=yerr, fmt="o")
for i in s_ind:
plt.plot(wmid, sample[i,-wmid.shape[0]:], lw=2, alpha=0.7)
plt.xlim(wmid[0], wmid[-1])
plt.gca().invert_xaxis()
plt.xlabel("Wavelength [Angstrom]")
plt.ylabel("Normalized Flux")
plt.tight_layout()
plt.savefig(fout+"_samples.png", format="png")
if close:
plt.close()
return
def plot_bkg(data, sample, fout, close=True):
"""
PLOT THE BACKGROUND POSTERIOR
"""
fig = plt.figure(figsize=(12,9))
# Plot a histogram and kernel density estimate
ax = fig.add_subplot(111)
sns.distplot(sample[:,0], hist_kws={"histtype":"stepfilled"}, ax=ax)
plt.xticks(rotation=45)
_, ymax = ax.get_ylim()
ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax.set_xlabel("Normalized Background Flux")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(fout+"_bkg.png", format="png")
if close:
plt.close()
return
def plot_ou_bkg(sample, fout, close=True):
"""
PLOT THE POSTERIOR FOR THE OU PROCESS
"""
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(14,6))
sns.distplot(sample[:,1], hist_kws={"histtype":"stepfilled"}, ax=ax1)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax1.set_xlabel(r"OU time scale $\tau$")
ax1.set_ylabel(r"$N_{\mathrm{samples}}$")
sns.distplot(sample[:,2], hist_kws={"histtype":"stepfilled"}, ax=ax2)
plt.xticks(rotation=45)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax2.set_xlabel(r"OU amplitude")
ax2.set_ylabel(r"$N_{\mathrm{samples}}$")
fig.tight_layout()
plt.savefig(fout+"_ou.png", format="png")
if close:
plt.close()
return
def plot_logamp_hyper(data, sample, fout, close=True):
"""
PLOT THE POSTERIOR FOR THE LOG-AMP HYPERPARAMETERS
"""
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(14,6))
sns.distplot(np.log(sample[:,5]), hist_kws={"histtype":"stepfilled"}, ax=ax1)
_, ymax = ax1.get_ylim()
ax1.vlines(data["logamp"], 0, ymax, lw=3, color="black")
ax1.set_xlabel(r"$\mu_{\mathrm{\log{A}}}$")
ax1.set_ylabel(r"$N_{\mathrm{samples}}$")
ax1.set_title("Location parameter of the amplitude prior")
sns.distplot(sample[:,6], hist_kws={"histtype":"stepfilled"}, ax=ax2)
plt.xticks(rotation=45)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax2.set_xlabel(r"$\sigma_{\mathrm{\log{A}}}$")
ax2.set_ylabel(r"$N_{\mathrm{samples}}$")
ax2.set_title("Scale parameter of the amplitude prior")
fig.tight_layout()
plt.savefig(fout+"_logamp_prior.png", format="png")
if close:
plt.close()
return
def plot_logq_hyper(data, sample, fout, close=True):
"""
PLOT THE POSTERIOR FOR THE LOG-Q HYPERPARAMETERS
"""
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(14,6))
sns.distplot(sample[:,7], hist_kws={"histtype":"stepfilled"}, ax=ax1)
_, ymax = ax1.get_ylim()
ax1.vlines(data["logq"], 0, ymax, lw=3, color="black")
ax1.set_xlabel(r"$\mu_{\mathrm{\log{q}}}$")
ax1.set_ylabel(r"$N_{\mathrm{samples}}$")
ax1.set_title(r"Location parameter of the $q$ prior")
sns.distplot(sample[:,8], hist_kws={"histtype":"stepfilled"}, ax=ax2)
plt.xticks(rotation=45)
#_, ymax = ax.get_ylim()
#ax.vlines(np.exp(data["logbkg"]), 0, ymax, lw=3, color="black")
ax2.set_xlabel(r"$\sigma_{\mathrm{\log{q}}}$")
ax2.set_ylabel(r"$N_{\mathrm{samples}}$")
ax2.set_title(r"Scale parameter of the $q$ prior")
fig.tight_layout()
plt.savefig(fout+"_logq_prior.png", format="png")
if close:
plt.close()
return
def plot_threshold(sample, fout, close=True):
"""
PLOT THE POSTERIOR FOR THE THRESHOLD PARAMETER
"""
fig = plt.figure(figsize=(12,9))
# Plot a histogram and kernel density estimate
ax = fig.add_subplot(111)
sns.distplot(sample[:,9], hist_kws={"histtype":"stepfilled"}, ax=ax)
plt.xticks(rotation=45)
ax.set_xlabel("Threshold parameter")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(fout+"_pp.png", format="png")
if close:
plt.close()
return
def plot_dshift(data, sample, fout, dshift_ind=0, close=True):
"""
PLOT THE POSTERIOR FOR THE DOPPLER SHIFT
"""
fig = plt.figure(figsize=(12,9))
plt.locator_params(axis = 'x', nbins = 6)
# Plot a histogram and kernel density estimate
ax = fig.add_subplot(111)
sns.distplot(sample[:,11+dshift_ind], hist_kws={"histtype":"stepfilled"}, ax=ax)
plt.xticks(rotation=45)
_, ymax = ax.get_ylim()
ax.vlines(data["dshift"], 0, ymax, lw=3, color="black")
ax.set_xlabel(r"Doppler shift $d=v/c$")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
plt.savefig(fout+"_dshift%i.png"%dshift_ind, format="png")
if close:
plt.close()
return
def plot_logamp(data, sample, fout, ndshift, nlines, ncolumns=3,
dshift_ind=0, close=True):
"""
PLOT THE POSTERIOR FOR THE LINE LOG-AMPLITUDES
"""
nrows = int(nlines/ncolumns)+1
fig = plt.figure(figsize=(ncolumns*4, nrows*4))
plt.locator_params(axis = 'x', nbins = 6)
# index of column where the log-amplitudes start:
start_ind = 11 + ndshift + dshift_ind*nlines
# log-amplitudes
for i in range(nlines):
ax = plt.subplot(nrows, ncolumns, i+1)
# ax.hist(sample[:,start_ind+i], histtype="stepfilled", alpha=0.7)
sns.distplot(sample[:,start_ind+i], hist_kws={"histtype":"stepfilled"}, ax=ax)
plt.locator_params(axis = 'x', nbins = 6)
xlabels = ax.get_xticklabels()
for l in xlabels:
l.set_rotation(45)
_, ymax = ax.get_ylim()
ax.vlines(data["logamp"][i], 0, ymax, lw=3, color="black")
ax.set_xlabel(r"$\log{A}$")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
if dshift_ind == 0:
plt.savefig(fout+"_logamp.png", format="png")
else:
plt.savefig(fout+"_logamp%i.png"%dshift_ind, format="png")
if close:
plt.close()
return
def plot_logq(data, sample, fout, ndshift, nlines, ncolumns=3,
dshift_ind=0, close=True):
"""
PLOT THE POSTERIOR FOR THE LINE LOG-Q
"""
nrows = int(nlines/ncolumns)+1
fig = plt.figure(figsize=(ncolumns*4, nrows*4))
plt.locator_params(axis = 'x', nbins = 6)
# set starting index for the logq values:
start_ind = 11 + ndshift + nlines*(dshift_ind + 1)
# log-amplitudes
for i in range(nlines):
ax = plt.subplot(nrows, ncolumns, i+1)
#ax.hist(sample[:,start_ind+i], histtype="stepfilled", alpha=0.7)
sns.distplot(sample[:,start_ind+i], hist_kws={"histtype":"stepfilled"}, ax=ax)
plt.locator_params(axis = 'x', nbins = 6)
xlabels = ax.get_xticklabels()
for l in xlabels:
l.set_rotation(45)
_, ymax = ax.get_ylim()
ax.vlines(data["logq"][i], 0, ymax, lw=3, color="black")
ax.set_xlabel(r"$\log{q}$")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
if dshift_ind == 0:
plt.savefig(fout+"_logq.png", format="png")
else:
plt.savefig(fout+"_logq%i.png"%dshift_ind, format="png")
if close:
plt.close()
return
def plot_signs(data, sample, fout, ndshift, nlines, ncolumns=3,
dshift_ind=0, close=True):
"""
PLOT THE POSTERIOR FOR THE LINE AMPLITUDE SIGNS
"""
nrows = int(nlines/ncolumns)+1
fig = plt.figure(figsize=(ncolumns*4, nrows*4))
plt.locator_params(axis = 'x', nbins = 6)
# set starting index for the logq values:
start_ind = 11 + ndshift + nlines*(dshift_ind + 2)
# log-amplitudes
for i in range(nlines):
ax = plt.subplot(nrows, ncolumns, i+1)
# ax.hist(sample[:,start_ind+i], histtype="stepfilled", alpha=0.7)
sns.distplot(sample[:,start_ind+i], hist_kws={"histtype":"stepfilled"}, ax=ax)
plt.locator_params(axis = 'x', nbins = 6)
xlabels = ax.get_xticklabels()
for l in xlabels:
l.set_rotation(45)
ax.set_xlabel(r"Emission/absorption line sign")
ax.set_ylabel("$N_{\mathrm{samples}}$")
plt.tight_layout()
if dshift_ind == 0:
plt.savefig(fout+"_signs.png", format="png")
else:
plt.savefig(fout+"_signs%i.png"%dshift_ind, format="png")
if close:
plt.close()
return
def plot_posterior_summary(froot, datadir="../data/", nlines=8,
ndshift=1, ncolumns=3, close=True):
"""
Plot summaries of the posterior distribution. Mostly histograms.
Plots a bunch of Figures to png files.
Parameters
----------
froot: str
The root string of the data file and file with posterior samples to be loaded.
datadir: str
The directory with the data.
Default: "../data/"
nlines: int
The number of lines in the model.
Default: 8
ndshift: int
The number of (possible) Doppler shifts in the model.
Default: 1
ncolumns: int
The number of columns in multi-panel plots. Default: 3
close: bool
Close plots at the end of the plotting? Default: True
"""
# the pickle file with the data + parameters:
f = open(datadir+froot+"_data.pkl", "rb")
data = pickle.load(f)
f.close()
print("Keys in data dictionary: " + str(data.keys()))
# the posterior samples
sample = np.atleast_2d(np.loadtxt(datadir+froot+"_posterior_sample.txt"))
nsamples = sample.shape[0]
print("We have %i samples from the posterior."%nsamples)
# set the directory path and file name for output files:
fout = datadir+froot
# plot the spectrum with some draws from the posterior
plot_samples(data, sample, fout, close=close)
# plot a histogram of the background parameter
plot_bkg(data, sample, fout, close=close)
# plot histograms of the OU parameters
plot_ou_bkg(sample, fout, close=close)
# plot the hyper parameters of the log-amp prior
plot_logamp_hyper(data, sample, fout, close=close)
# plot the hyper parameters of the log-q prior
plot_logq_hyper(data, sample, fout, close=close)
# plot the threshold for the amplitude sign
plot_threshold(sample, fout, close=close)
# for the varying number of Doppler shifts, plot their posterior
if ndshift == 1:
plot_dshift(data, sample, fout, dshift_ind=0, close=close)
plot_logamp(data, sample, fout, ndshift, nlines,
ncolumns=ncolumns, dshift_ind=0, close=close)
plot_logq(data, sample, fout, ndshift, nlines,
ncolumns=ncolumns, dshift_ind=0, close=close)
plot_signs(data, sample, fout, ndshift, nlines,
ncolumns=ncolumns, dshift_ind=0, close=close)
else:
for i in range(ndshift):
plot_dshift(data, sample, fout, dshift_ind=i, close=close)
plot_logamp(data, sample, fout, ndshift, nlines,
ncolumns=ncolumns, dshift_ind=i, close=close)
plot_logq(data, sample, fout, ndshift, nlines,
ncolumns=ncolumns, dshift_ind=i, close=close)
plot_signs(data, sample, fout, ndshift, nlines,
ncolumns=ncolumns, dshift_ind=i, close=close)
return
"""
Explanation: A General Plotting function for the Posterior
We'll make some individual plotting functions that we can then combine to plot useful Figures on the whole posterior sample!
End of explanation
"""
plot_posterior_summary(froot, datadir="../data/", nlines=8, ndshift=1, ncolumns=3,
close=True)
"""
Explanation: Let's run this function on the data we just made individual plots from to see whether it worked.
The model had $8$ lines and $1$ redshift:
End of explanation
"""
def fake_data(wleft, wright, line_pos, input_pars, froot):
"""
Simulate spectra, including a (constant) background and a set of
(Gaussian) lines with given positions.
The model integrates over wavelength/frequency/energy bins, hence
it requires the left and right bin edges rather than the centre of
the bin.
Produces (1) a pickle file with the simulated data and parameters used
to produce the simulation; (2) an ASCII file that can be fed straight into
ShiftyLines; (3) a Figure of the simulated data and the model that
produced it.
Parameters
----------
wleft: numpy.ndarray
The left bin edges of the wavelength/frequency/energy bins
wright: numpy.ndarray
The right bin edges of the wavelength/frequency/energy bins
line_pos: numpy.ndarray
The positions of the spectral lines in the same units as
`wleft` and `wright`; will be translated into centroid
wavelength/frequency/energy of the Gaussian modelling the line
input_pars: dict
A dictionary containing the following keywords (for detailed
descriptions see the docstring for `fake_spectrum`):
logbkg: the log-background
err: the error on the data points
dshift: the Doppler shift (just one!)
sample_logamp: sample the log-amplitudes?
sample_logq: sample the log-q values?
sample_signs: sample the amplitude signs (if no, all lines
are absorption lines!)
logamp_mean: location of normal sampling distribution for log-amplitudes
logq_mean: location of normal sampling distribution for log-q
logamp_sigma: scale of normal sampling distribution for log-amplitudes
logq_sigma: scale of normal sampling distribution for log-q
nzero: Number of lines to set to zero
froot: str
A string describing the directory and file name of the output files
"""
# read out all the parameters
logbkg = input_pars["logbkg"]
err = input_pars["err"]
dshift = input_pars["dshift"]
sample_logamp = input_pars["sample_logamp"]
sample_logq = input_pars["sample_logq"]
sample_signs = input_pars["sample_signs"]
logamp_mean = input_pars["logamp_mean"]
logq_mean = input_pars["logq_mean"]
logamp_sigma = input_pars["logamp_sigma"]
logq_sigma = input_pars["logq_sigma"]
nzero = input_pars["nzero"]
# simulate fake spectrum
pars = fake_spectrum(wleft, wright, line_pos, logbkg=logbkg, err=err, dshift=dshift,
sample_logamp=sample_logamp, sample_logq=sample_logq,
logamp_hypermean=logamp_mean, logq_hypermean=logq_mean,
logamp_hypersigma=logamp_sigma, logq_hypersigma=logq_sigma,
sample_signs=sample_signs, nzero=nzero)
# read out model and fake flux, construct error
model_flux = pars["model_flux"]
fake_flux = pars["fake_flux"]
fake_err = np.zeros_like(fake_flux) + pars["err"]
# get the middle of each bin
wmid = wleft + (wright-wleft)/2.
# plot the resulting data and model to a file
plt.figure(figsize=(14,6))
plt.errorbar(wmid, fake_flux, yerr=fake_err, fmt="o", label="simulated flux", alpha=0.7)
plt.plot(wmid, model_flux, label="simulated model", lw=3)
plt.xlim(wmid[0], wmid[-1])
plt.gca().invert_xaxis()
plt.legend(prop={"size":18})
plt.xlabel("Wavelength [Angstrom]")
plt.ylabel("Normalized Flux")
plt.savefig(froot+"_lc.png", format="png")
# save the whole dictionary in a pickle file
f = open(froot+"_data.pkl", "wb")
pickle.dump(pars, f)
f.close()
# save the fake data in an ASCII file for input into ShiftyLines
np.savetxt(froot+".txt", np.array([wleft, wright, fake_flux, fake_err]).T)
return
"""
Explanation: A general function for simulating data sets
Based on what we just did, we'll write a function that takes parameters as an input and spits out the files we need:
End of explanation
"""
froot = "../data/test_noshift2"
input_pars = {}
# set amplitude and q
input_pars["logamp_mean"] = np.log(0.1)
input_pars["logq_mean"] = np.log(600.)
# set the width of the amplitude and q distribution (not used here)
input_pars["logamp_sigma"] = np.log(0.08)
input_pars["logq_sigma"] = np.log(50)
# set Doppler shift
input_pars["dshift"] = 0.0
# set background
input_pars["logbkg"] = np.log(0.09)
# do not sample amplitudes or q-factors (all are the same!)
input_pars["sample_logamp"] = False
input_pars["sample_logq"] = False
# lines are either absorption or emission lines this time!
input_pars["sample_signs"] = True
# error on the data points (will sample from a Gaussian distribution)
input_pars["err"] = 0.007
# do not set any lines to zero!
input_pars["nzero"] = 0
fake_data(wnew_left, wnew_right, si_all_val, input_pars, froot)
"""
Explanation: A Spectrum with Weak Absorption Lines
The model should still work if the lines are very weak. We will simulate a spectrum with weak lines to test how the strength of the lines will affect the inferences drawn from the model:
End of explanation
"""
move_dnest_output(froot, "../code/")
plot_posterior_summary(froot, datadir="../data/", nlines=8, ndshift=1,
ncolumns=3, close=False)
"""
Explanation: This time, we run DNest4 with $100$ levels.
Results
Let's first move the samples into the right directory and give them the right file names:
End of explanation
"""
froot = "../data/test_noshift3"
input_pars = {}
# set amplitude and q
input_pars["logamp_mean"] = np.log(0.3)
input_pars["logq_mean"] = np.log(600.)
# set the width of the amplitude and q distribution (not used here)
input_pars["logamp_sigma"] = np.log(0.08)
input_pars["logq_sigma"] = np.log(50)
# set Doppler shift
input_pars["dshift"] = 0.0
# set background
input_pars["logbkg"] = np.log(0.09)
# do not sample amplitudes or q-factors (all are the same!)
input_pars["sample_logamp"] = False
input_pars["sample_logq"] = False
# lines are either absorption or emission lines this time!
input_pars["sample_signs"] = True
# error on the data points (will sample from a Gaussian distribution)
input_pars["err"] = 0.007
# do not set any lines to zero!
input_pars["nzero"] = 0
"""
Explanation: It looks like for this spectrum the line amplitudes really are too weak to constrain anything, so the Doppler shift is essentially unconstrained and wanders freely.
I'm not sure I like this behaviour; I might need to ask Brendon about it!
A Spectrum With Emission+Absorption Lines
We'll do the same test as the first, but with varying strong emission and absorption lines:
End of explanation
"""
fake_data(wnew_left, wnew_right, si_all_val, input_pars, froot)
"""
Explanation: Now run the new function:
End of explanation
"""
move_dnest_output(froot, dnest_dir="../code/")
plot_posterior_summary(froot, datadir="../data/", nlines=8, ndshift=1, ncolumns=3,
close=False)
"""
Explanation: Run the model the same as with test_noshift2.txt, but with $150$ levels.
Results
End of explanation
"""
froot = "../data/test_noshift4"
input_pars = {}
# set amplitude and q
input_pars["logamp_mean"] = np.log(0.3)
input_pars["logq_mean"] = np.log(600.)
# set the width of the amplitude and q distributions (only the amplitude width is used here)
input_pars["logamp_sigma"] = 0.5
input_pars["logq_sigma"] = np.log(50)
# set Doppler shift
input_pars["dshift"] = 0.0
# set background
input_pars["logbkg"] = np.log(0.09)
# sample amplitudes, but not q-factors (all are the same!)
input_pars["sample_logamp"] = True
input_pars["sample_logq"] = False
# all lines are absorption lines this time!
input_pars["sample_signs"] = False
# error on the data points (will sample from a Gaussian distribution)
input_pars["err"] = 0.007
# do not set any lines to zero!
input_pars["nzero"] = 0
fake_data(wnew_left, wnew_right, si_all_val, input_pars, froot)
"""
Explanation: A Spectrum With Variable Absorption Lines
In this test, we'll see how the model deals with variable absorption lines:
End of explanation
"""
move_dnest_output(froot, dnest_dir="../code/")
plot_posterior_summary(froot, datadir="../data/", nlines=8, ndshift=1, ncolumns=3,
close=False)
"""
Explanation: Results
I set the number of levels to $200$.
End of explanation
"""
froot = "../data/test_noshift5"
input_pars = {}
# set amplitude and q
input_pars["logamp_mean"] = np.log(0.3)
input_pars["logq_mean"] = np.log(600.)
# set the width of the amplitude and q distribution (not used here)
input_pars["logamp_sigma"] = 0.5
input_pars["logq_sigma"] = np.log(50)
# set Doppler shift
input_pars["dshift"] = 0.0
# set background
input_pars["logbkg"] = np.log(0.09)
# do not sample amplitudes or q-factors (all are the same!)
input_pars["sample_logamp"] = False
input_pars["sample_logq"] = False
# all lines are absorption lines this time!
input_pars["sample_signs"] = False
# error on the data points (will sample from a Gaussian distribution)
input_pars["err"] = 0.007
# Set three lines straight to zero!
input_pars["nzero"] = 3
np.random.seed(20160221)
fake_data(wnew_left, wnew_right, si_all_val, input_pars, froot)
"""
Explanation: A Spectrum with Lines Turned Off
What does the model do if lines are just not there? This is an important question,
so we will now make a spectrum with three lines having amplitudes $A=0$:
End of explanation
"""
plot_posterior_summary(froot, datadir="../data/", nlines=8, ndshift=1, ncolumns=3,
close=False)
"""
Explanation: Results
End of explanation
"""
froot = "../data/test_shift1"
input_pars = {}
# set amplitude and q
input_pars["logamp_mean"] = np.log(0.3)
input_pars["logq_mean"] = np.log(600.)
# set the width of the amplitude and q distribution (not used here)
input_pars["logamp_sigma"] = 0.5
input_pars["logq_sigma"] = np.log(50)
# set Doppler shift
input_pars["dshift"] = 0.01
# set background
input_pars["logbkg"] = np.log(0.09)
# do not sample amplitudes or q-factors (all are the same!)
input_pars["sample_logamp"] = False
input_pars["sample_logq"] = False
# lines are only absorption lines this time!
input_pars["sample_signs"] = False
# error on the data points (will sample from a Gaussian distribution)
input_pars["err"] = 0.007
# do not set any lines to zero!
input_pars["nzero"] = 0
np.random.seed(20160220)
fake_data(wnew_left, wnew_right, si_all_val, input_pars, froot)
"""
Explanation: A Doppler-shifted Spectrum with Absorption lines
We are now going to look at how well the model constrains the Doppler shift.
Again, we build a simple model where all lines have the same amplitude.
End of explanation
"""
move_dnest_output(froot, "../code/")
plot_posterior_summary(froot, datadir="../data/", nlines=8, ndshift=1, ncolumns=3,
close=False)
"""
Explanation: Results
End of explanation
"""
froot = "../data/test_shift2"
input_pars = {}
# set amplitude and q
input_pars["logamp_mean"] = np.log(0.3)
input_pars["logq_mean"] = np.log(600.)
# set the width of the amplitude and q distributions (only the amplitude width is used here)
input_pars["logamp_sigma"] = 0.5
input_pars["logq_sigma"] = np.log(50)
# set Doppler shift
input_pars["dshift"] = 0.01
# set background
input_pars["logbkg"] = np.log(0.09)
# sample amplitudes, but not q-factors (all are the same!)
input_pars["sample_logamp"] = True
input_pars["sample_logq"] = False
# lines are either absorption or emission lines this time!
input_pars["sample_signs"] = True
# error on the data points (will sample from a Gaussian distribution)
input_pars["err"] = 0.007
# do not set any lines to zero!
input_pars["nzero"] = 0
np.random.seed(20160221)
fake_data(wnew_left, wnew_right, si_all_val, input_pars, froot)
"""
Explanation: A Shifted Spectrum with Emission/Absorption Lines with Variable Amplitudes and Signs
More complicated: a spectrum with a single Doppler shift and variable line amplitudes of both emission and absorption lines:
End of explanation
"""
froot = "../data/test_extended_shift1"
input_pars = {}
# set amplitude and q
input_pars["logamp_mean"] = np.log(0.2)
input_pars["logq_mean"] = np.log(600.)
# set the width of the amplitude and q distributions (only the amplitude width is used here)
input_pars["logamp_sigma"] = 0.4
input_pars["logq_sigma"] = np.log(50)
# set Doppler shift
input_pars["dshift"] = 0.01
# set background
input_pars["logbkg"] = np.log(0.09)
# sample amplitudes, but not q-factors (all are the same!)
input_pars["sample_logamp"] = True
input_pars["sample_logq"] = False
# lines are either absorption or emission lines this time!
input_pars["sample_signs"] = True
# error on the data points (will sample from a Gaussian distribution)
input_pars["err"] = 0.007
# do not set any lines to zero!
input_pars["nzero"] = 0
np.random.seed(20162210)
fake_data(wavelength_left, wavelength_right, lines_extended, input_pars, froot)
"""
Explanation: Results
Adding a Noise Process
At this point, I should be adding an OU process to the data generation process to simulate the effect of a variable background in the spectrum.
THIS IS STILL TO BE DONE!
Testing Multiple Doppler Shift Components
For all of the above simulations, we also ought to test how well the model works if we add additional Doppler shift components to sample over.
For this, you'll need to change the line
:dopplershift(3*nlines+1, 1, true, MyConditionalPrior())
in MyModel.cpp to read
:dopplershift(3*nlines+1, 3, false, MyConditionalPrior())
and recompile.
This will sample over up to three different Doppler shifts at the same time. In theory, we expect that the posterior will have a strong mode at either zero Doppler shifts (for the non-Doppler-shifted data) or at $1$ for all types of data set (where for some, the Doppler shift is rather strongly constrained to $0$).
Results
A place holder for the results from this experiment.
The extended Spectrum: More lines!
Let's also make some simulations for the extended spectrum with more lines. This will require the lines_extended.txt file. The file name of the file with the line centroids is currently hard-coded in the main.cpp file.
Change the line
Data::get_instance().load_lines("../data/si_lines.txt");
to
Data::get_instance().load_lines("../data/lines_extended.txt");
and also change the Doppler shifts in MyModel.cpp back to
:dopplershift(3*nlines+1, 1, true, MyConditionalPrior())
before recompiling. We will change the last line again in a little while, but first we'll test the general performance of the model on the extended data set.
An extended spectrum with variable line amplitudes and a single Doppler shift
End of explanation
"""
froot = "test_extended_shift2"
# set amplitude and q
logamp_mean = np.log(0.2)
logq_mean = np.log(600.)
# set the width of the amplitude and q distributions (only the amplitude width is used here)
logamp_sigma = 0.4
logq_sigma = np.log(50)
# set Doppler shift
dshift1 = 0.01
dshift2 = 0.02
# set background
logbkg1 = np.log(0.09)
logbkg2 = -15.
# sample amplitudes, but not q-factors (all are the same!)
sample_logamp = True
sample_logq = False
# lines are either absorption or emission lines this time!
sample_signs = True
# error on the data points (will sample from a Gaussian distribution)
err = 0.007
# do not set any lines to zero!
nzero = 0
np.random.seed(20160201)
pars1 = fake_spectrum(wavelength_left, wavelength_right, si_all_val, logbkg=logbkg1,
dshift=dshift1, err=err, sample_logamp=sample_logamp,
sample_logq=sample_logq, logamp_hypermean=logamp_mean,
logamp_hypersigma=logamp_sigma, logq_hypermean=logq_mean,
logq_hypersigma=logq_sigma, sample_signs=sample_signs, nzero=nzero)
pars2 = fake_spectrum(wavelength_left, wavelength_right, other_lines_all_val, logbkg=logbkg2,
dshift=dshift2, err=err, sample_logamp=sample_logamp,
sample_logq=sample_logq, logamp_hypermean=logamp_mean,
logamp_hypersigma=logamp_sigma, logq_hypermean=logq_mean,
logq_hypersigma=logq_sigma, sample_signs=sample_signs, nzero=nzero)
model_flux_c = pars1["model_flux"] + pars2["model_flux"]
fake_flux_c = model_flux_c + np.random.normal(0.0, err, size=model_flux_c.shape[0])
fake_err_c = np.zeros_like(fake_flux_c) + pars1["err"]
# mid-bin positions of the wavelength grid used in the simulations above
wmid = wavelength_left + (wavelength_right - wavelength_left)/2.
plt.figure(figsize=(14,6))
plt.errorbar(wmid, fake_flux_c, yerr=fake_err_c, fmt="o", label="simulated flux", alpha=0.7)
plt.plot(wmid, model_flux_c, label="simulated model", lw=3)
plt.xlim(wmid[0], wmid[-1])
plt.gca().invert_xaxis()
plt.legend(prop={"size":18})
plt.xlabel("Wavelength [Angstrom]")
plt.ylabel("Normalized Flux")
plt.savefig(datadir+froot+"_lc.png", format="png")
"""
Explanation: A Spectrum with Two Doppler Shifts
What if the lines are shifted with respect to each other?
Let's simulate a spectrum where the silicon lines are Doppler shifted by one value, but the other lines are shifted by a different Doppler shift.
End of explanation
"""
pars = {"wavelength_left": wavelength_left, "wavelength_right": wavelength_right, "err":err,
"model_flux": model_flux_c, "fake_flux": fake_flux_c,
"dshift": [dshift1, dshift2],
"line_pos": np.hstack([pars1["line_pos"], pars2["line_pos"]]),
"logamp": np.hstack([pars1["logamp"], pars2["logamp"]]),
"signs": np.hstack([pars1["signs"], pars2["signs"]]),
"logq": np.hstack([pars1["logq"], pars2["logq"]]) }
# save the whole dictionary in a pickle file
f = open(datadir+froot+"_data.pkl", "wb")
pickle.dump(pars, f)
f.close()
# save the fake data in an ASCII file for input into ShiftyLines
np.savetxt(datadir+froot+".txt", np.array([wavelength_left, wavelength_right,
fake_flux_c, fake_err_c]).T)
"""
Explanation: Let's save all the output files as we did before:
End of explanation
"""
logmin = -1.e16
def gaussian_cdf(x, w0, width):
return 0.5*(1. + scipy.special.erf((x-w0)/(width*np.sqrt(2.))))
def spectral_line(wleft, wright, w0, amplitude, width):
"""
Use the CDF of a Gaussian distribution to define spectral
lines. We use the CDF to integrate over the energy bins,
rather than taking the mid-bin energy.
Parameters
----------
wleft: array
Left edges of the energy bins
wright: array
Right edges of the energy bins
w0: float
The centroid of the line
amplitude: float
The amplitude of the line
width: float
The width of the line
Returns
-------
line_flux: array
The array of line fluxes integrated over each bin
"""
line_flux = amplitude*(gaussian_cdf(wright, w0, width)-
gaussian_cdf(wleft, w0, width))
return line_flux
class LinePosterior(object):
def __init__(self, x_left, x_right, y, yerr, line_pos):
"""
A class to compute the posterior of all the lines in
a spectrum.
Parameters
----------
x_left: np.ndarray
The left edges of the independent variable (wavelength bins)
x_right: np.ndarray
The right edges of the independent variable (wavelength bins)
y: np.ndarray
The dependent variable (flux)
yerr: np.ndarray
The uncertainty on the dependent variable (flux)
line_pos: np.ndarray
The rest-frame positions of the spectral lines
Attributes
----------
x_left: np.ndarray
The left edges of the independent variable (wavelength bins)
x_right: np.ndarray
The right edges of the independent variable (wavelength bins)
x_mid: np.ndarray
The mid-bin positions
y: np.ndarray
The dependent variable (flux)
yerr: np.ndarray
The uncertainty on the dependent variable (flux)
line_pos: np.ndarray
The rest-frame positions of the spectral lines
nlines: int
The number of lines in the model
"""
self.x_left = x_left
self.x_right = x_right
self.x_mid = x_left + (x_right-x_left)/2.
self.y = y
assert np.size(yerr) == 1, "Multiple errors are not supported!"
self.yerr = yerr
self.line_pos = line_pos
self.nlines = len(line_pos)
def logprior(self, t0):
"""
The prior of the model. Currently there are Gaussian priors on the
line width as well as the amplitude and the redshift.
Parameters
----------
t0: iterable
The list or array with the parameters of the model
Contains:
* Doppler shift
* a background parameter
* all line amplitudes
* all line widths
Returns
-------
logp: float
The log-prior of the model
"""
# t0 must have twice the number of lines (amplitude, width for each) plus a
# background plus the redshift
assert len(t0) == 2*self.nlines+2, "Wrong number of parameters!"
# get out the individual parameters
dshift = t0[0] # Doppler shift
log_bkg = t0[1]
amps = t0[2:2+self.nlines]
log_widths = t0[2+self.nlines:]
# prior on the Doppler shift is Gaussian
dshift_hypermean = 0.0
dshift_hypersigma = 0.01
dshift_prior = scipy.stats.norm(dshift_hypermean, dshift_hypersigma)
p_dshift = np.log(dshift_prior.pdf(dshift))
#print("p_dshift: " + str(p_dshift))
# Prior on the background is uniform
logbkg_min = -10.0
logbkg_max = 10.0
p_bkg = (log_bkg >= logbkg_min and log_bkg <= logbkg_max)/(logbkg_max-logbkg_min)
if p_bkg == 0:
p_logbkg = logmin
else:
p_logbkg = 0.0
#print("p_logbkg: " + str(p_logbkg))
# prior on the amplitude is Gaussian
amp_hypermean = 0.0
amp_hypersigma = 0.1
amp_prior = scipy.stats.norm(amp_hypermean, amp_hypersigma)
p_amp = 0.0
for a in amps:
p_amp += np.log(amp_prior.pdf(a))
#print("p_amp: " + str(p_amp))
# prior on the log-widths is uniform:
logwidth_min = -5.
logwidth_max = 3.
p_logwidths = 0.0
for w in log_widths:
#print("w: " + str(w))
p_width = (w >= logwidth_min and w <= logwidth_max)/(logwidth_max-logwidth_min)
if p_width == 0.0:
p_logwidths += logmin
else:
continue
#print("p_logwidths: " + str(p_logwidths))
logp = p_dshift + p_logbkg + p_amp + p_logwidths
return logp
@staticmethod
def _spectral_model(x_left, x_right, dshift, logbkg, line_pos, amplitudes, logwidths):
"""
The spectral model underlying the data. It uses the object
attributes `x_left` and `x_right` to integrate over the bins
correctly.
Parameters
----------
x_left: np.ndarray
The left bin edges of the ordinate (wavelength bins)
x_right: np.ndarray
The right bin edges of the ordinate (wavelength bins)
dshift: float
The Doppler shift
logbkg: float
Logarithm of the constant background level
line_pos: iterable
The rest frame positions of the line centroids in the same
units as `x_left` and `x_right`
amplitudes: iterable
The list of all line amplitudes
logwidths: iterable
The list of the logarithm of all line widths
Returns
-------
flux: np.ndarray
The integrated flux in the bins defined by `x_left` and `x_right`
"""
assert len(line_pos) == len(amplitudes), "Line positions and amplitudes must have same length"
assert len(line_pos) == len(logwidths), "Line positions and widths must have same length"
# shift the line position by the redshift
line_pos_shifted = line_pos + dshift
#print(line_pos_shifted)
# exponentiate logarithmic quantities
bkg = np.exp(logbkg)
widths = np.exp(logwidths)
#print(widths)
# background flux
flux = np.zeros_like(x_left) + bkg
#print(amplitudes)
# add all the line fluxes
for x0, a, w in zip(line_pos_shifted, amplitudes, widths):
flux += spectral_line(x_left, x_right, x0, a, w)
return flux
def loglikelihood(self, t0):
"""
The Gaussian likelihood of the model.
Parameters
----------
t0: iterable
The list or array with the parameters of the model
Contains:
* Doppler shift
* a background parameter
* all line amplitudes
* all line widths
Returns
-------
loglike: float
The log-likelihood of the model
"""
assert len(t0) == 2*self.nlines+2, "Wrong number of parameters!"
# get out the individual parameters
dshift = t0[0] # Doppler shift
logbkg = t0[1]
amplitudes = t0[2:2+self.nlines]
logwidths = t0[2+self.nlines:]
model_flux = self._spectral_model(self.x_left, self.x_right,
dshift, logbkg, self.line_pos,
amplitudes, logwidths)
loglike = -(len(self.y)/2.)*np.log(2.*np.pi*self.yerr**2.) - \
np.sum((self.y-model_flux)**2./(2.*self.yerr**2.))
return loglike
def logposterior(self, t0):
"""
The Gaussian likelihood of the model.
Parameters
----------
t0: iterable
The list or array with the parameters of the model
Contains:
* Doppler shift
* a background parameter
* all line amplitudes
* all line widths
Returns
-------
logpost: float
The log-likelihood of the model
"""
# assert the number of input parameters is correct:
assert len(t0) == 2*self.nlines+2, "Wrong number of parameters!"
logpost = self.logprior(t0) + self.loglikelihood(t0)
#print("prior: " + str(self.logprior(t0)))
#print("likelihood: " + str(self.loglikelihood(t0)))
#print("posterior: " + str(self.logposterior(t0)))
if np.isfinite(logpost):
return logpost
else:
return logmin
def __call__(self, t0):
return self.logposterior(t0)
"""
Explanation: Results
A simple model for Doppler-shifted Spectra
Below we define a basic toy model which samples over all the line amplitudes and widths, as well as a Doppler shift. We'll later extend this to work in DNest4.
End of explanation
"""
data = np.loadtxt(datadir+"test_spectra_noshift_sameamp_samewidth3.txt")
x_left = data[:,0]
x_right = data[:,1]
flux = data[:,3]
f_err = 0.007
lpost = LinePosterior(x_left, x_right, flux, f_err, si_all_val)
plt.errorbar(lpost.x_mid, flux, yerr=f_err, fmt="o", label="Fake data")
plt.plot(lpost.x_mid, data[:,4], label="Underlying model")
d_test = 0.0
bkg_test = np.log(0.09)
amp_test = np.zeros_like(si_all_val) - 0.5
logwidth_test = np.log(np.zeros_like(si_all_val) + 0.01)
p_test = np.hstack([d_test, bkg_test, amp_test, logwidth_test])
lpost.logprior(p_test)
d_test = 0.0
bkg_test = np.log(0.09)
amp_test = np.zeros_like(si_all_val) - 0.5
logwidth_test = np.log(np.zeros_like(si_all_val) + 0.01)
p_test = np.hstack([d_test, bkg_test, amp_test, logwidth_test])
lpost.loglikelihood(p_test)
nwalkers = 1000
niter = 300
burnin = 300
# starting positions for all parameters, from the prior
# the values are taken from the `logprior` method in `LinePosterior`.
# If the hyperparameters of the prior change in there, they'd better
# change here, too!
dshift_start = np.random.normal(0.0, 0.01, size=nwalkers)
logbkg_start = np.random.uniform(-10., 10., size=nwalkers)
amp_start = np.random.normal(0.0, 0.1, size=(lpost.nlines, nwalkers))
logwidth_start = np.random.uniform(-5., 3., size=(lpost.nlines, nwalkers))
p0 = np.vstack([dshift_start, logbkg_start, amp_start, logwidth_start]).T
ndim = p0.shape[1]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lpost)
pos, prob, state = sampler.run_mcmc(p0, burnin)
sampler.reset()
## do the actual MCMC run
pos, prob, state = sampler.run_mcmc(pos, niter, rstate0=state)
mcall = sampler.flatchain
mcall.shape
for i in range(mcall.shape[1]):
pmean = np.mean(mcall[:,i])
pstd = np.std(mcall[:,i])
print("Parameter %i: %.4f +/- %.4f"%(i, pmean, pstd))
mcall.shape
corner.corner(mcall);
lpost = LinePosterior(x_left, x_right, flux, f_err, si_all_val)
plt.errorbar(lpost.x_mid, flux, yerr=f_err, fmt="o", label="Fake data")
plt.plot(lpost.x_mid, data[:,4], label="Underlying model")
randind = np.random.choice(np.arange(mcall.shape[0]), replace=False, size=20)
for ri in randind:
ri = int(ri)
p = mcall[ri]
dshift = p[0]
logbkg = p[1]
line_pos = lpost.line_pos
amplitudes = p[2:2+lpost.nlines]
logwidths = p[2+lpost.nlines:]
plt.plot(lpost.x_mid, lpost._spectral_model(x_left, x_right, dshift, logbkg, line_pos,
amplitudes, logwidths))
"""
Explanation: Now we can use emcee to sample.
We're going to use one of our example data sets: one without a Doppler shift and with identical line amplitudes and widths:
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tutorials/distribute/save_and_load.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import tensorflow_datasets as tfds
import tensorflow as tf
tfds.disable_progress_bar()
"""
Explanation: Save and load a model using a distribution strategy
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/distribute/save_and_load"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/distribute/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> Download notebook</a></td>
</table>
Overview
It is common to save and load a model during training. There are two sets of APIs for saving and loading a Keras model: a high-level API and a low-level API. This tutorial demonstrates how you can use the SavedModel APIs when using tf.distribute.Strategy. To learn about SavedModel and serialization in general, please see the SavedModel guide and the Keras model serialization guide. Let's start with a simple example.
Import the dependencies:
End of explanation
"""
mirrored_strategy = tf.distribute.MirroredStrategy()
def get_data():
datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
return train_dataset, eval_dataset
def get_model():
with mirrored_strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
return model
"""
Explanation: Prepare the data and the model using tf.distribute.Strategy:
End of explanation
"""
model = get_model()
train_dataset, eval_dataset = get_data()
model.fit(train_dataset, epochs=2)
"""
Explanation: Train the model:
End of explanation
"""
keras_model_path = "/tmp/keras_save"
model.save(keras_model_path)
"""
Explanation: Save and load the model
Now that you have a simple model to work with, let's take a look at the saving and loading APIs. Two kinds of APIs are available:
High-level: Keras model.save and tf.keras.models.load_model
Low-level: tf.saved_model.save and tf.saved_model.load
The Keras API
Here is an example of saving and loading a model with the Keras API:
End of explanation
"""
restored_keras_model = tf.keras.models.load_model(keras_model_path)
restored_keras_model.fit(train_dataset, epochs=2)
"""
Explanation: Restore the model without tf.distribute.Strategy:
End of explanation
"""
another_strategy = tf.distribute.OneDeviceStrategy("/cpu:0")
with another_strategy.scope():
restored_keras_model_ds = tf.keras.models.load_model(keras_model_path)
restored_keras_model_ds.fit(train_dataset, epochs=2)
"""
Explanation: After restoring the model, you can continue training it without calling compile() again, since it was already compiled before saving. The model is saved in TensorFlow's standard SavedModel proto format. For more details, please see the guide to the saved_model format.
Now load the model and train it using tf.distribute.Strategy:
End of explanation
"""
model = get_model() # get a fresh model
saved_model_path = "/tmp/tf_save"
tf.saved_model.save(model, saved_model_path)
"""
Explanation: As you can see, loading works as expected with tf.distribute.Strategy. The strategy used here does not have to be the same strategy used before saving.
The tf.saved_model API
Now let's take a look at the lower-level API. Saving the model is similar to the Keras API:
End of explanation
"""
DEFAULT_FUNCTION_KEY = "serving_default"
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
"""
Explanation: Loading can be done with tf.saved_model.load(). However, since it is a lower-level API (and hence has a wider range of use cases), it does not return a Keras model. Instead, it returns an object that contains functions that can be used to do inference. For example:
End of explanation
"""
predict_dataset = eval_dataset.map(lambda image, label: image)
for batch in predict_dataset.take(1):
print(inference_func(batch))
"""
Explanation: The loaded object may contain multiple functions, each associated with a key. "serving_default" is the default key for the inference function with a saved Keras model. To do inference with this function:
End of explanation
"""
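For reference, here is a minimal, self-contained sketch (not part of the tutorial code) of how such signature keys appear on a loaded SavedModel. The `Adder` module and the scratch path are hypothetical stand-ins for the Keras model saved above:

```python
import tensorflow as tf

sketch_path = "/tmp/signature_demo"  # hypothetical scratch location

class Adder(tf.Module):
    """A tiny module standing in for the tutorial's Keras model."""
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return x + 1.0

module = Adder()
# Exporting with an explicit signature stores it under "serving_default".
tf.saved_model.save(module, sketch_path, signatures=module.__call__)

loaded = tf.saved_model.load(sketch_path)
print(list(loaded.signatures.keys()))  # ['serving_default']
```

Inspecting `loaded.signatures` this way is a quick check that the key you pass to `loaded.signatures[...]` actually exists.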
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
inference_func = loaded.signatures[DEFAULT_FUNCTION_KEY]
dist_predict_dataset = another_strategy.experimental_distribute_dataset(
predict_dataset)
# Calling the function in a distributed manner
for batch in dist_predict_dataset:
another_strategy.run(inference_func,args=(batch,))
"""
Explanation: You can also load and do inference in a distributed manner:
End of explanation
"""
import tensorflow_hub as hub
def build_model(loaded):
x = tf.keras.layers.Input(shape=(28, 28, 1), name='input_x')
# Wrap what's loaded to a KerasLayer
keras_layer = hub.KerasLayer(loaded, trainable=True)(x)
model = tf.keras.Model(x, keras_layer)
return model
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
loaded = tf.saved_model.load(saved_model_path)
model = build_model(loaded)
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(train_dataset, epochs=2)
"""
Explanation: Calling the restored function is just a forward pass on the saved model (predict). What if you want to continue training the loaded function? Or embed the loaded function into a bigger model? A common practice is to wrap the loaded object in a Keras layer. Luckily, TF Hub provides hub.KerasLayer for this purpose, shown here:
End of explanation
"""
model = get_model()
# Saving the model using Keras's save() API
model.save(keras_model_path)
another_strategy = tf.distribute.MirroredStrategy()
# Loading the model using lower level API
with another_strategy.scope():
loaded = tf.saved_model.load(keras_model_path)
"""
Explanation: As you can see, hub.KerasLayer wraps the result loaded from tf.saved_model.load() into a Keras layer that can be used to build another model. This is very useful for transfer learning.
Which API should I use?
For saving, if you are working with a Keras model, it is almost always recommended to use the Keras model.save() API. If what you are saving is not a Keras model, then the lower-level API is your only choice.
For loading, your choice depends on what you want to get from the loading API. If you cannot (or do not want to) get a Keras model, then use tf.saved_model.load(); otherwise, use tf.keras.models.load_model(). Note that you can get a Keras model back only if you saved a Keras model.
It is possible to mix and match the APIs. You can save a Keras model with model.save, and load a non-Keras model with the low-level tf.saved_model.load API.
End of explanation
"""
model = get_model()
# Saving the model to a path on localhost.
saved_model_path = "/tmp/tf_save"
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save(saved_model_path, options=save_options)
# Loading the model from a path on localhost.
another_strategy = tf.distribute.MirroredStrategy()
with another_strategy.scope():
load_options = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
loaded = tf.keras.models.load_model(saved_model_path, options=load_options)
"""
Explanation: Saving and loading on a local device
When saving to and loading from a local I/O device while running remotely (for example, when using a Cloud TPU), you must use the experimental_io_device option to set the I/O device to localhost.
End of explanation
"""
class SubclassedModel(tf.keras.Model):
output_name = 'output_layer'
def __init__(self):
super(SubclassedModel, self).__init__()
self._dense_layer = tf.keras.layers.Dense(
5, dtype=tf.dtypes.float32, name=self.output_name)
def call(self, inputs):
return self._dense_layer(inputs)
my_model = SubclassedModel()
# my_model.save(keras_model_path) # ERROR!
tf.saved_model.save(my_model, saved_model_path)
"""
Explanation: Caveats
A special case is when you have Keras models that do not have well-defined inputs. For example, a Sequential model can be created without any input shapes (Sequential([Dense(3), ...])). Subclassed models also do not have well-defined inputs after initialization. In this case, you should stick with the lower-level APIs for both saving and loading; otherwise you will get an error.
To check whether your model has well-defined inputs, just check if model.inputs is None. If it is not None, you are all good. Input shapes are defined automatically when the model is used in .fit, .evaluate, or .predict, or when calling the model (model(inputs)).
Here is an example:
End of explanation
"""
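The model.inputs check described above can be sketched as follows. This is a minimal, self-contained example, not part of the tutorial code; depending on the Keras version, reading model.inputs on an unbuilt model returns None or raises an error, so the sketch guards for both cases:

```python
import tensorflow as tf

# A Sequential model built without an input shape has no well-defined inputs.
model = tf.keras.Sequential([tf.keras.layers.Dense(3)])

try:
    well_defined = model.inputs is not None
except ValueError:
    # Some Keras versions raise instead of returning None for unbuilt models.
    well_defined = False
print(well_defined)  # False: inputs are not defined yet

model(tf.ones((1, 4)))  # calling the model defines the input shape
print(model.inputs is not None)  # True: inputs are now well-defined
```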
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/sandbox-1/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
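# Illustrative sketch (assumption, not part of the generated template):
# INTEGER properties take a bare Python int rather than a quoted string, e.g.:
# DOC.set_value(2)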
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
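# Illustrative sketch (assumption, not part of the generated template):
# BOOLEAN properties take a Python bool, e.g.:
# DOC.set_value(True)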
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Whether the boundary layer turbulence scheme uses a counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
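# Illustrative sketch (assumption; the scheme name below is a hypothetical
# placeholder, not taken from any model): free-text STRING properties take
# any descriptive string, e.g.:
# DOC.set_value("Single-moment bulk precipitation scheme")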
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
wikistat/Apprentissage | HAR/ML-4-IoT-Har.ipynb | gpl-3.0 | import pandas as pd
import numpy as np
import copy
import random
import itertools
%matplotlib inline
import matplotlib.pyplot as plt
import time
"""
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
Machine Learning Tutorial: IoT and Human Activity Recognition (HAR)
Analysis of signals from a smartphone
Using the <a href="http://scikit-learn.org/stable/#"><img src="http://scikit-learn.org/stable/_static/scikit-learn-logo-small.png" style="max-width: 100px; display: inline" alt="Scikit-Learn"/></a> library in <a href="https://www.python.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png" style="max-width: 120px; display: inline" alt="Python"/></a> and <a href="https://keras.io/"><img src="https://s3.amazonaws.com/keras.io/img/keras-logo-2018-large-1200.png" style="max-width: 100px; display: inline" alt="Keras"/></a>
Summary
A human-activity-recognition use case based on signals (gyroscope, accelerometer) recorded by a connected object: an ordinary smartphone. The data are analysed to illustrate the main steps common to data-science projects and applicable to sampled physical signals: visualisation of the raw signals to assess the difficulties raised by this type of data; exploration (principal component analysis, factorial discriminant analysis) of the transformed variables (features), i.e. domain-specific quantities computed from the signals; prediction of the activity from these domain variables with most linear methods, including logistic regression and SVM, as well as nonlinear ones; prediction of the activity from the raw signals with a multilayer-perceptron neural network.
1 Introduction
1.1 General objective
The goal is to recognize the activity of a person carrying a smartphone that records a set of signals from its embedded, and thus connected, gyroscope and accelerometer. A training database was built experimentally: a set of smartphone carriers performed a prescribed activity for a predefined period of time while the signals were recorded. The data come from the Human Activity Recognition (HAR) community; see the article reporting on a 2013 workshop. Real-time activity identification and the associated data analysis are not addressed here.
The publicly available data were acquired, described and partially analysed by Anguita et al. (2013). They are accessible on the University of California Irvine (UCI) machine-learning repository.
The archive contains the raw data: accelerations sampled at 64 Hz over 2 s. The x, y and z accelerations (128 columns each), the same after subtracting natural gravity, and the angular accelerations (gyroscope) in x, y and z — nine files in total. Choosing a power of 2 for the sampling frequency allows Fourier- or wavelet-transform algorithms to run efficiently.
1.2 Outline
A first visualisation and exploration of the raw signals shows (Section 2) that they are difficult to analyse: the activity classes are poorly characterised. The main cause is the lack of synchronisation of the activity onsets; the resulting phase shifts act as noise or artefacts that severely harm discrimination of the activities under a usual Euclidean ($L_2$) distance. This is why Anguita et al. (2013) propose computing a set of classical signal-processing transformations or characteristics (features): variance, correlations, entropy, Fourier decompositions... This yields $p=561$ variables, which are explored in Section 3. Principal component analysis and, above all, factorial discriminant analysis show the good discriminating power of these domain-specific variables derived from expert knowledge of the signals. Section 4 exploits these domain variables and shows that elementary, linear statistical models (logistic regression, discriminant analysis), or a classical support-vector-machine (SVM) algorithm with a simple linear kernel, lead to excellent predictions, unlike sophisticated nonlinear algorithms (random forest).
Nevertheless, constantly computing sophisticated transformations is not a viable solution for the battery of an embedded connected object. The candidate algorithm must produce a solution that can be integrated (hard-wired) into a circuit, as is the case, for example, for chips dedicated to face recognition. This is the purpose of Section 5: demonstrating the feasibility of a solution based on the raw signals alone, implemented with neural networks.
1.3 Software environment
To be executed, this Jupyter notebook requires Python 3, installed for instance via the Anaconda site. The exploration and statistical-learning algorithms used are available in the Scikit-learn library.
<FONT COLOR="Red">Episode 1</font>
2 Preliminary study of the raw signals
2.1 Source
The data are those of the UCI repository. They can be downloaded beforehand by clicking here.
Each record, statistical unit or instance is labelled with one of 6 activities: standing, sitting, lying, walking, walking upstairs, walking downstairs. The dataset is split into a training part and a test part. The test sample is only used to evaluate and compare the prediction quality of the main methods; it is kept as-is so that comparisons with results from the literature remain possible. This is therefore a supervised classification problem (6 classes) with $n=7352$ samples for training and 2947 for testing (10299 in total).
The data come in two layouts of different dimensions:
Multidimensional set: each observation consists of 9 time series; array of dimensions $(n, 128, 9)$.
Unidimensional set: the 9 time series are concatenated into a vector of 128*9 = 1152 variables; array of dimensions $(n, 1152)$.
N.B. The structure of these data is markedly more complex than those usually studied in the Wikistat repository. The code has been organised as a sequence of functions to make it easier to follow. The notebook tool reaches its limits here for writing complex code.
2.2 Importing the main libraries
End of explanation
"""
# Note: the path below must be adapted to your environment
DATADIR_UCI = './UCI HAR Dataset'
# List of file names, to automate reading.
SIGNALS = [ "body_acc_x", "body_acc_y", "body_acc_z", "body_gyro_x", "body_gyro_y", "body_gyro_z", "total_acc_x", "total_acc_y", "total_acc_z"]
# Functions reading the sequence of files and restructuring the data
# into the desired format.
def my_read_csv(filename):
return pd.read_csv(filename, delim_whitespace=True, header=None)
def load_signal(data_dir, subset, signal):
filename = data_dir+'/'+subset+'/Inertial Signals/'+signal+'_'+subset+'.txt'
x = my_read_csv(filename).values
return x
def load_signals(data_dir, subset, flatten = False):
signals_data = []
for signal in SIGNALS:
signals_data.append(load_signal(data_dir, subset, signal))
if flatten :
X = np.hstack(signals_data)
else:
X = np.transpose(signals_data, (1, 2, 0))
return X
def load_y(data_dir, subset, dummies = False):
filename = data_dir+'/'+subset+'/y_'+subset+'.txt'
y = my_read_csv(filename)[0]
if dummies:
Y = pd.get_dummies(y).values
else:
Y = y.values
return Y
"""
Explanation: 2.3 Structuring the data
Define the path to the data, then the utility functions.
End of explanation
"""
#Multidimensional Data
X_train, X_test = load_signals(DATADIR_UCI, 'train'), load_signals(DATADIR_UCI, 'test')
# Flattened Data
X_train_flatten, X_test_flatten = load_signals(DATADIR_UCI, 'train', flatten=True), load_signals(DATADIR_UCI, 'test', flatten=True)
# Label Y
Y_train_label, Y_test_label = load_y(DATADIR_UCI, 'train', dummies = False), load_y(DATADIR_UCI, 'test', dummies = False)
#Dummies Y (For Keras)
Y_train_dummies, Y_test_dummies = load_y(DATADIR_UCI, 'train', dummies = True), load_y(DATADIR_UCI, 'test', dummies = True)
N_train = X_train.shape[0]
N_test = X_test.shape[0]
"""
Explanation: Reading the data
End of explanation
"""
print("Dimension")
print("Données Multidimensionelles, : " + str(X_train.shape))
print("Données Unimensionelles, : " + str(X_train_flatten.shape))
print("Vecteur réponse (scikit-learn) : " + str(Y_train_label.shape))
print("Matrice réponse(Keras) : " + str(Y_train_dummies.shape))
"""
Explanation: Checking the dimensions to make sure the files were read correctly.
End of explanation
"""
# List of colours
CMAP = plt.get_cmap("Accent")
# List of signal types
SIGNALS = ["body_acc x", "body_acc y", "body_acc z",
"body_gyro x", "body_gyro y", "body_gyro z",
"total_acc x", "total_acc y", "total_acc z"]
# Plain-text dictionary of the recorded activities (supervised setting)
ACTIVITY_DIC = {1 : "WALKING",
2 : "WALKING UPSTAIRS",
3 : "WALKING DOWNSTAIRS",
4 : "SITTING",
5 : "STANDING",
6 : "LAYING"}
labels = ACTIVITY_DIC.values()
# Function to plot one signal
def plot_one_axe(X, fig, ax, sample_to_plot, cmap):
for act,Xgb in X.groupby("Activity"):
Xgb_first_values = Xgb.values[:sample_to_plot,:-1]
x = Xgb_first_values[0]
ax.plot(x, linewidth=1, color=cmap(act-1), label = ACTIVITY_DIC[act])  # fixed: label_dic was undefined here
for x in Xgb_first_values[1:]:
ax.plot(x, linewidth=1, color=cmap(act-1))
def plot_one_axe_shuffle(X, fig, ax, sample_to_plot, cmap):
plot_data = []
for act,Xgb in X.groupby("Activity"):
Xgb_first_values = Xgb.values[:sample_to_plot,:-1]
x = Xgb_first_values[0]
ax.plot(x, linewidth=1, color=cmap(act-1), label = ACTIVITY_DIC[act])  # fixed: label_dic was undefined here
for x in Xgb_first_values[1:]:
plot_data.append([x,cmap(act-1)])
random.shuffle(plot_data)
for x,color in plot_data:
ax.plot(x, linewidth=1, color=color)
"""
Explanation: 2.4 Visualisations
This phase is essential to a good understanding of the data and their structure, and hence of the problems that will arise later. The visualisation itself is methodologically elementary, but it requires more elaborate Python skills and therefore some preliminary helper functions.
Useful functions
End of explanation
"""
sample_to_plot = 50
index_per_act = [list(zip(np.repeat(act, sample_to_plot), np.where(Y_train_label==act)[0][:sample_to_plot])) for act in range(1,7)]
index_to_plot = list(itertools.chain.from_iterable(index_per_act))
random.shuffle(index_to_plot)
fig = plt.figure(figsize=(15,10))
for isignal in range(9):
ax = fig.add_subplot(3,3,isignal+1)
for act , i in index_to_plot:
ax.plot(range(128), X_train[i,:,isignal],color=CMAP(act-1), linewidth=1)
ax.set_title(SIGNALS[isignal])
plt.tight_layout()
"""
Explanation: Plotting all the signals
All signals are plotted by type, with the activities overlaid.
End of explanation
"""
sample_to_plot = 10
isignal = 1
index_per_act_dict = dict([(act, np.where(Y_train_label==act)[0][:sample_to_plot]) for act in range(1,7)])
fig = plt.figure(figsize=(15,8), num=SIGNALS[isignal])
for act , index in index_per_act_dict.items():
ax = fig.add_subplot(3,2,act)
for x in X_train[index]:
ax.plot(range(128), x[:,0],color=CMAP(act-1), linewidth=1)
ax.set_title(ACTIVITY_DIC[act])
plt.tight_layout()
"""
Explanation: Remark: note how difficult it is to tell the activities apart within a single signal type.
3.3 Per signal
The single "acceleration in x" signal is plotted, distinguishing the activities.
End of explanation
"""
def plot_pca(X_R, ytrain, fig, ax, nbc, nbc2, label_dic=ACTIVITY_DIC, cmaps = plt.get_cmap("Accent")
):
for i in range(6):
xs = X_R[ytrain==i+1,nbc-1]
ys = X_R[ytrain==i+1, nbc2-1]
label = label_dic[i+1]
color = cmaps(i)
ax.scatter(xs, ys, color=color, alpha=.8, s=10, label=label)
ax.set_xlabel("PC%d : %.2f %%" %(nbc,pca.explained_variance_ratio_[nbc-1]*100), fontsize=15)
ax.set_ylabel("PC%d : %.2f %%" %(nbc2,pca.explained_variance_ratio_[nbc2-1]*100), fontsize=15)
"""
Explanation: Q. Which activity seems easy to distinguish from the others?
Q. Look at the signals of one activity, for example Walking upstairs. Why is a classical Euclidean ($L_2$) metric ineffective for these signals or curves?
This is why it will be important to decompose the signals, in particular in the frequency domain.
3.4 Principal component analysis
It is important to get a precise idea of the structure of the data. A principal component analysis is well suited to this goal.
Remarks
The PCA is not performed on standardised (scaled) data, as this transformation has no effect on the poor quality of the plots.
A PCA based on the usual Euclidean metric merely confirms the difficulties identified above and the lack of discriminating power of the raw data with respect to this metric; this exploration is not pursued further on these data. Another notebook details a factorial discriminant analysis, with the same conclusion.
The function defined below displays a scatter plot in a factorial plane.
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA()
# Choice of the signal
isignal = 4
signal = SIGNALS[isignal]
print("ACP Sur signal : " +signal)
X_c = pca.fit_transform(X_train[:,:,isignal])
"""
Explanation: PCA on one signal type
End of explanation
"""
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(1,2,1)
ax.bar(range(10), pca.explained_variance_ratio_[:10]*100, align='center',
color='grey', ecolor='black')
ax.set_xticks(range(10))
ax.set_ylabel("Variance")
ax.set_title("", fontsize=35)
ax.set_title(u"Percentage of variance explained \n by the first components", fontsize=20)
ax = fig.add_subplot(1,2,2)
box=ax.boxplot(X_c[:,0:10],whis=100)
ax.set_title(u"Distribution of the first components", fontsize=20)
fig.suptitle(u"PCA results on signal: " + signal, fontsize=25)
"""
Explanation: Shares of variance explained by the first principal components.
End of explanation
"""
fig = plt.figure(figsize=(10,10), )
ax = fig.add_subplot(1,1,1)
plot_pca(X_c, Y_train_label,fig ,ax ,1 ,2)
"""
Explanation: Warning: the box plots are strongly distorted by the distributions of the components, which show a very high concentration around 0 and a great many outliers. Hence the parameter whis=100, which lengthens the whiskers.
Q. What are the plots above? What interpretations, or absence of interpretation, can be drawn from them?
Display of the first factorial plane:
End of explanation
"""
pca = PCA()
print("PCA on all the signals")
X_c = pca.fit_transform(X_train_flatten)
fig = plt.figure(figsize=(15,10))
ax = fig.add_subplot(1,2,1)
ax.bar(range(10), pca.explained_variance_ratio_[:10]*100, align='center',
color='grey', ecolor='black')
ax.set_xticks(range(10))
ax.set_ylabel("Variance")
ax.set_title("", fontsize=35)
ax.set_title(u"Percentage of variance explained \n by the first components", fontsize=20)
ax = fig.add_subplot(1,2,2)
box=ax.boxplot(X_c[:,0:10],whis=100)
ax.set_title(u"Distribution of the first components", fontsize=20)
fig.suptitle(u"PCA results", fontsize=25)
fig = plt.figure(figsize=(10,10), )
ax = fig.add_subplot(1,1,1)
plot_pca(X_c, Y_train_label,fig ,ax ,1 ,3)
"""
Explanation: The other factorial planes are hardly more informative.
On all the signals
All the signals are concatenated flat into a single signal.
End of explanation
"""
# Read the training data
# Beware: the file may contain several consecutive spaces as separators
Xtrain=pd.read_csv("X_train.txt",sep=r'\s+',header=None)
Xtrain.head()
# Target variable
ytrain=pd.read_csv("y_train.txt",sep=r'\s+',header=None,names=['y'])
# The DataFrame type is unnecessary, and even a hindrance, for what follows
ytrain=ytrain["y"]
# Read the test data
Xtest=pd.read_csv("X_test.txt",sep=r'\s+',header=None)
Xtest.shape
ytest=pd.read_csv("y_test.txt",sep=r'\s+',header=None,names=['y'])
ytest=ytest["y"]
"""
Explanation: Q. Which activity nevertheless seems easy to identify?
3 Exploring the "domain" features
3.1 The data
The UCI archive also contains two files, train and test, of the 561 "domain" features (variables) computed in the time and frequency domains by transforming the raw signals.
Here is an indicative list of the variables computed on each raw signal or pair of signals:
Name|Signification
-|-
mean | Mean value
std | Standard deviation
mad | Median absolute deviation
max | Largest values in array
min | Smallest value in array
sma | Signal magnitude area
energy | Average sum of the squares
iqr | Interquartile range
entropy | Signal Entropy
arCoeff | Autoregression coefficients
correlation | Correlation coefficient
maxFreqInd | Largest frequency component
meanFreq | Frequency signal weighted average
skewness | Frequency signal Skewness
kurtosis | Frequency signal Kurtosis
energyBand | Energy of a frequency interval
angle | Angle between two vectors
Reading the domain data
End of explanation
"""
def plot_pca(X_R,fig,ax,nbc,nbc2):
for i in range(6):
xs = X_R[ytrain==i+1,nbc-1]
ys = X_R[ytrain==i+1, nbc2-1]
label = ACTIVITY_DIC [i+1]
color = cmaps(i)
ax.scatter(xs, ys, color=color, alpha=.8, s=1, label=label)
ax.set_xlabel("PC%d : %.2f %%" %(nbc,pca.explained_variance_ratio_[nbc-1]*100), fontsize=10)
ax.set_ylabel("PC%d : %.2f %%" %(nbc2,pca.explained_variance_ratio_[nbc2-1]*100), fontsize=10)
"""
Explanation: 3.2 Principal component analysis
Plotting function for the factorial planes.
End of explanation
"""
pca = PCA()
X_c = pca.fit_transform(Xtrain)
"""
Explanation: Computation of the matrix of principal components. This is also a change of basis (a transformation): from the canonical basis to the basis of eigenvectors.
End of explanation
"""
plt.plot(pca.explained_variance_ratio_[0:10])
plt.show()
"""
Explanation: Eigenvalues, i.e. variances of the principal components
Display of the decay of the eigenvalues, the variances of the principal variables or components.
End of explanation
"""
plt.boxplot(X_c[:,0:10])
plt.show()
"""
Explanation: A more explicit plot describes the distributions of these components with box plots; only the first ones are displayed.
End of explanation
"""
cmaps = plt.get_cmap("Accent")
fig = plt.figure(figsize= (20,20))
count = 0
for nbc, nbc2,count in [(1,2,1), (2,3,2), (3,4,3), (1,3,4), (2,4,5), (1,4,7)] :
ax = fig.add_subplot(3,3,count)
plot_pca(X_c, fig,ax,nbc,nbc2)
plt.legend(loc='upper right', bbox_to_anchor=(1.8, 0.5), markerscale=10)
plt.show()
"""
Explanation: Comment on the decay of the variances and the possible choice of a dimension, i.e. the number of components to retain out of the 561.
Display of the individuals, or "activities", in the PCA
Projection onto the main factorial planes.
End of explanation
"""
with open('features.txt', 'r') as content_file:
featuresNames = content_file.read()
columnsNames = list(map(lambda x : x.split(" ")[1],featuresNames.split("\n")[:-1]))
"""
Explanation: Q. Comment on how the first axis separates the two types of situation.
Q. What can be said about the shape of the point clouds?
Q. What can be said about how well, or poorly, the classes are separated?
Display of the variables in the PCA
Reading the variable labels and building a list. Because of the high dimension (561), the plots are hardly usable.
End of explanation
"""
# coordinates of the variables
coord1=pca.components_[0]*np.sqrt(pca.explained_variance_[0])
coord2=pca.components_[1]*np.sqrt(pca.explained_variance_[1])
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(1, 1, 1)
for i, j in zip(coord1,coord2, ):
plt.text(i, j, "*")
plt.arrow(0,0,i,j,color='r')
plt.axis((-1.2,1.2,-1.2,1.2))
# circle
c=plt.Circle((0,0), radius=1, color='b', fill=False)
ax.add_patch(c)
plt.show()
"""
Explanation: The variable plot is unreadable when the labels are spelled out. Only a * is displayed for each variable.
End of explanation
"""
print(np.array(columnsNames)[abs(coord1)>.6])
"""
Explanation: Identifying the variables that contribute most to the first axis. It is no clearer! In the end, only the display of the individuals provides some understanding.
End of explanation
"""
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
method = LinearDiscriminantAnalysis()
lda=method.fit(Xtrain,ytrain)
X_r2=lda.transform(Xtrain)
"""
Explanation: 3.3 Factorial Discriminant Analysis (FDA)
Principle
PCA does not take into account the presence of the qualitative variable to be modelled, unlike factorial discriminant analysis (FDA), which is adapted to this "supervised" context since the activity is known on a training sample. FDA is a PCA of the class barycentres, equipping the space of individuals with a specific metric known as the Mahalanobis metric, defined by the inverse of the within-class covariance matrix. The objective is then to visualize the ability of the variables to discriminate the classes.
The scikit-learn library does not provide a dedicated factorial discriminant analysis function, but the coordinates of the individuals in the basis of the discriminant vectors are obtained as a by-product of predictive linear discriminant analysis. The latter will be used with a predictive purpose in a second step (another notebook).
The results of scikit-learn's LinearDiscriminantAnalysis function are identical to those of R's lda function. It is therefore used in exactly the same way.
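As a side illustration (not part of the original notebook), here is a minimal pure-Python sketch of the Mahalanobis distance mentioned above, for 2-D points and an assumed within-class covariance matrix; with the identity covariance it reduces to the ordinary Euclidean distance.

```python
# Illustrative sketch: Mahalanobis distance for a 2-D point, with a
# hand-inverted 2x2 covariance matrix (cov is an assumed example input).

def mahalanobis_2d(x, mu, cov):
    """Distance between point x and centre mu under 2x2 covariance cov."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # inverse of cov
    dx = (x[0] - mu[0], x[1] - mu[1])
    # quadratic form dx^T * inv * dx
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return q ** 0.5

# With the identity covariance, Mahalanobis reduces to Euclidean distance:
print(mahalanobis_2d((3.0, 4.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))))  # 5.0
```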
End of explanation
"""
fig = plt.figure(figsize= (20,20))
count = 0
for nbc, nbc2,count in [(1,2,1), (2,3,2), (3,4,3), (1,3,4), (2,4,5), (1,4,7)] :
ax = fig.add_subplot(3,3,count)
plot_pca(X_r2, fig,ax,nbc,nbc2)
plt.legend(loc='upper right', bbox_to_anchor=(1.8, 0.5), markerscale=10)
plt.show()
"""
Explanation: Q. What does the warning mean? What processing would be needed in order to use predictive discriminant analysis for modelling or learning?
Display of the individuals in the FDA
End of explanation
"""
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
tps1 = time.perf_counter()
X = StandardScaler().fit_transform(Xtrain)
km=KMeans(n_clusters=6)
km.fit(X)  # fit on the centred and scaled data
tps2 = time.perf_counter()
print("K-means running time:", (tps2 - tps1))
from sklearn.metrics import confusion_matrix
pd.DataFrame(confusion_matrix(ytrain, km.labels_)[1:7,0:6].T, columns=labels)
"""
Explanation: Q. What can be said about the separation of the classes? Are they all pairwise separable?
Q. What can be said about the shape of the clouds, in particular in the first plane?
As with PCA, the overly complex display of the variables would not add anything.
3.4 Unsupervised classification
This section is not strictly useful since the classes are known. Nevertheless, a general study of signals recording human activities not identified a priori would require this unsupervised classification, or clustering, phase. Here it simply illustrates the behaviour of a classical clustering algorithm. The confusion matrix of the obtained clusters against the known classes makes it possible to assess its performance.
$k$-means
Beware: the variables must be centred and scaled before running an unsupervised classification algorithm.
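Besides the confusion matrix, a common single-number summary of how well clusters match known labels is "purity". The following pure-Python sketch (added for illustration; it is not part of the notebook) makes each cluster vote for its majority class.

```python
# Hedged sketch: purity of a clustering against known labels.
from collections import Counter

def purity(true_labels, cluster_ids):
    clusters = {}
    for y, c in zip(true_labels, cluster_ids):
        clusters.setdefault(c, []).append(y)
    # count, in each cluster, the members of its majority class
    majority = sum(Counter(members).most_common(1)[0][1]
                   for members in clusters.values())
    return majority / len(true_labels)

# toy example: cluster 0 contains one point from the wrong class
print(purity([1, 1, 2, 2, 2], [0, 0, 1, 1, 0]))  # 0.8
```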
End of explanation
"""
from sklearn.linear_model import LogisticRegression
ts = time.time()
method = LogisticRegression(solver='liblinear',multi_class='auto')
method.fit(Xtrain,ytrain)
score = method.score(Xtest, ytest)
ypred = method.predict(Xtest)
te = time.time()
"""
Explanation: Q. What can be said about the effectiveness of an unsupervised approach for categorizing the activities?
<FONT COLOR="Red">Episode 2</font>
4 Predicting the activity from the "domain" features
Several methods are tested in turn in this notebook: SVM, predictive discriminant analysis, $k$ nearest neighbours, random forests, neural networks... We start with logistic regression to predict the behaviour.
4.1 Logistic regression
Principle
An old statistical method, but ultimately effective on these data. Logistic regression is suited to predicting a binary variable. In the multiclass case, the scikit-learn logistic function estimates one model per class by default: one class against the others.
The probability that an individual belongs to a class is modelled with a linear combination of the explanatory variables. To transform a real-valued linear combination into a probability in the interval $[0, 1]$, a sigmoid-shaped function is applied. This gives: $$P(y_i=1)=\frac{e^{Xb}}{1+e^{Xb}}$$ or, equivalently, a linear decomposition of the logit, or log odds ratio, of $P(y_i=1)$: $$\log\frac{P(y_i=1)}{1-P(y_i=1)}=Xb.$$
Fitting the model without optimization
The model is fitted without trying to fine-tune the values of some parameters (penalization). This will be done in a second step. The choice of solver is made explicit because the default (lbfgs) seems to converge more slowly. A systematic comparison of the different options (liblinear, lbfgs, saga, sag, newton-cg) would be welcome, together with the model choice when the number of classes exceeds 2: multinomial loss or one binomial model per class.
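The sigmoid/logit pair from the formula above can be checked numerically in a couple of lines (an illustrative aside, not from the notebook): the sigmoid maps any real score to a probability, and the logit is its exact inverse.

```python
# Minimal numeric check of the sigmoid / logit relationship.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))

print(sigmoid(0.0))                     # 0.5 : a zero score gives even odds
print(round(logit(sigmoid(2.0)), 10))   # 2.0 : logit inverts the sigmoid
```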
End of explanation
"""
from sklearn.metrics import confusion_matrix, accuracy_score
print("Score : %f, time running : %d secondes" %(score, te-ts))
pd.DataFrame(confusion_matrix(ytest, ypred), index = labels, columns=labels)
"""
Explanation: Predicting the activity of the test sample
Once the model has been fitted, the prediction error is evaluated, without optimistic bias, on another sample, the test sample, which did not take part in fitting the model.
End of explanation
"""
# Optimizing the penalization parameter
# grid of values
from sklearn.model_selection import GridSearchCV
ts = time.time()
param=[{"C":[0.5,1,5,10,12,15,30]}]
logit = GridSearchCV(LogisticRegression(penalty="l1",solver='liblinear',
multi_class='auto'), param,cv=10,n_jobs=-1)
logitOpt=logit.fit(Xtrain, ytrain)
# optimal parameter
logitOpt.best_params_["C"]
te = time.time()
print("Time: %d seconds" %(te-ts))
print("Best score = %f, best parameter = %s" % (logitOpt.best_score_,logitOpt.best_params_))
yChap = logitOpt.predict(Xtest)
# confusion matrix
logitOpt.score(Xtest, ytest)
pd.DataFrame(confusion_matrix(ytest, yChap), index = labels, columns=labels)
"""
Explanation: Q. Which classes remain difficult to discriminate?
Q. Comment on the quality of the results obtained. Are they consistent with the exploratory analysis?
Optimizing the model with a Lasso penalty
Beware, the run is a bit long... this optimization can be skipped on a first reading.
End of explanation
"""
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
ts = time.time()
method = LinearDiscriminantAnalysis()
method.fit(Xtrain,ytrain)
score = method.score(Xtest, ytest)
ypred = method.predict(Xtest)
te = time.time()
score
print("Score : %f, time running : %d secondes" %(score, te-ts))
pd.DataFrame(confusion_matrix(ytest, ypred), index = labels, columns=labels)
"""
Explanation: Q. Is the improvement really significant given the computation time?
Q. Determine the variables selected by the LASSO method.
4.2 Linear discriminant analysis
Q. What can be said about optimizing this method? An option for this is offered in an R library but is not available in Python.
Q. Quadratic discriminant analysis raises some issues. Why?
End of explanation
"""
from sklearn.neighbors import KNeighborsClassifier
ts = time.time()
method = KNeighborsClassifier(n_jobs=-1)
method.fit(Xtrain,ytrain)
score = method.score(Xtest, ytest)
ypred = method.predict(Xtest)
te = time.time()
t_total = te-ts
print("Score : %f, time running : %d secondes" %(score, te-ts))
pd.DataFrame(confusion_matrix(ytest, ypred), index = labels, columns=labels)
"""
Explanation: 4.3 K nearest neighbours
This method can be seen as a special case of discriminant analysis with a local estimate of the conditional density functions.
Q. How many neighbours are used by default? Optimize this parameter.
End of explanation
"""
from sklearn.svm import LinearSVC
ts = time.time()
method = LinearSVC(max_iter=20000)
method.fit(Xtrain,ytrain)
score = method.score(Xtest, ytest)
ypred = method.predict(Xtest)
te = time.time()
t_total = te-ts
print("Score : %f, time running : %d secondes" %(score, te-ts))
pd.DataFrame(confusion_matrix(ytest, ypred), index = labels, columns=labels)
"""
Explanation: 4.4 Linear SVM
The maximum number of iterations was increased very substantially (1000 by default) without improving the performance. The multiclass case is handled by fitting one class against the others, hence 6 models.
Q. Use the default parameter, then vary the penalization parameter.
End of explanation
"""
from sklearn.svm import SVC
ts = time.time()
method = SVC(gamma='auto')
method.fit(Xtrain,ytrain)
score = method.score(Xtest, ytest)
ypred = method.predict(Xtest)
te = time.time()
print("Score : %f, time running : %d secondes" %(score, te-ts))
pd.DataFrame(confusion_matrix(ytest, ypred), index = labels, columns=labels)
"""
Explanation: 4.5 SVM with a Gaussian kernel
Fit with the default values, then optimize the parameters.
Q. Which parameters need to be optimized?
End of explanation
"""
ts = time.time()
param=[{"C":[4,5,6],"gamma":[.01,.02,.03]}]
svm= GridSearchCV(SVC(),param,cv=10,n_jobs=-1)
svmOpt=svm.fit(Xtrain, ytrain)
te = time.time()
te-ts
# optimal parameters
print("Best score = %f, best parameters = %s" % (svmOpt.best_score_,svmOpt.best_params_))
"""
Explanation: Q. Which procedure is executed below, and for what purpose?
Warning: the run is a bit long and can be skipped on a first reading.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
ts = time.time()
method = RandomForestClassifier(n_estimators=200,n_jobs=-1)
method.fit(Xtrain,ytrain)
score = method.score(Xtest, ytest)
ypred = method.predict(Xtest)
te = time.time()
t_total = te-ts
print("Score : %f, time running : %d secondes" %(score, te-ts))
pd.DataFrame(confusion_matrix(ytest, ypred), index = labels, columns=labels)
"""
Explanation: Q. Compare the two SVM approaches (linear and radial): computation time and performance.
4.6 Random forest
Q. Which parameter should be optimized?
End of explanation
"""
import tensorflow as tf
import tensorflow.keras.models as km
import tensorflow.keras.layers as kl
ACTIVITIES = {
0: 'WALKING',
1: 'WALKING_UPSTAIRS',
2: 'WALKING_DOWNSTAIRS',
3: 'SITTING',
4: 'STANDING',
5: 'LAYING',
}
def my_confusion_matrix(Y_true, Y_pred):
Y_true = pd.Series([ACTIVITIES[y] for y in np.argmax(Y_true, axis=1)])
Y_pred = pd.Series([ACTIVITIES[y] for y in np.argmax(Y_pred, axis=1)])
return pd.crosstab(Y_true, Y_pred, rownames=['True'], colnames=['Pred'])
"""
Explanation: 4.8 Combining models
The shapes of the class clouds observed in the first plane of the principal component analysis show that the covariance structure is not identical in each class. This remark would suggest turning to quadratic discriminant analysis, but the latter chokes on estimating the six covariance matrices and their inverses. Nevertheless, two groups appear to stand out: the active classes (walking, going up or down stairs) on one hand and the passive classes (lying, sitting, standing) on the other, and within each group the variances are fairly similar.
This situation suggests building a two-stage, or hierarchical, decision:
1. A logistic regression separating passive vs. active activities,
2. A model specific to each of the previous groups, for example SVMs with a Gaussian kernel.
Such a hierarchical construction of models reaches an accuracy above 97%.
Exercise: Program such an approach, using Python's capabilities to build a pipeline.
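The two-stage routing logic of such a hierarchical classifier can be sketched as follows. This is an illustration only: the lambda "models" below are toy stand-ins for a fitted logistic regression (stage 1) and two fitted SVMs (stage 2), and the "energy" feature is invented for the example.

```python
# Sketch of a hierarchical decision: stage 1 routes each sample to one of
# two specialised stage-2 models (all models here are toy stand-ins).

def hierarchical_predict(x, is_active, predict_active, predict_passive):
    """Stage 1 (is_active) picks which stage-2 model classifies x."""
    if is_active(x):
        return predict_active(x)
    return predict_passive(x)

pred = hierarchical_predict(
    {"energy": 0.9},                               # fake "motion energy"
    is_active=lambda x: x["energy"] > 0.5,         # stand-in logistic model
    predict_active=lambda x: "WALKING",            # stand-in active-group SVM
    predict_passive=lambda x: "STANDING",          # stand-in passive-group SVM
)
print(pred)  # WALKING
```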
<FONT COLOR="Red">Episode 3</font>
5 Predicting the activity from the raw signals
5.1 Introduction
As explained in the introduction, computing the many transformations of the data consumes far too much of a connected device's battery. This section proposes to use only the raw signals to train an algorithm and, among the possible algorithms, only neural networks that can be "wired" into a circuit are considered. Here we use a multilayer perceptron network.
5.2 Perceptron with one hidden layer
Libraries
End of explanation
"""
epochs=20
batch_size=32
n_hidden = 50
timesteps = len(X_train[0])
input_dim = len(X_train[0][0])
n_classes = 6
model_base_mlp =km.Sequential()
model_base_mlp.add(kl.Dense(n_hidden, input_shape=(timesteps, input_dim), activation = "relu"))
model_base_mlp.add(kl.Reshape((timesteps*n_hidden,) , input_shape= (timesteps, n_hidden) ))
model_base_mlp.add(kl.Dense(n_classes, activation='softmax'))
model_base_mlp.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model_base_mlp.summary()
t_start = time.time()
model_base_mlp.fit(X_train, Y_train_dummies, batch_size=batch_size, validation_data=(X_test, Y_test_dummies), epochs=epochs)
t_end = time.time()
t_learning = t_end-t_start
score = model_base_mlp.evaluate(X_test, Y_test_dummies)[1]
print("\nScore With Simple MLP on Multidimensional Inertial Signals = %.2f, Learning time = %.2f secondes" %(score*100, t_learning) )
metadata_mlp = {"time_learning" : t_learning, "score" : score}
base_mlp_prediction = model_base_mlp.predict(X_test)
my_confusion_matrix(Y_test_dummies, base_mlp_prediction)
"""
Explanation: Network definition
One hidden layer, a reshaping layer, then a 6-class output layer. The number of neurons in the hidden layer (50) was optimized separately. The number of epochs and the batch size should be optimized, especially when a GPU card is used.
Note the number of parameters to estimate.
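That parameter count can be reproduced by hand. The sketch below (added for illustration) assumes the usual UCI HAR shapes of 128 timesteps and 9 raw signals; adjust the numbers if your data differ. A Keras Dense layer applied to a (timesteps, input_dim) input shares its weights across timesteps.

```python
# Back-of-the-envelope parameter count for the MLP above
# (128 timesteps and 9 signals are assumed UCI HAR values).
timesteps, input_dim, n_hidden, n_classes = 128, 9, 50, 6

# Hidden Dense layer: weights shared across timesteps, plus biases.
hidden_params = input_dim * n_hidden + n_hidden
# Output Dense layer acts on the flattened (timesteps * n_hidden) vector.
output_params = timesteps * n_hidden * n_classes + n_classes

print(hidden_params)                  # 500
print(output_params)                  # 38406
print(hidden_params + output_params)  # 38906
```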
End of explanation
"""
|
totalgood/talks | notebooks/HyperParamOpts_TechFestNW_interactive.ipynb | mit | import numpy as np
from time import time
from operator import itemgetter
from scipy.stats import randint as sp_randint
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
"""
Explanation: TechFestNW Interactive
Hyperparameter Optimization for Fun and Profit
Thunder Shiviah@TotalGood
github: ThunderShiviah
What is hyperparameter optimization (HPO)?
First things first
What are hyperparameters?
One answer:
A machine learning model's 'knobs'
<img src='images/Vienna_-_Telefunken_RA_463-2_analog_computer_-_0156.jpg'>
Photograph by Jorge Royan, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported
Example: gradient descent - step size
<img src='images/Gradient_ascent_(contour).png'>
Created by Joris Gillis using Maple 10, distributed as public domain.
<img src='images/Gradient_ascent_(surface).png'>
Created by Joris Gillis using Maple 10, distributed as public domain.
Difference between parameters and hyperparameters
The parameters are what the machine learning algorithm 'learns'. Hyperparameters are set before the ML model runs.
In some cases, hyperparameters determine the complexity or flexibility of the model (e.g. regularization parameters) which are used to prevent overfitting (high variance).
Decision trees
Desired depth and number of leaves in the tree
Support Vector Machines
Misclassification penalty term
Kernelized SVM
kernel parameters
In other cases, hyperparameters affect how the algorithm learns:
Stochastic gradient descent optimization
learning rate
Convergence thresholds
Random forests and boosted decision trees
number of total trees
What is an HPO?
[A hyperparameter optimizer is] a functional from data to classifier (taking classification
problems as an example)...
-- James Bergstra, Algorithms for Hyper-Parameter Optimization [0]
Let's make an HPO!
Grid Search
The good: It's parallelizable
The bad: It's the definition of brute-force
The ugly: Grid Search might not even find optimal values
<img src='images/gridsearch.jpeg'>
Illustration from Random Search for Hyper-Parameter Optimization by Bergstra and Bengio.[1]
A solution?
Random Search
Bergstra and Bengio's paper, Random Search for Hyper-Parameter Optimization [1], shows that if we sample our space randomly, we approach the global optimum in fewer steps than grid search. Furthermore, while grid search gets exponentially worse in higher dimensions (see Curse of Dimensionality), random search performs well there, because many problems have a lower effective dimensionality: even though a data set may have many parameters, only a few of them account for most of the variation. Hence randomized sampling of the search space is surprisingly effective.
Random Search vs Grid Search
Number of necessary samples
Higher dimensions
How many times would we need to sample randomly to get reasonably close (>95% probability) to an optimal configuration?
To rephrase a bit more quantitatively: let's figure out how many points we should sample using random search in order to get a sample within 5% of the globally optimal hyper-parameterization of our sample space with at least 95% probability.
The probability of missing (not getting within 5% of) our global optimum on $n$ successive samples will be
$$
P_{miss} = (1 - 0.05)^n
$$
Hence the probability of a hit will be
$$
P_{hit} = 1 - P_{miss} = 1 - (1 - 0.05)^n.
$$
Setting the probability of a hit to 95% and solving for $n$ gives us
$$
P_{hit} = 1 - (1 - 0.05)^n = 0.95
$$
$$
\implies n = 58.4 \approx 60
$$
Hence, using only 60 samples, we have a 95% probability of getting a sample within 5% of the global optimum. Very nice!
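The derivation above can be checked in a few lines of Python:

```python
# Quick numeric check of the sample-count derivation above.
import math

p_close = 0.05   # chance a single random sample lands in the top 5%
target = 0.95    # desired probability of at least one such sample

# Solve 1 - (1 - p_close)**n = target for n
n = math.log(1 - target) / math.log(1 - p_close)
print(n)  # ~58.4 -> about 60 samples suffice
```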
Comparing Grid Search to Random Search
<img src='images/gridsearchbad.jpeg'>
Core illustration from Random Search for Hyper-Parameter Optimization by Bergstra and Bengio. [1]
Scikit-learn provides both grid and random search as well as an example benchmark comparing the two. Let's try it out.
First we'll import some relevant packages
Sidenote: I recommend the Anaconda Scientific Python distribution and conda package manager for creating virtual environments that come pre-installed with scikit-learn and (almost) everything else I need.
End of explanation
"""
digits = load_digits()  # get some data
X, y = digits.data, digits.target
"""
Explanation: For this example, we'll load up the digits data set, an example data set from scikit-learn containing 8x8 pixel images of handwritten digits.
End of explanation
"""
clf = RandomForestClassifier(n_estimators=20) # build a classifier
def report(grid_scores, n_top=3):
# Utility function to report best scores
top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]
for i, score in enumerate(top_scores):
print("Model with rank: {0}".format(i + 1))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
score.mean_validation_score,
np.std(score.cv_validation_scores)))
print("Parameters: {0}".format(score.parameters))
print("")
"""
Explanation: Next, we initialize our classifier (a random forest in this case).
Also, we'll make a short helper function for outputting nice grid scores from our HPOs.
End of explanation
"""
# specify parameters and distributions to sample from
param_dist = {"max_depth": [3, None],
"max_features": sp_randint(1, 11),
"min_samples_split": sp_randint(1, 11),
"min_samples_leaf": sp_randint(1, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
"""
Explanation: In order to run random search, we need to specify a distribution to sample from. We'll use sp_randint from the scipy.stats library which will return a random integer.
End of explanation
"""
# run randomized search
n_iter_search = 20
random_search = RandomizedSearchCV(clf, param_distributions=param_dist,
n_iter=n_iter_search)
start = time()
random_search.fit(X, y)
finish = (time() - start)
print("RandomizedSearchCV took {time} seconds for {candidates} candidate"
      " parameter settings.".format(time=finish, candidates=n_iter_search))
report(random_search.grid_scores_)
"""
Explanation: Finally, we'll run the random search over our random forest classifier
End of explanation
"""
# use a full grid over all parameters
param_grid = {"max_depth": [3, None],
"max_features": [1, 3, 10],
"min_samples_split": [1, 3, 10],
"min_samples_leaf": [1, 3, 10],
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
# run grid search
grid_search = GridSearchCV(clf, param_grid=param_grid)
start = time()
grid_search.fit(X, y)
finish = (time() - start)
print("GridSearchCV took {time} seconds for {candidates} candidate parameter settings.".format(
time=finish, candidates=len(grid_search.grid_scores_)))
report(grid_search.grid_scores_)
"""
Explanation: We'll now follow the same process for grid search, the only difference being that instead of sampling from a distribution, we'll specify an array of values to try.
End of explanation
"""
|
adityaka/misc_scripts | python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_11/Begin/.ipynb_checkpoints/Resampling-checkpoint.ipynb | bsd-3-clause | # min: minutes
my_index = pd.date_range('9/1/2016', periods=9, freq='min')
"""
Explanation: Resampling
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html
For arguments to 'freq' parameter, please see Offset Aliases
create a date range to use as an index
End of explanation
"""
my_series = pd.Series(np.arange(9), index=my_index)
"""
Explanation: create a time series that includes a simple pattern
End of explanation
"""
my_series.resample('3min').sum()
"""
Explanation: Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin
End of explanation
"""
my_series.resample('3min', label='right').sum()
"""
Explanation: Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left
Notice the difference in the time indices; the sum in each bin is the same
End of explanation
"""
my_series.resample('3min', label='right', closed='right').sum()
"""
Explanation: Downsample the series into 3 minute bins as above, but close the right side of the bin interval
"count backwards" from end of time series
End of explanation
"""
#select first 5 rows
my_series.resample('30S').asfreq()[0:5]
"""
Explanation: Upsample the series into 30 second bins
asfreq()
End of explanation
"""
def custom_arithmetic(array_like):
temp = 3 * np.sum(array_like) + 5
return temp
"""
Explanation: define a custom function to use with resampling
End of explanation
"""
my_series.resample('3min').apply(custom_arithmetic)
"""
Explanation: apply custom resampling function
End of explanation
"""
|
kimkipyo/dss_git_kkp | 통계, 머신러닝 복습/160615수_16일차_문서 전처리 Text Preprocessing/3.konlpy 한국어 처리 패키지 소개.ipynb | mit | from konlpy.corpus import kolaw
kolaw.fileids()
c = kolaw.open('constitution.txt').read()
print(c[:100])
from konlpy.corpus import kobill
kobill.fileids()
d = kobill.open('1809890.txt').read()
print(d[:100])
"""
Explanation: Introduction to konlpy, a Korean NLP package
The preceding material dealt with English; now we turn to Korean.
konlpy is a Python package for Korean-language information processing.
http://konlpy.org/ko/v0.4.4/
https://github.com/konlpy/konlpy
konlpy bundles the following morphological-analysis and tagging libraries so that they can be used easily from Python:
Kkma
http://kkma.snu.ac.kr/
Hannanum
http://semanticweb.kaist.ac.kr/hannanum/
Twitter
https://github.com/twitter/twitter-korean-text/
Komoran
http://www.shineware.co.kr/?page_id=835
Mecab
https://bitbucket.org/eunjeon/mecab-ko-dic
konlpy provides the following features:
Korean corpora
Korean processing utilities
Morphological analysis and POS tagging
Korean corpora
End of explanation
"""
x = [u"한글", {u"한글 키": [u"한글 밸류1", u"한글 밸류2"]}]
print(x) # Python 3 recognizes Hangul correctly here
x = [u"한글", {u"한글 키": [u"한글 밸류1", u"한글 밸류2"]}]
from konlpy.utils import pprint
pprint(x)
from konlpy.utils import concordance
idx = concordance(u'대한민국', c, show=True)
idx
"""
Explanation: Korean processing utilities
These are very handy
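For readers unfamiliar with concordance, it is a keyword-in-context search. Here is a pure-Python sketch of the idea (an illustrative aside; konlpy's version additionally reports the line indices in the corpus, which is what `idx` holds above).

```python
# Pure-Python sketch of a keyword-in-context (concordance) search.

def simple_concordance(term, text, width=20):
    """Return snippets of `text` centred on each occurrence of `term`."""
    hits, start = [], 0
    while True:
        i = text.find(term, start)
        if i == -1:
            break
        hits.append(text[max(0, i - width):i + len(term) + width])
        start = i + 1
    return hits

print(simple_concordance("law", "case law and common law differ", width=5))
```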
End of explanation
"""
from konlpy.tag import *
hannanum = Hannanum()
kkma = Kkma()
twitter = Twitter()
"""
Explanation: Morphological analysis
konlpy's tag subpackage provides 5 classes for morphological analysis:
Kkma
Hannanum
Twitter
Komoran
Mecab
Most of these classes provide the following methods:
morphs : morpheme extraction
nouns : noun extraction
pos : POS tagging
End of explanation
"""
pprint(hannanum.nouns(c[:65]))
pprint(kkma.nouns(c[:65]))
pprint(twitter.nouns(c[:65]))
"""
Explanation: Noun extraction
End of explanation
"""
pprint(hannanum.morphs(c[:65]))
pprint(kkma.morphs(c[:65]))
pprint(twitter.morphs(c[:65]))
"""
Explanation: Morpheme extraction
End of explanation
"""
pprint(hannanum.pos(c[:65]))
pprint(kkma.pos(c[:65]))
pprint(twitter.pos(c[:65]))
"""
Explanation: 품사 태깅
품사 태그표
https://docs.google.com/spreadsheets/d/1OGAjUvalBuX-oZvZ_-9tEfYD2gQe7hTGsgUpiiBSXI8/edit#gid=0
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/85b80d223414f32365a9175978a38cb4/plot_limo_data.ipynb | bsd-3-clause | # Authors: Jose C. Garcia Alanis <alanis.jcg@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne.datasets.limo import load_data
from mne.stats import linear_regression
from mne.viz import plot_events, plot_compare_evokeds
from mne import combine_evoked
print(__doc__)
# subject to use
subj = 1
"""
Explanation: Single trial linear regression analysis with the LIMO dataset
Here we explore the structure of the data contained in the
LIMO dataset.
This example replicates and extends some of the main analyses
and tools integrated in LIMO MEEG, a MATLAB toolbox originally designed
to interface with EEGLAB_.
In summary, the example:
Fetches epoched data files for a single subject of the LIMO dataset [1]_.
If the LIMO files are not found on disk, the
fetcher :func:mne.datasets.limo.load_data() will automatically download
the files from a remote repository.
During import, information about the data (i.e., sampling rate, number of
epochs per condition, number and name of EEG channels per subject, etc.) is
extracted from the LIMO :file:.mat files stored on disk and added to the
epochs structure as metadata.
Fits linear models on the single subject's data and visualizes inferential
measures to evaluate the significance of the estimated effects.
References
.. [1] Guillaume, Rousselet. (2016). LIMO EEG Dataset, [dataset].
University of Edinburgh, Centre for Clinical Brain Sciences.
https://doi.org/10.7488/ds/1556.
.. [2] Rousselet, G. A., Gaspar, C. M., Pernet, C. R., Husk, J. S.,
Bennett, P. J., & Sekuler, A. B. (2010). Healthy aging delays scalp EEG
sensitivity to noise in a face discrimination task.
Frontiers in psychology, 1, 19. https://doi.org/10.3389/fpsyg.2010.00019
.. [3] Rousselet, G. A., Pernet, C. R., Bennett, P. J., & Sekuler, A. B.
(2008). Parametric study of EEG sensitivity to phase noise during face
processing. BMC neuroscience, 9(1), 98.
https://doi.org/10.1186/1471-2202-9-98
End of explanation
"""
# This step can take a little while if you're loading the data for the
# first time.
limo_epochs = load_data(subject=subj)
"""
Explanation: About the data
In the original LIMO experiment (see [2]_), participants performed a
two-alternative forced choice task, discriminating between two face stimuli.
The same two faces were used during the whole experiment,
with varying levels of noise added, making the faces more or less
discernible to the observer (see Fig 1 in [3]_ for a similar approach).
The presented faces varied across a noise-signal (or phase-coherence)
continuum spanning from 0 to 85% in increasing steps of 5%.
In other words, faces with high phase-coherence (e.g., 85%) were easy to
identify, while faces with low phase-coherence (e.g., 5%) were hard to
identify and by extension very hard to discriminate.
Load the data
We'll begin by loading the data from subject 1 of the LIMO dataset.
End of explanation
"""
print(limo_epochs)
"""
Explanation: Note that the result of the loading process is an
:class:mne.EpochsArray containing the data ready to interface
with MNE-Python.
End of explanation
"""
fig = plot_events(limo_epochs.events, event_id=limo_epochs.event_id)
fig.suptitle("Distribution of events in LIMO epochs")
"""
Explanation: Visualize events
We can visualise the distribution of the face events contained in the
limo_epochs structure. Events should appear clearly grouped, as the
epochs are ordered by condition.
End of explanation
"""
print(limo_epochs.metadata.head())
"""
Explanation: As it can be seen above, conditions are coded as Face/A and Face/B.
Information about the phase-coherence of the presented faces is stored in the
epochs metadata. This information can easily be accessed by calling
limo_epochs.metadata. As shown below, the epochs metadata also contains
information about the presented faces for convenience.
End of explanation
"""
# We want to include all columns in the summary table
epochs_summary = limo_epochs.metadata.describe(include='all').round(3)
print(epochs_summary)
"""
Explanation: Now let's take a closer look at the information in the epochs
metadata.
End of explanation
"""
# only show -250 to 500 ms
ts_args = dict(xlim=(-0.25, 0.5))
# plot evoked response for face A
limo_epochs['Face/A'].average().plot_joint(times=[0.15],
title='Evoked response: Face A',
ts_args=ts_args)
# and face B
limo_epochs['Face/B'].average().plot_joint(times=[0.15],
title='Evoked response: Face B',
ts_args=ts_args)
"""
Explanation: The first column of the summary table above provides more or less the same
information as the print(limo_epochs) command we ran before. There are
1055 faces (i.e., epochs), subdivided into 2 conditions (i.e., Face A and
Face B) and, for this particular subject, there are more epochs for the
condition Face B.
In addition, we can see in the second column that the values for the
phase-coherence variable range from -1.619 to 1.642. This is because the
phase-coherence values are provided as a z-scored variable in the LIMO
dataset. Note that they have a mean of zero and a standard deviation of 1.
Visualize condition ERPs
Let's plot the ERPs evoked by Face A and Face B, to see how similar they are.
End of explanation
"""
# Face A minus Face B
difference_wave = combine_evoked([limo_epochs['Face/A'].average(),
-limo_epochs['Face/B'].average()],
weights='equal')
# plot difference wave
difference_wave.plot_joint(times=[0.15], title='Difference Face A - Face B')
"""
Explanation: We can also compute the difference wave contrasting Face A and Face B.
Looking at the evoked responses above, though, we shouldn't expect large
differences between these face stimuli.
End of explanation
"""
# Create a dictionary containing the evoked responses
conditions = ["Face/A", "Face/B"]
evokeds = {condition: limo_epochs[condition].average()
for condition in conditions}
# concentrate the analysis on occipital electrodes (e.g. B11)
pick = evokeds["Face/A"].ch_names.index('B11')
# compare evoked responses
plot_compare_evokeds(evokeds, picks=pick, ylim=dict(eeg=(-15, 7.5)))
"""
Explanation: As expected, no clear pattern appears when contrasting
Face A and Face B. However, we could narrow our search a little bit more.
Since this is a "visual paradigm" it might be best to look at electrodes
located over the occipital lobe, as differences between stimuli (if any)
might be easier to spot over visual areas.
End of explanation
"""
phase_coh = limo_epochs.metadata['phase-coherence']
# get levels of phase coherence
levels = sorted(phase_coh.unique())
# create labels for levels of phase coherence (i.e., 0 - 85%)
labels = ["{0:.2f}".format(i) for i in np.arange(0., 0.90, 0.05)]
# create dict of evokeds for each level of phase-coherence
evokeds = {label: limo_epochs[phase_coh == level].average()
for level, label in zip(levels, labels)}
# pick channel to plot
electrodes = ['C22', 'B11']
# create figures
for electrode in electrodes:
fig, ax = plt.subplots(figsize=(8, 4))
plot_compare_evokeds(evokeds,
axes=ax,
ylim=dict(eeg=(-20, 15)),
picks=electrode,
cmap=("Phase coherence", "magma"))
"""
Explanation: We do see a difference between Face A and B, but it is pretty small.
Visualize effect of stimulus phase-coherence
Since phase-coherence
determined whether a face stimulus could be easily identified,
one could expect that faces with high phase-coherence should evoke stronger
activation patterns along occipital electrodes.
End of explanation
"""
limo_epochs.interpolate_bads(reset_bads=True)
limo_epochs.drop_channels(['EXG1', 'EXG2', 'EXG3', 'EXG4'])
"""
Explanation: As shown above, there are some considerable differences between the
activation patterns evoked by stimuli with low vs. high phase-coherence at
the chosen electrodes.
Prepare data for linear regression analysis
Before we test the significance of these differences using linear
regression, we'll interpolate missing channels that were
dropped during preprocessing of the data.
Furthermore, we'll drop the EOG channels (marked by the "EXG" prefix)
present in the data:
End of explanation
"""
# name of predictors + intercept
predictor_vars = ['face a - face b', 'phase-coherence', 'intercept']
# create design matrix
design = limo_epochs.metadata[['phase-coherence', 'face']].copy()
design['face a - face b'] = np.where(design['face'] == 'A', 1, -1)
design['intercept'] = 1
design = design[predictor_vars]
"""
Explanation: Define predictor variables and design matrix
To run the regression analysis,
we need to create a design matrix containing information about the
variables (i.e., predictors) we want to use for prediction of brain
activity patterns. For this purpose, we'll use the information we have in
limo_epochs.metadata: phase-coherence and Face A vs. Face B.
End of explanation
"""
reg = linear_regression(limo_epochs,
design_matrix=design,
names=predictor_vars)
"""
Explanation: Now we can set up the linear model to be used in the analysis using
MNE-Python's :func:~mne.stats.linear_regression function.
End of explanation
"""
print('predictors are:', list(reg))
print('fields are:', [field for field in getattr(reg['intercept'], '_fields')])
"""
Explanation: Extract regression coefficients
The results are stored within the object reg,
which is a dictionary of evoked objects containing
multiple inferential measures for each predictor in the design matrix.
End of explanation
"""
reg['phase-coherence'].beta.plot_joint(ts_args=ts_args,
title='Effect of Phase-coherence',
times=[0.23])
"""
Explanation: Plot model results
Now we can access and plot the results of the linear regression analysis by
calling :samp:reg['{<name of predictor>}'].{<measure of interest>} and
using the
:meth:~mne.Evoked.plot_joint method just as we would do with any other
evoked object.
Below we can see a clear effect of phase-coherence, with higher
phase-coherence (i.e., better "face visibility") having a negative effect on
the activity measured at occipital electrodes around 200 to 250 ms following
stimulus onset.
End of explanation
"""
# use unit=False and scale=1 to keep values at their original
# scale (i.e., avoid conversion to micro-volt).
ts_args = dict(xlim=(-0.25, 0.5),
unit=False)
topomap_args = dict(scalings=dict(eeg=1),
average=0.05)
fig = reg['phase-coherence'].t_val.plot_joint(ts_args=ts_args,
topomap_args=topomap_args,
times=[0.23])
fig.axes[0].set_ylabel('T-value')
"""
Explanation: We can also plot the corresponding T values.
End of explanation
"""
ts_args = dict(xlim=(-0.25, 0.5))
reg['face a - face b'].beta.plot_joint(ts_args=ts_args,
title='Effect of Face A vs. Face B',
times=[0.23])
"""
Explanation: Conversely, there appear to be no (or only very small) systematic effects when
comparing Face A and Face B stimuli. This is largely consistent with the
difference wave approach presented above.
End of explanation
"""
|
dh7/ML-Tutorial-Notebooks | word2vec/word2vec.ipynb | bsd-2-clause | # import and init
from annoy import AnnoyIndex
import gensim
import os.path
import numpy as np
prefix_filename = 'word2vec'
ann_filename = prefix_filename + '.ann'
i2k_filename = prefix_filename + '_i2k.npy'
k2i_filename = prefix_filename + '_k2i.npy'
"""
Explanation: A Word2Vec playground
To play with this notebook, you'll need Numpy, Annoy, Gensim, and the GoogleNews word2vec model
pip install numpy
pip install annoy
pip install gensim
You can find the GoogleNews vectors by googling GoogleNews-vectors-negative300.bin
Inspired by: https://github.com/chrisjmccormick/inspect_word2vec
End of explanation
"""
# Load Google's pre-trained Word2Vec model.
print "load GoogleNews Model"
model = gensim.models.KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary=True)
print "loading done"
hello = model['hello']
vector_size = len(hello)
print 'model size=', len(model.vocab)
print 'vector size=', vector_size
# build the Annoy index from the model and save it,
# or load a previously saved index directly
vocab = model.vocab.keys()
#indexNN = AnnoyIndex(vector_size, metric='angular')
indexNN = AnnoyIndex(vector_size)
index2key = [None]*len(model.vocab)
key2index = {}
if not os.path.isfile(ann_filename):
print 'creating indexes'
i = 0
try:
for key in vocab:
indexNN.add_item(i, model[key])
key2index[key]=i
index2key[i]=key
i=i+1
if (i%10000==0):
print i, key
except TypeError:
print 'Error with key', key
print 'building 10 trees'
indexNN.build(10) # 10 trees
print 'save files'
indexNN.save(ann_filename)
np.save(i2k_filename, index2key)
np.save(k2i_filename, key2index)
print 'done'
else:
print "loading files"
indexNN.load(ann_filename)
index2key = np.load(i2k_filename)
key2index = np.load(k2i_filename)
print "loading done:", indexNN.get_n_items(), "items"
"""
Explanation: Create a model or load it
End of explanation
"""
what_vec = model['king'] - model['male'] + model['female']
what_indexes = indexNN.get_nns_by_vector(what_vec, 1)
print index2key[what_indexes[0]]
"""
Explanation: King - Male + Female = Queen?
Nope!
At least not based on a word2vec that is trained on the News...
End of explanation
"""
what_vec = model['king'] - model['boy'] + model['girl']
what_indexes = indexNN.get_nns_by_vector(what_vec, 1)
print index2key[what_indexes[0]]
what_vec = model['king'] - model['man'] + model['women']
what_indexes = indexNN.get_nns_by_vector(what_vec, 1)
print index2key[what_indexes[0]]
"""
Explanation: King - boy + girl = Queen?
Yes :)
but it doesn't work with man & women :(
End of explanation
"""
what_vec = model['Berlin'] - model['Germany'] + model['France']
what_indexes = indexNN.get_nns_by_vector(what_vec, 1)
print index2key[what_indexes[0]]
"""
Explanation: Berlin - Germany + France = Paris?
Yes!
This makes me happy, but if someone understands why, please tell me!
End of explanation
"""
what_vec = model['Trump'] + model['Germany'] - model['USA']
what_indexes = indexNN.get_nns_by_vector(what_vec, 1)
for i in what_indexes:
print index2key[i]
"""
Explanation: Trump - USA + Germany = Hitler?
FAKE NEWS
End of explanation
"""
man2women = - model['boy'] + model['girl']
word_list = ["king","prince", "male", "boy","dad", "father", "president", "dentist",
"scientist", "efficient", "teacher", "doctor", "minister", "lover"]
for word in word_list:
what_vec = model[word] + man2women
what_indexes = indexNN.get_nns_by_vector(what_vec, 1)
print word, "for him,", index2key[what_indexes[0]], "for her."
capital = model['Berlin'] - model['Germany']
word_list = ["Germany", "France", "Italy", "USA", "Russia", "boys", "cars", "flowers", "soldiers",
"scientists", ]
for word in word_list:
what_vec = model[word] + capital
what_indexes = indexNN.get_nns_by_vector(what_vec, 1)
print index2key[what_indexes[0]], "is the capital of", word
"""
Explanation: Let's explore the stereotypes hidden in the news:
End of explanation
"""
|