drcgw/bass | Single Wave- Interactive.ipynb | gpl-3.0
from bass import *
"""
Explanation: Welcome to BASS!
Version: Single Wave- Interactive Notebook.
BASS: Biomedical Analysis Software Suite for event detection and signal processing.
Copyright (C) 2015 Abigail Dobyns
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
Initialize
Run the following code block to initialize the program. This notebook and the bass.py file must be in the same folder.
End of explanation
"""
Data, Settings, Results = load_interact()
"""
Explanation: Begin User Input
For help, check out the wiki: Protocol
Or the video tutorial: Coming Soon!
Load Data File
Use the following block to change your settings. You must use this block.
Here is some helpful information about the loading settings:
Full File Path to Folder containing file:
Designate the path to the folder containing the file you want to load. It can also be the relative path to the folder where this notebook is stored. This does not include the file itself.
Mac OS X Example: '/Users/MYNAME/Documents/bass'
Microsoft Example: 'C:\\Users\\MYNAME\\Documents\\bass'
File name:
This is the name of your data file. It should include the file type. This file should NOT have a header, and the first column must be time in seconds. Note: This file name will also appear as part of the output file names.
'rat34_ECG.txt'
Full File Path for data output: Designate the location of the folder where you would like the folder containing your results to go. If the folder does not exist, it will be created. A plots folder, called 'plots', will be created inside this folder if it does not already exist.
Mac OS X Example: '/Users/MYNAME/Documents/output'
Microsoft Example: 'C:\\Users\\MYNAME\\Documents\\output'
Loading a file
End of explanation
"""
plot_rawdata(Data)
"""
Explanation: Graph Data (Optional)
Use this block to check any slicing you need to do to cut out problematic data from the head or tail. You can click on any point in the wave to get the (x,y) location of that point. Clipping inside this notebook is not supported at this time.
Graph Raw Data
End of explanation
"""
#optional
Settings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Signal']['ULF'] = 25 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Signal']['VLF'] = 75 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Signal']['LF'] = 150 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Signal']['HF'] = 300 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2) where hz is the sampling frequency
Settings['PSD-Signal']['dx'] = 2 #segmentation for integration of the area under the curve.
"""
Explanation: Power Spectral Density (Optional)
Use the settings code block to set the frequency bands used to calculate area under the curve. This block is not required. Band output is always in raw power, even if the graph scale is dB/Hz.
Power Spectral Density: Signal
End of explanation
"""
scale = 'raw' #raw or db
Results = psd_signal(version = 'original', key = 'Mean1', scale = scale,
Data = Data, Settings = Settings, Results = Results)
Results['PSD-Signal']
"""
Explanation: Use the block below to generate the PSD graph and power in bands results (if selected). scale toggles which units to use for the graph:
raw = s^2/Hz
db = dB/Hz = 10*log10(s^2/Hz)
Graph and table are automatically saved in the PSD-Signal subfolder.
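As an aside, the raw-to-dB conversion quoted above is a one-liner; the helper below is only an illustration of the scale relationship, not part of BASS:

```python
import numpy as np

def to_db(psd_raw):
    # Raw PSD (s^2/Hz) -> dB/Hz, per the relation quoted above
    return 10.0 * np.log10(psd_raw)

print(to_db(np.array([1.0, 10.0, 100.0])))  # approximately 0, 10 and 20 dB/Hz
```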
End of explanation
"""
version = 'original'
key = 'Mean1'
spectogram(version, key, Data, Settings, Results)
"""
Explanation: Spectrogram (Optional)
Use the block below to get the spectrogram of the signal. The frequency (y-axis) scales automatically to only show 'active' frequencies. This can take some time to run.
version = 'original'
key = 'Mean1'
After transformation is run, you can call version = 'trans'. This graph is not automatically saved.
Spectrogram
End of explanation
"""
Settings = load_settings_interact(Settings)
Settings_display = display_settings(Settings)
Settings_display
"""
Explanation: Transforming Data
Must be done for each new uploaded data file.
WARNING: If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.
Transforming Data
Load settings from a file
Must be a previously saved BASS settings file, although the name can be changed. Expected format is '.csv'. Enter the full file path and name.
Mac OS X Example: '/Users/MYNAME/Documents/rat34_Settings.csv'
Microsoft Example: 'C:\\Users\\MYNAME\\Documents\\rat34_Settings.csv'
See above instructions for how to load your data file.
Warning!! You must load a settings file or specify your settings below. There are no defaults
Loading Settings
End of explanation
"""
Settings = user_input_trans(Settings)
"""
Explanation: Enter your settings for data transformation
WARNING: If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.
Enter the parameters of the functions you would like to use to transform your data.
If you do not want to use a function, enter 'none'
For more help on settings:
Transformation Settings
End of explanation
"""
Data, Settings = transform_wrapper(Data, Settings)
graph_ts(Data, Settings, Results)
"""
Explanation: Run data transformation
This Block Is Not Optional
Transform
End of explanation
"""
Settings = user_input_base(Settings)
"""
Explanation: Set Baseline for Thresholding
WARNING If you do not load a settings file OR enter your own settings, the analysis will not run. There are no defaults. This section is not optional.
Baseline
Choose either linear or rolling baseline.
Linear - takes a user-specified time segment as a good representation of baseline. If the superstructure is linear but has a slope, use a linear fit in the transformation so you can still use linear. Linear automatically shifts your data by the amount of your baseline, normalizing the baseline to zero.
Rolling - a rolling mean of the data is generated based on a moving window. The user provides the window size in milliseconds. There is no shift in the data with this method.
Static - skips baseline generation and allows you to choose an arbitrary y value for the threshold. There is no shift in the data.
End of explanation
"""
Data, Settings, Results = baseline_wrapper(Data, Settings, Results)
graph_ts(Data, Settings, Results)
"""
Explanation: Run baseline
Generate Baseline
End of explanation
"""
Settings_display = display_settings(Settings)
Settings_display
"""
Explanation: Display Settings (Optional)
Optional block. Run this at any time to check what your settings are. If it does not appear, it has not been set yet.
Display Settings
End of explanation
"""
Settings = event_peakdet_settings(Data, Settings)
"""
Explanation: Event Detection
Peaks
Peaks are local maxima, defined by local minima on either side of them. Click here for more information about this algorithm
Peak Detection Settings
Run the Following Block of code to enter or change peak detection settings. If you have loaded settings from a previous file, you do not need to run this block.
Peak Detection Settings
End of explanation
"""
Results = event_peakdet_wrapper(Data, Settings, Results)
Results['Peaks-Master'].groupby(level=0).describe()
"""
Explanation: Run Event Peak Detection
Run the block of code below to run peak detection. This block will print a summary table of all available peak measurements.
Peak Detection
End of explanation
"""
graph_ts(Data, Settings, Results)
"""
Explanation: Plot Events (Optional)
Use the block below to visualize event detection. Peaks are blue triangles. Valleys are pink triangles.
Visualize Events
End of explanation
"""
Settings = event_burstdet_settings(Data, Settings, Results)
"""
Explanation: Bursts
Bursts are the boundaries of events defined by their amplitudes, which are greater than the set threshold
Enter Burst Settings
Run the Following Block of code to enter or change burst detection settings. If you have loaded settings from a previous file, you do not need to run this block.
Burst Settings
End of explanation
"""
Results = event_burstdet_wrapper(Data, Settings, Results)
Results['Bursts-Master'].groupby(level=0).describe()
"""
Explanation: Run Event Burst Detection
Run the block of code below to run burst detection.
This block will print a summary table of all available burst measurements.
Burst Detection
End of explanation
"""
key = 'Mean1'
graph_ts(Data, Settings, Results, key)
"""
Explanation: Plot Events (Optional)
Call a column of data by its key (column name). Default name for one column of data is 'Mean1'
Visualize Bursts
End of explanation
"""
Save_Results(Data, Settings, Results)
"""
Explanation: Save all files and settings
Save Event Tables and Settings
End of explanation
"""
#grouped summary for peaks
Results['Peaks-Master'].groupby(level=0).describe()
"""
Explanation: Event Analysis
Now that events are detected, you can analyze them using any of the optional blocks below.
More information about how to use this
Display Tables
Display Summary Results for Peaks
End of explanation
"""
#grouped summary for bursts
Results['Bursts-Master'].groupby(level=0).describe()
"""
Explanation: Display Summary Results for Bursts
End of explanation
"""
#Batch
event_type = 'Peaks'
meas = 'all'
Results = poincare_batch(event_type, meas, Data, Settings, Results)
pd.concat({'SD1':Results['Poincare SD1'],'SD2':Results['Poincare SD2']})
"""
Explanation: Results Plots
Poincare Plots
Create a Poincare plot of your favorite variable. Choose an event type (Peaks or Bursts) and a measurement type. Calling meas = 'All' is supported.
Plots and tables are saved automatically
Example:
event_type = 'Bursts'
meas = 'Burst Duration'
More on Poincare Plots
Batch Poincare
Batch Poincare
End of explanation
"""
#quick
event_type = 'Bursts'
meas = 'Burst Duration'
key = 'Mean1'
poincare_plot(Results[event_type][key][meas])
"""
Explanation: Quick Poincare Plot
Quickly call one poincare plot for display. Plot and Table are not saved automatically. Choose an event type (Peaks or Bursts), measurement type, and key. Calling meas = 'All' is not supported.
Quick Poincare
End of explanation
"""
key = 'Mean1'
start =100 #start time in seconds
end= 101 #end time in seconds
results_timeseries_plot(key, start, end, Data, Settings, Results)
"""
Explanation: Line Plots
Create line plots of the raw data as well as the data analysis.
Plots are saved by clicking the save button in the pop-up window with your graph.
key = 'Mean1'
start =100
end= 101
Results Line Plot
End of explanation
"""
#autocorrelation
key = 'Mean1'
start = 0 #seconds, where you want the slice to begin
end = 10 #seconds, where you want the slice to end.
autocorrelation_plot(Data['trans'][key][start:end])
plt.show()
"""
Explanation: Autocorrelation Plot
Display the Autocorrelation plot of your transformed data.
Choose the start and end time in seconds. May be slow
key = 'Mean1'
start = 0
end = 10
Autocorrelation Plot
End of explanation
"""
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1' #'Mean1' default for single wave
frequency_plot(event_type, meas, key, Data, Settings, Results)
"""
Explanation: Frequency Plot
Use this block to plot changes of any measurement over time. Does not support 'all'. Example:
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
Frequency Plot
End of explanation
"""
Settings['PSD-Event'] = Series(index = ['Hz','ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Event']['Hz'] = 4.0 #frequency that the interpolation and PSD are performed with. Key must match the 'Hz' entry in the index above.
Settings['PSD-Event']['ULF'] = 0.03 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Event']['VLF'] = 0.05 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Event']['LF'] = 0.15 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Event']['HF'] = 0.4 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve.
"""
Explanation: Power Spectral Density
The following blocks allow you to assess the power of event measurements in the frequency domain. While you can call this block on any event measurement, it is intended to be used on interval data (or at least data with units in seconds). Recommended:
event_type = 'Bursts'
meas = 'Total Cycle Time'
key = 'Mean1'
scale = 'raw'
event_type = 'Peaks'
meas = 'Intervals'
key = 'Mean1'
scale = 'raw'
Because we are moving this data into the frequency domain, we must interpolate it onto a regular sampling grid in order to perform an FFT on it. Does not support 'all'.
Power Spectral Density: Events
Settings
Use the code block below to specify your settings for event measurement PSD.
End of explanation
"""
event_type = 'Bursts'
meas = 'Total Cycle Time'
key = 'Mean1'
scale = 'raw'
Results = psd_event(event_type, meas, key, scale, Data, Settings, Results)
Results['PSD-Event'][key]
"""
Explanation: Event PSD
Use block below to return the PSD plot, as well as the power in the bands defined by the settings above.
End of explanation
"""
#Get average plots, display only
event_type = 'peaks'
meas = 'Peaks Amplitude'
average_measurement_plot(event_type, meas,Results)
"""
Explanation: Analyze Events by Measurement
Generates a line plot with error bars for a given event measurement. The x-axis shows the names of each time series. Display only. Intended for more than one column of data. This is not a box-and-whiskers plot.
event_type = 'peaks'
meas = 'Peaks Amplitude'
Analyze Events by Measurement
End of explanation
"""
#Moving Stats
event_type = 'Peaks'
meas = 'all'
window = 30 #seconds
Results = moving_statistics(event_type, meas, window, Data, Settings, Results)
"""
Explanation: Moving/Sliding Averages, Standard Deviation, and Count
Generates the moving mean, standard deviation, and count for a given measurement across all columns of the Data in the form of a DataFrame (displayed as a table).
Saves out the dataframes of these three results automatically with the window size in the name as a .csv.
If meas == 'All', then the function will loop and produce these tables for all measurements.
event_type = 'Peaks'
meas = 'all'
window = 30
Moving Stats
End of explanation
"""
#Histogram Entropy
event_type = 'Bursts'
meas = 'all'
Results = histent_wrapper(event_type, meas, Data, Settings, Results)
Results['Histogram Entropy']
"""
Explanation: Histogram Entropy
Calculates the histogram entropy of a measurement for each column of data. Also saves the histogram of each. If meas is set to 'all', then all available measurements from the chosen event_type will be calculated iteratively.
If all of the samples fall in one bin, regardless of the bin size, we have the most predictable situation and the entropy is 0. If the samples are uniformly distributed, the entropy is maximal at 1.
event_type = 'Bursts'
meas = 'all'
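As a side note, the normalized entropy behaviour described above (0 for one bin, 1 for a uniform histogram) can be sketched in a few lines of NumPy; the function name and bin count are assumptions for illustration, not the BASS implementation:

```python
import numpy as np

def hist_entropy(samples, bins=10):
    # Histogram entropy normalized by log(bins): 0 when every sample falls
    # in a single bin, 1 for a perfectly uniform histogram.
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # skip empty bins (0 * log 0 = 0)
    return -(p * np.log(p)).sum() / np.log(bins)

print(hist_entropy(np.full(1000, 3.14)))            # entropy 0: one bin
print(hist_entropy(np.repeat(np.arange(10), 100)))  # entropy ~1: uniform
```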
Histogram Entropy
End of explanation
"""
#Approximate Entropy
event_type = 'Peaks'
meas = 'all'
Results = ap_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Approximate Entropy']
"""
Explanation: STOP HERE
You can run another file by going back to the Begin User Input section and choosing another file path.
What Should I do now?
Advanced user options
Approximate entropy
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY
Run the code below to get the approximate entropy of any measurement or raw signal. It returns the entropy of the entire results array (no windowing). The following M and R values are used:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code. Alternatively, you can call ap_entropy directly. Supports 'all'.
Interpretation: A time series containing many repetitive patterns has a relatively small ApEn; a less predictable process has a higher ApEn.
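As a rough illustration of what approximate entropy measures, the sketch below implements the standard Pincus formulation with the same M = 2 and R = 0.2*std convention. The function name and example series are hypothetical; this is not the pyeeg implementation used by BASS:

```python
import numpy as np

def approx_entropy(u, m=2, r=None):
    # Minimal approximate entropy (Pincus), using M = 2 and R = 0.2*std(u)
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()
    def phi(m):
        n = len(u) - m + 1
        x = np.array([u[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of length-m templates
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        c = (d <= r).sum(axis=1) / n   # fraction of templates within r
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 300))   # highly repetitive
noise = rng.standard_normal(300)                    # unpredictable
print(approx_entropy(regular), approx_entropy(noise))
```

A repetitive signal gives a markedly smaller value than white noise, matching the interpretation above.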
Approximate Entropy in BASS
Approximate Entropy Source
Events
End of explanation
"""
#Approximate Entropy on raw signal
#takes a VERY long time
from pyeeg import ap_entropy
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
ap_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
"""
Explanation: Time Series
End of explanation
"""
#Sample Entropy
event_type = 'Bursts'
meas = 'all'
Results = samp_entropy_wrapper(event_type, meas, Data, Settings, Results)
Results['Sample Entropy']
"""
Explanation: Sample Entropy
This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING: THIS FUNCTION RUNS SLOWLY
Run the code below to get the sample entropy of any measurement. It returns the entropy of the entire results array (no windowing). The following M and R values are used:
M = 2
R = 0.2*std(measurement)
These values can be modified in the source code. Alternatively, you can call samp_entropy directly.
Supports 'all'.
Sample Entropy in BASS
Sample Entropy Source
Events
End of explanation
"""
#on raw signal
#takes a VERY long time
version = 'original' #original, trans, shift, or rolling
key = 'Mean1' #Mean1 default key for one time series
start = 0 #seconds, where you want the slice to begin
end = 1 #seconds, where you want the slice to end. The absolute end is -1
samp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end])))
"""
Explanation: Time Series
End of explanation
"""
jsnajder/StrojnoUcenje | notebooks/SU-2015-7-LogistickaRegresija.ipynb | cc0-1.0
import scipy as sp
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
%pylab inline
"""
Explanation: University of Zagreb<br>
Faculty of Electrical Engineering and Computing
Machine Learning
<a href="http://www.fer.unizg.hr/predmet/su">http://www.fer.unizg.hr/predmet/su</a>
Academic year 2015/2016
Notebook 7: Logistic Regression
(c) 2015 Jan Šnajder
<i>Version: 0.2 (2015-11-16)</i>
End of explanation
"""
def sigm(x): return 1 / (1 + sp.exp(-x))
xs = sp.linspace(-10, 10)
plt.plot(xs, sigm(xs));
"""
Explanation: Contents:
Logistic regression model
Cross-entropy loss
Error minimization
Link to the generative model
Comparison of linear models
Summary
Logistic regression model
Reminder: generalized linear models
$$
h(\mathbf{x}) = \color{red}{f\big(}\mathbf{w}^\intercal\tilde{\mathbf{x}}\color{red}{\big)}
$$
$f : \mathbb{R}\to[0,1]$ or $f : \mathbb{R}\to[-1,+1]$ is the activation function
A linear decision boundary in the input space (even though $f$ is nonlinear)
However, if we map into a feature space with $\boldsymbol\phi(\mathbf{x})$, the boundary in the input space can be nonlinear
The model is nonlinear in the parameters (because $f$ is nonlinear)
This complicates optimization (there is no closed-form solution)
Reminder: classification by regression
Model:
$$
h(\mathbf{x}) = \mathbf{w}^\intercal\boldsymbol{\phi}(\mathbf{x}) \qquad (f(\alpha)=\alpha)
$$
[Sketch]
Loss function: squared loss
Optimization procedure: computing the pseudoinverse (closed-form solution)
Advantages:
We always obtain a solution
Disadvantages:
Lack of robustness: correctly classified examples influence the boundary $\Rightarrow$ misclassification even for linearly separable problems
The model output is not probabilistic
Reminder: the perceptron
Model:
$$
h(\mathbf{x}) = f\big(\mathbf{w}^\intercal\boldsymbol\phi(\mathbf{x})\big)
\qquad f(\alpha) = \begin{cases}
+1 & \text{if $\alpha\geq0$}\\
-1 & \text{otherwise}
\end{cases}
$$
[Sketch]
Loss function: the amount of misclassification
$$
\mathrm{max}(0,-\tilde{\mathbf{w}}^\intercal\boldsymbol{\phi}(\mathbf{x})y)
$$
Optimization procedure: gradient descent
Advantages:
Correctly classified examples do not influence the boundary<br>
$\Rightarrow$ correct classification of linearly separable problems
Disadvantages:
The activation function is not differentiable<br>
$\Rightarrow$ the loss function is not differentiable<br>
$\Rightarrow$ the gradient of the error function is not zero at the minimum<br>
$\Rightarrow$ the procedure does not converge if the examples are not linearly separable
The decision boundary depends on the initial choice of weights
The model output is not probabilistic
Logistic regression
Idea: use an activation function with outputs in $[0,1]$, but one that is differentiable
The logistic (sigmoid) function:
$$
\sigma(\alpha) = \frac{1}{1 + \exp(-\alpha)}
$$
End of explanation
"""
plt.plot(xs, sigm(0.5*xs), 'r');
plt.plot(xs, sigm(xs), 'g');
plt.plot(xs, sigm(2*xs), 'b');
"""
Explanation: The slope of the sigmoid can be controlled by multiplying the input by a factor:
End of explanation
"""
xs = linspace(0, 1)
plt.plot(xs, -sp.log(xs));
plt.plot(xs, -sp.log(1 - xs));  # loss for y=0 is -ln(1-h(x)), not 1-ln(1-h(x))
"""
Explanation: Derivative of the sigmoid function:
$$
\frac{\partial\sigma(\alpha)}{\partial\alpha} =
\frac{\partial}{\partial\alpha}\big(1 + \exp(-\alpha)\big)^{-1} =
\sigma(\alpha)\big(1 - \sigma(\alpha)\big)
$$
The logistic regression model:
$$
h(\mathbf{x}|\mathbf{w}) = \sigma\big(\mathbf{w}^\intercal\boldsymbol{\phi}(\mathbf{x})\big) =
\frac{1}{1+\exp(-\mathbf{w}^\intercal\boldsymbol{\phi}(\mathbf{x}))}
$$
NB: Logistic regression is a classification model (despite its name)!
Probabilistic output
$h(\mathbf{x})\in[0,1]$, so we can interpret $h(\mathbf{x})$ as the probability that an example belongs to class $\mathcal{C}_1$ (the class for which $y=1$):
$$
h(\mathbf{x}|\mathbf{w}) = \sigma\big(\mathbf{w}^\intercal\boldsymbol{\phi}(\mathbf{x})\big) = \color{red}{P(y=1|\mathbf{x})}
$$
We will see later that there is also a deeper justification for this interpretation
The logistic loss function
We have defined the model; we still need to define the loss function and the optimization procedure
Logistic regression uses the cross-entropy loss
Definition
The function covers two cases (when the example's label is $y=1$ and when it is $y=0$):
$$
L(h(\mathbf{x}),y) =
\begin{cases}
- \ln h(\mathbf{x}) & \text{if $y=1$}\\
- \ln \big(1-h(\mathbf{x})\big) & \text{if $y=0$}
\end{cases}
$$
We can write this more compactly:
$$
L(h(\mathbf{x}),y) =
- y \ln h(\mathbf{x}) - (1-y)\ln \big(1-h(\mathbf{x})\big)
$$
End of explanation
"""
def cross_entropy_loss(h_x, y):
return -y * sp.log(h_x) - (1 - y) * sp.log(1 - h_x)
xs = linspace(0, 1)
plt.plot(xs, cross_entropy_loss(xs, 0), label='y=0')
plt.plot(xs, cross_entropy_loss(xs, 1), label='y=1')
plt.ylabel('$L(h(\mathbf{x}),y)$')
plt.xlabel('$h(\mathbf{x}) = \sigma(w^\intercal\mathbf{x}$)')
plt.legend()
plt.show()
"""
Explanation: If $y=1$, the function penalizes the model more the further its output falls below one. Similarly, if $y=0$, the function penalizes the model more the further its output rises above zero
Intuitively this function seems reasonable, but the question is how we arrived at it
Derivation
We will derive the loss function from the error function
Reminder: error function = expectation of the loss function
Since logistic regression gives label probabilities for each example, we can compute the probability of the labelled data set $\mathcal{D}$ under our model, i.e., the likelihood of the model parameters $\mathbf{w}$
We want this likelihood to be as large as possible, so we define the error function as the negative log-likelihood of the parameters $\mathbf{w}$:
$$
E(\mathbf{w}|\mathcal{D}) = -\ln\mathcal{L}(\mathbf{w}|\mathcal{D})
$$
We want to maximize the log-likelihood, i.e., minimize this error
Log-likelihood:
$$
\begin{align}
\ln\mathcal{L}(\mathbf{w}|\mathcal{D})
&= \ln p(\mathcal{D}|\mathbf{w})
= \ln\prod_{i=1}^N p(\mathbf{x}^{(i)}, y^{(i)}|\mathbf{w})\\
&= \ln\prod_{i=1}^N P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w})p(\mathbf{x}^{(i)})\\
&= \sum_{i=1}^N \ln P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w}) + \underbrace{\color{gray}{\sum_{i=1}^N \ln p(\mathbf{x}^{(i)})}}_{\text{does not depend on $\mathbf{w}$}}
\end{align}
$$
$y^{(i)}$ is the label of the $i$-th example, which can be 0 or 1 $\Rightarrow$ a Bernoulli variable
Since $y^{(i)}$ is a Bernoulli variable, its distribution is:
$$
P(y^{(i)}) = \mu^{y^{(i)}}(1-\mu)^{1-y^{(i)}}
$$
where $\mu$ is the probability that $y^{(i)}=1$
Our model gives exactly the probability that example $\mathbf{x}^{(i)}$ has label $y^{(i)}=1$, i.e.:
$$
\mu = P(y^{(i)}=1|\mathbf{x}^{(i)},\mathbf{w}) = \color{red}{h(\mathbf{x}^{(i)} | \mathbf{w})}
$$
This means that we can write the probability of label $y^{(i)}$ for a given example $\mathbf{x}^{(i)}$ as:
$$
P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w}) =
\color{red}{h(\mathbf{x}^{(i)}|\mathbf{w})}^{y^{(i)}}\big(1-\color{red}{h(\mathbf{x}^{(i)}|\mathbf{w})}\big)^{1-y^{(i)}}
$$
We continue the derivation of the log-likelihood:
$$
\begin{align}
\ln\mathcal{L}(\mathbf{w}|\mathcal{D})
&= \sum_{i=1}^N \ln P(y^{(i)}|\mathbf{x}^{(i)},\mathbf{w}) \color{gray}{+ \text{const.}}\\
&= \sum_{i=1}^N\ln \Big(h(\mathbf{x}^{(i)}|\mathbf{w})^{y^{(i)}}\big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)^{1-y^{(i)}}\Big) \\
& = \sum_{i=1}^N \Big(y^{(i)} \ln h(\mathbf{x}^{(i)}|\mathbf{w})+ (1-y^{(i)})\ln\big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)\Big)
\end{align}
$$
We define the empirical error as the negative log-likelihood (up to a constant):
$$
E(\mathbf{w}|\mathcal{D}) = \sum_{i=1}^N \Big(-y^{(i)} \ln h(\mathbf{x}^{(i)}|\mathbf{w}) - (1-y^{(i)})\ln \big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)\Big)
$$
Alternatively (so that it does not depend on the number of examples):
$$
E(\mathbf{w}|\mathcal{D}) = \color{red}{\frac{1}{N}} \sum_{i=1}^N\Big( - y^{(i)} \ln h(\mathbf{x}^{(i)}|\mathbf{w})- (1-y^{(i)})\ln \big(1-h(\mathbf{x}^{(i)}|\mathbf{w})\big)\Big)
$$
$\Rightarrow$ the cross-entropy error
From the error we can read off the loss function:
$$
L(h(\mathbf{x}),y) = - y \ln h(\mathbf{x}) - (1-y)\ln \big(1-h(\mathbf{x})\big)
$$
$\Rightarrow$ the cross-entropy loss
NB: The expression compactly encodes the branching for the two cases (for $y=1$ and for $y=0$)
End of explanation
"""
# TODO: a concrete example in the plane
"""
Explanation: Q: What is the loss on an example $\mathbf{x}$ for which the model gives $h(\mathbf{x})=P(y=1|\mathbf{x})=0.7$, if the true label is $y=0$? What is the loss if the true label is $y=1$?
There is no loss only when the example is classified perfectly ($h(x)=1$ for $y=1$ and $h(x)=0$ for $y=0$)
In all other cases there is some loss: even if the example is classified correctly (on the correct side of the boundary) there is a small loss, depending on the confidence of the classification
Still, examples on the correct side of the boundary ($h(\mathbf{x})\geq 0.5$ for $y=1$ and $h(\mathbf{x})< 0.5$ for $y=0$) incur a much smaller loss than examples on the wrong side of the boundary
End of explanation
"""
# TODO: code + example
"""
Explanation: Error minimization
$$
\begin{align}
E(\mathbf{w}) &=
\sum_{i=1}^N L\big(h(\mathbf{x}^{(i)}|\mathbf{w}),y^{(i)}\big)\\
L(h(\mathbf{x}),y) &= - y \ln h(\mathbf{x}) - (1-y)\ln \big(1-h(\mathbf{x})\big)\\
h(\mathbf{x}) &= \sigma(\mathbf{w}^\intercal\mathbf{x}) = \frac{1}{1 + \exp(-\mathbf{w}^\intercal\mathbf{x})}
\end{align}
$$
There is no closed-form solution (because of the nonlinearity of $\sigma$)
We minimize by gradient descent:
$$
\nabla E(\mathbf{w}) =
\sum_{i=1}^N \nabla L\big(h(\mathbf{x}^{(i)}|\mathbf{w}),y^{(i)}\big)
$$
Recall:
$$
\frac{\partial\sigma(\alpha)}{\partial\alpha} =
\sigma(\alpha)\big(1 - \sigma(\alpha)\big)
$$
We obtain:
$$
\nabla L\big(h(\mathbf{x}),y\big) =
\Big(-\frac{y}{h(\mathbf{x})} + \frac{1-y}{1-h(\mathbf{x})}\Big)h(\mathbf{x})\big(1-h(\mathbf{x})\big)
\tilde{\mathbf{x}} = \big(h(\mathbf{x})-y\big)\tilde{\mathbf{x}}
$$
The gradient of the error:
$$
\nabla E(\mathbf{w}) = \sum_{i=1}^N \big(h(\mathbf{x}^{(i)})-y^{(i)}\big)\tilde{\mathbf{x}}^{(i)}
$$
Gradient descent (batch)
$\mathbf{w} \gets (0,0,\dots,0)$<br>
repeat until convergence<br>
$\quad \Delta\mathbf{w} \gets (0,0,\dots,0)$<br>
$\quad$ for $i=1,\dots, N$<br>
$\qquad h \gets \sigma(\mathbf{w}^\intercal\tilde{\mathbf{x}}^{(i)})$<br>
$\qquad \Delta \mathbf{w} \gets \Delta\mathbf{w} + (h-y^{(i)})\, \tilde{\mathbf{x}}^{(i)}$<br>
$\quad \mathbf{w} \gets \mathbf{w} - \eta \Delta\mathbf{w} $
Stochastic gradient descent (on-line)
$\mathbf{w} \gets (0,0,\dots,0)$<br>
repeat until convergence<br>
$\quad$ (randomly permute the examples in $\mathcal{D}$)<br>
$\quad$ for $i=1,\dots, N$<br>
$\qquad$ $h \gets \sigma(\mathbf{w}^\intercal\tilde{\mathbf{x}}^{(i)})$<br>
$\qquad$ $\mathbf{w} \gets \mathbf{w} - \eta (h-y^{(i)})\tilde{\mathbf{x}}^{(i)}$
End of explanation
"""
g-weatherill/notebooks | gmpe-smtk/ConditionalFields-Training.ipynb | agpl-3.0
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm, Normalize
import smtk.hazard.conditional_simulation as csim
import smtk.sm_database_builder as sdb
from smtk.residuals.gmpe_residuals import Residuals
from smtk.residuals.residual_plotter import ResidualPlot, ResidualWithDistance
from smtk.parsers.sigma_database_parser import SigmaDatabaseMetadataReader, SigmaRecordParser, SigmaSpectraParser
"""
Explanation: Example of Conditional Random Field Simulation Using the GMPE-SMTK and OpenQuake
End of explanation
"""
import cPickle
db1 = cPickle.load(open("data/LAquila_Database/metadatafile.pkl", "r"))
# 1 GMPE, 3 IMTs
gmpe_list = ["AkkarEtAlRjb2014"]
imts = ["PGA", "SA(0.2)", "SA(1.0)"]
resid1 = Residuals(gmpe_list, imts)
resid1.get_residuals(db1)
ResidualWithDistance(resid1, "AkkarEtAlRjb2014", "PGA", plot_type="Linear", figure_size=(5,7))
ResidualWithDistance(resid1, "AkkarEtAlRjb2014", "SA(1.0)", plot_type="Linear", figure_size=(5,7))
"""
Explanation: Create Database from Event Records
Load in the database and retrieve the GMPE residuals
End of explanation
"""
rupture_file = "data/laquila_rupture.xml"
rupture = csim.build_rupture_from_file(rupture_file)
from openquake.hazardlib.geo.point import Point
from openquake.hazardlib.geo.polygon import Polygon
rupture_outline = []
for iloc in xrange(rupture.surface.mesh.shape[1]):
rupture_outline.append(Point(rupture.surface.mesh.lons[0, iloc],
rupture.surface.mesh.lats[0, iloc],
rupture.surface.mesh.depths[0, iloc]))
for iloc in xrange(rupture.surface.mesh.shape[1]):
rupture_outline.append(Point(rupture.surface.mesh.lons[-1, -(iloc + 1)],
rupture.surface.mesh.lats[-1, -(iloc + 1)],
rupture.surface.mesh.depths[-1, -(iloc + 1)]))
# Close the polygon
rupture_outline.append(Point(rupture.surface.mesh.lons[0, 0],
rupture.surface.mesh.lats[0, 0],
rupture.surface.mesh.depths[0, 0]))
rupture_outline = Polygon(rupture_outline)
observed_sites = db1.get_site_collection()
pga_residuals = resid1.residuals["AkkarEtAlRjb2014"]["PGA"]["Intra event"]
#pga_residuals = (pga_residuals - (-3.0)) / 6.0
plt.figure(figsize=(10,8))
#ax = plt.subplot(111)
plt.plot(rupture_outline.lons, rupture_outline.lats, "r-")
plt.scatter(observed_sites.lons, observed_sites.lats,
s=40,
c=pga_residuals,
norm=Normalize(vmin=-3.0, vmax=3.0))
plt.title("PGA Observed Intra-event Residual", fontsize=16)
plt.colorbar()
plt.xlabel("Longitude", fontsize=14)
plt.ylabel("Latitude", fontsize=14)
sa1_residuals = resid1.residuals["AkkarEtAlRjb2014"]["SA(1.0)"]["Intra event"]
#pga_residuals = (pga_residuals - (-3.0)) / 6.0
plt.figure(figsize=(10,8))
#ax = plt.subplot(111)
plt.plot(rupture_outline.lons, rupture_outline.lats, "r-")
plt.scatter(observed_sites.lons, observed_sites.lats,
s=40,
c=sa1_residuals,
norm=Normalize(vmin=-3.0, vmax=3.0))
plt.title("Sa(1.0s) Observed Intra-event Residual", fontsize=16)
plt.colorbar()
plt.xlabel("Longitude", fontsize=14)
plt.ylabel("Latitude", fontsize=14)
# Generate a field of calculation sites
limits = [12.5, 15.0, 0.05, 40.5, 43.0, 0.05]
vs30 = 800.0
unknown_sites = csim.get_regular_site_collection(limits, vs30)
"""
Explanation: Load in the Rupture Model and View
End of explanation
"""
# Generate a set of residuals
output_resid = csim.conditional_simulation(observed_sites, pga_residuals, unknown_sites, "PGA", 1)
plt.figure(figsize=(10,8))
plt.plot(rupture_outline.lons, rupture_outline.lats, "r-")
plt.scatter(unknown_sites.lons, unknown_sites.lats,
s=20,
c=output_resid[:, 0].A.flatten(),
marker="s",
edgecolor="None",
norm=Normalize(vmin=-3.0, vmax=3.0))
plt.scatter(observed_sites.lons, observed_sites.lats,
s=50,
c=pga_residuals,
norm=Normalize(vmin=-3.0, vmax=3.0))
plt.title("PGA - Simulated Intra-event Residual", fontsize=16)
plt.colorbar()
plt.xlabel("Longitude", fontsize=14)
plt.ylabel("Latitude", fontsize=14)
sa1_residuals = resid1.residuals["AkkarEtAlRjb2014"]["SA(1.0)"]["Intra event"]
output_resid_1p0 = csim.conditional_simulation(observed_sites, sa1_residuals, unknown_sites, "SA(1.0)", 1)
plt.figure(figsize=(10,8))
#ax = plt.subplot(111)
plt.plot(rupture_outline.lons, rupture_outline.lats, "r-")
plt.scatter(unknown_sites.lons, unknown_sites.lats,
s=20,
c=output_resid_1p0[:, 0].A.flatten(),
marker="s",
edgecolor="None",
norm=Normalize(vmin=-3.0, vmax=3.0))
plt.scatter(observed_sites.lons, observed_sites.lats,
s=50,
c=sa1_residuals,
norm=Normalize(vmin=-3.0, vmax=3.0))
plt.title("Sa (1.0s) - Simulated Intra-event Residual", fontsize=16)
plt.colorbar()
plt.xlabel("Longitude", fontsize=14)
plt.ylabel("Latitude", fontsize=14)
"""
Explanation: Generate a set of ground motion residuals conditioned upon the observations
End of explanation
"""
gmfs = csim.get_conditional_gmfs(db1,
rupture,
sites=unknown_sites,
gsims=["AkkarEtAlRjb2014"],
imts=["PGA", "SA(1.0)"],
number_simulations=5,
truncation_level=3.0)
"""
Explanation: Generate the Full Ground Motion Fields
End of explanation
"""
plt.figure(figsize=(10,8))
pga_field = gmfs["AkkarEtAlRjb2014"]["PGA"][:, 0]
plt.plot(rupture_outline.lons, rupture_outline.lats, "r-")
plt.scatter(unknown_sites.lons, unknown_sites.lats,
s=50,
c=pga_field,
marker="s",
edgecolor="None",
norm=LogNorm(vmin=0.001, vmax=1))
plt.xlim(12.5, 15.0)
plt.ylim(40.5, 43.0)
plt.title("PGA (g) - Conditional Random Field", fontsize=18)
plt.colorbar()
plt.xlabel("Longitude", fontsize=14)
plt.ylabel("Latitude", fontsize=14)
plt.figure(figsize=(10,8))
sa1_field = gmfs["AkkarEtAlRjb2014"]["SA(1.0)"][:, 0]
plt.plot(rupture_outline.lons, rupture_outline.lats, "r-")
plt.scatter(unknown_sites.lons, unknown_sites.lats,
s=50,
c=sa1_field,
marker="s",
edgecolor="None",
norm=LogNorm(vmin=0.001, vmax=1))
plt.xlim(12.5, 15.0)
plt.ylim(40.5, 43.0)
plt.title("Sa (1.0s) (g) - Conditional Random Field", fontsize=18)
plt.colorbar()
plt.xlabel("Longitude", fontsize=14)
plt.ylabel("Latitude", fontsize=14)
"""
Explanation: Visualise the fields
End of explanation
"""
|
citxx/sis-python | crash-course/if-and-logical-expressions.ipynb | mit | print("What is 2 * 2?")
a = int(input())
if a == 4:
print("Correct")
else:
print("Wrong")
"""
Explanation: <h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Условный-оператор-if" data-toc-modified-id="Условный-оператор-if-1">The if conditional operator</a></span></li><li><span><a href="#Логический-тип-bool" data-toc-modified-id="Логический-тип-bool-2">The boolean type bool</a></span></li><li><span><a href="#Логические-операции-(<,-==,-...)" data-toc-modified-id="Логические-операции-(<,-==,-...)-3">Logical operations (<, ==, ...)</a></span></li><li><span><a href="#Логические-выражения" data-toc-modified-id="Логические-выражения-4">Logical expressions</a></span></li></ul></div>
The conditional operator and logical expressions
The if conditional operator
Python tests whether an expression is true with the if operator. If the condition is true, the block of code after the if runs. If it is false, the block after the else runs.
End of explanation
"""
if 2 * 2 == 5:
print("2 * 2 = 5")
# Nothing is printed
"""
Explanation: Syntax features:
A colon (:) goes after the condition in an if
A colon (:) goes after else
The block belonging to the if is marked by a 4-space indent
Parentheses around the condition are not needed
if means "if", and else means "otherwise"
The else can be omitted. Then, when the condition is false, nothing runs:
End of explanation
"""
if 10 > 5:
print("1")
print("2")
if 5 < 4:
print("3")
print("4")
else:
print("5")
else:
print("6")
"""
Explanation: A block of code inside an if or else can contain any amount of code, including nested ifs:
End of explanation
"""
if 6 > 5
print("Missing colon!")
"""
Explanation: Indentation makes it easy to see which else belongs to which if. If you omit the colon or the indentation, the program reports an error and does not run.
End of explanation
"""
if 6 > 5:
print("Missing indent!")
"""
Explanation: SyntaxError: invalid syntax means the code contains a syntax error
End of explanation
"""
x = 3
if x < 2:
print("0 or 1")
else:
if x < 4:
print("2 or 3")
else:
print("4+")
"""
Explanation: IndentationError: expected an indented block means Python expected an indented block
The elif construct lets you handle multiple cases without nesting if ... else inside the else branch.
End of explanation
"""
x = 3
if x < 2:
print("0 or 1")
elif x < 4:
print("2 or 3")
else:
print("4+")
"""
Explanation: The same code written with elif looks like this:
End of explanation
"""
if True:
print("if branch")
else:
print("else branch")
if False:
print("if branch")
else:
print("else branch")
"""
Explanation: The boolean type bool
Python has a logical type, bool, for representing truth values. It has two possible values: True (truth) and False (falsehood).
End of explanation
"""
a = True
b = False
print(a, b)
if a:
print("a =", a)
if b:
print("b =", b)
"""
Explanation: Variables can also be of boolean type:
End of explanation
"""
if 0:
print("if branch")
else:
print("else branch")
"""
Explanation: Clearly, what follows an if does not have to be literally True or False. Let's see what Python does when something stranger follows the if.
Some data types can be converted to others, when Python allows it. Such a conversion is called type conversion (or type casting).
For example, 0 converts to False:
End of explanation
"""
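These conversions can also be checked directly with the built-in bool(), which applies the same truth test that an if condition does:

```python
# bool() applies the same truth test that an if condition does.
print(bool(0))      # False
print(bool(3.5))    # True
print(bool(-1))     # True
print(bool(""))     # False
print(bool("abc"))  # True
```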
if 3.5:
print("if branch")
else:
print("else branch")
if 0.0:
print("if branch")
else:
print("else branch")
if -1:
print("if branch")
else:
print("else branch")
"""
Explanation: Any other number (including non-integer numbers) different from 0 converts to True:
End of explanation
"""
if "":
print("if branch")
else:
print("else branch")
"""
Explanation: The empty string "" converts to False:
End of explanation
"""
if "abc":
print("if branch")
else:
print("else branch")
"""
Explanation: A non-empty string converts to True:
End of explanation
"""
print(5 > 2)
print(not 5 > 2)
a = 5
b = 2
print(b == a)
print(b != a)
a = True
b = False
print(not b)
print(a and b)
print(a or b)
"""
Explanation: Logical operations (<, ==, ...)
Let's look at how comparison and logical operations work in Python.
The main comparison operators you will need:
| Operation | Python notation | C++ equivalent | Pascal equivalent | Precedence |
| --- | --- | --- | --- | --- |
| Equal | a == b | a == b | a = b | 4 |
| Not equal | a != b | a != b | a <> b | 4 |
| Less than | a < b | a < b | a < b | 4 |
| Less than or equal | a <= b | a <= b | a <= b | 4 |
| Greater than | a > b | a > b | a > b | 4 |
| Greater than or equal | a >= b | a >= b | a >= b | 4 |
The main logical operators you will need:
<table>
<thead><tr>
<th>Operation</th>
<th>Python notation</th>
<th>C++ equivalent</th>
<th>Pascal equivalent</th>
<th>Precedence</th>
</tr>
</thead>
<tbody>
<tr>
<td>Logical negation</td>
<td><code>not a</code></td>
<td><code>!a</code></td>
<td><code>not a</code></td>
<td>5</td>
</tr>
<tr>
<td>Logical and</td>
<td><code>a and b</code></td>
<td><code>a && b</code></td>
<td><code>a and b</code></td>
<td>6</td>
</tr>
<tr>
<td>Logical or</td>
<td><code>a or b</code></td>
<td><code>a || b</code></td>
<td><code>a or b</code></td>
<td>7</td>
</tr>
</tbody>
</table>
Examples:
End of explanation
"""
a = 5
b = 2
print(True and a > 2)
print(False or b >= 2)
if a >= 2 and b >= 2:
print("Note that if does not need parentheses around the condition")
x = 4
print(1 < x and x <= 6)
"""
Explanation: Logical expressions
As in many other programming languages, you can build larger expressions out of True, False, boolean variables, comparison operations, logical operations, and parentheses (to change precedence). For example:
End of explanation
"""
x = 4
print(1 < x <= 6)
"""
Explanation: Generally speaking, comparisons can be chained; for example, the previous code can be rewritten like this:
End of explanation
"""
bool_var = True
if bool_var == True:
print("Uh-oh")
if bool_var:
print("This is good")
bool_var = False
if not bool_var:
print("This is also good")
"""
Explanation: In such cases, it improves readability to order the comparisons so that only the < and <= signs are used.
Never compare directly with True or False. Instead, use the variable itself or its negation. The same applies to complex expressions.
End of explanation
"""
year = int(input())
if (year % 4 == 0 and year % 100 != 0) or year % 400 == 0:
print("YES")
else:
print("NO")
"""
Explanation: Of course, the condition after an if most often involves logical operations. As an example, let's write a program that checks whether a given year is a leap year:
End of explanation
"""
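As a quick sanity check, the leap-year condition above agrees with the standard library's calendar.isleap:

```python
import calendar

# Same rule: divisible by 4 and not by 100, unless divisible by 400.
for year in (1900, 2000, 2012, 2019):
    ours = (year % 4 == 0 and year % 100 != 0) or year % 400 == 0
    print(year, ours, calendar.isleap(year))
```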
|
piyushbhattacharya/machine-learning | python/Consumer Complains.ipynb | gpl-3.0 | ld_train, ld_test = train_test_split(cd_train, test_size=0.2, random_state=2)
x80_train = ld_train.drop(['Consumer disputed?','Complaint ID'],1)
y80_train = ld_train['Consumer disputed?']
x20_test = ld_test.drop(['Consumer disputed?','Complaint ID'],1)
y20_test = ld_test['Consumer disputed?']
"""
Explanation: Optimizing model...
Run train_test splits on the train data
End of explanation
"""
model_logr1 = LogisticRegression(penalty="l1",class_weight=None,random_state=2)
model_logr1.fit(x80_train, y80_train)
#y20_test_pred = np.where(model_logr1.predict(ld_test.drop(['Complaint ID','Consumer disputed?'],1))==1,1,0)
y20_test_pred = np.where(model_logr1.predict(x20_test)==1,1,0)
temp_df = pd.DataFrame(list(zip(cd_test['Complaint ID'],list(y20_test_pred))), columns=['Complaint ID','Consumer disputed?'])
y_test_pred = temp_df['Consumer disputed?']
roc_auc_score(y20_test, y_test_pred)
"""
Explanation: 1. Check ROC_AUC_SCORE {penalty='l1', class_weight=None}
End of explanation
"""
model_logrl2 = LogisticRegression(penalty="l2",class_weight=None,random_state=2)
model_logrl2.fit(x80_train, y80_train)
y20_test_pred = np.where(model_logrl2.predict(x20_test)==1,1,0)
temp_df = pd.DataFrame(list(zip(cd_test['Complaint ID'],list(y20_test_pred))), columns=['Complaint ID','Consumer disputed?'])
y_test_pred = temp_df['Consumer disputed?']
roc_auc_score(y20_test, y_test_pred)
"""
Explanation: 2. Check ROC_AUC_SCORE {penalty='l2', class_weight=None}
End of explanation
"""
model_logr2 = LogisticRegression(penalty="l1",class_weight="balanced",random_state=2)
model_logr2.fit(x80_train, y80_train)
y20_test_pred2 = np.where(model_logr2.predict(ld_test.drop(['Complaint ID','Consumer disputed?'],1))==1,1,0)
temp_df2 = pd.DataFrame(list(zip(cd_test['Complaint ID'],list(y20_test_pred2))),
columns=['Complaint ID','Consumer disputed?'])
y_test_pred2 = temp_df2['Consumer disputed?']
roc_auc_score(y20_test, y_test_pred2)
"""
Explanation: 3. Check ROC_AUC_SCORE {penalty='l1', class_weight='balanced'}
End of explanation
"""
model_logr3 = LogisticRegression(penalty="l2",class_weight="balanced",random_state=2)
model_logr3.fit(x80_train, y80_train)
y20_test_pred3 = np.where(model_logr3.predict(ld_test.drop(['Complaint ID','Consumer disputed?'],1))==1,1,0)
temp_df3 = pd.DataFrame(list(zip(cd_test['Complaint ID'],list(y20_test_pred3))),
columns=['Complaint ID','Consumer disputed?'])
y_test_pred3 = temp_df3['Consumer disputed?']
roc_auc_score(y20_test, y_test_pred3)
"""
Explanation: 4. Check ROC_AUC_SCORE {penalty='l2', class_weight='balanced'}
End of explanation
"""
from sklearn import cross_validation
predicted = cross_validation.cross_val_predict(model_logr2, x, y, cv=10)
print(accuracy_score(y, predicted))
print(classification_report(y, predicted))
"""
Explanation: 2. Optimizing Model continues...
a. Employ CV procedure
End of explanation
"""
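cross_val_predict works by predicting each sample with a model trained on the folds that do not contain it (in modern scikit-learn the function lives in sklearn.model_selection rather than the old cross_validation module). The fold bookkeeping can be sketched with the standard library alone; the sample and fold counts below are illustrative, not taken from the data above:

```python
def kfold_indices(n_samples, n_folds):
    """Yield (train_idx, test_idx) pairs covering every sample exactly once as test."""
    fold_sizes = [n_samples // n_folds + (1 if i < n_samples % n_folds else 0)
                  for i in range(n_folds)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

# Every index appears in exactly one test fold.
all_test = [i for _, test in kfold_indices(10, 3) for i in test]
print(sorted(all_test))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```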
prob_score=pd.Series(list(zip(*model_logr2.predict_proba(x80_train)))[1])
cutoffs=np.linspace(0,1,100)
"""
Explanation: 3. Cutoff based predicted probabilities
End of explanation
"""
KS_cut=[]
for cutoff in cutoffs:
predicted=pd.Series([0]*len(y80_train))
predicted[prob_score>cutoff]=1
df=pd.DataFrame(list(zip(y80_train,predicted)),columns=["real","predicted"])
TP=len(df[(df["real"]==1) &(df["predicted"]==1) ])
FP=len(df[(df["real"]==0) &(df["predicted"]==1) ])
TN=len(df[(df["real"]==0) &(df["predicted"]==0) ])
FN=len(df[(df["real"]==1) &(df["predicted"]==0) ])
P=TP+FN
N=TN+FP
KS=(TP/P)-(FP/N)
KS_cut.append(KS)
cutoff_data=pd.DataFrame(list(zip(cutoffs,KS_cut)),columns=["cutoff","KS"])
KS_cutoff=cutoff_data[cutoff_data["KS"]==cutoff_data["KS"].max()]["cutoff"]
"""
Explanation: For each of these cutoffs, we are going to look at the TP, FP, TN, and FN values and calculate KS. Then we'll choose the best cutoff as the one with the highest KS.
End of explanation
"""
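The KS computed in the loop above is simply the true positive rate minus the false positive rate at a given cutoff. A standalone sketch of that calculation, on made-up labels and scores:

```python
def ks_at_cutoff(y_true, scores, cutoff):
    """KS at a cutoff: true positive rate minus false positive rate."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s > cutoff)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s > cutoff)
    p = sum(1 for y in y_true if y == 1)
    n = len(y_true) - p
    return tp / p - fp / n

y = [1, 1, 0, 0]
s = [0.9, 0.6, 0.4, 0.2]
print(ks_at_cutoff(y, s, 0.5))  # 1.0: this cutoff separates the classes perfectly
```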
# Performance on test data
prob_score_test=pd.Series(list(zip(*model_logr2.predict_proba(x20_test)))[1])
predicted_test=pd.Series([0]*len(y20_test))
predicted_test[prob_score_test > float(KS_cutoff)]=1
df_test=pd.DataFrame(list(zip(y20_test,predicted_test)),columns=["real","predicted"])
k=pd.crosstab(df_test['real'],df_test["predicted"])
print('confusion matrix :\n \n ',k)
TN=k.iloc[0,0]
TP=k.iloc[1,1]
FP=k.iloc[0,1]
FN=k.iloc[1,0]
P=TP+FN
N=TN+FP
# Accuracy of test
(TP+TN)/(P+N)
# Sensitivity on test
TP/P
#Specificity on test
TN/N
"""
Explanation: Now we'll see how this model, with the cutoff determined here, performs on the test data.
End of explanation
"""
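The accuracy, sensitivity, and specificity above all follow directly from the four confusion-matrix counts. A standalone sketch, with illustrative counts rather than the ones computed above:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on positives), and specificity from raw counts."""
    p, n = tp + fn, tn + fp
    return {
        "accuracy": (tp + tn) / (p + n),
        "sensitivity": tp / p,
        "specificity": tn / n,
    }

m = classification_metrics(tp=40, fp=10, tn=30, fn=20)
print(m)  # accuracy 0.7, sensitivity 40/60, specificity 0.75
```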
model_logr2.fit(x,y)
prediction = np.where(model_logr2.predict(cd_test.drop(['Complaint ID'],1))==1,"Yes","No")
submission = pd.DataFrame(list(zip(cd_test['Complaint ID'],list(prediction))),
columns=['Complaint ID','Consumer disputed?'])
pred_y = submission['Consumer disputed?']
actual_y = cd_train['Consumer disputed?']
# roc_auc_score(actual_y, pred_y) # This would fail: actual_y comes from the training data and pred_y from the test data, so the arrays have different lengths
submission.head(4)
submission.to_csv('submission_new.csv',index=False)
"""
Explanation: Fit the optimized model on actual x,y and predict y from test dataset
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.22/_downloads/dfd4175ec1a2c7f21de3596573c74301/plot_multidict_reweighted_tfmxne.ipynb | bsd-3-clause | # Author: Mathurin Massias <mathurin.massias@gmail.com>
# Yousra Bekhti <yousra.bekhti@gmail.com>
# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
# Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import os.path as op
import mne
from mne.datasets import somato
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import plot_sparse_source_estimates
print(__doc__)
"""
Explanation: Compute iterative reweighted TF-MxNE with multiscale time-frequency dictionary
The iterative reweighted TF-MxNE solver is a distributed inverse method
based on the TF-MxNE solver, which promotes focal (sparse) sources
:footcite:StrohmeierEtAl2015. The benefit of this approach is that:
it is spatio-temporal without assuming stationarity (sources properties
can vary over time),
activations are localized in space, time and frequency in one step,
the solver uses non-convex penalties in the TF domain, which results in a
solution less biased towards zero than when simple TF-MxNE is used,
using a multiscale dictionary allows to capture short transient
activations along with slower brain waves :footcite:`BekhtiEtAl2016`.
End of explanation
"""
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
fwd_fname = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
condition = 'Unknown'
# Read evoked
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
reject = dict(grad=4000e-13, eog=350e-6)
picks = mne.pick_types(raw.info, meg=True, eog=True)
event_id, tmin, tmax = 1, -1., 3.
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
reject=reject, preload=True)
evoked = epochs.filter(1, None).average()
evoked = evoked.pick_types(meg=True)
evoked.crop(tmin=0.008, tmax=0.2)
# Compute noise covariance matrix
cov = mne.compute_covariance(epochs, rank='info', tmax=0.)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
"""
Explanation: Load somatosensory MEG data
End of explanation
"""
alpha, l1_ratio = 20, 0.05
loose, depth = 1, 0.95
# Use a multiscale time-frequency dictionary
wsize, tstep = [4, 16], [2, 4]
n_tfmxne_iter = 10
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio,
n_tfmxne_iter=n_tfmxne_iter, loose=loose,
depth=depth, tol=1e-3,
wsize=wsize, tstep=tstep, return_as_dipoles=True,
return_residual=True)
# Crop to remove edges
for dip in dipoles:
dip.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
"""
Explanation: Run iterative reweighted multidict TF-MxNE solver
End of explanation
"""
stc = make_stc_from_dipoles(dipoles, forward['src'])
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="irTF-MxNE (cond %s)"
% condition)
"""
Explanation: Generate stc from dipoles
End of explanation
"""
ylim = dict(grad=[-300, 300])
evoked.pick_types(meg='grad')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True)
residual.pick_types(meg='grad')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True)
"""
Explanation: Show the evoked response and the residual for gradiometers
End of explanation
"""
|
ericmjl/influenza-reassortment-analysis | 13 Where are human-adapted mutations found?.ipynb | mit | # Open the sequences
sequences = SeqIO.to_dict(SeqIO.parse('20150312_PB2_CDS_from_Whole_Genomes.fasta', 'fasta'))
# sequences
data = pd.read_csv('20150312_PB2_CDS_from_Whole_Genomes.csv', parse_dates=['Collection Date'])
data['Strain Name'] = data['Strain Name'].str.split('(').str[0]
data['Sequence Accession'] = data['Sequence Accession'].str.strip('*')
data
accession_strain = dict(zip(data['Sequence Accession'], data['Strain Name']))
accession_strain
G = nx.read_gpickle('20141103 All IRD Final Graph.pkl')
G.nodes()
"""
Explanation: Introduction
There are particular polymorphisms that are found that have been experimentally shown (on a limited scale) to confer host adaptation mutations. I want to see whether there is evidence of these polymorphisms flowing into human populations, or if they instead arise independently post-infection.
12 March 2015: For simplicity, I am first starting with the PB2 gene only, and looking only at E627K.
End of explanation
"""
def tokenize_string(string, token_length, start_pos, end_pos):
"""
Takes an input string, and returns a dictionary where:
- key = a token string of length token_length, and
- value = that token's starting position
"""
assert end_pos > start_pos, "End position must be greater than start position."
assert (end_pos - start_pos) > token_length, "Token length must be smaller than the position range."
tokens = dict()
for i in range(start_pos, end_pos-token_length):
tokens[string[i:i+token_length]] = i
return tokens
# tokenize_string(eg_seq, 20, 627-30, 627+30)
# Of the tokenized strings, there will be a few with a "best match" i.e. minimum distance.
# We are only interested in comparing against that one.
def get_best_tokens(tokens, query):
"""
Takes in:
- a dictionary containing the tokens of a string & their position.
- a query string of equal length to the keys in the dictionary.
Returns:
- a list of tokens that have the smallest Levenshtein distance to the query string.
"""
from Levenshtein import distance
best_tokens = set()
min_distance = len(query)
for token, position in tokens.items():
if distance(token, query) < min_distance:
min_distance = distance(token, query)
best_tokens = set()
best_tokens.add(token)
if distance(token, query) == min_distance:
best_tokens.add(token)
if distance(token, query) > min_distance:
pass
return(list(best_tokens))
# Let's then setup a reference string that contains a E at the 627th position in the a.a. sequence.
# - The first letter in the reference string will be the letter E.
# - The length of the string will be 20 a.a. long.
# Looking to the future, we call it a match if at least 17 out of 20 a.a. are identical.
eg_seq = str(sequences['CY005153'].seq)
refstring = eg_seq[626:626+20]
idx = 6720
for i, (accession, seqrecord) in enumerate(sequences.items()):
tokens = tokenize_string(str(seqrecord.seq), 20, 627-20, 627+20)
not_present = set()
best_tokens = get_best_tokens(tokens, refstring)
strain_name = accession_strain[accession]
if len(best_tokens) == 1 and strain_name in G.nodes():
# Add in the polymorphism into the node.
G.node[strain_name]['627aa'] = best_tokens[0][0]  # first residue of the best-matching token, i.e. position 627
if len(best_tokens) > 1:
print(strain_name, strain_name in G.nodes(), best_tokens)
refstring
[k for k,v in accession_strain.items() if v == 'A/South Australia/36/2000']
# Which nodes don't have the '627aa' metadata?
[n for n, d in G.nodes(data=True) if '627aa' not in d.keys()]
tokens = tokenize_string(str(sequences['CY017154'].seq), 20, 627-20, 627+20)
print(tokens)
for k, v in tokens.items():
print(distance(refstring, k))
from collections import Counter
Counter([d['host_species'] for n, d in G.nodes(data=True) if '627aa' in d.keys() and d['627aa'] == 'K'])
"""
Explanation: We need a function to figure out what polymorphism exists in a particular position along the PB2 gene. However, this is complicated by the fact that there are PB2 sequences with indels, hence the exact numbering might not be the same. To get around this, I will take a "best match" approach that Justin Z. coded up earlier on. To reduce the search space, I will also tokenize a short region ±40 a.a.
End of explanation
"""
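The same best-match idea can also be sketched with the standard library's difflib, using a similarity ratio in place of Levenshtein distance. The token strings below are toy examples, not real PB2 sequence:

```python
from difflib import SequenceMatcher

def best_tokens_difflib(tokens, query):
    """Return the token(s) most similar to query, by difflib similarity ratio."""
    scored = {t: SequenceMatcher(None, t, query).ratio() for t in tokens}
    best = max(scored.values())
    return [t for t, r in scored.items() if r == best]

tokens = ["MEVKQRSTAA", "MEVKQRSTVV", "AAAAAAAAAA"]
print(best_tokens_difflib(tokens, "MEVKQRSTAA"))  # ['MEVKQRSTAA']
```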
|
Chipe1/aima-python | games4e.ipynb | mit | from collections import namedtuple, Counter, defaultdict
import random
import math
import functools
cache = functools.lru_cache(10**6)
class Game:
"""A game is similar to a problem, but it has a terminal test instead of
a goal test, and a utility for each terminal state. To create a game,
subclass this class and implement `actions`, `result`, `is_terminal`,
and `utility`. You will also need to set the .initial attribute to the
initial state; this can be done in the constructor."""
def actions(self, state):
"""Return a collection of the allowable moves from this state."""
raise NotImplementedError
def result(self, state, move):
"""Return the state that results from making a move from a state."""
raise NotImplementedError
def is_terminal(self, state):
"""Return True if this is a final state for the game."""
return not self.actions(state)
def utility(self, state, player):
"""Return the value of this final state to player."""
raise NotImplementedError
def play_game(game, strategies: dict, verbose=False):
"""Play a turn-taking game. `strategies` is a {player_name: function} dict,
where function(state, game) is used to get the player's move."""
state = game.initial
while not game.is_terminal(state):
player = state.to_move
move = strategies[player](game, state)
state = game.result(state, move)
if verbose:
print('Player', player, 'move:', move)
print(state)
return state
"""
Explanation: Game Tree Search
We start with defining the abstract class Game, for turn-taking n-player games. We rely on, but do not define yet, the concept of a state of the game; we'll see later how individual games define states. For now, all we require is that a state has a state.to_move attribute, which gives the name of the player whose turn it is. ("Name" will be something like 'X' or 'O' for tic-tac-toe.)
We also define play_game, which takes a game and a dictionary of {player_name: strategy_function} pairs, and plays out the game, on each turn checking state.to_move to see whose turn it is, and then getting the strategy function for that player and applying it to the game and the state to get a move.
End of explanation
"""
def minimax_search(game, state):
"""Search game tree to determine best move; return (value, move) pair."""
player = state.to_move
def max_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a))
if v2 > v:
v, move = v2, a
return v, move
def min_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a))
if v2 < v:
v, move = v2, a
return v, move
return max_value(state)
infinity = math.inf
def alphabeta_search(game, state):
"""Search game to determine best action; use alpha-beta pruning.
As in [Figure 5.7], this version searches all the way to the leaves."""
player = state.to_move
def max_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a), alpha, beta)
if v2 > v:
v, move = v2, a
alpha = max(alpha, v)
if v >= beta:
return v, move
return v, move
def min_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a), alpha, beta)
if v2 < v:
v, move = v2, a
beta = min(beta, v)
if v <= alpha:
return v, move
return v, move
return max_value(state, -infinity, +infinity)
"""
Explanation: Minimax-Based Game Search Algorithms
We will define several game search algorithms. Each takes two inputs, the game we are playing and the current state of the game, and returns a a (value, move) pair, where value is the utility that the algorithm computes for the player whose turn it is to move, and move is the move itself.
First we define minimax_search, which exhaustively searches the game tree to find an optimal move (assuming both players play optimally), and alphabeta_search, which does the same computation, but prunes parts of the tree that could not possibly have an effect on the optimal move.
End of explanation
"""
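As a minimal, self-contained illustration of the minimax recursion (independent of the Game class above), here is minimax on a tiny hand-built tree whose leaves are utilities and whose internal nodes are lists of children:

```python
def minimax(node, maximizing=True):
    """Value of a tree where leaves are ints and internal nodes are lists of children."""
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A classic two-ply example: MAX picks the branch whose MIN value is largest.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))  # 3
```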
class TicTacToe(Game):
"""Play TicTacToe on an `height` by `width` board, needing `k` in a row to win.
'X' plays first against 'O'."""
def __init__(self, height=3, width=3, k=3):
self.k = k # k in a row
self.squares = {(x, y) for x in range(width) for y in range(height)}
self.initial = Board(height=height, width=width, to_move='X', utility=0)
def actions(self, board):
"""Legal moves are any square not yet taken."""
return self.squares - set(board)
def result(self, board, square):
"""Place a marker for current player on square."""
player = board.to_move
board = board.new({square: player}, to_move=('O' if player == 'X' else 'X'))
win = k_in_row(board, player, square, self.k)
board.utility = (0 if not win else +1 if player == 'X' else -1)
return board
def utility(self, board, player):
"""Return the value to player; 1 for win, -1 for loss, 0 otherwise."""
return board.utility if player == 'X' else -board.utility
def is_terminal(self, board):
"""A board is a terminal state if it is won or there are no empty squares."""
return board.utility != 0 or len(self.squares) == len(board)
def display(self, board): print(board)
def k_in_row(board, player, square, k):
"""True if player has k pieces in a line through square."""
def in_row(x, y, dx, dy): return 0 if board[x, y] != player else 1 + in_row(x + dx, y + dy, dx, dy)
return any(in_row(*square, dx, dy) + in_row(*square, -dx, -dy) - 1 >= k
for (dx, dy) in ((0, 1), (1, 0), (1, 1), (1, -1)))
"""
Explanation: A Simple Game: Tic-Tac-Toe
Now that we have the notion of an abstract game and some search functions, it is time to define a real game: a simple one, tic-tac-toe. Moves are (x, y) pairs denoting squares, where (0, 0) is the top left, and (2, 2) is the bottom right (on a board of size height=width=3).
End of explanation
"""
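The recursive counting inside k_in_row can be sketched standalone on a plain {(x, y): player} dict; the board contents below are made up:

```python
def count_in_direction(board, player, x, y, dx, dy):
    """Number of consecutive `player` squares starting at (x, y), stepping by (dx, dy)."""
    n = 0
    while board.get((x, y)) == player:
        n += 1
        x, y = x + dx, y + dy
    return n

board = {(0, 0): 'X', (1, 0): 'X', (2, 0): 'X'}
# Pieces through (1, 0) along the horizontal: count both ways, minus the shared square.
total = (count_in_direction(board, 'X', 1, 0, 1, 0)
         + count_in_direction(board, 'X', 1, 0, -1, 0) - 1)
print(total)  # 3
```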
class Board(defaultdict):
"""A board has the player to move, a cached utility value,
and a dict of {(x, y): player} entries, where player is 'X' or 'O'."""
empty = '.'
off = '#'
def __init__(self, width=8, height=8, to_move=None, **kwds):
self.__dict__.update(width=width, height=height, to_move=to_move, **kwds)
def new(self, changes: dict, **kwds) -> 'Board':
"Given a dict of {(x, y): contents} changes, return a new Board with the changes."
board = Board(width=self.width, height=self.height, **kwds)
board.update(self)
board.update(changes)
return board
def __missing__(self, loc):
x, y = loc
if 0 <= x < self.width and 0 <= y < self.height:
return self.empty
else:
return self.off
def __hash__(self):
return hash(tuple(sorted(self.items()))) + hash(self.to_move)
def __repr__(self):
def row(y): return ' '.join(self[x, y] for x in range(self.width))
return '\n'.join(map(row, range(self.height))) + '\n'
"""
Explanation: States in tic-tac-toe (and other games) will be represented as a Board, which is a subclass of defaultdict that in general will consist of {(x, y): contents} pairs, for example {(0, 0): 'X', (1, 1): 'O'} might be the state of the board after two moves. Besides the contents of squares, a board also has some attributes:
- .to_move to name the player whose move it is;
- .width and .height to give the size of the board (both 3 in tic-tac-toe, but other numbers in related games);
- possibly other attributes, as specified by keywords.
As a defaultdict, the Board class has a __missing__ method, which returns empty for squares that have not been assigned but are within the width × height boundaries, or off otherwise. The class has a __hash__ method, so instances can be stored in hash tables.
End of explanation
"""
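The __missing__ hook can be illustrated in isolation: subclassing dict and defining __missing__ controls what a lookup of an absent key returns. The 3x3 bounds below are illustrative:

```python
class Grid(dict):
    """Absent in-bounds squares read as '.', out-of-bounds squares as '#'."""
    def __missing__(self, loc):
        x, y = loc
        return '.' if 0 <= x < 3 and 0 <= y < 3 else '#'

g = Grid({(1, 1): 'X'})
print(g[1, 1], g[0, 0], g[5, 5])  # X . #
```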
def random_player(game, state): return random.choice(list(game.actions(state)))
def player(search_algorithm):
"""A game player who uses the specified search algorithm"""
return lambda game, state: search_algorithm(game, state)[1]
"""
Explanation: Players
We need an interface for players. I'll represent a player as a callable that will be passed two arguments: (game, state) and will return a move.
The function player creates a player out of a search algorithm, but you can create your own players as functions, as is done with random_player below:
End of explanation
"""
play_game(TicTacToe(), dict(X=random_player, O=player(alphabeta_search)), verbose=True).utility
"""
Explanation: Playing a Game
We're ready to play a game. I'll set up a match between a random_player (who chooses randomly from the legal moves) and a player(alphabeta_search) (who makes the optimal alpha-beta move; practical for tic-tac-toe, but not for large games). The player(alphabeta_search) will never lose, but if random_player is lucky, it will be a tie.
End of explanation
"""
play_game(TicTacToe(), dict(X=player(alphabeta_search), O=player(minimax_search)), verbose=True).utility
"""
Explanation: The alpha-beta player will never lose, but sometimes the random player can stumble into a draw. When two optimal (alpha-beta or minimax) players compete, it will always be a draw:
End of explanation
"""
class ConnectFour(TicTacToe):
def __init__(self): super().__init__(width=7, height=6, k=4)
def actions(self, board):
"""In each column you can play only the lowest empty square in the column."""
return {(x, y) for (x, y) in self.squares - set(board)
if y == board.height - 1 or (x, y + 1) in board}
play_game(ConnectFour(), dict(X=random_player, O=random_player), verbose=True).utility
"""
Explanation: Connect Four
Connect Four is a variant of tic-tac-toe, played on a larger (7 x 6) board, and with the restriction that in any column you can only play in the lowest empty square in the column.
End of explanation
"""
def minimax_search_tt(game, state):
"""Search game to determine best move; return (value, move) pair."""
player = state.to_move
@cache
def max_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a))
if v2 > v:
v, move = v2, a
return v, move
@cache
def min_value(state):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a))
if v2 < v:
v, move = v2, a
return v, move
return max_value(state)
"""
Explanation: Transposition Tables
By treating the game tree as a tree, we can arrive at the same state through different paths, and end up duplicating effort. In state-space search, we kept a table of reached states to prevent this. For game-tree search, we can achieve the same effect by applying the @cache decorator to the min_value and max_value functions. We'll use the suffix _tt to indicate a function that uses these transisiton tables.
End of explanation
"""
def cache1(function):
"Like lru_cache(None), but only considers the first argument of function."
cache = {}
def wrapped(x, *args):
if x not in cache:
cache[x] = function(x, *args)
return cache[x]
return wrapped
def alphabeta_search_tt(game, state):
"""Search game to determine best action; use alpha-beta pruning.
As in [Figure 5.7], this version searches all the way to the leaves."""
player = state.to_move
@cache1
def max_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a), alpha, beta)
if v2 > v:
v, move = v2, a
alpha = max(alpha, v)
if v >= beta:
return v, move
return v, move
@cache1
def min_value(state, alpha, beta):
if game.is_terminal(state):
return game.utility(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a), alpha, beta)
if v2 < v:
v, move = v2, a
beta = min(beta, v)
if v <= alpha:
return v, move
return v, move
return max_value(state, -infinity, +infinity)
%time play_game(TicTacToe(), {'X':player(alphabeta_search_tt), 'O':player(minimax_search_tt)})
%time play_game(TicTacToe(), {'X':player(alphabeta_search), 'O':player(minimax_search)})
"""
Explanation: For alpha-beta search, we can still use a cache, but it should be based just on the state, not on whatever values alpha and beta have.
End of explanation
"""
def cutoff_depth(d):
"""A cutoff function that searches to depth d."""
return lambda game, state, depth: depth > d
def h_alphabeta_search(game, state, cutoff=cutoff_depth(6), h=lambda s, p: 0):
    """Search game to determine best action; use alpha-beta pruning.
    Unlike [Figure 5.7], this version cuts off search at the given depth and
    estimates nonterminal cutoff states with the heuristic function h."""
player = state.to_move
@cache1
def max_value(state, alpha, beta, depth):
if game.is_terminal(state):
return game.utility(state, player), None
if cutoff(game, state, depth):
return h(state, player), None
v, move = -infinity, None
for a in game.actions(state):
v2, _ = min_value(game.result(state, a), alpha, beta, depth+1)
if v2 > v:
v, move = v2, a
alpha = max(alpha, v)
if v >= beta:
return v, move
return v, move
@cache1
def min_value(state, alpha, beta, depth):
if game.is_terminal(state):
return game.utility(state, player), None
if cutoff(game, state, depth):
return h(state, player), None
v, move = +infinity, None
for a in game.actions(state):
v2, _ = max_value(game.result(state, a), alpha, beta, depth + 1)
if v2 < v:
v, move = v2, a
beta = min(beta, v)
if v <= alpha:
return v, move
return v, move
return max_value(state, -infinity, +infinity, 0)
%time play_game(TicTacToe(), {'X':player(h_alphabeta_search), 'O':player(h_alphabeta_search)})
%time play_game(ConnectFour(), {'X':player(h_alphabeta_search), 'O':random_player}, verbose=True).utility
%time play_game(ConnectFour(), {'X':player(h_alphabeta_search), 'O':player(h_alphabeta_search)}, verbose=True).utility
class CountCalls:
"""Delegate all attribute gets to the object, and count them in ._counts"""
def __init__(self, obj):
self._object = obj
self._counts = Counter()
def __getattr__(self, attr):
"Delegate to the original object, after incrementing a counter."
self._counts[attr] += 1
return getattr(self._object, attr)
def report(game, searchers):
for searcher in searchers:
game = CountCalls(game)
searcher(game, game.initial)
print('Result states: {:7,d}; Terminal tests: {:7,d}; for {}'.format(
game._counts['result'], game._counts['is_terminal'], searcher.__name__))
report(TicTacToe(), (alphabeta_search_tt, alphabeta_search, h_alphabeta_search, minimax_search_tt))
"""
Explanation: Heuristic Cutoffs
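The h parameter of h_alphabeta_search defaults to a constant zero. A simple, purely hypothetical evaluation function for a tic-tac-toe-like position (weighting the center and corners, which lie on more winning lines) might look like this; it could be passed as h=center_corner_h:

```python
def center_corner_h(board, player):
    """Hypothetical heuristic: weight marks on the center and corners,
    which participate in more winning lines. `board` is a {(x, y): mark} dict."""
    strong = {(1, 1), (0, 0), (0, 2), (2, 0), (2, 2)}  # center + corners
    score = 0
    for square, mark in board.items():
        weight = 2 if square in strong else 1
        score += weight if mark == player else -weight
    return score / 10  # scaled to stay below the +-1 win/loss utilities

position = {(1, 1): 'X', (0, 1): 'O'}
```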
End of explanation
"""
class Node:
    """Unfinished sketch: a node in the Monte Carlo search tree."""
    def __init__(self, parent=None):
        self.parent = parent

def mcts(state, game, N=1000):
"""
Explanation: Monte Carlo Tree Search
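MCTS selection is usually driven by the UCB1 formula, which the scratch cells below experiment with. A standalone sketch (using C = 1.4, close to the common sqrt(2) default) is:

```python
import math

def ucb1(U, N, parent_N, C=1.4):
    """UCB1 score: average utility U/N plus an exploration bonus.
    Unvisited nodes (N == 0) get infinite priority so they are tried first."""
    if N == 0:
        return math.inf
    return U / N + C * math.sqrt(math.log(parent_N) / N)

# A well-explored, high-value child vs. a barely tried one (parent visited 100x):
exploit = ucb1(60, 79, 100)
explore = ucb1(1, 2, 100)
```

With these numbers the barely tried child scores higher, so selection balances exploiting known-good moves against exploring uncertain ones.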
End of explanation
"""
t = CountCalls(TicTacToe())
play_game(t, dict(X=player(minimax_search), O=player(minimax_search)), verbose=True)
t._counts
for tactic in (three, fork, center, opposite_corner, corner, any):
    for s in squares:
        if tactic(board, s, player): return s
    for s in squares:
        if tactic(board, s, opponent): return s
def ucb(U, N, C=2**0.5, parentN=100):
return round(U/N + C * math.sqrt(math.log(parentN)/N), 2)
{C: (ucb(60, 79, C), ucb(1, 10, C), ucb(2, 11, C))
for C in (1.4, 1.5)}
def ucb(U, N, parentN=100, C=2):
return U/N + C * math.sqrt(math.log(parentN)/N)
C = 1.4
class Node:
def __init__(self, name, children=(), U=0, N=0, parent=None, p=0.5):
self.__dict__.update(name=name, U=U, N=N, parent=parent, children=children, p=p)
for c in children:
c.parent = self
def __repr__(self):
return '{}:{}/{}={:.0%}{}'.format(self.name, self.U, self.N, self.U/self.N, self.children)
def select(n):
if n.children:
return select(max(n.children, key=ucb))
else:
return n
def back(n, amount):
if n:
n.N += 1
n.U += amount
back(n.parent, 1 - amount)
def one(root):
n = select(root)
amount = int(random.uniform(0, 1) < n.p)
back(n, amount)
def ucb(n):
return (float('inf') if n.N == 0 else
n.U / n.N + C * math.sqrt(math.log(n.parent.N)/n.N))
tree = Node('root', [Node('a', p=.8, children=[Node('a1', p=.05),
Node('a2', p=.25,
children=[Node('a2a', p=.7), Node('a2b')])]),
Node('b', p=.5, children=[Node('b1', p=.6,
children=[Node('b1a', p=.3), Node('b1b')]),
Node('b2', p=.4)]),
Node('c', p=.1)])
for i in range(100):
one(tree);
for c in tree.children: print(c)
'select', select(tree), 'tree', tree
us = (100, 50, 25, 10, 5, 1)
infinity = float('inf')
@lru_cache(None)
def f1(n, denom):
return (0 if n == 0 else
infinity if n < 0 or not denom else
min(1 + f1(n - denom[0], denom),
f1(n, denom[1:])))
@lru_cache(None)
def f2(n, denom):
@lru_cache(None)
def f(n):
return (0 if n == 0 else
infinity if n < 0 else
1 + min(f(n - d) for d in denom))
return f(n)
@lru_cache(None)
def f3(n, denom):
return (0 if n == 0 else
infinity if n < 0 or not denom else
min(k + f2(n - k * denom[0], denom[1:])
for k in range(1 + n // denom[0])))
def g(n, d=us): return f1(n, d), f2(n, d), f3(n, d)
n = 12345
%time f1(n, us)
%time f2(n, us)
%time f3(n, us)
"""
Explanation: Heuristic Search Algorithms
End of explanation
"""
|
google/trax | trax/layers/intro.ipynb | apache-2.0 | # Copyright 2018 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
# Import Trax
! pip install -q -U trax
! pip install -q tensorflow
from trax import fastmath
from trax import layers as tl
from trax import shapes
from trax.fastmath import numpy as jnp # For use in defining new layer types.
from trax.shapes import ShapeDtype
from trax.shapes import signature
# Settings and utilities for handling inputs, outputs, and object properties.
np.set_printoptions(precision=3) # Reduce visual noise from extra digits.
def show_layer_properties(layer_obj, layer_name):
template = ('{}.n_in: {}\n'
'{}.n_out: {}\n'
'{}.sublayers: {}\n'
'{}.weights: {}\n')
print(template.format(layer_name, layer_obj.n_in,
layer_name, layer_obj.n_out,
layer_name, layer_obj.sublayers,
layer_name, layer_obj.weights))
"""
Explanation: Trax Layers Intro
This notebook introduces the core concepts of the Trax library through a series of code samples and explanations. The topics covered in following sections are:
Layers: the basic building blocks and how to combine them
Inputs and Outputs: how data streams flow through layers
Defining New Layer Classes (if combining existing layers isn't enough)
Testing and Debugging Layer Classes
General Setup
Execute the following few cells (once) before running any of the code samples in this notebook.
End of explanation
"""
relu = tl.Relu()
x = np.array([[-2, -1, 0, 1, 2],
[-20, -10, 0, 10, 20]])
y = relu(x)
# Show input, output, and two layer properties.
print(f'x:\n{x}\n\n'
f'relu(x):\n{y}\n\n'
f'Number of inputs expected by this layer: {relu.n_in}\n'
f'Number of outputs promised by this layer: {relu.n_out}')
"""
Explanation: 1. Layers
The Layer class represents Trax's basic building blocks:
```
class Layer:
"""Base class for composable layers in a deep learning network.
Layers are the basic building blocks for deep learning models. A Trax layer
computes a function from zero or more inputs to zero or more outputs,
optionally using trainable weights (common) and non-parameter state (not
common). ...
...
```
Layers compute functions.
A layer computes a function from zero or more inputs to zero or more outputs.
The inputs and outputs are NumPy arrays or JAX objects behaving as NumPy arrays.
The simplest layers, those with no weights or sublayers, can be used without
initialization. You can think of them as (pure) mathematical functions that can
be plugged into neural networks.
For ease of testing and interactive exploration, layer objects implement the
__call__ method, so you can call them directly on input data:
y = my_layer(x)
Layers are also objects, so you can inspect their properties. For example:
print(f'Number of inputs expected by this layer: {my_layer.n_in}')
Example 1. tl.Relu $[n_{in} = 1, n_{out} = 1]$
End of explanation
"""
concat = tl.Concatenate()
x0 = np.array([[1, 2, 3],
[4, 5, 6]])
x1 = np.array([[10, 20, 30],
[40, 50, 60]])
y = concat([x0, x1])
print(f'x0:\n{x0}\n\n'
f'x1:\n{x1}\n\n'
f'concat([x1, x2]):\n{y}\n\n'
f'Number of inputs expected by this layer: {concat.n_in}\n'
f'Number of outputs promised by this layer: {concat.n_out}')
"""
Explanation: Example 2. tl.Concatenate $[n_{in} = 2, n_{out} = 1]$
End of explanation
"""
concat3 = tl.Concatenate(n_items=3, axis=0)
x0 = np.array([[1, 2, 3],
[4, 5, 6]])
x1 = np.array([[10, 20, 30],
[40, 50, 60]])
x2 = np.array([[100, 200, 300],
[400, 500, 600]])
y = concat3([x0, x1, x2])
print(f'x0:\n{x0}\n\n'
f'x1:\n{x1}\n\n'
f'x2:\n{x2}\n\n'
f'concat3([x0, x1, x2]):\n{y}')
"""
Explanation: Layers are configurable.
Many layer types have creation-time parameters for flexibility. The
Concatenate layer type, for instance, has two optional parameters:
axis: index of axis along which to concatenate the tensors; default value of -1 means to use the last axis.
n_items: number of tensors to join into one by concatenation; default value is 2.
The following example shows Concatenate configured for 3 input tensors,
and concatenation along the initial $(0^{th})$ axis.
Example 3. tl.Concatenate(n_items=3, axis=0)
End of explanation
"""
layer_norm = tl.LayerNorm()
x = np.array([[-2, -1, 0, 1, 2],
[1, 2, 3, 4, 5],
[10, 20, 30, 40, 50]]).astype(np.float32)
layer_norm.init(shapes.signature(x))
y = layer_norm(x)
print(f'x:\n{x}\n\n'
f'layer_norm(x):\n{y}\n')
print(f'layer_norm.weights:\n{layer_norm.weights}')
"""
Explanation: Layers are trainable.
Many layer types include weights that affect the computation of outputs from
inputs, and they use back-propagated gradients to update those weights.
🚧🚧 A very small subset of layer types, such as BatchNorm, also include
modifiable weights (called state) that are updated based on forward-pass
inputs/computation rather than back-propagated gradients.
Initialization
Trainable layers must be initialized before use. Trax can take care of this
as part of the overall training process. In other settings (e.g., in tests or
interactively in a Colab notebook), you need to initialize the
outermost/topmost layer explicitly. For this, use init:
```
def init(self, input_signature, rng=None, use_cache=False):
"""Initializes weights/state of this layer and its sublayers recursively.
Initialization creates layer weights and state, for layers that use them.
It derives the necessary array shapes and data types from the layer's input
signature, which is itself just shape and data type information.
For layers without weights or state, this method safely does nothing.
This method is designed to create weights/state only once for each layer
instance, even if the same layer instance occurs in multiple places in the
network. This enables weight sharing to be implemented as layer sharing.
Args:
input_signature: `ShapeDtype` instance (if this layer takes one input)
or list/tuple of `ShapeDtype` instances.
rng: Single-use random number generator (JAX PRNG key), or `None`;
if `None`, use a default computed from an integer 0 seed.
use_cache: If `True`, and if this layer instance has already been
initialized elsewhere in the network, then return special marker
values -- tuple `(GET_WEIGHTS_FROM_CACHE, GET_STATE_FROM_CACHE)`.
Else return this layer's newly initialized weights and state.
Returns:
A `(weights, state)` tuple.
"""
```
Input signatures can be built from scratch using ShapeDtype objects, or can
be derived from data via the signature function (in module shapes):
```
def signature(obj):
  """Returns a `ShapeDtype` signature for the given `obj`.
A signature is either a ShapeDtype instance or a tuple of ShapeDtype
instances. Note that this function is permissive with respect to its inputs
(accepts lists or tuples or dicts, and underlying objects can be any type
as long as they have shape and dtype attributes) and returns the corresponding
nested structure of ShapeDtype.
Args:
obj: An object that has shape and dtype attributes, or a list/tuple/dict
of such objects.
Returns:
A corresponding nested structure of ShapeDtype instances.
"""
```
Example 4. tl.LayerNorm $[n_{in} = 1, n_{out} = 1]$
End of explanation
"""
layer_block = tl.Serial(
tl.Relu(),
tl.LayerNorm(),
)
x = np.array([[-2, -1, 0, 1, 2],
[-20, -10, 0, 10, 20]]).astype(np.float32)
layer_block.init(shapes.signature(x))
y = layer_block(x)
print(f'x:\n{x}\n\n'
f'layer_block(x):\n{y}')
"""
Explanation: Layers combine into layers.
The Trax library authors encourage users to build networks and network
components as combinations of existing layers, by means of a small set of
combinator layers. A combinator makes a list of layers behave as a single
layer -- by combining the sublayer computations yet looking from the outside
like any other layer. The combined layer, like other layers, can:
compute outputs from inputs,
update parameters from gradients, and
combine with yet more layers.
Combine with Serial
The most common way to combine layers is with the Serial combinator:
```
class Serial(base.Layer):
"""Combinator that applies layers serially (by function composition).
This combinator is commonly used to construct deep networks, e.g., like this::
mlp = tl.Serial(
tl.Dense(128),
tl.Relu(),
tl.Dense(10),
)
A Serial combinator uses stack semantics to manage data for its sublayers.
Each sublayer sees only the inputs it needs and returns only the outputs it
has generated. The sublayers interact via the data stack. For instance, a
sublayer k, following sublayer j, gets called with the data stack in the
state left after layer j has applied. The Serial combinator then:
- takes n_in items off the top of the stack (n_in = k.n_in) and calls
layer k, passing those items as arguments; and
- takes layer k's n_out return values (n_out = k.n_out) and pushes
them onto the data stack.
A Serial instance with no sublayers acts as a special-case (but useful)
1-input 1-output no-op.
"""
```
If one layer has the same number of outputs as the next layer has inputs (which
is the usual case), the successive layers behave like function composition:
```
h(.) = g(f(.))
layer_h = Serial(
layer_f,
layer_g,
)
```
Note how, inside `Serial`, function composition is expressed naturally as a
succession of operations, so that no nested parentheses are needed.
Example 5. y = layer_norm(relu(x)) $[n_{in} = 1, n_{out} = 1]$
End of explanation
"""
print(f'layer_block: {layer_block}\n\n'
f'layer_block.weights: {layer_block.weights}')
"""
Explanation: And we can inspect the block as a whole, as if it were just another layer:
Example 5'. Inspecting a Serial layer.
End of explanation
"""
relu = tl.Relu()
times_100 = tl.Fn("Times100", lambda x: x * 100.0)
branch_relu_t100 = tl.Branch(relu, times_100)
x = np.array([[-2, -1, 0, 1, 2],
[-20, -10, 0, 10, 20]])
branch_relu_t100.init(shapes.signature(x))
y0, y1 = branch_relu_t100(x)
print(f'x:\n{x}\n\n'
f'y0:\n{y0}\n\n'
f'y1:\n{y1}')
"""
Explanation: Combine with Branch
The Branch combinator arranges layers into parallel computational channels:
```
def Branch(*layers, name='Branch'):
"""Combinator that applies a list of layers in parallel to copies of inputs.
Each layer in the input list is applied to as many inputs from the stack
as it needs, and their outputs are successively combined on stack.
For example, suppose one has three layers:
- F: 1 input, 1 output
- G: 3 inputs, 1 output
- H: 2 inputs, 2 outputs (h1, h2)
Then Branch(F, G, H) will take 3 inputs and give 4 outputs:
- inputs: a, b, c
- outputs: F(a), G(a, b, c), h1, h2 where h1, h2 = H(a, b)
As an important special case, a None argument to Branch acts as if it takes
one argument, which it leaves unchanged. (It acts as a one-arg no-op.)
Args:
*layers: List of layers.
name: Descriptive name for this layer.
Returns:
A branch layer built from the given sublayers.
"""
```
Residual blocks, for example, are implemented using Branch:
```
def Residual(*layers, shortcut=None):
"""Wraps a series of layers with a residual connection.
Args:
*layers: One or more layers, to be applied in series.
shortcut: If None (the usual case), the Residual layer computes the
element-wise sum of the stack-top input with the output of the layer
series. If specified, the shortcut layer applies to a copy of the
inputs and (elementwise) adds its output to the output from the main
layer series.
Returns:
A layer representing a residual connection paired with a layer series.
"""
layers = _ensure_flat(layers)
layer = layers[0] if len(layers) == 1 else Serial(layers)
return Serial(
Branch(shortcut, layer),
Add(),
)
```
Here's a simple code example to highlight the mechanics.
Example 6. Branch
End of explanation
"""
# Define new layer type.
def Gcd():
"""Returns a layer to compute the greatest common divisor, elementwise."""
return tl.Fn('Gcd', lambda x0, x1: jnp.gcd(x0, x1))
# Use it.
gcd = Gcd()
x0 = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
x1 = np.array([11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
y = gcd((x0, x1))
print(f'x0:\n{x0}\n\n'
f'x1:\n{x1}\n\n'
f'gcd((x0, x1)):\n{y}')
"""
Explanation: 2. Inputs and Outputs
Trax allows layers to have multiple input streams and output streams. When
designing a network, you have the flexibility to use layers that:
process a single data stream ($n_{in} = n_{out} = 1$),
process multiple parallel data streams ($n_{in} = n_{out} = 2, 3, ... $),
split or inject data streams ($n_{in} < n_{out}$), or
merge or remove data streams ($n_{in} > n_{out}$).
We saw in section 1 the example of Residual, which involves both a split and a merge:
...
return Serial(
Branch(shortcut, layer),
Add(),
)
In other words, layer by layer:
Branch(shortcut, layers): makes two copies of the single incoming data stream, passes one copy via the shortcut (typically a no-op), and processes the other copy via the given layers (applied in series). [$n_{in} = 1$, $n_{out} = 2$]
Add(): combines the two streams back into one by adding two tensors elementwise. [$n_{in} = 2$, $n_{out} = 1$]
Data Stack
Trax supports flexible data flows through a network via a data stack, which is
managed by the Serial combinator:
```
class Serial(base.Layer):
"""Combinator that applies layers serially (by function composition).
...
A Serial combinator uses stack semantics to manage data for its sublayers.
Each sublayer sees only the inputs it needs and returns only the outputs it
has generated. The sublayers interact via the data stack. For instance, a
sublayer k, following sublayer j, gets called with the data stack in the
state left after layer j has applied. The Serial combinator then:
- takes n_in items off the top of the stack (n_in = k.n_in) and calls
layer k, passing those items as arguments; and
- takes layer k's n_out return values (n_out = k.n_out) and pushes
them onto the data stack.
...
"""
```
Simple Case 1 -- Each layer takes one input and has one output.
This is in effect a single data stream pipeline, and the successive layers
behave like function composition:
```
s(.) = h(g(f(.)))
layer_s = Serial(
layer_f,
layer_g,
layer_h,
)
```
Note how, inside `Serial`, function composition is expressed naturally as a
succession of operations, so that no nested parentheses are needed and the
order of operations matches the textual order of layers.
Simple Case 2 -- Each layer consumes all outputs of the preceding layer.
This is still a single pipeline, but data streams internal to it can split and
merge. The Residual example above illustrates this kind.
General Case -- Successive layers interact via the data stack.
As described in the Serial class docstring, each layer gets its inputs from
the data stack after the preceding layer has put its outputs onto the stack.
This covers the simple cases above, but also allows for more flexible data
interactions between non-adjacent layers. The following example is schematic:
```
x, y_target = get_batch_of_labeled_data()
model_plus_eval = Serial(
my_fancy_deep_model(), # Takes one arg (x) and has one output (y_hat)
my_eval(), # Takes two args (y_hat, y_target) and has one output (score)
)
eval_score = model_plus_eval((x, y_target))
```
Here is the corresponding progression of stack states:
At start: --empty--
After get_batch_of_labeled_data(): x, y_target
After my_fancy_deep_model(): y_hat, y_target
After my_eval(): score
Note in particular how the application of the model (between stack states 1
and 2) only uses and affects the top element on the stack: x --> y_hat.
The rest of the data stack (y_target) comes in use only later, for the
eval function.
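The stack discipline itself can be mimicked in a few lines of plain Python (illustrative only, not Trax code), with the top of the stack at the front of a list:

```python
# Each "layer" is a (n_in, function) pair: pop n_in items, push the outputs.
def run_serial(layers, stack):
    stack = list(stack)
    for n_in, f in layers:
        args = stack[:n_in]          # take n_in items off the top
        del stack[:n_in]
        outs = f(*args)
        if not isinstance(outs, tuple):
            outs = (outs,)
        stack = list(outs) + stack   # push the outputs back on top
    return stack

# model: x -> y_hat ; evaluator: (y_hat, y_target) -> score
model = (1, lambda x: x * 2)
evaluator = (2, lambda y_hat, y_target: abs(y_hat - y_target))
result = run_serial([model, evaluator], [3, 7])  # stack starts as x=3, y_target=7
```

Note how the model step touches only the top of the stack (x becomes y_hat) while y_target waits below for the evaluator, mirroring the progression of stack states listed above.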
3. Defining New Layer Classes
If you need a layer type that is not easily defined as a combination of
existing layer types, you can define your own layer classes in a couple
different ways.
With the Fn layer-creating function.
Many layer types needed in deep learning compute pure functions from inputs to
outputs, using neither weights nor randomness. You can use Trax's Fn function
to define your own pure layer types:
```
def Fn(name, f, n_out=1):  # pylint: disable=invalid-name
  """Returns a layer with no weights that applies the function `f`.
f can take and return any number of arguments, and takes only positional
arguments -- no default or keyword arguments. It often uses JAX-numpy (jnp).
The following, for example, would create a layer that takes two inputs and
returns two outputs -- element-wise sums and maxima:
`Fn('SumAndMax', lambda x0, x1: (x0 + x1, jnp.maximum(x0, x1)), n_out=2)`
The layer's number of inputs (n_in) is automatically set to number of
positional arguments in f, but you must explicitly set the number of
outputs (n_out) whenever it's not the default value 1.
Args:
name: Class-like name for the resulting layer; for use in debugging.
f: Pure function from input tensors to output tensors, where each input
tensor is a separate positional arg, e.g., f(x0, x1) --> x0 + x1.
Output tensors must be packaged as specified in the Layer class
docstring.
n_out: Number of outputs promised by the layer; default value 1.
Returns:
Layer executing the function f.
"""
```
Example 7. Use Fn to define a new layer type:
End of explanation
"""
# Define new layer type.
def SumAndMax():
"""Returns a layer to compute sums and maxima of two input tensors."""
return tl.Fn('SumAndMax',
lambda x0, x1: (x0 + x1, jnp.maximum(x0, x1)),
n_out=2)
# Use it.
sum_and_max = SumAndMax()
x0 = np.array([1, 2, 3, 4, 5])
x1 = np.array([10, -20, 30, -40, 50])
y0, y1 = sum_and_max([x0, x1])
print(f'x0:\n{x0}\n\n'
f'x1:\n{x1}\n\n'
f'y0:\n{y0}\n\n'
f'y1:\n{y1}')
"""
Explanation: The Fn function infers n_in (number of inputs) as the length of f's arg
list. Fn does not infer n_out (number of outputs), though. If your f has
more than one output, you need to give an explicit value using the n_out
keyword arg.
Example 8. Fn with multiple outputs:
End of explanation
"""
# Function defined in trax/layers/core.py:
def Flatten(n_axes_to_keep=1):
"""Returns a layer that combines one or more trailing axes of a tensor.
Flattening keeps all the values of the input tensor, but reshapes it by
collapsing one or more trailing axes into a single axis. For example, a
`Flatten(n_axes_to_keep=2)` layer would map a tensor with shape
`(2, 3, 5, 7, 11)` to the same values with shape `(2, 3, 385)`.
Args:
n_axes_to_keep: Number of leading axes to leave unchanged when reshaping;
collapse only the axes after these.
"""
layer_name = f'Flatten_keep{n_axes_to_keep}'
def f(x):
in_rank = len(x.shape)
if in_rank <= n_axes_to_keep:
raise ValueError(f'Input rank ({in_rank}) must exceed the number of '
f'axes to keep ({n_axes_to_keep}) after flattening.')
return jnp.reshape(x, (x.shape[:n_axes_to_keep] + (-1,)))
return tl.Fn(layer_name, f)
flatten_keep_1_axis = Flatten(n_axes_to_keep=1)
flatten_keep_2_axes = Flatten(n_axes_to_keep=2)
x = np.array([[[1, 2, 3],
[10, 20, 30],
[100, 200, 300]],
[[4, 5, 6],
[40, 50, 60],
[400, 500, 600]]])
y1 = flatten_keep_1_axis(x)
y2 = flatten_keep_2_axes(x)
print(f'x:\n{x}\n\n'
f'flatten_keep_1_axis(x):\n{y1}\n\n'
f'flatten_keep_2_axes(x):\n{y2}')
"""
Explanation: Example 9. Use Fn to define a configurable layer:
End of explanation
"""
|
berlemontkevin/Jupyter_Notebook | PyDSTool/PyDSTool_Introduction.ipynb | apache-2.0 | from PyDSTool import *
"""
Explanation: PyDSTool: An introduction
This notebook is inspired by http://www2.gsu.edu/~matrhc/FrontPage.html
The description of PyDSTool reads: "With PyDSTool we aim to provide a powerful suite of computational tools for the development, simulation, and analysis of dynamical systems that are used for the modeling of physical processes in many scientific disciplines."
Basic Introduction
When using this package there is one important thing to remember: the numerical solvers must be presented with a system of first-order differential equations.
$$ \frac{d y}{d t} = - \frac{kx}{m} $$
$$ \frac{dx}{dt} = y$$
The PyDSTool solver is analogous to MATLAB's in the way we interact with the system.
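To make the reduction to first order concrete, here is a hand-rolled explicit Euler integration of the same system in plain Python (not PyDSTool; k = 0.1 and m = 0.5 match the parameter values used below):

```python
def euler_shm(x, y, k=0.1, m=0.5, dt=0.001, steps=1000):
    """Explicit Euler steps for dx/dt = y, dy/dt = -k*x/m."""
    for _ in range(steps):
        # evaluate both right-hand sides at the old (x, y) before updating
        x, y = x + dt * y, y + dt * (-k * x / m)
    return x, y

x1, y1 = euler_shm(1.0, 0.4)
# total energy 0.5*m*y**2 + 0.5*k*x**2 should be (approximately) conserved
```

A proper solver such as the Vode generator below uses adaptive, higher-order steps, but the interface idea is the same: supply the first-order right-hand sides and integrate forward in time.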
End of explanation
"""
icdict = {'x': 1, 'y': 0.4} # Initial conditions dictionary
pardict = {'k': 0.1, 'm': 0.5} # Parameters dictionary
"""
Explanation: First we need to declare a name for the dictionary containing the initial conditions of the two variables. In the same way, we need to define a dictionary for all the parameters. This leads to the following code:
End of explanation
"""
x_rhs = 'y'
y_rhs = '-k*x/m'
"""
Explanation: The next step is to define the vector field of the system, in other words the right-hand sides of the differential equations.
End of explanation
"""
vardict = {'x': x_rhs, 'y': y_rhs}
"""
Explanation: These two lines simply assign strings to two names; the names were chosen only to remind us which string belongs to which variable, and the strings use the parameter and variable names declared above. Next we need to tell PyDSTool which variables are dynamic, by mapping each variable to the string defining its dynamics.
End of explanation
"""
DSargs = args() # create an empty object instance of the args class, call it DSargs
DSargs.name = 'SHM' # name our model
DSargs.ics = icdict # assign the icdict to the ics attribute
DSargs.pars = pardict # assign the pardict to the pars attribute
DSargs.tdata = [0, 20] # declare how long we expect to integrate for
DSargs.varspecs = vardict # assign the vardict dictionary to the 'varspecs' attribute of DSargs
"""
Explanation: Now we need to construct the full model for PyDSTool. To do this we will need to call the 'args' class of PyDSTool:
End of explanation
"""
DS = Generator.Vode_ODEsystem(DSargs)
"""
Explanation: All the details of this class can be found here: http://www.ni.gsu.edu/~rclewley/PyDSTool/UserDocumentation.html
We can change any of these values later, but we first need to initialize the model in order to run it.
Now we need to convert these specifications into a specific solver. The available generators are listed at:
http://www.ni.gsu.edu/~rclewley/PyDSTool/Generators.html
The following code is an example:
End of explanation
"""
help(DS)
"""
Explanation: We can interact in many ways with DS objects. In order to show all the things we can do, we can use the function 'help'.
End of explanation
"""
DS.set(pars={'k': 0.3},
ics={'x': 0.4})
"""
Explanation: If we want to change the parameters of the ODE, we need to be careful and use the provided set method. Changing attributes directly could leave related internal values inconsistent.
End of explanation
"""
traj = DS.compute('demo')
pts = traj.sample()
%matplotlib inline
plt.plot(pts['t'], pts['x'], label='x')
plt.plot(pts['t'], pts['y'], label='y')
plt.legend()
plt.xlabel('t')
"""
Explanation: Now we can finally solve the system and obtain a trajectory (see http://www.ni.gsu.edu/~rclewley/PyDSTool/UserDocumentation.html#head-8bb69b45e39d4947ca78953b7fb61080c864d68a).
End of explanation
"""
def KE(pts):
return 0.5*DS.pars['m']*pts['y']**2
def PE(pts):
return 0.5*DS.pars['k']*pts['x']**2
total_energy = KE(pts) + PE(pts)
print (total_energy)
KE(traj(5.4)) # At time 5.4
"""
Explanation: To finish this part, we will see how to run scripts on a DS object in order to measure quantities from the simulation.
End of explanation
"""
pts.find(5.4)
"""
Explanation: This code highlights the utility of the trajectory object: we can call it as a parametric function of time, and it interpolates between sampled points automatically. In our example, $t = 5.4$ was not in the sampled time set.
End of explanation
"""
import PyDSTool as dst # Give a name to the package
import numpy as np
from matplotlib import pyplot as plt
# we must give a name
DSargs = dst.args(name='Calcium channel model')
# parameters
DSargs.pars = { 'vl': -60,
'vca': 120,
'i': 0,
'gl': 2,
'gca': 4,
'c': 20,
'v1': -1.2,
'v2': 18 }
# auxiliary helper function(s) -- function name: ([func signature], definition)
DSargs.fnspecs = {'minf': (['v'], '0.5 * (1 + tanh( (v-v1)/v2 ))') }
# rhs of the differential equation, including dummy variable w
DSargs.varspecs = {'v': '( i + gl * (vl - v) - gca * minf(v) * (v-vca) )/c',
'w': 'v-w' }
# initial conditions
DSargs.ics = {'v': 0, 'w': 0 }
"""
Explanation: A bit further : Calcium Channel model
The examples can be found : https://github.com/robclewley/pydstool/tree/master/examples
In this part we will begin to work on bifurcation diagramm for a simple nonlinear model :
$$ C \frac{dV}{dt} = I + g_L (V_L - V) +g_{Ca} m(V) (V_{Ca}-V)$$
where $m(V) = 0.5(1 + \tanh[(V-V_1)/V_2])$
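Before handing the model to PyDSTool, we can sketch the activation function $m(V)$ on its own with plain numpy, using the same parameter values $V_1 = -1.2$ and $V_2 = 18$ that appear in the code below:

```python
import numpy as np

# Steady-state calcium activation m(V) = 0.5*(1 + tanh((V - v1)/v2))
v1, v2 = -1.2, 18.0
minf = lambda v: 0.5 * (1 + np.tanh((v - v1) / v2))

print(minf(v1))                              # exactly 0.5 at V = v1
v = np.linspace(-80, 80, 5)
print(minf(v).round(3))                      # rises monotonically from ~0 toward ~1
```

This sigmoidal shape is what makes the right-hand side of the voltage equation nonlinear and, as we will see, bistable for some parameter values.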
The initialization is as follows:
End of explanation
"""
DSargs.tdomain = [0,30] # set the range of integration.
ode = dst.Generator.Vode_ODEsystem(DSargs) # an instance of the 'Generator' class.
traj = ode.compute('polarization')              # integrate ODE with trajectory name : polarization / use print (traj.info(1)) to obtain the info
pts = traj.sample(dt=0.1) # Data for plotting
# PyPlot commands
plt.plot(pts['t'], pts['v'])
plt.xlabel('time') # Axes labels
plt.ylabel('voltage') # ...
plt.ylim([0,65]) # Range of the y axis
plt.title(ode.name) # Figure title from model name
plt.show()
"""
Explanation: Like before we use a Generator in order to find the solution of the dynamical system. We can just note that $w$ in the code is just a dummy variable, necessary because this version of PyDSTool needs two variables.
End of explanation
"""
plt.clf() # Clear the figure
for i, v0 in enumerate(np.linspace(-80,80,20)):
ode.set( ics = { 'v': v0 } ) # Initial condition
# Trajectories are called pol0, pol1, ...
# sample them on the fly to create Pointset tmp
tmp = ode.compute('pol%3i' % i).sample() # or specify dt option to sample to sub-sample
plt.plot(tmp['t'], tmp['v'])
plt.xlabel('time')
plt.ylabel('voltage')
plt.title(ode.name + ' multi ICs')
plt.show()
"""
Explanation: The equation we used is bistable. One way to highlight this is with the following code:
End of explanation
"""
# Prepare the system to start close to a steady state
ode.set(pars = {'i': -220} ) # Lower bound of the control parameter 'i'
ode.set(ics = {'v': -170} ) # Close to one of the steady states present for i=-220
PC = dst.ContClass(ode) # Set up continuation class
PCargs = dst.args(name='EQ1', type='EP-C') # 'EP-C' stands for Equilibrium Point Curve. The branch will be labeled 'EQ1'.
PCargs.freepars = ['i'] # control parameter(s) (it should be among those specified in DSargs.pars)
PCargs.MaxNumPoints = 450 # The following 3 parameters are set after trial-and-error
PCargs.MaxStepSize = 2
PCargs.MinStepSize = 1e-5
PCargs.StepSize = 2e-2
PCargs.LocBifPoints = 'LP' # detect limit points / saddle-node bifurcations
PCargs.SaveEigen = True # to tell unstable from stable branches
"""
Explanation: We will now turn to the bifurcation diagram and the nonlinear analysis, which are the main point of this package.
To do so we need to work with a ContClass (http://www2.gsu.edu/~matrhc/PyCont.html). It provides tools for numerical continuation of solutions to initial value problems and of level curves of nonlinear functions, and it detects bifurcation points along the way. See the link for the full list of capabilities.
End of explanation
"""
PC.newCurve(PCargs)
PC['EQ1'].forward() # Forward because we look for t>0
PC.display(['i','v'], stability=True, figure=3) # stable and unstable branches as solid and dashed curves, resp.
"""
Explanation: The 'LocBifPoints' attribute tells PyCont what type of bifurcation should be tracked. In this example, because we know the result, we specify that only saddle-node bifurcations should be detected. We can then compute this diagram:
End of explanation
"""
PCargs = dst.args(name='SN1', type='LP-C')
PCargs.initpoint = 'EQ1:LP2'
PCargs.freepars = ['i', 'gca']
PCargs.MaxStepSize = 2
PCargs.LocBifPoints = ['CP']
PCargs.MaxNumPoints = 200
PC.newCurve(PCargs)
PC['SN1'].forward()
PC['SN1'].backward()
PC['SN1'].display(['i','gca'], figure=4)
"""
Explanation: PC['EQ1'] now consists of a "struct" data type that specifies the particular equilibrium curve we prepared the system for. The information of the equilibrium curve can be accessed via the 'info()' method. We can obtain detailed information about a particular special point by calling the 'getSpecialPoint' method.
If we want to know the location of the limit points as we vary the calcium conductance:
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/imputing_missing_class_labels.ipynb | mit | # Load libraries
import numpy as np
from sklearn.preprocessing import Imputer
"""
Explanation: Title: Imputing Missing Class Labels
Slug: imputing_missing_class_labels
Summary: How to impute missing class labels for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create feature matrix with categorical feature
X = np.array([[0, 2.10, 1.45],
[1, 1.18, 1.33],
[0, 1.22, 1.27],
[0, -0.21, -1.19],
[np.nan, 0.87, 1.31],
[np.nan, -0.67, -0.22]])
"""
Explanation: Create Feature Matrix With Missing Values
End of explanation
"""
# Create Imputer object
imputer = Imputer(strategy='most_frequent', axis=0)
# Fill missing values with most frequent class
imputer.fit_transform(X)
"""
Explanation: Fill Missing Values' Class With Most Frequent Class
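Under the hood, this strategy amounts to finding the modal value among the observed entries of the column and substituting it for the NaNs. A minimal numpy sketch of that idea, using the same first column as above:

```python
import numpy as np

# First column of X above: the class label, with two missing entries
col = np.array([0, 1, 0, 0, np.nan, np.nan])

# Find the most frequent observed class
observed = col[~np.isnan(col)]
values, counts = np.unique(observed, return_counts=True)
most_frequent = values[np.argmax(counts)]

# Substitute it wherever the label is missing
filled = np.where(np.isnan(col), most_frequent, col)
print(filled)   # [0. 1. 0. 0. 0. 0.]
```

The Imputer does exactly this per column (with `axis=0`), which is why both NaN rows receive class 0 here.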
End of explanation
"""
|
M-R-Houghton/euroscipy_2015 | scikit_image/lectures/solutions/adv3_panorama-stitching-solution.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
def compare(*images, **kwargs):
"""
Utility function to display images side by side.
Parameters
----------
image0, image1, image2, ... : ndarrray
Images to display.
labels : list
Labels for the different images.
"""
f, axes = plt.subplots(1, len(images), **kwargs)
axes = np.array(axes, ndmin=1)
labels = kwargs.pop('labels', None)
if labels is None:
labels = [''] * len(images)
for n, (image, label) in enumerate(zip(images, labels)):
axes[n].imshow(image, interpolation='nearest', cmap='gray')
axes[n].set_title(label)
axes[n].axis('off')
f.tight_layout()
"""
Explanation: scikit-image advanced panorama tutorial
Enhanced from the original demo as featured in the scikit-image paper.
Multiple overlapping images of the same scene, combined into a single image, can yield amazing results. This tutorial will illustrate how to accomplish panorama stitching using scikit-image, from loading the images to cleverly stitching them together.
First things first
Import NumPy and matplotlib, then define a utility function to compare multiple images
End of explanation
"""
import skimage.io as io
pano_imgs = io.ImageCollection('../../images/pano/JDW_03*')
"""
Explanation: Load data
The ImageCollection class provides an easy and efficient way to load and represent multiple images. Images in the ImageCollection are only read from disk when accessed, not all at once.
Load a series of images into an ImageCollection with a wildcard, as they share similar names.
End of explanation
"""
compare(*pano_imgs, figsize=(12, 10))
"""
Explanation: Inspect these images using the convenience function compare() defined earlier
End of explanation
"""
from skimage.color import rgb2gray
pano0, pano1, pano2 = [rgb2gray(im) for im in pano_imgs]
# View the results
compare(pano0, pano1, pano2, figsize=(12, 10))
"""
Explanation: Credit: Images of Private Arch and the trail to Delicate Arch in Arches National Park, USA, taken by Joshua D. Warner.<br>
License: CC-BY 4.0
0. Pre-processing
This stage usually involves one or more of the following:
* Resizing, often downscaling with fixed aspect ratio
* Conversion to grayscale, as many feature descriptors are not defined for color images
* Cropping to region(s) of interest
For convenience our example data is already resized smaller, and we won't bother cropping. However, they are presently in color so conversion to grayscale with skimage.color.rgb2gray is appropriate.
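As a rough sketch of what grayscale conversion does: each output pixel is a weighted sum over the color channels. The weights below are approximately the luminance weights scikit-image uses; treat this purely as an illustration, not a substitute for rgb2gray.

```python
import numpy as np

# Toy 2x2 RGB image with float values in [0, 1]
rgb = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
                [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]])

# Approximate luminance weights for (R, G, B)
weights = np.array([0.2125, 0.7154, 0.0721])
gray = rgb @ weights          # weighted sum over the channel axis
print(gray.shape)             # (2, 2) -- the channel axis is gone
print(gray[1, 1])             # pure white -> ~1.0
```

Note that green contributes the most, matching the eye's sensitivity - a pure green pixel ends up brighter than a pure red or blue one.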
End of explanation
"""
from skimage.feature import ORB
# Initialize ORB
# 800 keypoints is large enough for robust results,
# but low enough to run within a few seconds.
orb = ORB(n_keypoints=800, fast_threshold=0.05)
# Detect keypoints in pano0
orb.detect_and_extract(pano0)
keypoints0 = orb.keypoints
descriptors0 = orb.descriptors
# Detect keypoints in pano1
orb.detect_and_extract(pano1)
keypoints1 = orb.keypoints
descriptors1 = orb.descriptors
# Detect keypoints in pano2
orb.detect_and_extract(pano2)
keypoints2 = orb.keypoints
descriptors2 = orb.descriptors
"""
Explanation: 1. Feature detection and matching
We need to estimate a projective transformation that relates these images together. The steps will be
Define one image as a target or destination image, which will remain anchored while the others are warped
Detect features in all three images
Match features from left and right images against the features in the center, anchored image.
In this three-shot series, the middle image pano1 is the logical anchor point.
We detect "Oriented FAST and rotated BRIEF" (ORB) features in both images.
Note: For efficiency, in this tutorial we're finding 800 keypoints. The results are good but small variations are expected. If you need a more robust estimate in practice, run multiple times and pick the best result or generate additional keypoints.
End of explanation
"""
from skimage.feature import match_descriptors
# Match descriptors between left/right images and the center
matches01 = match_descriptors(descriptors0, descriptors1, cross_check=True)
matches12 = match_descriptors(descriptors1, descriptors2, cross_check=True)
"""
Explanation: Match features from images 0 <-> 1 and 1 <-> 2.
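Conceptually, match_descriptors with cross_check=True keeps a pair (i, j) only when descriptor i's nearest neighbor in the other set is j and j's nearest neighbor is i. A minimal numpy sketch of that idea on toy binary descriptors, using Hamming distance as is appropriate for ORB (real ORB descriptors are 256 bits, not 4):

```python
import numpy as np

# Toy binary descriptors, one per row
d0 = np.array([[0, 0, 1, 1],
               [1, 1, 0, 0],
               [1, 0, 1, 0]], dtype=bool)
d1 = np.array([[0, 1, 1, 1],
               [1, 1, 0, 1]], dtype=bool)

# Pairwise Hamming distances: count of differing bits
dist = (d0[:, None, :] ^ d1[None, :, :]).sum(axis=2)

best01 = dist.argmin(axis=1)   # nearest d1 descriptor for each d0 row
best10 = dist.argmin(axis=0)   # nearest d0 descriptor for each d1 row

# Cross-check: keep (i, j) only if both directions agree
matches = [(i, int(j)) for i, j in enumerate(best01) if best10[j] == i]
print(matches)   # [(0, 0), (1, 1)] -- the third d0 descriptor has no mutual match
```

The cross-check rejects descriptor 2 of d0, whose best candidate already "belongs" to a better partner - the same mechanism that prunes many false matches in the real data.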
End of explanation
"""
from skimage.feature import plot_matches
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano0 -> pano1
plot_matches(ax, pano0, pano1, keypoints0, keypoints1, matches01)
ax.axis('off');
"""
Explanation: Inspect these matched features side-by-side using the convenience function skimage.feature.plot_matches.
End of explanation
"""
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano2 -> pano1
plot_matches(ax, pano1, pano2, keypoints1, keypoints2, matches12)
ax.axis('off');
"""
Explanation: Most of these line up similarly, but it isn't perfect. There are a number of obvious outliers or false matches.
End of explanation
"""
from skimage.transform import ProjectiveTransform
from skimage.measure import ransac
# Select keypoints from
# * source (image to be registered): pano0
# * target (reference image): pano1, our middle frame registration target
src = keypoints0[matches01[:, 0]][:, ::-1]
dst = keypoints1[matches01[:, 1]][:, ::-1]
model_robust01, inliers01 = ransac((src, dst), ProjectiveTransform,
min_samples=4, residual_threshold=1, max_trials=300)
# Select keypoints from
# * source (image to be registered): pano2
# * target (reference image): pano1, our middle frame registration target
src = keypoints2[matches12[:, 1]][:, ::-1]
dst = keypoints1[matches12[:, 0]][:, ::-1]
model_robust12, inliers12 = ransac((src, dst), ProjectiveTransform,
min_samples=4, residual_threshold=1, max_trials=300)
"""
Explanation: Similar to above, decent signal but numerous false matches.
2. Transform estimation
To filter out the false matches, we apply RANdom SAmple Consensus (RANSAC), a powerful method of rejecting outliers available in skimage.transform.ransac. The transformation is estimated iteratively, based on randomly chosen subsets, finally selecting the model which corresponds best with the majority of matches.
We need to do this twice, once each for the transforms left -> center and right -> center.
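To see the idea behind RANSAC itself before applying it to projective transforms, here is a minimal sketch for the simpler problem of fitting a line to data with gross outliers (synthetic toy data, not our keypoints):

```python
import numpy as np

rng = np.random.default_rng(0)

# Points on y = 2x + 1, plus three gross outliers that would
# badly skew an ordinary least-squares fit
x = np.arange(20, dtype=float)
y = 2 * x + 1
y[[3, 11, 17]] += 40

best_inliers, best_model = None, None
for _ in range(100):                                   # max_trials
    i, j = rng.choice(len(x), size=2, replace=False)   # minimal sample
    a = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - a * x[i]
    inliers = np.abs(y - (a * x + b)) < 1.0            # residual_threshold
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers, best_model = inliers, (a, b)

a, b = best_model
print(round(a, 3), round(b, 3))   # close to the true (2, 1)
print(best_inliers.sum())         # 17 of 20 points kept as inliers
```

skimage.measure.ransac does the same thing with min_samples=4 point correspondences per trial, since four are needed to determine a ProjectiveTransform.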
End of explanation
"""
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano0 -> pano1
plot_matches(ax, pano0, pano1, keypoints0, keypoints1, matches01[inliers01])
ax.axis('off');
fig, ax = plt.subplots(1, 1, figsize=(15, 12))
# Best match subset for pano2 -> pano1
plot_matches(ax, pano1, pano2, keypoints1, keypoints2, matches12[inliers12])
ax.axis('off');
"""
Explanation: The inliers returned from RANSAC select the best subset of matches. How do they look?
End of explanation
"""
from skimage.transform import SimilarityTransform
# Shape of middle image, our registration target
r, c = pano1.shape[:2]
# Note that transformations take coordinates in (x, y) format,
# not (row, column), in order to be consistent with most literature
corners = np.array([[0, 0],
[0, r],
[c, 0],
[c, r]])
# Warp the image corners to their new positions
warped_corners01 = model_robust01(corners)
warped_corners12 = model_robust12(corners)
# Find the extents of both the reference image and the warped
# target image
all_corners = np.vstack((warped_corners01, warped_corners12, corners))
# The overall output shape will be max - min
corner_min = np.min(all_corners, axis=0)
corner_max = np.max(all_corners, axis=0)
output_shape = (corner_max - corner_min)
# Ensure integer shape with np.ceil and dtype conversion
output_shape = np.ceil(output_shape[::-1]).astype(int)
"""
Explanation: Most of the false matches are rejected!
3. Warping
Next, we produce the panorama itself. We must warp, or transform, two of the three images so they will properly align with the stationary image.
Extent of output image
The first step is to find the shape of the output image to contain all three transformed images. To do this we consider the extents of all warped images.
End of explanation
"""
from skimage.transform import warp
# This in-plane offset is the only necessary transformation for the middle image
offset1 = SimilarityTransform(translation=-corner_min)
# Translate pano1 into place
pano1_warped = warp(pano1, offset1.inverse, order=3,
output_shape=output_shape, cval=-1)
# Acquire the image mask for later use
pano1_mask = (pano1_warped != -1) # Mask == 1 inside image
pano1_warped[~pano1_mask] = 0 # Return background values to 0
"""
Explanation: Apply estimated transforms
Warp the images with skimage.transform.warp according to the estimated models. A shift, or translation, is needed to place our middle image in the middle - it isn't truly stationary.
Values outside the input images are initially set to -1 to distinguish the "background", which is identified for later use.
Note: warp takes the inverse mapping as an input.
End of explanation
"""
# Warp pano0 (left) to pano1
transform01 = (model_robust01 + offset1).inverse
pano0_warped = warp(pano0, transform01, order=3,
output_shape=output_shape, cval=-1)
pano0_mask = (pano0_warped != -1) # Mask == 1 inside image
pano0_warped[~pano0_mask] = 0 # Return background values to 0
"""
Explanation: Warp left panel into place
End of explanation
"""
# Warp pano2 (right) to pano1
transform12 = (model_robust12 + offset1).inverse
pano2_warped = warp(pano2, transform12, order=3,
output_shape=output_shape, cval=-1)
pano2_mask = (pano2_warped != -1) # Mask == 1 inside image
pano2_warped[~pano2_mask] = 0 # Return background values to 0
"""
Explanation: Warp right panel into place
End of explanation
"""
compare(pano0_warped, pano1_warped, pano2_warped, figsize=(12, 10));
"""
Explanation: Inspect the warped images:
End of explanation
"""
# Add the three images together. This could create dtype overflows!
# We know they are are floating point images after warping, so it's OK.
merged = (pano0_warped + pano1_warped + pano2_warped)
# Track the overlap by adding the masks together
overlap = (pano0_mask * 1.0 + # Multiply by 1.0 for bool -> float conversion
pano1_mask +
pano2_mask)
# Normalize through division by `overlap` - but ensure the minimum is 1
normalized = merged / np.maximum(overlap, 1)
"""
Explanation: 4. Combining images the easy (and bad) way
This method simply
sums the warped images
tracks how many images overlapped to create each point
normalizes the result.
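On a toy 1-D example the effect of the normalization is easy to see: wherever the images overlap, the sum is divided by the overlap count, so the result is an average rather than a doubled value. A minimal sketch (not the real panorama data):

```python
import numpy as np

# Toy 1-D "images": each covers part of the output, zero elsewhere
im_a = np.array([1.0, 1.0, 1.0, 0.0])
im_b = np.array([0.0, 1.0, 1.0, 1.0])
mask_a = im_a != 0
mask_b = im_b != 0

# Count how many images contributed to each point
overlap = mask_a * 1.0 + mask_b      # bool -> float conversion via * 1.0

# Divide by overlap, but never by less than 1
normalized = (im_a + im_b) / np.maximum(overlap, 1)
print(normalized)   # [1. 1. 1. 1.] -- overlap region averaged, not doubled
```

This averaging in the overlap is exactly what causes the blurring we are about to see, because the overlapping pixels are never perfectly aligned.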
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 12))
ax.imshow(normalized, cmap='gray')
plt.tight_layout()
ax.axis('off');
"""
Explanation: Finally, view the results!
End of explanation
"""
fig, ax = plt.subplots(figsize=(15,12))
# Generate difference image and inspect it
difference_image = pano0_warped - pano1_warped
ax.imshow(difference_image, cmap='gray')
ax.axis('off');
"""
Explanation: <div style="height: 400px;"></div>
What happened?! Why are there nasty dark lines at boundaries, and why does the middle look so blurry?
The lines are artifacts (boundary effect) from the warping method. When the image is warped with interpolation, edge pixels containing part image and part background combine these values. We would have bright lines if we'd chosen cval=2 in the warp calls (try it!), but regardless of choice there will always be discontinuities.
...Unless you use order=0 in warp, which is nearest neighbor. Then edges are perfect (try it!). But who wants to be limited to an inferior interpolation method?
Even then, it's blurry! Is there a better way?
5. Stitching images along a minimum-cost path
Let's step back a moment and consider: Is it even reasonable to blend pixels?
Take a look at a difference image, which is just one image subtracted from the other.
End of explanation
"""
ymax = output_shape[1] - 1
xmax = output_shape[0] - 1
# Start anywhere along the top and bottom, left of center.
mask_pts01 = [[0, ymax // 3],
[xmax, ymax // 3]]
# Start anywhere along the top and bottom, right of center.
mask_pts12 = [[0, 2*ymax // 3],
[xmax, 2*ymax // 3]]
"""
Explanation: The surrounding flat gray is zero. A perfect overlap would show no structure!
Instead, the overlap region matches fairly well in the middle... but off to the sides where things start to look a little embossed, a simple average blurs the result. This caused the blurring in the previous, method (look again). Unfortunately, this is almost always the case for panoramas!
How can we fix this?
Let's attempt to find a vertical path through this difference image which stays as close to zero as possible. If we use that to build a mask, defining a transition between images, the result should appear seamless.
Seamless image stitching with Minimum-Cost Paths and skimage.graph
Among other things, skimage.graph allows you to
* start at any point on an array
* find the path to any other point in the array
* the path found minimizes the sum of values on the path.
The array is called a cost array, while the path found is a minimum-cost path or MCP.
To accomplish this we need
Starting and ending points for the path
A cost array (a modified difference image)
This method is so powerful that, with a carefully constructed cost array, the seed points are essentially irrelevant. It just works!
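To see the core idea, here is a minimal dynamic-programming sketch of a vertical minimum-cost path on a toy cost array. route_through_array is more general (arbitrary start/end points, diagonal moves in any direction), so treat this purely as illustration of the principle:

```python
import numpy as np

# Toy cost array: the near-zero "seam" wanders down columns 1 and 2
costs = np.array([[5.0, 0.1, 5.0, 5.0],
                  [5.0, 5.0, 0.2, 5.0],
                  [5.0, 0.1, 5.0, 5.0]])

# Cumulative minimum cost of reaching each cell from the top row,
# moving down, down-left, or down-right at each step
acc = costs.copy()
for r in range(1, costs.shape[0]):
    for c in range(costs.shape[1]):
        lo, hi = max(c - 1, 0), min(c + 2, costs.shape[1])
        acc[r, c] += acc[r - 1, lo:hi].min()

# Backtrack from the cheapest cell in the bottom row
path = [int(acc[-1].argmin())]
for r in range(costs.shape[0] - 2, -1, -1):
    c = path[-1]
    lo, hi = max(c - 1, 0), min(c + 2, costs.shape[1])
    path.append(lo + int(acc[r, lo:hi].argmin()))
path.reverse()
print(path)   # [1, 2, 1] -- one column index per row, hugging the low-cost seam
```

The path threads through the low-cost cells exactly as our stitching seam will thread through the near-zero regions of the difference image.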
Define seed points
End of explanation
"""
from skimage.measure import label
def generate_costs(diff_image, mask, vertical=True, gradient_cutoff=2.):
"""
Ensures equal-cost paths from edges to region of interest.
Parameters
----------
diff_image : ndarray of floats
Difference of two overlapping images.
mask : ndarray of bools
Mask representing the region of interest in ``diff_image``.
vertical : bool
Control operation orientation.
gradient_cutoff : float
Controls how far out of parallel lines can be to edges before
correction is terminated. The default (2.) is good for most cases.
Returns
-------
costs_arr : ndarray of floats
Adjusted costs array, ready for use.
"""
if vertical is not True:
return generate_costs(diff_image.T, mask.T, vertical=vertical,
                      gradient_cutoff=gradient_cutoff).T
# Start with a high-cost array of 1's
costs_arr = np.ones_like(diff_image)
# Obtain extent of overlap
row, col = mask.nonzero()
cmin = col.min()
cmax = col.max()
# Label discrete regions
cslice = slice(cmin, cmax + 1)
labels = label(mask[:, cslice])
# Find distance from edge to region
upper = (labels == 0).sum(axis=0)
lower = (labels == 2).sum(axis=0)
# Reject areas of high change
ugood = np.abs(np.gradient(upper)) < gradient_cutoff
lgood = np.abs(np.gradient(lower)) < gradient_cutoff
# Give areas slightly farther from edge a cost break
costs_upper = np.ones_like(upper, dtype=np.float64)
costs_lower = np.ones_like(lower, dtype=np.float64)
costs_upper[ugood] = upper.min() / np.maximum(upper[ugood], 1)
costs_lower[lgood] = lower.min() / np.maximum(lower[lgood], 1)
# Expand from 1d back to 2d
vdist = mask.shape[0]
costs_upper = costs_upper[np.newaxis, :].repeat(vdist, axis=0)
costs_lower = costs_lower[np.newaxis, :].repeat(vdist, axis=0)
# Place these in output array
costs_arr[:, cslice] = costs_upper * (labels == 0)
costs_arr[:, cslice] += costs_lower * (labels == 2)
# Finally, place the difference image
costs_arr[mask] = diff_image[mask]
return costs_arr
"""
Explanation: Construct cost array
This utility function exists to give a "cost break" for paths from the edge to the overlap region.
We will visually explore the results shortly. Examine the code later - for now, just use it.
End of explanation
"""
# Start with the absolute value of the difference image.
# np.abs is necessary because we don't want negative costs!
costs01 = generate_costs(np.abs(pano0_warped - pano1_warped),
pano0_mask & pano1_mask)
"""
Explanation: Use this function to generate the cost array.
End of explanation
"""
costs01[0, :] = 0
costs01[-1, :] = 0
"""
Explanation: Allow the path to "slide" along top and bottom edges to the optimal horizontal position by setting top and bottom edges to zero cost.
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 12))
ax.imshow(costs01, cmap='gray', interpolation='none')
ax.axis('off');
"""
Explanation: Our cost array now looks like this
End of explanation
"""
from skimage.graph import route_through_array
# Arguments are:
# cost array
# start pt
# end pt
# can it traverse diagonally
pts, _ = route_through_array(costs01, mask_pts01[0], mask_pts01[1], fully_connected=True)
# Convert list of lists to 2d coordinate array for easier indexing
pts = np.array(pts)
"""
Explanation: The tweak we made with generate_costs is subtle but important. Can you see it?
Find the minimum-cost path (MCP)
Use skimage.graph.route_through_array to find an optimal path through the cost array
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 12))
# Plot the difference image
ax.imshow(pano0_warped - pano1_warped, cmap='gray')
# Overlay the minimum-cost path
ax.plot(pts[:, 1], pts[:, 0])
plt.tight_layout()
ax.axis('off');
"""
Explanation: Did it work?
End of explanation
"""
# Start with an array of zeros and place the path
mask0 = np.zeros_like(pano0_warped, dtype=np.uint8)
mask0[pts[:, 0], pts[:, 1]] = 1
"""
Explanation: That looks like a great seam to stitch these images together - the path looks very close to zero.
Irregularities
Due to the random element in the RANSAC transform estimation, everyone will have a slightly different path. Your path will look different from mine, and different from your neighbor's. That's expected! The awesome thing about MCP is that everyone just calculated the best possible path to stitch together their unique transforms!
Filling the mask
Turn that path into a mask, which will be 1 where we want the left image to show through and zero elsewhere. We need to fill the left side of the mask with ones over to our path.
Note: This is the inverse of NumPy masked array conventions (numpy.ma), which specify a negative mask (mask == bad/missing) rather than a positive mask as used here (mask == good/selected).
Place the path into a new, empty array.
End of explanation
"""
fig, ax = plt.subplots(figsize=(11, 11))
# View the path in black and white
ax.imshow(mask0, cmap='gray')
ax.axis('off');
"""
Explanation: Ensure the path appears as expected
End of explanation
"""
from skimage.measure import label
# Labeling starts with zero at point (0, 0)
mask0[label(mask0, connectivity=1) == 0] = 1
# The result
plt.imshow(mask0, cmap='gray');
"""
Explanation: Label the various contiguous regions in the image using skimage.measure.label
End of explanation
"""
# Start with the absolute value of the difference image.
# np.abs necessary because we don't want negative costs!
costs12 = generate_costs(np.abs(pano1_warped - pano2_warped),
pano1_mask & pano2_mask)
# Allow the path to "slide" along top and bottom edges to the optimal
# horizontal position by setting top and bottom edges to zero cost
costs12[0, :] = 0
costs12[-1, :] = 0
"""
Explanation: Looks great!
Rinse and repeat
Apply the same principles to images 1 and 2: first, build the cost array
End of explanation
"""
costs12[mask0 > 0] = 1
"""
Explanation: Add an additional constraint this time, to prevent this path crossing the prior one!
End of explanation
"""
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(costs12, cmap='gray');
"""
Explanation: Check the result
End of explanation
"""
# Arguments are:
# cost array
# start pt
# end pt
# can it traverse diagonally
pts, _ = route_through_array(costs12, mask_pts12[0], mask_pts12[1], fully_connected=True)
# Convert list of lists to 2d coordinate array for easier indexing
pts = np.array(pts)
"""
Explanation: Your results may look slightly different.
Compute the minimal cost path
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 12))
# Plot the difference image
ax.imshow(pano1_warped - pano2_warped, cmap='gray')
# Overlay the minimum-cost path
ax.plot(pts[:, 1], pts[:, 0]);
ax.axis('off');
"""
Explanation: Verify a reasonable result
End of explanation
"""
mask2 = np.zeros_like(pano0_warped, dtype=np.uint8)
mask2[pts[:, 0], pts[:, 1]] = 1
"""
Explanation: Initialize the mask by placing the path in a new array
End of explanation
"""
mask2[label(mask2, connectivity=1) == 2] = 1
# The result
plt.imshow(mask2, cmap='gray');
"""
Explanation: Fill the right side this time, again using skimage.measure.label - the label of interest is 2
End of explanation
"""
mask1 = ~(mask0 | mask2).astype(bool)
"""
Explanation: Final mask
The last mask for the middle image is one of exclusion - it will be displayed everywhere mask0 and mask2 are not.
End of explanation
"""
def add_alpha(img, mask=None):
"""
Adds a masked alpha channel to an image.
Parameters
----------
img : (M, N[, 3]) ndarray
Image data, should be rank-2 or rank-3 with RGB channels
mask : (M, N[, 3]) ndarray, optional
Mask to be applied. If None, the alpha channel is added
with full opacity assumed (1) at all locations.
"""
from skimage.color import gray2rgb
if mask is None:
mask = np.ones_like(img)
if img.ndim == 2:
img = gray2rgb(img)
return np.dstack((img, mask))
"""
Explanation: Define a convenience function to place masks in alpha channels
End of explanation
"""
pano0_final = add_alpha(pano0_warped, mask0)
pano1_final = add_alpha(pano1_warped, mask1)
pano2_final = add_alpha(pano2_warped, mask2)
compare(pano0_final, pano1_final, pano2_final, figsize=(12, 12))
"""
Explanation: Obtain final, alpha blended individual images and inspect them
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 12))
# This is a perfect combination, but matplotlib's interpolation
# makes it appear to have gaps. So we turn it off.
ax.imshow(pano0_final, interpolation='none')
ax.imshow(pano1_final, interpolation='none')
ax.imshow(pano2_final, interpolation='none')
fig.tight_layout()
ax.axis('off');
"""
Explanation: What we have here is the world's most complicated and precisely-fitting jigsaw puzzle...
Plot all three together and view the results!
End of explanation
"""
# Identical transforms as before, except
# * Operating on original color images
# * filling with cval=0 as we know the masks
pano0_color = warp(pano_imgs[0], (model_robust01 + offset1).inverse, order=3,
output_shape=output_shape, cval=0)
pano1_color = warp(pano_imgs[1], offset1.inverse, order=3,
output_shape=output_shape, cval=0)
pano2_color = warp(pano_imgs[2], (model_robust12 + offset1).inverse, order=3,
output_shape=output_shape, cval=0)
"""
Explanation: Fantastic! Without the black borders, you'd never know this was composed of separate images!
Bonus round: now, in color!
We converted to grayscale for ORB feature detection, back in the initial preprocessing steps. Since we stored our transforms and masks, adding color is straightforward!
Transform the colored images
End of explanation
"""
pano0_final = add_alpha(pano0_color, mask0)
pano1_final = add_alpha(pano1_color, mask1)
pano2_final = add_alpha(pano2_color, mask2)
"""
Explanation: Then apply the custom alpha channel masks
End of explanation
"""
fig, ax = plt.subplots(figsize=(12, 12))
# Turn off matplotlib's interpolation
ax.imshow(pano0_final, interpolation='none')
ax.imshow(pano1_final, interpolation='none')
ax.imshow(pano2_final, interpolation='none')
fig.tight_layout()
ax.axis('off');
"""
Explanation: View the result!
End of explanation
"""
from skimage.color import gray2rgb
# Start with empty image
pano_combined = np.zeros_like(pano0_color)
# Place the masked portion of each image into the array
# masks are 2d, they need to be (M, N, 3) to match the color images
pano_combined += pano0_color * gray2rgb(mask0)
pano_combined += pano1_color * gray2rgb(mask1)
pano_combined += pano2_color * gray2rgb(mask2)
# Save the output - precision loss warning is expected
# moving from floating point -> uint8
io.imsave('./pano-advanced-output.png', pano_combined)
"""
Explanation: Save the combined, color panorama locally as './pano-advanced-output.png'
End of explanation
"""
%reload_ext load_style
%load_style ../../themes/tutorial.css
"""
Explanation: <div style="height: 400px;"></div>
<div style="height: 400px;"></div>
Once more, from the top
I hear what you're saying. "Those were too easy! The panoramas had too much overlap! Does this still work in the real world?"
Go back to the top. Under "Load Data" replace the wildcard pattern 'JDW_03*' with 'JDW_9*', and re-run all of the cells in order.
<div style="height: 400px;"></div>
End of explanation
"""
|
teuben/astr288p | notebooks/wrapup.ipynb | mit | %matplotlib inline
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
from astropy.io import fits
from scipy.optimize import curve_fit
"""
Explanation: Some final thoughts
In the lectures you should have learned:
Unix (Linux or Mac) terminals and how to work with directories and files
basic commands: cd, ls, rm , mv, cp,
helpful commands: cat, man, find, grep
tools, such as git
Python for scientific programming
Installing your own python (miniconda3)
3 ways to run python: python scripts, ipython terminal, jupyter notebook
basic language constructs (variables, loops, if/then/else, lists, dictionaries)
plotting, numpy, scipy as the most important modules
ODE and fitting
Using open-source software, with the PDE code athena as an example
athena installation (athena is written in C), autoconf
running athena
analyzing 1D shocktube results in python
Things we didn't cover in Python:
classes in the object oriented sense
some interesting modules: pandas (dataframes are like numpy arrays)
talking to other languages (e.g. via cython or ctypes)
Much more is available online, for example:
astropy tutorials
Python in Astronomy annual meeting, this week in Leiden (NL)
http://openastronomy.org/pyastro/
pyastro17
Basic import's
First the basic imports we'll need for this notebook
End of explanation
"""
#Q1
a = 1
b = 1.0
c = "1"
print(type(a),type(b),type(c))
"""
Explanation: Basic Python
Create three variables a,b,c all representing the number 1 in the three basic python types, and print out their type using the type(a) function call:
a = ...
b = ...
c = ...
print...
note the word "class" you should see, which we never really covered. But just remember: everything in python is an object (ie. class)
End of explanation
"""
!echo "# x y yerr" > 123.tab
!echo 0.0 0.0 0.10 >> 123.tab
!echo 1.0 1.0 0.20 >> 123.tab
!echo 2.0 0.5 0.15 >> 123.tab
!echo 3.0 1.5 0.20 >> 123.tab
!echo 4.0 3.5 0.15 >> 123.tab
!pwd
!cat 123.tab
"""
Explanation: Reading data from an ascii table
In the cell below a small table with 5 rows and 3 colums are created on the fly using the unix "echo" command. The first line in the file is a comment line.
End of explanation
"""
#Q2
data = np.loadtxt('123.tab')
print(data.shape)
x = data[:,0]
y = data[:,1]
z = data[:,2]
"""
Explanation: Use the "loadtxt" function from the numpy module to read this data, and create three arrays representing these three columns. Print out the shape of the data array and notice where the rows and colums are.
End of explanation
"""
%whos
"""
Explanation: One of the many ipython magic %-commands we did not cover is the "%whos" command, which is like a who's who of the objects in memory. Try this out in the next cell. You can find more about the magic commands in e.g. https://ipython.org/ipython-doc/3/interactive/magics.html
End of explanation
"""
#Q3
plt.errorbar(x,y,fmt='o',yerr=z)
plt.xlim(-0.5,4.5)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("My 123 Title")
"""
Explanation: Plotting with errorbars
We have used "plot" and "scatter", but now use "errorbar" to plot. Use col1 and col2 for the X and Y, and col3 for the errors in Y.
Label your axes and set limits so you can see all points clearly.
End of explanation
"""
#Q4
def line(x, a, b):
return a*x + b
a,b = curve_fit(line,x,y)[0]
print(a,b)
yfit = a*x + b
plt.errorbar(x,y,fmt='o',yerr=z)
plt.plot(x,yfit)
plt.xlim(-0.5,4.5)
"""
Explanation: Fitting a straight line
Use the curve_fit function from scipy.optimize again to fit a straight line
$$
y = a\,x + b
$$
to these data, and overplot this line on top of the errorbar data you plotted just before. Print out the values for the slope $a$ and intercept $b$.
End of explanation
"""
#Q5
data = fits.open('../data/ngc6503.cube.fits')[0].data.squeeze()
for z in [1,40,50,87]:
print("z=%d rms=%g" % (z,data[z,:,:].flatten().std()))
"""
Explanation: FITS data cubes
What is the RMS noise in channel 1, 40, 50 and 87 of the ngc6503.cube.fits file. Try and loop over a list of those channels, and write this as compact as you can. Hint: I could do it in 3 lines of python code.
To check your results, the rms for channel 50 should be about 0.00085
End of explanation
"""
#Q6 (in a copy notebook)
"""
Explanation: Orbits in non-axisymmetric potentials
Make a copy of the orbits-01 notebook to orbits-01b and change the potential to work in an ellipsoidal coordinate system emulating something like a (non-rotating) elliptical galaxy.
1) You would replace
$$
r^2 = x^2 + y^2
$$
with
$$
r^2 = x^2 + {y^2 \over b^2}
$$
where $b$ is the effective axis ratio in the potential of the mass distribution. Then note that
$$
{{\partial \Phi} \over { \partial x}} = {{\partial \Phi} \over { \partial r}} \cdot {x \over r}, \qquad
{{\partial \Phi} \over { \partial y}} = {{\partial \Phi} \over { \partial r}} \cdot {y \over r b^2},
$$
2) Instead of launching the orbit from the X axis, now launch it from the Y axis. First confirm with b=1.0 that you get the same orbit as you got in the orbits-01 notebook, but use b=0.8 for the experiments. Take the same launch velocity of 0.3, and launch at different positions along the Y axis, stepping down from 1.0 to 0.9, 0.8 etc. Notice a transition? At what launch position does the transition occur?
3) Integrate the orbit at least 10 times longer, and use plt.plot() instead of plt.scatter(), since there would be too many points on the screen.
4) You can either use our handcrafted stepper, or feel free to use the scipy.integrate.odeint function, since that conserves the energy better. But be sure to check that energy is conserved. What about angular momentum?
End of explanation
"""
|
josiahdavis/python_data_analysis | .ipynb_checkpoints/python_data_analysis-checkpoint.ipynb | mit | # Import the pandas and numpy libraries
import pandas as pd
import numpy as np
# Read a file with an absolute path
ufo = pd.read_csv('/Users/josiahdavis/Documents/GitHub/python_data_analysis/ufo_sightings.csv')
# Alternatively, read the file using a relative path
ufo = pd.read_csv('ufo_sightings.csv')
# Alternatively, read in the file from the internet
ufo = pd.read_csv('https://raw.githubusercontent.com/josiahdavis/python_data_analysis/master/ufo_sightings.csv')
# Get help on a function
help(pd.read_csv)
"""
Explanation: Python for Data Analysis
This is a hands-on workshop aimed at getting you comfortable with the syntax of core data analysis concepts in Python. Some background in base Python is useful, but not required to learn from this workshop.
* Keystrokes for the IPython notebook
* Reading and Summarizing Data
* Filtering and Sorting Data
* Modifying Columns
* Handling Missing Values
* EXERCISE: Working with drinks data
* Indexing and Slicing Data
* Analyzing across time
* Split-Apply-Combine
* Merging Data
* Writing Data
* Other Useful Features
Keystrokes for the IPython Notebook
There are two modes: Command (enabled by esc) and Edit (enabled by enter). The table below has a quick reference of the main keystrokes that I will be using in the workshop. To get the full list go to Help -> Keyboard Shortcuts.
| | Mac | PC |
|:----------|:--------|:-------|
| Command Mode | esc | |
| Delete | d, d | |
| Markdown | m | |
| Run Cell | control, return | |
| Run Cell and Insert Below | option, return | |
| Insert Above | a | |
| Insert Below | b | |
| Edit Mode | return | Enter |
| Run Cell | control, return | |
| Run Cell and Insert Below | option, return | |
Reading and Summarizing Data
Reading Data
End of explanation
"""
ufo.head(10) # Look at the top 10 observations
ufo.tail() # Bottom x observations (defaults to 5)
ufo.describe() # get summary statistics for columns
ufo.index # "the index" (aka "the labels")
ufo.columns # column names (which is "an index")
ufo.dtypes # data types of each column
ufo.values # underlying numpy array
ufo.info() # concise summary
"""
Explanation: Summarize the data that was just read in
End of explanation
"""
# Select a single column
ufo['State']
ufo.State # This is equivalent
# Select multiple columns
ufo[['State', 'City','Shape Reported']]
my_cols = ['State', 'City', 'Shape Reported']
ufo[my_cols] # This is equivalent
# Logical filtering
ufo[ufo.State == 'TX'] # Select only rows where State == 'TX'
ufo[~(ufo.State == 'TX')] # Select everything where the test fails
ufo[ufo.State != 'TX'] # Same thing as before
ufo.City[ufo.State == 'TX'] # Select only city columm where State == 'TX'
ufo[ufo.State == 'TX'].City # Same thing as before
ufo[(ufo.State == 'CA') | (ufo.State =='TX')] # Select only records where State is 'CA' or State is 'TX'
ufo_dallas = ufo[(ufo.City == 'Dallas') & (ufo.State =='TX')] # Select only Dallas, TX records
ufo[ufo.City.isin(['Austin','Dallas', 'Houston'])] # Select only Austin, Dallas, or Houston records
"""
Explanation: Filtering and Sorting Data
End of explanation
"""
ufo.State.order() # only works for a Series
ufo.sort_index(inplace=True) # sort rows by label
ufo.sort_index(ascending=False, inplace=False)
ufo.sort_index(by='State') # sort rows by specific column
ufo.sort_index(by=['State', 'Shape Reported']) # sort by multiple columns
ufo.sort_index(by=['State', 'Shape Reported'], ascending=[False, True], inplace=True) # specify sort order
"""
Explanation: Sorting
End of explanation
"""
# Add a new column as a function of existing columns
ufo['Location'] = ufo['City'] + ', ' + ufo['State']
ufo.head()
# Rename columns
ufo.rename(columns={'Colors Reported':'Colors', 'Shape Reported':'Shape'}, inplace=True)
ufo.head()
# Hide a column (temporarily)
ufo.drop(['Location'], axis=1)
# Delete a column (permanently)
del ufo['Location']
"""
Explanation: Modifying Columns
End of explanation
"""
# Missing values are often just excluded
ufo.describe() # Excludes missing values
ufo.Shape.value_counts() # Excludes missing values
ufo.Shape.value_counts(dropna=False) # Includes missing values
# Find missing values in a Series
ufo.Shape.isnull() # True if NaN, False otherwise
ufo.Shape.notnull() # False if NaN, True otherwise
ufo.Shape.isnull().sum() # Count the missing values
# Find missing values in a DataFrame
ufo.isnull()
# Count the missing values in a DataFrame
ufo.isnull().sum()
# Exclude rows with missing values in a dataframe
ufo[(ufo.Shape.notnull()) & (ufo.Colors.notnull())]
# Drop missing values
ufo.dropna() # Drop a row if ANY values are missing
ufo.dropna(how='all') # Drop a row only if ALL values are missing
# Fill in missing values for a series
ufo.Colors.fillna(value='Unknown', inplace=True)
# Fill in missing values for the DataFrame
ufo.fillna(value='Unknown', inplace=True)
"""
Explanation: Handling Missing Values
End of explanation
"""
# Read drinks.csv (in the 'drinks_data' folder) into a DataFrame called 'drinks'
# Print the first 10 rows
# Examine the data types of all columns
# Print the 'beer_servings' Series
# Calculate the average 'beer_servings' for the entire dataset
# Print all columns, but only show rows where the country is in Europe
# Calculate the average 'beer_servings' for all of Europe
# Only show European countries with 'wine_servings' greater than 300
# Determine which 10 countries have the highest 'total_litres_of_pure_alcohol'
# Determine which country has the highest value for 'beer_servings'
# Count the number of occurrences of each 'continent' value and see if it looks correct
# Determine which countries do not have continent designations
# Determine the number of countries per continent. Does it look right?
"""
Explanation: Exercise: Working with the Drinks Data
(Be on the lookout for a curveball question)
End of explanation
"""
# Read drinks.csv (in the drinks_data folder) into a DataFrame called 'drinks'
drinks = pd.read_csv('drinks_data/drinks.csv')
# Print the first 10 rows
drinks.head(10)
# Examine the data types of all columns
drinks.dtypes
drinks.info()
# Print the 'beer_servings' Series
drinks.beer_servings
drinks['beer_servings']
# Calculate the average 'beer_servings' for the entire dataset
drinks.describe() # Mean is provided in the summary from describe()
drinks.beer_servings.mean() # Alternatively, calculate the mean directly
# Print all columns, but only show rows where the country is in Europe
drinks[drinks.continent=='EU']
# Calculate the average 'beer_servings' for all of Europe (hint: use the .mean() function)
drinks[drinks.continent=='EU'].beer_servings.mean()
# Only show European countries with 'wine_servings' greater than 300
drinks[(drinks.continent=='EU') & (drinks.wine_servings > 300)]
# Determine which 10 countries have the highest 'total_litres_of_pure_alcohol'
drinks.sort_index(by='total_litres_of_pure_alcohol').tail(10)
# Determine which country has the highest value for 'beer_servings' (hint: use the .max() function)
drinks[drinks.beer_servings==drinks.beer_servings.max()].country
drinks[['country', 'beer_servings']].sort_index(by='beer_servings', ascending=False).head(1) # This is equivalent
# Count the number of occurrences of each 'continent' value and see if it looks correct
drinks.continent.value_counts()
# Determine which countries do not have continent designations
drinks[drinks.continent.isnull()].country
# Due to "na_filter = True" default within pd.read_csv()
help(pd.read_csv)
"""
Explanation: Solutions
End of explanation
"""
ufo.set_index('State', inplace=True)
ufo.index
ufo.index.is_unique
ufo.sort_index(inplace=True)
ufo.head(25)
"""
Explanation: Indexing and Slicing Data
Create a new index
End of explanation
"""
ufo.loc['FL',:] # rows with label 'FL'
ufo.loc[:'FL',:] # rows with labels up through 'FL'
ufo.loc['FL':'HI', 'City':'Shape'] # rows 'FL' through 'HI', columns 'City' through 'Shape'
ufo.loc[:, 'City':'Shape'] # all rows, columns 'City' through 'Shape'
ufo.loc[['FL', 'TX'], ['City','Shape']] # rows 'FL' and 'TX', columns 'City' and 'Shape'
"""
Explanation: loc: filter rows by LABEL, and select columns by LABEL
End of explanation
"""
ufo.iloc[0,:] # row with 0th position (first row)
ufo.iloc[0:3,:] # rows with positions 0 through 2 (not 3)
ufo.iloc[0:3, 0:3] # rows and columns with positions 0 through 2
ufo.iloc[:, 0:3] # all rows, columns with positions 0 through 2
ufo.iloc[[0,2], [0,1]] # 1st and 3rd row, 1st and 2nd column
"""
Explanation: iloc: filter rows by POSITION, and select columns by POSITION
End of explanation
"""
ufo.set_index('City', inplace=True, append=True) # Adds to existing index
ufo.sort_index(inplace=True)
ufo.head(25)
ufo.loc[['ND', 'WY'],:] # Select all records from ND AND WY
ufo.loc['ND':'WY',:] # Select all records from ND THROUGH WY
ufo.loc[('ND', 'Bismarck'),:] # Select all records from Bismarck, ND
ufo.loc[('ND', 'Bismarck'):('ND','Casselton'),:] # Select all records from Bismarck, ND through Casselton, ND
ufo.reset_index(level='City', inplace=True) # Remove the City from the index
ufo.head()
ufo.reset_index(inplace=True) # Remove all columns from the index
ufo.head()
"""
Explanation: Add another level to the index
End of explanation
"""
# Reset the index
ufo.dtypes
# Convert Time column to date-time format (defined in Pandas)
# Reference: https://docs.python.org/2/library/time.html#time.strftime
ufo['Time'] = pd.to_datetime(ufo['Time'], format="%m/%d/%Y %H:%M")
ufo.dtypes
# Compute date range
ufo.Time.min()
ufo.Time.max()
# Slice using time
ufo[ufo.Time > pd.datetime(1995, 1, 1)] # Slice using the time
ufo[(ufo.Time > pd.datetime(1995, 1, 1)) & (ufo.State =='TX')] # Works with other logical conditions, as expected
# Set the index to time
ufo.set_index('Time', inplace=True)
ufo.sort_index(inplace=True)
ufo.head()
# Access particular times/ranges
ufo.loc['1995',:]
ufo.loc['1995-01',:]
ufo.loc['1995-01-01',:]
# Access range of times/ranges
ufo.loc['1995':,:]
ufo.loc['1995':'1996',:]
ufo.loc['1995-12-01':'1996-01',:]
# Access elements of the timestamp
# Reference: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#time-date-components
ufo.index.year
ufo.index.month
ufo.index.weekday
ufo.index.day
ufo.index.time
ufo.index.hour
# Create a new variable with time element
ufo['Year'] = ufo.index.year
ufo['Month'] = ufo.index.month
ufo['Day'] = ufo.index.day
ufo['Weekday'] = ufo.index.weekday
ufo['Hour'] = ufo.index.hour
"""
Explanation: Analyzing Across Time
End of explanation
"""
# For each year, calculate the count of sightings
ufo.groupby('Year').City.count()
# For each Shape, calculate the first sighting, last sighting, and range of sightings.
ufo.groupby('Shape').Year.min()
ufo.groupby('Shape').Year.max()
# Specify the variable outside of the apply statement
ufo.groupby('Shape').Year.apply(lambda x: x.max())
# Specifiy the variable within the apply statement
ufo.groupby('Shape').apply(lambda x: x.Year.max() - x.Year.min())
# Specify a custom function to use in the apply statement
def get_max_year(df):
try:
return df.Year.max()
except:
return ''
ufo.groupby('Shape').apply(lambda x: get_max_year(x))
# Split/combine can occur on multiple columns at the same time
ufo.groupby(['Weekday','Hour']).City.count()
"""
Explanation: Split-Apply-Combine
Drawing by Hadley Wickham
End of explanation
"""
# Read in population data
pop = pd.read_csv('population.csv')
pop.head()
ufo.head()
# Merge the data together
ufo = pd.merge(ufo, pop, on='State', how = 'left')
# Specify keys if columns have different names
ufo = pd.merge(ufo, pop, left_on='State', right_on='State', how = 'left')
# Observe the new Population column
ufo.head()
# Check for values that didn't make it (length)
ufo.Population.isnull().sum()
# Check for values that didn't make it (values)
ufo[ufo.Population.isnull()]
# Change the records that didn't match up using np.where command
ufo['State'] = np.where(ufo['State'] == 'Fl', 'FL', ufo['State'])
# Alternatively, change the state using native python string functionality
ufo['State'] = ufo['State'].str.upper()
# Merge again, this time get all of the records
ufo = pd.merge(ufo, pop, on='State', how = 'left')
"""
Explanation: Merging Data
End of explanation
"""
ufo.to_csv('ufo_new.csv')
ufo.to_csv('ufo_new.csv', index=False) # Index is not included in the csv
"""
Explanation: Writing Data
End of explanation
"""
ufo.duplicated() # Series of logicals
ufo.duplicated().sum() # count of duplicates
ufo[ufo.duplicated(['State','Time'])] # only show duplicates
ufo[ufo.duplicated()==False] # only show unique rows
ufo_unique = ufo[~ufo.duplicated()] # only show unique rows
ufo.duplicated(['State','Time']).sum() # columns for identifying duplicates
"""
Explanation: Other Useful Features
Detect duplicate rows
End of explanation
"""
ufo['Weekday'] = ufo.Weekday.map({ 0:'Mon', 1:'Tue', 2:'Wed',
3:'Thu', 4:'Fri', 5:'Sat',
6:'Sun'})
"""
Explanation: Map existing values to other values
End of explanation
"""
ufo.groupby(['Weekday','Hour']).City.count()
ufo.groupby(['Weekday','Hour']).City.count().unstack(0) # Make first row level a column
ufo.groupby(['Weekday','Hour']).City.count().unstack(1) # Make second row level a column
# Note: .stack() transforms columns to rows
"""
Explanation: Pivot rows to columns
End of explanation
"""
idxs = np.random.rand(len(ufo)) < 0.66 # create a Series of booleans
train = ufo[idxs] # will contain about 66% of the rows
test = ufo[~idxs] # will contain the remaining rows
"""
Explanation: Randomly sample a DataFrame
End of explanation
"""
ufo.Shape.replace('DELTA', 'TRIANGLE') # replace values in a Series
ufo.replace('PYRAMID', 'TRIANGLE') # replace values throughout a DataFrame
"""
Explanation: Replace all instances of a value
End of explanation
"""
%matplotlib inline
# Plot the number of sightings over time
ufo.groupby('Year').City.count().plot( kind='line',
color='r',
linewidth=2,
title='UFO Sightings by year')
# Plot the number of sightings over the day of week and time of day
ufo.groupby(['Weekday','Hour']).City.count().unstack(0).plot( kind='line',
linewidth=2,
title='UFO Sightings by Time of Day')
# Plot multiple plots on the same plot (plots need to be in column format)
ufo_fourth = ufo[(ufo.Year.isin([2011, 2012, 2013, 2014])) & (ufo.Month == 7)]
ufo_fourth.groupby(['Year', 'Day']).City.count().unstack(0).plot( kind = 'bar',
subplots=True,
figsize=(7,9))
"""
Explanation: One more thing...
End of explanation
"""
|
gojomo/gensim | docs/src/auto_examples/core/run_corpora_and_vector_spaces.ipynb | lgpl-2.1 | import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Corpora and Vector Spaces
Demonstrates transforming text into a vector space representation.
Also introduces corpus streaming and persistence to disk in various formats.
End of explanation
"""
documents = [
"Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey",
]
"""
Explanation: First, let’s create a small corpus of nine short documents [1]_:
From Strings to Vectors
This time, let's start from documents represented as strings:
End of explanation
"""
from pprint import pprint # pretty-printer
from collections import defaultdict
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [
[word for word in document.lower().split() if word not in stoplist]
for document in documents
]
# remove words that appear only once
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [
[token for token in text if frequency[token] > 1]
for text in texts
]
pprint(texts)
"""
Explanation: This is a tiny corpus of nine documents, each consisting of only a single sentence.
First, let's tokenize the documents, remove common words (using a toy stoplist)
as well as words that only appear once in the corpus:
End of explanation
"""
from gensim import corpora
dictionary = corpora.Dictionary(texts)
dictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference
print(dictionary)
"""
Explanation: Your way of processing the documents will likely vary; here, I only split on whitespace
to tokenize, followed by lowercasing each word. In fact, I use this particular
(simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.'s
original LSA article [1]_.
The ways to process documents are so varied and application- and language-dependent that I
decided to not constrain them by any interface. Instead, a document is represented
by the features extracted from it, not by its "surface" string form: how you get to
the features is up to you. Below I describe one common, general-purpose approach (called
:dfn:bag-of-words), but keep in mind that different application domains call for
different features, and, as always, it's garbage in, garbage out <http://en.wikipedia.org/wiki/Garbage_In,_Garbage_Out>_...
To convert documents to vectors, we'll use a document representation called
bag-of-words <http://en.wikipedia.org/wiki/Bag_of_words>_. In this representation,
each document is represented by one vector where each vector element represents
a question-answer pair, in the style of:
Question: How many times does the word system appear in the document?
Answer: Once.
It is advantageous to represent the questions only by their (integer) ids. The mapping
between the questions and ids is called a dictionary:
End of explanation
"""
print(dictionary.token2id)
"""
Explanation: Here we assigned a unique integer id to all words appearing in the corpus with the
:class:gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts
and relevant statistics. In the end, we see there are twelve distinct words in the
processed corpus, which means each document will be represented by twelve numbers (i.e., by a 12-D vector).
To see the mapping between words and their ids:
End of explanation
"""
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
"""
Explanation: To actually convert tokenized documents to vectors:
End of explanation
"""
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use
print(corpus)
"""
Explanation: The function :func:doc2bow simply counts the number of occurrences of
each distinct word, converts the word to its integer word id
and returns the result as a sparse vector. The sparse vector [(0, 1), (1, 1)]
therefore reads: in the document "Human computer interaction", the words computer
(id 0) and human (id 1) appear once; the other ten dictionary words appear (implicitly) zero times.
End of explanation
"""
from smart_open import open # for transparently opening remote files
class MyCorpus(object):
def __iter__(self):
for line in open('https://radimrehurek.com/gensim/mycorpus.txt'):
# assume there's one document per line, tokens separated by whitespace
yield dictionary.doc2bow(line.lower().split())
"""
Explanation: By now it should be clear that the vector feature with id=10 stands for the question "How many
times does the word graph appear in the document?" and that the answer is "zero" for
the first six documents and "one" for the remaining three.
Corpus Streaming -- One Document at a Time
Note that corpus above resides fully in memory, as a plain Python list.
In this simple example, it doesn't matter much, but just to make things clear,
let's assume there are millions of documents in the corpus. Storing all of them in RAM won't do.
Instead, let's assume the documents are stored in a file on disk, one document per line. Gensim
only requires that a corpus must be able to return one document vector at a time:
End of explanation
"""
# This flexibility allows you to create your own corpus classes that stream the
# documents directly from disk, network, database, dataframes... The models
# in Gensim are implemented such that they don't require all vectors to reside
# in RAM at once. You can even create the documents on the fly!
"""
Explanation: The full power of Gensim comes from the fact that a corpus doesn't have to be
a list, or a NumPy array, or a Pandas dataframe, or whatever.
Gensim accepts any object that, when iterated over, successively yields
documents.
End of explanation
"""
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
print(corpus_memory_friendly)
"""
Explanation: Download the sample mycorpus.txt file here <./mycorpus.txt>_. The assumption that
each document occupies one line in a single file is not important; you can mold
the __iter__ function to fit your input format, whatever it is.
Walking directories, parsing XML, accessing the network...
Just parse your input to retrieve a clean list of tokens in each document,
then convert the tokens via a dictionary to their ids and yield the resulting sparse vector inside __iter__.
End of explanation
"""
for vector in corpus_memory_friendly: # load one vector into memory at a time
print(vector)
"""
Explanation: The corpus is now an object. We didn't define any way to print it, so print just outputs the address
of the object in memory. Not very useful. To see the constituent vectors, let's
iterate over the corpus and print each document vector (one at a time):
End of explanation
"""
from six import iteritems
# collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('https://radimrehurek.com/gensim/mycorpus.txt'))
# remove stop words and words that appear only once
stop_ids = [
dictionary.token2id[stopword]
for stopword in stoplist
if stopword in dictionary.token2id
]
once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
dictionary.filter_tokens(stop_ids + once_ids) # remove stop words and words that appear only once
dictionary.compactify() # remove gaps in id sequence after words that were removed
print(dictionary)
"""
Explanation: Although the output is the same as for the plain Python list, the corpus is now much
more memory friendly, because at most one vector resides in RAM at a time. Your
corpus can now be as large as you want.
Similarly, to construct the dictionary without loading all texts into memory:
End of explanation
"""
corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it
corpora.MmCorpus.serialize('/tmp/corpus.mm', corpus)
"""
Explanation: And that is all there is to it! At least as far as bag-of-words representation is concerned.
Of course, what we do with such a corpus is another question; it is not at all clear
how counting the frequency of distinct words could be useful. As it turns out, it isn't, and
we will need to apply a transformation on this simple representation first, before
we can use it to compute any meaningful document vs. document similarities.
Transformations are covered in the next tutorial
(sphx_glr_auto_examples_core_run_topics_and_transformations.py),
but before that, let's briefly turn our attention to corpus persistency.
Corpus Formats
There exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk.
Gensim implements them via the streaming corpus interface mentioned earlier:
documents are read from (resp. stored to) disk in a lazy fashion, one document at
a time, without the whole corpus being read into main memory at once.
One of the more notable file formats is the Market Matrix format <http://math.nist.gov/MatrixMarket/formats.html>_.
To save a corpus in the Matrix Market format:
create a toy corpus of 2 documents, as a plain Python list
End of explanation
"""
corpora.SvmLightCorpus.serialize('/tmp/corpus.svmlight', corpus)
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
corpora.LowCorpus.serialize('/tmp/corpus.low', corpus)
"""
Explanation: Other formats include Joachim's SVMlight format <http://svmlight.joachims.org/>,
Blei's LDA-C format <http://www.cs.princeton.edu/~blei/lda-c/> and
GibbsLDA++ format <http://gibbslda.sourceforge.net/>_.
End of explanation
"""
corpus = corpora.MmCorpus('/tmp/corpus.mm')
"""
Explanation: Conversely, to load a corpus iterator from a Matrix Market file:
End of explanation
"""
print(corpus)
"""
Explanation: Corpus objects are streams, so typically you won't be able to print them directly:
End of explanation
"""
# one way of printing a corpus: load it entirely into memory
print(list(corpus)) # calling list() will convert any sequence to a plain Python list
"""
Explanation: Instead, to view the contents of a corpus:
End of explanation
"""
# another way of doing it: print one document at a time, making use of the streaming interface
for doc in corpus:
print(doc)
"""
Explanation: or
End of explanation
"""
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
"""
Explanation: The second way is obviously more memory-friendly, but for testing and development
purposes, nothing beats the simplicity of calling list(corpus).
To save the same Matrix Market document stream in Blei's LDA-C format,
End of explanation
"""
import gensim
import numpy as np
numpy_matrix = np.random.randint(10, size=[5, 2]) # random matrix as an example
corpus = gensim.matutils.Dense2Corpus(numpy_matrix)
# numpy_matrix = gensim.matutils.corpus2dense(corpus, num_terms=number_of_corpus_features)
"""
Explanation: In this way, gensim can also be used as a memory-efficient I/O format conversion tool:
just load a document stream using one format and immediately save it in another format.
Adding new formats is dead easy, check out the code for the SVMlight corpus
<https://github.com/piskvorky/gensim/blob/develop/gensim/corpora/svmlightcorpus.py>_ for an example.
Compatibility with NumPy and SciPy
Gensim also contains efficient utility functions <http://radimrehurek.com/gensim/matutils.html>_
to help converting from/to numpy matrices
End of explanation
"""
import scipy.sparse
scipy_sparse_matrix = scipy.sparse.random(5, 2) # random sparse matrix as example
corpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)
scipy_csc_matrix = gensim.matutils.corpus2csc(corpus)
"""
Explanation: and from/to scipy.sparse matrices
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('run_corpora_and_vector_spaces.png')
imgplot = plt.imshow(img)
_ = plt.axis('off')
"""
Explanation: What Next
Read about sphx_glr_auto_examples_core_run_topics_and_transformations.py.
References
For a complete reference (Want to prune the dictionary to a smaller size?
Optimize converting between corpora and NumPy/SciPy arrays?), see the apiref.
.. [1] This is the same corpus as used in
Deerwester et al. (1990): Indexing by Latent Semantic Analysis <http://www.cs.bham.ac.uk/~pxt/IDA/lsa_ind.pdf>_, Table 2.
End of explanation
"""
|
SJSlavin/phys202-2015-work | assignments/assignment08/InterpolationEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
"""
Explanation: Interpolation Exercise 2
End of explanation
"""
# YOUR CODE HERE
x = np.hstack((np.arange(-5, 6), np.full(10, 5), np.arange(-5, 5), np.full(9, -5), [0]))
y = np.hstack((np.full(11, 5), np.arange(-5, 5), np.full(10, -5), np.arange(-4, 5), [0]))
f = np.hstack((np.zeros(40), [1]))
print(x)
print(y)
print(f)
"""
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
"""
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
"""
Explanation: The following plot should show the points on the boundary and the single point in the interior:
End of explanation
"""
# YOUR CODE HERE
xnew = np.linspace(-5, 5, 100)
ynew = np.linspace(-5, 5, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x, y), f, (Xnew, Ynew), method="cubic")
print(Fnew)
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
"""
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
End of explanation
"""
# YOUR CODE HERE
plt.contourf(Xnew, Ynew, Fnew, cmap="gnuplot2", levels=np.linspace(0, 1, 50))
plt.xlabel("X")
plt.ylabel("Y")
plt.colorbar(ticks=[0, 0.2, 0.4, 0.6, 0.8, 1])
assert True # leave this to grade the plot
"""
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.12/_downloads/plot_read_and_write_raw_data.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(fname)
# Set up pick list: MEG + STI 014 - bad channels
want_meg = True
want_eeg = False
want_stim = False
include = ['STI 014']
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more
picks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim,
include=include, exclude='bads')
some_picks = picks[:5] # take 5 first
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
data, times = raw[some_picks, start:(stop + 1)]
# save 150s of MEG data in FIF file
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
"""
Explanation: Reading and writing raw files
In this example, we read a raw file, plot a segment of MEG data
restricted to MEG channels, and save these data in a new raw file.
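The `raw.time_as_index` call used above maps times in seconds to sample indices. A minimal sketch of that conversion, assuming a constant sampling frequency and a recording that starts at t = 0 (the 600 Hz value is illustrative, not the sample dataset's exact rate):

```python
import numpy as np

def time_as_index(times, sfreq):
    # Round time * sampling frequency to the nearest integer sample
    return np.round(np.atleast_1d(times) * sfreq).astype(int)

start, stop = time_as_index([0, 15], sfreq=600.0)
print(start, stop)  # 0 9000
```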
End of explanation
"""
raw.plot()
"""
Explanation: Show MEG data
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cams/cmip6/models/sandbox-3/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-3', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CAMS
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:43
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
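The enumerated cells in this notebook all follow the same pattern: the value passed to `DOC.set_value` must be one of the listed choices. A small stand-alone sketch of that pattern (plain Python, not part of pyesdoc):

```python
# Valid choices copied from the coupler cell above
COUPLER_CHOICES = {"OASIS", "OASIS3-MCT", "ESMF", "NUOPC",
                   "Bespoke", "Unknown", "None", "Other: [Please specify]"}

def validate_choice(value, choices):
    # Fail early instead of recording an invalid enum value
    if value not in choices:
        raise ValueError(f"{value!r} is not one of {sorted(choices)}")
    return value

print(validate_choice("OASIS3-MCT", COUPLER_CHOICES))  # OASIS3-MCT
```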
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
class FeedForwardModel():
# ...
def act_fn(self, x):
return np.tanh(x)
def forward(self, x):
'''One example of a FFN.'''
# Compute activations on the hidden layer.
hidden_layer = self.act_fn(np.dot(self.W_xh, x))
# Compute the (linear) output layer activations.
output = np.dot(self.W_ho, hidden_layer)
return output
"""
Explanation: DL Indaba Practical 4
Gated Recurrent Models (GRUs and LSTMs)
Developed by Stephan Gouws, Avishkar Bhoopchand & Ulrich Paquet.
Introduction
So far we have looked at feedforward models which learn to map a single input x to a label (prediction) y. However, a lot of real-world data comes in the form of sequences, for example the words in natural languages, phonemes in speech, and so forth. In this practical we move from feed-forward models to sequence models which are designed to specifically model the dependencies between inputs in sequences of data that change over time.
We will start with the basic/"vanilla" recurrent neural network (RNN) model. We will look at the intuition behind the model, and then approach it from a mathemetical as well as a programming point of view. Next we'll look at how to train RNNs, by discussing the Backpropagation-through-Time (BPTT) algorithm. Unfortunately, training RNNs still suffers from problems like vanishing and exploding grdients, so we will then look at how gated architectures overcome these issues. Finally we will implement an RNN and again apply this model to predict the labels of the handwritten MNIST images (yes! images can be thought of as sequences of pixels!).
Learning objectives
Understanding how to use deep learning to deal with sequential input data.
Understanding how the vanilla RNN is the generalization of the DNN for sequential data.
Understanding the BPPT algorithm and the difficulties involved with training RNNs.
Understanding the GRU and LSTM architectures and how they help to solve some of these difficulties.
What is expected of you:
* Complete the RNN forward pass code by filling in the sections marked "#IMPLEMENT-ME" until the fprop check passes on the dummy data.
* Fill in the derivatives for cell_fn_backward() and get rnn_gradient_check() to pass.
* Fill in the missing pieces of RNNSequenceClassifier and train a recurrent classifier on MNIST data.
* Experiment with different cells and observe the effect on training time/accuracy.
Recurrent Neural Networks (RNNs)
NOTE: You can safely skip the first section below if you know RNNs. Skim the second and dip into the third, but be sure to resurface back at "Putting it all together"!
The intuition
RNNs generalize feedforward networks (FFNs) to be able to work with sequential data. FFNs take an input (e.g. an image) and immediately produce an output (e.g. a digit class). RNNs, on the other hand, consider the data sequentially and remember what they have seen in the past in order to make new predictions about future observations.
To understand this distinction, consider the example where we want to label words as the part-of-speech categories that they belong to: E.g. for the input sentence 'I want a duck' and 'He had to duck', we want our model to predict that duck is a noun in the first sentence and a verb in the second. To do this successfully, the model needs to be aware of the surrounding context. However, if we feed a FFN model only one word at a time, how could it know the difference? If we want to feed it all the words at once, how do we deal with the fact that sentences are of different lengths? (We could try to find a trade-off by feeding it windows of words...)
RNNs solve this issue by processing the sentence word-by-word, and maintaining an internal state summarizing what it has seen so far. This applies not only to words, but also to phonemes in speech, or even, as we will see, pixels of an image.
The RNN API
Feedforward neural networks operate on vectors of fixed size. As we have done before, we could think of the "API" of feedforward models as follows:
End of explanation
"""
class RecurrentModel():
# ...
def act_fn(self, x):
return np.tanh(x)
def recurrent_fn(self, x, prev_state):
'''Process the current input and the previous state and produce an output and a new state.'''
# Compute the new state based on the previous state and current input.
new_state = self.act_fn(np.dot(self.W_hh, prev_state) + np.dot(self.W_xh, x))
# Compute the output vector.
y = np.dot(self.W_hy, new_state)
return new_state, y
def forward(self, data_sequence):
state = self.init_state()
all_states = [state]
last_output = None
for x in data_sequence:
            new_state, last_output = self.recurrent_fn(x, state)
all_states.append(new_state)
state = new_state
return all_states, last_output
"""
Explanation: Recurrent neural networks (RNNs) generalize this idea to operating on sequences of vectors. To process sequences, the model has an internal state which gets updated with each new observation. Computationally, one can think of this as recursively applying a function recurrent_fn to update the state of the model based on each new input in the sequence:
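Since the forward pass just applies recurrent_fn over and over, the unrolled computation is exactly a fold (reduce) over the input sequence. Here is a minimal standalone numpy sketch of that view (the sizes, weight scales, and names W_xh/W_hh are arbitrary illustrative choices, not part of the classes above):

```python
import numpy as np
from functools import reduce

rng = np.random.RandomState(0)
D, H = 3, 4                     # input and hidden sizes (arbitrary)
W_xh = rng.randn(D, H) * 0.1    # input-to-hidden weights
W_hh = rng.randn(H, H) * 0.1    # recurrent (hidden-to-hidden) weights

def recurrent_fn(state, x):
    # One vanilla-RNN state update: h_t = tanh(h_{t-1} W_hh + x_t W_xh)
    return np.tanh(np.dot(state, W_hh) + np.dot(x, W_xh))

sequence = [rng.randn(D) for _ in range(6)]
# Folding the update over the sequence gives the final hidden state.
final_state = reduce(recurrent_fn, sequence, np.zeros(H))
assert final_state.shape == (H,)
```

The explicit for-loop in forward and this reduce compute exactly the same final state; the loop form is used in practice because we also need the intermediate states.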
End of explanation
"""
import numpy as np
## HELPER DEFINITIONS
## NOTE: WE KEEP THESE EXPLICIT BECAUSE WE WILL NEED THEIR DERIVATIVES BELOW.
def softmax(X):
eX = np.exp((X.T - np.max(X, axis=1)).T)
return (eX.T / eX.sum(axis=1)).T
def cross_entropy(y_pred, y_train):
m = y_pred.shape[0]
prob = softmax(y_pred)
log_like = -np.log(prob[range(m), y_train])
data_loss = np.sum(log_like) / m
return data_loss
def fc_forward(X, W, b):
'''A fully-connected feedforward layer.'''
out = np.dot(X, W) + b
cache = (W, X)
return out, cache
def tanh_forward(X):
out = np.tanh(X)
cache = out
return out, cache
"""
Explanation: Putting this together
Look at the definition of hidden_layer in FeedForwardModel.forward() and new_state in RecurrentModel.forward(). If you're more comfortable with math, compare the expression for computing the hidden layer of a feedforward neural network where ($\sigma$ is our non-linearity, tanh in the case shown above):
$h = \sigma(W_{xh}x)$
to the expression for computing the hidden layer at time step $t$ in an RNN:
$h_t = \sigma(W_{hh}h_{t-1} + W_{xh}x_t)$
NOTE: The weight subscript $W_{xz}$ is used to indicate a mapping from layer $x$ to layer $z$.
QUESTIONS:
* How are they similar?
* How are they different?
* Why is $W_{hh}$ called "recurrent" weights?
Spend a few min to think about and discuss this with your neighbour before you move on.
'Unrolling' the network
Imagine we are trying to classify sequences X into labels y (for now, let's keep it abstract). After running the forward() function of our RNN defined above on X, we would have a list of internal states of the model at each sequence position, and the final state of the network. This process is called unrolling in time, because you can think of it as unrolling the computation graph defined by the RNN forward function, over the inputs at each position of the sequence. RNNs are often used to model time series data, and therefore these positions are referred to as time-steps, hence, "unrolling over time".
We can therefore think of an RNN as a composition of identical feedforward neural networks (with replicated/tied weights), one for each moment or step in time.
These feedforward functions (i.e. our recurrent_fn above) are typically referred to as cells, and the only restriction on its API is that the cell function needs to be a differentiable function that can map an input and a state vector to an output and a new state vector. What we have shown above is called the vanilla RNN, but there are many more possibilities. In this practical, we will build up to a family of gated-recurrent cells. One of the most popular variants is called the Long short-term memory cell. But we're getting ahead of ourselves.
Training RNNs: (Truncated) Back-prop through Time
RNNs model sequential data, and are designed to capture how outputs at the current time step are influenced by the inputs that came before them. This is referred to as long-range dependencies. At a high level, this allows the model to remember what it has seen so far in order to better contextualize what it is seeing at the moment (think about how knowing the context of the sentence or conversation can sometimes help one to better figure out the intended meaning of a misheard word or ambiguous statement). It is what makes these models so powerful, but it is also what makes them so hard to train!
BPTT: A quick theoretical overview
The most well-known algorithm for training RNNs is called back-propagation through time (BPTT) (there are other algorithms). BPTT conceptually amounts to unrolling the computations of the RNN over time, computing the errors, and backpropagating the gradients through the unrolled graph structure. Ideally we want to unroll the graph up to the maximum sequence length, however in practice, since sequence lengths vary and memory is limited, we only end up unrolling sequences up to some length $T$. This is called truncated BPTT, and is the most used variant of BPTT.
At a high level, there are two main issues when using (truncated) BPTT to train RNNs:
Shared / tied recurrent weights ($W_{hh}$) mean that the gradient on these weights at some time step $t$ depends on all time steps up to time-step $T$, the maximum length of the unrolled graph. This also leads to the vanishing/exploding gradients problem.
As alluded to above, memory usage grows linearly with the total number of steps $T$ that we unroll for, because we need to save/cache the activations at each time-step. This matters computationally, since memory is a limited resource. It also matters statistically, because it puts a limit on the types of dependencies the model is exposed to, and hence that it could learn.
NOTE: Think about that last statement and make sure you understand those 2 points.
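To make the truncation concrete, here is a sketch of the bookkeeping in truncated BPTT: the long sequence is split into windows of length $T$, the final hidden state of one window is carried in as the initial state of the next, and gradients are only propagated within a window. Only the forward bookkeeping is shown here, and all names and sizes are illustrative assumptions, not the practical's implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
H, D, seq_len, window_size = 4, 3, 20, 5   # window_size plays the role of T
W_hh = rng.randn(H, H) * 0.1
W_xh = rng.randn(D, H) * 0.1
long_sequence = rng.randn(seq_len, D)

state = np.zeros(H)
num_windows = 0
for start in range(0, seq_len, window_size):
    window = long_sequence[start:start + window_size]
    # Within a window we would cache activations and backprop through all
    # of its steps; `state` enters as a constant (gradients stop here).
    for x in window:
        state = np.tanh(np.dot(state, W_hh) + np.dot(x, W_xh))
    num_windows += 1

assert num_windows == seq_len // window_size
```

Note how memory only needs to hold one window's worth of cached activations, but no gradient ever crosses a window boundary, limiting the dependencies the model can learn.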
BPTT is very similar to the standard back-propagation algorithm. Key to understanding the BPTT algorithm is to realize that gradients on the non-recurrent weights (weights of a per time-step classifier that tries to predict the next word in a sentence for example) and recurrent weights (that transform $h_{t-1}$ into $h_t$) are computed differently:
The gradients of non-recurrent weights ($W_{hy}$) depend only on the error at that time-step, $E_t$.
The gradients of recurrent weights ($W_{hh}$) depend on all time-steps up to maximum length $T$.
The first point is fairly intuitive: predictions at time-step $t$ is related to the loss of that particular prediction.
The second point will be explained in more detail in the lectures (see also this great blog post), but briefly, this can be summarized in these equations:
The current state is a function of the previous state: $h_t = \sigma(W_{hh}h_{t-1} + W_{xh}x_t)$
The gradient of the loss $E_t$ at time $t$ on $W_{hh}$ is a function of the current hidden state and model predictions $\hat{y}_t$ at time $t$:
$\frac{\partial E_t}{\partial W_{hh}} = \frac{\partial E_t}{\partial \hat{y}_t}\frac{\partial\hat{y}_t}{\partial h_t}\frac{\partial h_t}{\partial W_{hh}}$
Substituting (1) into (2) results in a sum over all previous time-steps:
$\frac{\partial E_t}{\partial W_{hh}} = \sum\limits_{k=0}^{t} \frac{\partial E_t}{\partial \hat{y}_t}\frac{\partial\hat{y}_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W_{hh}}$
Because of this repeated multiplicative interaction, as the sequence length $t$ gets longer, the gradients themselves can get diminishingly small (vanish) or grow too large and result in numeric overflow (explode). This has been shown to be related to the norms of the recurrent weight matrices being less than or equal to 1. Intuitively, it works very similar to how multiplying a small number $v<1.0$ with itself repeatedly can quickly go to zero, or conversely, a large number $v>1.0$ could quickly go to infinity; only this is for matrices.
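This multiplicative effect is easy to demonstrate numerically: repeatedly multiplying a vector by the same matrix shrinks it or blows it up depending on whether the matrix's largest singular value is below or above 1. A small illustration (the scaled identity matrices are chosen only to make the effect obvious):

```python
import numpy as np

rng = np.random.RandomState(42)
H = 10
v = rng.randn(H)

W_small = 0.5 * np.eye(H)   # largest singular value 0.5 < 1
W_large = 1.5 * np.eye(H)   # largest singular value 1.5 > 1

shrunk, grown = v.copy(), v.copy()
for _ in range(50):          # 50 "time steps" of repeated multiplication
    shrunk = np.dot(W_small, shrunk)
    grown = np.dot(W_large, grown)

print(np.linalg.norm(shrunk))  # vanishes: roughly 0.5**50 * ||v||
print(np.linalg.norm(grown))   # explodes: roughly 1.5**50 * ||v||
```

The same thing happens to the $\frac{\partial h_t}{\partial h_k}$ factors in the sum above, which is why gradients from distant time-steps contribute almost nothing (or overflow).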
To implement this, we need three components:
The code to fprop one time-step through the cell,
the code to fprop through the unrolled RNN,
the code to backprop one time-step through the cell,
And then we'll put all this together within the BPTT algorithm. Let's start.
Forward Propagation
Let's write the code to fprop one time-step through the cell. We'll need some helper functions:
End of explanation
"""
## Define the RNN Cell. We use a vanilla RNN.
def cell_fn_forward(X, h, model, train=True):
Wxh, Whh, Why = model['Wxh'], model['Whh'], model['Why']
bh, by = model['bh'], model['by']
hprev = h.copy()
## IMPLEMENT-ME: ...
h, h_cache = tanh_forward(np.dot(hprev, Whh) + np.dot(X, Wxh) + bh)
y, y_cache = fc_forward(h, Why, by)
cache = (X, Whh, h, hprev, y, h_cache, y_cache)
if not train:
# Compute per-time step outputs.
# NOTE: Here we build a classifer, but it could be anything else.
y = softmax(y)
return y, h, cache
"""
Explanation: Now we can implement the equation $h_t = \sigma(W_{hh}h_{t-1} + W_{xh}x_t)$ as follows:
End of explanation
"""
def rnn_forward(X_train, y_train, model, initial_state, verbose=True):
ys = []
caches = []
loss = 0.
h = initial_state
t = 0
for x, y in zip(X_train, y_train):
## IMPLEMENT-ME: ...
y_pred, h, cache = cell_fn_forward(x, h, model, train=True)
loss += cross_entropy(y_pred, y)
##
ys.append(y_pred)
caches.append(cache)
if verbose:
print "Time-step: ", t
print "x_t = ", x
print "cur_state = ", h
print "predicted y = ", y_pred
t += 1
# We return final hidden state, predictions, caches and final total loss.
return h, ys, caches, loss
"""
Explanation: Put this together to do the RNN fprop over the entire sequence:
QUESTION: Notice how we save all activations in caches. Why do we need to do this?
End of explanation
"""
# Create a helper function that calculates the relative error between two arrays
def relative_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# We set the seed to make the results reproducible.
np.random.seed(1234)
def _initial_state(hidden_dim):
return np.zeros((1, hidden_dim))
def _init_model(input_dim, hidden_dim, output_dim):
D, H, C = input_dim, hidden_dim, output_dim # More compact.
model_params = dict(
Wxh=np.random.randn(D, H) / np.sqrt(D / 2.),
Whh=np.random.randn(H, H) / np.sqrt(H / 2.),
Why=np.random.randn(H, D) / np.sqrt(C / 2.),
bh=np.zeros((1, H)),
by=np.zeros((1, D)))
return model_params
# Initialize model.
input_dim=8
hidden_dim=4
output_dim=2 # num_classes
num_steps = 5
test_mdl = _init_model(input_dim, hidden_dim, output_dim)
# Create some dummy data (there's no batching, just num_steps input vectors)
X_test = np.split(np.random.randn(num_steps, input_dim), num_steps, axis=0)
y_test = np.random.randint(low=0, high=output_dim, size=num_steps).reshape(-1)
#y_onehot = np.eye(output_dim)[y_ids]
print "Created dummy input data X"
print "Created fake targets: ", y_test
#print "Created fake onehot targets: \n", y_onehot
print "\nRunning FPROP on dummy data: "
initial_state = _initial_state(hidden_dim)
last_state, ys, caches, loss = rnn_forward(X_test, y_test, test_mdl, initial_state, verbose=True)
print "Final hidden state: ", last_state
correct_final_state = np.array([[ 0.19868001, 0.98286478, 0.76491549, -0.91578737]])
# Compare your output to the "correct" ones
# The difference should be around 2e-8 (or lower)
print "\n============================"
print 'Testing rnn_forward'
diff = relative_error(last_state, correct_final_state)
if diff <= 2e-8:
print 'PASSED'
else:
print 'The difference of %s is too high, try again' % diff
print "\n============================"
"""
Explanation: Let's test this on some dummy data.
End of explanation
"""
## HELPER DERIVATIVE FUNCTIONS
def fc_backward(dout, cache):
W, h = cache
dW = np.dot(h.T, dout)
db = np.sum(dout, axis=0)
dX = np.dot(dout, W.T)
return dX, dW, db
def tanh_backward(dout, cache):
dX = (1 - cache**2) * dout
return dX
def dcross_entropy(y_pred, y_train):
m = y_pred.shape[0]
grad_y = softmax(y_pred)
grad_y[range(m), y_train] -= 1.
grad_y /= m
return grad_y
"""
Explanation: Computing the derivative: Truncated BPTT
Let's start with computing the per time-step derivative of cell_fn_forward wrt all model parameters. First, some helper derivative functions:
NOTE: Make sure you understand how these were derived.
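As a quick sanity check (not part of the practical itself), the identity behind tanh_backward, $\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$, can be confirmed against a central finite difference:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)
h = 1e-5

analytic = 1.0 - np.tanh(x)**2                        # the identity used above
numeric = (np.tanh(x + h) - np.tanh(x - h)) / (2*h)   # central difference

assert np.allclose(analytic, numeric, atol=1e-8)
```

This is the same trick the full gradient check below uses, just applied to a single scalar function.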
End of explanation
"""
## PERFORM PER-TIMESTEP BACKWARD STEP
## IMPLEMENT-ME: Most of this (with hints).
def cell_fn_backward(y_pred, y_train, dh_next, cache):
X, Whh, h, hprev, y, h_cache, y_cache = cache
# Softmax gradient
dy = dcross_entropy(y_pred, y_train)
# Hidden to output gradient
dh, dWhy, dby = fc_backward(dy, y_cache)
dh += dh_next
dby = dby.reshape((1, -1))
# tanh
dh = tanh_backward(dh, h_cache)
# Hidden gradient
dbh = dh
dWhh = np.dot(hprev.T, dh)
dWxh = np.dot(X.T, dh)
dh_next = np.dot(dh, Whh.T)
grad = dict(Wxh=dWxh, Whh=dWhh, Why=dWhy, bh=dbh, by=dby)
return grad, dh_next
"""
Explanation: Let's put these together and write the code for $\frac{\partial E}{\partial \theta}$ for all parameters $\theta = \{W_{xh}, W_{hh}, W_{hy}, b_h, b_y\}$:
End of explanation
"""
def bptt(model, X_train, y_train, initial_state):
# Forward
last_state, ys, caches, loss = rnn_forward(X_train, y_train, model, initial_state)
loss /= y_train.shape[0]
# Backward
    # Initialize the gradient flowing into the last hidden state (same shape as the state).
    dh_next = np.zeros_like(last_state)
grads = {k: np.zeros_like(v) for k, v in model.items()}
for t in reversed(range(len(X_train))):
grad, dh_next = cell_fn_backward(ys[t], y_train[t], dh_next, caches[t])
for k in grads.keys():
grads[k] += grad[k]
for k, v in grads.items():
grads[k] = np.clip(v, -5., 5.)
return grads, loss, last_state
"""
Explanation: Now let's put this together inside the BPTT algorithm:
End of explanation
"""
def rnn_gradient_check(model, x, y, init_state, h=0.001, error_threshold=0.01):
# Calculate the gradients using backpropagation. We want to checker if these are correct.
bptt_gradients, _, _ = bptt(model, x, y, init_state)
# List of all parameters we want to check.
model_parameters = ['Wxh', 'Whh', 'Why', 'bh', 'by']
# Gradient check for each parameter
for pidx, pname in enumerate(model_parameters):
# Get the actual parameter value from the model, e.g. model.W
parameter = model[pname]
print "Performing gradient check for parameter %s with size %d." % (pname, np.prod(parameter.shape))
# Iterate over each element of the parameter matrix, e.g. (0,0), (0,1), ...
it = np.nditer(parameter, flags=['multi_index'], op_flags=['readwrite'])
while not it.finished:
ix = it.multi_index
# Save the original value so we can reset it later
original_value = parameter[ix]
## IMPLEMENT-ME: ...
# Estimate the gradient using (f(x+h) - f(x-h))/(2*h)
parameter[ix] = original_value + h
# Run the ENTIRE rnn_forward pass and evaluate the cross-entropy loss
_, _, _, gradplus = rnn_forward(x, y, model, init_state, verbose=False)
parameter[ix] = original_value - h
_, _, _, gradminus = rnn_forward(x, y, model, init_state, verbose=False)
estimated_gradient = (gradplus - gradminus)/(2*h)
##
# Reset parameter to original value
parameter[ix] = original_value
# The gradient for this parameter calculated using backpropagation
backprop_gradient = bptt_gradients[pname][ix]
# Calculate the relative error: |x - y| / (|x| + |y|)
relative_error = np.abs(backprop_gradient - estimated_gradient) / (np.abs(backprop_gradient) + np.abs(estimated_gradient))
# If the error is too large, fail the gradient check
if relative_error > error_threshold:
print "Gradient Check ERROR: parameter=%s ix=%s" % (pname, ix)
print "+h Loss: %f" % gradplus
print "-h Loss: %f" % gradminus
print "Estimated_gradient: %f" % estimated_gradient
print "Backpropagation gradient: %f" % backprop_gradient
print "Relative Error: %f" % relative_error
return
it.iternext()
print "Gradient check for parameter %s: PASSED." % (pname)
"""
Explanation: Finally, we can check our implementation using numerically-derived gradients:
End of explanation
"""
rnn_gradient_check(test_mdl, X_test, y_test, initial_state)
"""
Explanation: Aaaaand let's test it!
End of explanation
"""
import tensorflow as tf
import numpy as np
import functools
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
"""
Explanation: Gated Cells (GRUs and LSTMs)
Vanilla RNNs are very powerful sequence models. However, they are difficult to train, in large part due to the difficulties with getting gradients to propagate through all the time-steps without vanishing or exploding. If the gradient explodes then backpropagation will not work because we will get NaN values for the gradient contributions from earlier layers. The simplest trick to overcome this is called gradient clipping (see Pascanu et al., 2013). One basically rescales the gradients once their norms exceed a certain threshold. Dealing with vanishing gradients is trickier. Proper weight initialization helps to overcome this at the start of training (e.g orthogonal initialization), and there are regularization tricks for encouraging constant backwards error flow, which works for some tasks, but is not theoretically well-motivated.
Gated models (we will look at the GRU and LSTM) modify the architecture of the RNN cell to ensure constant gradient propagation. The problem with the vanilla RNN is that its entire state is multiplicatively updated (overwritten) at every time-step (notice the $W_{hh}h_{t-1}$ term):
$h_t = \sigma(W_{hh}h_{t-1} ...)$
Gated cells (GRUs/LSTMs) have two main new ideas:
Ensure incremental state change by updating state additively: $h_t = h_{t-1} + f(h_{t-1})$.
Control the update process by selectively modulating how much to keep/forget of the old, how much to read, and how much to write into the new state.
These models modulate how much to throw away (forget) from the previous state when making a proposed new state, and then how much to read from the previous state and write to the output of the cell, by using gates (values between 0 and 1 which squash information flow to some extent; little neural networks, of course!). Gates are vectors of per-dimension interpolation scalars: When you multiply some vector by a gate vector, you essentially control how much of that vector you "let through". Below we show the generic equation for such a gate (they're all the same!):
$g_t = \sigma(W_g h_{t-1} + U_g x_t + b_g)$
where $\sigma(z) = 1 / (1+e^{-z})$ is the sigmoid function (i.e. $0 \leq \sigma(z) \leq 1.$).
Think about this for a second:
What are the inputs of this gate (model)?
What are the parameters of this gate (model)?
What does this remind you of?
NOTE: Gates are just (vectors of) simple logistic regression models which take inputs from the previous hidden layer $h_{t-1}$ and the current input $x_t$, and produce outputs between 0 and 1.
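To make that concrete, here is a minimal NumPy sketch of one such gate. The weight names `Wg`, `Ug`, `bg` follow the generic equation above; the shapes and everything else are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate(h_prev, x, Wg, Ug, bg):
    """Generic gate: a vector of per-dimension values in (0, 1)."""
    return sigmoid(np.dot(h_prev, Wg) + np.dot(x, Ug) + bg)

# Multiplying a vector v element-wise by a gate output g controls how much
# of each dimension of v is "let through": gated = g * v.
```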
Now let's use them to modulate the flow of information. We'll start with the GRU and build up to the LSTM.
The Gated Recurrent Unit (GRU)
The GRU was introduced in 2014 by Cho et al. -- almost 2 decades after the LSTM -- but we'll start here because the GRU is simpler than the LSTM and based on the same principle. It uses only two gates per cell, a reset gate $r_t$ and an update gate $z_t$ (not the same z from the previous practicals!). These are defined exactly like the generic gates above:
\begin{aligned}
r_t &= \sigma(W_r h_{t-1} + U_r x_t + b_r) \
z_t &= \sigma(W_z h_{t-1} + U_z x_t + b_z) \
\end{aligned}
First, it uses the reset gate to control how much of the previous state is used in computing the new proposed state:
$\tilde{h_t} = \phi(W(r_t \circ h_{t-1}) + Ux_t + b)$
(where $\circ$ is element-wise multiplication). Then it ties "reading" and "writing" of the LSTM (below) into an update gate (by bounding their sums to 1, i.e. the more you read the less you write) when the new state is calculated:
$h_t = z_t \circ h_{t-1} + (1 - z_t)\circ \tilde{h_t}$
Try to reconcile the equations with the following flow diagram of the same:
What happens when the reset gate is high/low?
What happens when the update gate is high/low?
How do these two interact?
Why would this architecture be more powerful than a vanilla RNN?
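The reset-gate, update-gate, and state-update equations above translate almost line-for-line into NumPy. The sketch below uses illustrative parameter names and assumes row-vector states; it is a sketch of the equations, not a reference implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(h_prev, x, p):
    """One GRU time-step; p is a dict of weight matrices/biases keyed by name."""
    r = sigmoid(np.dot(h_prev, p['Wr']) + np.dot(x, p['Ur']) + p['br'])  # reset gate
    z = sigmoid(np.dot(h_prev, p['Wz']) + np.dot(x, p['Uz']) + p['bz'])  # update gate
    # The reset gate modulates how much of the old state feeds the proposal.
    h_tilde = np.tanh(np.dot(r * h_prev, p['W']) + np.dot(x, p['U']) + p['b'])
    # The update gate interpolates: keep the old state vs. write the proposed one.
    return z * h_prev + (1.0 - z) * h_tilde
```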
The Long Short-Term Memory unit (LSTM)
The LSTM was introduced in 1997 by Hochreiter and Schmidhuber. There are several different architectural variations 'out there', but they all operate by maintaining a separate memory vector $c_t$ and a state vector $h_t$ (i.e. the model computes a tuple of vectors per time-step, not just a single vector). We'll just focus on the "basic" LSTM version for now (we'll call this BasicLSTM in line with the name used in TensorFlow, which we'll get to in the implementation section). It uses three gates, the input, output and forget gates, traditionally denoted by their first letters:
\begin{aligned}
i_t &= \sigma(W_i h_{t-1} + U_i x_t + b_i) \
o_t &= \sigma(W_o h_{t-1} + U_o x_t + b_o) \
f_t &= \sigma(W_f h_{t-1} + U_f x_t + b_f) \
\end{aligned}
Don't be intimidated by these equations. We've seen them all above already in their generic form. They're all just doing the same thing (computationally, not functionally). Convince yourself of this, by answering the following:
What is the same between these equations?
What is different between them?
Now let's step through the rest of the BasicLSTM Cell:
First, the BasicLSTM Cell uses no gating to create the proposed new hidden state (the original notation uses $g$, but we use $\tilde{h}$):
$\tilde{h}_t = \phi(W h_{t-1} + Ux_t + b)$
Then it updates its internal memory to be a combination of the previous memory $c_{t-1}$ (multiplied/modulated by the forget gates $f_t$) and the new proposed state $\tilde{h}_t$ (modulated by the input gates $i_t$):
$c_t = f_t \circ c_{t-1} + i_t \circ \tilde{h}_t$
**Intuitively**: the model could choose to ignore old memory completely (if $f_t$ is all 1s), or ignore the newly proposed state completely ($i_t$ all 0s), but more likely it would learn to do something in-between. **QUESTION**: Think about how and why this behaviour would be encouraged (learned) during training?
Finally, the state that is actually output by the cell is a gated version of the memory vector, squashed by a tanh (because not everything in the memory cell might be immediately useful to the surrounding network):
$ h_t = o_t \circ \phi(c_t) $
Phewwww. We know.. there is a lot going on here! The following flow-diagram might help a bit to make this more clear:
Look at the equations and at the flow-diagram, and then try to answer the following questions:
How is the LSTM similar to the GRU?
How is it different?
What is the function of the memory vector (think about edge cases, e.g. where the forget gate is set to all 1s)?
Is the LSTM theoretically more powerful than the GRU? If so, why?
What is the computational drawback to using LSTMs (think about the number of gates; these must be parameterized..)?
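Putting the three gates, the memory update, and the gated output together, one BasicLSTM time-step can be sketched in NumPy as follows (illustrative names again; this mirrors the equations above, not any particular library's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, x, p):
    """One BasicLSTM time-step; returns the new (state, memory) pair."""
    i = sigmoid(np.dot(h_prev, p['Wi']) + np.dot(x, p['Ui']) + p['bi'])  # input gate
    o = sigmoid(np.dot(h_prev, p['Wo']) + np.dot(x, p['Uo']) + p['bo'])  # output gate
    f = sigmoid(np.dot(h_prev, p['Wf']) + np.dot(x, p['Uf']) + p['bf'])  # forget gate
    h_tilde = np.tanh(np.dot(h_prev, p['W']) + np.dot(x, p['U']) + p['b'])  # proposal
    c = f * c_prev + i * h_tilde  # additive memory update
    h = o * np.tanh(c)            # gated, squashed output
    return h, c
```

Note how the memory update $c_t$ is additive (no matrix multiply of $c_{t-1}$), which is exactly what lets gradients flow back through many time-steps.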
Implementing Recurrent Models in TensorFlow
In TensorFlow, we implement recurrent models using two building blocks:
A (graph) definition for the cell (you can use those provided, or you can write your own); and
A method to unroll the graph over the sequence (dynamic vs unrolled).
First, TensorFlow provides implementations for many of the standard RNN cells. Poke around in the docs.
Second, TensorFlow provides two different ways to implement the recurrence operations: dynamic or static. Basically, static unrolling prebuilds the entire unrolled RNN over the maximum number of time-steps. Dynamic unrolling dynamically creates the graph at each time-step, and saves the activations during the forward phase for the backward phase.
QUESTION: Why do activations need to be saved during the forward phase? (HINT: Look at our use of cache in the Numpy code above).
We will use dynamic unrolling: it uses less memory, and (counterintuitively), oftentimes it turns out to be faster.
End of explanation
"""
class BaseSoftmaxClassifier(object):
def __init__(self, input_size, output_size, l2_lambda):
# Define the input placeholders. The "None" dimension means that the
# placeholder can take any number of images as the batch size.
self.x = tf.placeholder(tf.float32, [None, input_size], name='x')
self.y = tf.placeholder(tf.float32, [None, output_size], name='y')
self.input_size = input_size
self.output_size = output_size
self.l2_lambda = l2_lambda
self._all_weights = [] # Used to compute L2 regularization in compute_loss().
# You should override these in your build_model() function.
self.logits = None
self.predictions = None
self.loss = None
self.build_model()
def get_logits(self):
return self.logits
def build_model(self):
# OVERRIDE THIS FOR YOUR PARTICULAR MODEL.
raise NotImplementedError("Subclasses should implement this function!")
def compute_loss(self):
"""All models share the same softmax cross-entropy loss."""
assert self.logits is not None # Ensure that logits has been created!
data_loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.y))
reg_loss = 0.
for w in self._all_weights:
reg_loss += tf.nn.l2_loss(w)
return data_loss + self.l2_lambda * reg_loss
def accuracy(self):
# Calculate accuracy.
assert self.predictions is not None # Ensure that pred has been created!
correct_prediction = tf.equal(tf.argmax(self.predictions, 1), tf.argmax(self.y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
return accuracy
"""
Explanation: An RNN Image Labeler
Let's reuse the BaseSoftmaxClassifier from the previous practicals:
End of explanation
"""
class RecurrentClassifier(BaseSoftmaxClassifier):
def __init__(self, model_params):
self.config = model_params
super(RecurrentClassifier, self).__init__(model_params['input_size'],
model_params['output_size'],
model_params['l2_lambda'])
def build_model(self):
assert self.config['num_steps'] * self.config['pixels_per_step'] == self.config['input_size']
# We break up the input images into num_steps groups of pixels_per_step
# pixels each.
rnn_input = tf.reshape(self.x, [-1,
self.config['num_steps'],
self.config['pixels_per_step']])
## IMPLEMENT-ME: ...
# Define the main RNN 'cell', that will be applied to each timestep.
cell = self.config['cell_fn'](self.config['memory_units'])
# NOTE: This is how we apply Dropout to RNNs.
cell = tf.contrib.rnn.DropoutWrapper(
cell,
output_keep_prob = self.config['dropout_keep_prob'])
cell = tf.contrib.rnn.MultiRNNCell(cells=[cell] * self.config['num_layers'],
state_is_tuple=True)
##########
outputs, state = tf.nn.dynamic_rnn(cell,
rnn_input,
dtype=tf.float32)
# Transpose the outputs to (time, batch, features) so we can gather the
# output from the last timestep for each batch element.
output = tf.transpose(outputs, [1, 0, 2])
last_hiddens = tf.gather(output, int(output.get_shape()[0]) - 1)
# Define weights and biases for output prediction.
out_weights = tf.Variable(tf.random_normal([self.config['memory_units'],
self.config['output_size']]))
self._all_weights.append(out_weights)
out_biases = tf.Variable(tf.random_normal([self.config['output_size']]))
self.logits = tf.matmul(last_hiddens, out_weights) + out_biases
self.predictions = tf.nn.softmax(self.logits)
self.loss = self.compute_loss()
def get_logits(self):
return self.logits
class MNISTFraction(object):
"""A helper class to extract only a fixed fraction of MNIST data."""
def __init__(self, mnist, fraction):
self.mnist = mnist
self.num_images = int(mnist.num_examples * fraction)
self.image_data, self.label_data = mnist.images[:self.num_images], mnist.labels[:self.num_images]
self.start = 0
def next_batch(self, batch_size):
start = self.start
end = min(start + batch_size, self.num_images)
self.start = 0 if end == self.num_images else end
return self.image_data[start:end], self.label_data[start:end]
def train_tf_model(tf_model,
session, # The active session.
num_epochs, # Max epochs/iterations to train for.
batch_size=50, # Number of examples per batch.
keep_prob=1.0, # (1. - dropout) probability, none by default.
train_only_on_fraction=1., # Fraction of training data to use.
optimizer_fn=None, # TODO(sgouws): more correct to call this optimizer_obj
report_every=1, # Report training results every nr of epochs.
eval_every=1, # Evaluate on validation data every nr of epochs.
stop_early=True, # Use early stopping or not.
verbose=True):
# Get the (symbolic) model input, output, loss and accuracy.
x, y = tf_model.x, tf_model.y
loss = tf_model.loss
accuracy = tf_model.accuracy()
# Compute the gradient of the loss with respect to the model parameters
# and create an op that will perform one parameter update using the specific
# optimizer's update rule in the direction of the gradients.
if optimizer_fn is None:
optimizer_fn = tf.train.AdamOptimizer()
optimizer_step = optimizer_fn.minimize(loss)
# Get the op which, when executed, will initialize the variables.
init = tf.global_variables_initializer()
# Actually initialize the variables (run the op).
session.run(init)
# Save the training loss and accuracies on training and validation data.
train_costs = []
train_accs = []
val_costs = []
val_accs = []
if train_only_on_fraction < 1:
mnist_train_data = MNISTFraction(mnist.train, train_only_on_fraction)
else:
mnist_train_data = mnist.train
prev_c_eval = 1000000
# Main training cycle.
for epoch in range(num_epochs):
avg_cost = 0.
avg_acc = 0.
total_batch = int(train_only_on_fraction * mnist.train.num_examples / batch_size)
## IMPLEMENT-ME: ...
# Loop over all batches.
for i in range(total_batch):
batch_x, batch_y = mnist_train_data.next_batch(batch_size)
# Run optimization op (backprop) and cost op (to get loss value),
# and compute the accuracy of the model.
feed_dict = {x: batch_x, y: batch_y}
if keep_prob < 1.:
feed_dict["keep_prob:0"] = keep_prob
_, c, a = session.run(
[optimizer_step, loss, accuracy], feed_dict=feed_dict)
# Compute average loss/accuracy
avg_cost += c / total_batch
avg_acc += a / total_batch
train_costs.append((epoch, avg_cost))
train_accs.append((epoch, avg_acc))
# Display logs per epoch step
if epoch % report_every == 0 and verbose:
print "Epoch:", '%04d' % (epoch+1), "Training cost =", \
"{:.9f}".format(avg_cost)
if epoch % eval_every == 0:
val_x, val_y = mnist.validation.images, mnist.validation.labels
feed_dict = {x : val_x, y : val_y}
if keep_prob < 1.:
feed_dict['keep_prob:0'] = 1.0
c_eval, a_eval = session.run([loss, accuracy], feed_dict=feed_dict)
if verbose:
print "Epoch:", '%04d' % (epoch+1), "Validation acc=", \
"{:.9f}".format(a_eval)
if c_eval >= prev_c_eval and stop_early:
print "Validation loss stopped improving, stopping training early after %d epochs!" % (epoch + 1)
break
prev_c_eval = c_eval
val_costs.append((epoch, c_eval))
val_accs.append((epoch, a_eval))
print "Optimization Finished!"
return train_costs, train_accs, val_costs, val_accs
# Helper functions to plot training progress.
from matplotlib import pyplot as plt
def my_plot(list_of_tuples):
"""Take a list of (epoch, value) and split these into lists of
epoch-only and value-only. Pass these to plot to make sure we
line up the values at the correct time-steps.
"""
plt.plot(*zip(*list_of_tuples))
def plot_multi(values_lst, labels_lst, y_label, x_label='epoch'):
# Plot multiple curves.
assert len(values_lst) == len(labels_lst)
plt.subplot(2, 1, 2)
for v in values_lst:
my_plot(v)
plt.legend(labels_lst, loc='upper left')
plt.xlabel(x_label)
plt.ylabel(y_label)
plt.show()
%%time
def build_train_eval_and_plot(model_params, train_params, verbose=True):
tf.reset_default_graph()
m = RecurrentClassifier(model_params)
with tf.Session() as sess:
# Train model on the MNIST dataset.
train_losses, train_accs, val_losses, val_accs = train_tf_model(
m,
sess,
verbose=verbose,
**train_params)
# Now evaluate it on the test set:
accuracy_op = m.accuracy() # Get the symbolic accuracy operation
# Calculate the accuracy using the test images and labels.
accuracy = accuracy_op.eval({m.x: mnist.test.images,
m.y: mnist.test.labels})
if verbose:
print "Accuracy on test set:", accuracy
# Plot losses and accuracies.
plot_multi([train_losses, val_losses], ['train', 'val'], 'loss', 'epoch')
plot_multi([train_accs, val_accs], ['train', 'val'], 'accuracy', 'epoch')
ret = {'train_losses': train_losses, 'train_accs' : train_accs,
'val_losses' : val_losses, 'val_accs' : val_accs,
'test_acc' : accuracy}
return m, ret
#################################CODE TEMPLATE##################################
# Specify the model hyperparameters:
model_params = {
'input_size' : 784,
'output_size' : 10,
'batch_size' : 100,
'num_steps' : 28,
'pixels_per_step' : 28, # NOTE: num_steps * pixels_per_step must = input_size
'cell_fn' : tf.contrib.rnn.BasicRNNCell,
'memory_units' : 256,
'num_layers' : 1,
'l2_lambda' : 1e-3,
'dropout_keep_prob': 1.
}
# Specify the training hyperparameters:
training_params = {
'num_epochs' : 100, # Max epochs/iterations to train for.
'batch_size' : 100, # Number of examples per batch, 100 default.
#'keep_prob' : 1.0, # (1. - dropout) probability, none by default.
'train_only_on_fraction' : 1., # Fraction of training data to use, 1. for everything.
'optimizer_fn' : None, # Optimizer, None for Adam.
'report_every' : 1, # Report training results every nr of epochs.
'eval_every' : 1, # Evaluate on validation data every nr of epochs.
'stop_early' : True, # Use early stopping or not.
}
# Build, train, evaluate and plot the results!
trained_model, training_results = build_train_eval_and_plot(
model_params,
training_params,
verbose=True # Modify as desired.
)
###############################END CODE TEMPLATE################################
"""
Explanation: We override build_model() to build the graph for the RNN classifier.
End of explanation
"""
# Your code here...
"""
Explanation: Exercise: Try out different cells!
Replace the BasicRNN cell with the BasicLSTMCell. What is the effect on accuracy?
End of explanation
"""
# Specify the model hyperparameters:
model_params = {
'input_size' : 784,
'output_size' : 10,
'batch_size' : 100,
'num_steps' : 28,
'pixels_per_step' : 28, # NOTE: num_steps * pixels_per_step must = input_size
'cell_fn' : tf.contrib.rnn.BasicLSTMCell,
'memory_units' : 128,
'num_layers' : 1,
'l2_lambda' : 1e-3,
'dropout_keep_prob': 1.
}
# Specify the training hyperparameters:
training_params = {
'num_epochs' : 100, # Max epochs/iterations to train for.
'batch_size' : 100, # Number of examples per batch, 100 default.
#'keep_prob' : 1.0, # (1. - dropout) probability, none by default.
'train_only_on_fraction' : 1., # Fraction of training data to use, 1. for everything.
'optimizer_fn' : None, # Optimizer, None for Adam.
'report_every' : 1, # Report training results every nr of epochs.
'eval_every' : 1, # Evaluate on validation data every nr of epochs.
'stop_early' : True, # Use early stopping or not.
}
# Build, train, evaluate and plot the results!
trained_model, training_results = build_train_eval_and_plot(
model_params,
training_params,
verbose=True # Modify as desired.
)
"""
Explanation: Known good settings
We got 98.8% with this model and hyperparams:
End of explanation
"""
ModestoCabrera/IS360Project_1 | project_1.ipynb | gpl-2.0
import pandas as pd
from pandas import DataFrame
import matplotlib.pyplot as plt
data = pd.read_csv('project_1.csv')
"""
Explanation: IS360 Project 1 - Data Analysis of Formed Flights Database
<img src="stats_keyboard.jpg" align=right>
<i>Before I get started, I need to import the tools and data into my environment.</i>
I begin by importing pandas, with the <b>DataFrame</b> object imported explicitly because it's frequently used.
I have also explicitly imported <b>pyplot</b> from <b>matplotlib</b> to visualize the data, using the conventional name <b>plt</b>.
End of explanation
"""
status = [n for n in data['status']]
airline = [n for n in data['airline']]
la = [n for n in data['LosAngeles']]
phx = [n for n in data['Phoenix']]
sandg = [n for n in data['SanDiego']]
sanfrn = [n for n in data['SanFrancisco']]
seatl = [n for n in data['Seattle']]
"""
Explanation: Now that the <b>data</b> is accessible, it needs to be sliced so that I can create a manageable data object for further analysis of the <i>csv file</i>.
End of explanation
"""
flight_status = zip(status, la, phx, sandg, sanfrn, seatl)
flight_status
flight_df = DataFrame(data = flight_status, columns = ['status', 'la','phx','sandiego','sanfrancisco','seattle'],
index = airline)
flight_df
flight_df.plot(kind='bar', title='Flights Database Visualization')
"""
Explanation: I will use the zip function to combine the sliced lists above into tuples, from which I can create a <b>DataFrame</b> object to work with.
End of explanation
"""
flight_df.loc['AMWEST', 'phx'].plot(kind='bar', title='Phoenix AMWEST on-time vs delayed')
"""
Explanation: <img src="bar1.png">
The bar graph above lets me visualize which cities have the most flights (on-time plus delayed) for each airline, with each <b>city differentiated by color</b>.
<b>AMWEST</b> and <b>American Airlines</b> both have their highest number of flights in Phoenix, followed by <b>ALASKA</b> and <b>United Airlines</b> for flights to Seattle. We would like to see how these totals split into on-time and delayed flights, since that is difficult to read from this bar graph given how the data is presented.
I want to see how the on-time flights compare with the delayed flights, so I will <i>plot a graph indexed by airline that displays the two status counts side by side.</i>
End of explanation
"""
flight_df.loc['ALASKA', 'seattle'].plot(kind='bar', title='ALASKA Seattle on-time vs delayed')
"""
Explanation: <img src='phx_1aw.png'>
<b>AMWEST</b> does relatively well in <i>Phoenix</i>, with a high number of on-time flights compared to delayed flights.
<img src='phx_1aa.png'>
<b>American Airlines</b> also does well in <i>Phoenix</i>, although its delayed flights are higher relative to its on-time flights than <b>AMWEST</b>'s. Looking at the numbers more closely, though, the two perform comparably, each ranging around 100 delays per 1,000 on-time flights.
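To back the visual comparison with actual numbers, we can compute a delay rate per airline with a groupby. The frame below uses made-up toy counts in the same layout as flight_df (one row per airline/status pair, one column per city) — the counts, status labels, and column names here are illustrative assumptions, but the same pattern applies to the real data:

```python
import pandas as pd

# Toy counts mirroring flight_df's layout; the numbers are invented for illustration.
toy_df = pd.DataFrame(
    {'status': ['on time', 'delayed', 'on time', 'delayed'],
     'phx': [4840, 415, 221, 12],
     'seattle': [262, 305, 1841, 305]},
    index=['AMWEST', 'AMWEST', 'ALASKA', 'ALASKA'])

totals = toy_df[['phx', 'seattle']].sum(axis=1)            # flights per row
by_status = totals.groupby([toy_df.index, toy_df['status']]).sum().unstack()
delay_rate = by_status['delayed'] / by_status.sum(axis=1)  # delayed / all flights
print(delay_rate)
```

This collapses the per-city counts into a single delayed-over-total ratio per airline, which is easier to compare than raw bar heights.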
End of explanation
"""
flight_df.loc['United Airlines', 'seattle'].plot(kind='bar', title='Seattle U. Airlines on-time vs delayed')
"""
Explanation: <img src='stl_1al.png'>
A similar argument can be made here: <b>ALASKA</b> leads all airlines in on-time flights for Seattle, but still ranges close to 400 delays for roughly 2,000 <i>on-time flights</i>.
End of explanation
"""
flight_df.loc['American Airlines', 'phx'].plot(kind='bar', title='Phoenix American Airlines on-time vs delayed')
"""
Explanation: <img src='stl_1ul.png'>
End of explanation
"""
geography-munich/sciprog | material/sub/jrjohansson/Lecture-5-Sympy.ipynb | apache-2.0
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation: Sympy - Symbolic algebra in Python
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.
The other notebooks in this lecture series are indexed at http://jrjohansson.github.io.
End of explanation
"""
from sympy import *
"""
Explanation: Introduction
There are two notable Computer Algebra Systems (CAS) for Python:
SymPy - A python module that can be used in any Python program, or in an IPython session, that provides powerful CAS features.
Sage - Sage is a full-featured and very powerful CAS enviroment that aims to provide an open source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language.
Sage is in some aspects more powerful than SymPy, but both offer very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the IPython notebook.
In this lecture we will therefore look at how to use SymPy with IPython notebooks. If you are interested in an open source CAS environment I also recommend to read more about Sage.
To get started using SymPy in a Python program or notebook, import the module sympy:
End of explanation
"""
init_printing()
# or with older versions of sympy/ipython, load the IPython extension
#%load_ext sympy.interactive.ipythonprinting
# or
#%load_ext sympyprinting
"""
Explanation: To get nice-looking $\LaTeX$ formatted output run:
End of explanation
"""
x = Symbol('x')
(pi + x)**2
# alternative way of defining symbols
a, b, c = symbols("a, b, c")
type(a)
"""
Explanation: Symbolic variables
In SymPy we need to create symbols for the variables we want to work with. We can create a new symbol using the Symbol class:
End of explanation
"""
x = Symbol('x', real=True)
x.is_imaginary
x = Symbol('x', positive=True)
x > 0
"""
Explanation: We can add assumptions to symbols when we create them:
End of explanation
"""
1+1*I
I**2
(x * I + 1)**2
"""
Explanation: Complex numbers
The imaginary unit is denoted I in Sympy.
End of explanation
"""
r1 = Rational(4,5)
r2 = Rational(5,4)
r1
r1+r2
r1/r2
"""
Explanation: Rational numbers
There are three different numerical types in SymPy: Real, Rational, Integer:
End of explanation
"""
pi.evalf(n=50)
y = (x + pi)**2
N(y, 5) # same as evalf
"""
Explanation: Numerical evaluation
SymPy uses the mpmath library for arbitrary-precision arithmetic as its numerical backend, and has predefined SymPy expressions for a number of mathematical constants, such as pi, e, and oo for infinity.
To evaluate an expression numerically we can use the evalf function (or N). It takes an argument n which specifies the number of significant digits.
End of explanation
"""
y.subs(x, 1.5)
N(y.subs(x, 1.5))
"""
Explanation: When we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy we do that using the subs function:
End of explanation
"""
y.subs(x, a+pi)
"""
Explanation: The subs function can of course also be used to substitute Symbols and expressions:
End of explanation
"""
import numpy
x_vec = numpy.arange(0, 10, 0.1)
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
fig, ax = plt.subplots()
ax.plot(x_vec, y_vec);
"""
Explanation: We can also combine numerical evaluation of expressions with NumPy arrays:
End of explanation
"""
f = lambdify([x], (x + pi)**2, 'numpy') # the first argument is a list of variables that
# f will be a function of: in this case only x -> f(x)
y_vec = f(x_vec) # now we can directly pass a numpy array and f(x) is efficiently evaluated
"""
Explanation: However, this kind of numerical evaluation can be very slow, and there is a much more efficient way to do it: use the function lambdify to "compile" a SymPy expression into a function that is much more efficient to evaluate numerically:
End of explanation
"""
%%timeit
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
%%timeit
y_vec = f(x_vec)
"""
Explanation: The speedup when using "lambdified" functions instead of direct numerical evaluation can be significant, often several orders of magnitude. Even in this simple example we get a significant speed up:
End of explanation
"""
(x+1)*(x+2)*(x+3)
expand((x+1)*(x+2)*(x+3))
"""
Explanation: Algebraic manipulations
One of the main uses of a CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for these basic operations in SymPy are demonstrated in this section.
Expand and factor
Expanding and factoring are often the first steps in an algebraic manipulation:
End of explanation
"""
sin(a+b)
expand(sin(a+b), trig=True)
"""
Explanation: The expand function takes a number of keyword arguments with which we can tell it what kinds of expansions we want performed. For example, to expand trigonometric expressions, use the trig=True keyword argument:
End of explanation
"""
factor(x**3 + 6 * x**2 + 11*x + 6)
"""
Explanation: See help(expand) for a detailed explanation of the various types of expansions the expand function can perform.
The opposite of a product expansion is of course factoring. To factor an expression in SymPy, use the factor function:
End of explanation
"""
# simplify expands a product
simplify((x+1)*(x+2)*(x+3))
# simplify uses trigonometric identities
simplify(sin(a)**2 + cos(a)**2)
simplify(cos(x)/sin(x))
"""
Explanation: Simplify
The simplify function tries to reduce an expression to a nicer-looking form using various techniques. More specific alternatives to simplify also exist: trigsimp, powsimp, logcombine, etc.
The basic usage of these functions is as follows:
End of explanation
"""
f1 = 1/((a+1)*(a+2))
f1
apart(f1)
f2 = 1/(a+2) + 1/(a+3)
f2
together(f2)
"""
Explanation: apart and together
To manipulate symbolic expressions of fractions, we can use the apart and together functions:
End of explanation
"""
simplify(f2)
"""
Explanation: Simplify usually combines fractions but does not factor:
End of explanation
"""
y
diff(y**2, x)
"""
Explanation: Calculus
In addition to algebraic manipulations, the other main use of CAS is to do calculus, like derivatives and integrals of algebraic expressions.
Differentiation
Differentiation is usually simple. Use the diff function. The first argument is the expression to take the derivative of, and the second argument is the symbol by which to take the derivative:
End of explanation
"""
diff(y**2, x, x)
diff(y**2, x, 2) # same as above
"""
Explanation: For higher order derivatives we can do:
End of explanation
"""
x, y, z = symbols("x,y,z")
f = sin(x*y) + cos(y*z)
"""
Explanation: To calculate the derivative of a multivariate expression, we can do:
End of explanation
"""
diff(f, x, 1, y, 2)
"""
Explanation: $\frac{d^3f}{dxdy^2}$
End of explanation
"""
f
integrate(f, x)
"""
Explanation: Integration
Integration is done in a similar fashion:
End of explanation
"""
integrate(f, (x, -1, 1))
"""
Explanation: By providing limits for the integration variable we can evaluate definite integrals:
End of explanation
"""
integrate(exp(-x**2), (x, -oo, oo))
"""
Explanation: and also improper integrals
End of explanation
"""
n = Symbol("n")
Sum(1/n**2, (n, 1, 10))
Sum(1/n**2, (n,1, 10)).evalf()
Sum(1/n**2, (n, 1, oo)).evalf()
"""
Explanation: Remember, oo is the SymPy notation for infinity.
Sums and products
We can evaluate sums and products using the Sum and Product classes:
End of explanation
"""
Product(n, (n, 1, 10)) # 10!
"""
Explanation: Products work much the same way:
End of explanation
"""
limit(sin(x)/x, x, 0)
"""
Explanation: Limits
Limits can be evaluated using the limit function. For example,
End of explanation
"""
f
diff(f, x)
"""
Explanation: We can use limit to check the result of differentiation performed with the diff function:
End of explanation
"""
h = Symbol("h")
limit((f.subs(x, x+h) - f)/h, h, 0)
"""
Explanation: $\displaystyle \frac{\mathrm{d}f(x,y)}{\mathrm{d}x} = \lim_{h\rightarrow 0}\frac{f(x+h,y)-f(x,y)}{h}$
End of explanation
"""
limit(1/x, x, 0, dir="+")
limit(1/x, x, 0, dir="-")
"""
Explanation: OK!
We can change the direction from which we approach the limiting point using the dir keyword argument:
End of explanation
"""
series(exp(x), x)
"""
Explanation: Series
Series expansion is also one of the most useful features of a CAS. In SymPy we can perform a series expansion of an expression using the series function:
End of explanation
"""
series(exp(x), x, 1)
"""
Explanation: By default it expands the expression around $x=0$, but we can expand around any value of $x$ by explicitly including a value in the function call:
End of explanation
"""
series(exp(x), x, 1, 10)
"""
Explanation: And we can explicitly define to which order the series expansion should be carried out:
End of explanation
"""
s1 = cos(x).series(x, 0, 5)
s1
s2 = sin(x).series(x, 0, 2)
s2
expand(s1 * s2)
"""
Explanation: The series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different order:
End of explanation
"""
expand(s1.removeO() * s2.removeO())
"""
Explanation: If we want to get rid of the order information we can use the removeO method:
End of explanation
"""
(cos(x)*sin(x)).series(x, 0, 6)
"""
Explanation: But note that this is not the correct expansion of $\cos(x)\sin(x)$ to $5$th order:
End of explanation
"""
m11, m12, m21, m22 = symbols("m11, m12, m21, m22")
b1, b2 = symbols("b1, b2")
A = Matrix([[m11, m12],[m21, m22]])
A
b = Matrix([[b1], [b2]])
b
"""
Explanation: Linear algebra
Matrices
Matrices are defined using the Matrix class:
End of explanation
"""
A**2
A * b
"""
Explanation: With Matrix class instances we can do the usual matrix algebra operations:
End of explanation
"""
A.det()
A.inv()
"""
Explanation: And calculate determinants and inverses, and the like:
End of explanation
"""
solve(x**2 - 1, x)
solve(x**4 - x**2 - 1, x)
"""
Explanation: Solving equations
For solving equations and systems of equations we can use the solve function:
End of explanation
"""
solve([x + y - 1, x - y - 1], [x,y])
"""
Explanation: System of equations:
End of explanation
"""
solve([x + y - a, x - y - c], [x,y])
"""
Explanation: In terms of other symbolic expressions:
End of explanation
"""
%reload_ext version_information
%version_information numpy, matplotlib, sympy
"""
Explanation: Further reading
http://sympy.org/en/index.html - The SymPy projects web page.
https://github.com/sympy/sympy - The source code of SymPy.
http://live.sympy.org - Online version of SymPy for testing and demonstrations.
Versions
End of explanation
"""
|
eds-uga/csci1360e-su17 | assignments/A6/A6_BONUS.ipynb | mit | c = count_datasets("submission_partial.json")
assert c == 4
c = count_datasets("submission_full.json")
assert c == 9
try:
c = count_datasets("submission_nonexistent.json")
except:
assert False
else:
assert c == -1
"""
Explanation: BONUS
This bonus will do a very deep dive into dictionaries and lists, and do so within the context of a real-world application.
CodeNeuro is a project run out of HHMI Janelia Farms which looks at designing algorithms to automatically identify neurons in time-lapse microscope data. The competition is called "NeuroFinder":
http://neurofinder.codeneuro.org/
The goal of the project is to use data that look like this
and automatically segment out all the neurons in the image, like so
As you can probably imagine, storing this information is tricky, requiring a great deal of specificity. To store this data, they use a data format we haven't covered before called JSON, or JavaScript Object Notation. It's an extremely flexible data storage format, using regular old text but structuring it in such a way that you can store pretty much any information you want. As a bonus, its structure maps really well to dictionaries.
In the CodeNeuro competition, then, the JSON format is used to submit predictions from participating competitors. Recalling the aforementioned goal of the competition (re: the two above images), the format of the JSON submissions is as follows:
The first layer is a list, where each item in the list is a dictionary. Each item (again, a dictionary) corresponds to a single dataset.
One of the dictionaries will contain two keys: dataset, which gives the name of the dataset as the value (a string), and regions, which contains a list of all the regions found in that dataset.
A single item in the list of regions is a dictionary, with one key: coordinates.
The value for coordinates is, again, a list, where each element of the list is an (x, y) pair that specifies a pixel in the region.
That's a lot, for sure. Here's an example of a JSON structure representing two different datasets, where one dataset has only 1 region and the other dataset has 2 regions:
```
'[
{"dataset": "d1", "regions":
[
{"coordinates": [[1, 2], [3, 4], [4, 5]]}
]
},
{"dataset": "d2", "regions":
[
{"coordinates": [[2, 3], [4, 10]]},
{"coordinates": [[20, 20], [20, 21], [22, 23]]}
]
}
]'
```
You have two datasets, d1 and d2, represented as two elements in the outermost list. Those two dictionaries have two keys, dataset (the name of the dataset) and regions (the list of regions outlining neurons present in that dataset). The regions field is a list of dictionaries, and the length of the list is how many distinct regions/neurons there are in that dataset. For example, in d1 above, there is only 1 neuron/region, but in d2, there are 2 neurons/regions. Each region is just a list of (x, y) tuple integers that specify a pixel in the image dataset that is part of the region.
WHEW. That's a lot. We'll try to start things off slowly.
Part A
Write a function which:
is named count_datasets
takes 1 argument: a string to a JSON file
returns 1 value: the number (integer) of datasets in the provided JSON file
The JSON file string provided to the function indicates the file on the hard disk that represents a submission for CodeNeuro. Your function should go through the JSON structure in this file, count the number of datasets present in it, and return an integer count of the number of datasets present in the JSON input file.
This function should read the file off the hard disk, count the number of datasets in the file, and return that number. It should also be able to handle file exceptions gracefully; if an error is encountered, return -1 to represent this. Otherwise, the return value should always be 0 or greater.
You can use the json Python library; otherwise, no other imports are allowed. Here is the Python JSON library documentation: https://docs.python.org/3/library/json.html Of particular note, for reading JSON files, is the json.load function.
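One possible sketch (assuming, per the format described above, that the top level of the file is a JSON list with one dictionary per dataset):

```python
import json

def count_datasets(json_file):
    """Return the number of datasets in a submission file, or -1 on any file error."""
    try:
        with open(json_file, "r") as f:
            submission = json.load(f)  # top level is a list of dataset dictionaries
    except (OSError, IOError, ValueError):
        return -1  # missing/unreadable file, or malformed JSON
    return len(submission)
```

Opening and parsing inside the try block lets a single except clause cover both filesystem errors and malformed JSON.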
End of explanation
"""
import json
d = json.loads(open("partial_1.json", "r").read())
assert d == get_dataset_by_index("submission_partial.json", 1)
d = json.loads(open("full_8.json", "r").read())
assert d == get_dataset_by_index("submission_full.json", 8)
try:
c = get_dataset_by_index("submission_partial.json", 5)
except:
assert False
else:
assert c is None
try:
c = get_dataset_by_index("submission_nonexistent.json", 4983)
except:
assert False
else:
assert c is None
"""
Explanation: Part B
Write a function which:
is named get_dataset_by_index
takes 2 arguments: the name of the JSON file on the filesystem (same as in Part A), and the integer index of the data to return from that JSON file
returns 1 value: the JSON dataset specified by the integer index argument
This function should return the dictionary corresponding to the dataset in the JSON file, or None if an invalid index is supplied (e.g. specified 10 when there are only 4 datasets, or a negative number, or a float/string/list/non-integer type). It should also be able to handle file-related errors.
You can use the json Python library; otherwise, no other imports are allowed.
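A sketch of one approach; the explicit type check is the main addition over Part A, since a float, string, or boolean index should also yield None:

```python
import json

def get_dataset_by_index(json_file, index):
    """Return the dataset dictionary at position `index`, or None on any error."""
    try:
        with open(json_file, "r") as f:
            submission = json.load(f)
    except (OSError, IOError, ValueError):
        return None
    # bool is a subclass of int in Python, so exclude it explicitly
    if not isinstance(index, int) or isinstance(index, bool):
        return None
    if index < 0 or index >= len(submission):
        return None
    return submission[index]
```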
End of explanation
"""
import json
d = json.loads(open("partial_1.json", "r").read())
assert d == get_dataset_by_name("submission_partial.json", "01.01.test")
d = json.loads(open("full_8.json", "r").read())
assert d == get_dataset_by_name("submission_full.json", "04.01.test")
try:
c = get_dataset_by_name("submission_partial.json", "nonexistent")
except:
assert False
else:
assert c is None
try:
c = get_dataset_by_name("submission_nonexistent.json", "02.00.test")
except:
assert False
else:
assert c is None
"""
Explanation: Part C
Write a function which:
is named get_dataset_by_name
takes 2 arguments: the name of the JSON file on the filesystem (same as in Part A and B), and a string indicating the name of the dataset to return
returns 1 value: the JSON dataset specified by the string name argument
This solution is functionally identical to get_dataset_by_index, except rather than retrieving a dataset by the integer index, you instead return a dataset by its string name.
This function should return the dictionary corresponding to the dataset in the JSON file, or None if an invalid name is supplied. It should also be able to handle file-related errors.
You can use the json Python library; otherwise, no other imports are allowed.
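A sketch along the same lines, scanning the list for a matching "dataset" value:

```python
import json

def get_dataset_by_name(json_file, name):
    """Return the dataset dictionary whose "dataset" field equals `name`, or None."""
    try:
        with open(json_file, "r") as f:
            submission = json.load(f)
    except (OSError, IOError, ValueError):
        return None
    for dataset in submission:
        if dataset.get("dataset") == name:
            return dataset
    return None  # no dataset with that name
```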
End of explanation
"""
assert 29476 == count_pixels_in_dataset("submission_full.json", "01.01.test")
assert 30231 == count_pixels_in_dataset("submission_full.json", "04.01.test")
try:
c = count_pixels_in_dataset("submission_partial.json", "02.00.test")
except:
assert False
else:
assert c == -1
"""
Explanation: Part D
Write a function which:
is named count_pixels_in_dataset
takes 2 arguments: the name of the JSON file on the filesystem (same as in Part A, B, and C), and the string name of the dataset to examine (same as Part C)
returns 1 number: the count of pixels found in all regions of the specified dataset
Each individual pixel is a single pair of (x, y) numbers (that counts as 1).
If any file-related errors are encountered, or an incorrect dataset name specified, the function should return -1.
You can use the json Python library, or other functions you've already written in this question; otherwise, no other imports are allowed.
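A sketch that reuses the same scan and then sums the coordinate lists, since each (x, y) pair counts as one pixel:

```python
import json

def count_pixels_in_dataset(json_file, name):
    """Count the (x, y) pairs across every region of the named dataset, or -1 on error."""
    try:
        with open(json_file, "r") as f:
            submission = json.load(f)
    except (OSError, IOError, ValueError):
        return -1
    for dataset in submission:
        if dataset.get("dataset") == name:
            # each region contributes one count per coordinate pair
            return sum(len(region.get("coordinates", []))
                       for region in dataset.get("regions", []))
    return -1  # no dataset with that name
```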
End of explanation
"""
|
km-Poonacha/python4phd | Session 2/ipython/Lesson 5- Crawl and scrape.ipynb | gpl-3.0 | import requests
url = 'http://www.tripadvisor.com/'
response = requests.get(url)
print(response.status_code)
#print(response.headers)
"""
Explanation: Lesson 5 - Crawl and Scrape
Making the request
Using the requests module
Use the requests module to make an HTTP request to http://www.tripadvisor.com
- Check the status of the request
- Display the response header information
End of explanation
"""
import requests
url = 'http://www.tripadvisor.com/robots.txt'
response = requests.get(url)
if response.status_code == 200:
print(response.status_code)
print(response.text)
else:
    print('Failed to get a response from the url. Error code: ', response.status_code)
"""
Explanation: Get the '/robots.txt' file contents
End of explanation
"""
import requests
url = 'http://tripadvisor.com'
response = requests.get(url)
if response.status_code == 200:
print(response.status_code)
print(response.text)
else:
    print('Failed to get a response from the url. Error code: ', response.status_code)
"""
Explanation: Get the HTML content from the website
End of explanation
"""
<h1 id="HEADING" property="name" class="heading_name ">
<div class="heading_height"></div>
"
Le Jardin Napolitain
"
</h1>
"""
Explanation: Scraping websites
Sometimes, you may want a little bit of information - a movie rating, stock price, or product availability - but the information is available only in HTML pages, surrounded by ads and extraneous content.
To retrieve it, we build an automated web fetcher called a crawler or spider. After the HTML contents have been retrieved from the remote web servers, a scraper parses them to find the needle in the haystack.
BeautifulSoup Module
The bs4 module can be used for searching a webpage (HTML file) and pulling required data from it. It does three things to make a HTML page searchable-
* First, converts the HTML page to Unicode, and HTML entities are converted to Unicode characters
* Second, parses (analyses) the HTML page using the best available parser. It will use an HTML parser unless you specifically tell it to use an XML parser
* Finally transforms a complex HTML document into a complex tree of Python objects.
This module takes the HTML page and creates four kinds of objects: Tag, NavigableString, BeautifulSoup, and Comment.
* The BeautifulSoup object itself represents the webpage as a whole
* A Tag object corresponds to an XML or HTML tag in the webpage
* A NavigableString object contains the bit of text within a tag
Read more about BeautifulSoup : https://www.crummy.com/software/BeautifulSoup/bs4/doc/
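As a quick illustration of these object types (the HTML fragment here is made up for the example, not taken from TripAdvisor):

```python
from bs4 import BeautifulSoup

html = '<html><body><h1 id="HEADING">Le Jardin Napolitain</h1></body></html>'
soup = BeautifulSoup(html, 'html.parser')  # the BeautifulSoup object: the page as a whole

tag = soup.find('h1')   # a Tag object for the <h1> element
print(tag.name)         # the tag's name: h1
print(tag['id'])        # attributes are accessed like a dictionary: HEADING
print(tag.string)       # a NavigableString holding the text inside the tag
```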
End of explanation
"""
import requests
from bs4 import BeautifulSoup
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html'
response = requests.get(scrape_url)
print(response.status_code)
if response.status_code == 200:
soup = BeautifulSoup(response.text, 'html.parser') # Soup
    print(soup.prettify())
"""
Explanation: Step 1: Making the soup
First, we need to use the BeautifulSoup module to parse the HTML data into a Python-readable Unicode format.
*Let us write the code to parse an HTML page. We will use the TripAdvisor URL for an infamous restaurant - https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html *
End of explanation
"""
<div class="entry">
<p class="partial_entry">
Popped in on way to Eiffel Tower for lunch, big mistake.
Pizza was disgusting and service was poor.
It’s a shame Trip Advisor don’t let you score venues zero....
<span class="taLnk ulBlueLinks" onclick="widgetEvCall('handlers.clickExpand',event,this);">More
</span>
</p>
</div>
"""
Explanation: Step 2: Inspect the element you want to scrape
In this step we will inspect the HTML data of the website to understand the tags and attributes that match the element. Let us inspect the HTML data of the URL and understand where (under which tag) the review data is located.
End of explanation
"""
import requests
from bs4 import BeautifulSoup
def scrapecontent(url):
"""This function parses the HTML page representing the url using the BeautifulSoup module
and returns the created python readable data structure (soup)"""
scrape_response = requests.get(url)
print(scrape_response.status_code)
if scrape_response.status_code == 200:
soup = BeautifulSoup(scrape_response.text, 'html.parser')
return soup
else:
print('Error accessing url : ',scrape_response.status_code)
return None
def main():
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html'
ret_soup = scrapecontent(scrape_url)
if ret_soup:
for review in ret_soup.find_all('p', class_='partial_entry'):
print(review.text) #We are interested only in the text data, since the reviews are stored as text
main()
"""
Explanation: Step 3: Searching the soup for the data
Beautiful Soup defines a lot of methods for searching the parse tree (soup), the two most popular methods are: find() and find_all().
The simplest filter is a tag. Pass a tag to a search method and Beautiful Soup will perform a match against that exact string.
Let us try and find all the < p > (paragraph) tags in the soup:
End of explanation
"""
import requests
from bs4 import BeautifulSoup
def scrapecontent(url):
"""This function parses the HTML page representing the url using the BeautifulSoup module
and returns the created python readable data structure (soup)"""
scrape_response = requests.get(url)
print(scrape_response.status_code)
if scrape_response.status_code == 200:
soup = BeautifulSoup(scrape_response.text, 'html.parser')
return soup
else:
print('Error accessing url : ',scrape_response.status_code)
return None
def main():
page_no = 0
while(page_no < 60):
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-or'+str(page_no)+'-Cafe_Le_Dome-Paris_Ile_de_France.html'
ret_soup = scrapecontent(scrape_url)
if ret_soup:
for review in ret_soup.find_all('p', class_='partial_entry'):
print(review.text) #We are interested only in the text data, since the reviews are stored as text
page_no = page_no + 10
main()
"""
Explanation: Step 4: Enable pagination
Automatically access subsequent pages
End of explanation
"""
#Enter your code here
"""
Explanation: Using yesterday's sentiment analysis code and the corpus of sentiment scores found in the word_sentiment.csv file, calculate the sentiment of the reviews.
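One possible sketch, assuming word_sentiment.csv stores one word,score pair per line (both that layout and the function names here are assumptions for illustration, not part of the lesson):

```python
import csv

def load_sentiments(csv_path):
    """Read a word,score CSV into a dictionary (assumed two-column layout)."""
    sentiments = {}
    with open(csv_path, "r") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                sentiments[row[0].lower()] = float(row[1])
    return sentiments

def review_sentiment(review, sentiments):
    """Sum the scores of every word in the review that appears in the corpus."""
    return sum(sentiments.get(word.strip('.,!?').lower(), 0)
               for word in review.split())
```

Each review.text scraped above could then be passed through review_sentiment to get a crude positive/negative score.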
End of explanation
"""
import requests
from bs4 import BeautifulSoup
def scrapecontent(url):
"""This function parses the HTML page representing the url using the BeautifulSoup module
and returns the created python readable data structure (soup)"""
scrape_response = requests.get(url)
print(scrape_response.status_code)
if scrape_response.status_code == 200:
soup = BeautifulSoup(scrape_response.text, 'html.parser')
return soup
else:
print('Error accessing url : ',scrape_response.status_code)
return None
def main():
scrape_url = 'https://www.tripadvisor.com/Restaurant_Review-g187147-d1751525-Reviews-Cafe_Le_Dome-Paris_Ile_de_France.html'
ret_soup = scrapecontent(scrape_url)
if ret_soup:
for rev_data in ret_soup.find_all('div', class_= 'review-container'):
date = rev_data.find('span', class_ ='ratingDate')# Get the date if the review
print(date.text)
review = rev_data.find('p') # Get the review text
print(review.text)
rating = rev_data.find('span',class_='ui_bubble_rating') #Get the rating of the review
print(int(rating['class'][1][7:])/10)
main()
"""
Explanation: Expanding this further
To add additional details we can inspect the tags further and add the reviewer rating and reviwer details.
End of explanation
"""
|
xaibeing/cn-deep-learning | tv-script-generation/dlnd_tv_script_generation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
word_set = set(text)
int_to_vocab = {ii: word for ii, word in enumerate(word_set)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
return {'.' : '||Period||',
',' : '||Comma||',
'"' : '||QuotationMark||',
';' : '||Semicolon||',
'!' : '||ExclamationMark||',
'?' : '||QuestionMark||',
'(' : '||LeftParentheses||',
')' : '||RightParentheses||',
'--' : '||Dash||',
'\n' : '||Return||'}
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
#The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length).
#Here the "2" means the input and the target
#Each batch contains two elements:
# The first element is a single batch of input with the shape [batch size, sequence length]
# The second element is a single batch of targets with the shape [batch size, sequence length]
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
# The input shape should be (batch_size, seq_length)
input = tf.placeholder(tf.int32, shape=(None, None), name='input')
    # The targets shape should be (batch_size, seq_length)
output = tf.placeholder(tf.int32, shape=(None, None), name='output')
learning_rate = tf.placeholder(tf.float32, shape=None, name='learning_rate')
return input, output, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
# num LSTM layers
num_layers = 1
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cells = tf.contrib.rnn.MultiRNNCell(num_layers * [cell])
initial_state = cells.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cells, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
# calculate from input_data to embed output
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
# TODO: Implement Function
embedding = tf.Variable(tf.truncated_normal(shape=[vocab_size, embed_dim], mean=0, stddev=1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, input_data) # use tf.nn.embedding_lookup to get the hidden layer output
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
# calculate from embed output to LSTM output, fully dynamic unrolling of sequence steps
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.layers.dense(outputs, vocab_size)
return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# TODO: Implement Function
segment_len = (len(int_text) - 1) // batch_size
num_seqs = segment_len // seq_length
segment_len = num_seqs * seq_length
    batches = np.zeros(shape=(num_seqs, 2, batch_size, seq_length), dtype=np.int32)
    for s in range(num_seqs):
        for b in range(batch_size):
batches[s, 0, b, :] = int_text[b*segment_len+s*seq_length : b*segment_len+s*seq_length+seq_length]
batches[s, 1, b, :] = int_text[b*segment_len+s*seq_length+1 : b*segment_len+s*seq_length+seq_length+1]
return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
"""
# Number of Epochs
num_epochs = 80
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 128
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 26
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
return loaded_graph.get_tensor_by_name('input:0'), \
loaded_graph.get_tensor_by_name('initial_state:0'), \
loaded_graph.get_tensor_by_name('final_state:0'), \
loaded_graph.get_tensor_by_name('probs:0')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
rnd_idx = np.random.choice(len(probabilities), p=probabilities)
return int_to_vocab[rnd_idx]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
chrlttv/Teaching | Session5/1_MachineLearning_Regression_keys.ipynb | mit | # Write code to import required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# For visualizing plots in this notebook
%matplotlib inline
"""
Explanation: Machine Learning: Regression
Supervised machine learning algorithms involve a target/outcome variable (the dependent variable) that is to be predicted from a given set of features/predictors (independent variables). Using this set of features, we generate a function that maps inputs to the desired outputs. The training process continues until the model achieves a desired level of accuracy/performance on the training data. A supervised learning problem is called regression when the output variable is continuous-valued.
Objective of this notebook
In this notebook you will explore machine learning regression with Python Scikit-Learn library.
For questions, comments and suggestions, please contact parantapa[dot]goswami[at]viseo[dot]com
Import basic libraries
Initially we require:
1. pandas: to store data efficiently
2. numpy: for matrix operations
3. matplotlib.pyplot: for data visualization
End of explanation
"""
# We start by importing the data using pandas
# Hint: use "read_csv" method, Note that comma (",") is the field separator, and we have no "header"
housing = pd.read_csv('housing_price.txt', sep=",", header=None)
# We name the columns based on above features
housing.columns = ["Area", "Bedrooms", "Price"]
# We sneak peek into the data
# Hint: use dataframe "head" method with "n" parameter
housing.head(n=5)
"""
Explanation: Housing Price Dataset
This small dataset is a collection of housing prices in Portland, Oregan, USA. It contains collected information on some houses sold and the prices.
Each house is described with following features:
1. size of the house in square feet
2. number of bedrooms
House prices are in US dollars.
Importing the data
The data is provided in the file housing_price.txt. Use read_csv() module from pandas to import the data.
End of explanation
"""
# Write code to get a summary of the data
# Hint: use "DataFrame.describe()" on our dataframe housing
housing.describe()
"""
Explanation: Statistical summary of the data
Sometimes it is better to have a statistical summary of the data at hand. Use DataFrame.describe() to get a summary of the data with various statistical quantities.
End of explanation
"""
# Write code to create Scatter Plot between "Area" and "Price"
# Hint: use "DataFrame.plot.scatter()" on our dataframe housing,
# mention the "x" and "y" axis features
housing.plot.scatter(x="Area", y="Price")
"""
Explanation: Visualize the data
Initially, we will use only Area to predict Price. So, it is recommended to visualize Area against Price.
As, we wish to see correlation between two continuous variables, we will use scatter plot.
End of explanation
"""
# Write code to convert desired dataframe columns into numpy arrays
# Hint: "columns" atttribute of DataFrame.as_matrix() accepts only list.
# Even if you wish to select only one column, you have to pass it in a list.
X = housing.as_matrix(columns=["Area"])
y = housing.as_matrix(columns=["Price"])
"""
Explanation: Training a Univariate Linear Regression Model
You will now train a Linear Regression model using "Area" feature to predict "Price".
Note: All machine learning algorithm implementations work efficiently with numpy matrices and arrays. The current format of our data is pandas dataframes. Fortunately, pandas provides the DataFrame.as_matrix() method to convert dataframes to numpy arrays. It accepts a columns attribute to convert only certain columns of the dataframe.
Question: What is your input X here? What is your output y here?
End of explanation
"""
# Write code to learn a linear regression model on housing price dataset
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
"""
Explanation: Using the following step, train a Linear Regression model on housing price dataset:
1. import LinearRegression module from sklearn
2. create an instance of LinearRegression and fit with the X and y arrays you created
End of explanation
"""
# Write code to predict prices using the trained LinearRegression
y_predicted = lin_reg.predict(X)
# Importing modules to calculate MSE
from sklearn.metrics import mean_squared_error
# Write code to calculate and print the MSE on the predicted values.
# Hint 1: use "mean_squared_error()" method
# Hint 2: you have to pass both original y and predicted y to compute the MSE.
mse = mean_squared_error(y, y_predicted)
print("MSE = ", mse)
"""
Explanation: Use your trained LinearRegression model to predict on the same dataset (i.e. "Area" features stored as numpy array X). Also calculate Mean Squared Error using the mean_squared_error() method from sklearn library.
End of explanation
"""
# Write code to get coefficient of determination using "score()"
# Hint: you have to pass both X and original y to score()
R2 = lin_reg.score(X, y)
print("R2 = ", R2)
"""
Explanation: Question: Why such a huge MSE?
Use LinearRegression.score() method to get coefficient of determination $R^2$ of the prediction. It is calculated based on MSE. The best possible score is $1.0$ and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of $y$, disregarding the input features, would get a $R^2$ score of $0.0$.
<font color=red>Warning</font>: You need to pass input matrix X and original output y to score() method. This method first performs testing with the trained model and then calculates the $R^2$ score.
End of explanation
"""
# Write code to create a scatter plot with the data as above.
# Then add the best line to that.
# Hint 1: store the returned "axes" object and then use "axes.plot()"
# to plot the best_line
# Hint 2: "axes.plot()" takes the X and the y_predicted arrays
ax = housing.plot.scatter(x="Area", y="Price")
ax.plot(X, y_predicted, "r")
"""
Explanation: Now, we will visualize the predicted prices along with the actual data.
Note: DataFrame.plot.scatter() returns a axes object. You can use that axes object to add more visualizations to the same plot.
End of explanation
"""
# Write code to convert desired dataframe columns into numpy arrays
X = housing.as_matrix(columns=["Area","Bedrooms"])
y = housing.as_matrix(columns=["Price"])
# Write code to train a LinearRegression model
lin_reg = LinearRegression()
lin_reg.fit(X, y)
# Write code to calculate and print the R2 score
R2 = lin_reg.score(X, y)
print("R2 = ", R2)
"""
Explanation: Training a Multivariate Linear Regression Model
You will now train a Linear Regression model using both "Area" and "Bedrooms" features to predict "Price".
Question: What is your input X here? What is your output y here?
Note that only your input changes. Nothing else changes in your implementation.
End of explanation
"""
# Write code to create a 3D scatter plot for "Area", "Bedroom" and actual "Price"
# Then add visualization of "Area" and "Bedroom" against the predicted price.
from mpl_toolkits.mplot3d import Axes3D
y_pred = lin_reg.predict(X)
fig_scatter = plt.figure()
ax = fig_scatter.add_subplot(111, projection='3d')
ax.scatter(housing["Area"], housing["Bedrooms"], housing["Price"])
ax.scatter(housing["Area"], housing["Bedrooms"], y_pred)
ax.set_xlabel("Area")
ax.set_ylabel("Bedrooms")
ax.set_zlabel("Price")
"""
Explanation: Homework
End of explanation
"""
# Write code to import the data using pandas
# Hint: note that comma (",") is the field separator, and we have no "header"
energy = pd.read_csv("energy_efficiency.csv", sep=",", header=None)
# We name the columns based on above features
energy.columns = ["Compactness","Surface","Wall", "Roof", "Heiht",
"Orientation","Glazing","GlazingDist", "Heating"]
# We sneak peek into the data
# Hint: use dataframe "head" method with "n" parameter
energy.head(n=5)
"""
Explanation: Supplementary Material
This portion of the notebook is for your own practice. Do not hesitate to contact me for any question.
Energy Efficiency Dataset
This dataset poses the regression problem of predicting heating load of different buildings. More details of this dataset can be found in this link. The buildings are described using following 8 features:
1. Relative Compactness
2. Surface Area
3. Wall Area
4. Roof Area
5. Overall Height
6. Orientation
7. Glazing Area
8. Glazing Area Distribution
Import the data
The data is provided in the file energy_efficiency.csv. Like before, use read_csv() module from pandas to import the data.
End of explanation
"""
# Write code to convert desired dataframe columns into numpy arrays
X = energy.as_matrix(columns=["Compactness","Surface","Wall", "Roof", "Heiht",
"Orientation","Glazing","GlazingDist"])
y = energy.as_matrix(columns=["Heating"])
"""
Explanation: Regression Models on Energy Efficiency Dataset
In this section, we will train various regression models on the Energy Efficiency Dataset. We will measure the performances in terms of $R^2$ and compare their performances.
Inputs and Outputs
First, identify the inputs and outputs, and convert them to numpy matrices.
End of explanation
"""
# Importing the module
from sklearn.model_selection import train_test_split
# Write code for splitting the data into train and test sets.
# Hint: use "train_test_split" on X and y, and test size should be 0.2 (20%)
X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.2)
"""
Explanation: Training Set and Testing Set
A portion of the dataset is to be set aside to be used only for testing. Fortunately sklearn provides a train_test_split() module to do that. You can specify a ratio for test_size parameter.
In this exercise, we will retain $20\%$ of the data for testing.
End of explanation
"""
# Write code to train a Linear Regression model and to test its performance on the test set
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
lin_reg_R2 = lin_reg.score(X_test, y_test)
print("Linear Regression R2 = ", lin_reg_R2)
"""
Explanation: LinearRegression
Train a LinearRegression model:
1. Create an instance of sklearn's LinearRegression.
2. Use the training set X_train and y_train to fit.
3. Use the test set X_test and y_test to measure $R^2$ using the score method.
End of explanation
"""
# Write code to import necessary module for SVR
from sklearn.svm import SVR
# Write code to train a SVR model and to test its performance on the test set
svr = SVR()
svr.fit(X_train, y_train)
svr_R2 = svr.score(X_test, y_test)
print("Support Vector Regression R2 = ", svr_R2)
"""
Explanation: Support Vector Regression (SVR)
Use SVR module from sklearn.
End of explanation
"""
# Write code to import necessary module for RandomForestRegressor
from sklearn.ensemble import RandomForestRegressor
# Write code to train a RandomForestRegressor model and to test its performance on the test set
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
rfr_R2 = rfr.score(X_test, y_test)
print("Random Forest Regressor R2 = ", rfr_R2)
"""
Explanation: Random Forest Regressor
Use RandomForestRegressor from sklearn library.
End of explanation
"""
|
calico/basenji | tutorials/archive/genes.ipynb | apache-2.0 | import os, subprocess
if not os.path.isfile('data/hg19.ml.fa'):
subprocess.call('curl -o data/hg19.ml.fa https://storage.googleapis.com/basenji_tutorial_data/hg19.ml.fa', shell=True)
subprocess.call('curl -o data/hg19.ml.fa.fai https://storage.googleapis.com/basenji_tutorial_data/hg19.ml.fa.fai', shell=True)
"""
Explanation: Although Basenji is unaware of the locations of known genes in the genome, we can go in afterwards and ask what a model predicts for those locations to interpret it as a gene expression prediction.
To do this, you'll need
* Trained model
* Gene Transfer Format (GTF) gene annotations
* BigWig coverage tracks
* Gene sequences saved in my HDF5 format.
First, make sure you have an hg19 FASTA file visible. If you have it already, put a symbolic link into the data directory. Otherwise, I have a machine learning friendly simplified version you can download in the next cell.
End of explanation
"""
if not os.path.isfile('data/CNhs11760.bw'):
subprocess.call('curl -o data/CNhs11760.bw https://storage.googleapis.com/basenji_tutorial_data/CNhs11760.bw', shell=True)
subprocess.call('curl -o data/CNhs12843.bw https://storage.googleapis.com/basenji_tutorial_data/CNhs12843.bw', shell=True)
subprocess.call('curl -o data/CNhs12856.bw https://storage.googleapis.com/basenji_tutorial_data/CNhs12856.bw', shell=True)
"""
Explanation: Next, let's grab a few CAGE datasets from FANTOM5 related to heart biology.
These data were processed by
1. Aligning with Bowtie2 with very sensitive alignment parameters.
2. Distributing multi-mapping reads and estimating genomic coverage with bam_cov.py
End of explanation
"""
lines = [['index','identifier','file','clip','sum_stat','description']]
lines.append(['0', 'CNhs11760', 'data/CNhs11760.bw', '384', 'sum', 'aorta'])
lines.append(['1', 'CNhs12843', 'data/CNhs12843.bw', '384', 'sum', 'artery'])
lines.append(['2', 'CNhs12856', 'data/CNhs12856.bw', '384', 'sum', 'pulmonic_valve'])
samples_out = open('data/heart_wigs.txt', 'w')
for line in lines:
print('\t'.join(line), file=samples_out)
samples_out.close()
"""
Explanation: Then we'll write out these BigWig files and labels to a samples table.
End of explanation
"""
! basenji_hdf5_genes.py -g data/human.hg19.genome -l 131072 -c 0.333 -p 3 -t data/heart_wigs.txt -w 128 data/hg19.ml.fa data/gencode_chr9.gtf data/gencode_chr9.h5
"""
Explanation: Predictions in the portion of the genome that we trained on might inflate our accuracy, so we'll focus on chr9 genes, which have formed my typical test set. Then we use basenji_hdf5_genes.py to create the file.
The most relevant options are:
| Option/Argument | Value | Note |
|:---|:---|:---|
| -g | data/human.hg19.genome | Genome assembly chromosome length to bound gene sequences. |
| -l | 131072 | Sequence length. |
| -c | 0.333 | Multiple genes per sequence are allowed, but the TSS must be in the middle 1/3 of the sequence. |
| -p | 3 | Use 3 threads. |
| -t | data/heart_wigs.txt | Save coverage values from this table of BigWig files. |
| -w | 128 | Bin the coverage values at 128 bp resolution. |
| fasta_file | data/hg19.ml.fa | Genome FASTA file for extracting sequences. |
| gtf_file | data/gencode_chr9.gtf | Gene annotations in gene transfer format. |
| hdf5_file | data/gencode_chr9.h5 | Gene sequence output HDF5 file. |
End of explanation
"""
if not os.path.isdir('models/heart'):
os.mkdir('models/heart')
if not os.path.isfile('models/heart/model_best.tf.meta'):
subprocess.call('curl -o models/heart/model_best.tf.index https://storage.googleapis.com/basenji_tutorial_data/model_best.tf.index', shell=True)
subprocess.call('curl -o models/heart/model_best.tf.meta https://storage.googleapis.com/basenji_tutorial_data/model_best.tf.meta', shell=True)
subprocess.call('curl -o models/heart/model_best.tf.data-00000-of-00001 https://storage.googleapis.com/basenji_tutorial_data/model_best.tf.data-00000-of-00001', shell=True)
"""
Explanation: Now, you can either train your own model in the Train/test tutorial or download one that I pre-trained.
End of explanation
"""
! basenji_test_genes.py -o output/gencode_chr9_test --rc -s --table models/params_small.txt models/heart/model_best.tf data/gencode_chr9.h5
"""
Explanation: Finally, you can offer data/gencode_chr9.h5 and the model to basenji_test_genes.py to make gene expression predictions and benchmark them.
The most relevant options are:
| Option/Argument | Value | Note |
|:---|:---|:---|
| -o | data/gencode_chr9_test | Output directory. |
| --rc | | Average the forward and reverse complement to form prediction. |
| -s | | Make scatter plots, comparing predictions to experiment values. |
| --table | | Print gene expression table. |
| params_file | models/params_small.txt | Table of parameters to setup the model architecture and optimization. |
| model_file | models/heart/model_best.tf | Trained saved model prefix. |
| genes_hdf5_file | data/gencode_chr9.h5 | HDF5 file containing the gene sequences, annotations, and experiment values. |
End of explanation
"""
! cat output/gencode_chr9_test/gene_cors.txt
"""
Explanation: In the output directory output/gencode_chr9_test/ are several tables and plots describing gene prediction accuracy. For example gene_cors.txt contains Spearman and Pearson correlations for predictions versus experimental measurements for all genes and nonzero genes.
End of explanation
"""
! gunzip -c output/gencode_chr9_test/gene_table.txt.gz | head
"""
Explanation: gene_table.txt.gz contains specific gene predictions and experimental measurements.
End of explanation
"""
from IPython.display import IFrame
IFrame('output/gencode_chr9_test/gene_scatter0.pdf', width=600, height=500)
"""
Explanation: And gene_scatterX.pdf plots gene predictions versus experimental measurements for each dataset indexed by X.
End of explanation
"""
|
AllenDowney/ThinkBayes2 | examples/beta.ipynb | mit | trials = 174
successes = 173
failures = trials-successes
failures
"""
Explanation: Jeffreys interval
Copyright 2020 Allen Downey
MIT License
Suppose you have run 174 trials and 173 were successful. You want to report an estimate of the probability of success and a confidence interval for the estimate.
According to our friends at Wikipedia there are several ways to compute it, based on different assumptions and requirements.
In my opinion, the clear best option is the Jeffreys interval.
The Jeffreys interval has a Bayesian derivation, but it has good frequentist properties. In particular, it has coverage properties that are similar to those of the Wilson interval, but it is one of the few intervals with the advantage of being equal-tailed (e.g., for a 95% confidence interval, the probabilities of the interval lying above or below the true value are both close to 2.5%). In contrast, the Wilson interval has a systematic bias such that it is centred too close to p = 0.5.
The Jeffreys interval is the Bayesian credible interval obtained when using the non-informative Jeffreys prior for the binomial proportion p. The Jeffreys prior for this problem is a Beta distribution with parameters (1/2, 1/2), it is a conjugate prior. After observing x successes in n trials, the posterior distribution for p is a Beta distribution with parameters (x + 1/2, n – x + 1/2).
When x ≠0 and x ≠ n, the Jeffreys interval is taken to be the 100(1 – α)% equal-tailed posterior probability interval, i.e., the α / 2 and 1 – α / 2 quantiles of a Beta distribution with parameters (x + 1/2, n – x + 1/2). These quantiles need to be computed numerically, although this is reasonably simple with modern statistical software.
In order to avoid the coverage probability tending to zero when p → 0 or 1, when x = 0 the upper limit is calculated as before but the lower limit is set to 0, and when x = n the lower limit is calculated as before but the upper limit is set to 1.
In my opinion, that sentence is an unnecessary hack.
Here's how to compute a Jeffreys interval for the example.
End of explanation
"""
from scipy.stats import beta
dist = beta(successes+1/2, failures+1/2)
"""
Explanation: Here's a beta distribution that represents the posterior distribution for the proportion, assuming a Jeffreys prior.
End of explanation
"""
estimate = dist.mean() * 100
estimate
"""
Explanation: I think the best point estimate is the posterior mean:
End of explanation
"""
p = 0.95
a = (1-p)
a, 1-a
ci = dist.ppf([a/2, 1-a/2]) * 100
ci
"""
Explanation: Here's the confidence interval.
End of explanation
"""
|
chinapnr/python_study | Python 基础课程/Python Basic Lesson 16 - 函数式编程.ipynb | gpl-3.0 | # 将函数作为值返回
def lazy_sum(*args):
def sum():
ax = 0
for n in args:
ax = ax + n
return ax
return sum
f = lazy_sum(1, 3, 5, 7, 9)
print(f())
"""
Explanation: Lesson 16
v1.1, 2020.5, 2020.6 edit by David Yi
Topics for this session
Closures: returning a function as a value
Partial functions
Higher-order functions: map/reduce/filter
End of explanation
"""
# Base conversion with int()
print(int(12345))
print(int('1000',base=2))
print(int('1A',base=16))
"""
Explanation: Closures
In this example, we define a function sum inside the function lazy_sum. The inner function sum can reference the parameters and local variables of the outer function lazy_sum. When lazy_sum returns sum, those parameters and variables are preserved inside the returned function; this construct is called a closure.
A function can return a computed result, or it can return another function.
When a function is returned, it has not been executed yet, and the returned function should not reference any variable that may change later.
Closures are tied to function scope; understanding them pays off in more advanced Python development, especially decorators.
Partial functions
Python's functools module provides many useful features, and one of them is the partial function. Note that this is not the same as a partial function in the mathematical sense.
When we introduced function parameters, we saw that setting default parameter values makes a function easier to call. Partial functions can achieve the same thing.
End of explanation
"""
import functools
int2 = functools.partial(int, base=2)
print(int2('1000000'))
print(int2('1010101'))
# Partial function example
# The original function
def func(x=2,y=3,z=4):
return x+y+z
print(func(x=3))
print(func(y=6))
print(func(x=4,y=10))
print(func(2,3))
# Build a partial function with preset default values
import functools
f1 = functools.partial(func, x=2,z=3)
print(f1(y=3))
print(f1(y=2))
print(f1(2)) # Raises TypeError: the positional argument 2 conflicts with the preset x=2
"""
Explanation: Default parameters are easy to use, but if a certain scenario requires many calls they become inconvenient, especially for functions with many parameters, where they make the program look complicated. Remember the earlier max/min example? Partial functions can solve that kind of problem.
We build one with functools.partial.
End of explanation
"""
# map() example
def f(x):
return x * x
r = map(f, [1, 2, 3, 4, 5, 6, 7, 8, 9])
for i in r:
print(i)
# The function f(x) can be more complex and contain more logic
def f(x):
y = x * x + 3
return y
r = map(f, [1, 2, 3, 4, 5, 6, 7, 8, 9])
for i in r:
print(i)
# The data fed to map can also be more complex
def f(x):
y = x * x + 3
return y
list1 = [x for x in range(1,100,7) if x % 2 ==0]
print(list1)
# The main program stays concise
r = map(f, list1)
for i in r:
print(i)
# map can also operate on two sequences at the same time
def addition(x, y):
return x + y
numbers1 = [5, 6, 2, 8]
numbers2 = [7, 1, 4, 9]
result = map(addition, numbers1, numbers2)
print(list(result))
# A more involved use of map
def multiply(x):
return (x*x)
def add(x):
return (x+x)
func = [multiply, add]
for i in range(5):
value = list(map(lambda x: x(i), func))
print(value)
"""
Explanation: The map() function
Python has the built-in functions map() and reduce(), both of them powerful.
Let's look at map first. map() takes two arguments: a function and an iterable sequence such as a list. map applies the given function to each element of the sequence in turn and returns the results as a new iterator.
For example, say we have a function f(x) = x*x and want to apply it to the list [1, 2, 3, 4, 5, 6, 7, 8, 9]. With map() this looks like:
If you know Google's famous paper "MapReduce: Simplified Data Processing on Large Clusters", you already have a rough idea of the map/reduce concept. In the Hadoop era, the map/reduce model was the core of efficient parallel computation.
End of explanation
"""
# reduce example: an addition function
from functools import reduce
def add(x, y):
return x + y
print(reduce(add, [1, 3, 5, 7, 9]))
# reduce: simulate a function that converts a string to an integer
from functools import reduce
def f(x, y):
return x * 10 + y
def char2int(s):
return {'0': 0, '1': 1, '2': 2, '3': 3, '4': 4, '5': 5,
'6': 6, '7': 7, '8': 8, '9': 9}[s]
def str2int(s):
return reduce(f, map(char2int, s))
print(str2int('13579'))
print(type(str2int('13579')))
# Break the function above apart: first the map step
list1 = map(char2int, '13579')
for i in list1:
print(i,type(i))
# Then the reduce step
list1 = map(char2int, '13579')
print(reduce(f,list1))
"""
Explanation: The reduce() function
Now let's look at reduce. reduce applies a function to a sequence [x1, x2, x3, ...]; the function must take two arguments, and reduce keeps accumulating the result with the next element of the sequence, so the effect is:
reduce(f, [x1, x2, x3, x4]) = f(f(f(x1, x2), x3), x4)
End of explanation
"""
# filter example: remove the even numbers from a list, keeping only the odd ones
# Return True when n is odd
def is_odd(n):
return n % 2 == 1
print(list(filter(is_odd, [1, 2, 4, 5, 6])))
# Drop the blank elements from a list
def is_empty(s):
# strip() removes the given characters (whitespace by default) from both ends of a string
if len(s.strip()) ==0:
return False
else:
return True
print(list(filter(is_empty, ['A', '', 'B','C', ' '])))
# Return the numbers in a range that are divisible by neither 2 nor 3
def f(x):
return x % 2 != 0 and x % 3 != 0
print(list(filter(f, range(2, 30))))
"""
Explanation: The filter() function
Python's built-in filter() function is used to filter a sequence.
Like map(), filter() takes a function and a sequence. Unlike map(), filter() applies the given function to each element in turn and uses the True/False return value to decide whether to keep or discard that element.
End of explanation
"""
|
ethen8181/machine-learning | python/pivot_table/pivot_table.ipynb | mit | # code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
import numpy as np
import pandas as pd
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
%watermark -a 'Ethen' -d -t -v -p numpy,pandas
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Pandas's-Pivot-Table" data-toc-modified-id="Pandas's-Pivot-Table-1"><span class="toc-item-num">1 </span>Pandas's Pivot Table</a></span><ul class="toc-item"><li><span><a href="#Pivot-the-Data" data-toc-modified-id="Pivot-the-Data-1.1"><span class="toc-item-num">1.1 </span>Pivot the Data</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
df = pd.read_excel('sales-funnel.xlsx')
df.head()
"""
Explanation: Pandas's Pivot Table
Following the tutorial from the following link. Blog: Pandas pivot table explained.
The General rule of thumb is that once you use multiple grouby you should evaluate whether a pivot table is a useful approach.
One of the challenges with using the panda’s pivot_table is making sure you understand your data and what questions you are trying to answer with the pivot table. It is a seemingly simple function but can produce very powerful analysis very quickly. In this scenario, we'll be tracking a sales pipeline (also called funnel). The basic problem is that some sales cycles are very long (e.g. enterprise software, capital equipment, etc.) and the managemer wants to understand it in more detail throughout the year. Typical questions include:
How much revenue is in the pipeline?
What products are in the pipeline?
Who has what products at what stage?
How likely are we to close deals by year end?
Many companies will have CRM tools or other software that sales uses to track the process, while they may be useful tools for analyzing the data, inevitably someone will export the data to Excel and use a PivotTable to summarize the data. Using a panda’s pivot table can be a good alternative because it is:
Quicker (once it is set up)
Self documenting (look at the code and you know what it does)
Easy to use to generate a report or email
More flexible because you can define custome aggregation functions
End of explanation
"""
df.pivot_table(index = ['Manager', 'Rep'], values = ['Price'])
"""
Explanation: Pivot the Data
As we build up the pivot table, it's probably easiest to take one step at a time. Add items and check each step to verify you are getting the results you expect.
The simplest pivot table must have a dataframe and an index, which stands for the column that the data will be aggregated upon and values, which are the aggregated value.
End of explanation
"""
# you can provide multiple arguments to almost every argument of the pivot_table function
df.pivot_table(index = ['Manager', 'Rep'], values = ['Price'], aggfunc = [np.mean, len])
"""
Explanation: By default, the values will be averaged, but we can do a count or a sum by providing the aggfunc parameter.
End of explanation
"""
df.pivot_table(index = ['Manager','Rep'], values = ['Price'],
columns = ['Product'], aggfunc = [np.sum])
"""
Explanation: If we want to see sales broken down by product, the columns parameter allows us to define one or more columns. Note: a common point of confusion with pivot_table is the distinction between columns and values. Columns are optional - they provide an additional way to segment the actual values you care about. The aggregation functions are applied to the values you've listed.
End of explanation
"""
df.pivot_table(index = ['Manager', 'Rep'], values = ['Price', 'Quantity'],
columns = ['Product'], aggfunc = [np.sum], fill_value = 0)
"""
Explanation: The NaNs are a bit distracting. If we want to remove them, we could use fill_value to set them to 0.
End of explanation
"""
df.pivot_table(index = ['Manager', 'Rep', 'Product'],
values = ['Price', 'Quantity'], aggfunc = [np.sum], margins = True)
"""
Explanation: You can move items to the index to get a different visual representation. The following code chunk removes Product from the columns and adds it to the index, and also uses the margins = True parameter to add totals to the pivot table.
End of explanation
"""
df['Status'] = df['Status'].astype('category')
df['Status'] = df['Status'].cat.set_categories(['won', 'pending', 'presented', 'declined'])
df.pivot_table(index = ['Manager', 'Status'], values = ['Price'],
aggfunc = [np.sum], fill_value = 0, margins = True)
"""
Explanation: We can define the status column as a category and set the order we want in the pivot table.
End of explanation
"""
table = df.pivot_table(index = ['Manager','Status'],
columns = ['Product'],
values = ['Quantity','Price'],
aggfunc = {'Quantity': len, 'Price': [np.sum, np.mean]},
fill_value = 0)
table
"""
Explanation: A really handy feature is the ability to pass a dictionary to the aggfunc so you can perform different functions on each of the values you select. This has a side-effect of making the labels a little cleaner.
End of explanation
"""
# .query uses strings for boolean indexing and we don't have to
# specify the dataframe that the Status is coming from
table.query("Status == ['pending','won']")
"""
Explanation: Once you have generated your data, it is in a DataFrame so you can filter on it using your standard DataFrame functions. e.g. We can look at all of our pending and won deals.
End of explanation
"""
|
moonbury/pythonanywhere | github/MasteringMatplotlib/mmpl-interaction.ipynb | gpl-3.0 | import matplotlib
matplotlib.use('nbagg')
"""
Explanation: Event Handling and Interactive Plots
In the following sections of this IPython Notebook we will be looking at the following:
matplotlib's event loop support
Basic Event Handling
List of supported events
Mouse events
Limitations of the IPython Notebook backend
Keyboard events
Axes and Figures events
Object picking
Compound Event Handling
Toolbar
Interactive panning and zooming of figures
Warm-up proceedures:
End of explanation
"""
import random
import sys
import time
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
from typecheck import typecheck
sys.path.append("../lib")
import topo
"""
Explanation: Notice that we've left out the following line from our usual notebook prelude:
%matplotlib inline
We've disabled inline so that we get access to the interactive mode. More on that later :-)
Let's continue with the necessary imports:
End of explanation
"""
palette_name = "husl"
colors = sns.color_palette(palette_name, 8)
colors.reverse()
# Note: the base class mpl.colors.Colormap is not directly usable for
# mapping data, so we build a concrete colormap from the seaborn palette.
cmap = mpl.colors.LinearSegmentedColormap.from_list(palette_name, colors)
"""
Explanation: Let's set up our colors for this notebook:
End of explanation
"""
x = True
while x:
time.sleep(1)
if random.random() < 0.15:
x = False
"""
Explanation: Event Loop Basics
Before we look at matplotlib's event loop support, let's do a quick survey of event loops and get a refresher on how they work. Here's a pretty simple "event" loop:
python
while True:
pass
That loop is not going be worth our while to execute in this notebook :-) So let's do another one, almost as simple, that has a good chance of exiting in under a minute:
End of explanation
"""
class EventLoop:
def __init__(self):
self.command = None
self.status = None
self.handlers = {"interrupt": self.handle_interrupt}
self.resolution = 0.1
def loop(self):
self.command = "loop"
while self.command != "stop":
self.status = "running"
time.sleep(self.resolution)
def start(self):
self.command = "run"
try:
self.loop()
except KeyboardInterrupt:
self.handle_event("interrupt")
def stop(self):
self.command = "stop"
@typecheck
def add_handler(self, fn: callable, event: str):
self.handlers[event] = fn
@typecheck
def handle_event(self, event: str):
self.handlers[event]()
def handle_interrupt(self):
print("Stopping event loop ...")
self.stop()
"""
Explanation: This loop only handles one "event": the change of a value from True to False. That loop will continue to run until the condition for a false value of x is met (a random float under a particular threshold).
So what relation do these simple loops have with the loops that power toolkits like GTK and Qt or frameworks like Twisted and Tornado? Usually event systems have something like the following:
* a way to start the event loop
* a way to stop the event loop
* providing a means for registering events
* providing a means for responding to events
During each run, a loop will usually check a data structure to see if there are any new events that have occurred since the last time it looped. In a network event system, each loop might check to see if any file descriptors are ready for reading or writing. In a GUI toolkit, each loop might check to see if any clicks or button presses had occurred.
Given the simple criteria above, let's try building a minimally demonstrative, if not useful, event loop. To keep this small, we're not going to integrate with socket or GUI events. The event that our loop will respond to will be quite minimal indeed.
End of explanation
"""
el = EventLoop()
el.start()
"""
Explanation: Here's what we did:
Created a class that maintains a data structure for event handlers
We also added a default handler for the "interrupt" event
Created a loop method
Created methods for starting and stopping the loop (via an attribute change)
In our start method, we check for an interrupt signal, and fire off an interrupt handler for said signal
Created a method for adding event handlers to the handler data structure (should we want to add more)
Let's create an instance and start it up:
End of explanation
"""
def press_callback(event):
    if event.inaxes is None:  # clicks outside the axes have xdata/ydata of None
        return
    event.canvas.figure.text(event.xdata, event.ydata, '<- clicked here')
def release_callback(event):
event.canvas.figure.show()
(figure, axes) = plt.subplots()
press_conn_id = figure.canvas.mpl_connect('button_press_event', press_callback)
release_conn_id = figure.canvas.mpl_connect('button_release_event', release_callback)
plt.show()
"""
Explanation: When you evaluate that cell, IPython will display the usual indicator that a cell is continuing to run:
In [*]:
As soon as you're satisfied that the loop is merrily looping, go up to the IPython Notebook menu and select "Kernel" -> "Interrupt". The cell with the loop in it should finish, showing not only an In number instead of an asterisk; our interrupt handler should also have printed a status message.
Though this event loop is fairly different from those that power networking libraries or GUI toolkits, it's very close (both in nature and code) to the default event loops matplotlib provides for its canvas objects. As such, this is a perfect starting place for your deeper understanding of matplotlib. To continue in this vein, reading the matplotlib backend source code would serve you well.
Standard Event Handling in matplotlib
With some event loop knowledge under our belts, we're ready to start working with matplotlib events.
Below is the list of supported events in matplotlib as of version 1.4:
| Event name | Class and description |
|-------------------------|------------------------------------------------------|
|button_press_event | MouseEvent - mouse button is pressed |
|button_release_event | MouseEvent - mouse button is released |
|draw_event | DrawEvent - canvas draw |
|key_press_event | KeyEvent - key is pressed |
|key_release_event | KeyEvent - key is released |
|motion_notify_event | MouseEvent - mouse motion |
|pick_event | PickEvent - an object in the canvas is selected |
|resize_event | ResizeEvent - figure canvas is resized |
|scroll_event | MouseEvent - mouse scroll wheel is rolled |
|figure_enter_event | LocationEvent - mouse enters a new figure |
|figure_leave_event | LocationEvent - mouse leaves a figure |
|axes_enter_event | LocationEvent - mouse enters a new axes |
|axes_leave_event | LocationEvent - mouse leaves an axes |
We'll discuss some of these below in more detail. With that information in hand, you should be able to tackle problems with any of the supported events in matplotlib.
Mouse Events
In the next cell, we will define a couple of callback functions, and then connet these to specific canvas events.
Go ahead and render the cell then click on the display plot a couple of times:
End of explanation
"""
class Callbacks:
def __init__(self):
(figure, axes) = plt.subplots()
axes.set_aspect(1)
figure.canvas.mpl_connect('button_press_event', self.press)
figure.canvas.mpl_connect('button_release_event', self.release)
def start(self):
plt.show()
def press(self, event):
self.start_time = time.time()
def release(self, event):
self.end_time = time.time()
self.draw_click(event)
def draw_click(self, event):
size = 4 * (self.end_time - self.start_time) ** 2
c1 = plt.Circle([event.xdata, event.ydata], 0.002,)
c2 = plt.Circle([event.xdata, event.ydata], 0.02 * size, alpha=0.2)
event.canvas.figure.gca().add_artist(c1)
event.canvas.figure.gca().add_artist(c2)
event.canvas.figure.show()
cbs = Callbacks()
cbs.start()
"""
Explanation: Our callbacks display a little note close to each $(x, y)$ coordinate where we clicked (the location is not exact due to font-sizing, etc.) If we use a graphical indication as opposed to a textual one, we can get much better precision:
End of explanation
"""
class LineBuilder:
def __init__(self, event_name='button_press_event'):
(self.figure, self.axes) = plt.subplots()
plt.xlim([0, 10])
plt.ylim([0, 10])
(self.xs, self.ys) = ([5], [5])
(self.line,) = self.axes.plot(self.xs, self.ys)
self.axes.set_title('Click the canvas to build line segments...')
self.canvas = self.figure.canvas
self.conn_id = self.canvas.mpl_connect(event_name, self.callback)
def start(self):
plt.show()
def update_line(self, event):
self.xs.append(event.xdata)
self.ys.append(event.ydata)
self.line.set_data(self.xs, self.ys)
def callback(self, event):
if event.inaxes != self.line.axes:
return
self.update_line(event)
self.canvas.draw()
lb = LineBuilder()
lb.start()
"""
Explanation: As you can see, we changed the callback to display a circle instead of text. If you choose to press and hold, and then release a bit later, you will see that a second, transparent circle is displayed. The longer you hold, the larger the second transparent circle will be.
Let's try something a little more involved, adapted from the line-drawing example in the "Event handling and picking" chapter of the matplotlib Advanced Guide:
End of explanation
"""
from matplotlib import widgets
from matplotlib.backend_bases import MouseEvent
def get_sine_data(amplitude=5, frequency=3, time=None):
return amplitude * np.sin(2 * np.pi * frequency * time)
class SineSliders:
def __init__(self, amplitude=5, frequency=3):
(self.figure, _) = plt.subplots()
self.configure()
self.a0 = amplitude
self.f0 = frequency
self.time = np.arange(0.0, 1.0, 0.001)
self.data = get_sine_data(
amplitude=self.a0, frequency=self.f0, time=self.time)
(self.line,) = plt.plot(self.time, self.data, lw=2, color='red')
self.axes_amp = plt.axes([0.25, 0.15, 0.65, 0.03])
self.axes_freq = plt.axes([0.25, 0.1, 0.65, 0.03])
self.setup_sliders()
self.setup_reset_button()
self.setup_color_selector()
def start(self):
plt.show()
def configure(self):
plt.subplots_adjust(left=0.25, bottom=0.25)
plt.axis([0, 1, -10, 10])
def setup_sliders(self):
self.slider_amp = widgets.Slider(
self.axes_amp, 'Amp', 0.1, 10.0, valinit=self.a0)
self.slider_freq = widgets.Slider(
self.axes_freq, 'Freq', 0.1, 30.0, valinit=self.f0)
self.slider_freq.on_changed(self.update)
self.slider_amp.on_changed(self.update)
    def setup_reset_button(self):
        reset_axes = plt.axes([0.8, 0.025, 0.1, 0.04])
        # Keep references on self; otherwise the widgets can be garbage
        # collected and their callbacks stop firing.
        self.reset_button = widgets.Button(reset_axes, 'Reset', hovercolor='0.975')
        self.reset_button.on_clicked(self.reset)
    def setup_color_selector(self):
        radio_axes = plt.axes([0.025, 0.5, 0.15, 0.15], aspect=1)
        self.radio_select = widgets.RadioButtons(
            radio_axes, ('red', 'blue', 'green',), active=0)
        self.radio_select.on_clicked(self.switchcolor)
def update(self, val):
self.data = get_sine_data(self.slider_amp.val,
self.slider_freq.val,
self.time)
self.line.set_ydata(self.data)
self.figure.canvas.draw()
def reset(self, event):
self.slider_freq.reset()
self.slider_amp.reset()
def switchcolor(self, label):
self.line.set_color(label)
self.figure.canvas.draw()
sldrs = SineSliders(amplitude=0.5, frequency=20)
sldrs.start()
"""
Explanation: For dessert, here's the slider demo from matplotlib:
End of explanation
"""
sorted(set(mpl.rcsetup.interactive_bk + mpl.rcsetup.non_interactive_bk + mpl.rcsetup.all_backends))
"""
Explanation: Limitations of nbagg
The IPython Notebook AGG backend currently doesn't provide support for the following matplotlib events:
* key_press
* scroll_event (mouse scrolling)
* mouse right click
* mouse doubleclick
Also, mouse movement events can be a little inconsistent (this can be especially true if your browser or other application is running at a significant CPU%, causing events to be missed in matplotlib running in an IPython notebook).
However, we can still use IPython while switching to a new backend for matplotlib. To see which backends are available to you:
End of explanation
"""
plt.switch_backend('MacOSX')
"""
Explanation: Currently keyboard events aren't supported by IPython and the matplotlib nbagg backend. So, for this section, we'll switch over to your default platform's GUI toolkit in matplotlib.
You have two options for the remainder of this notebook:
Use IPython from a terminal, or
Switch backends in this notebook.
For terminal use, change directory to where you cloned this notebook's git repo and then fire up IPython:
bash
$ cd interaction
$ make repl
The repl target is a convenience that uses a Python virtual environment and the downloaded dependencies for this notebook. Once you're at the IPython prompt, you may start entering code with automatically-configured access to the libraries needed by this notebook.
If you would like to continue using this notebook instead of switching to the terminal, you'll need to change your backend for the remaining examples. For instance:
End of explanation
"""
def make_data(n, c):
r = 4 * c * np.random.rand(n) ** 2
theta = 2 * np.pi * np.random.rand(n)
area = 200 * r**2 * np.random.rand(n)
return (r, area, theta)
def generate_data(n, c):
while True:
yield make_data(n, c)
def make_plot(radius, area, theta, axes=None):
scatter = axes.scatter(
theta, radius, c=theta, s=area, cmap=cmap)
scatter.set_alpha(0.75)
def update_plot(radius, area, theta, event):
figure = event.canvas.figure
axes = figure.gca()
make_plot(radius, area, theta, axes)
event.canvas.draw()
"""
Explanation: Keyboard Events
Let's prepare for our key event explorations by defining some support functions ahead of time:
End of explanation
"""
class Carousel:
def __init__(self, data):
(self.left, self.right) = ([], [])
self.gen = data
self.last_key = None
def start(self, axes):
make_plot(*self.next(), axes=axes)
def prev(self):
if not self.left:
return []
data = self.left.pop()
self.right.insert(0, data)
return data
def next(self):
if self.right:
data = self.right.pop(0)
else:
data = next(self.gen)
self.left.append(data)
return data
def reset(self):
self.right = self.left + self.right
self.left = []
def dispatch(self, event):
if event.key == "right":
self.handle_right(event)
elif event.key == "left":
self.handle_left(event)
elif event.key == "r":
self.handle_reset(event)
def handle_right(self, event):
print("Got right key ...")
if self.last_key == "left":
self.next()
update_plot(*self.next(), event=event)
self.last_key = event.key
def handle_left(self, event):
print("Got left key ...")
if self.last_key == "right":
self.prev()
data = self.prev()
if data:
update_plot(*data, event=event)
self.last_key = event.key
def handle_reset(self, event):
print("Got reset key ...")
self.reset()
update_plot(*self.next(), event=event)
self.last_key = event.key
"""
Explanation: Now let's make a class which will:
dispatch based upon keys pressed and
navigate through our endless data set
End of explanation
"""
class CarouselManager:
def __init__(self, density=300, multiplier=1):
(figure, self.axes) = plt.subplots(
figsize=(12,12), subplot_kw={"polar": "True"})
self.axes.hold(False)
data = generate_data(density, multiplier)
self.carousel = Carousel(data)
_ = figure.canvas.mpl_connect(
'key_press_event', self.carousel.dispatch)
def start(self):
self.carousel.start(self.axes)
plt.show()
"""
Explanation: One more class, to help keep things clean:
End of explanation
"""
cm = CarouselManager(multiplier=2)
cm.start()
"""
Explanation: Now we can take it for a spin:
End of explanation
"""
def enter_axes(event):
print('enter_axes', event.inaxes)
event.inaxes.patch.set_facecolor('yellow')
event.canvas.draw()
def leave_axes(event):
print('leave_axes', event.inaxes)
event.inaxes.patch.set_facecolor('white')
event.canvas.draw()
def enter_figure(event):
print('enter_figure', event.canvas.figure)
event.canvas.figure.patch.set_facecolor('red')
event.canvas.draw()
def leave_figure(event):
print('leave_figure', event.canvas.figure)
event.canvas.figure.patch.set_facecolor('grey')
event.canvas.draw()
class FigureAndAxesFocus:
def __init__(self):
(self.figure, (self.axes1, self.axes2)) = plt.subplots(2, 1)
title = "Hover mouse over figure or its axes to trigger events"
self.figure.suptitle(title)
self.setup_figure_events()
self.setup_axes_events()
def start(self):
plt.show()
def setup_figure_events(self):
self.figure.canvas.mpl_connect(
"figure_enter_event", enter_figure)
self.figure.canvas.mpl_connect(
"figure_leave_event", leave_figure)
def setup_axes_events(self):
self.figure.canvas.mpl_connect(
"axes_enter_event", enter_axes)
self.figure.canvas.mpl_connect(
"axes_leave_event", leave_axes)
"""
Explanation: In the GUI canvas, you should see something that looks a bit like this:
The plot should have the focus automatically. Press the right and left arrow keys to navigate through your data sets. You can return to the beginning of the data set by typing "r", the "reset" key. Play with it a bit, to convince yourself that it's really doing what we intended :-)
Axes and Figure Events
End of explanation
"""
faaf = FigureAndAxesFocus()
faaf.start()
"""
Explanation: Let's try it out:
End of explanation
"""
class DataPicker:
def __init__(self, range):
self.range = range
self.figure = self.axes = self.line = None
self.xs = np.random.rand(*self.range)
self.means = np.mean(self.xs, axis=1)
self.stddev = np.std(self.xs, axis=1)
def start(self):
self.create_main_plot()
self.figure.canvas.mpl_connect('pick_event', self.handle_pick)
plt.show()
def create_main_plot(self):
(self.figure, self.axes) = plt.subplots()
self.axes.set_title('click on point to plot time series')
(self.line,) = self.axes.plot(self.means, self.stddev, 'o', picker=10)
def create_popup_plot(self, n, event):
popup_figure = plt.figure()
for subplotnum, i in enumerate(event.ind):
popup_axes = popup_figure.add_subplot(n, 1, subplotnum + 1)
popup_axes.plot(self.xs[i])
text_data = (self.means[i], self.stddev[i])
popup_axes.text(
0.05, 0.9,
'$\mu$=%1.3f\n$\sigma$=%1.3f' % text_data,
transform=popup_axes.transAxes, va='top')
popup_axes.set_ylim(-0.5, 1.5)
popup_figure.show()
def handle_pick(self, event):
if event.artist != self.line:
return
n = len(event.ind)
if not n:
return
self.create_popup_plot(n, event)
dp = DataPicker(range=(100,1000))
dp.start()
"""
Explanation: Object Picking
The next event we will mention is a special one: the event of an object being "picked". Every Artist instance (naturally including any subclasses of Artist) has an attribute picker. Setting this attribute is what enables object picking in matplotlib.
The definition of "picked" can vary depending upon context. For instance, setting Artist.picker has the following results:
* If True, picking is enabled for the artist object and a pick_event will fire any time a mouse event occurs over the artist object in the figure.
* If a number (e.g., float or int), the value is interpreted as a "tolerance"; if the event's data (such as $x$ and $y$ values) is within the value of that tolerance, the pick_event will fire.
* If a callable, then the provided function or method returns a boolean value which determines if the pick_event is fired.
* If None, picking is disabled.
The example below is adapted from the matplotlib project's picking exercise in the Advanced User's Guide. In it, we create a data set of 100 arrays, each containing 1000 random numbers. The sample mean and standard deviation of each is determined, and a plot is made of the 100 means vs the 100 standard deviations. We then connect the line created by the plot command to the pick event, and plot the original (randomly generated) time series data corresponding to the "picked" points. If more than one point is within the tolerance of the clicked on point, we display multiple subplots for the time series which fall into our tolerance (in this case, 10 pixels).
End of explanation
"""
class TopoFlowMap:
def __init__(self, xrange=None, yrange=None, seed=1):
self.xrange = xrange or (0,1)
self.yrange = yrange or (0,1)
self.seed = seed
(self.figure, self.axes) = plt.subplots(figsize=(12,8))
self.axes.set_aspect(1)
self.colorbar = None
self.update()
def get_ranges(self, xrange, yrange):
if xrange:
self.xrange = xrange
if yrange:
self.yrange = yrange
return (xrange, yrange)
def get_colorbar_axes(self):
colorbar_axes = None
if self.colorbar:
colorbar_axes = self.colorbar.ax
colorbar_axes.clear()
return colorbar_axes
def get_filled_contours(self, coords):
return self.axes.contourf(cmap=topo.land_cmap, *coords.values())
def update_contour_lines(self, filled_contours):
contours = self.axes.contour(filled_contours, colors="black", linewidths=2)
self.axes.clabel(contours, fmt="%d", colors="#330000")
def update_water_flow(self, coords, gradient):
self.axes.streamplot(
coords.get("x")[:,0],
coords.get("y")[0,:],
gradient.get("dx"),
gradient.get("dy"),
color="0.6",
density=1,
arrowsize=2)
def update_labels(self):
self.colorbar.set_label("Altitude (m)")
self.axes.set_title("Water Flow across Land Gradients", fontsize=20)
self.axes.set_xlabel("$x$ (km)")
self.axes.set_ylabel("$y$ (km)")
def update(self, xrange=None, yrange=None):
(xrange, yrange) = self.get_ranges(xrange, yrange)
(coords, grad) = topo.make_land_map(self.xrange, self.yrange, self.seed)
self.axes.clear()
colorbar_axes = self.get_colorbar_axes()
filled_contours = self.get_filled_contours(coords)
self.update_contour_lines(filled_contours)
self.update_water_flow(coords, grad)
self.colorbar = self.figure.colorbar(filled_contours, cax=colorbar_axes)
self.update_labels()
"""
Explanation: Compound Event Handling
This section discusses the combination of multiple events or other sources of data in order to provide a more highly customized user experience, whether that be for visual plot updates, preparation of data, setting object properties, or updating widgets. This is what we will refer to as "compound events".
Navigation Toolbar
matplotlib backends come with a feature we haven't discussed yet: a widget for interactive navigation. This widget is available for all the backends (including the nbagg backend for IPython, when not in "inline" mode). In brief, the functionality associated with the buttons in the widget is as follows:
* Home: returns the figure to its originally rendered state
* Previous: return to the previous view in the plot's history
* Next: move to the next view in the plot's history
* Pan/Zoom: pan across the plot by clicking and holding the left mouse button; zoom by clicking and holding the right mouse button (behavior differs between Cartesian and Polar plots)
* Zoom-to-Rectangle: zoom in on a selected portion of the plot
* Subplot Configuration: configure the display of subplots via a pop-up widget with various parameters
* Save: save the plot, in its currently displayed state, to a file
When a toolbar action is engaged, the NavigationToolbar instance sets the current mode. For instance, when the Zoom-to-Rectangle button is clicked, the mode will be set to zoom rect. When in Pan/Zoom, the mode will be set to pan/zoom. These can be used in conjunction with the supported events to fire callbacks in response to toolbar activity.
In point of fact, the toolbar class, matplotlib.backend_bases.NavigationToolbar2 is an excellent place to look for examples of "compound events". Let's examine the Pan/Zoom button. The class tracks the following via attributes that get set:
* The connection id for a "press" event
* The connection id for a "release" event
* The connection id for a "mouse move" event (correlated to a mouse drag later)
* Whether the toolbar is "active"
* What the toolbar mode is
* What the zoom mode is
During toolbar setup, toolbar button events are connected to callbacks. When these buttons are pressed, and the callbacks are fired, old events are disconnected and new ones connected. In this way, chains of events may be set up with a particular sequence of events firing only a particular set of callbacks and in a particular order.
Specialized Events
The code in matplotlib.backend_bases.NavigationToolbar2 is a great place to go to get some ideas about how you might combine events in your own projects. You might have a workflow that requires responses to plot updates, but only if a series of other events has taken place first. You can accomplish these by connecting events to and disconnecting them from various callbacks.
Interactive Panning and Zooming
Let's go back to the toolbar for a practical example of creating a compound event.
The problem we want to address is this: when a user pans or zooms out of the range of previously computed data in a plotted area, they are presented with parts of an empty grid with no visualization. It would be nice if we could put our new-found event callback skills to use in order to solve this issue.
Let's look at an example where it would be useful to have the plot figure refreshed when it is moved: a topographic map. Geophysicist Joe Kington has provided some nice answers on Stackoverflow regarding matplotlib in the context of terrain gradients. In one particular example, he showed how to view the flow of water from random wells on a topographic map. We're going to do a couple of things with this example:
add a color map to give it the look of a physical map
give altitude in meters, and most importantly,
create a class that can update the map via a method call
Our custom color map and Joe's equations for generating a topographical map have been saved to ./lib/topo.py. We'll need to import those. Then we can define TopoFlowMap, our wrapper class that will be used to update the plot when we pan:
End of explanation
"""
plt.switch_backend('nbAgg')
"""
Explanation: Let's switch back to the IPython Notebook backend, so we have a reference image saved in the notebook:
End of explanation
"""
tfm = TopoFlowMap(xrange=(0,1.5), yrange=(0,1.5), seed=1732)
plt.show()
"""
Explanation: Let's draw the topographical map next, without any ability to update when panning:
End of explanation
"""
class TopoFlowMapManager:
def __init__(self, xrange=None, yrange=None, seed=1):
self.map = TopoFlowMap(xrange, yrange, seed)
_ = self.map.figure.canvas.mpl_connect(
'button_release_event', self.handle_pan_zoom_release)
def start(self):
plt.show()
def handle_pan_zoom_release(self, event):
if event.canvas.toolbar.mode != "pan/zoom":
return
self.map.update(event.inaxes.get_xlim(),
event.inaxes.get_ylim())
event.canvas.draw()
"""
Explanation: If you click the "pan/zoom" button on the navigation toolbar, and then click+hold on the figure, you can move it about. Note that, when you do so, nothing gets redrawn.
Since we do want to redraw, and there is no "pan event" to connect to, what are our options? Well, two come to mind:
* piggy back on the draw_event, which fires each time the canvas is moved, or
* use the button_release_event which will fire when the panning is complete
If our figure was easy to draw with simple equations, the first option would probably be fine. However, we're doing some multivariate calculus on our simulated topography; as you might have noticed, our plot does not render immediately. So let's go with the second option.
There's an added bonus, though, that will make our lives easier: NavigationToolbar2 keeps track of the mode it is in on its mode attribute. Let's use that to save some coding!
End of explanation
"""
plt.switch_backend('MacOSX')
"""
Explanation: Let's switch back to the native backend (in my case, that's MacOSX; you may need Qt5Agg, WXAgg, or GTK3Agg):
End of explanation
"""
tfmm = TopoFlowMapManager(xrange=(0,1.5), yrange=(0,1.5), seed=1732)
tfmm.start()
"""
Explanation: Run the next bit of code, and then start panning around and releasing; you should see the new data displayed after the callbacks fires off the recalculation.
End of explanation
"""
|
mattmcd/PyBayes | scripts/qfe_20220221.ipynb | apache-2.0 | import sympy as sp
from sympy.interactive import printing
printing.init_printing(use_latex=True)
from sympy.stats import Bernoulli, LogNormal, density, sample, P as Prob, E as Expected, variance
"""
Explanation: Utility Functions
Date: 2022-02-21
Author: Matt McDonnell @mattmcd
Looking at 'Quantitative Financial Economics' by Cuthbertson and Nitzsche to understand utility functions and
indifference curves.
End of explanation
"""
k1, k2 = sp.symbols('k1 k2', real=True)
p = sp.symbols('p', nonnegative=True)
Xs = sp.symbols('X')
X = Bernoulli('X', p=p, succ=k1, fail=k2)
Expected(X)
sp.Eq(p, sp.solve(Expected(X), p)[0])
"""
Explanation: Start by looking at a fair lottery as a random variable. A Bernoulli distribution can be used to represent a fair lottery
with probability $p$ of payoff $k_1$ and probability $1-p$ of payoff $k_2$. If the lottery is fair it has
an expected value of zero, which can be used to solve for $p$.
End of explanation
"""
# Doesn't work - same RV?
# FairCoin = X.subs({p: sp.S.Half, k1: 1, k2: -1})
# FairCoin2 = X.subs({p: sp.S.Half, k1: -1, k2: 1})
# Works
FairCoin = Bernoulli('X1', p=sp.S.Half, succ=1, fail=-1)
FairCoin2 = Bernoulli('X2', p=sp.S.Half, succ=1, fail=-1)
sample(FairCoin + FairCoin2, size=(10,))
Prob(X.subs({p: 1/2, k1:1, k2:-1}) > 0)
"""
Explanation: Playing around with random variables
End of explanation
"""
# Expected value of FairCoin toss with payoff $16 and $4.
# No risk aversion -> would pay up to this amount for the bet
Payoff = X.subs({p: sp.S.Half, k1: 16, k2: 4})
Expected(Payoff)
"""
Explanation: Worked example and definition of Utility
Below we follow the example from p14-17, of a bet on a fair coin flip, $p=1/2$,
that costs \$10 to enter, and pays off \$16 for a win, \$4 for a loss.
End of explanation
"""
# Expected utility of the FairCoin toss with payoff $16 and $4 for sqrt utility
# i.e. U(W) = sqrt(W)
d = sp.Dummy()
U = sp.Lambda(d, sp.sqrt(d))
Expected(U(Payoff))
"""
Explanation: Now consider a utility function of the form $U(W) = \sqrt{W}$, where $W$ is the wealth of the player.
End of explanation
"""
# Utility of keeping $10 rather than paying $10 for bet
U(10).evalf()
"""
Explanation: We see that the expected utility of the bet is less than the utility of the original \$10:
End of explanation
"""
# Calculate the risk premium: what would you pay not to have to take the bet?
initial_wealth = 10
bet_cost = 10
sp.solve(U(initial_wealth-d) - Expected(U(initial_wealth + Payoff - bet_cost)), d)[0].evalf()
"""
Explanation: Calculate the risk premium: what would you pay not to have to take the bet?
End of explanation
"""
# General form
W, c, pis = sp.symbols('W, c, pi', real=True)
# Solve for cost
sp.Eq(c, sp.solve(Expected(X - c), c)[0].collect([k1, k2]))
sp.Eq(pis, sp.solve(U(W-pis) - Expected(U(W + X - Expected(X))), pis)[0].simplify().collect(p))
"""
Explanation: We can go back to the general form for the lottery and calculate the risk premium $\pi$ as a function of the other
parameters, assuming that the cost to enter the bet is given by the fair value of the lottery.
(Sidenote: I'm not a fan of using $\pi$ to represent risk premium, seems like it's asking for trouble if $\pi$
the number crops up in expressions.)
End of explanation
"""
x = sp.stats.rv.RandomSymbol('x')
Us = sp.symbols('U', cls=sp.Function)
sp.Eq(Us(W - pis), Expected(Us(W + x)))
lhs = sp.series(Us(W - pis), pis, n=2).removeO().simplify()
lhs
sigma_sq_x = sp.symbols('sigma_x^2', positive=True)
rhs = Expected(
sp.series(Us(W + x), x, n=3).removeO()
).collect(Us(W)).subs(Expected(x), 0).subs({Expected(x**2): sigma_sq_x})
rhs
pi = sp.solve(lhs - rhs, pis)[0]
sp.Eq(pis, pi)
"""
Explanation: Relate risk premium to utility function curvature.
Assume general RV for (payoff - cost), assume this has $E[x] == 0$ i.e. cost is expected payoff.
This also means var(x) = $E[x^2]$.
End of explanation
"""
Ras, Rrs = sp.symbols('R_A R_R', positive=True)
Ra = pi/(sigma_sq_x/2)
Rr = W*Ras
Rau = lambda U: Ra.subs(Us(W), U).simplify().powsimp()
sp.Eq(Ras, Ra), sp.Eq(Rrs, Rr)
"""
Explanation: We get to the form for risk premium given in the text, using SymPy to do the required manipulations.
Risk Aversion Coefficients
We can extract the term containing $U(W)$ into the coefficient for absolute risk aversion $R_A$ and from this the
coefficient for relative risk aversion $R_R$
End of explanation
"""
g, a, b, c = sp.symbols('gamma a b c', positive=True)
U_crras, U_caras, U_qs = sp.symbols('U_{CRRA} U_{CARA} U_Q', cls=sp.Function)
U_crra = W**(1-g)/(1-g)
U_cara = a - b*sp.exp(-c*W)
U_q = W - b/2*W**2
"""
Explanation: Then we can apply these definitions to some utility functions of interest: constant relative risk aversion,
constant absolute risk aversion, quadratic
End of explanation
"""
sp.Eq(U_crras(W), U_crra)
sp.Eq(Ras, Rau(U_crra)), sp.Eq(Rrs, Rr.subs(Ras, Rau(U_crra)))
"""
Explanation: Constant Relative Risk Aversion
End of explanation
"""
sp.Eq(U_caras(W), U_cara)
sp.Eq(Ras, Rau(U_cara)), sp.Eq(Rrs, Rr.subs(Ras, Rau(U_cara)))
"""
Explanation: Constant Absolute Risk Aversion
End of explanation
"""
sp.Eq(U_qs(W), U_q)
sp.Eq(Ras, Rau(U_q)), sp.Eq(Rrs, Rr.subs(Ras, Rau(U_q)))
"""
Explanation: Quadratic Utility
End of explanation
"""
|
ENCODE-DCC/pyencoded-tools | jupyter_notebooks/keenan/pyencoded_tools_skills_lab_2017.ipynb | mit | !encode explore
"""
Explanation: pyencoded-tools
https://github.com/ENCODE-DCC/pyencoded-tools
Skills Lab 2017
Purpose
Tools for programmatically interacting with metadata on the portal
Metadata includes
Origin of raw data (donor, experimental conditions)
Processing steps (align reads to assembly)
Relation between files (this BAM derives from that FASTQ)
How is our metadata represented on the portal?
Objects!
End of explanation
"""
object_count()
"""
Explanation: (All of our objects from https://www.encodeproject.org/profiles/)
Includes not just a model of files but a model of everything associated with the generation and processing of genetic data (e.g. experiment, quality metric)
Top objects by count
End of explanation
"""
n = 4
"""
Explanation: What is an object?
Four fundamental building blocks
Number
End of explanation
"""
s = 'string!'
"""
Explanation: String (quotation marks)
End of explanation
"""
# This is a list of numbers.
x = [4, 6, 8]
# This is a list of strings.
x = ['string!', 'string!', 'string!']
"""
Explanation: List (square brackets)
End of explanation
"""
d = {'use me': 'to find me'}
# Use key to find value.
d['use me']
"""
Explanation: Dictionary (curly braces)
End of explanation
"""
o = {'first_level': {'second_level': {'third_level': 'nested_value'}}}
print(json.dumps(o, indent=4))
"""
Explanation: JSON is a complex mixture of these four things
Must have curly braces on the outside and double quotation marks on all strings
Three Examples
1. Nested dictionaries
End of explanation
"""
o = {'list_of_objects': [{'thing': 'value_of_thing1'},
{'thing': 'value_of_thing2'},
{'thing': ['multiple', 'values', 'of', 'thing3',
['including', 'this', 'sublist']]}]}
print(json.dumps(o, indent=4))
"""
Explanation: 2. Lists of dictionaries
End of explanation
"""
!encode get ENCGM330VHI
"""
Explanation: 3. Actual ENCODE object (genetic modification)
End of explanation
"""
keypairs = !cat keypairs_template.json
print(json.dumps(json.loads(keypairs[0]), indent=4, sort_keys=True))
"""
Explanation: Scripts in pyencoded-tools create, update, and read these objects
Three tools in particular for C.R.U.(D.):
1. #### Create objects - ENCODE_import_data.py
2. #### Read objects - ENCODE_get_fields.py
3. #### Update objects - ENCODE_patch_set.py
4. #### (Delete objects) - Use patch_set to update status field to deleted
Setting up default keypairs
Search for user on user page (https://www.encodeproject.org/users/)
Click add access key (returns key and secret associated with your account)
Save keypairs.json file in home directory (formatted as below) after filling in your server-specific keypairs
End of explanation
"""
!encode explore --field GeneticModification.properties
"""
Explanation: Creating an object
Example for genetic modification
Look at all possible fields in properties field of schema
End of explanation
"""
!encode explore --field GeneticModification.required
"""
Explanation: Look at minimum necessary fields in required field of schema
End of explanation
"""
!encode explore --field GeneticModification.properties.category.enum
"""
Explanation: Lab and award point to objects while category, purpose, and method have enum fields that list allowed values
Look at allowed values in category field
End of explanation
"""
!encode explore --field GeneticModification.properties.purpose.enum
"""
Explanation: Look at allowed values in purpose field
End of explanation
"""
!encode explore --field GeneticModification.properties.method.enum
"""
Explanation: Look at allowed values in method field
End of explanation
"""
!python ENCODE_import_data.py prez/create_object.xlsx --key test --update
"""
Explanation: Build Excel file where sheet name is the name of the object and columns are field names
Specify
path to Excel file
--key test (create object on test server)
--update (only pretends otherwise)
Outputs
Name of server
JSON it will try to POST
Result of attempt (success/failure)
End of explanation
"""
!encode explore --field GeneticModification.dependencies
"""
Explanation: Didn't work. Why?
Required fields have dependencies so object fails schema validation
End of explanation
"""
!python ENCODE_import_data.py prez/create_object1.xlsx --key test --update
"""
Explanation: Disentangling multiple dependencies can be complicated but comments in the schema usually help
<img src="prez/dependency1.png" alt="dependency" style="width: 700px;"/>
Means that when you specify tagging as the purpose and insertion as the category you must also specify introduced_tags and modified_site_by_target_id or modified_site_nonspecific fields
<img src="prez/dependency2.png" alt="dependency" style="width: 700px;"/>
Means that when you specify bombardment as the method you must also specify reagents field
Adding the new fields
Running again
End of explanation
"""
!encode explore --field GeneticModification.properties.lab.type
"""
Explanation: Works! Returns accession of newly created object: TSTGM949276
What about the column names?
If it is type string don't add anything
End of explanation
"""
!encode explore --field GeneticModification.properties.aliases.type
"""
Explanation: If it is type array add :list or :array to end of field name (separate multiple items with comma)
End of explanation
"""
!python ENCODE_import_data.py prez/reagents_list.xlsx --key test
"""
Explanation: If it is an array that takes multiple objects must number columns instead (use dash, start at one)
Builds up list of objects
End of explanation
"""
!python ENCODE_import_data.py --help
"""
Explanation: Use --help if you forget
End of explanation
"""
!python ENCODE_import_data.py prez/create_object1.xlsx --key bad_test --update
"""
Explanation: Other common error output
Using invalid keypairs
End of explanation
"""
!python ENCODE_import_data.py prez/create_object_err_1.xlsx --key test --update
"""
Explanation: Response code = {"description": "This server could not verify that you are authorized to access the document you requested."}
Misspelling field name (catgory instead of category)
End of explanation
"""
!python ENCODE_import_data.py prez/create_object_err_2.xlsx --key test --update
"""
Explanation: "errors": [{"description": "Additional properties are not allowed ('catgory' was unexpected)"}]
Using invalid value (blending instead of bombardment)
End of explanation
"""
!python ENCODE_patch_set.py --key test --accession TSTGM949276 --field award --data /awards/UM1HG009411/ --update
"""
Explanation: "errors": [{"description": "'blending' is not one of ['bombardment', 'CRISPR', 'microinjection', 'mutagen treatment', 'RNAi', 'site-specific recombination', 'stable transfection', 'TALEN', 'transduction', 'transient transfection']"}]
Editing the object we just created
Using ENCODE_patch_set.py
Specify
--key (prod/test)
--update (really make the changes)
--overwrite (optional, overwrites lists instead of appending values)
--remove (optional, removes field instead of editing value)
Option 1: Editing one object
Also specify
--accession (object to edit)
--field (field to update, not needed if you provide infile)
--data (updated value, not needed if you provide infile)
Change award value
End of explanation
"""
!python ENCODE_get_fields.py --key test --infile TSTGM949276 --field award,status
"""
Explanation: Option 2: Editing multiple objects
Also specify
--infile (path to file)
Infile is a TSV where column names are fields to edit (must always have accession/@id/uuid column to identify objects) and rows are new values
Using UUID
List of objects
Getting values from our object
Using ENCODE_get_fields.py
Specify
--key (prod/test)
--infile (path to file containing list of accessions or single accession)
--field (field of interest or multiple fields separated by commas, no spaces)
--allfields (optional flag, returns values of all fields in object)
Get award and status
End of explanation
"""
data = !python ENCODE_get_fields.py --key test --infile TSTGM949276 --allfields
pd.read_table(StringIO('\n'.join(data))).dropna(axis=1)
"""
Explanation: Get all fields
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import numpy as np
import json
import os
import pandas as pd
import requests
import seaborn as sns
from io import StringIO
from jupyterthemes import jtplot
jtplot.style()
count_by_object = [
('Library', 25297),
('Experiment', 14348),
('Document', 9894),
('QualityMetric', 124128),
('File', 353847),
('AnalysisStepRun', 113789),
('SamtoolsFlagstatsQualityMetric', 72999),
('Biosample', 11453),
('Replicate', 25936),
('ChipSeqFilterQualityMetric', 19074)
]
def object_count():
df = pd.DataFrame(count_by_object)
df = df.rename(columns={0: 'name', 1: 'value'})
sns.barplot(y='name', x='value', data=df)
ax = plt.gca()  ## sns.plt was removed in newer seaborn versions; use matplotlib's pyplot directly
ax.set(ylabel='', xlabel='')
sns.despine(bottom=True)
"""
Explanation: Review
pyencoded-tools make interacting with metadata much faster and less error-prone
Use
ENCODE_import_data.py to create objects
ENCODE_patch_set.py to edit objects
ENCODE_get_fields.py to read objects
Other scripts in pyencoded-tools variation of the above
Releasinator specialized ENCODE_patch_set.py that changes statuses of hierarchy of objects
Reporter specialized ENCODE_get_fields.py that traverses associated objects (e.g. experiments to files)
Other scripts specialized external audits (get data, compare data)
Aditi's pyencoded-tools cheat sheet
Future
Ideas for better tools?
Currently no consistency between tools
Impossible to do certain things (e.g. patching nested fields)
Integration of separate scripts into one command-line interface
Worth the time?
Who would primarily use it?
What is the most complicated use case?
End of explanation
"""
|
EducationalTestingService/rsmtool | rsmtool/notebooks/evaluation.ipynb | apache-2.0 | markdown_str = ("The tables in this section show the standard association metrics between "
"*observed* human scores and different types of machine scores. "
"These results are computed on the evaluation set. `raw_trim` scores "
"are truncated to [{}, {}]. `raw_trim_round` scores are computed by first truncating "
"and then rounding the predicted score. Scaled scores are computed by re-scaling "
"the predicted scores using mean and standard deviation of human scores as observed "
"on the training data and mean and standard deviation of machine scores as predicted "
"for the training set.".format(min_score, max_score))
display(Markdown(markdown_str))
"""
Explanation: Evaluation results
Overall association statistics
End of explanation
"""
raw_or_scaled = "scaled" if use_scaled_predictions else "raw"
eval_file = join(output_dir, '{}_eval.{}'.format(experiment_id, file_format))
df_eval = DataReader.read_from_file(eval_file, index_col=0)
distribution_columns = ['N', 'h_mean', 'sys_mean', 'h_sd', 'sys_sd', 'h_min', 'sys_min', 'h_max', 'sys_max', 'SMD']
association_columns = ['N'] + [column for column in df_eval.columns if not column in distribution_columns]
df_distribution = df_eval[distribution_columns]
df_association = df_eval[association_columns]
pd.options.display.width=10
formatter = partial(color_highlighter, low=-0.15, high=0.15)
HTML('<span style="font-size:95%">'+ df_distribution.to_html(classes=['sortable'],
escape=False,
formatters={'SMD': formatter},
float_format=float_format_func) + '</span>')
"""
Explanation: Descriptive holistic score statistics
The table shows distributional properties of human and system scores. SMD values lower than -0.15 or higher than 0.15 are <span class="highlight_color">highlighted</span>.
Please note that for raw scores, SMD values are likely to be affected by possible differences in scale.
End of explanation
"""
markdown_str = ['The table shows the standard association metrics between human scores and machine scores.']
if continuous_human_score:
markdown_str.append("Note that for computation of `kappa` both human and machine scores are rounded.")
else:
markdown_str.append("Note that for computation of `kappa` all machine scores are rounded.")
Markdown('\n'.join(markdown_str))
pd.options.display.width=10
HTML('<span style="font-size:95%">'+ df_association.to_html(classes=['sortable'],
escape=False,
float_format=float_format_func) + '</span>')
"""
Explanation: Association statistics
End of explanation
"""
markdown_str = ["Confusion matrix using {}, trimmed, and rounded scores and human scores (rows=system, columns=human).".format(raw_or_scaled)]
if continuous_human_score:
markdown_str.append("Note: Human scores have been rounded to the nearest integer.")
Markdown('\n'.join(markdown_str))
confmat_file = join(output_dir, '{}_confMatrix.{}'.format(experiment_id, file_format))
df_confmat = DataReader.read_from_file(confmat_file, index_col=0)
df_confmat
"""
Explanation: Confusion matrix
End of explanation
"""
markdown_strs = ["The histogram and the table below show the distribution of "
"human scores and {}, trimmed, and rounded machine scores "
"(as % of all responses).".format(raw_or_scaled)]
markdown_strs.append("Differences in the table between human and machine distributions "
"larger than 5 percentage points are <span class='highlight_color'>highlighted</span>.")
if continuous_human_score:
markdown_strs.append("Note: Human scores have been rounded to the nearest integer.")
display(Markdown('\n'.join(markdown_strs)))
scoredist_file = join(output_dir, '{}_score_dist.{}'.format(experiment_id, file_format))
df_scoredist = DataReader.read_from_file(scoredist_file, index_col=0)
df_scoredist_melted = pd.melt(df_scoredist, id_vars=['score'])
df_scoredist_melted = df_scoredist_melted[df_scoredist_melted['variable'] != 'difference']
# get the colors for the plot
colors = sns.color_palette("Greys", 2)
with sns.axes_style('whitegrid'):
# make a barplot without a legend since we will
# add one manually later
p = sns.catplot(x="score", y="value", hue="variable", kind="bar",
palette=colors, data=df_scoredist_melted,
height=3, aspect=2, legend=False)
p.set_axis_labels('score', '% of responses')
# add a legend with the right colors
axis = p.axes[0][0]
legend = axis.legend(labels=('Human', 'Machine'), title='', frameon=True, fancybox=True)
legend.legendHandles[0].set_color(colors[0])
legend.legendHandles[1].set_color(colors[1])
imgfile = join(figure_dir, '{}_score_dist.svg'.format(experiment_id))
plt.savefig(imgfile)
if use_thumbnails:
show_thumbnail(imgfile, next(id_generator))
else:
plt.show()
formatter = partial(color_highlighter, low=0, high=5, absolute=True)
df_html = df_scoredist.to_html(classes=['sortable'], index=False,
escape=False, formatters={'difference': formatter})
display(HTML(df_html))
"""
Explanation: Distribution of human and machine scores
End of explanation
"""
|
NekuSakuraba/my_capstone_research | subjects/em/multivariate t - draft03 - EM Unknown degrees of freedom.ipynb | mit | ## Imports needed by the functions below; the helpers multivariate_t_rvs()
## (t-distribution sampler) and multivariate_t() (pdf object) are assumed
## to be defined elsewhere.
import numpy as np
import matplotlib.pyplot as plt
from numpy import log
from numpy.linalg import inv
from scipy.special import gamma, digamma

def log_of_psi():
n = len(X)
u_ = [(_-mu).T.dot(inv(cov).dot(_-mu)) for _ in X]
return -.5*n*p*log(2 * np.pi) -.5*n*log(np.linalg.det(cov)) - .5 * sum(u_)
def az():
n = len(X)
u_ = [(_-mu).T.dot(inv(cov).dot(_-mu)) for _ in X]
return -n*log(gamma(df/2.)) + .5*n*df*log(df/2.) + .5*df*(log(u_) - u_).sum() - log(u_).sum()
"""
Explanation: Source
The EM Algorithm and Extensions, Pg. 60.
$$
log L_c(\Psi) = log L_{1c}(\Psi) + a(z) \
$$
where
$$
log L_{1c}(\Psi) = -\frac{np}{2}\ log(2 \pi) - \frac{n}{2} log | \Sigma | -\frac{1}{2} \sum_{j=1}^n u_j (w_j - \mu)^T \Sigma^{-1}(w_j - \mu)
$$
and
$$
a(z) = -n log \Gamma \big(\frac{v}{2} \big) + \frac{nv}{2}log \big( \frac{v}{2} \big) +\frac{v}{2} \sum_{j=1}^n \big[log(u_j) - u_j)\big] - \sum_{j=1}^n log u_j
$$
End of explanation
"""
def log_of_psi():
n = len(X)
u_ = [(_-mu).T.dot(inv(cov).dot(_-mu)) for _ in X]
return -.5*n*log(np.linalg.det(cov)) - .5 * sum(u_)
def az():
n = len(X)
u_ = [(_-mu).T.dot(inv(cov).dot(_-mu)) for _ in X]
return -n*log(gamma(df/2.)) + .5*n*df*log(df/2.) + .5*df*(log(u_) - u_).sum()
"""
Explanation: Modified version
End of explanation
"""
mu = [0,0]
cov = [[1,0], [0,1]]
df = 10
size = 300
X = multivariate_t_rvs(m=mu, S=cov, df=df, n=size)
t0 = multivariate_t(mu, cov, df)
x, y = np.mgrid[-4:4:.1, -4:4:.1]
xy = np.column_stack([x.ravel(), y.ravel()])
z0 = []
for _ in xy:
z0.append(t0.pdf(_))
z0 = np.reshape(z0, x.shape)
plt.scatter(X.T[0], X.T[1])
plt.contour(x, y, z0)
"""
Explanation: Generating sample
End of explanation
"""
def find_df(v):
return -digamma(v/2.) + log(v/2.) + (log(tau) - tau).sum()/len(tau) + 1 + (digamma((v+p)/2.)-log((v+p)/2.))
# My Guesses
mu = [1, 2]
cov = [[1.5, 0.],[0, 1.5]]
df = 4
p = 2
psi_likelihood = []
df_likelihood = []
for z in range(200):
# E-Step 1
u = []
for delta in X-mu:
u.append(delta.dot(inv(cov)).dot(delta))
u = np.array(u)
tau = (df + p)/(df + u); tau = tau.reshape(-1, 1)
tau_sum = tau.sum()
# CM-Step 1
mu_ = (tau * X).sum(axis=0) / tau_sum
cov_ = np.array([[0,0], [0,0]], dtype=np.float32)
for idx, delta in enumerate(X - mu_):
delta = delta.reshape(-1, 1)
cov_ += (tau[idx]*delta).dot(delta.T)
cov_ /= len(tau)
# E-Step 2
u = []
for delta in X-mu_:
u.append(delta.dot(inv(cov_)).dot(delta))
u = np.array(u)
tau = (df + p)/(df + u); tau = tau.reshape(-1, 1)
tau_sum = tau.sum()
# CM-Step 2
v_ = 0
my_range = np.arange(df-3, df+3, .001)
for _ in my_range:
solution = find_df(_)
if solution < 0+1e-4 and solution > 0-1e-4:
#if z % 5 == 0:
# print '#%d - %.6f' % (z, _)
v_ = _
break
mu = mu_
cov = cov_
df = v_
psi_likelihood.append(log_of_psi())
df_likelihood.append(az())
if len(psi_likelihood) > 1:
if psi_likelihood[-1] - psi_likelihood[-2] <= 1e-10:
break
"""
if len(df_likelihood) == 0:
df = v_
elif len(df_likelihood) > 1:
if df_likelihood[-2] < df_likelihood[-1]:
df = v_
"""
print(mu)
print(cov)
print(df)
plt.figure(figsize=(8, 10))
plt.subplot(311)
plt.plot(range(len(psi_likelihood)), psi_likelihood)
plt.subplot(312)
plt.plot(range(len(df_likelihood)), df_likelihood)
t1 = multivariate_t(mu, cov, df)
x, y = np.mgrid[-4:4:.1, -4:4:.1]
xy = np.column_stack([x.ravel(), y.ravel()])
z1 = []
for _ in xy:
z1.append(t1.pdf(_))
z1 = np.reshape(z1, x.shape)
plt.figure(figsize=(8, 10))
plt.subplot(211)
plt.title('After Estimating')
plt.scatter(X.T[0], X.T[1])
plt.contour(x, y, z1)
plt.subplot(212)
plt.title('Actual Parameters')
plt.scatter(X.T[0], X.T[1])
plt.contour(x, y, z0)
"""
Explanation: Estimating Parameters
End of explanation
"""
|
abigailStev/power_spectra | scripts/TimmerKoenig.ipynb | mit | import numpy as np
from scipy import fftpack
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
import matplotlib.font_manager as font_manager
import itertools
## Shows the plots inline, instead of in a separate window:
%matplotlib inline
## Sets the font size for plotting
font_prop = font_manager.FontProperties(size=18)
def power_of_two(num):
## Checks if an input is a power of 2 (1 <= num < 2147483648).
n = int(num)
x = 2
assert n > 0, "ERROR: Number must be positive."
if n == 1:
return True
else:
while x < n and x < 2147483648:
x *= 2
return n == x
"""
Explanation: This code generates a Fourier transform of a specified shape as outlined in Timmer and Koenig 1995 (A&A vol 300 p 707-710), and plots it and its corresponding time series. The user can specify a mean count rate for the light curve, fractional rms^2 variance for the power spectrum, and 'Poissonify' the light curve.
by Abigail Stevens, A.L.Stevens at uva.nl
End of explanation
"""
def make_pulsation(n_bins, dt, freq, amp, mean, phase):
binning = 10
period = 1.0 / freq # in seconds
bins_per_period = period / dt
tiny_bins = np.arange(0, n_bins, 1.0/binning)
smooth_sine = amp * np.sin(2.0 * np.pi * tiny_bins / bins_per_period + phase) + mean
time_series = np.mean(np.array_split(smooth_sine, n_bins), axis=1)
return time_series
"""
Explanation: Define a function to make a pulsation light curve
End of explanation
"""
def geometric_rebinning(freq, power, rebin_const):
"""
geometric_rebinning
Re-bins the power spectrum in frequency space by some re-binning constant (rebin_const > 1).
"""
## Initializing variables
rb_power = np.asarray([]) # List of re-binned power
rb_freq = np.asarray([]) # List of re-binned frequencies
real_index = 1.0 # The unrounded next index in power
int_index = 1 # The int of real_index, added to current_m every iteration
current_m = 1 # Current index in power
prev_m = 0 # Previous index m
bin_power = 0.0 # The power of the current re-binned bin
bin_freq = 0.0 # The frequency of the current re-binned bin
bin_range = 0.0 # The range of un-binned bins covered by this re-binned bin
freq_min = np.asarray([])
freq_max = np.asarray([])
## Looping through the length of the array power, geometric bin by geometric bin,
## to compute the average power and frequency of that geometric bin.
## Equations for frequency, power, and error are from Adam Ingram's PhD thesis.
while current_m < len(power):
# while current_m < 100: # used for debugging
## Initializing clean variables for each iteration of the while-loop
bin_power = 0.0 # the averaged power at each index of rb_power
bin_range = 0.0
bin_freq = 0.0
## Determining the range of indices this specific geometric bin covers
bin_range = np.absolute(current_m - prev_m)
## Want mean of data points contained within one geometric bin
bin_power = np.mean(power[prev_m:current_m])
## Computing the mean frequency of a geometric bin
bin_freq = np.mean(freq[prev_m:current_m])
## Appending values to arrays
rb_power = np.append(rb_power, bin_power)
rb_freq = np.append(rb_freq, bin_freq)
freq_min = np.append(freq_min, freq[prev_m])
freq_max = np.append(freq_max, freq[current_m])
## Incrementing for the next iteration of the loop
## Since the for-loop goes from prev_m to current_m-1 (since that's how
## the range function and array slicing works) it's ok that we set
## prev_m = current_m here for the next round. This will not cause any
## double-counting bins or skipping bins.
prev_m = current_m
real_index *= rebin_const
int_index = int(round(real_index))
current_m += int_index
bin_range = None
bin_freq = None
bin_power = None
## End of while-loop
return rb_freq, rb_power, freq_min, freq_max
## End of function 'geometric_rebinning'
"""
Explanation: Define a function to re-bin a power spectrum in frequency by a specified constant > 1.
End of explanation
"""
def powerlaw(w, beta):
## Gives a powerlaw of (1/w)^beta
pl = np.zeros(len(w))
pl[1:] = w[1:] ** (beta)
pl[0] = np.inf
return pl
def lorentzian(w, w_0, gamma):
## Gives a Lorentzian centered on w_0 with a FWHM of gamma
numerator = gamma / (np.pi * 2.0)
denominator = (w - w_0) ** 2 + (1.0/2.0 * gamma) ** 2
L = numerator / denominator
return L
def gaussian(w, mean, std_dev):
## Gives a Gaussian with a mean of mean and a standard deviation of std_dev
## FWHM = 2 * np.sqrt(2 * np.log(2))*std_dev
exp_numerator = -(w - mean)**2
exp_denominator = 2 * std_dev**2
G = np.exp(exp_numerator / exp_denominator)
return G
def powerlaw_expdecay(w, beta, alpha):
pl_exp = np.where(w != 0, (1.0 / w) ** beta * np.exp(-alpha * w), np.inf)
return pl_exp
def broken_powerlaw(w, w_b, beta_1, beta_2):
c = w_b ** (-beta_1 + beta_2) ## scale factor so that they're equal at the break frequency
pl_1 = w[np.where(w <= w_b)] ** (-beta_1)
pl_2 = c * w[np.where(w > w_b)] ** (-beta_2)
pl = np.append(pl_1, pl_2)
return pl
"""
Explanation: Define functions to make different power spectral shapes: power law, Lorentzian, Gaussian, power law with exponential decay, broken power law
End of explanation
"""
def inv_frac_rms2_norm(amplitudes, dt, n_bins, mean_rate):
# rms2_power = 2.0 * power * dt / float(n_bins) / (mean_rate ** 2)
inv_rms2 = amplitudes * n_bins * (mean_rate ** 2) / 2.0 / dt
return inv_rms2
def inv_leahy_norm(amplitudes, dt, n_bins, mean_rate):
# leahy_power = 2.0 * power * dt / float(n_bins) / mean_rate
inv_leahy = amplitudes * n_bins * mean_rate / 2.0 / dt
return inv_leahy
"""
Explanation: Defining functions for applying an inverse fractional rms^2 and inverse Leahy normalization to the noise psd shape
End of explanation
"""
n_bins = 8192
# n_bins = 64
dt = 64.0 / 8192.0
print("dt = %.15f" % dt)
df = 1.0 / dt / n_bins
# print df
assert power_of_two(n_bins), "ERROR: N_bins must be a power of 2 and an even integer."
## Making an array of Fourier frequencies
frequencies = np.arange(float(-n_bins/2)+1, float(n_bins/2)+1) * df
pos_freq = frequencies[np.where(frequencies >= 0)]
## positive should have 2 more than negative, because of the 0 freq and the nyquist freq
neg_freq = frequencies[np.where(frequencies < 0)]
nyquist = pos_freq[-1]
"""
Explanation: Define some basics: number of bins per segment, timestep between bins, and making the fourier frequencies.
End of explanation
"""
noise_psd_variance = 0.007 ## in fractional rms^2 units
# noise_mean_rate = 1000.0 ## in count rate units
noise_mean_rate = 500
beta = -1.0 ## Slope of power law (include negative here if needed)
## For a Lorentzian QPO
# w_0 = 5.46710256 ## Centroid frequency of QPO
# fwhm = 0.80653875 ## FWHM of QPO
## For a Gaussian QPO
w_0 = 5.4
# g_stddev = 0.473032436922
# fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * g_stddev
fwhm = 0.9
pl_scale = 0.08 ## relative scale factor
qpo_scale = 1.0 ## relative scale factor
Q = w_0 / fwhm ## For QPOs, Q factor is w_0 / gamma
print("Q =", Q)
# noise_psd_shape = noise_psd_variance * powerlaw(pos_freq, beta)
noise_psd_shape = noise_psd_variance * (qpo_scale * lorentzian(pos_freq, w_0, fwhm) + pl_scale * powerlaw(pos_freq, beta))
# noise_psd_shape = noise_psd_variance * (qpo_scale * gaussian(pos_freq, w_0, g_stddev) + pl_scale * powerlaw(pos_freq, beta))
# noise_psd_shape = lorentzian(pos_freq, w_1, gamma_1) + lorentzian(pos_freq, w_2, gamma_2) + powerlaw(pos_freq, beta)
# noise_psd_shape = lorentzian(pos_freq, w_1, gamma_1) + lorentzian(pos_freq, w_2, gamma_2)
noise_psd_shape = inv_frac_rms2_norm(noise_psd_shape, dt, n_bins, noise_mean_rate)
"""
Explanation: Defining the psd shape of the noise
End of explanation
"""
rand_r = np.random.standard_normal(len(pos_freq))
rand_i = np.random.standard_normal(len(pos_freq)-1)
rand_i = np.append(rand_i, 0.0) # because the nyquist frequency should only have a real value
## Creating the real and imaginary values from the lists of random numbers and the frequencies
r_values = rand_r * np.sqrt(0.5 * noise_psd_shape)
i_values = rand_i * np.sqrt(0.5 * noise_psd_shape)
r_values[np.where(pos_freq == 0)] = 0
i_values[np.where(pos_freq == 0)] = 0
## Combining to make the Fourier transform
FT_pos = r_values + i_values*1j
FT_neg = np.conj(FT_pos[1:-1])
FT_neg = FT_neg[::-1] ## Need to flip direction of the negative frequency FT values so that they match up correctly
FT = np.append(FT_pos, FT_neg)
## Making the light curve from the Fourier transform and Poissonifying it
noise_lc = fftpack.ifft(FT).real + noise_mean_rate
noise_lc[np.where(noise_lc < 0)] = 0.0
# noise_lc_poiss = np.random.poisson(noise_lc * dt)
noise_lc_poiss = np.random.poisson(noise_lc * dt) / dt
## Making the power spectrum from the Poissonified light curve
real_mean = np.mean(noise_lc_poiss)
noise_power = np.absolute(fftpack.fft(noise_lc_poiss - real_mean))**2
noise_power = noise_power[0:len(pos_freq)]
## Applying the fractional rms^2 normalization to the power spectrum
noise_power *= 2.0 * dt / float(n_bins) / (real_mean **2)
noise_level = 2.0 / real_mean
noise_power -= noise_level
## Re-binning the power spectrum in frequency
rb_freq, rb_power, freq_min, freq_max = geometric_rebinning(pos_freq, noise_power, 1.01)
"""
Explanation: Generating a noise process with the specific shape noise_psd_shape
End of explanation
"""
super_title_noise="Noise process"
npn_noise = rb_power * rb_freq
time_bins = np.arange(n_bins)
time = time_bins * dt
# fig, ax1 = plt.subplots(1, 1, figsize=(10,5))
fig, (ax1, ax2) = plt.subplots(2,1, figsize=(10,10))
# fig.suptitle(super_title_noise, fontsize=20, y=1.03)
ax1.plot(time, noise_lc_poiss, linewidth=2.0, color='purple')
ax1.set_xlabel('Time (s)', fontproperties=font_prop)
ax1.set_ylabel('Count rate (cts/s)', fontproperties=font_prop)
# ax1.set_xlim(0,0.3)
ax1.set_xlim(np.min(time), np.max(time))
ax1.set_xlim(0,1)
# ax1.set_ylim(0,450)
ax1.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax1.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
# ax1.set_title('Light curve', fontproperties=font_prop)
ax1.set_title('Timmer & Koenig Simulated Light Curve Segment of QPO', fontproperties=font_prop)
# ax2.plot(pos_freq, pulse_power * pos_freq, linewidth=2.0)
ax2.plot(rb_freq, npn_noise, linewidth=2.0)
ax2.set_xscale('log')
ax2.set_yscale('log')
ax2.set_xlabel(r'Frequency (Hz)', fontproperties=font_prop)
ax2.set_ylabel(r'Power $\times$ frequency', fontproperties=font_prop)
ax2.set_xlim(0, nyquist)
# ax2.set_xlim(0,200)
# ax2.set_ylim(1e-5, 1e-1)
ax2.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax2.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax2.set_title('Power density spectrum', fontproperties=font_prop)
fig.tight_layout(pad=1.0, h_pad=2.0)
plt.savefig("FAKEGX339-BQPO_QPO_lightcurve.eps")
plt.show()
# print np.var(noise_power, ddof=1)
print "Q =", Q
print "w_0 =", w_0
print "fwhm =", fwhm
print "beta =", beta
"""
Explanation: Plotting the noise process: light curve and power spectrum.
End of explanation
"""
pulse_mean = 1.0 # fractional
pulse_amp = 0.05 # fractional
freq = 40
assert freq < nyquist, "ERROR: Pulsation frequency must be less than the Nyquist frequency."
period = 1.0 / freq # in seconds
bins_per_period = period / dt
pulse_lc = make_pulsation(n_bins, dt, freq, pulse_amp, pulse_mean, 0.0)
pulse_unnorm_power = np.absolute(fftpack.fft(pulse_lc)) ** 2
pulse_unnorm_power = pulse_unnorm_power[0:len(pos_freq)]
pulse_power = 2.0 * pulse_unnorm_power * dt / float(n_bins) / (pulse_mean ** 2)
"""
Explanation: Generating a pulsation using the above function 'make_pulsation'
End of explanation
"""
super_title_pulse = "Periodic pulsation"
npn_pulse = pulse_power[1:] * pos_freq[1:]
# npn_pulse = pulse_power[1:]
fig, (ax1, ax2) = plt.subplots(2,1, figsize=(10,10))
fig.suptitle(super_title_pulse, fontsize=20, y=1.03)
ax1.plot(time, pulse_lc, linewidth=2.0, color='g')
ax1.set_xlabel('Elapsed time (seconds)', fontproperties=font_prop)
ax1.set_ylabel('Relative count rate', fontproperties=font_prop)
ax1.set_xlim(0, 0.3)
ax1.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax1.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax1.set_title('Light curve', fontproperties=font_prop)
ax2.plot(pos_freq[1:], npn_pulse, linewidth=2.0)
ax2.set_xlabel('Frequency (Hz)', fontproperties=font_prop)
ax2.set_ylabel(r'Power $\times$ frequency', fontproperties=font_prop)
ax2.set_xscale('log')
ax2.set_xlim(pos_freq[1], nyquist)
# ax2.set_xlim(0,200)
ax2.set_ylim(0, np.max(npn_pulse)+.5*np.max(npn_pulse))
## Setting the y-axis minor ticks. It's complicated.
y_maj_loc = ax2.get_yticks()
y_min_mult = 0.5 * (y_maj_loc[1] - y_maj_loc[0])
yLocator = MultipleLocator(y_min_mult) ## location of minor ticks on the y-axis
ax2.yaxis.set_minor_locator(yLocator)
ax2.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax2.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax2.set_title('Power density spectrum', fontproperties=font_prop)
fig.tight_layout(pad=1.0, h_pad=2.0)
plt.show()
print "dt = %.13f s" % dt
print "df = %.2f Hz" % df
print "nyquist = %.2f Hz" % nyquist
print "n_bins = %d" % n_bins
"""
Explanation: Plotting just the pulsation: light curve and power spectrum.
End of explanation
"""
total_lc = pulse_lc * noise_lc
total_lc[np.where(total_lc < 0)] = 0.0
total_lc_poiss = np.random.poisson(total_lc * dt) / dt
mean_total_lc = np.mean(total_lc_poiss)
print "Mean count rate of light curve:", mean_total_lc
total_unnorm_power = np.absolute(fftpack.fft(total_lc_poiss - mean_total_lc)) ** 2
total_unnorm_power = total_unnorm_power[0:len(pos_freq)]
total_power = 2.0 * total_unnorm_power * dt / float(n_bins) / (mean_total_lc ** 2)
total_power -= 2.0 / mean_total_lc
print "Mean of total power:", np.mean(total_power)
total_pow_var = np.sum(total_power * df)
## Re-binning the power spectrum in frequency
rb_freq, rb_total_power, tfreq_min, tfreq_max = geometric_rebinning(pos_freq, total_power, 1.01)
"""
Explanation: Combining the pulsation and the noise (where the noise has a power law, QPO, etc.): the fractional pulsation is multiplied into the noise light curve, and the product is then Poissonified.
End of explanation
"""
super_title_total = "Together"
npn_total = rb_total_power * rb_freq
# npn_total = total_power
fig, (ax1, ax2, ax3) = plt.subplots(3,1, figsize=(10,15))
fig.suptitle(super_title_total, fontsize=20, y=1.03)
## Plotting the light curve
ax1.plot(time, total_lc_poiss, linewidth=2.0, color='g')
ax1.set_xlabel('Elapsed time (seconds)', fontproperties=font_prop)
ax1.set_ylabel('Photon count rate', fontproperties=font_prop)
ax1.set_xlim(0, 0.3)
ax1.set_ylim(0,)
ax1.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax1.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax1.set_title('Light curve', fontproperties=font_prop)
## Linearly plotting the power spectrum
ax2.plot(pos_freq, total_power, linewidth=2.0)
ax2.set_xlabel(r'$\nu$ (Hz)', fontproperties=font_prop)
ax2.set_ylabel(r'Power (frac. rms$^{2}$)', fontproperties=font_prop)
ax2.set_xlim(0, nyquist)
ax2.set_ylim(0, np.max(total_power) + (0.1 * np.max(total_power)))
## Setting the axes' minor ticks. It's complicated.
x_maj_loc = ax2.get_xticks()
y_maj_loc = ax2.get_yticks()
x_min_mult = 0.2 * (x_maj_loc[1] - x_maj_loc[0])
y_min_mult = 0.5 * (y_maj_loc[1] - y_maj_loc[0])
xLocator = MultipleLocator(x_min_mult) ## location of minor ticks on the y-axis
yLocator = MultipleLocator(y_min_mult) ## location of minor ticks on the y-axis
ax2.xaxis.set_minor_locator(xLocator)
ax2.yaxis.set_minor_locator(yLocator)
ax2.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax2.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax2.set_title('Linear power density spectrum', fontproperties=font_prop)
## Logaritmically plotting the re-binned power * frequency spectrum
ax3.plot(rb_freq, npn_total, linewidth=2.0)
ax3.set_xscale('log')
ax3.set_yscale('log')
ax3.set_xlabel(r'$\nu$ (Hz)', fontproperties=font_prop)
ax3.set_ylabel(r'Power $\times$ frequency', fontproperties=font_prop)
ax3.set_xlim(0, nyquist)
# ax3.set_xlim(4000,)
# ax3.set_ylim(0,300)
ax3.tick_params(axis='x', labelsize=16, bottom=True, top=True, labelbottom=True, labeltop=False)
ax3.tick_params(axis='y', labelsize=16, left=True, right=True, labelleft=True, labelright=False)
ax3.set_title('Re-binned power density spectrum', fontproperties=font_prop)
fig.tight_layout(pad=1.0, h_pad=2.0)
plt.show()
print "dt = %.13f s" % dt
print "df = %.2f Hz" % df
print "nyquist = %.2f Hz" % nyquist
print "n_bins = %d" % n_bins
print "power variance = %.2e (frac rms2)" % total_pow_var
"""
Explanation: Plotting the periodic pulsation + noise process: light curve and power spectrum.
End of explanation
"""
## Titles in unicode
# print u"\n\t\tPower law; \u03B2 = %s\n" % str(beta)
# print u"\n\t\tLorentzian; \u0393 = %s at \u03C9\u2080 = %s\n" % (str(gamma), str(w_0))
# print u"\n\t\tPower law with exponential decay; \u03B2 = %s, \u03B1 = %s\n" \
# % (str(beta), str(alpha))
# print u"\n\n\tBroken power law; \u03C9_break = %s, \u03B2\u2081 = %s, \u03B2\u2082 = %s\n" \
# % (str(w_b), str(beta_1), str(beta_2))
# super_title = r"Power law with exponential decay: $\beta$ = %s, $\alpha$ = %s" % (str(beta), str(alpha))
# super_title = r"Broken power law; $\omega_{break}$ = %.2f Hz, $\beta_1$ = %.2f, $\beta_2$ = %.2f" \
# % (w_b, beta_1, beta_2)
"""
Explanation: Playing around with other stuff
End of explanation
"""
|
rbondesan/ssd | reinforcement_q_learning.ipynb | mit | import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
from copy import deepcopy
from PIL import Image
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import torch.nn.functional as F
import torchvision.transforms as T
env = gym.make('CartPole-v0')
is_ipython = 'inline' in matplotlib.get_backend()
if is_ipython:
from IPython import display
"""
Explanation: Reinforcement Learning (DQN) tutorial
Author: Adam Paszke <https://github.com/apaszke>_
This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent
on the CartPole-v0 task from the OpenAI Gym <https://gym.openai.com/>__.
Task
The agent has to decide between two actions - moving the cart left or
right - so that the pole attached to it stays upright. You can find an
official leaderboard with various algorithms and visualizations at the
Gym website <https://gym.openai.com/envs/CartPole-v0>__.
.. figure:: /_static/img/cartpole.gif
:alt: cartpole
cartpole
As the agent observes the current state of the environment and chooses
an action, the environment transitions to a new state, and also
returns a reward that indicates the consequences of the action. In this
task, the environment terminates if the pole falls over too far.
The CartPole task is designed so that the inputs to the agent are 4 real
values representing the environment state (position, velocity, etc.).
However, neural networks can solve the task purely by looking at the
scene, so we'll use a patch of the screen centered on the cart as an
input. Because of this, our results aren't directly comparable to the
ones from the official leaderboard - our task is much harder.
Unfortunately this does slow down the training, because we have to
render all the frames.
Strictly speaking, we will present the state as the difference between
the current screen patch and the previous one. This will allow the agent
to take the velocity of the pole into account from one image.
Packages
First, let's import needed packages. Firstly, we need
gym <https://gym.openai.com/docs>__ for the environment
(Install using pip install gym).
We'll also use the following from PyTorch:
neural networks (torch.nn)
optimization (torch.optim)
automatic differentiation (torch.autograd)
utilities for vision tasks (torchvision - a separate
package <https://github.com/pytorch/vision>__).
End of explanation
"""
# class Transition with tuples accessible by name with . operator (here name class=name instance)
Transition = namedtuple('Transition',
('state', 'action', 'next_state', 'reward'))
class ReplayMemory(object):
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
"""Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = Transition(*args)
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
"""
Explanation: Replay Memory
We'll be using experience replay memory for training our DQN. It stores
the transitions that the agent observes, allowing us to reuse this data
later. By sampling from it randomly, the transitions that build up a
batch are decorrelated. It has been shown that this greatly stabilizes
and improves the DQN training procedure.
For this, we're going to need two classes:
Transition - a named tuple representing a single transition in
our environment
ReplayMemory - a cyclic buffer of bounded size that holds the
transitions observed recently. It also implements a .sample()
method for selecting a random batch of transitions for training.
End of explanation
"""
class DQN(nn.Module):
def __init__(self):
super(DQN, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)
self.bn3 = nn.BatchNorm2d(32)
#448 = 32 * H * W, where H and W are the height and width of image after all convolutions
self.head = nn.Linear(448, 2)
def forward(self, x):
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
return self.head(x.view(x.size(0), -1)) # the size -1 is inferred from other dimensions
# after first conv2d, size is
Hin = 40; Win = 80;
def dim_out(dim_in):
ks = 5
stride = 2
return math.floor((dim_in-ks)/stride+1)
HH=dim_out(dim_out(dim_out(Hin)))
WW=dim_out(dim_out(dim_out(Win)))
print(32*HH*WW)
"""
Explanation: Now, let's define our model. But first, let's quickly recap what a DQN is.
DQN algorithm
Our environment is deterministic, so all equations presented here are
also formulated deterministically for the sake of simplicity. In the
reinforcement learning literature, they would also contain expectations
over stochastic transitions in the environment.
Our aim will be to train a policy that tries to maximize the discounted,
cumulative reward
$R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where
$R_{t_0}$ is also known as the return. The discount,
$\gamma$, should be a constant between $0$ and $1$
that ensures the sum converges. It makes rewards from the uncertain far
future less important for our agent than the ones in the near future
that it can be fairly confident about.
The main idea behind Q-learning is that if we had a function
$Q^*: State \times Action \rightarrow \mathbb{R}$, that could tell
us what our return would be, if we were to take an action in a given
state, then we could easily construct a policy that maximizes our
rewards:
\begin{align}\pi^*(s) = \arg\!\max_a \ Q^*(s, a)\end{align}
However, we don't know everything about the world, so we don't have
access to $Q^*$. But, since neural networks are universal function
approximators, we can simply create one and train it to resemble
$Q^*$.
For our training update rule, we'll use a fact that every $Q$
function for some policy obeys the Bellman equation:
\begin{align}Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))\end{align}
The difference between the two sides of the equality is known as the
temporal difference error, $\delta$:
\begin{align}\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))\end{align}
To minimise this error, we will use the Huber
loss <https://en.wikipedia.org/wiki/Huber_loss>__. The Huber loss acts
like the mean squared error when the error is small, but like the mean
absolute error when the error is large - this makes it more robust to
outliers when the estimates of $Q$ are very noisy. We calculate
this over a batch of transitions, $B$, sampled from the replay
memory:
\begin{align}\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \ \in \ B} \mathcal{L}(\delta)\end{align}
\begin{align}\text{where} \quad \mathcal{L}(\delta) = \begin{cases}
\frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \\
|\delta| - \frac{1}{2} & \text{otherwise.}
\end{cases}\end{align}
Q-network
^^^^^^^^^
Our model will be a convolutional neural network that takes in the
difference between the current and previous screen patches. It has two
outputs, representing $Q(s, \mathrm{left})$ and
$Q(s, \mathrm{right})$ (where $s$ is the input to the
network). In effect, the network is trying to predict the quality of
taking each action given the current input.
End of explanation
"""
resize = T.Compose([T.ToPILImage(),
T.Scale(40, interpolation=Image.CUBIC),
T.ToTensor()])
# This is based on the code from gym.
screen_width = 600
def get_cart_location():
world_width = env.unwrapped.x_threshold * 2
scale = screen_width / world_width
return int(env.unwrapped.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART
def get_screen():
screen = env.render(mode='rgb_array').transpose(
(2, 0, 1)) # transpose into torch order (CHW)
# Strip off the top and bottom of the screen
screen = screen[:, 160:320]
view_width = 320
cart_location = get_cart_location()
if cart_location < view_width // 2:
slice_range = slice(view_width)
elif cart_location > (screen_width - view_width // 2):
slice_range = slice(-view_width, None)
else:
slice_range = slice(cart_location - view_width // 2,
cart_location + view_width // 2)
# Strip off the edges, so that we have a square image centered on a cart
screen = screen[:, :, slice_range]
    # Convert to float, rescale, convert to torch tensor
# (this doesn't require a copy)
screen = np.ascontiguousarray(screen, dtype=np.float32) / 255
screen = torch.from_numpy(screen)
# Resize, and add a batch dimension (BCHW)
    print(resize(screen).unsqueeze(0).size())  # size is a method on torch tensors
return resize(screen).unsqueeze(0)
env.reset()
plt.imshow(get_screen().squeeze(0).permute(
1, 2, 0).numpy(), interpolation='none')
plt.show()
"""
Explanation: Input extraction
^^^^^^^^^^^^^^^^
The code below are utilities for extracting and processing rendered
images from the environment. It uses the torchvision package, which
makes it easy to compose image transforms. Once you run the cell it will
display an example patch that it extracted.
End of explanation
"""
BATCH_SIZE = 128
GAMMA = 0.999
EPS_START = 0.9
EPS_END = 0.05
EPS_DECAY = 200
USE_CUDA = torch.cuda.is_available()
model = DQN()
memory = ReplayMemory(10000)
optimizer = optim.RMSprop(model.parameters())
if USE_CUDA:
model.cuda()
class Variable(autograd.Variable):
def __init__(self, data, *args, **kwargs):
if USE_CUDA:
data = data.cuda()
super(Variable, self).__init__(data, *args, **kwargs)
steps_done = 0
def select_action(state):
global steps_done
sample = random.random()
eps_threshold = EPS_END + (EPS_START - EPS_END) * \
math.exp(-1. * steps_done / EPS_DECAY)
steps_done += 1
if sample > eps_threshold:
return model(Variable(state, volatile=True)).data.max(1)[1].cpu()
else:
return torch.LongTensor([[random.randrange(2)]])
episode_durations = []
def plot_durations():
plt.figure(1)
plt.clf()
durations_t = torch.Tensor(episode_durations)
plt.xlabel('Episode')
plt.ylabel('Duration')
plt.plot(durations_t.numpy())
# Take 100 episode averages and plot them too
if len(durations_t) >= 100:
means = durations_t.unfold(0, 100, 1).mean(1).view(-1)
means = torch.cat((torch.zeros(99), means))
plt.plot(means.numpy())
if is_ipython:
display.clear_output(wait=True)
display.display(plt.gcf())
"""
Explanation: Training
Hyperparameters and utilities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This cell instantiates our model and its optimizer, and defines some
utilities:
Variable - this is a simple wrapper around
torch.autograd.Variable that will automatically send the data to
the GPU every time we construct a Variable.
select_action - will select an action according to an epsilon
greedy policy. Simply put, we'll sometimes use our model for choosing
the action, and sometimes we'll just sample one uniformly. The
probability of choosing a random action will start at EPS_START
and will decay exponentially towards EPS_END. EPS_DECAY
controls the rate of the decay.
plot_durations - a helper for plotting the durations of episodes,
along with an average over the last 100 episodes (the measure used in
the official evaluations). The plot will be underneath the cell
containing the main training loop, and will update after every
episode.
End of explanation
"""
last_sync = 0
def optimize_model():
global last_sync
print("len<batch:",len(memory) < BATCH_SIZE)
# if the memory is smaller than wanted, don't do anything and keep building memory
if len(memory) < BATCH_SIZE:
return
transitions = memory.sample(BATCH_SIZE)
# Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for
# detailed explanation).
batch = Transition(*zip(*transitions))
# Compute a mask of non-final states and concatenate the batch elements
non_final_mask = torch.ByteTensor(
tuple(map(lambda s: s is not None, batch.next_state)))
if USE_CUDA:
non_final_mask = non_final_mask.cuda()
# We don't want to backprop through the expected action values and volatile
# will save us on temporarily changing the model parameters'
# requires_grad to False!
non_final_next_states = Variable(torch.cat([s for s in batch.next_state
if s is not None]),
volatile=True)
state_batch = Variable(torch.cat(batch.state))
action_batch = Variable(torch.cat(batch.action))
reward_batch = Variable(torch.cat(batch.reward))
# Compute Q(s_t, a) - the model computes Q(s_t), then we select the
# columns of actions taken
print("In optimize: state_batch", state_batch.data.size())
state_action_values = model(state_batch).gather(1, action_batch)
# Compute V(s_{t+1})=max_a Q(s_{t+1}, a) for all next states.
next_state_values = Variable(torch.zeros(BATCH_SIZE))
next_state_values[non_final_mask] = model(non_final_next_states).max(1)[0]
# Now, we don't want to mess up the loss with a volatile flag, so let's
# clear it. After this, we'll just end up with a Variable that has
# requires_grad=False
next_state_values.volatile = False
# Compute the expected Q values
expected_state_action_values = (next_state_values * GAMMA) + reward_batch
# Compute Huber loss
loss = F.smooth_l1_loss(state_action_values, expected_state_action_values)
# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in model.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
transitions = memory.sample(BATCH_SIZE)
batch = Transition(*zip(*transitions))
non_final_next_states = Variable(torch.cat([s for s in batch.next_state
if s is not None]),
volatile=True)
state_batch = Variable(torch.cat(batch.state))
action_batch = Variable(torch.cat(batch.action))
reward_batch = Variable(torch.cat(batch.reward))
#print(state_batch.data.size())
#print(action_batch.data.size())
#print(reward_batch.data.size())
x=state_batch
x.view(x.size(0), -1)
40*80*3
"""
Explanation: Training loop
^^^^^^^^^^^^^
Finally, the code for training our model.
Here, you can find an optimize_model function that performs a
single step of the optimization. It first samples a batch, concatenates
all the tensors into a single one, computes $Q(s_t, a_t)$ and
$V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our
loss. By definition we set $V(s) = 0$ if $s$ is a terminal
state.
End of explanation
"""
num_episodes = 1
for i_episode in range(num_episodes):
# Initialize the environment and state
env.reset()
last_screen = get_screen()
current_screen = get_screen()
state = current_screen - last_screen
for t in count():
print(t)
# Select and perform an action
action = select_action(state)
_, reward, done, _ = env.step(action[0, 0])
reward = torch.Tensor([reward])
# Observe new state
last_screen = current_screen
current_screen = get_screen()
if not done:
next_state = current_screen - last_screen
else:
next_state = None
# Store the transition in memory
memory.push(state, action, next_state, reward)
# Move to the next state
state = next_state
# Perform one step of the optimization (on the target network)
optimize_model()
if done:
episode_durations.append(t + 1)
#plot_durations()
break
"""
Explanation: Below, you can find the main training loop. At the beginning we reset
the environment and initialize the state variable. Then, we sample
an action, execute it, observe the next screen and the reward (always
1), and optimize our model once. When the episode ends (our model
fails), we restart the loop.
Below, num_episodes is set small. You should download
the notebook and run a lot more episodes.
End of explanation
"""
|
vadim-ivlev/STUDY | handson-data-science-python/DataScience-Python3/LinearRegression.ipynb | mit | %matplotlib inline
import numpy as np
from pylab import *
pageSpeeds = np.random.normal(3.0, 1.0, 1000)
purchaseAmount = 100 - (pageSpeeds + np.random.normal(0, 0.1, 1000)) * 3
scatter(pageSpeeds, purchaseAmount)
"""
Explanation: Linear Regression
Let's fabricate some data that shows a roughly linear relationship between page speed and amount purchased:
End of explanation
"""
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(pageSpeeds, purchaseAmount)
"""
Explanation: As we only have two features, we can keep it simple and just use scipy.stats.linregress:
End of explanation
"""
r_value ** 2
"""
Explanation: Not surprisingly, our R-squared value shows a really good fit:
End of explanation
"""
import matplotlib.pyplot as plt
def predict(x):
return slope * x + intercept
fitLine = predict(pageSpeeds)
plt.scatter(pageSpeeds, purchaseAmount)
plt.plot(pageSpeeds, fitLine, c='r')
plt.show()
"""
Explanation: Let's use the slope and intercept we got from the regression to plot predicted values vs. observed:
End of explanation
"""
|
mined-gatech/pymks_overview | notebooks/stress_homogenization_2D.ipynb | mit | %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Effective Stiffness
Introduction
This example uses the MKSHomogenizationModel to create a homogenization linkage for the effective stiffness. This example starts with a brief background of the homogenization theory on the components of the effective elastic stiffness tensor for a composite material. Then the example generates random microstructures and their average stress values that will be used to show how to calibrate and use our model. We will also show how to use tools from sklearn to optimize fit parameters for the MKSHomogenizationModel. Lastly, the data is used to evaluate the MKSHomogenizationModel for effective stiffness values for a new set of microstructures.
Linear Elasticity and Effective Elastic Modulus
For this example we are looking to create a homogenization linkage that predicts the effective isotropic stiffness components for two-phase microstructures. The specific stiffness component we are looking to predict in this example is $C_{xxxx}$, which is easily accessed by applying a uniaxial macroscale strain tensor (the only non-zero component is $\varepsilon_{xx}$).
$$ u(L, y) = u(0, y) + L\bar{\varepsilon}_{xx}$$
$$ u(0, L) = u(0, 0) = 0 $$
$$ u(x, 0) = u(x, L) $$
More details about these boundary conditions can be found in [1]. Using these boundary conditions, $C_{xxxx}$ can be estimated by calculating the ratio of the averaged stress over the applied averaged strain.
$$ C_{xxxx}^* \cong \bar{\sigma}_{xx} / \bar{\varepsilon}_{xx}$$
In this example, $C_{xxxx}$ for 6 different types of microstructures will be estimated using the MKSHomogenizationModel from pymks, which provides a method to compute $\bar{\sigma}_{xx}$ for a new microstructure with an applied strain of $\bar{\varepsilon}_{xx}$.
End of explanation
"""
from pymks.datasets import make_elastic_stress_random
sample_size = 200
grain_size = [(15, 2), (2, 15), (7, 7), (8, 3), (3, 9), (2, 2)]
n_samples = [sample_size] * 6
elastic_modulus = (380, 200)
poissons_ratio = (0.28, 0.3)
macro_strain = 0.001
size = (21, 21)
X, y = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size,
elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio,
macro_strain=macro_strain, seed=0)
"""
Explanation: Data Generation
A set of periodic microstructures and their volume averaged elastic stress values $\bar{\sigma}_{xx}$ can be generated by importing the make_elastic_stress_random function from pymks.datasets. This function has several arguments. n_samples is the number of samples that will be generated, size specifies the dimensions of the microstructures, grain_size controls the effective microstructure feature size, elastic_modulus and poissons_ratio are used to indicate the material property for each of the
phases, macro_strain is the value of the applied uniaxial strain, and seed can be used to change the random number generator seed.
Let's go ahead and create 6 different types of microstructures, each with 200 samples with dimensions 21 x 21. Each of the 6 types will have a different microstructure feature size. The function will return the microstructures and their associated volume-averaged stress values.
End of explanation
"""
print(X.shape)
print(y.shape)
"""
Explanation: The array X contains the microstructure information and has the dimensions
of (n_samples, Nx, Ny). The array y contains the average stress value for
each of the microstructures and has dimensions of (n_samples,).
End of explanation
"""
from pymks.tools import draw_microstructures
X_examples = X[::sample_size]
draw_microstructures((X_examples[:3]))
draw_microstructures((X_examples[3:]))
"""
Explanation: Lets take a look at the 6 types the microstructures to get an idea of what they
look like. We can do this by importing draw_microstructures.
End of explanation
"""
print('Stress Values:', y[::200])
"""
Explanation: In this dataset 4 of the 6 microstructure types have grains that are elongated in either
the x or y directions. The remaining 2 types of samples have equiaxed grains with
different average sizes.
Let's look at the stress values for each of the microstructures shown above.
End of explanation
"""
from pymks import MKSHomogenizationModel
from pymks import PrimitiveBasis
prim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSHomogenizationModel(basis=prim_basis, correlations=[(0, 0), (1, 1), (0, 1)])
"""
Explanation: Now that we have a dataset to work with, we can look at how to use the MKSHomogenizationModel to predict stress values for new microstructures.
MKSHomogenizationModel Work Flow
The default instance of the MKSHomogenizationModel takes in a dataset and
- calculates the 2-point statistics
- performs dimensionality reduction using Singular Valued Decomposition (SVD)
- and fits a polynomial regression model to the low-dimensional representation.
This work flow has been shown to accurately predict effective properties in several examples [2][3], and requires that we specify the number of components used in dimensionality reduction and the order of the polynomial we will be using for the polynomial regression. In this example we will show how we can use tools from sklearn to try and optimize our selection for these two parameters.
Modeling with MKSHomogenizationModel
In order to make an instance of the MKSHomogenizationModel, we need to pass an instance of a basis (used to compute the 2-point statistics). For this particular example, there are only 2 discrete phases, so we will use the PrimitiveBasis from pymks. We only have two phases denoted by 0 and 1, therefore we have two local states and our domain is 0 to 1.
Let's make an instance of the MKSHomogenizationModel.
End of explanation
"""
print('Default Number of Components:', model.n_components)
print('Default Polynomial Order:', model.degree)
"""
Explanation: Let's take a look at the default values for the number of components and the order of the polynomial.
End of explanation
"""
model.n_components = 40
model.fit(X, y, periodic_axes=[0, 1])
"""
Explanation: These default parameters may not be the best model for a given problem, we will now show one method that can be used to optimize them.
Optimizing the Number of Components and Polynomial Order
To start with, we can look at how the variance changes as a function of the number of components.
In general for SVD as well as PCA, the amount of variance captured in each component decreases
as the component number increases.
This means that as the number of components used in the dimensionality reduction increases, the percentage of the variance will asymptotically approach 100%. Let's see if this is true for our dataset.
In order to do this we will change the number of components to 40 and then
fit the data we have using the fit function. This function performs the dimensionality reduction and
also fits the regression model. Because our microstructures are periodic, we need to
use the periodic_axes argument when we fit the data.
End of explanation
"""
from pymks.tools import draw_component_variance
draw_component_variance(model.dimension_reducer.explained_variance_ratio_)
"""
Explanation: Now look at how the cumulative variance changes as a function of the number of components, using draw_component_variance
from pymks.tools.
End of explanation
"""
from sklearn.cross_validation import train_test_split
flat_shape = (X.shape[0],) + (np.prod(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(flat_shape), y,
test_size=0.2, random_state=3)
print(X_train.shape)
print(X_test.shape)
"""
Explanation: Roughly 90 percent of the variance is captured with the first 5 components. This means our model may only need a few components to predict the average stress.
Next we need to optimize the number of components and the polynomial order. To do this we are going to split the data into testing and training sets. This can be done using the train_test_split function from sklearn.
End of explanation
"""
from sklearn.grid_search import GridSearchCV
params_to_tune = {'degree': np.arange(1, 4), 'n_components': np.arange(1, 8)}
fit_params = {'size': X[0].shape, 'periodic_axes': [0, 1]}
gs = GridSearchCV(model, params_to_tune, cv=3, n_jobs=1, fit_params=fit_params).fit(X_train, y_train)
model = gs.best_estimator_
"""
Explanation: We will use cross validation with the training data to fit a number
of models, each with a different number
of components and a different polynomial order.
Then we will use the testing data to verify the best model.
This can be done using GridSearchCV
from sklearn.
We will pass a dictionary params_to_tune with the range of
polynomial order degree and components n_components we want to try.
A dictionary fit_params can be used to pass the periodic_axes variable to
calculate periodic 2-point statistics. The argument cv can be used to specify
the number of folds used in cross validation and n_jobs can be used to specify
the number of jobs that are run in parallel.
Let's vary n_components from 1 to 7 and degree from 1 to 3.
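Since rerunning the MKS model is expensive, the same grid-search mechanics can be sketched on a cheap estimator (a Ridge regressor here, with the newer sklearn.model_selection import path; the parameter names are illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.RandomState(0)
X = rng.randn(60, 4)
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.randn(60)

# Try every combination in the grid with 3-fold cross validation,
# then keep the estimator that scored best on held-out folds.
params_to_tune = {'alpha': [0.01, 0.1, 1.0, 10.0]}
gs = GridSearchCV(Ridge(), params_to_tune, cv=3).fit(X, y)
best = gs.best_estimator_
print(gs.best_params_)
```

The MKS case works the same way, except the grid spans degree and n_components and the fit parameters carry periodic_axes.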
End of explanation
"""
model.fit(X, y, periodic_axes=[0, 1])
"""
Explanation: Prediction using MKSHomogenizationModel
Now that we have selected values for n_components and degree, lets fit the model with the data. Again because
our microstructures are periodic, we need to use the periodic_axes argument.
End of explanation
"""
test_sample_size = 20
n_samples = [test_sample_size] * 6
X_new, y_new = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size,
elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio,
macro_strain=macro_strain, seed=1)
"""
Explanation: Let's generate some more data that can be used to try and validate our model's prediction accuracy. We are going to
generate 20 samples of each of the six different types of microstructures using the same
make_elastic_stress_random function.
End of explanation
"""
y_predict = model.predict(X_new, periodic_axes=[0, 1])
"""
Explanation: Now let's predict the stress values for the new microstructures.
End of explanation
"""
from pymks.tools import draw_components
draw_components([model.reduced_fit_data[:, :2], model.reduced_predict_data[:, :2]],['Training Data', 'Testing Data'])
"""
Explanation: We can look to see if the low-dimensional representation of the
new data is similar to the low-dimensional representation of the data
we used to fit the model using draw_components from pymks.tools.
End of explanation
"""
from sklearn.metrics import r2_score
print('R-squared', model.score(X_new, y_new, periodic_axes=[0, 1]))
"""
Explanation: The predicted data seems to be reasonably similar to the data we used to fit the model
with. Now let's look at the score value for the predicted data.
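The score reported here is the coefficient of determination, R². As a quick aside (illustrative numbers, not from this dataset), it can be computed by hand and checked against scikit-learn:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2_manual = 1.0 - ss_res / ss_tot

print(round(r2_manual, 3))  # 0.949
```

A value near 1 means the predictions explain almost all of the variance in the actual stresses.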
End of explanation
"""
print('Actual Stress   ', y_new[::20])
print('Predicted Stress', y_predict[::20])
"""
Explanation: Looks pretty good. Let's print out one actual and predicted stress value for each of the 6 microstructure types to see how they compare.
End of explanation
"""
from pymks.tools import draw_goodness_of_fit
fit_data = np.array([y, model.predict(X, periodic_axes=[0, 1])])
pred_data = np.array([y_new, y_predict])
draw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Testing Data'])
"""
Explanation: Lastly, we can also evaluate our prediction by looking at a goodness-of-fit plot. We
can do this by importing draw_goodness_of_fit from pymks.tools.
End of explanation
"""
from tensorflow.keras.utils import to_categorical
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
print(train_images.shape)
"""
Explanation: Deep learning 2. Convolutional Neural Networks
The dense FFN used before contained 600,000 parameters. These are expensive to train!
A picture is not a flat array of numbers; it is a 2D matrix with multiple color channels. Convolution kernels are able to find 2D hidden features.
A CNN's layers operate on a 3D tensor of shape (height, width, channels). A colored image usually has three channels (RGB), but more are possible. We have grayscale images, thus only one channel.
The width and height dimensions tend to shrink as we go deeper in the network. Why? NNs are effective information filters.
Why is this important for a biologist?
- Sight is our main sense. Labelling pictures is much easier than other types of data!
- Most biological data can be converted to image format. (including genomics, transcriptomics, etc)
- Spatial transcriptomics, as well as some single cell data have multi-channel and spatial features.
- Microscopy is biology too!
End of explanation
"""
from IPython.display import Image
Image(url= "../img/cnn.png", width=400, height=400)
Image(url= "../img/convolution.png", width=400, height=400)
Image(url= "../img/pooling.png", width=400, height=400)
"""
Explanation: Method:
The convolutional network will filter the image in a sequence, gradually expanding the complexity of hidden features and eliminating the noise via the "downsampling bottleneck".
A CNN's filtering principle is based on the idea of functional convolution: a mathematical way of comparing two functions by sliding one over the other.
Parts: convolution, pooling and classification
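The convolution and pooling operations pictured above can be sketched by hand in NumPy (a minimal illustration, not the Keras implementation; note that Keras Conv2D actually computes cross-correlation, i.e. the kernel is not flipped, which is what this sketch does too):

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image ("valid" padding, stride 1) and
    # take the sum of the element-wise products at each position.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, pool=2):
    # Non-overlapping pool x pool windows, keeping the maximum of each.
    h, w = feature_map.shape
    return feature_map[:h - h % pool, :w - w % pool] \
        .reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # horizontal-edge detector

features = conv2d_valid(image, edge_kernel)  # shape (3, 3)
pooled = max_pool(features)                  # shape (1, 1)
print(features.shape, pooled.shape)
```

This also shows why the spatial dimensions shrink: a 3 x 3 "valid" convolution removes a 2-pixel border and a 2 x 2 pooling halves each dimension.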
End of explanation
"""
#from tensorflow.keras import layers
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras import models
model = models.Sequential()
# first block
model.add(Conv2D(32, kernel_size=(3, 3), input_shape=(28, 28, 1), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
# second block
model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
# flattening followed by dense layer and final output layer
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10,activation='softmax'))
model.summary()
"""
Explanation: The layers:
- The first block: 32 kernels (convolutional filters) of size 3 x 3 each, followed by a max pooling operation with a pool size of 2 x 2.
- The second block: 64 kernels of size 3 x 3 each, followed by a max pooling operation with a pool size of 2 x 2 and a dropout of 20% to regularize the model and thus avoid overfitting.
- Classification block: a flattening operation transforms the data to one dimension so it can be fed to the fully connected (dense) layers. The first dense layer consists of 128 neurons with relu activation, while the final output layer consists of 10 neurons with softmax activation, which outputs a probability for each of the 10 classes.
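The parameter counts that model.summary() reports can be checked by hand. A small sketch, assuming the layer shapes implied by the model above (28x28 inputs, 3x3 "valid" convolutions and 2x2 pooling give spatial sizes 28 → 26 → 13 → 11 → 5):

```python
# Conv2D parameters: (kernel_h * kernel_w * in_channels + 1) * filters,
# where the +1 is the bias of each filter. Dense: (inputs + 1) * units.
def conv2d_params(kh, kw, in_ch, filters):
    return (kh * kw * in_ch + 1) * filters

def dense_params(inputs, units):
    return (inputs + 1) * units

print(conv2d_params(3, 3, 1, 32))     # first conv block
print(conv2d_params(3, 3, 32, 64))    # second conv block
print(dense_params(5 * 5 * 64, 128))  # dense layer after Flatten
print(dense_params(128, 10))          # output layer
```

Pooling and dropout layers contribute no trainable parameters; notice that almost all of the parameters sit in the first dense layer.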
End of explanation
"""
from tensorflow.keras.optimizers import Adam
# compiling the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.01), metrics=['accuracy'])
# train the model training dataset
history = model.fit(train_images, train_labels, epochs=5, validation_data=(test_images, test_labels), batch_size=128)
# save the model
model.save('cnn.h5')
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_loss, test_acc)
%matplotlib inline
import matplotlib.pyplot as plt
print(history.history.keys())
# Plotting the accuracy graph
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='best')
plt.show()
# Plotting the loss graph
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='best')
plt.show()
"""
Explanation: Loss:
Many loss functions are possible (e.g. mean squared error, negative log-likelihood)
cross-entropy loss (or log loss): $-\sum_c y_c \log(p_c)$, summed over all classes, where $y_c$ is a binary indicator that class $c$ is the true class and $p_c$ is the predicted probability for class $c$
Optimizers:
SGD: slower, classic, can get stuck in local minima, uses momentum to avoid small valleys
rmsprop (root mean square propagation): batches contain bias noise; weights are adjusted orthogonal to the bias, leading to faster convergence.
Adam: combines the above
Read more at: https://medium.com/analytics-vidhya/momentum-rmsprop-and-adam-optimizer-5769721b4b19
Learning rate ($\alpha$): Gradient descent algorithms multiply the magnitude of the gradient (the rate of error change with respect to each weight) by a scalar known as the learning rate (also sometimes called the step size) to determine the next point: $w_{ij} = w_{ij} - \alpha \frac{\partial E}{\partial w_{ij}}$
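The cross-entropy formula above can be sketched directly in NumPy (illustrative probabilities, not from the model):

```python
import numpy as np

def categorical_crossentropy(y_true, p_pred):
    # -sum_c y_c * log(p_c); with a one-hot y_true this keeps only
    # the log-probability assigned to the correct class.
    return -np.sum(y_true * np.log(p_pred))

y_true = np.array([0.0, 1.0, 0.0])   # one-hot: true class is index 1
good = np.array([0.05, 0.90, 0.05])  # confident and correct
bad = np.array([0.60, 0.20, 0.20])   # confident and wrong

print(categorical_crossentropy(y_true, good))  # small loss, ~0.105
print(categorical_crossentropy(y_true, bad))   # large loss, ~1.609
```

Confident correct predictions give a loss near zero, while low probability on the true class is penalized heavily, which is exactly what drives the optimizer.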
End of explanation
"""
Image('images/08_transfer_learning_flowchart.png')
"""
Explanation: TensorFlow Tutorial #08
Transfer Learning
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
We saw in the previous Tutorial #07 how to use the pre-trained Inception model for classifying images. Unfortunately the Inception model seemed unable to classify images of people. The reason was the data-set used for training the Inception model, which had some confusing text-labels for classes.
The Inception model is actually quite capable of extracting useful information from an image. So we can instead train the Inception model using another data-set. But it takes several weeks using a very powerful and expensive computer to fully train the Inception model on a new data-set.
We can instead re-use the pre-trained Inception model and merely replace the layer that does the final classification. This is called Transfer Learning.
This tutorial builds on the previous tutorials so you should be familiar with Tutorial #07 on the Inception model, as well as earlier tutorials on how to build and train Neural Networks in TensorFlow. A part of the source-code for this tutorial is located in the inception.py file.
Flowchart
The following chart shows how the data flows when using the Inception model for Transfer Learning. First we input and process an image with the Inception model. Just prior to the final classification layer of the Inception model, we save the so-called Transfer Values to a cache-file.
The reason for using a cache-file is that it takes a long time to process an image with the Inception model. My laptop computer with a Quad-Core 2 GHz CPU can process about 3 images per second using the Inception model. If each image is processed more than once then we can save a lot of time by caching the transfer-values.
The transfer-values are also sometimes called bottleneck-values, but that is a confusing term so it is not used here.
When all the images in the new data-set have been processed through the Inception model and the resulting transfer-values saved to a cache file, then we can use those transfer-values as the input to another neural network. We will then train the second neural network using the classes from the new data-set, so the network learns how to classify images based on the transfer-values from the Inception model.
In this way, the Inception model is used to extract useful information from the images and another neural network is then used for the actual classification.
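The compute-once-then-cache pattern behind transfer_values_cache can be sketched generically (a hypothetical minimal helper, not the inception.py implementation; the function and file names are illustrative):

```python
import os
import pickle

def cached(cache_path, compute_fn, *args):
    # Reload the result from disk if it was computed before,
    # otherwise compute it once and save it for next time.
    if os.path.exists(cache_path):
        with open(cache_path, 'rb') as f:
            return pickle.load(f)
    result = compute_fn(*args)
    with open(cache_path, 'wb') as f:
        pickle.dump(result, f)
    return result

# Demo: the expensive step runs only on the first call.
calls = []
def slow_square(x):
    calls.append(x)
    return x * x

path = 'square_cache_demo.pkl'
if os.path.exists(path):
    os.remove(path)
first = cached(path, slow_square, 7)   # computed
second = cached(path, slow_square, 7)  # reloaded from the cache-file
print(first, second, len(calls))       # 49 49 1
os.remove(path)
```

For the Inception model the "expensive step" is running every image through the network, so caching the transfer-values saves a large amount of time on repeated runs.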
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import time
from datetime import timedelta
import os
# Functions and classes for loading and using the Inception model.
import inception
# We use Pretty Tensor to define the new classifier.
import prettytensor as pt
"""
Explanation: Imports
End of explanation
"""
tf.__version__
"""
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
"""
import cifar10
"""
Explanation: Load Data for CIFAR-10
End of explanation
"""
from cifar10 import num_classes
"""
Explanation: The data dimensions have already been defined in the cifar10 module, so we just need to import the ones we need.
End of explanation
"""
# cifar10.data_path = "data/CIFAR-10/"
"""
Explanation: Set the path for storing the data-set on your computer.
End of explanation
"""
cifar10.maybe_download_and_extract()
"""
Explanation: The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
"""
class_names = cifar10.load_class_names()
class_names
"""
Explanation: Load the class-names.
End of explanation
"""
images_train, cls_train, labels_train = cifar10.load_training_data()
"""
Explanation: Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
End of explanation
"""
images_test, cls_test, labels_test = cifar10.load_test_data()
"""
Explanation: Load the test-set.
End of explanation
"""
print("Size of:")
print("- Training-set:\t\t{}".format(len(images_train)))
print("- Test-set:\t\t{}".format(len(images_test)))
"""
Explanation: The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
End of explanation
"""
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be less than 9 images, ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name, cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
"""
Explanation: Helper-function for plotting images
Function used to plot at most 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
"""
# Get the first images from the test-set.
images = images_test[0:9]
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=False)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
inception.data_dir = 'inception/'
"""
Explanation: Download the Inception Model
The Inception model is downloaded from the internet. This is the default directory where you want to save the data-files. The directory will be created if it does not exist.
End of explanation
"""
inception.maybe_download()
"""
Explanation: Download the data for the Inception model if it doesn't already exist in the directory. It is 85 MB.
See Tutorial #07 for more details.
End of explanation
"""
model = inception.Inception()
"""
Explanation: Load the Inception Model
Load the Inception model so it is ready for classifying images.
Note the deprecation warning, which might cause the program to fail in the future.
End of explanation
"""
from inception import transfer_values_cache
"""
Explanation: Calculate Transfer-Values
Import a helper-function for caching the transfer-values of the Inception model.
End of explanation
"""
file_path_cache_train = os.path.join(cifar10.data_path, 'inception_cifar10_train.pkl')
file_path_cache_test = os.path.join(cifar10.data_path, 'inception_cifar10_test.pkl')
print("Processing Inception transfer-values for training-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_train * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_train = transfer_values_cache(cache_path=file_path_cache_train,
images=images_scaled,
model=model)
print("Processing Inception transfer-values for test-images ...")
# Scale images because Inception needs pixels to be between 0 and 255,
# while the CIFAR-10 functions return pixels between 0.0 and 1.0
images_scaled = images_test * 255.0
# If transfer-values have already been calculated then reload them,
# otherwise calculate them and save them to a cache-file.
transfer_values_test = transfer_values_cache(cache_path=file_path_cache_test,
images=images_scaled,
model=model)
"""
Explanation: Set the file-paths for the caches of the training-set and test-set.
End of explanation
"""
transfer_values_train.shape
"""
Explanation: Check the shape of the array with the transfer-values. There are 50,000 images in the training-set and for each image there are 2048 transfer-values.
End of explanation
"""
transfer_values_test.shape
"""
Explanation: Similarly, there are 10,000 images in the test-set with 2048 transfer-values for each image.
End of explanation
"""
def plot_transfer_values(i):
print("Input image:")
# Plot the i'th image from the test-set.
plt.imshow(images_test[i], interpolation='nearest')
plt.show()
print("Transfer-values for the image using Inception model:")
# Transform the transfer-values into an image.
img = transfer_values_test[i]
img = img.reshape((32, 64))
# Plot the image for the transfer-values.
plt.imshow(img, interpolation='nearest', cmap='Reds')
plt.show()
plot_transfer_values(i=16)
plot_transfer_values(i=17)
"""
Explanation: Helper-function for plotting transfer-values
End of explanation
"""
from sklearn.decomposition import PCA
"""
Explanation: Analysis of Transfer-Values using PCA
Use Principal Component Analysis (PCA) from scikit-learn to reduce the array-lengths of the transfer-values from 2048 to 2 so they can be plotted.
End of explanation
"""
pca = PCA(n_components=2)
"""
Explanation: Create a new PCA-object and set the target array-length to 2.
End of explanation
"""
transfer_values = transfer_values_train[0:3000]
"""
Explanation: It takes a while to compute the PCA so the number of samples has been limited to 3000. You can try and use the full training-set if you like.
End of explanation
"""
cls = cls_train[0:3000]
"""
Explanation: Get the class-numbers for the samples you selected.
End of explanation
"""
transfer_values.shape
"""
Explanation: Check that the array has 3000 samples and 2048 transfer-values for each sample.
End of explanation
"""
transfer_values_reduced = pca.fit_transform(transfer_values)
"""
Explanation: Use PCA to reduce the transfer-value arrays from 2048 to 2 elements.
End of explanation
"""
transfer_values_reduced.shape
"""
Explanation: Check that it is now an array with 3000 samples and 2 values per sample.
End of explanation
"""
def plot_scatter(values, cls):
# Create a color-map with a different color for each class.
import matplotlib.cm as cm
cmap = cm.rainbow(np.linspace(0.0, 1.0, num_classes))
# Get the color for each sample.
colors = cmap[cls]
# Extract the x- and y-values.
x = values[:, 0]
y = values[:, 1]
# Plot it.
plt.scatter(x, y, color=colors)
plt.show()
"""
Explanation: Helper-function for plotting the reduced transfer-values.
End of explanation
"""
plot_scatter(transfer_values_reduced, cls)
"""
Explanation: Plot the transfer-values that have been reduced using PCA. There are 10 different colors for the different classes in the CIFAR-10 data-set. The colors are grouped together but with very large overlap. This may be because PCA cannot properly separate the transfer-values.
End of explanation
"""
from sklearn.manifold import TSNE
"""
Explanation: Analysis of Transfer-Values using t-SNE
End of explanation
"""
pca = PCA(n_components=50)
transfer_values_50d = pca.fit_transform(transfer_values)
"""
Explanation: Another method for doing dimensionality reduction is t-SNE. Unfortunately, t-SNE is very slow so we first use PCA to reduce the transfer-values from 2048 to 50 elements.
End of explanation
"""
tsne = TSNE(n_components=2)
"""
Explanation: Create a new t-SNE object for the final dimensionality reduction and set the target to 2-dim.
End of explanation
"""
transfer_values_reduced = tsne.fit_transform(transfer_values_50d)
"""
Explanation: Perform the final reduction using t-SNE. The current implementation of t-SNE in scikit-learn cannot handle data with many samples so this might crash if you use the full training-set.
End of explanation
"""
transfer_values_reduced.shape
"""
Explanation: Check that it is now an array with 3000 samples and 2 transfer-values per sample.
End of explanation
"""
plot_scatter(transfer_values_reduced, cls)
"""
Explanation: Plot the transfer-values that have been reduced to 2-dim using t-SNE, which shows better separation than the PCA-plot above.
This means the transfer-values from the Inception model appear to contain enough information to separate the CIFAR-10 images into classes, although there is still some overlap so the separation is not perfect.
End of explanation
"""
transfer_len = model.transfer_len
"""
Explanation: New Classifier in TensorFlow
Now we will create another neural network in TensorFlow. This network will take as input the transfer-values from the Inception model and output the predicted classes for CIFAR-10 images.
It is assumed that you are already familiar with how to build neural networks in TensorFlow, otherwise see e.g. Tutorial #03.
Placeholder Variables
First we need the array-length for transfer-values which is stored as a variable in the object for the Inception model.
End of explanation
"""
x = tf.placeholder(tf.float32, shape=[None, transfer_len], name='x')
"""
Explanation: Now create a placeholder variable for inputting the transfer-values from the Inception model into the new network that we are building. The shape of this variable is [None, transfer_len] which means it takes an input array with an arbitrary number of samples as indicated by the keyword None and each sample has 2048 elements, equal to transfer_len.
End of explanation
"""
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
"""
Explanation: Create another placeholder variable for inputting the true class-label of each image. These are so-called One-Hot encoded arrays with 10 elements, one for each possible class in the data-set.
End of explanation
"""
y_true_cls = tf.argmax(y_true, axis=1)
"""
Explanation: Calculate the true class as an integer. This could also be a placeholder variable.
End of explanation
"""
# Wrap the transfer-values as a Pretty Tensor object.
x_pretty = pt.wrap(x)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
fully_connected(size=1024, name='layer_fc1').\
softmax_classifier(class_count=num_classes, labels=y_true)
"""
Explanation: Neural Network
Create the neural network for doing the classification on the CIFAR-10 data-set. This takes as input the transfer-values from the Inception model which will be fed into the placeholder variable x. The network outputs the predicted class in y_pred.
See Tutorial #03 for more details on how to use Pretty Tensor to construct neural networks.
End of explanation
"""
global_step = tf.Variable(initial_value=0,
name='global_step', trainable=False)
"""
Explanation: Optimization Method
Create a variable for keeping track of the number of optimization iterations performed.
End of explanation
"""
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step)
"""
Explanation: Method for optimizing the new neural network.
End of explanation
"""
y_pred_cls = tf.argmax(y_pred, axis=1)
"""
Explanation: Classification Accuracy
The output of the network y_pred is an array with 10 elements. The class number is the index of the largest element in the array.
End of explanation
"""
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
"""
Explanation: Create an array of booleans whether the predicted class equals the true class of each image.
End of explanation
"""
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
"""
Explanation: The classification accuracy is calculated by first type-casting the array of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
End of explanation
"""
session = tf.Session()
"""
Explanation: TensorFlow Run
Create TensorFlow Session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
"""
session.run(tf.global_variables_initializer())
"""
Explanation: Initialize Variables
The variables for the new network must be initialized before we start optimizing them.
End of explanation
"""
train_batch_size = 64
"""
Explanation: Helper-function to get a random training-batch
There are 50,000 images (and arrays with transfer-values for the images) in the training-set. It takes a long time to calculate the gradient of the model using all these images (transfer-values). We therefore only use a small batch of images (transfer-values) in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
"""
def random_batch():
# Number of images (transfer-values) in the training-set.
num_images = len(transfer_values_train)
# Create a random index.
idx = np.random.choice(num_images,
size=train_batch_size,
replace=False)
# Use the random index to select random x and y-values.
# We use the transfer-values instead of images as x-values.
x_batch = transfer_values_train[idx]
y_batch = labels_train[idx]
return x_batch, y_batch
"""
Explanation: Function for selecting a random batch of transfer-values from the training-set.
End of explanation
"""
def optimize(num_iterations):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images (transfer-values) and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch()
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
# We also want to retrieve the global_step counter.
i_global, _ = session.run([global_step, optimizer],
feed_dict=feed_dict_train)
# Print status to screen every 100 iterations (and last).
if (i_global % 100 == 0) or (i == num_iterations - 1):
# Calculate the accuracy on the training-batch.
batch_acc = session.run(accuracy,
feed_dict=feed_dict_train)
# Print status.
msg = "Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}"
print(msg.format(i_global, batch_acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
"""
Explanation: Helper-function to perform optimization
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
End of explanation
"""
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = images_test[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = cls_test[incorrect]
n = min(9, len(images))
# Plot the first n images.
plot_images(images=images[0:n],
cls_true=cls_true[0:n],
cls_pred=cls_pred[0:n])
"""
Explanation: Helper-Functions for Showing Results
Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
"""
# Import a function from sklearn to calculate the confusion-matrix.
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_test, # True class for test-set.
y_pred=cls_pred) # Predicted class.
# Print the confusion matrix as text.
for i in range(num_classes):
# Append the class-name to each line.
class_name = "({}) {}".format(i, class_names[i])
print(cm[i, :], class_name)
# Print the class-numbers for easy reference.
class_numbers = [" ({0})".format(i) for i in range(num_classes)]
print("".join(class_numbers))
"""
Explanation: Helper-function to plot confusion matrix
End of explanation
"""
# Split the data-set in batches of this size to limit RAM usage.
batch_size = 256
def predict_cls(transfer_values, labels, cls_true):
# Number of images.
num_images = len(transfer_values)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_images, dtype=int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_images:
# The ending index for the next batch is denoted j.
j = min(i + batch_size, num_images)
# Create a feed-dict with the images and labels
# between index i and j.
feed_dict = {x: transfer_values[i:j],
y_true: labels[i:j]}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
return correct, cls_pred
"""
Explanation: Helper-functions for calculating classifications
This function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.
The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try and lower the batch-size.
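The start/end index pattern used in predict_cls can be sketched in isolation (a minimal illustration with a cheap function standing in for the TensorFlow session call):

```python
import numpy as np

def batched_apply(fn, data, batch_size):
    # Fill the output batch by batch; i is the start-index of the
    # next batch and j its end-index, as in predict_cls above.
    out = np.zeros(len(data))
    i = 0
    while i < len(data):
        j = min(i + batch_size, len(data))
        out[i:j] = fn(data[i:j])
        i = j
    return out

data = np.arange(10, dtype=float)
# Processed in slices of 4, 4 and 2; the last batch is smaller.
result = batched_apply(lambda x: x * 2, data, batch_size=4)
print(result)
```

Only one batch is in flight at a time, which is what keeps the peak RAM usage bounded regardless of the size of the test-set.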
End of explanation
"""
def predict_cls_test():
return predict_cls(transfer_values = transfer_values_test,
labels = labels_test,
cls_true = cls_test)
"""
Explanation: Calculate the predicted class for the test-set.
End of explanation
"""
def classification_accuracy(correct):
# When averaging a boolean array, False means 0 and True means 1.
# So we are calculating: number of True / len(correct) which is
# the same as the classification accuracy.
# Return the classification accuracy
# and the number of correct classifications.
return correct.mean(), correct.sum()
"""
Explanation: Helper-functions for calculating the classification accuracy
This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4. The function also returns the number of correct classifications.
End of explanation
"""
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# For all the images in the test-set,
# calculate the predicted classes and whether they are correct.
correct, cls_pred = predict_cls_test()
# Classification accuracy and the number of correct classifications.
acc, num_correct = classification_accuracy(correct)
# Number of images being classified.
num_images = len(correct)
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, num_correct, num_images))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
"""
Explanation: Helper-function for showing the classification accuracy
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
End of explanation
"""
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
"""
Explanation: Results
Performance before any optimization
The classification accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
End of explanation
"""
optimize(num_iterations=10000)
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
"""
Explanation: Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the classification accuracy is about 90% on the test-set. Compare this to the basic Convolutional Neural Network from Tutorial #06 which had less than 80% accuracy on the test-set.
End of explanation
"""
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# model.close()
# session.close()
"""
Explanation: Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. Note that there are two TensorFlow-sessions so we close both, one session is inside the model-object.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/c92e443909d938b06abf63b902dac687/plot_epochs_to_data_frame.ipynb | bsd-3-clause | # Author: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# These data already have an average EEG ref applied
raw = mne.io.read_raw_fif(raw_fname)
# For simplicity we will only consider the first 10 epochs
events = mne.read_events(event_fname)[:10]
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = -0.2, 0.5
baseline = (None, 0)
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(auditory_l=1, auditory_r=2, visual_l=3, visual_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=True, reject=reject)
"""
Explanation: Export epochs to Pandas DataFrame
In this example the pandas exporter will be used to produce a DataFrame
object. After exploring some basic features a split-apply-combine
work flow will be conducted to examine the latencies of the response
maxima across epochs and conditions.
<div class="alert alert-info"><h4>Note</h4><p>Equivalent methods are available for raw and evoked data objects.</p></div>
More information and additional introductory materials can be found at the
pandas doc sites: http://pandas.pydata.org/pandas-docs/stable/
Short Pandas Primer
Pandas Data Frames
~~~~~~~~~~~~~~~~~~
A data frame can be thought of as a combination of matrix, list and dict:
It knows about linear algebra and element-wise operations but is size mutable
and allows for labeled access to its data. In addition, the pandas data frame
class provides many useful methods for restructuring, reshaping and visualizing
data. As most methods return data frame instances, operations can be chained
with ease; this allows to write efficient one-liners. Technically a DataFrame
can be seen as a high-level container for numpy arrays and hence switching
back and forth between numpy arrays and DataFrames is very easy.
Taken together, these features qualify data frames for inter operation with
databases and for interactive data exploration / analysis.
Additionally, pandas interfaces with the R statistical computing language that
covers a huge amount of statistical functionality.
Export Options
~~~~~~~~~~~~~~
The pandas exporter comes with a few options worth commenting on.
Pandas DataFrame objects use a so called hierarchical index. This can be
thought of as an array of unique tuples, in our case, representing the higher
dimensional MEG data in a 2D data table. The column names are the channel names
from the epoch object. The channels can be accessed like entries of a
dictionary::
>>> df['MEG 2333']
Epochs and time slices can be accessed with the .loc method::
>>> epochs_df.loc[(1, 2), 'MEG 2333']
However, it is also possible to include this index as regular categorical data
columns which yields a long table format typically used for repeated measure
designs. To take control of this feature, on export, you can specify which
of the three dimensions 'condition', 'epoch' and 'time' is passed to the Pandas
index using the index parameter. Note that this decision is revertible any
time, as demonstrated below.
Similarly, for convenience, it is possible to scale the times, e.g. from
seconds to milliseconds.
Some Instance Methods
~~~~~~~~~~~~~~~~~~~~~
Most numpy methods and many ufuncs can be found as instance methods, e.g.
mean, median, var, std, mul, max, argmax, etc.
Below an incomplete listing of additional useful data frame instance methods:
apply : apply function to data.
Any kind of custom function can be applied to the data. In combination with
lambda this can be very useful.
describe : quickly generate summary stats
Very useful for exploring data.
groupby : generate subgroups and initialize a 'split-apply-combine' operation.
Creates a group object. Subsequently, methods like apply, agg, or transform
can be used to manipulate the underlying data separately but
simultaneously. Finally, reset_index can be used to combine the results
back into a data frame.
plot : wrapper around plt.plot
However it comes with some special options. For examples see below.
shape : shape attribute
gets the dimensions of the data frame.
values :
return underlying numpy array.
to_records :
export data as numpy record array.
to_dict :
export data as dict of arrays.
End of explanation
"""
# The following parameters will scale the channels and times plotting
# friendly. The info columns 'epoch' and 'time' will be used as hierarchical
# index whereas the condition is treated as categorical data. Note that
# this is optional. By passing None you could also print out all nesting
# factors in a long table style commonly used for analyzing repeated measure
# designs.
index, scaling_time, scalings = ['epoch', 'time'], 1e3, dict(grad=1e13)
df = epochs.to_data_frame(picks=None, scalings=scalings,
scaling_time=scaling_time, index=index)
# Create MEG channel selector and drop EOG channel.
meg_chs = [c for c in df.columns if 'MEG' in c]
df.pop('EOG 061') # this works just like with a list.
"""
Explanation: Export DataFrame
End of explanation
"""
# Pandas is using a MultiIndex or hierarchical index to handle higher
# dimensionality while at the same time representing data in a flat 2d manner.
print(df.index.names, df.index.levels)
# Inspecting the index object unveils that 'epoch', 'time' are used
# for subsetting data. We can take advantage of that by using the
# .loc attribute, where in this case the first position indexes the MultiIndex
# and the second the columns, that is, channels.
# Plot some channels across the first three epochs
xticks, sel = np.arange(3, 600, 120), meg_chs[:15]
df.loc[:3, sel].plot(xticks=xticks)
mne.viz.tight_layout()
# slice the time starting at t0 in epoch 2 and ending 500ms after
# the base line in epoch 3. Note that the second part of the tuple
# represents time in milliseconds from stimulus onset.
df.loc[(1, 0):(3, 500), sel].plot(xticks=xticks)
mne.viz.tight_layout()
# Note: For convenience the index was converted from floating point values
# to integer values. To restore the original values you can e.g. say
# df['times'] = np.tile(epochs.times, len(epochs))
# We now reset the index of the DataFrame to expose some Pandas
# pivoting functionality. To simplify the groupby operation we
# drop the indices to treat epoch and time as categorical factors.
df = df.reset_index()
# The ensuing DataFrame then is split into subsets reflecting a crossing
# between condition and trial number. The idea is that we can broadcast
# operations into each cell simultaneously.
factors = ['condition', 'epoch']
sel = factors + ['MEG 1332', 'MEG 1342']
grouped = df[sel].groupby(factors)
# To make the plot labels more readable let's edit the values of 'condition'.
df.condition = df.condition.apply(lambda name: name + ' ')
# Now we compare the mean of two channels response across conditions.
grouped.mean().plot(kind='bar', stacked=True, title='Mean MEG Response',
color=['steelblue', 'orange'])
mne.viz.tight_layout()
# We can even accomplish more complicated tasks in a few lines calling
# apply method and passing a function. Assume we wanted to know the time
# slice of the maximum response for each condition.
max_latency = grouped[sel[2]].apply(lambda x: df.time[x.idxmax()])
print(max_latency)
plt.figure()
max_latency.plot(kind='barh', title='Latency of Maximum Response',
color=['steelblue'])
mne.viz.tight_layout()
# Finally, we will again remove the index to create a proper data table that
# can be used with statistical packages like statsmodels or R.
final_df = max_latency.reset_index()
final_df.rename(columns={0: sel[2]}) # as the index is oblivious of names.
# The index is now written into regular columns so it can be used as factor.
print(final_df)
plt.show()
# To save as csv file, uncomment the next line.
# final_df.to_csv('my_epochs.csv')
# Note. Data Frames can be easily concatenated, e.g., across subjects.
# E.g. say:
#
# import pandas as pd
# group = pd.concat([df_1, df_2])
# group['subject'] = np.r_[np.ones(len(df_1)), np.ones(len(df_2)) + 1]
"""
Explanation: Explore Pandas MultiIndex
End of explanation
"""
|
xgcm/xgcm | doc/grid_metrics.ipynb | mit | import xarray as xr
import numpy as np
from xgcm import Grid
import matplotlib.pyplot as plt
%matplotlib inline
# hack to make file name work with nbsphinx and binder
import os
fname = '../datasets/mitgcm_example_dataset_v2.nc'
if not os.path.exists(fname):
fname = '../' + fname
ds = xr.open_dataset(fname)
ds
"""
Explanation: Grid Metrics
Most modern circulation models discretize the partial differential equations needed to simulate the earth system on a logically rectangular grid. This means the grid for a single time step can be represented as a 3-dimensional array of cells. Even for more complex grid geometries like here, subdomains are usually organized in this manner. A notable exception is models with unstructured grids, which currently cannot be processed with the data model of xarray and xgcm.
Our grid operators work on the logically rectangular grid of an ocean model, meaning that e.g. differences are evaluated on the 'neighboring' cells in either direction, but even though these cells are adjacent, cells can have different size and geometry.
In order to convert operators acting on the logically rectangular grid to physically meaningful output, models need 'metrics' - information about the grid cell geometry in physical space.
In the case of a perfectly rectangular cuboid, the only metrics needed would be three of the edge distances. All other distances can be reconstructed exactly. Most ocean models have however slightly distorted cells, due to the curvature of the earth. To accurately represent the volume of the cell we require more metrics.
Each grid point has three kinds of fundamental metrics associated with it which differ in the number of described axes:
Distances: A distance is associated with a single axis (e.g. ('X',),('Y',) or ('Z',)). Each distance describes the distance from the point to either face of the cell associated with the grid point.
Areas: An area is associated with a pair of axes (e.g. ('X', 'Y'), ('Y', 'Z') and ('X', 'Z')). Each grid point intersects three areas.
Volume: The cell volume is unique for each cell and associated with all three axes (('X', 'Y', 'Z')).
Using metrics with xgcm
Once the user assigns the metrics (given as coordinates in most model output) to the grid object, xgcm is able to automatically select and apply these to calculate e.g. derivatives and integrals from model data.
<div class="alert alert-info">
*Note*: xgcm does not currently check for alignment of missing values between data and metrics. The user needs to check and mask values appropriately
</div>
End of explanation
"""
ds['drW'] = ds.hFacW * ds.drF #vertical cell size at u point
ds['drS'] = ds.hFacS * ds.drF #vertical cell size at v point
ds['drC'] = ds.hFacC * ds.drF #vertical cell size at tracer point
"""
Explanation: For mitgcm output we need to first incorporate partial cell thicknesses into new metric coordinates
End of explanation
"""
metrics = {
('X',): ['dxC', 'dxG'], # X distances
('Y',): ['dyC', 'dyG'], # Y distances
('Z',): ['drW', 'drS', 'drC'], # Z distances
('X', 'Y'): ['rA', 'rAz', 'rAs', 'rAw'] # Areas
}
grid = Grid(ds, metrics=metrics)
"""
Explanation: To assign the metrics, the user has to provide a dictionary with keys and entries corresponding to the spatial orientation of the metrics and a list of the appropriate variable names in the dataset.
End of explanation
"""
grid.integrate(ds.UVEL, 'Z').plot();
"""
Explanation: Grid-aware (weighted) integration
It is now possible to integrate over any grid axis. For example, we can integrate over the Z axis to compute the
discretized version of:
$$ \int_{-H}^0 u dz$$
in one line:
End of explanation
"""
a = grid.integrate(ds.UVEL, 'Z')
b = (ds.UVEL * ds.drW).sum('Z')
xr.testing.assert_equal(a, b)
"""
Explanation: This is equivalent to doing (ds.UVEL * ds.drW).sum('Z'), with the advantage of not having to remember the name of the appropriate metric and the matching dimension. The only thing the user needs to input is the axis to integrate over.
End of explanation
"""
grid.integrate(ds.SALT, 'Z').plot();
"""
Explanation: We can do the exact same thing on a tracer field (which is located on a different grid point) by using the exact same syntax:
End of explanation
"""
a = grid.integrate(ds.UVEL, ['X', 'Y'])
a.plot(y='Z')
# Equivalent to integrating over area
b = (ds.UVEL * ds.rAw).sum(['XG', 'YC'])
xr.testing.assert_equal(a, b)
"""
Explanation: It also works in two dimensions:
End of explanation
"""
print('Spatial integral of zonal velocity: ',grid.integrate(ds.UVEL, ['X', 'Y', 'Z']).values)
"""
Explanation: And finally in 3 dimensions, this time using the salinity of the tracer cell:
End of explanation
"""
a = grid.integrate(ds.SALT, ['X', 'Y', 'Z'])
b = (ds.SALT * ds.rA * ds.drC).sum(['XC', 'YC', 'Z'])
xr.testing.assert_allclose(a, b)
"""
Explanation: But wait, we did not provide a cell volume when setting up the Grid. What happened?
Whenever no matching metric is provided, xgcm will default to reconstruct it from the other available metrics, in this case the area and z distance of the tracer cell
End of explanation
"""
# depth mean salinity
grid.average(ds.SALT, ['Z']).plot();
"""
Explanation: Grid-aware (weighted) average
xgcm can also calculate the weighted average along each axis and combinations of axes.
See for example the vertical average of salinity:
$$ \frac{\int_{-H}^0 S dz}{\int_{-H}^0 dz} $$
End of explanation
"""
# depth mean zonal velocity
grid.average(ds.UVEL, ['Z']).plot();
"""
Explanation: Equivalently, this can be computed with the xgcm operations:
(ds.SALT * ds.drF).sum('Z') / ds.drF.sum('Z')
See also for zonal velocity:
End of explanation
"""
# horizontal average zonal velocity
grid.average(ds.UVEL, ['X','Y']).plot(y='Z');
# volume-weighted average salinity of the global ocean
print('Volume weighted average of salinity: ',grid.average(ds.SALT, ['X','Y', 'Z']).values)
"""
Explanation: This works with multiple dimensions as well:
End of explanation
"""
# the streamfunction is the cumulative integral of the vertically integrated zonal velocity along y
psi = grid.cumint(-grid.integrate(ds.UVEL,'Z'),'Y', boundary='fill')
maskZ = grid.interp(ds.hFacS, 'X').isel(Z=0)
(psi / 1e6).squeeze().where(maskZ).plot.contourf(levels=np.arange(-160, 40, 5));
"""
Explanation: Cumulative integration
Using the metric-aware cumulative integration cumint, we can calculate the barotropic transport streamfunction even more easily and intuitively, in one line:
End of explanation
"""
uvel_l = grid.interp(ds.UVEL,'Z')
vvel_l = grid.interp(ds.VVEL,'Z')
theta_l = grid.interp(ds.THETA,'Z')
"""
Explanation: Here, cumint is performing the discretized form of:
$$ \psi = \int_{y_0}^{y} -U dy' $$
where $U = \int_{-H}^0 u dz$, and under the hood looks like the following operation:
grid.cumsum( -grid.integrate(ds.UVEL,'Z') * ds.dyG, 'Y', boundary='fill')
Except that, once again, one does not have to remember the matching metric while using cumint.
Computing derivatives
In a similar fashion to integration, xgcm uses metrics to compute derivatives.
For this example we show vertical shear, i.e. the derivative of some quantity in the vertical.
At it's core, derivative is based on diff, which shifts a data array
to a new grid point, as shown
here.
Because of this shifting, we need to either define new metrics which live at the right points on the grid,
or first interpolate the desired quantities, anticipating the shift.
Here we choose the latter, and interpolate velocities and temperature onto the vertical cell faces of the grid
cells.
The resulting quantities are in line with the vertical velocity w, which is shown in the vertical grid of the C
grid here.
End of explanation
"""
zonal_shear = grid.derivative(uvel_l,'Z')
zonal_shear.isel(Z=0).plot();
"""
Explanation: The subscript "l" is used to denote a leftward shift on the vertical axis, following this nomenclature.
As a first example, we show zonal velocity shear in the top layer, which is the finite difference version of:
$$ \frac{\partial u}{\partial z}\Big|_{z=-25m} $$
End of explanation
"""
expected_result = (grid.diff( uvel_l, 'Z') ) /ds.drW
xr.testing.assert_equal(zonal_shear, expected_result.reset_coords(drop=True))
"""
Explanation: and the underlying xgcm operations are:
grid.diff( uvel_l, 'Z' ) / ds.drW
Which is shown to be equivalent below:
End of explanation
"""
print('1. ', ds.UVEL.dims)
print('2. ', uvel_l.dims)
print('3. ', zonal_shear.dims)
"""
Explanation: A note on dimensions: here we first interpolated from "Z"->"Zl" and
the derivative operation shifted the result back from "Zl"->"Z".
End of explanation
"""
fig,axs = plt.subplots(1,2,figsize=(12,8))
titles=['Horizontal average of zonal velocity, $u$',
'Horizontal average of zonal velocity shear, $\partial u/\partial z$']
for ax,fld,title in zip(axs,[ds.UVEL,zonal_shear],titles):
# Only select non-land (a.k.a. wet) points
fld = fld.where(ds.maskW).isel(time=0).copy()
grid.average(fld,['X','Y']).plot(ax=ax,y='Z')
ax.grid();
ax.set_title(title);
"""
Explanation: For reference, the vertical profiles of horizontal average of zonal velocity
and zonal velocity shear are shown below.
End of explanation
"""
grid.derivative(vvel_l,'Z').isel(Z=0).plot();
grid.derivative(theta_l,'Z').isel(Z=0).plot();
"""
Explanation: And finally, for meridional velocity and temperature in the top layer:
End of explanation
"""
ds['dxF'] = grid.interp(ds.dxC,'X')
ds['dyF'] = grid.interp(ds.dyC,'Y')
metrics = {
('X',): ['dxC', 'dxG','dxF'], # X distances
('Y',): ['dyC', 'dyG','dyF'], # Y distances
('Z',): ['drW', 'drS', 'drC'], # Z distances
('X', 'Y'): ['rA', 'rAz', 'rAs', 'rAw'] # Areas
}
grid = Grid(ds, metrics=metrics)
"""
Explanation: <div class="alert alert-info">
**Note:** The `.derivative` function performs a centered finite difference operation.
Keep in mind that this is different from
[finite volume differencing schemes](https://mitgcm.readthedocs.io/en/latest/algorithm/finitevol-meth.html)
as used in many ocean models.
See [this section](https://xgcm.readthedocs.io/en/latest/example_mitgcm.html#Divergence)
of documentation for some examples of how xgcm can be helpful in performing these operations.
</div>
Metric weighted interpolation
Finally, grid metrics allow us to implement area-weighted interpolation schemes quite easily. First, however, we need to once again define new metrics in the horizontal:
End of explanation
"""
grid.interp(ds.THETA.where(ds.maskC),'X',metric_weighted=['X','Y']).isel(Z=0).plot();
"""
Explanation: Here we show temperature interpolated in the X direction: from the tracer location to where zonal velocity is located, i.e. from t to u in the horizontal view of the C grid shown here.
End of explanation
"""
|
nicolas998/wmf | Examples/Ejemplo_Cuencas_Basico_1.ipynb | gpl-3.0 | # Watershed Modelling Framework (WMF) package for working with watersheds.
from wmf import wmf
"""
Explanation: Watershed example
This example presents the basic functionality of the wmf.Stream and wmf.Basin tools.
The topics covered include:
Stream tracing.
Stream profiles.
Watershed delineation.
Water balances for streamflow estimation.
Geomorphological analysis of watersheds.
End of explanation
"""
# Read the DEM and flow-direction (DIR) maps
DEM = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dem_corr.tif',isDEMorDIR=True, dxp=30.0)
DIR = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dirAMVA.tif',isDEMorDIR=True, dxp= 30.0)
wmf.cu.nodata=-9999.0; wmf.cu.dxp=30.0
DIR[DIR<=0]=wmf.cu.nodata.astype(int)
DIR=wmf.cu.dir_reclass(DIR,wmf.cu.ncols,wmf.cu.nrows)
"""
Explanation: This is how the flow-direction and DEM maps are read for delineating watersheds and streams.
End of explanation
"""
st = wmf.Stream(-75.618,6.00,DEM=DEM,DIR=DIR,name ='Rio Medellin')
st.structure
st.Plot_Profile()
"""
Explanation: Stream tracing
Streams are important for determining where flow accumulates and therefore where the watershed should be delineated. These elements act more as a guide than as a final result; they can be used directly for watershed delineation through their structure property, whose first two entries hold the X and Y coordinates.
Tracing a single stream
End of explanation
"""
# Using the search command below we locate the entries whose distance to the
# outlet lies between 10000 and 10100 meters.
np.where((st.structure[3]>10000) & (st.structure[3]<10100))
"""
Explanation: The profile of a stream can be used as a reference point when searching for delineation points.
End of explanation
"""
# The coordinates at entry 289 are:
print st.structure[0,289]
print st.structure[1,289]
# The watershed can be delineated using the coordinates implicitly (as in this
# example) or explicitly, as done in the second line of code below.
cuenca = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo',stream=st)
# In this second line we delineate a watershed from coordinates that are not
# exact and may not lie on the stream; this is corrected by passing the stream
# to the tracer via the stream argument, which takes the previously obtained stream object.
cuenca2 = wmf.Basin(-75.6422,6.082,DEM,DIR,name='ejemplo',stream=st)
# Error case: here the stream argument is not passed, so the watershed is
# delineated at the given coordinates, which will likely produce an error.
cuenca3 = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo',stream=st)
# Print the number of cells in each delineated watershed to verify that the
# coordinate differences indeed produce different results.
print cuenca.ncells
print cuenca2.ncells
print cuenca3.ncells
"""
Explanation: Watershed delineation
Delineating watersheds with the Basin object
The watershed is delineated from a pair of coordinates, a DEM and a DIR map. Optional parameters such as the name or the stream-generation threshold can be added. Watersheds of this type cannot be simulated; for that, the SimuBasin tool must be used.
End of explanation
"""
del(cuenca3)
"""
Explanation: The last watershed has a cell count of 1, which means nothing was delineated and no other cell drains through this one. It is therefore not a watershed and must not be used in any calculation; in the following line this element is deleted:
End of explanation
"""
# Water balance over the watershed, assuming annual precipitation of 2100 mm/yr over the entire watershed
cuenca.GetQ_Balance(2100)
# The long-term balance is computed for every cell of the watershed and stored in cuenca.CellQmed
cuenca.Plot_basin(cuenca.CellQmed)
"""
Explanation: Water balances over watersheds
The Basin object ships with functions for computing geomorphological properties and for estimating streamflow through the long-term water-balance method. Its functionality is presented below.
End of explanation
"""
# Plot of the evaporation over the Caldas watershed
cuenca.Plot_basin(cuenca.CellETR, extra_lat= 0.001, extra_long= 0.001, lines_spaces= 0.02,
ruta = 'Caldas_ETR.png')
"""
Explanation: The figure shows the mean streamflow estimated for every element of the watershed, including cells where no stream network is considered to exist.
When the mean streamflow is computed, the evaporation over the watershed is computed as well; it is available in the variable cuenca.CellETR.
End of explanation
"""
# Estimate maximum flows; by default this uses Gumbel, but lognormal is also available
Qmax = cuenca.GetQ_Max(cuenca.CellQmed)
Qmax2 = cuenca.GetQ_Max(cuenca.CellQmed, Tr= [3, 15])
# Estimate minimum flows; by default this uses Gumbel, but lognormal is also available
Qmin = cuenca.GetQ_Min(cuenca.CellQmed)
Qmin[Qmin<0]=0
"""
Explanation: The previous figure has been saved to disk through the argument ruta = 'Caldas_ETR.png', in this case in the current working directory; if the working directory changes, so does the save location.
The module can estimate maximum and minimum flows through regionalization of extreme flows, using the equations:
$Q_{max}(T_r) = \widehat{Q}_{max} + K_{dist}(T_r) \sigma_{max}$
$Q_{min}(T_r) = \widehat{Q}_{min} - K_{dist}(T_r) \sigma_{min}$
End of explanation
"""
# Plot the maximum flow for a return period of 2.33 years
cuenca.Plot_basin(Qmax[0])
# Plot the maximum flow for a return period of 100 years
cuenca.Plot_basin(Qmax[5])
"""
Explanation: Each entry in Qmax and Qmin corresponds to a return period in Tr [2.33, 5, 10, 25, 50, 100]; these can be changed by passing the Tr property when the function is invoked.
End of explanation
"""
cuenca.Save_Basin2Map('Cuenca.kml',DriverFormat='kml')
cuenca.Save_Net2Map('Red.kml',DriverFormat='kml',qmed=cuenca.CellQmed)
"""
Explanation: Saving to shp:
Both the watershed and the stream network can be saved as shapefiles so they can be viewed in any GIS viewer; they can also be saved in other formats such as kml.
End of explanation
"""
# Compute channel geomorphology
cuenca.GetGeo_Cell_Basics()
# Generic geomorphology report, stored in cuenca.GeoParameters and cuenca.Tc
cuenca.GetGeo_Parameters()
cuenca.GeoParameters
# Concentration times
cuenca.Tc
cuenca.Plot_Tc()
cuenca.GetGeo_IsoChrones(1.34)
cuenca.Plot_basin(cuenca.CellTravelTime)
cuenca.Plot_Travell_Hist()
cuenca.GetGeo_Ppal_Hipsometric()
cuenca.PlotPpalStream()
cuenca.Plot_Hipsometric()
"""
Explanation: Geomorphology
This section briefly explains the available geomorphology functions.
End of explanation
"""
|
ConnectedSystems/veneer-py | doc/training/6_Model_Setup_and_Configuration.ipynb | isc | existing_models = v.model.link.routing.get_models()
existing_models
"""
Explanation: Session 6 - Model Setup and Reconfiguration
This session covers functionality in Veneer and veneer-py for making larger changes to model setup, including structural changes.
Using this functionality, it is possible to:
Create (and remove) nodes and links
Change model algorithms, such as changing links from Straight Through Routing to Storage Routing
Assign input time series to model variables
Query and modify parameters across similar nodes/links/catchments/functional-units
Overview
(This is a Big topic)
Strengths and limitations of configuring from outside
+ve repeatability
+ve clarity around common elements - e.g. do one thing everywhere, parameterised by spatial data
-ve feedback - need to query the system to find out what you need to do vs a GUI that displays it
Obvious and compelling use cases
Catchments: Applying a constituent model everywhere and assigning parameters using spatial data
Catchments: Climate data
How it works:
The Python <-> IronPython bridge
What’s happening under the hood
Layers of helper functions
How to discover parameters
Harder examples (not fully worked)
Creating and configuring a storage from scratch
Extending the system
Which Model?
Note: This session uses ExampleProject/RiverModel2.rsproj. You are welcome to work with your own model instead, however you will need to change the notebook text at certain points to reflect the names of nodes, links and functions in your model file.
Warning: Big Topic
This is a big topic and the material in this session will only touch on some of the possibilities.
Furthermore, its an evolving area - so while there is general purpose functionality that is quite stable, making the functionality easy to use for particular tasks is a case by case basis that has been tackled on an as-needed basis. There are lots of gaps!
Motivations, Strengths and Limitations of Scripting configuration
There are various motivations for the type of automation of Source model setup described here. Some of these motivations are more practical to achieve than others!
Automatically build a model from scratch, using an executable 'recipe'
Could you build a complete Source model from scratch using a script?
In theory, yes you could. However it is not practical at this point in time using Veneer. (Though the idea of building a catchments-style model is more foreseeable than building a complex river model).
For some people, building a model from script would be desirable as it would have some similarities to configuring models in text files as was done with the previous generation of river models. A script would be more powerful though, because it has the ability to bring in adhoc data sources (GIS layers, CSV files, etc) to define the model structure. The scripting approach presented here wouldn't be the most convenient way to describe a model node-by-node, link-by-link - it would be quite cumbersome. However it would be possible to build a domain-specific language for describing models that makes use of the Python scripting.
Automate bulk changes to a model
Most of the practical examples to date have involved applying some change across a model (whether that model is a catchments-style geographic model or a schematic style network). Examples include:
Apply a new constituent generation model: A new generation model was being tested and needed to be applied to every catchment in the model. Some of the parameters would subsequently be calibrated (using PEST), but others needed to be derived from spatial data.
Add and configure nodes for point source inputs: A series of point sources needed to be represented in the models. This involved adding inflow nodes for each point source, connecting those inflows to the most appropriate (and available) downstream node and computing and configuring time series inputs for the inflows.
Bulk rename nodes and links based on a CSV file: A complex model needed a large number of nodes and links renamed to introduce naming conventions that would allow automatic post-processing and visualisation. A CSV was created with old node/link names (extracted from Source using veneer-py). A second column in the CSV was then populated (by hand) with new node/link names. This CSV file was read into Python and used to apply new names to affected nodes/links.
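A bulk-rename workflow like this can be sketched in plain pandas. Note the column names and link names below are made-up illustrations, not the actual CSV used:

```python
import io
import pandas as pd

# Stand-in for the real renaming CSV; assume columns 'old_name' and 'new_name'
csv_text = """old_name,new_name
Default Link #1,G1_Link_Upper
Default Link #2,G1_Link_Lower
"""

renames = pd.read_csv(io.StringIO(csv_text))

# Build a lookup from old names to new names, ready to drive the renaming calls
rename_map = dict(zip(renames['old_name'], renames['new_name']))
rename_map['Default Link #1']  # → 'G1_Link_Upper'
```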
Change multiple models in a consistent way
Testing a plugin in multiple catchments: A new plugin model was being tested across multiple catchments models, including calibration. A notebook was written to apply the plugin to a running Source model, parameterise the plugin and configure PEST. This notebook was then applied to each distinct Source model in turn.
Change a model without making the changes permanent
There are several reasons for making changes to the Source model without wanting the changes to be permanently saved in the model.
Testing an alternative setup, such as a different set of routing parameters. Automating the application of new parameters means you can test, and then re-test at a later date, without needing manual rework.
Maintaining a single point-of-truth for a core model that needs to support different purposes and users.
Persistence not available. In the earlier examples of testing new plugin models, the automated application of model setup allowed sophisticated testing, including calibration by PEST, to take place before the plugin was stable enough to be persisted using the Source data management system.
Example - Switching routing methods and configuring
This example uses the earlier RiverModel.rsproj example file although it will work with other models.
Here, we will convert all links to use Storage Routing except for links that lead to a water user.
Note: To work through this example (and the others that follow), you will need to ensure the 'Allow Scripts' option is enabled in the Web Server Monitoring window.
The v.model namespace
Most of our work in this session will involve the v.model namespace. This namespace contains functionality that provides query and modification of the model structure. Everything in v.model relies on the 'Allow Scripts' option.
As with other parts of veneer-py (and Python packages in general), you can use <tab> completion to explore the available functions and the help() function (or the ? suffix) to get help.
Finding the current routing type
We can use v.model.link.routing.get_models() to find the routing models used on each link
End of explanation
"""
link_names_order = v.model.link.routing.names()
link_names_order
"""
Explanation: Note:
The get_models() function is available in various places through the v.model namespace. For example, v.model.catchments.runoff.get_models() queries the rainfall runoff models in subcatchments (actually in functional units). There are other such methods, available in multiple places, including:
set_models
get_param_values
set_param_values
These functions are all bulk functions - that is, they operate across all matching elements (all nodes, all links, etc.).
Each of these functions accepts parameters to restrict the search, such as only including links with certain names. These query parameters differ in different contexts (i.e. between runoff models and routing models), but they are consistent between the functions in a given context. Confused?
For example, in link routing you can look for links of certain names and you can do this with the different methods:
python
v.model.link.routing.get_models(links='Default Link #3')
v.model.link.routing.set_models('RiverSystem.Flow.LaggedFlowRoutingWrapper',links='Default Link #3')
Whereas, with runoff models, you can restrict by catchment or by fus:
python
v.model.catchment.runoff.get_models(fus='Grazing')
v.model.catchment.runoff.set_models('MyFancyRunoffModel',fus='Grazing')
You can find out what query parameters are available by looking at the help, one level up:
python
help(v.model.link.routing)
The call to get_models() returns a list of model names. Two observations about this:
The model name is the fully qualified class name as used internally in Source. This is a common pattern through the v.model namespace - it uses the terminology within Source. There are, however, help functions for finding what you need. For example:
python
v.model.find_model_type('gr4')
v.model.find_parameters('RiverSystem.Flow.LaggedFlowRoutingWrapper')
Returning a list doesn't tell you which link has which model - so how are you going to determine which ones should be Storage Routing and which should stay as Straight Through? In general the get_ functions return lists (although there is a by_name option being implemented) and the set_ functions accept lists (unless you provide a single value in which case it is applied uniformly). It is up to you to interpret the lists returned by get_* and to provide set_* with a list in the right order. The way to get it right is to separately query for the names of the relevant elements (nodes/links/catchments) and order accordingly. This will be demonstrated!
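The name/value alignment pattern can be illustrated in plain Python with made-up link names and model names (these are not real query results):

```python
# Two parallel lists, as you would get from separate ordered queries
# (illustrative values only)
link_names = ['Default Link #1', 'Default Link #2', 'Default Link #3']
link_models = ['RiverSystem.Flow.StraightThroughRouting',
               'RiverSystem.Flow.StorageRouting',
               'RiverSystem.Flow.StraightThroughRouting']

# zip pairs the lists element-by-element, giving a name -> model lookup
model_by_link = dict(zip(link_names, link_models))
model_by_link['Default Link #2']  # → 'RiverSystem.Flow.StorageRouting'
```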
Identifying which link should stay as StraightThroughRouting
We can ask for the names of each link in order to establish which ones should be Storage Routing and which should stay as Straight Through
End of explanation
"""
network = v.network()
"""
Explanation: OK - that gives us the names - but it doesn't help directly. We could look at the model in Source to
work out which one is connected to the Water User - but that's cheating!
More generally, we can ask Veneer for the network and perform a topological query
End of explanation
"""
network['features']._unique_values('icon')
"""
Explanation: Now that we've got the network, we want all the water users.
Now, the information we've been returned regarding the network is in GeoJSON format and is intended for use in visualisation. It doesn't explicitly say 'this is a water user' at any point, but it does indirectly tell us, by telling us about the icon in use:
End of explanation
"""
water_users = network['features'].find_by_icon('/resources/WaterUserNodeModel')
water_users
"""
Explanation: So, we can find all the water users in the network, by finding all the network features with '/resources/WaterUserNodeModel' as their icon!
End of explanation
"""
links_upstream_of_water_users=[]
for water_user in water_users:
links_upstream_of_water_users += network.upstream_links(water_user)
links_upstream_of_water_users
"""
Explanation: Now, we can query the network for links upstream of each water user.
We'll loop over the water_users list (just one in the sample model)
End of explanation
"""
names_of_water_user_links = [link['properties']['name'] for link in links_upstream_of_water_users]
names_of_water_user_links
"""
Explanation: Just one link (to be expected) in the sample model. It's the name we care about though:
End of explanation
"""
v.model.find_model_type('StorageRo')
v.model.find_model_type('StraightThrough')
"""
Explanation: To recap, we now have:
existing_models - A list of routing models used on links
link_names_order - The name of each link, in the same order as for existing_models
names_of_water_user_links - The names of links immediately upstream of water users. These links need to stay as Straight Through Routing
We're ultimately going to call
python
v.model.link.routing.set_models(new_models,fromList=True)
so we need to construct new_models, which will be a list of model names to assign to links, with the right mix and order of storage routing and straight through. We'll want new_models to be the same length as existing_models so there is one entry per link. (There are cases where you may use set_models or set_param_values with shorter lists. You'll get R-style 'recycling' of values, but it's more useful in catchments where you're iterating over catchments AND functional units)
The entries in new_models need to be strings - those long, fully qualified class names from the Source world. We can find them using v.model.find_model_type
End of explanation
"""
new_models = ['RiverSystem.Flow.StraightThroughRouting' if link_name in names_of_water_user_links
else 'RiverSystem.Flow.StorageRouting'
for link_name in link_names_order]
new_models
"""
Explanation: We can construct our list using a list comprehension, this time with a bit of extra conditional logic thrown in
End of explanation
"""
v.model.link.routing.set_models(new_models,fromList=True)
"""
Explanation: This is a more complex list comprehension than we've used before. It goes like this, reading from the end:
Iterate over all the link names. This will be the right number of elements - and it tells us which link we're dealing with
python
for link_name in link_names_order]
If the current link_name is present in the list of links upstream of water users, use straight through routing
python
['RiverSystem.Flow.StraightThroughRouting' if link_name in names_of_water_user_links
Otherwise use storage routing
python
else 'RiverSystem.Flow.StorageRouting'
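The same conditional-comprehension pattern can be checked in isolation with toy names (these stand in for the link names - they're not from the model):

```python
# Names that should get the special label
special_names = ['b', 'd']
# All names, in the order the result must follow
all_names = ['a', 'b', 'c', 'd']

# 'special' for names in the special list, 'normal' otherwise,
# preserving the order of all_names
labels = ['special' if name in special_names else 'normal'
          for name in all_names]
labels  # → ['normal', 'special', 'normal', 'special']
```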
All that's left is to apply this to the model
End of explanation
"""
v.model.find_parameters('RiverSystem.Flow.StorageRouting')
"""
Explanation: Notes:
The Source application draws links with different line styles based on their routing types - but it might not redraw until you prompt it - e.g. by resizing the window
The fromList parameter tells the set_models function that you want the list to be applied one element at a time.
Now that you have Storage Routing used in most links, you can start to parameterise the links from the script.
To do so, you could use an input set, as per the previous session. To change parameters via input sets, you would first need to know the wording to use in the input set commands - and at this stage you need to find that wording in the Source user interface.
Alternatively, you can set the parameters directly using v.model.link.routing.set_param_values, which expects the variable name as used internally by Source. You can query for the parameter names for a particular model, using v.model.find_parameters(model_type) and, if that doesn't work v.model.find_properties(model_type).
We'll start by using find_parameters:
End of explanation
"""
v.model.find_properties('RiverSystem.Flow.StorageRouting')
"""
Explanation: The function v.model.find_parameters accepts a model type (actually, you can give it a list of model types) and it returns a list of parameters.
This list is determined by the internal code of Source - a parameter will only be returned if it has a [Parameter] tag in the C# code.
From the list above, we see some parameters that we expect to see, but not all of the parameters for a Storage Routing reach. For example, the list of parameters doesn't seem to say how we'd switch from Generic to Piecewise routing mode. This is because the model property in question (IsGeneric) doesn't have a [Parameter] attribute.
We can find a list of all fields and properties of the model using v.model.find_properties. It's a lot more information, but it can be helpful:
End of explanation
"""
help(v.model.link.routing.set_param_values)
v.model.link.routing.set_param_values('RoutingConstant',86400.0)
v.model.link.routing.set_param_values('RoutingPower',1.0)
"""
Explanation: Let's apply an initial parameter set to every Storage Routing link by setting:
RoutingConstant to 86400, and
RoutingPower to 1
We will call set_param_values
End of explanation
"""
number_of_links = len(new_models) - len(names_of_water_user_links)
power_vals = np.arange(1.0,0.0,-1.0/number_of_links)
power_vals
v.model.link.routing.set_param_values('RoutingPower',power_vals,fromList=True)
"""
Explanation: You can check in the Source user interface to see that the parameters have been applied
Setting parameters as a function of other values
Often, you will want to calculate model parameters based on some other information, either within the model or from some external data source.
The set_param_values function can accept a list of values, where each item in the list is applied, in turn, to the corresponding models - in much the same way that we used the known link order to set the routing type.
The list of values can be computed in your Python script based on any available information. A common use case is to compute catchment or functional unit parameters based on spatial data.
We will demonstrate the list functionality here with a contrived example!
We will set a different value of RoutingPower for each link. We will compute a different value of RoutingPower from 1.0 down to >0, based on the number of storage routing links
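The descending sequence itself is plain NumPy and can be checked independently of Source. Note np.arange excludes the stop value, so the sequence stays above zero:

```python
import numpy as np

number_of_links = 4  # a small illustrative count

# From 1.0 down towards (but excluding) 0.0, one value per link
power_vals = np.arange(1.0, 0.0, -1.0 / number_of_links)
power_vals  # → array([1.  , 0.75, 0.5 , 0.25])
```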
End of explanation
"""
v.model.link.routing.set_param_values('RoutingPower',[0.5,0.75,1.0],fromList=True)
"""
Explanation: If you open the Feature Table for storage routing, you'll now see these values propagated.
The fromList option has another characteristic that can be useful - particularly for catchments models with multiple functional units: value recycling.
If you provide a list with fewer values than are required, the system will start again from the start of the list.
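The recycling behaviour can be mimicked in plain Python with itertools - a sketch of the idea, not veneer-py's implementation:

```python
from itertools import cycle, islice

values = [0.5, 0.75, 1.0]
n_links = 7  # suppose 7 links need a value

# Repeat the short list until it covers every link, R-style
recycled = list(islice(cycle(values), n_links))
recycled  # → [0.5, 0.75, 1.0, 0.5, 0.75, 1.0, 0.5]
```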
So, for example, the following code will assign the three values: [0.5,0.75,1.0]
End of explanation
"""
veneer.general.PRINT_SCRIPTS=True
v.model.link.routing.get_models(links=['Default Link #3','Default Link #4'])
veneer.general.PRINT_SCRIPTS=False
"""
Explanation: Check the Feature Table to see the effect.
Note: You can run these scripts with the Feature Table open and the model will be updated - but the feature table won't reflect the new values until you Cancel the feature table and reopen it.
How it Works
As mentioned, everything under v.model works by sending an IronPython script to Source to be run within the Source software itself.
IronPython is a native, .NET, version of Python and hence can access all the classes and objects that make up Source.
When you call a function within v.model, veneer-py is generating an IronPython script for Source.
To this point, we haven't seen what these IronPython scripts look like - they are hidden from view. We can see the scripts that get sent to Source by setting the option veneer.general.PRINT_SCRIPTS=True
End of explanation
"""
veneer.general.PRINT_SCRIPTS=True
num_nodes = v.model.get('scenario.Network.Nodes.Count()')
num_nodes
"""
Explanation: Writing these IronPython scripts from scratch requires an understanding of the internal data structures of Source. The functions under v.model are designed to shield you from these details.
That said, if you have an idea of the data structures, you may wish to try writing IronPython scripts, OR, try working with some of the lower-level functionality offered in v.model.
Most of the v.model functions that we've used are ultimately based upon two low-level functions:
v.model.get and
v.model.set
Both get and set expect a query to perform on a Source scenario object. Structuring this query is where an understanding of Source data structures comes in.
For example, the following query will return the number of nodes in the network. (We'll use the PRINT_SCRIPTS option to show how the query translates to a script):
End of explanation
"""
node_names = v.model.get('scenario.Network.Nodes.*Name')
node_names
"""
Explanation: The following example returns the names of each node in the network. The .* notation tells veneer-py to generate a loop over every element in a collection
End of explanation
"""
# Generate a new name for each node (based on num_nodes)
names = ['New Name %d'%i for i in range(num_nodes)]
names
v.model.set('scenario.Network.Nodes.*Name',names,fromList=True,literal=True)
"""
Explanation: You can see from the script output that veneer-py has generated a Python for loop to iterate over the nodes:
python
for i_0 in scenario.Network.Nodes:
There are other characteristics in there, such as ignoring exceptions - this is a common default used in v.model to silently skip nodes/links/catchments/etc that don't have a particular property.
The same query approach can work for set, which can set a particular property (on one or more objects) to a particular value (which can be the same value everywhere, or drawn from a list)
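The kind of loop generation described here can be sketched with simple string templating. This is an illustration of the idea only - not veneer-py's actual generator:

```python
def build_loop_script(collection, prop):
    # Generate an IronPython-style loop that collects a property from
    # every element of a collection, silently skipping failures
    lines = ["result = []",
             "for i_0 in %s:" % collection,
             "  try:",
             "    result.append(i_0.%s)" % prop,
             "  except: pass"]
    return "\n".join(lines)

script = build_loop_script("scenario.Network.Nodes", "Name")
print(script)
```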
End of explanation
"""
v.model.set('scenario.Network.Nodes.*Name',node_names,fromList=True,literal=True)
"""
Explanation: If you look at the Source model now (you may need to trigger a redraw by resizing the window), all the nodes have been renamed.
(Let's reset the names - note how we saved node_names earlier on!)
End of explanation
"""
veneer.general.PRINT_SCRIPTS=False
v.model.sourceHelp('scenario')
"""
Explanation: Note: The literal=True option is currently necessary when setting text properties using v.model.set. This tells the IronPython generator to wrap the strings in quotes in the final script. Otherwise, IronPython would be looking for symbols (e.g. classes) with the same names
The examples of v.model.get and v.model.set illustrate some of the low level functionality for manipulating the source model.
The earlier, high-level functions (e.g. v.model.link.routing.set_param_values) take care of computing the query string for you, including context-dependent code such as searching for links of a particular name, or nodes of a particular type. They then call the lower-level functions, which take care of generating the actual IronPython script.
The v.model namespace is gradually expanding with new capabilities and functions - but at their essence, most new functions provide a high level wrapper, around v.model.get and v.model.set for some new area of the Source data structures. So, for example, you could envisage a v.model.resource_assessment which provides high level wrappers around resource assessment functionality.
Exploring the system
Writing the high level wrappers (as with writing the query strings for v.model.get/set) requires an understanding of the internal data structures of Source. You can get this from the C# code for Source, or, to a degree, from a help function v.model.sourceHelp.
Let's say you want to discover how to change the description of the scenario (say, to automatically add a note about the changes made by your script)
Start, by asking for help on 'scenario' and explore from there
End of explanation
"""
existing_description = v.model.get('scenario.Description')
existing_description
"""
Explanation: This tells you everything that is available on a Source scenario. It's a lot, but Description looks promising:
End of explanation
"""
v.model.set('scenario.Description','Model modified by script',literal=True)
"""
Explanation: OK. It looks like there is no description in the existing scenario. Let's set one
End of explanation
"""
v2 = veneer.Veneer(port=9877)
"""
Explanation: Harder examples
Let's look at a simple model building example.
We will test out different routing parameters, by setting up a scenario with several parallel networks. Each network will consist of an Inflow Node and a Gauge Node, joined by a Storage Routing link.
The inflows will all use the same time series of flows, so the only difference will be the routing parameters.
To proceed,
Start a new copy of Source (in the following code, I've assumed that you're leaving the existing copy open)
Create a new schematic model - but don't add any nodes or links
Open Tools|Web Server Monitoring
Once Veneer has started, make a note of what port number it is using - it will probably be 9877 if you've left the other copy of Source open.
Make sure you tick the 'Allow Scripts' option
Now, create a new veneer client (creatively called v2 here)
End of explanation
"""
v2.network()
"""
Explanation: And check that the network has nothing in it at the moment
End of explanation
"""
help(v2.model.node.create)
"""
Explanation: We can create nodes with v.model.node.create
End of explanation
"""
help(v2.model.node.new_gauge)
"""
Explanation: There are also functions to create different node types:
End of explanation
"""
loc = [10,10]
v2.model.node.new_inflow('The Inflow',schematic_location=loc,location=loc)
loc = [20,10]
v2.model.node.new_gauge('The Gauge',schematic_location=loc,location=loc)
"""
Explanation: First, we'll do a bit of a test run. Ultimately, we'll want to create a number of such networks - and the nodes will definitely need unique names then
End of explanation
"""
help(v2.model.link.create)
v2.model.link.create('The Inflow','The Gauge','The Link')
"""
Explanation: Note: At this stage (and after some frustration) we can't set the location of the node on the schematic. We can set the 'geographic' location - which doesn't have to be true geographic coordinates, so that's what we'll do here.
Creating a link can be done with v2.model.link.create
End of explanation
"""
v2.network().as_dataframe()
"""
Explanation: Now, lets look at the information from v2.network() to see that it's all there. (We should also see the model in the geographic view)
End of explanation
"""
v2.model.node.remove('The Inflow')
v2.model.node.remove('The Gauge')
"""
Explanation: Now, after all that, we'll delete everything we've created and then recreate it all in a loop to give us parallel networks
End of explanation
"""
num_networks=20
for i in range(1,num_networks+1): # Loop from 1 to 20
veneer.log('Creating network %d'%i)
x = i
loc_inflow = [i,10]
loc_gauge = [i,0]
name_inflow = 'Inflow %d'%i
name_gauge = 'Gauge %d'%i
v2.model.node.new_inflow(name_inflow,location=loc_inflow,schematic_location=loc_inflow)
v2.model.node.new_gauge(name_gauge,location=loc_gauge,schematic_location=loc_gauge)
# Create the link
name_link = 'Link %d'%i
v2.model.link.create(name_inflow,name_gauge,name_link)
# Set the routing type to storage routing (we *could* do this at the end, outside the loop)
v2.model.link.routing.set_models('RiverSystem.Flow.StorageRouting',links=name_link)
"""
Explanation: So, now we can create (and delete) nodes and links, let's create multiple parallel networks, to test out our flow routing parameters. We'll create 20, because we can!
End of explanation
"""
import os
os.path.exists('ExampleProject/Fish_G_flow.csv')
absolute_path = os.path.abspath('ExampleProject/Fish_G_flow.csv')
absolute_path
"""
Explanation: We'll use one of the flow files from the earlier model to drive each of our inflow nodes. We need to know where that data is. Here, I'm assuming it's in the ExampleProject directory within the same directory as this notebook. We'll need the absolute path for Source, and the Python os package helps with this type of filesystem operation
End of explanation
"""
v2.model.node.get_models(nodes='Inflow 1')
v2.model.find_parameters('RiverSystem.Nodes.Inflow.InjectedFlow')
v2.model.find_inputs('RiverSystem.Nodes.Inflow.InjectedFlow')
"""
Explanation: We can use v.model.node.assign_time_series to attach a time series of inflows to the inflow node. We could have done this in the for loop, one node at a time, but, like set_param_values, we can assign time series to multiple nodes at once.
One thing that we do need to know is the parameter that we're assigning the time series to (because, after all, this could be any type of node - veneer-py doesn't know at this stage). We can find the model type, then check v.model.find_parameters and, if that doesn't work, v.model.find_inputs:
End of explanation
"""
v2.model.node.assign_time_series('Flow',absolute_path,'Inflows')
"""
Explanation: So 'Flow' it is!
End of explanation
"""
power_vals = np.arange(1.0,0.0,-1.0/num_networks)
power_vals
"""
Explanation: Almost there.
Now, let's set a range of storage routing parameters (much like we did before)
End of explanation
"""
v2.model.link.routing.set_param_values('RoutingConstant',86400.0)
v2.model.link.routing.set_param_values('RoutingPower',power_vals,fromList=True)
"""
Explanation: And assign those to the links
End of explanation
"""
v2.configure_recording(disable=[{}],enable=[{'RecordingVariable':'Downstream Flow Volume'}])
"""
Explanation: Now, configure recording
End of explanation
"""
inflow_ts = pd.read_csv(absolute_path,index_col=0)
start,end=inflow_ts.index[[0,-1]]
start,end
"""
Explanation: And one last thing - work out the time period for the run from the inflow time series
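The index-endpoints trick works for any indexed DataFrame; here with a tiny inline CSV (made-up dates and flows) instead of the Fish_G file:

```python
import io
import pandas as pd

csv_text = """Date,Flow
01/01/1999,10.5
02/01/1999,12.0
03/01/1999,9.8
"""

ts = pd.read_csv(io.StringIO(csv_text), index_col=0)

# index[[0, -1]] picks the first and last index labels in one call
start, end = ts.index[[0, -1]]
start, end  # → ('01/01/1999', '03/01/1999')
```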
End of explanation
"""
v2.run_model(start='01/01/1999',end='31/12/1999')
"""
Explanation: That looks a bit much. Let's run for a year
End of explanation
"""
upstream = v2.retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume','NetworkElement':'Inflow.*'})
downstream = v2.retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume','NetworkElement':'Gauge.*'})
downstream[['Gauge 1:Downstream Flow Volume','Gauge 20:Downstream Flow Volume']].plot(figsize=(10,10))
"""
Explanation: Now, we can retrieve some results. Because we used a naming convention for all the nodes, it's possible to grab relevant results using those conventions
End of explanation
"""
#nodes = v2.network()['features'].find_by_feature_type('node')._all_values('name')
#for n in nodes:
# v2.model.node.remove(n)
"""
Explanation: If you'd like to change and rerun this example, the following code block can be used to delete all the existing nodes. (Or, just start a new project in Source)
End of explanation
"""
|
harmsm/pythonic-science | chapters/00_inductive-python/key/03_conditionals_key.ipynb | unlicense | x = 5
print(x > 2)
x = 5
print(x < 2)
"""
Explanation: Conditional Execution
Conditional execution allows a program to only execute code if some condition is met.
Predict what this code will do.
End of explanation
"""
x = 20
print (x > 2)
# one way
x = 1
print (x > 2)
# another way
x = 20
print (x < 2)
"""
Explanation: Summarize
What does x < y do? What does it spit out?
x < y tests whether $x$ is smaller than $y$.
It returns True or False.
Modify
Change the following cell so it prints False
End of explanation
"""
x = 5
if x > 2:
print(x)
x = 0
if x > 2:
print(x)
"""
Explanation: Predict what this code will do.
End of explanation
"""
x = 0
if x > 2:
print(x)
print("hello")
"""
Explanation: Predict what this code will do.
End of explanation
"""
x = 20
if x < 5:
print(x)
# one way
x = 2
if x < 5:
print(x)
# another way
x = 20
if x < 100:
print(x)
"""
Explanation: Summarize
How does the if statement work?
if takes something that can be interpreted as True or False and then executes only if that something is True.
Anything indented under the if executes conditionally; anything not indented under the if runs no matter what.
Your simple comparison operators are:
x < y (less than)
x <= y (less than or equal to)
x > y (greater than)
x >= y (greater than or equal to)
x == y (is equal to)
x != y (is not equal to)
There are more, depending on the sorts of comparisons being made. We'll get to them later.
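Each operator returns a bool, which a quick check confirms:

```python
x, y = 3, 5

print(x < y)    # True
print(x <= 3)   # True
print(x > y)    # False
print(x >= 3)   # True
print(x == 3)   # True
print(x != y)   # True
```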
Modify
Change the code above so it prints out x.
End of explanation
"""
x = 100
if x > 100:
print(x)
"""
Explanation: Implement
Write code that will only print x if x is more than 100.
End of explanation
"""
x = 2
if x < 5 and x > 10:
print("condition met")
x = 2
if x < 5 or x > 10:
print("condition met")
x = 2
if not x > 5:
print("condition met")
"""
Explanation: More complicated conditionals
Predict what this code will do.
End of explanation
"""
x = 20
if x < 5 or x > 10:
print("HERE")
#answer
x = 20
if x > 16 and x < 23:
print("HERE")
"""
Explanation: Summarize
How do "and", "or", and "not" work?
"and" and "or" let you logically concatenate True and False statements.
"not" lets you invert the logic of a True or False statement.
Modify
Change the following code so it will print "HERE" if x is between 17 and 22.
End of explanation
"""
if x < 0 or x > 10:
print(x)
"""
Explanation: Implement
Write a conditional statement that prints x if it is either negative or if it is greater than 10.
End of explanation
"""
x = 5
if x > 2:
print("inside conditional")
print("also inside conditional")
if x < 2:
print("inside a different conditional")
print("not inside conditional")
"""
Explanation: Predict what this code will do.
End of explanation
"""
x = 5
if x > 10:
print("condition 1")
else:
print("condition 2")
"""
Explanation: Summarize
How does an if statement decide if something is inside the if or not?
Python uses indentation to decide what is inside the if statement.
Predict what this code will do.
End of explanation
"""
x = 1
if x > 1:
print("condition 1")
elif x == 1:
print("condition 2")
else:
print("condition 3")
"""
Explanation: Summarize
What does else do?
The else statement executes if the condition is not met.
Predict what this code will do.
End of explanation
"""
x = -2
if x > 5:
print("a")
elif x > 0 and x <= 5:
print("b")
elif x > -6 and x <= 0:
print("c")
else:
print("d")
"""
Explanation: Predict what this code will do.
End of explanation
"""
x = 5
if x > 10:
print("condition 1")
elif x < 0:
print("condition 2")
else:
print("condition 3")
"""
Explanation: Summarize
What does elif do? (it's pronounced "el"-"if", short for "else if").
elif is a way to test another conditional in the same block.
In python the:
```python
if CONDITION:
DO_SOMETHING
elif SOME_OTHER_CONDITION:
DO_SOMETHING_ELSE
elif YET_ANOTHER_CONDITION:
DO_YET_ANOTHER_THING
else:
DO_WHATEVER_WE_WANT
```
construct allows different responses to different conditions. If no condition is met, whatever is under else is done. If multiple conditions are met, only the first matching branch is executed.
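The "first matching branch wins" behaviour shows up clearly when conditions overlap:

```python
def classify(x):
    if x > 0:
        return "positive"
    elif x > 10:          # never reached: x > 10 implies x > 0
        return "big"
    else:
        return "non-positive"

print(classify(50))   # positive (not "big" - the first True branch wins)
print(classify(-1))   # non-positive
```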
Modify
Change the following program code so it prints "condition 2"
End of explanation
"""
if x < 0:
print("a")
elif x == 0:
print("b")
else:
print("c")
"""
Explanation: Implement
Write an if/elif/else statement that prints "a" if $x < 0$, "b" if $x = 0$ and "c" otherwise.
End of explanation
"""
|
jorisvandenbossche/DS-python-data-analysis | _solved/visualization_02_seaborn.ipynb | bsd-3-clause | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
Explanation: <p><font size="6"><b>Visualisation: Seaborn </b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
"""
import seaborn as sns
"""
Explanation: Seaborn
Seaborn is a Python data visualization library:
Built on top of Matplotlib, but providing
High level functions.
Support for tidy data, which became famous due to the ggplot2 R package.
Attractive and informative statistical graphics out of the box.
Interacts well with Pandas
End of explanation
"""
titanic = pd.read_csv('data/titanic.csv')
titanic.head()
"""
Explanation: Introduction
We will use the Titanic example data set:
End of explanation
"""
age_stat = titanic.groupby(["Pclass", "Sex"])["Age"].mean().reset_index()
age_stat
"""
Explanation: Let's consider following question:
For each class at the Titanic and each gender, what was the average age?
Hence, we should compute the mean Age of the male and female groups (the Sex column) in combination with the groups of the Pclass column. In Pandas terminology:
End of explanation
"""
age_stat.plot(kind='bar')
## A possible other way of plotting this could be using groupby again:
#age_stat.groupby('Pclass').plot(x='Sex', y='Age', kind='bar') # (try yourself by uncommenting)
"""
Explanation: Providing this data in a bar chart with pure Pandas is still partly supported:
End of explanation
"""
sns.catplot(data=age_stat,
x="Sex", y="Age",
col="Pclass", kind="bar")
"""
Explanation: but with mixed results.
Seaborn provides another level of abstraction to visualize such grouped plots with different categories:
End of explanation
"""
# A relation between variables in a Pandas DataFrame -> `relplot`
sns.relplot(data=titanic, x="Age", y="Fare")
"""
Explanation: Check <a href="#this_is_tidy">here</a> for a short recap about tidy data.
<div class="alert alert-info">
**Remember**
- Seaborn is especially suitable for these so-called <a href="http://vita.had.co.nz/papers/tidy-data.pdf">tidy</a> dataframe representations.
- The [Seaborn tutorial](https://seaborn.pydata.org/tutorial/data_structure.html#long-form-vs-wide-form-data) provides a very good introduction to tidy (also called _long-form_) data.
- You can use __Pandas column names__ as input for the visualisation functions of Seaborn.
</div>
Interaction with Matplotlib
Seaborn builds on top of Matplotlib/Pandas, adding an additional layer of convenience.
Topic-wise, Seaborn provides three main modules, i.e. type of plots:
relational: understanding how variables in a dataset relate to each other
distribution: specialize in representing the distribution of datapoints
categorical: visualize a relationship involving categorical data (i.e. plot something for each category)
The organization looks like this:
We first check out the top commands of each of the types of plots: relplot, displot, catplot, each returning a Matplotlib Figure:
Figure level functions
Let's start from: What is the relation between Age and Fare?
End of explanation
"""
sns.relplot(data=titanic, x="Age", y="Fare",
hue="Survived")
"""
Explanation: Extend to: Is the relation between Age and Fare different for people who survived?
End of explanation
"""
age_fare = sns.relplot(data=titanic, x="Age", y="Fare",
hue="Survived",
col="Sex")
"""
Explanation: Extend to: Is the relation between Age and Fare different for people who survived and/or for the gender of the passengers?
End of explanation
"""
type(age_fare), type(age_fare.fig)
"""
Explanation: The function returns a Seaborn FacetGrid, which is related to a Matplotlib Figure:
End of explanation
"""
age_fare.axes, type(age_fare.axes.flatten()[0])
"""
Explanation: As we are dealing here with 2 subplots, the FacetGrid consists of two Matplotlib Axes:
End of explanation
"""
scatter_out = sns.scatterplot(data=titanic, x="Age", y="Fare", hue="Survived")
type(scatter_out)
"""
Explanation: Hence, we can still apply all the power of Matplotlib, but start from the convenience of Seaborn.
<div class="alert alert-info">
**Remember**
The `Figure` level Seaborn functions:
- Support __faceting__ by data variables (split up in subplots using a categorical variable)
- Return a Matplotlib `Figure`, hence the output can NOT be part of a larger Matplotlib Figure
</div>
Axes level functions
In 'technical' terms, when working with Seaborn functions, it is important to understand which level they operate, as Axes-level or Figure-level:
axes-level functions plot data onto a single matplotlib.pyplot.Axes object and return the Axes
figure-level functions return a Seaborn object, FacetGrid, which wraps a matplotlib.pyplot.Figure
Remember the Matplotlib Figure, axes and axis anatomy explained in visualization_01_matplotlib?
Each plot module has a single Figure-level function (top command in the scheme), which offers a unified interface to its various Axes-level functions.
We can ask the same question: Is the relation between Age and Fare different for people who survived?
End of explanation
"""
# sns.scatterplot(data=titanic, x="Age", y="Fare", hue="Survived", col="Sex") # uncomment to check the output
"""
Explanation: But we can't use the col/row options for facetting:
End of explanation
"""
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(10, 6))
sns.scatterplot(data=titanic, x="Age", y="Fare", hue="Survived", ax=ax0)
sns.violinplot(data=titanic, x="Survived", y="Fare", ax=ax1) # boxplot, stripplot,.. as alternative to represent distribution per category
"""
Explanation: We can use these functions to create custom combinations of plots:
End of explanation
"""
sns.catplot(data=titanic, x="Survived", col="Pclass",
kind="count")
"""
Explanation: Note! Check the similarity with the best of both worlds approach:
Prepare with Matplotlib
Plot using Seaborn
Further adjust specific elements with Matplotlib if needed
<div class="alert alert-info">
**Remember**
The `Axes` level Seaborn functions:
- Do NOT support faceting by data variables
- Return a Matplotlib `Axes`, hence the output can be used in combination with other Matplotlib `Axes` in the same `Figure`
</div>
Summary statistics
Explanation: Summary statistics
Aggregations such as count and mean are embedded in Seaborn (similar to other 'Grammar of Graphics' packages such as ggplot in R and plotnine/altair in Python). We can do these operations directly on the original titanic data set in a single coding step:
End of explanation
"""
sns.catplot(data=titanic, x="Sex", y="Age", col="Pclass", kind="bar",
estimator=np.mean)
"""
Explanation: To use another statistical function to apply on each of the groups, use the estimator:
End of explanation
"""
sns.displot(data=titanic, x="Age", row="Sex", aspect=3, height=2)
"""
Explanation: Exercises
<div class="alert alert-success">
**EXERCISE**
- Make a histogram of the age, split up in two subplots by the `Sex` of the passengers.
- Put both subplots underneath each other.
- Use the `height` and `aspect` arguments of the plot function to adjust the size of the figure.
<details><summary>Hints</summary>
- When interested in a histogram, i.e. the distribution of data, use the `displot` module
- A split into subplots is requested using a variable of the DataFrame (facetting), so use the `Figure`-level function instead of the `Axes` level functions.
- Link a column name to the `row` argument for splitting into subplots row-wise.
</details>
End of explanation
"""
# Figure based
sns.catplot(data=titanic, x="Pclass", y="Age",
hue="Sex", split=True,
palette="Set2", kind="violin")
sns.despine(left=True)
# Axes based
sns.violinplot(data=titanic, x="Pclass", y="Age",
hue="Sex", split=True,
palette="Set2")
sns.despine(left=True)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Make a violin plot showing the `Age` distribution in each of the `Pclass` categories comparing for `Sex`:
- Use the `Pclass` column to create a violin plot for each of the classes. To do so, link the `Pclass` column to the `x-axis`.
- Use a different color for the `Sex`.
- Check the behavior of the `split` argument and apply it to compare male/female.
- Use the `sns.despine` function to remove the boundaries around the plot.
<details><summary>Hints</summary>
- Have a look at https://seaborn.pydata.org/examples/grouped_violinplots.html for inspiration.
</details>
End of explanation
"""
# joined distribution plot
sns.jointplot(data=titanic, x="Fare", y="Age",
hue="Sex", kind="scatter") # kde
sns.pairplot(data=titanic[["Age", "Fare", "Sex"]], hue="Sex") # Also called scattermatrix plot
"""
Explanation: Some more Seaborn functionalities to remember
Whereas the relplot, catplot and displot represent the main components of the Seaborn library, more useful functions are available. You can check the gallery yourself, but let's introduce a few of them:
jointplot() and pairplot()
jointplot() and pairplot() are Figure-level functions and create figures with specific subplots by default:
End of explanation
"""
titanic_age_summary = titanic.pivot_table(columns="Pclass", index="Sex",
values="Age", aggfunc="mean")
titanic_age_summary
sns.heatmap(data=titanic_age_summary, cmap="Reds")
"""
Explanation: heatmap()
Plot rectangular data as a color-encoded matrix.
End of explanation
"""
g = sns.lmplot(
data=titanic, x="Age", y="Fare",
hue="Survived", col="Survived", # hue="Pclass"
)
"""
Explanation: lmplot() regressions
Figure level function to generate a regression model fit across a FacetGrid:
End of explanation
"""
# RUN THIS CELL TO PREPARE THE ROAD CASUALTIES DATA SET
%run ./data/load_casualties.py 2005 2020
"""
Explanation: Exercises data set road casualties
The Belgian road casualties data set contains data about the number of victims involved in road accidents.
The script load_casualties.py in the data folder contains the routine to download the individual years of data, clean up the data and concatenate the individual years.
The %run is an 'IPython magic' function to run a Python file as if you would run it from the command line. Run %run ./data/load_casualties.py --help to check the input arguments required to run the script. As data is available since 2005, we download 2005 till 2020.
Note As the scripts downloads the individual files, it can take a while to run the script the first time.
End of explanation
"""
casualties = pd.read_csv("./data/casualties.csv", parse_dates=["datetime"])
"""
Explanation: When successful, the casualties.csv data is available in the data folder:
End of explanation
"""
victims_hour_of_day = casualties.groupby(casualties["datetime"].dt.hour)["n_victims"].sum().reset_index()
victims_hour_of_day = victims_hour_of_day.rename(
columns={"datetime": "Hour of the day", "n_victims": "Number of victims"}
)
sns.catplot(data=victims_hour_of_day,
x="Hour of the day",
y="Number of victims",
kind="bar",
aspect=4,
height=3,
)
"""
Explanation: The data contains the following columns (in bold the main columns used in the exercises):
datetime: Date and time of the casualty.
week_day: Weekday of the datetime.
n_victims: Number of victims
n_victims_ok: Number of victims without injuries
n_slightly_injured: Number of slightly injured victims
n_seriously_injured: Number of severely injured victims
n_dead_30days: Number of victims that died within 30 days
road_user_type: Road user type (passenger car, motorbike, bicycle, pedestrian, ...)
victim_type: Type of victim (driver, passenger, ...)
gender
age
road_type: Regional road, Motorway or Municipal road
build_up_area: Outside or inside built-up area
light_conditions: Day or night (with or without road lights), or dawn
refnis_municipality: Postal reference ID number of municipality
municipality: Municipality name
refnis_region: Postal reference ID number of region
region: Flemish Region, Walloon Region or Brussels-Capital Region
Each row of the dataset does not represent a single accident, but a number of victims for a set of characteristics (for example, how many victims for accidents that happened between 8-9am on a certain day and at a certain road type in a certain municipality with the given age class and gender, ...). Thus, in practice, the victims of one accident might be split over multiple rows (and one row might in theory also come from multiple accidents).
Therefore, to get meaningful numbers in the exercises, we will each time sum the number of victims for a certain aggregation level (a subset of those characteristics).
<div class="alert alert-success">
**EXERCISE**
Create a barplot with the number of victims ("n_victims") for each hour of the day. Before plotting, calculate the total number of victims for each hour of the day with pandas and assign it to the variable `victims_hour_of_day`. Update the column names to respectively "Hour of the day" and "Number of victims".
Use the `height` and `aspect` to adjust the figure width/height.
<details><summary>Hints</summary>
- The sum of victims _for each_ hour of the day requires `groupby`. One can create a new column with the hour of the day or pass the hour directly to `groupby`.
- The `.dt` accessor provides access to all kinds of datetime information.
- `rename` requires a dictionary with a mapping of the old vs new names.
- A bar plot is in seaborn one of the `catplot` options.
</details>
End of explanation
"""
victims_gender_hour_of_day = casualties.groupby([casualties["datetime"].dt.hour, "gender"],
dropna=False)["n_victims"].sum().reset_index()
victims_gender_hour_of_day.head()
sns.catplot(data=victims_gender_hour_of_day.fillna("unknown"),
x="datetime",
y="n_victims",
row="gender",
palette="rocket",
kind="bar",
aspect=4,
height=3)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Create a barplot with the number of victims ("n_victims") for each hour of the day for each category in the gender column. Before plotting, calculate the total number of victims for each hour of the day and each gender with Pandas and assign it to the variable `victims_gender_hour_of_day`.
Create a separate subplot for each gender category in a separate row and apply the `rocket` color palette.
Make sure to include the `NaN` values of the "gender" column as a separate subplot, called _"unknown"_ without changing the `casualties` DataFrame data.
<details><summary>Hints</summary>
- The sum of victims _for each_ hour of the day requires `groupby`. Groupby accepts multiple inputs to group on multiple categories together.
- `groupby` also accepts a parameter `dropna=False` and/or using `fillna` is a useful function to replace the values in the gender column with the value "unknown".
- The `.dt` accessor provides access to all kinds of datetime information.
- Link the "gender" column with the `row` parameter to create a facet of rows.
- Use the `height` and `aspect` to adjust the figure width/height.
</details>
End of explanation
"""
# Convert weekday to Pandas categorical data type
casualties["week_day"] = pd.Categorical(
casualties["week_day"],
categories=["Monday", "Tuesday", "Wednesday",
"Thursday", "Friday", "Saturday", "Sunday"],
ordered=True
)
casualties_motorway_trucks = casualties[
(casualties["road_type"] == "Motorway")
& casualties["road_user_type"].isin(["Light truck", "Truck"])
]
sns.catplot(data=casualties_motorway_trucks,
x="week_day",
y="n_victims",
estimator=np.sum,
ci=None,
kind="bar",
color="#900C3F",
height=3,
aspect=4)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Compare the number of victims for each day of the week for casualties that happened on a "Motorway" (`road_type` column) for trucks ("Truck" and "Light truck" in the `road_user_type` column).
Use a bar plot to compare the victims for each day of the week with Seaborn directly (do not use the `groupby`).
__Note__ The `week_day` is converted to an __ordered__ categorical variable. This ensures the days are sorted correctly in Seaborn.
<details><summary>Hints</summary>
- The first part of the exercise is filtering the data. Combine the statements with `&` and do not forget to provide the necessary brackets. The `.isin()`to create a boolean condition might be useful for the road user type selection.
- Whereas using `groupby` to get to the counts is perfectly correct, using the `estimator` in Seaborn gives the same result.
__Note__ The `estimator=np.sum` is less performant than using pandas `groupby`. After filtering the data set, the summation with Seaborn is a feasible option.
</details>
End of explanation
"""
# filter the data
compare_dead_30 = casualties.set_index("datetime")["2019":"2021"]
compare_dead_30 = compare_dead_30[compare_dead_30["road_user_type"].isin(
["Bicycle", "Passenger car", "Pedestrian", "Motorbike"])]
# Sum the victims and dead within 30 days victims for each year/road-user type combination
compare_dead_30 = compare_dead_30.groupby(
["road_user_type", compare_dead_30.index.year])[["n_dead_30days", "n_victims"]].sum().reset_index()
# create a new colum with the percentage deads
compare_dead_30["dead_prop"] = compare_dead_30["n_dead_30days"] / compare_dead_30["n_victims"] * 100
sns.catplot(data=compare_dead_30,
x="dead_prop",
y="road_user_type",
kind="bar",
hue="datetime"
)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Compare the relative number of deaths within 30 days (in relation to the total number of victims) in between the following "road_user_type"s: "Bicycle", "Passenger car", "Pedestrian", "Motorbike" for the year 2019 and 2020:
- Filter the data for the years 2019 and 2020.
- Filter the data on the road user types "Bicycle", "Passenger car", "Pedestrian" and "Motorbike". Call the new variable `compare_dead_30`.
- Count for each combination of year and road_user_type the total victims and the total deaths within 30 days victims.
- Calculate the percentage deaths within 30 days (add a new column "dead_prop").
- Use a horizontal bar chart to plot the results with the "road_user_type" on the y-axis and a separate color for each year.
<details><summary>Hints</summary>
- By setting `datetime` as the index, slicing time series can be done using strings to filter data on the years 2019 and 2020.
- Use `isin()` to filter "road_user_type" categories used in the exercise.
- Count _for each_... Indeed, use `groupby` with 2 inputs, "road_user_type" and the year of `datetime`.
- Deriving the year from the datetime: When having an index, use `compare_dead_30.index.year`, otherwise `compare_dead_30["datetime"].dt.year`.
- Dividing columns works element-wise in Pandas.
- A horizontal bar chart in seaborn is a matter of defining `x` and `y` inputs correctly.
</details>
End of explanation
"""
monthly_victim_counts = casualties.resample("M", on="datetime")[
["n_victims_ok", "n_slightly_injured", "n_seriously_injured", "n_dead_30days"]
].sum()
sns.relplot(
data=monthly_victim_counts,
kind="line",
palette="colorblind",
height=3, aspect=4,
)
# Optional solution with tidy data representation (providing x and y)
monthly_victim_counts_melt = monthly_victim_counts.reset_index().melt(
id_vars="datetime", var_name="victim_type", value_name="count"
)
sns.relplot(
data=monthly_victim_counts_melt,
x="datetime",
y="count",
hue="victim_type",
kind="line",
palette="colorblind",
height=3, aspect=4,
)
# Pandas area plot
monthly_victim_counts.plot.area(colormap='Reds', figsize=(15, 5))
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Create a line plot of the __monthly__ number of victims for each of the categories of victims ('n_victims_ok', 'n_dead_30days', 'n_slightly_injured' and 'n_seriously_injured') as a function of time:
- Create a new variable `monthly_victim_counts` that contains the monthly sum of 'n_victims_ok', 'n_dead_30days', 'n_slightly_injured' and 'n_seriously_injured'.
- Create a line plot of the `monthly_victim_counts` using Seaborn. Choose any [color palette](https://seaborn.pydata.org/tutorial/color_palettes.html).
- Create an `area` plot (line plot with the individual categories stacked on each other) using Pandas.
What happens with the data registration since 2012?
<details><summary>Hints</summary>
- Monthly statistics from a time series requires `resample` (with - in this case - `sum`), which also takes the `on` parameter to specify the datetime column (instead of using the index of the DataFrame).
- Apply the resampling on the `["n_victims_ok", "n_slightly_injured", "n_seriously_injured", "n_dead_30days"]` columns only.
- Seaborn line plots works without tidy data when NOT providing `x` and `y` argument. It also works using tidy data. To 'tidy' the data set, `.melt()` can be used, see [pandas_08_reshaping.ipynb](pandas_08_reshaping.ipynb).
- Pandas plot method works on the non-tidy data set with `plot.area()` .
__Note__ Seaborn does not have an area plot.
</details>
End of explanation
"""
# Using Pandas
daily_total_counts_2020 = casualties.set_index("datetime")["2020":"2021"].resample("D")["n_victims"].sum()
daily_total_counts_2020.plot.line(figsize=(12, 3))
# Using Seaborn
sns.relplot(data=daily_total_counts_2020,
kind="line",
aspect=4, height=3)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Make a line plot of the daily victims (column "n_victims") in 2020. Can you explain the counts from March till May?
<details><summary>Hints</summary>
- To get the line plot of 2020 with daily counts, the data preparation steps are:
- Filter data on 2020. By defining `datetime` as the index, slicing time series can be done using strings.
- Resample to daily counts. Use `resample` with the sum on column "n_victims".
- Create a line plot. Do you prefer Pandas or Seaborn?
</details>
End of explanation
"""
# weekly proportion of deadly victims for each light condition
weekly_victim_dead_lc = (
casualties
.groupby("light_conditions")
.resample("W", on="datetime")[["datetime", "n_victims", "n_dead_30days"]]
.sum()
.reset_index()
)
weekly_victim_dead_lc["dead_prop"] = weekly_victim_dead_lc["n_dead_30days"] / weekly_victim_dead_lc["n_victims"] * 100
# .. and the same for each road type
weekly_victim_dead_rt = (
casualties
.groupby("road_type")
.resample("W", on="datetime")[["datetime", "n_victims", "n_dead_30days"]]
.sum()
.reset_index()
)
weekly_victim_dead_rt["dead_prop"] = weekly_victim_dead_rt["n_dead_30days"] / weekly_victim_dead_rt["n_victims"] * 100
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(15, 5))
sns.ecdfplot(data=weekly_victim_dead_lc, x="dead_prop", hue="light_conditions", ax=ax0)
sns.ecdfplot(data=weekly_victim_dead_rt, x="dead_prop", hue="road_type", ax=ax1)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
Combine the following two plots in a single Matplotlib figure:
- (left) The empirical cumulative distribution of the _weekly_ proportion of victims that died (`n_dead_30days` / `n_victims`) with a separate color for each "light_conditions".
- (right) The empirical cumulative distribution of the _weekly_ proportion of victims that died (`n_dead_30days` / `n_victims`) with a separate color for each "road_type".
Prepare the data for both plots separately with Pandas and use the variable `weekly_victim_dead_lc` and `weekly_victim_dead_rt`.
<details><summary>Hints</summary>
- The plot can not be made by a single Seaborn Figure-level plot. Create a Matplotlib figure first and use the __axes__ based functions of Seaborn to plot the left and right Axes.
- The data for both subplots need to be prepared separately, by `groupby` once on "light_conditions" and once on "road_type".
- Weekly sums (`resample`) _for each_ (`groupby`) "light_conditions" or "road_type"?! yes! you need to combine both here.
- [`sns.ecdfplot`](https://seaborn.pydata.org/generated/seaborn.ecdfplot.html#seaborn.ecdfplot) creates empirical cumulative distribution plots.
</details>
End of explanation
"""
# available (see previous exercises)
daily_total_counts_2020 = casualties.set_index("datetime")["2020": "2021"].resample("D")["n_victims"].sum()
daily_min_temp_2020 = pd.read_csv("./data/daily_min_temperature_2020.csv",
parse_dates=["datetime"])
daily_with_temp = daily_total_counts_2020.reset_index().merge(daily_min_temp_2020, on="datetime")
g = sns.jointplot(
data=daily_with_temp, x="air_temperature", y="n_victims", kind="reg"
)
"""
Explanation: <div class="alert alert-success">
**EXERCISE**
You wonder if there is a relation between the number of victims per day and the minimal daily temperature. A data set with minimal daily temperatures for the year 2020 is available in the `./data` subfolder: `daily_min_temperature_2020.csv`.
- Read the file `daily_min_temperature_2020.csv` and assign output to the variable `daily_min_temp_2020`.
- Combine the daily (minimal) temperatures with the `daily_total_counts_2020` variable
- Create a regression plot with Seaborn.
Does it make sense to present the data as a regression plot?
<details><summary>Hints</summary>
- `pd.read_csv` has a `parse_dates` parameter to load the `datetime` column as a Timestamp data type.
- `pd.merge` need a (common) key to link the data.
- `sns.lmplot` or `sns.jointplot` are both seaborn functions to create scatter plots with a regression. Joint plot adds the marginal distributions.
</details>
End of explanation
"""
nimish-jose/dlnd | image-classification/dlnd_image_classification.ipynb | gpl-3.0
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
return (x - x.min())/(x.max() - x.min())
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(np.arange(10))
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
arr = np.array(x)
return lb.transform(arr)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x")
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
x_depth = x_tensor.get_shape().as_list()[3]
weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_depth, conv_num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
conv_layer = tf.nn.max_pool(conv_layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()
flat_size = tf.cast(shape[1]*shape[2]*shape[3], tf.int32)
return tf.reshape(x_tensor, [-1, flat_size])
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
x_size = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([x_size, num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
fc_layer = tf.add(tf.matmul(x_tensor, weight), bias)
fc_layer = tf.nn.relu(fc_layer)
return fc_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
x_size = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([x_size, num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.add(tf.matmul(x_tensor, weight), bias)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
x_tensor = x
x_tensor = conv2d_maxpool(x_tensor, 16, (5, 5), (1, 1), (2, 2), (2, 2))
x_tensor = conv2d_maxpool(x_tensor, 32, (3, 3), (1, 1), (2, 2), (2, 2))
x_tensor = conv2d_maxpool(x_tensor, 64, (3, 3), (1, 1), (2, 2), (2, 2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x_tensor = fully_conn(x_tensor, 1024)
x_tensor = tf.nn.dropout(x_tensor, keep_prob=keep_prob)
x_tensor = fully_conn(x_tensor, 256)
x_tensor = tf.nn.dropout(x_tensor, keep_prob=keep_prob)
x_tensor = fully_conn(x_tensor, 64)
x_tensor = tf.nn.dropout(x_tensor, keep_prob=keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
x_tensor = output(x_tensor, 10)
# TODO: return output
return x_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 100
batch_size = 4096
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
shareactorIO/pipeline | gpu.ml/notebooks/04_Train_Model_GPU.ipynb | apache-2.0 | import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
tf.reset_default_graph()
num_samples = 100000
from datetime import datetime
version = int(datetime.now().strftime("%s"))
"""
Explanation: Train Model with GPU (and CPU*)
End of explanation
"""
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
x_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='x_observed')
print(x_observed)
with tf.device("/gpu:0"):
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/gpu:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Operation: ", optimizer_op)
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/test' % version,
graph=tf.get_default_graph())
config = tf.ConfigProto(
log_device_placement=True,
)
config.gpu_options.allow_growth=True
print(config)
sess = tf.Session(config=config)
print(sess)
"""
Explanation: Load Model Training and Test/Validation Data
End of explanation
"""
sess.run(init_op)
print(sess.run(W))
print(sess.run(b))
"""
Explanation: Randomly Initialize Variables (Weights and Bias)
The goal is to learn more accurate Weights and Bias during training.
End of explanation
"""
#def test(x, y):
# return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
print(1 - sess.run(loss_op, feed_dict={x_observed: x_test, y_observed: y_test}))
"""
Explanation: View Model Graph in Tensorboard
http://[ip-address]:6006
View Accuracy of Pre-Training, Initial Random Variables
This should be relatively low.
End of explanation
"""
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
"""
Explanation: Setup Loss Summary Operations for Tensorboard
End of explanation
"""
%%time
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):  # trace only the final step
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('gpu-timeline.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 1 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
# TODO:
#pylab.plot(x_train, y_train, '.', label="target")
#pylab.plot(x_train, sess.run(y_pred, feed_dict={x_observed: x_train, y_observed: y_train}), ".", label="predicted")
#pylab.legend()
#pylab.ylim(0, 1.0)
## View Accuracy of Trained Variables
# This should be relatively high
print(1 - sess.run(loss_op, feed_dict={x_observed: x_test, y_observed: y_test}))
"""
Explanation: Train Model
End of explanation
"""
from tensorflow.python.saved_model import utils
tensor_info_x_observed = utils.build_tensor_info(x_observed)
print(tensor_info_x_observed)
tensor_info_y_pred = utils.build_tensor_info(y_pred)
print(tensor_info_y_pred)
export_path = "/root/models/linear/gpu/%s" % version
print(export_path)
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
with tf.device("/cpu:0"):
builder = saved_model_builder.SavedModelBuilder(export_path)
prediction_signature = signature_def_utils.build_signature_def(
inputs = {'x_observed': tensor_info_x_observed},
outputs = {'y_pred': tensor_info_y_pred},
method_name = signature_constants.PREDICT_METHOD_NAME)
legacy_init_op = tf.group(tf.initialize_all_tables(), name='legacy_init_op')
builder.add_meta_graph_and_variables(sess,
[tag_constants.SERVING],
signature_def_map={'predict':prediction_signature,
signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:prediction_signature},
legacy_init_op=legacy_init_op)
builder.save()
"""
Explanation: View Train and Test Loss Summaries in Tensorboard
Navigate to the Scalars tab at this URL:
http://[ip-address]:6006
Save Model for Deployment and Inference
End of explanation
"""
%%bash
ls -l /root/models/linear/gpu/[version]
"""
Explanation: View Saved Model on Disk
You must replace [version] with the version number from above ^^
End of explanation
"""
from tensorflow.python.framework import graph_io
graph_io.write_graph(sess.graph,
"/root/models/optimize_me/",
"unoptimized_gpu.pb")
sess.close()
"""
Explanation: HACK: Save Model in Previous Format
We will use this later.
End of explanation
"""
|
bakerjd99/jacks | numpyjlove/NumPy and J make Sweet Array Love.ipynb | unlicense | import numpy as np
"""
Explanation: NumPy and J make Sweet Array Love
Import NumPy using the standard naming convention
End of explanation
"""
import sys
# local api/python3 path - adjust path for your system
japipath = 'C:\\j64\\j64-807\\addons\\api\\python3'
if japipath not in sys.path:
sys.path.append(japipath)
sys.path
import jbase as j
print(j.__doc__)
# start J - only one instance currently allowed
try:
j.init()
except:
print('j running')
j.dor("i. 2 3 4") # run sentence and print output result
rc = j.do(('+a.')) # run and return error code
print(rc)
j.getr() # get last output result
j.do('abc=: i.2 3') # define abc
q= j.get('abc') # get q as numpy array from J array
print (q)
j.set('ghi',23+q) # set J array from numpy array
j.dor('ghi') # print array (note: typo in the addon's j.__doc__)
"""
Explanation: Configure the J Python3 addon
To use the J Python3 addon you must edit path variables in jbase.py so Python can locate the J binaries. On my system I set:
# typical for windows install in home
pathbin= 'c:/j64/j64-807/bin'
pathdll= pathbin+'/j.dll'
pathpro= pathbin+'/profile.ijs'
Insure jbase.py and jcore.py are on Python's search path
End of explanation
"""
j.do("cows =. 'don''t have a cow man'")
j.get('cows')
ido = "I do what I do because I am what I am!"
j.set("ido", ido)
j.dor("ido")
"""
Explanation: Character data is passed as bytes.
End of explanation
"""
# decomment to run REPL
# j.j()
"""
Explanation: j.j() enters a simple REPL
Running j.j() opens a simple read, execute and reply loop with J. Exit by typing ....
End of explanation
"""
# boolean numpy array
p = np.array([True, False, True, True]).reshape(2,2)
p
j.set("p", p)
j.dor("p")
"""
Explanation: J accepts a subset of NumPy datatypes
Passing datatypes that do not match the types the J addon supports is allowed but does not work as you might expect.
End of explanation
"""
# numpy
a = np.arange(15).reshape(3, 5)
print(a)
# J
j.do("a =. 3 5 $ i. 15")
j.dor("a")
# numpy
a = np.array([2,3,4])
print(a)
# J
j.do("a =. 2 3 4")
j.dor("a")
# numpy
b = np.array([(1.5,2,3), (4,5,6)])
print(b)
# J
j.do("b =. 1.5 2 3 ,: 4 5 6")
j.dor("b")
# numpy
c = np.array( [ [1,2], [3,4] ], dtype=complex )
print(c)
# J
j.do("c =. 0 j.~ 1 2 ,: 3 4")
j.dor("c") # does not show as complex
j.dor("datatype c") # c is complex
# numpy - make complex numbers with nonzero real and imaginary parts
c + (0+4.7j)
# J - also for J
j.dor("c + 0j4.7")
# numpy
np.zeros( (3,4) )
# J
j.dor("3 4 $ 0")
# numpy - allocates array with whatever is in memory
np.empty( (2,3) )
# J - uses fill - safer but slower than numpy's trust memory method
j.dor("2 3 $ 0.0001")
"""
Explanation: As you can see a round trip of numpy booleans generates digital noise.
The only numpy datatypes J natively supports on Win64 systems are:
np.int64
np.float64
simple character strings - passed as bytes
To use other types it will be necessary to encode and decode them with Python and J helper functions.
The limited datatype support is not as limiting as you might expect. The default NumPy array is
np.float64 on 64 bit systems and the majority of NumPy based packages manipulate floating point
and integer arrays.
NumPy and J are derivative Iverson Array Processing Notations
The following NumPy examples are from the SciPy.org's
NumPy quick start tutorial. For each NumPy statement, I have provided a J equivalent
Creating simple arrays
End of explanation
"""
# numpy
a = np.array( [20,30,40,50] )
b = np.arange( 4 )
c = a - b
print(c)
# J
j.do("a =. 20 30 40 50")
j.do("b =. i. 4")
j.do("c =. a - b")
j.dor("c")
# numpy - uses previously defined (b)
b ** 2
# J
j.dor("b ^ 2")
# numpy - uses previously defined (a)
10 * np.sin(a)
# J
j.dor("10 * 1 o. a")
# numpy - booleans are True and False
a < 35
# J - booleans are 1 and 0
j.dor("a < 35")
"""
Explanation: Basic Operations
End of explanation
"""
# numpy
a = np.array( [[1,1], [0,1]] )
b = np.array( [[2,0], [3,4]] )
# elementwise product
a * b
# J
j.do("a =. 1 1 ,: 0 1")
j.do("b =. 2 0 ,: 3 4")
j.dor("a * b")
# numpy - matrix product
np.dot(a, b)
# J - matrix product
j.dor("a +/ . * b")
# numpy - uniform pseudo random - seeds are different in Python and J processes - results will differ
a = np.random.random( (2,3) )
print(a)
# J - uniform pseudo random
j.dor("?. 2 3 $ 0")
# numpy - sum all array elements - implicit ravel
a = np.arange(100).reshape(20,5)
a.sum()
# j - sum all array elements - explicit ravel
j.dor("+/ , 20 5 $ i.100")
# numpy
b = np.arange(12).reshape(3,4)
print(b)
# sum of each column
print(b.sum(axis=0))
# min of each row
print(b.min(axis=1))
# cumulative sum along each row
print(b.cumsum(axis=1))
# transpose
print(b.T)
# J
j.do("b =. 3 4 $ i. 12")
j.dor("b")
# sum of each column
j.dor("+/ b")
# min of each row
j.dor('<./"1 b')
# cumulative sum along each row
j.dor('+/\\"0 1 b') # must escape \ character to pass +/\"0 1 properly to J
# transpose
j.dor("|: b")
"""
Explanation: Array Processing
End of explanation
"""
# numpy
a = np.arange(10) ** 3
print(a[2])
print(a[2:5])
print(a[ : :-1]) # reversal
# J
j.do("a =. (i. 10) ^ 3")
j.dor("2 { a")
j.dor("(2 + i. 3) { a")
j.dor("|. a")
"""
Explanation: Indexing and Slicing
End of explanation
"""
from numpy import pi
x = np.linspace( 0, 2*pi, 100, np.float64) # useful to evaluate function at lots of points
f = np.sin(x)
f
j.set("f", f)
j.get("f")
r = np.random.random((2000,3000))
r = np.asarray(r, dtype=np.float64)
r
j.set("r", r)
j.get("r")
r.shape
j.get("r").shape
j.dor("r=. ,r")
j.get("r").shape
r.sum()
b = np.ones((5,300,4), dtype=np.int64)
j.set("b", b)
b2 = j.get("b")
print(b.sum())
print(b2.sum())
"""
Explanation: Passing Larger Arrays
Toy interfaces abound. Useful interfaces scale. The current addon is capable of passing
large enough arrays for serious work. Useful subsets of J and NumPy arrays can be memory mapped. It wouldn't
be difficult to memory map very large (gigabyte sized) NumPy arrays for J.
End of explanation
"""
|
wuafeing/Python3-Tutorial | 01 data structures and algorithms/01.16 filter sequence elements.ipynb | gpl-3.0 | mylist = [1, 4, -5, 10, -7, 2, 3, -1]
[n for n in mylist if n > 0]
[n for n in mylist if n < 0]
"""
Explanation: Previous
1.16 Filtering Sequence Elements
Problem
You have data inside a sequence and want to extract values or shorten the sequence using some criteria.
Solution
The easiest way to filter sequence data is to use a list comprehension. For example:
End of explanation
"""
pos = (n for n in mylist if n > 0)
pos
for x in pos:
print(x)
"""
Explanation: One potential downside of using a list comprehension is that it can produce a very large result when the input is large, which may consume a lot of memory. If memory is a concern, you can use a generator expression to produce the filtered values iteratively. For example:
End of explanation
"""
values = ['1', '2', '-3', '-', '4', 'N/A', '5']
def is_int(val):
try:
x = int(val)
return True
except ValueError:
return False
ivals = list(filter(is_int, values))
print(ivals)
"""
Explanation: Sometimes the filtering criteria cannot be easily expressed in a list comprehension or generator expression. For example, suppose the filtering process involves exception handling or some other complicated detail. In that case, put the filtering code into its own function and use the built-in filter() function, as shown here:
End of explanation
"""
mylist = [1, 4, -5, 10, -7, 2, 3, -1]
import math
[math.sqrt(n) for n in mylist if n > 0]
"""
Explanation: filter() creates an iterator, so if you want a list of results, you need to use list() to convert it, as shown in the example.
Discussion
List comprehensions and generator expressions are usually the simplest ways to filter data. They can also transform the data while filtering it. For example:
End of explanation
"""
clip_neg = [n if n > 0 else 0 for n in mylist]
clip_neg
clip_pos = [n if n < 0 else 0 for n in mylist]
clip_pos
"""
Explanation: A variation on filtering is to replace the values that don't meet the criteria with a new value instead of discarding them. For example, in a column of data you might not only want to find positive numbers, but also replace the non-positive numbers with a specified value. By moving the filter criterion into a conditional expression, this is easy to do, like this:
End of explanation
"""
addresses = [
'5412 N CLARK',
'5148 N CLARK',
'5800 E 58TH',
'2122 N CLARK',
'5645 N RAVENSWOOD',
'1060 W ADDISON',
'4801 N BROADWAY',
'1039 W GRANVILLE',
]
counts = [ 0, 3, 10, 4, 1, 7, 6, 1]
"""
Explanation: Another notable filtering tool is itertools.compress(), which takes an iterable and a corresponding Boolean selector sequence as input, and outputs the elements of the iterable for which the corresponding selector value is True. This function is very useful when you need to filter one sequence using another related sequence. For example, suppose you have the following two columns of data:
End of explanation
"""
from itertools import compress
more5 = [n > 5 for n in counts]
more5
list(compress(addresses, more5))
"""
Explanation: Now suppose you want to make a list of all addresses whose corresponding count value is greater than 5. You can do it like this:
End of explanation
"""
|
napsternxg/ipython-notebooks | Likelihood+ratio.ipynb | apache-2.0 | def get_likelihood(theta, n, k, normed=False):
ll = (theta**k)*((1-theta)**(n-k))
if normed:
num_combs = comb(n, k)
ll = num_combs*ll
return ll
get_likelihood(0.5, 2, 2, normed=True)
get_likelihood(0.5, 10, np.arange(10), normed=True)
N = 100
plt.plot(
np.arange(N),
get_likelihood(0.5, N, np.arange(N), normed=True),
color='k', markeredgecolor='none', marker='o',
linestyle="--", ms=3, markerfacecolor='r', lw=0.5
)
n, k = 10, 6
theta=np.arange(0,1,0.01)
ll = get_likelihood(theta, n, k, normed=True)
source = ColumnDataSource(data=dict(
theta=theta,
ll=ll,
))
hover = HoverTool(tooltips=[
("index", "$index"),
("theta", "$x"),
("ll", "$y"),
])
p1 = figure(plot_width=600, plot_height=400,
tools=[hover], title="Likelihood of fair coin")
p1.grid.grid_line_alpha=0.3
p1.xaxis.axis_label = 'theta'
p1.yaxis.axis_label = 'Likelihood'
p1.line('theta', 'll', color='#A6CEE3', source=source)
# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
"""
Explanation: Likelihood of a coin being fair
$$
P(\theta|X) = \frac{P(X|\theta)P(\theta)}{P(X)}
$$
Here, $P(X|\theta)$ is the likelihood, $P(\theta)$ is the prior on $\theta$, $P(X)$ is the evidence, and $P(\theta|X)$ is the posterior.
Now the probability of observing $k$ heads out of $n$ trials, given that the probability of heads is $\theta$, is:
$P(N_{heads}=k|n,\theta) = \frac{n!}{k!(n-k)!}\theta^{k}(1-\theta)^{(n-k)}$
Consider $n=10$ and $k=6$ (matching the code below). Now we can plot the likelihood as follows:
End of explanation
"""
hover = HoverTool(tooltips=[
("index", "$index"),
("theta", "$x"),
("ll_ratio", "$y"),
])
p1 = figure(plot_width=600, plot_height=400,
#y_axis_type="log",
tools=[hover], title="Likelihood ratio compared to unbiased coin")
p1.grid.grid_line_alpha=0.3
p1.xaxis.axis_label = 'theta'
p1.yaxis.axis_label = 'Likelihood ratio wrt theta = 0.5'
theta=np.arange(0,1,0.01)
for n, k, color in zip(
[10, 100, 500],
[6, 60, 300],
["red", "blue", "black"]
):
ll_unbiased = get_likelihood(0.5, n, k, normed=False)
ll = get_likelihood(theta, n, k, normed=False)
ll_ratio = ll / ll_unbiased
source = ColumnDataSource(data=dict(
theta=theta,
ll_ratio=ll_ratio,
))
p1.line('theta', 'll_ratio',
color=color, source=source,
legend="n={}, k={}".format(n,k))
# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
hover = HoverTool(tooltips=[
("index", "$index"),
("theta", "$x"),
("ll_ratio", "$y"),
])
p1 = figure(plot_width=600, plot_height=400,
y_axis_type="log",
tools=[hover], title="Likelihood ratio compared to unbiased coin")
p1.grid.grid_line_alpha=0.3
p1.xaxis.axis_label = 'n'
p1.yaxis.axis_label = 'Likelihood ratio wrt theta = 0.5'
n = 10**np.arange(0,6)
k = (n*0.6).astype(int)
theta=0.6
ll_unbiased = get_likelihood(0.5, n, k, normed=False)
ll = get_likelihood(theta, n, k, normed=False)
ll_ratio = ll / ll_unbiased
source = ColumnDataSource(data=dict(
n=n,
ll_ratio=ll_ratio,
))
p1.line('n', 'll_ratio',
color='black', source=source,
legend="theta={:.2f}".format(theta))
# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
"""
Explanation: Plot the likelihood ratio for increasing amounts of data
End of explanation
"""
|
fvnts/finitedifference | notebooks/fheat.ipynb | gpl-3.0 | # ----------------------------------------/
%matplotlib inline
# ----------------------------------------/
import math  # needed for math.gamma below
import numpy as np
import matplotlib.pyplot as plt
from scipy import *
from ipywidgets import *
from scipy import linalg
from numpy import asmatrix
from numpy import matlib as ml
from scipy.sparse import spdiags
"""
Explanation: <h1> Fractional Diffusion Equation </h1>
We solve the one dimensional fractional diffusion equation
\begin{equation}
\left\{ \begin{array}{ll}
u_t(x,t)=d(x)\,\partial^\alpha_x u(x,t), & 1\leq\alpha\leq2,\\
u(x,t_0)=f(x), & x\in\mathbb{R},\\
u(x,t)=0, & \vert x \vert\rightarrow\infty,
\end{array} \right.
\end{equation}
with $d(x)>0$, by using discrete representations of Riemann–Liouville (left) derivative.
<h2> Essential Libraries </h2>
End of explanation
"""
def w(a,j,k):
"""
interpolation weight coefficients
"""
if (k <= j - 1):
r = (j-k+1)**(3-a) - 2*(j-k)**(3-a) + (j-k-1)**(3-a)
elif (k == j):
r = 1
return r
# -----------------------------------------------------------------/
def q(a,j,k):
"""
Sousa-Li weight coefficients
"""
if (k <= j - 1):
r = w(a,j-1,k)-2*w(a,j,k) + w(a,j+1,k)
elif (k == j):
r = -2*w(a,j,j) + w(a,j+1,j)
elif (k == j + 1):
r = w(a,j+1,j+1)
return r
"""
Explanation: <h2> Functions </h2>
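A quick sanity check on these coefficients (restating `w` and `q` from the cell above so the snippet is self-contained): for alpha = 2 the fractional scheme should collapse to the classical three-point stencil [1, -2, 1], with all weights further from the diagonal equal to zero.

```python
def w(a, j, k):
    # interpolation weight coefficients (as defined above)
    if k <= j - 1:
        return (j - k + 1)**(3 - a) - 2*(j - k)**(3 - a) + (j - k - 1)**(3 - a)
    elif k == j:
        return 1

def q(a, j, k):
    # Sousa-Li weight coefficients (as defined above)
    if k <= j - 1:
        return w(a, j - 1, k) - 2*w(a, j, k) + w(a, j + 1, k)
    elif k == j:
        return -2*w(a, j, j) + w(a, j + 1, j)
    elif k == j + 1:
        return w(a, j + 1, j + 1)

stencil = [q(2, 5, k) for k in (4, 5, 6)]
print(stencil)      # [1, -2, 1]
print(q(2, 5, 2))   # 0
```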
End of explanation
"""
# space, time domain
xi, xf, ti, tf = -2, 2, 0, 5
# fractional order (1,2]
alpha = 1
# grid points
mu = 200
# steps
n = mu
# space, time intervals
h = (xf - xi)/(1.0 + mu)
k = (tf - ti)/float(n)
m = mu + 2
# coordinates
x = np.linspace(xi,xf,m)
t = np.linspace(ti,tf,n)
# convex weight in [1/2,1]
tau = 0.5
# vectors
o = np.ones(m)
u = np.zeros(m)
# matrices
I = ml.eye(m)
Q = np.zeros((m,m))
M = np.zeros((m,m))
U = np.zeros((m,n))
"""
Explanation: <h2> Definitions </h2>
End of explanation
"""
# initial condition
f = lambda x: 4*(x**2)*(2-x)**2
# boundary condition (bona fide)
g = lambda x, t: 4*(x**2)*(2-x)**2
# diffusion
d = lambda x: 0.25*x**alpha
"""
Explanation: <h2>Initial + Boundary Data</h2>
End of explanation
"""
# diagonal matrix
M = k*h**(-alpha)/math.gamma(4.0 - alpha) * spdiags([d(x)], [0], m, m).toarray()
# -----------------------------------------------------------------/
for j in range(m):
for k in range(m):
if (k <= j - 1):
Q[j,k] = q(alpha,j,k)
elif (k == j):
Q[j,k] = q(alpha,j,j)
elif (k == j + 1):
Q[j,k] = q(alpha,j,j + 1)
elif k > j + 1:
Q[j,k] = 0
# -----------------------------------------------------------------/
Ap = I - tau*M.dot(Q)
An = I + (1 - tau)*M.dot(Q)
# -----------------------------------------------------------------/
# left boundary
Ap[0,0], Ap[0,1] = 1, 0
# right boundary
Ap[m-1,m-2], Ap[m-1,m-1] = 0, 1
# left boundary
An[0,0], An[0,1] = 1, 0
# right boundary
An[m-1,m-2], An[m-1,m-1] = 0, 1
# -----------------------------------------------------------------/
Ai = linalg.inv(Ap)
# -----------------------------------------------------------------/
# initial condition
u = f(x)[:, None]
# -----------------------------------------------------------------/
for i in range(n):
# boundary conditions
u[0], u[-1] = g(xi, i*k + ti), g(xf, i*k + ti)
# store data
U[:,i] = np.asarray(u)[:,0]
# solve linear system
b = np.asmatrix(An) * np.asmatrix(u)
u = np.asmatrix(Ai) * b
"""
Explanation: <h2> Method </h2>
End of explanation
"""
# --------------------/
# plots
def evolution(step):
plt.figure(figsize=(5,5))
plt.plot(x, U[:,step], lw=3, alpha=0.5, color='deeppink')
plt.grid(color='gray', alpha=0.95)
plt.xlim(x.min() - 0.125, x.max() + 0.125)
plt.ylim(U.min() - 0.125, U.max() + 0.125)
# --------------------/
# interactive plot
step = widgets.IntSlider(min=0, max=n-1, description='step')
interact(evolution, step=step)
%reset
"""
Explanation: <h2> Plots </h2>
End of explanation
"""
|
elmaso/tno-ai | aind2-cnn/cifar10-classification/cifar10_mlp.ipynb | gpl-3.0 | import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
"""
Explanation: Artificial Intelligence Nanodegree
Convolutional Neural Networks
In this notebook, we train an MLP to classify images from the CIFAR-10 database.
1. Load CIFAR-10 Database
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
"""
Explanation: 2. Visualize the First 36 Training Images
End of explanation
"""
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
"""
Explanation: 3. Rescale the Images by Dividing Every Pixel in Every Image by 255
End of explanation
"""
from keras.utils import np_utils
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
"""
Explanation: 4. Break Dataset into Training, Testing, and Validation Sets
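For reference, the `keras.utils.to_categorical` call above can be mimicked in plain NumPy; a minimal sketch (hypothetical helper name):

```python
import numpy as np

def one_hot(y, num_classes):
    # integer labels -> one-hot rows, like keras.utils.to_categorical
    y = np.asarray(y).ravel().astype(int)
    out = np.zeros((y.size, num_classes), dtype='float32')
    out[np.arange(y.size), y] = 1.0
    return out

print(one_hot([0, 2, 1], 3))
```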
End of explanation
"""
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
# define the model
model = Sequential()
model.add(Flatten(input_shape = x_train.shape[1:]))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
"""
Explanation: 5. Define the Model Architecture
End of explanation
"""
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
"""
Explanation: 6. Compile the Model
End of explanation
"""
from keras.callbacks import ModelCheckpoint
# train the model
checkpointer = ModelCheckpoint(filepath='MLP.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=20,
validation_data=(x_valid, y_valid), callbacks=[checkpointer],
verbose=2, shuffle=True)
"""
Explanation: 7. Train the Model
End of explanation
"""
# load the weights that yielded the best validation accuracy
model.load_weights('MLP.weights.best.hdf5')
"""
Explanation: 8. Load the Model with the Best Classification Accuracy on the Validation Set
End of explanation
"""
# evaluate and print test accuracy
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
"""
Explanation: 9. Calculate Classification Accuracy on Test Set
End of explanation
"""
|
gojomo/gensim | docs/notebooks/Any2Vec_Filebased.ipynb | lgpl-2.1 | import gensim
import gensim.downloader as api
from gensim.utils import save_as_line_sentence
from gensim.models.word2vec import Word2Vec
print(gensim.models.word2vec.CORPUSFILE_VERSION) # must be >= 0, i.e. optimized compiled version
corpus = api.load("text8")
save_as_line_sentence(corpus, "my_corpus.txt")
model = Word2Vec(corpus_file="my_corpus.txt", iter=5, size=300, workers=14)
"""
Explanation: *2Vec File-based Training: API Tutorial
This tutorial introduces a new file-based training mode for gensim.models.{Word2Vec, FastText, Doc2Vec} which leads to (much) faster training on machines with many cores. Below we demonstrate how to use this new mode, with Python examples.
In this tutorial
We will show how to use the new training mode on Word2Vec, FastText and Doc2Vec.
Evaluate the performance of file-based training on the English Wikipedia and compare it to the existing queue-based training.
Show that model quality (analogy accuracies on question-words.txt) are almost the same for both modes.
Motivation
The original implementation of Word2Vec training in Gensim is already super fast (covered in this blog series, see also benchmarks against other implementations in Tensorflow, DL4J, and C) and flexible, allowing you to train on arbitrary Python streams. We had to jump through some serious hoops to make it so, avoiding the Global Interpreter Lock (the dreaded GIL, the main bottleneck for any serious high performance computation in Python).
The end result worked great for modest machines (< 8 cores), but for higher-end servers, the GIL reared its ugly head again. Simply managing the input stream iterators and worker queues, which has to be done in Python holding the GIL, was becoming the bottleneck. Simply put, the Python implementation didn't scale linearly with cores, as the original C implementation by Tomáš Mikolov did.
We decided to change that. After much experimentation and benchmarking, including some pretty hardcore outlandish ideas, we figured there's no way around the GIL limitations—not at the level of fine-tuned performance needed here. Remember, we're talking >500k words (training instances) per second, using highly optimized C code. Way past the naive "vectorize with NumPy arrays" territory.
So we decided to introduce a new code path, which has less flexibility in favour of more performance. We call this code path file-based training, and it's realized by passing a new corpus_file parameter to training. The existing sentences parameter (queue-based training) is still available, and you can continue using without any change: there's full backward compatibility.
How it works
<style>
.rendered_html tr, .rendered_html th, .rendered_html td {
text-align: "left";
}
</style>
| code path | input parameter | advantages | disadvantages
| :-------- | :-------- | :--------- | :----------- |
| queue-based training (existing) | sentences (Python iterable) | Input can be generated dynamically from any storage, or even on-the-fly. | Scaling plateaus after 8 cores. |
| file-based training (new) | corpus_file (file on disk) | Scales linearly with CPU cores. | Training corpus must be serialized to disk in a specific format. |
When you specify corpus_file, the model will read and process different portions of the file with different workers. The entire bulk of work is done outside of GIL, using no Python structures at all. The workers update the same weight matrix, but otherwise there's no communication, each worker munches on its data portion completely independently. This is the same approach the original C tool uses.
Training with corpus_file yields a significant performance boost: for example, in the experiment belows training is 3.7x faster with 32 workers in comparison to training with sentences argument. It even outperforms the original Word2Vec C tool in terms of words/sec processing speed on high-core machines.
The limitation of this approach is that corpus_file argument accepts a path to your corpus file, which must be stored on disk in a specific format. The format is simply the well-known gensim.models.word2vec.LineSentence: one sentence per line, with words separated by spaces.
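The format is simple enough to write by hand if you ever need to; a minimal sketch (hypothetical helper name, not part of gensim):

```python
import os
import tempfile

def write_line_sentence(corpus, path):
    # corpus: an iterable of token lists; writes one space-separated sentence per line
    with open(path, 'w', encoding='utf-8') as fh:
        for sentence in corpus:
            fh.write(' '.join(sentence) + '\n')

path = os.path.join(tempfile.gettempdir(), 'tiny_corpus.txt')
write_line_sentence([['hello', 'world'], ['more', 'tokens']], path)
print(open(path, encoding='utf-8').read())
```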
How to use it
You only need to:
Save your corpus in the LineSentence format to disk (you may use gensim.utils.save_as_line_sentence(your_corpus, your_corpus_file) for convenience).
Change sentences=your_corpus argument to corpus_file=your_corpus_file in Word2Vec.__init__, Word2Vec.build_vocab, Word2Vec.train calls.
A short Word2Vec example:
End of explanation
"""
CORPUS_FILE = 'wiki-en-20171001.txt'
import itertools
from gensim.parsing.preprocessing import preprocess_string
def processed_corpus():
raw_corpus = api.load('wiki-english-20171001')
for article in raw_corpus:
# concatenate all section titles and texts of each Wikipedia article into a single "sentence"
doc = '\n'.join(itertools.chain.from_iterable(zip(article['section_titles'], article['section_texts'])))
yield preprocess_string(doc)
# serialize the preprocessed corpus into a single file on disk, using memory-efficient streaming
save_as_line_sentence(processed_corpus(), CORPUS_FILE)
"""
Explanation: Let's prepare the full Wikipedia dataset as training corpus
We load wikipedia dump from gensim-data, perform text preprocessing with Gensim functions, and finally save processed corpus in LineSentence format.
End of explanation
"""
from gensim.models.word2vec import LineSentence
import time
start_time = time.time()
model_sent = Word2Vec(sentences=LineSentence(CORPUS_FILE), iter=5, size=300, workers=32)
sent_time = time.time() - start_time
start_time = time.time()
model_corp_file = Word2Vec(corpus_file=CORPUS_FILE, iter=5, size=300, workers=32)
file_time = time.time() - start_time
print("Training model with `sentences` took {:.3f} seconds".format(sent_time))
print("Training model with `corpus_file` took {:.3f} seconds".format(file_time))
"""
Explanation: Word2Vec
We train two models:
* With sentences argument
* With corpus_file argument
Then, we compare the timings and accuracy on question-words.txt.
End of explanation
"""
from gensim.test.utils import datapath
model_sent_accuracy = model_sent.wv.evaluate_word_analogies(datapath('questions-words.txt'))[0]
print("Word analogy accuracy with `sentences`: {:.1f}%".format(100.0 * model_sent_accuracy))
model_corp_file_accuracy = model_corp_file.wv.evaluate_word_analogies(datapath('questions-words.txt'))[0]
print("Word analogy accuracy with `corpus_file`: {:.1f}%".format(100.0 * model_corp_file_accuracy))
"""
Explanation: Training with corpus_file took 3.7x less time!
Now, let's compare the accuracies:
End of explanation
"""
import gensim.downloader as api
from gensim.utils import save_as_line_sentence
from gensim.models.fasttext import FastText
corpus = api.load("text8")
save_as_line_sentence(corpus, "my_corpus.txt")
model = FastText(corpus_file="my_corpus.txt", iter=5, size=300, workers=14)
"""
Explanation: The accuracies are approximately the same.
FastText
Short example:
End of explanation
"""
from gensim.models.word2vec import LineSentence
from gensim.models.fasttext import FastText
import time
start_time = time.time()
model_corp_file = FastText(corpus_file=CORPUS_FILE, iter=5, size=300, workers=32)
file_time = time.time() - start_time
start_time = time.time()
model_sent = FastText(sentences=LineSentence(CORPUS_FILE), iter=5, size=300, workers=32)
sent_time = time.time() - start_time
print("Training model with `sentences` took {:.3f} seconds".format(sent_time))
print("Training model with `corpus_file` took {:.3f} seconds".format(file_time))
"""
Explanation: Let's compare the timings
End of explanation
"""
from gensim.test.utils import datapath
model_sent_accuracy = model_sent.wv.evaluate_word_analogies(datapath('questions-words.txt'))[0]
print("Word analogy accuracy with `sentences`: {:.1f}%".format(100.0 * model_sent_accuracy))
model_corp_file_accuracy = model_corp_file.wv.evaluate_word_analogies(datapath('questions-words.txt'))[0]
print("Word analogy accuracy with `corpus_file`: {:.1f}%".format(100.0 * model_corp_file_accuracy))
"""
Explanation: We see a 1.67x performance boost!
Now, accuracies:
End of explanation
"""
import gensim.downloader as api
from gensim.utils import save_as_line_sentence
from gensim.models.doc2vec import Doc2Vec
corpus = api.load("text8")
save_as_line_sentence(corpus, "my_corpus.txt")
model = Doc2Vec(corpus_file="my_corpus.txt", epochs=5, vector_size=300, workers=14)
"""
Explanation: Doc2Vec
Short example:
End of explanation
"""
from gensim.models.doc2vec import Doc2Vec, TaggedLineDocument
import time
start_time = time.time()
model_corp_file = Doc2Vec(corpus_file=CORPUS_FILE, epochs=5, vector_size=300, workers=32)
file_time = time.time() - start_time
start_time = time.time()
model_sent = Doc2Vec(documents=TaggedLineDocument(CORPUS_FILE), epochs=5, vector_size=300, workers=32)
sent_time = time.time() - start_time
print("Training model with `sentences` took {:.3f} seconds".format(sent_time))
print("Training model with `corpus_file` took {:.3f} seconds".format(file_time))
"""
Explanation: Let's compare the timings
End of explanation
"""
from gensim.test.utils import datapath
model_sent_accuracy = model_sent.wv.evaluate_word_analogies(datapath('questions-words.txt'))[0]
print("Word analogy accuracy with `sentences`: {:.1f}%".format(100.0 * model_sent_accuracy))
model_corp_file_accuracy = model_corp_file.wv.evaluate_word_analogies(datapath('questions-words.txt'))[0]
print("Word analogy accuracy with `corpus_file`: {:.1f}%".format(100.0 * model_corp_file_accuracy))
"""
Explanation: A 6.6x speedup!
Accuracies:
End of explanation
"""
|
steinam/teacher | jup_notebooks/datenbanken/Sommer_2015.ipynb | mit | %load_ext sql
%sql mysql://steinam:steinam@localhost/sommer_2015
"""
Explanation: Subselects
End of explanation
"""
%%sql
select count(*) as AnzahlFahrten from fahrten
"""
Explanation: Sommer 2015
Data model
Task
Write a query that returns the data of all customers, the number of their orders (Aufträge), the number of trips (Fahrten), and the sum of the route kilometers. The output should be sorted by customer postal code in descending order.
Solution
End of explanation
"""
%%sql
select k.kd_id, k.`kd_firma`, k.`kd_plz`,
count(distinct a.Au_ID) as AnzAuftrag,
count(distinct f.f_id) as AnzFahrt,
sum(distinct ts.ts_strecke) as SumStrecke
from kunde k left join auftrag a on k.`kd_id` = a.`au_kd_id`
left join fahrten f on a.`au_id` = f.`f_au_id`
left join teilstrecke ts on ts.`ts_f_id` = f.`f_id`
group by k.kd_id order by k.`kd_plz`
"""
Explanation: Why doesn't a join work?
End of explanation
"""
%sql select k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id` from kunde k left join auftrag a on k.`kd_id` = a.`au_kd_id` left join fahrten f on a.`au_id` = f.`f_au_id` left join teilstrecke ts on ts.`ts_f_id` = f.`f_id` order by k.`kd_plz`
"""
Explanation: The join-based approach does not work in this form: by the second join at the latest, the company Trappo is matched against two rows produced by the first join, so the number of trips (Fahrten) is doubled as well. The same thing happens again with the third join.
The following query shows the intermediate result, without the aggregate functions:
```mysql
select k.kd_id, k.`kd_firma`, k.`kd_plz`, a.`au_id`
from kunde k left join auftrag a
    on k.`kd_id` = a.`au_kd_id`
left join fahrten f
    on a.`au_id` = f.`f_au_id`
left join teilstrecke ts
    on ts.`ts_f_id` = f.`f_id`
order by k.`kd_plz`
```
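The double-counting effect can be reproduced with a tiny in-memory SQLite sketch (table and column names modeled on the schema above, data invented for illustration):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript(
    "CREATE TABLE fahrten (f_id INTEGER, f_au_id INTEGER);"
    "CREATE TABLE teilstrecke (ts_id INTEGER, ts_f_id INTEGER);"
    "INSERT INTO fahrten VALUES (1, 10);"                  # one trip...
    "INSERT INTO teilstrecke VALUES (100, 1), (101, 1);")  # ...with two legs
join_sql = ("SELECT COUNT({}) FROM fahrten f "
            "LEFT JOIN teilstrecke ts ON ts.ts_f_id = f.f_id")
inflated = con.execute(join_sql.format('f.f_id')).fetchone()[0]
correct = con.execute(join_sql.format('DISTINCT f.f_id')).fetchone()[0]
print(inflated, correct)  # 2 1
```

The plain `COUNT` sees one row per leg, which is why the solution above needs `count(distinct ...)`.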
End of explanation
"""
%sql mysql://steinam:steinam@localhost/winter_2015
"""
Explanation: Winter 2015
Data model
Note: the Rechnung table additionally contains a field Rechnung.Kd_ID
Task
Write an SQL query that lists all customers whose payment terms have a cash-discount (Skonto) rate greater than 3 %, together with the number of their stored invoices from the year 2015.
Solution
End of explanation
"""
%%sql
select count(rechnung.`Rg_ID`), kunde.`Kd_Name` from rechnung
inner join kunde on `rechnung`.`Rg_KD_ID` = kunde.`Kd_ID`
inner join `zahlungsbedingung` on kunde.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
where `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
and year(`rechnung`.`Rg_Datum`) = 2015 group by Kunde.`Kd_Name`
"""
Explanation:
```mysql
select count(rechnung.`Rg_ID`), kunde.`Kd_Name`
from rechnung inner join kunde
    on rechnung.`Rg_KD_ID` = kunde.`Kd_ID`
inner join `zahlungsbedingung`
    on kunde.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
where `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
    and year(rechnung.`Rg_Datum`) = 2015
group by kunde.`Kd_Name`
```
End of explanation
"""
%%sql
select kd.`Kd_Name`,
(select COUNT(*) from Rechnung as R
where R.`Rg_KD_ID` = KD.`Kd_ID` and year(R.`Rg_Datum`) = 2015) as Anzahl
from Kunde kd inner join `zahlungsbedingung`
on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
"""
Explanation: It also works with a subselect:
```mysql
select kd.`Kd_Name`,
    (select count(*) from rechnung as r
     where r.`Rg_KD_ID` = kd.`Kd_ID` and year(r.`Rg_Datum`) = 2015) as Anzahl
from kunde kd inner join `zahlungsbedingung`
    on kd.`Kd_Zb_ID` = `zahlungsbedingung`.`Zb_ID`
    and `zahlungsbedingung`.`Zb_SkontoProzent` > 3.0
```
End of explanation
"""
%sql -- your code goes here
"""
Explanation: Versicherung
For each employee of the "Vertrieb" (sales) department, display the first contract (with a few details) that he or she concluded. The employee should be shown with ID and last name/first name.
Versicherung data model
End of explanation
"""
%sql mysql://steinam:steinam@localhost/versicherung_complete
%%sql
select min(`vv`.`Abschlussdatum`) as 'Erster Abschluss', `vv`.`Mitarbeiter_ID`
from `versicherungsvertrag` vv inner join mitarbeiter m
on vv.`Mitarbeiter_ID` = m.`ID`
where vv.`Mitarbeiter_ID` in ( select m.`ID` from mitarbeiter m
inner join Abteilung a
on m.`Abteilung_ID` = a.`ID`)
group by vv.`Mitarbeiter_ID`
result = _
result
"""
Explanation: Solution
End of explanation
"""
|
mohanprasath/Course-Work | numpy/numpy_exercises_from_kyubyong/Logic_functions.ipynb | gpl-3.0 | import numpy as np
np.__version__
"""
Explanation: Logic functions
End of explanation
"""
x = np.array([1,2,3])
#
x = np.array([1,0,3])
#
"""
Explanation: Truth value testing
Q1. Let x be an arbitrary array. Return True if none of the elements of x is zero. Remember that 0 evaluates to False in Python.
End of explanation
"""
x = np.array([1,0,0])
#
x = np.array([0,0,0])
#
"""
Explanation: Q2. Let x be an arbitrary array. Return True if any of the elements of x is non-zero.
End of explanation
"""
x = np.array([1, 0, np.nan, np.inf])
#print np.isfinite(x)
"""
Explanation: Array contents
Q3. Predict the result of the following code.
End of explanation
"""
x = np.array([1, 0, np.nan, np.inf])
#print np.isinf(x)
"""
Explanation: Q4. Predict the result of the following code.
End of explanation
"""
x = np.array([1, 0, np.nan, np.inf])
#print np.isnan(x)
"""
Explanation: Q5. Predict the result of the following code.
End of explanation
"""
x = np.array([1+1j, 1+0j, 4.5, 3, 2, 2j])
#print np.iscomplex(x)
"""
Explanation: Array type testing
Q6. Predict the result of the following code.
End of explanation
"""
x = np.array([1+1j, 1+0j, 4.5, 3, 2, 2j])
#print np.isreal(x)
"""
Explanation: Q7. Predict the result of the following code.
End of explanation
"""
#print np.isscalar(3)
#print np.isscalar([3])
#print np.isscalar(True)
"""
Explanation: Q8. Predict the result of the following code.
End of explanation
"""
#print np.logical_and([True, False], [False, False])
#print np.logical_or([True, False, True], [True, False, False])
#print np.logical_xor([True, False, True], [True, False, False])
#print np.logical_not([True, False, 0, 1])
"""
Explanation: Logical operations
Q9. Predict the result of the following code.
End of explanation
"""
#print np.allclose([3], [2.999999])
#print np.array_equal([3], [2.999999])
"""
Explanation: Comparison
Q10. Predict the result of the following code.
End of explanation
"""
x = np.array([4, 5])
y = np.array([2, 5])
#
#
#
#
"""
Explanation: Q11. Write numpy comparison functions such that they return the results as you see.
End of explanation
"""
#print np.equal([1, 2], [1, 2.000001])
#print np.isclose([1, 2], [1, 2.000001])
"""
Explanation: Q12. Predict the result of the following code.
End of explanation
"""
|
Bio204-class/bio204-notebooks | Introduction-to-Simulation.ipynb | cc0-1.0 | # set the seed for the pseudo-random number generator
# the seed is any 32 bit integer
# different seeds will generate different results for the
# simulations that follow
np.random.seed(20160208)
"""
Explanation: A brief note about pseudo-random numbers
When carrying out simulations, it is typical to use random number generators. Most computers can not generate true random numbers -- instead we use algorithms that approximate the generation of random numbers (pseudo-random number generators). One important difference between a true random number generator and a pseudo-random number generator is that a series of pseudo-random numbers can be regenerated if you know the "seed" value that initialized the algorithm. We can specifically set this seed value, so that we can guarantee that two different people evaluating this notebook get the same results, even though we're using (pseudo)random numbers in our simulation.
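Because the generator is fully determined by its seed, re-seeding reproduces the exact same draws; a quick demonstration:

```python
import numpy as np

rng_a = np.random.RandomState(20160208)
rng_b = np.random.RandomState(20160208)
draws_a = rng_a.normal(size=5)
draws_b = rng_b.normal(size=5)
print(np.array_equal(draws_a, draws_b))  # True
```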
End of explanation
"""
popn = np.random.normal(loc=10, scale=1, size=6500)
plt.hist(popn,bins=50)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
print("Mean glucorticoid concentration:", np.mean(popn))
print("Standard deviation of glucocorticoid concentration:", np.std(popn))
"""
Explanation: Generating a population to sample from
We'll start by simulating our "population of interest" -- i.e. the population we want to make inferences about. We'll assume that our variable of interest (e.g. circulating stress hormone levels) is normally distributed with a mean of 10 nM and a standard deviation of 1 nM.
End of explanation
"""
sample1 = np.random.choice(popn, size=25)
plt.hist(sample1)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
np.mean(sample1), np.std(sample1,ddof=1)
"""
Explanation: Take a random sample of the population of interest
We'll use the np.random.choice function to take a sample from our population of interest.
End of explanation
"""
sample2 = np.random.choice(popn, size=25)
np.mean(sample2), np.std(sample2,ddof=1)
"""
Explanation: Take a second random sample of size 25
End of explanation
"""
plt.hist(sample1)
plt.hist(sample2,alpha=0.5)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
"""
Explanation: Compare the first and second samples
End of explanation
"""
means25 = []
std25 = []
for i in range(100):
s = np.random.choice(popn, size=25)
means25.append(np.mean(s))
std25.append(np.std(s,ddof=1))
plt.hist(means25,bins=15)
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Frequency")
plt.title("Distribution of estimates of the\n mean glucocorticoid concentration\n for 100 samples of size 25")
plt.vlines(np.mean(popn), 0, 18, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
"""
Explanation: Generate a large number of samples of size 25
Every time we take a random sample from our population of interest we'll get a different estimate of the mean and standard deviation (or whatever other statistics we're interested in). To explore how well random samples of size 25 perform, generally, in terms of estimating the mean and standard deviation of the population of interest we need a large number of such samples.
It's tedious to take one sample at a time, so we'll generate 100 samples of size 25, and calculate the mean and standard deviation for each of those samples (storing the means and standard deviations in lists).
End of explanation
"""
# Relative Frequency Histogram
plt.hist(means25, bins=15, weights=np.ones_like(means25) * (1.0/len(means25)))
plt.xlabel("mean glucocorticoid concentration")
plt.ylabel("Relative Frequency")
plt.vlines(np.mean(popn), 0, 0.20, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
"""
Explanation: Relative Frequency Histogram
A relative frequency histogram is like a frequency histogram, except the bin heights are given in fractions of the total sample size (relative frequency) rather than absolute frequency. This is equivalent to adding the constraint that the total height of all the bars in the histogram will add to 1.0.
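The same relative frequencies can be computed directly with `np.histogram`, and they should sum to 1:

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.normal(10, 1, size=500)
counts, edges = np.histogram(data, bins=15)
rel_freq = counts / counts.sum()
print(rel_freq.sum())
```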
End of explanation
"""
plt.hist(means25,bins=15,normed=True)
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Density")
plt.vlines(np.mean(popn), 0, 2.5, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
"""
Explanation: Density histogram
If instead of constraining the total height of the bars, we constrain the total area of the bars to sum to one, we call this a density histogram. When comparing histograms based on different numbers of samples, with different bin width, etc. you should usually use the density histogram.
The argument normed=True to the pyplot.hist function will make it calculate a density histogram instead of the default frequency histogram.
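Equivalently with `np.histogram` (note that in current Matplotlib/NumPy this option is spelled `density=True`; `normed` is the older name), the bar areas should sum to 1:

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.normal(10, 1, size=500)
density, edges = np.histogram(data, bins=15, density=True)
bin_widths = np.diff(edges)
print((density * bin_widths).sum())
```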
End of explanation
"""
means50 = []
std50 = []
for i in range(100):
s = np.random.choice(popn, size=50)
means50.append(np.mean(s))
std50.append(np.std(s,ddof=1))
means100 = []
std100 = []
for i in range(100):
s = np.random.choice(popn, size=100)
means100.append(np.mean(s))
std100.append(np.std(s,ddof=1))
means200 = []
std200 = []
for i in range(100):
s = np.random.choice(popn, size=200)
means200.append(np.mean(s))
std200.append(np.std(s,ddof=1))
# the label arguments get used when we create a legend
plt.hist(means25, normed=True, alpha=0.75, histtype="stepfilled", label="n=25")
plt.hist(means50, normed=True, alpha=0.75, histtype="stepfilled", label="n=50")
plt.hist(means100, normed=True, alpha=0.75, histtype="stepfilled", label="n=100")
plt.hist(means200, normed=True, alpha=0.75, histtype="stepfilled", label="n=200")
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Density")
plt.vlines(np.mean(popn), 0, 7, linestyle='dashed', color='black',label="True Mean")
plt.legend()
pass
"""
Explanation: How does the spread of our estimates of the mean change as sample size increases?
What happens as we increase the size of our samples? Let's draw 100 random samples of size 50, 100, and 200 observations to compare.
End of explanation
"""
sm25 = np.std(means25,ddof=1)
sm50 = np.std(means50,ddof=1)
sm100 = np.std(means100,ddof=1)
sm200 = np.std(means200, ddof=1)
x = [25,50,100,200]
y = [sm25,sm50,sm100,sm200]
plt.scatter(x,y)
plt.xlabel("Sample size")
plt.ylabel("Std Dev of Mean Estimates")
pass
"""
Explanation: Standard Error of the Mean
We see from the graph above that our estimates of the mean cluster more tightly about the true mean as our sample size increases. Let's quantify that by calculating the standard deviation of our mean estimates as a function of sample size.
The standard deviation of the sampling distribution of a statistic of interest is called the "Standard Error" of that statistic. Here, through simulation, we are estimating the "Standard Error of the Mean".
End of explanation
"""
x = [25,50,100,200]
y = [sm25,sm50,sm100,sm200]
theory = [np.std(popn)/np.sqrt(i) for i in range(10,250)]
plt.scatter(x,y, label="Simulation estimates")
plt.plot(range(10,250), theory, color='red', label="Theoretical expectation")
plt.xlabel("Sample size")
plt.ylabel("Std Error of Mean")
plt.legend()
plt.xlim(0,300)
pass
"""
Explanation: You can show mathematically for normally distributed data, that the expected Standard Error of the Mean as a function of sample size is:
$$
\mbox{Standard Error of Mean} = \frac{\sigma}{\sqrt{n}}
$$
where $\sigma$ is the population standard deviation, and $n$ is the sample size.
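As a quick self-contained check of the sigma/sqrt(n) prediction (simulation parameters chosen arbitrarily):

```python
import numpy as np

rng = np.random.RandomState(42)
sigma, n, reps = 1.0, 25, 2000
sample_means = rng.normal(10, sigma, size=(reps, n)).mean(axis=1)
print(sample_means.std(ddof=1))  # simulated standard error of the mean
print(sigma / np.sqrt(n))        # theoretical value: 0.2
```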
Let's compare that theoretical expectation to our simulated estimates.
End of explanation
"""
# the label arguments get used when we create a legend
plt.hist(std25, normed=True, alpha=0.75, histtype="stepfilled", label="n=25")
plt.hist(std50, normed=True, alpha=0.75, histtype="stepfilled", label="n=50")
plt.hist(std100, normed=True, alpha=0.75, histtype="stepfilled", label="n=100")
plt.hist(std200, normed=True, alpha=0.75, histtype="stepfilled", label="n=200")
plt.xlabel("Standard Deviation of Glucocorticoid Concentration")
plt.ylabel("Density")
plt.vlines(np.std(popn), 0, 9, linestyle='dashed', color='black',label="True Standard Deviation")
#plt.legend()
pass
"""
Explanation: Standard Errors of the Standard Deviation
Above we explored how the spread in our estimates of the mean changed with sample size. We can similarly explore how our estimates of the standard deviation of the population change as we vary our sample size.
End of explanation
"""
# standard error of the std-dev estimates, analogous to sm25..sm200 above
ss25 = np.std(std25, ddof=1)
ss50 = np.std(std50, ddof=1)
ss100 = np.std(std100, ddof=1)
ss200 = np.std(std200, ddof=1)
x = [25,50,100,200]
y = [ss25,ss50,ss100,ss200]
plt.scatter(x,y, label="Simulation estimates")
plt.xlabel("Sample size")
plt.ylabel("Std Error of Std Dev")
theory = [np.std(popn)/(np.sqrt(2.0*(i-1))) for i in range(10,250)]
plt.plot(range(10,250), theory, color='red', label="Theoretical expectation")
plt.xlim(0,300)
plt.legend()
pass
"""
Explanation: You can show mathematically, for normally distributed data, that the expected Standard Error of the Standard Deviation is approximately
$$
\mbox{Standard Error of Standard Deviation} \approx \frac{\sigma}{\sqrt{2(n-1)}}
$$
where $\sigma$ is the population standard deviation, and $n$ is the sample size.
Let's compare that theoretical expectation to our simulated estimates.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/31239620dd9631320a99b07ac4a81074/interpolate_bad_channels.ipynb | bsd-3-clause | # Authors: Denis A. Engemann <denis.engemann@gmail.com>
# Mainak Jas <mainak.jas@telecom-paristech.fr>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# plot with bads
evoked.plot(exclude=[], picks=('grad', 'eeg'))
"""
Explanation: Interpolate bad channels for MEG/EEG channels
This example shows how to interpolate bad MEG/EEG channels
Using spherical splines from :footcite:PerrinEtAl1989 for EEG data.
Using field interpolation for MEG and EEG data.
In this example, the bad channels will still be marked as bad.
Only the data in those channels is replaced.
End of explanation
"""
evoked_interp = evoked.copy().interpolate_bads(reset_bads=False)
evoked_interp.plot(exclude=[], picks=('grad', 'eeg'))
"""
Explanation: Compute interpolation (also works with Raw and Epochs objects)
End of explanation
"""
evoked_interp_mne = evoked.copy().interpolate_bads(
reset_bads=False, method=dict(eeg='MNE'), verbose=True)
evoked_interp_mne.plot(exclude=[], picks=('grad', 'eeg'))
"""
Explanation: You can also use minimum-norm for EEG as well as MEG
End of explanation
"""
|
nick-youngblut/SIPSim | ipynb/bac_genome/fullCyc/.ipynb_checkpoints/Day1_default_run-checkpoint.ipynb | mit | import os
import glob
import re
import nestly
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(phyloseq)
## BD for G+C of 0 or 100
BD.GCp0 = 0 * 0.098 + 1.66
BD.GCp100 = 1 * 0.098 + 1.66
"""
Explanation: Goal
Simulating fullCyc Day1 control gradients
Not simulating incorporation (all 0% isotope incorp.)
We don't know the true incorporation levels for the empirical data
Using parameters inferred from empirical data (fullCyc Day1 seq data) or, if not available, default SIPSim parameters
Determining whether simulated taxa show a distribution similar to the empirical data
Input parameters
phyloseq.bulk file
taxon mapping file
list of genomes
fragments simulated for all genomes
bulk community richness
workflow
Creating a community file from OTU abundances in bulk soil samples
phyloseq.bulk --> OTU table --> filter to sample --> community table format
Fragment simulation
simulated_fragments --> parse out fragments for target OTUs
simulated_fragments --> parse out fragments from random genomes to obtain richness of interest
combine fragment python objects
Convert fragment lists to KDE objects
Add diffusion
Make incorp config file
Add isotope incorporation
Calculating BD shift from isotope incorp
Simulating gradient fractions
Simulating OTU table
Simulating PCR
Subsampling from the OTU table
Init
End of explanation
"""
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/'
buildDir = os.path.join(workDir, 'Day1_default_run')
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
fragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl'
targetFile = '/home/nick/notebook/SIPSim/dev/fullCyc/CD-HIT/target_taxa.txt'
physeqDir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq_bulkCore = 'bulk-core'
physeq_SIP_core = 'SIP-core_unk'
prefrac_comm_abundance = ['1e9']
richness = 2503 # chao1 estimate for bulk Day 1
seq_per_fraction = ['lognormal', 9.432, 0.5, 10000, 30000] # dist, mean, scale, min, max
bulk_days = [1]
nprocs = 24
# building tree structure
nest = nestly.Nest()
## varying params
nest.add('abs', prefrac_comm_abundance)
## set params
nest.add('bulk_day', bulk_days, create_dir=False)
nest.add('percIncorp', [0], create_dir=False)
nest.add('percTaxa', [0], create_dir=False)
nest.add('np', [nprocs], create_dir=False)
nest.add('richness', [richness], create_dir=False)
nest.add('subsample_dist', [seq_per_fraction[0]], create_dir=False)
nest.add('subsample_mean', [seq_per_fraction[1]], create_dir=False)
nest.add('subsample_scale', [seq_per_fraction[2]], create_dir=False)
nest.add('subsample_min', [seq_per_fraction[3]], create_dir=False)
nest.add('subsample_max', [seq_per_fraction[4]], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
nest.add('fragFile', [fragFile], create_dir=False)
nest.add('targetFile', [targetFile], create_dir=False)
nest.add('physeqDir', [physeqDir], create_dir=False)
nest.add('physeq_bulkCore', [physeq_bulkCore], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
export PATH={R_dir}:$PATH
#-- making DNA pool similar to gradient of interest
echo '# Creating comm file from phyloseq'
phyloseq2comm.r {physeqDir}{physeq_bulkCore} -s 12C-Con -d {bulk_day} > {physeq_bulkCore}_comm.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm.txt
echo '## Adding target taxa to comm file'
comm_add_target.r {physeq_bulkCore}_comm.txt {targetFile} > {physeq_bulkCore}_comm_target.txt
printf 'Number of lines: '; wc -l {physeq_bulkCore}_comm_target.txt
echo '# Adding extra richness to community file'
printf "1\t{richness}\n" > richness_needed.txt
comm_add_richness.r -s {physeq_bulkCore}_comm_target.txt richness_needed.txt > {physeq_bulkCore}_comm_all.txt
### renaming comm file for downstream pipeline
cat {physeq_bulkCore}_comm_all.txt > {physeq_bulkCore}_comm_target.txt
rm -f {physeq_bulkCore}_comm_all.txt
echo '## parsing out genome fragments to make simulated DNA pool resembling the gradient of interest'
## all OTUs without an associated reference genome will be assigned a random reference (of the reference genome pool)
### this is done through --NA-random
SIPSim fragment_KDE_parse {fragFile} {physeq_bulkCore}_comm_target.txt \
--rename taxon_name --NA-random > fragsParsed.pkl
echo '#-- SIPSim pipeline --#'
echo '# converting fragments to KDE'
SIPSim fragment_KDE \
fragsParsed.pkl \
> fragsParsed_KDE.pkl
echo '# adding diffusion'
SIPSim diffusion \
fragsParsed_KDE.pkl \
--np {np} \
> fragsParsed_KDE_dif.pkl
echo '# adding DBL contamination'
SIPSim DBL \
fragsParsed_KDE_dif.pkl \
--np {np} \
> fragsParsed_KDE_dif_DBL.pkl
echo '# making incorp file'
SIPSim incorpConfigExample \
--percTaxa {percTaxa} \
--percIncorpUnif {percIncorp} \
> {percTaxa}_{percIncorp}.config
echo '# adding isotope incorporation to BD distribution'
SIPSim isotope_incorp \
fragsParsed_KDE_dif_DBL.pkl \
{percTaxa}_{percIncorp}.config \
--comm {physeq_bulkCore}_comm_target.txt \
--np {np} \
> fragsParsed_KDE_dif_DBL_inc.pkl
#echo '# calculating BD shift from isotope incorporation'
#SIPSim BD_shift \
# fragsParsed_KDE_dif_DBL.pkl \
# fragsParsed_KDE_dif_DBL_inc.pkl \
# --np {np} \
# > fragsParsed_KDE_dif_DBL_inc_BD-shift.txt
echo '# simulating gradient fractions'
SIPSim gradient_fractions \
{physeq_bulkCore}_comm_target.txt \
> fracs.txt
echo '# simulating an OTU table'
SIPSim OTU_table \
fragsParsed_KDE_dif_DBL_inc.pkl \
{physeq_bulkCore}_comm_target.txt \
fracs.txt \
--abs {abs} \
--np {np} \
> OTU_abs{abs}.txt
echo '# simulating PCR'
SIPSim OTU_PCR \
OTU_abs{abs}.txt \
> OTU_abs{abs}_PCR.txt
echo '# subsampling from the OTU table (simulating sequencing of the DNA pool)'
SIPSim OTU_subsample \
--dist {subsample_dist} \
--dist_params mean:{subsample_mean},sigma:{subsample_scale} \
--min_size {subsample_min} \
--max_size {subsample_max} \
OTU_abs{abs}_PCR.txt \
> OTU_abs{abs}_PCR_sub.txt
echo '# making a wide-formatted table'
SIPSim OTU_wideLong -w \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_w.txt
echo '# making metadata (phyloseq: sample_data)'
SIPSim OTU_sampleData \
OTU_abs{abs}_PCR_sub.txt \
> OTU_abs{abs}_PCR_sub_meta.txt
!chmod 777 $bashFile
!cd $workDir; \
nestrun --template-file $bashFile -d Day1_default_run --log-file log.txt -j 1
"""
Explanation: Nestly
assuming fragments already simulated
End of explanation
"""
workDir1 = os.path.join(workDir, 'Day1_default_run/1e9/')
!cd $workDir1; \
SIPSim KDE_info \
-s fragsParsed_KDE.pkl \
> fragsParsed_KDE_info.txt
%%R -i workDir1
inFile = file.path(workDir1, 'fragsParsed_KDE_info.txt')
df = read.delim(inFile, sep='\t') %>%
filter(KDE_ID == 1)
df %>% head(n=3)
%%R -w 600 -h 300
ggplot(df, aes(median)) +
geom_histogram(binwidth=0.001) +
labs(x='Buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
"""
Explanation: Checking amplicon fragment BD distribution
'Raw' fragments
End of explanation
"""
workDir1 = os.path.join(workDir, 'Day1_default_run/1e9/')
!cd $workDir1; \
SIPSim KDE_info \
-s fragsParsed_KDE_dif_DBL.pkl \
    > fragsParsed_KDE_dif_DBL_info.txt
%%R -i workDir1
inFile = file.path(workDir1, 'fragsParsed_KDE_dif_DBL_info.txt')
df = read.delim(inFile, sep='\t') %>%
filter(KDE_ID == 1)
df %>% head(n=3)
%%R -w 600 -h 300
ggplot(df, aes(median)) +
geom_histogram(binwidth=0.001) +
labs(x='Buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
"""
Explanation: fragments w/ diffusion + DBL
End of explanation
"""
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
"""
Explanation: BD min/max
what is the min/max BD that we care about?
End of explanation
"""
%%R
# simulated OTU table file
OTU.table.dir = '/home/nick/notebook/SIPSim/dev/fullCyc/frag_norm_9_2.5_n5/Day1_default_run/1e9/'
OTU.table.file = 'OTU_abs1e9_PCR_sub.txt'
#OTU.table.file = 'OTU_abs1e9_sub.txt'
#OTU.table.file = 'OTU_abs1e9.txt'
%%R -i physeqDir -i physeq_SIP_core -i bulk_days
# bulk core samples
F = file.path(physeqDir, physeq_SIP_core)
physeq.SIP.core = readRDS(F)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core = prune_samples(physeq.SIP.core.m$Substrate == '12C-Con' &
physeq.SIP.core.m$Day %in% bulk_days,
physeq.SIP.core) %>%
filter_taxa(function(x) sum(x) > 0, TRUE)
physeq.SIP.core.m = physeq.SIP.core %>% sample_data
physeq.SIP.core
%%R -w 800 -h 300
## dataframe
df.EMP = physeq.SIP.core %>% otu_table %>%
as.matrix %>% as.data.frame
df.EMP$OTU = rownames(df.EMP)
df.EMP = df.EMP %>%
gather(sample, abundance, 1:(ncol(df.EMP)-1))
df.EMP = inner_join(df.EMP, physeq.SIP.core.m, c('sample' = 'X.Sample'))
df.EMP.nt = df.EMP %>%
group_by(sample) %>%
mutate(n_taxa = sum(abundance > 0)) %>%
ungroup() %>%
distinct(sample) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
## plotting
p = ggplot(df.EMP.nt, aes(Buoyant_density, n_taxa)) +
geom_point(color='blue') +
geom_line(color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
"""
Explanation: Plotting number of taxa in each fraction
Empirical data (fullCyc)
End of explanation
"""
%%R -w 800 -h 300
# loading file
F = file.path(workDir1, OTU.table.file)
df.SIM = read.delim(F, sep='\t')
## edit table
df.SIM.nt = df.SIM %>%
filter(count > 0) %>%
group_by(library, BD_mid) %>%
summarize(n_taxa = n()) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
## plot
p = ggplot(df.SIM.nt, aes(BD_mid, n_taxa)) +
geom_point(color='red') +
geom_line(color='red') +
geom_point(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
%%R -w 800 -h 300
# normalized by max number of taxa
## edit table
df.SIM.nt = df.SIM.nt %>%
group_by() %>%
mutate(n_taxa_norm = n_taxa / max(n_taxa))
df.EMP.nt = df.EMP.nt %>%
group_by() %>%
mutate(n_taxa_norm = n_taxa / max(n_taxa))
## plot
p = ggplot(df.SIM.nt, aes(BD_mid, n_taxa_norm)) +
geom_point(color='red') +
geom_line(color='red') +
geom_point(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.nt, aes(x=Buoyant_density), color='blue') +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
scale_y_continuous(limits=c(0, 1)) +
labs(x='Buoyant density', y='Number of taxa\n(fraction of max)') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
"""
Explanation: w/ simulated data
End of explanation
"""
%%R -w 800 -h 300
# simulated
df.SIM.s = df.SIM %>%
group_by(library, BD_mid) %>%
summarize(total_abund = sum(count)) %>%
rename('Day' = library, 'Buoyant_density' = BD_mid) %>%
ungroup() %>%
mutate(dataset='simulated')
# empirical
df.EMP.s = df.EMP %>%
    group_by(Day, Buoyant_density) %>%
    summarize(total_abund = sum(abundance)) %>%
    ungroup() %>%
    mutate(dataset='empirical')
# join
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.SIM.s = df.EMP.s = ""
# plot
ggplot(df.j, aes(Buoyant_density, total_abund, color=dataset)) +
geom_point() +
geom_line() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Buoyant density', y='Total sequences per sample') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
"""
Explanation: Total sequence count
End of explanation
"""
%%R
shannon_index_long = function(df, abundance_col, ...){
# calculating shannon diversity index from a 'long' formated table
## community_col = name of column defining communities
## abundance_col = name of column defining taxon abundances
df = df %>% as.data.frame
cmd = paste0(abundance_col, '/sum(', abundance_col, ')')
df.s = df %>%
group_by_(...) %>%
mutate_(REL_abundance = cmd) %>%
mutate(pi__ln_pi = REL_abundance * log(REL_abundance),
shannon = -sum(pi__ln_pi, na.rm=TRUE)) %>%
ungroup() %>%
dplyr::select(-REL_abundance, -pi__ln_pi) %>%
distinct_(...)
return(df.s)
}
%%R
# calculating shannon
df.SIM.shan = shannon_index_long(df.SIM, 'count', 'library', 'fraction') %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.EMP.shan = shannon_index_long(df.EMP, 'abundance', 'sample') %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
%%R -w 800 -h 300
# plotting
p = ggplot(df.SIM.shan, aes(BD_mid, shannon)) +
geom_point(color='red') +
geom_line(color='red') +
geom_point(data=df.EMP.shan, aes(x=Buoyant_density), color='blue') +
geom_line(data=df.EMP.shan, aes(x=Buoyant_density), color='blue') +
scale_y_continuous(limits=c(4, 7.5)) +
labs(x='Buoyant density', y='Shannon index') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
p
"""
Explanation: Plotting Shannon diversity for each fraction
End of explanation
"""
%%R -h 300 -w 800
# simulated
df.SIM.s = df.SIM %>%
filter(rel_abund > 0) %>%
group_by(BD_mid) %>%
summarize(min_abund = min(rel_abund),
max_abund = max(rel_abund)) %>%
ungroup() %>%
rename('Buoyant_density' = BD_mid) %>%
mutate(dataset = 'simulated')
# empirical
df.EMP.s = df.EMP %>%
    group_by(Buoyant_density) %>%
    mutate(rel_abund = abundance / sum(abundance)) %>%
    filter(rel_abund > 0) %>%
    summarize(min_abund = min(rel_abund),
              max_abund = max(rel_abund)) %>%
    ungroup() %>%
    mutate(dataset = 'empirical')
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
# plotting
ggplot(df.j, aes(Buoyant_density, max_abund, color=dataset, group=dataset)) +
geom_point() +
geom_line() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Buoyant density', y='Maximum relative abundance') +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
"""
Explanation: min/max abundances of taxa
End of explanation
"""
%%R -w 900
# simulated
df.SIM.s = df.SIM %>%
select(BD_mid, rel_abund) %>%
rename('Buoyant_density' = BD_mid) %>%
mutate(dataset='simulated')
# empirical
df.EMP.s = df.EMP %>%
    group_by(Buoyant_density) %>%
    mutate(rel_abund = abundance / sum(abundance)) %>%
    ungroup() %>%
    filter(rel_abund > 0) %>%
    select(Buoyant_density, rel_abund) %>%
    mutate(dataset='empirical')
# join
df.j = rbind(df.SIM.s, df.EMP.s) %>%
filter(Buoyant_density > 1.73) %>%
mutate(Buoyant_density = round(Buoyant_density, 3),
Buoyant_density_c = as.character(Buoyant_density))
df.j$Buoyant_density_c = reorder(df.j$Buoyant_density_c, df.j$Buoyant_density)
ggplot(df.j, aes(Buoyant_density_c, rel_abund)) +
geom_boxplot() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Buoyant density', y='Relative abundance') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=60, hjust=1),
legend.position = 'none'
)
"""
Explanation: Plotting rank-abundance of heavy fractions
In heavy fractions, is DBL resulting in approx. equal abundances among taxa?
End of explanation
"""
%%R
# loading comm file
F = file.path(workDir1, 'bulk-core_comm_target.txt')
df.comm = read.delim(F, sep='\t') %>%
dplyr::select(library, taxon_name, rel_abund_perc) %>%
rename('bulk_abund' = rel_abund_perc) %>%
mutate(bulk_abund = bulk_abund / 100)
## joining
df.SIM.j = inner_join(df.SIM, df.comm, c('library' = 'library',
'taxon' = 'taxon_name')) %>%
filter(BD_mid >= min_BD,
BD_mid <= max_BD)
df.SIM.j %>% head(n=3)
"""
Explanation: BD range where an OTU is detected
Do the simulated OTU BD distributions span the same BD range as the empirical data?
Simulated
End of explanation
"""
%%R
bulk_days = c(1)
%%R
physeq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq.bulk = 'bulk-core'
physeq.file = file.path(physeq.dir, physeq.bulk)
physeq.bulk = readRDS(physeq.file)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' &
physeq.bulk.m$Day %in% bulk_days, physeq.bulk)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk
%%R
physeq.bulk.n = transform_sample_counts(physeq.bulk, function(x) x/sum(x))
physeq.bulk.n
%%R
# making long format of each bulk table
bulk.otu = physeq.bulk.n %>% otu_table %>% as.data.frame
ncol = ncol(bulk.otu)
bulk.otu$OTU = rownames(bulk.otu)
bulk.otu = bulk.otu %>%
gather(sample, abundance, 1:ncol)
bulk.otu = inner_join(physeq.bulk.m, bulk.otu, c('X.Sample' = 'sample')) %>%
dplyr::select(OTU, abundance) %>%
rename('bulk_abund' = abundance)
bulk.otu %>% head(n=3)
%%R
# joining tables
df.EMP.j = inner_join(df.EMP, bulk.otu, c('OTU' = 'OTU')) %>%
filter(Buoyant_density >= min_BD,
Buoyant_density <= max_BD)
df.EMP.j %>% head(n=3)
%%R -h 400
# filtering & combining empirical w/ simulated data
## empirical
max_BD_range = max(df.EMP.j$Buoyant_density) - min(df.EMP.j$Buoyant_density)
df.EMP.j.f = df.EMP.j %>%
filter(abundance > 0) %>%
group_by(OTU) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(Buoyant_density),
max_BD = max(Buoyant_density),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
    mutate(dataset = 'empirical')
## simulated
max_BD_range = max(df.SIM.j$BD_mid) - min(df.SIM.j$BD_mid)
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
group_by(taxon) %>%
summarize(mean_rel_abund = mean(bulk_abund),
min_BD = min(BD_mid),
max_BD = max(BD_mid),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup() %>%
rename('OTU' = taxon) %>%
mutate(dataset = 'simulated')
## join
df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%
filter(BD_range_perc > 0,
mean_rel_abund > 0)
## plotting
ggplot(df.j, aes(mean_rel_abund, BD_range_perc, color=dataset)) +
geom_point(alpha=0.5, shape='O') +
#stat_density2d() +
#scale_fill_gradient(low='white', high='red', na.value='grey50') +
#scale_x_log10(limits=c(min(df.j$mean_rel_abund, na.rm=T), 1e-2)) +
#scale_y_continuous(limits=c(90, 100)) +
scale_x_log10() +
scale_y_continuous() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
#geom_vline(xintercept=0.001, linetype='dashed', alpha=0.5) +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
"""
Explanation: Empirical
End of explanation
"""
%%R -i targetFile
df.target = read.delim(targetFile, sep='\t')
df.target %>% nrow %>% print
df.target %>% head(n=3)
%%R
# filtering to just target taxa
df.j.t = df.j %>%
filter(OTU %in% df.target$OTU)
## plotting
ggplot(df.j.t, aes(mean_rel_abund, BD_range_perc, color=dataset)) +
geom_point(alpha=0.5, shape='O') +
#stat_density2d() +
#scale_fill_gradient(low='white', high='red', na.value='grey50') +
#scale_x_log10(limits=c(min(df.j$mean_rel_abund, na.rm=T), 1e-2)) +
#scale_y_continuous(limits=c(90, 100)) +
scale_x_log10() +
scale_y_continuous() +
scale_color_manual(values=c('blue', 'red')) +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
#geom_vline(xintercept=0.001, linetype='dashed', alpha=0.5) +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
panel.grid = element_blank(),
legend.position = 'none'
)
"""
Explanation: BD span of just overlapping taxa
taxa overlapping between emperical data and genomes in dataset
End of explanation
"""
%%R
## empirical
df.EMP.j.f = df.EMP.j %>%
    filter(abundance > 0) %>%
    dplyr::select(OTU, sample, abundance, Buoyant_density, bulk_abund) %>%
    mutate(dataset = 'empirical')
## simulated
df.SIM.j.f = df.SIM.j %>%
filter(count > 0) %>%
dplyr::select(taxon, fraction, count, BD_mid, bulk_abund) %>%
rename('OTU' = taxon,
'sample' = fraction,
'Buoyant_density' = BD_mid,
'abundance' = count) %>%
mutate(dataset = 'simulated')
df.j = rbind(df.EMP.j.f, df.SIM.j.f) %>%
group_by(sample) %>%
mutate(rel_abund = abundance / sum(abundance))
df.j %>% head(n=3) %>% as.data.frame
%%R -w 800 -h 400
# plotting absolute abundances of subsampled
## plot
p = ggplot(df.j, aes(Buoyant_density, abundance, fill=OTU)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Subsampled community\n(absolute abundance)') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
p
%%R -w 800 -h 400
# plotting relative abundances of subsampled
p = ggplot(df.j, aes(Buoyant_density, rel_abund, fill=OTU)) +
geom_area(stat='identity', position='dodge', alpha=0.5) +
#geom_vline(xintercept=c(BD.GCp0, BD.GCp100), linetype='dashed', alpha=0.5) +
labs(x='Buoyant density', y='Subsampled community\n(relative abundance)') +
facet_grid(dataset ~ .) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none',
axis.title.y = element_text(vjust=1),
axis.title.x = element_blank(),
plot.margin=unit(c(0.1,1,0.1,1), "cm")
)
p
"""
Explanation: Plotting abundance distributions
End of explanation
"""
%%R
physeq.SIP.core.n = transform_sample_counts(physeq.SIP.core, function(x) x/sum(x))
physeq.SIP.core.n
%%R
physeq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'
physeq.bulk = 'bulk-core'
physeq.file = file.path(physeq.dir, physeq.bulk)
physeq.bulk = readRDS(physeq.file)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' &
physeq.bulk.m$Day %in% bulk_days, physeq.bulk)
physeq.bulk.m = physeq.bulk %>% sample_data
physeq.bulk
%%R
physeq.bulk.n = transform_sample_counts(physeq.bulk, function(x) x/sum(x))
physeq.bulk.n
%%R
# making long format of SIP OTU table
SIP.otu = physeq.SIP.core.n %>% otu_table %>% as.data.frame
ncol = ncol(SIP.otu)
SIP.otu$OTU = rownames(SIP.otu)
SIP.otu = SIP.otu %>%
gather(sample, abundance, 1:ncol)
SIP.otu = inner_join(physeq.SIP.core.m, SIP.otu, c('X.Sample' = 'sample')) %>%
select(-core_dataset, -Sample_location, -Sample_date, -Sample_treatment,
-Sample_subtreatment, -library, -Sample_type)
SIP.otu %>% head(n=3)
%%R
# making long format of each bulk table
bulk.otu = physeq.bulk.n %>% otu_table %>% as.data.frame
ncol = ncol(bulk.otu)
bulk.otu$OTU = rownames(bulk.otu)
bulk.otu = bulk.otu %>%
gather(sample, abundance, 1:ncol)
bulk.otu = inner_join(physeq.bulk.m, bulk.otu, c('X.Sample' = 'sample')) %>%
select(OTU, abundance) %>%
rename('bulk_abund' = abundance)
bulk.otu %>% head(n=3)
%%R
# joining tables
SIP.otu = inner_join(SIP.otu, bulk.otu, c('OTU' = 'OTU'))
SIP.otu %>% head(n=3)
%%R -w 900 -h 900
# for each gradient, plotting gradient rel_abund vs bulk rel_abund
ggplot(SIP.otu, aes(bulk_abund, abundance)) +
geom_point(alpha=0.2) +
geom_point(shape='O', alpha=0.6) +
facet_wrap(~ Buoyant_density) +
labs(x='Pre-fractionation relative abundance',
y='Fraction relative abundance') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 900 -h 900
# for each gradient, plotting gradient rel_abund vs bulk rel_abund
ggplot(SIP.otu, aes(bulk_abund, abundance)) +
geom_point(alpha=0.2) +
geom_point(shape='O', alpha=0.6) +
scale_x_continuous(limits=c(0,0.01)) +
scale_y_continuous(limits=c(0,0.01)) +
facet_wrap(~ Buoyant_density) +
labs(x='Pre-fractionation relative abundance',
y='Fraction relative abundance') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_text(angle=90, hjust=1, vjust=0.5)
)
"""
Explanation: --OLD--
Determining the pre-fractionation abundances of taxa in each gradient fraction
empirical data
Are low-abundance taxa dropping out at the tails?
Or do highly abundant taxa have broad distributions?
End of explanation
"""
%%R -w 500 -h 300
# checking bulk rank-abundance
tmp = bulk.otu %>%
mutate(rank = row_number(-bulk_abund))
ggplot(tmp, aes(rank, bulk_abund)) +
geom_point()
%%R -w 900
top.n = filter(tmp, rank <= 10)
SIP.otu.f = SIP.otu %>%
filter(OTU %in% top.n$OTU)
ggplot(SIP.otu.f, aes(Buoyant_density, abundance, group=OTU, fill=OTU)) +
#geom_point() +
#geom_line() +
geom_area(position='dodge', alpha=0.4) +
labs(y='Relative abundance', x='Buoyant density') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R -w 600 -h 400
# Number of gradients that each OTU is found in
max_BD_range = max(SIP.otu$Buoyant_density) - min(SIP.otu$Buoyant_density)
SIP.otu.f = SIP.otu %>%
filter(abundance > 0) %>%
group_by(OTU) %>%
summarize(bulk_abund = mean(bulk_abund),
min_BD = min(Buoyant_density),
max_BD = max(Buoyant_density),
BD_range = max_BD - min_BD,
BD_range_perc = BD_range / max_BD_range * 100) %>%
ungroup()
ggplot(SIP.otu.f, aes(bulk_abund, BD_range_perc, group=OTU)) +
geom_point() +
scale_x_log10() +
labs(x='Pre-fractionation abundance', y='% of total BD range') +
geom_vline(xintercept=0.001, linetype='dashed', alpha=0.5) +
theme_bw() +
theme(
text = element_text(size=16)
)
"""
Explanation: Plotting the abundance distribution of top 10 most abundant taxa (bulk samples)
End of explanation
"""
|
joshnsolomon/phys202-2015-work | assignments/assignment03/NumpyEx04.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
"""
Explanation: Numpy Exercise 4
Imports
End of explanation
"""
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
"""
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$, on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
"""
def complete_deg(n):
"""Return the integer valued degree matrix D for the complete graph K_n."""
    # (n - 1) along the diagonal, zeros everywhere else
    return (n - 1) * np.eye(n, dtype=int)
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
"""
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purposes of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
"""
def complete_adj(n):
"""Return the integer valued adjacency matrix A for the complete graph K_n."""
    # ones everywhere except the diagonal
    return np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
"""
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
"""
def laplacian(n):
return complete_deg(n)-complete_adj(n)
for n in (1, 2, 3, 4, 5, 10):
    L = laplacian(n)
    print(L)
    print(np.linalg.eigvals(L))
"""
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation
"""
|
southpaw94/MachineLearning | TextExamples/3547_08_Code.ipynb | gpl-2.0 | %load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn,nltk
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
"""
Explanation: Sebastian Raschka, 2015
Python Machine Learning Essentials
Chapter 8 - Applying Machine Learning To Sentiment Analysis
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
"""
import pyprind
import pandas as pd
import os
labels = {'pos':1, 'neg':0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path ='./aclImdb/%s/%s' % (s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
"""
Explanation: <br>
<br>
Sections
Obtaining the IMDb movie review dataset
Introducing the bag-of-words model
Transforming documents into feature vectors
Assessing word relevancy via term frequency-inverse document frequency
Cleaning text data
Processing documents into tokens
Training a logistic regression model for document classification
Working with bigger data - online algorithms and out-of-core learning
<br>
<br>
Obtaining the IMDb movie review dataset
[back to top]
The IMDb movie review set can be downloaded from http://ai.stanford.edu/~amaas/data/sentiment/.
After downloading the dataset, decompress the files.
A) If you are working with Linux or MacOS X, open a new terminal window, cd into the download directory and execute
tar -zxf aclImdb_v1.tar.gz
B) If you are working with Windows, download an archiver such as 7Zip to extract the files from the download archive.
End of explanation
"""
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
"""
Explanation: Shuffling the DataFrame:
End of explanation
"""
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
"""
Explanation: Optional: Saving the assembled data as CSV file:
End of explanation
"""
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining and the weather is sweet'])
bag = count.fit_transform(docs)
print(count.vocabulary_)
print(bag.toarray())
"""
Explanation: <br>
<br>
Introducing the bag-of-words model
[back to top]
Transforming documents into feature vectors
[back to top]
End of explanation
"""
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
tf_is = 2
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1) )
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
"""
Explanation: Assessing word relevancy via term frequency-inverse document frequency
[back to top]
End of explanation
"""
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) + \
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
"""
Explanation: Cleaning text data
[back to top]
End of explanation
"""
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:] if w not in stop]
"""
Explanation: Processing documents into tokens
[back to top]
End of explanation
"""
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1,1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1,1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5, verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
"""
Explanation: <br>
<br>
Training a logistic regression model for document classification
[back to top]
Strip HTML and punctuation to speed up the GridSearch later:
End of explanation
"""
import numpy as np
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
"""
Explanation: <br>
<br>
Working with bigger data - online algorithms and out-of-core learning
[back to top]
End of explanation
"""
|
danresende/deep-learning | sentiment_network/Sentiment Classification - Mini Project 1.ipynb | mit | def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
"""
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory
End of explanation
"""
import numpy as np
bag_of_words = {}
pos_words = {}
neg_words = {}
for i in range(len(reviews)):
words = reviews[i].split(' ')
for word in words:
if word in bag_of_words.keys():
bag_of_words[word] += 1
else:
bag_of_words[word] = 1
pos_words[word] = 0
neg_words[word] = 0
if labels[i] == 'POSITIVE':
if word in pos_words.keys():
pos_words[word] += 1
elif labels[i] == 'NEGATIVE':
if word in neg_words.keys():
neg_words[word] += 1
words_pos_neg_ratio = []
for word in bag_of_words.keys():
if bag_of_words[word] > 500:
pos_neg_ratio = pos_words[word] / float(neg_words[word] + 1)
words_pos_neg_ratio.append((word, np.log(pos_neg_ratio)))
words_pos_neg_ratio = sorted(words_pos_neg_ratio, key=lambda x: x[1], reverse=True)
print('\nTop positive words: \n')
for i in range(10):
print(words_pos_neg_ratio[i][0],': ', round(words_pos_neg_ratio[i][1], 10), sep='')
print('\nTop negative words: \n')
for i in range(-1, -11, -1):
print(words_pos_neg_ratio[i][0],': ', round(words_pos_neg_ratio[i][1], 10), sep='')
"""
Explanation: Project 1: Quick Theory Validation
End of explanation
"""
|
fastai/fastai | nbs/35_tutorial.wikitext.ipynb | apache-2.0 | path = untar_data(URLs.WIKITEXT_TINY)
"""
Explanation: Tutorial - Assemble the data on the wikitext dataset
Using Datasets, Pipeline, TfmdLists and Transform in text
In this tutorial, we explore the mid-level API for data collection in the text application. We will use the bases introduced in the pets tutorial so you should be familiar with Transform, Pipeline, TfmdLists and Datasets already.
Data
End of explanation
"""
df_train = pd.read_csv(path/'train.csv', header=None)
df_valid = pd.read_csv(path/'test.csv', header=None)
df_all = pd.concat([df_train, df_valid])
df_all.head()
"""
Explanation: The dataset comes with the articles in two csv files, so we read it and concatenate them in one dataframe.
End of explanation
"""
splits = [list(range_of(df_train)), list(range(len(df_train), len(df_all)))]
tfms = [attrgetter("text"), Tokenizer.from_df(0), Numericalize()]
dsets = Datasets(df_all, [tfms], splits=splits, dl_type=LMDataLoader)
bs,sl = 104,72
dls = dsets.dataloaders(bs=bs, seq_len=sl)
dls.show_batch(max_n=3)
"""
Explanation: We could tokenize it based on spaces to compare (as is usually done) but here we'll use the standard fastai tokenizer.
End of explanation
"""
config = awd_lstm_lm_config.copy()
config.update({'input_p': 0.6, 'output_p': 0.4, 'weight_p': 0.5, 'embed_p': 0.1, 'hidden_p': 0.2})
model = get_language_model(AWD_LSTM, len(dls.vocab), config=config)
opt_func = partial(Adam, wd=0.1, eps=1e-7)
cbs = [MixedPrecision(), GradientClip(0.1)] + rnn_cbs(alpha=2, beta=1)
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), opt_func=opt_func, cbs=cbs, metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 5e-3, moms=(0.8,0.7,0.8), div=10)
#learn.fit_one_cycle(90, 5e-3, moms=(0.8,0.7,0.8), div=10)
"""
Explanation: Model
End of explanation
"""
|
ethen8181/machine-learning | trees/gbm/gbm.ipynb | mit | # code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style = 'custom2.css', plot_style = False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
%watermark -d -t -v -p numpy,pandas,matplotlib,sklearn
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Gradient-Boosting-Machine-(GBM)" data-toc-modified-id="Gradient-Boosting-Machine-(GBM)-1"><span class="toc-item-num">1 </span>Gradient Boosting Machine (GBM)</a></span><ul class="toc-item"><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.1"><span class="toc-item-num">1.1 </span>Implementation</a></span></li><li><span><a href="#Classification" data-toc-modified-id="Classification-1.2"><span class="toc-item-num">1.2 </span>Classification</a></span><ul class="toc-item"><li><span><a href="#Softmax" data-toc-modified-id="Softmax-1.2.1"><span class="toc-item-num">1.2.1 </span>Softmax</a></span></li></ul></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.3"><span class="toc-item-num">1.3 </span>Implementation</a></span></li><li><span><a href="#Understanding-Model-Complexity" data-toc-modified-id="Understanding-Model-Complexity-1.4"><span class="toc-item-num">1.4 </span>Understanding Model Complexity</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
# read in the data and shuffle the row order for model stability
np.random.seed(4321)
wine_path = os.path.join('..', 'winequality-white.csv')
wine = pd.read_csv(wine_path, sep = ';')
wine = wine.sample(frac = 1)
# train/test split the features and response column
y = wine['quality'].values
X = wine.drop('quality', axis = 1).values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1234)
print('dimension of the dataset: ', wine.shape)
wine.head()
class GBMReg:
"""
Regression gradient boosting machine using scikit learn's
decision tree as the base tree
Parameters
----------
n_estimators: int
number of trees to train
learning_rate: float
learning rate, some calls it shrinkage,
shrinks the contribution of each tree
to prevent overfitting
max_depth: int
controls how deep to grow the tree;
this is more of a decision tree parameter,
it is tune here to make later comparison fair
all the other parameters for a decision tree like
max_features or min_sample_split also applies to GBM,
it is just not used here as that is more
related to a single decision tree
"""
def __init__(self, n_estimators, learning_rate, max_depth):
self.max_depth = max_depth
self.n_estimators = n_estimators
self.learning_rate = learning_rate
def fit(self, X, y):
self.estimators = []
        # simply use the response as the original residuals
        # and convert it to float type so the in-place
        # subtraction of float predictions works without warnings
        # (np.float is deprecated/removed in recent numpy, so use the builtin float)
        residual = y.astype(float)
for i in range(self.n_estimators):
tree = DecisionTreeRegressor(max_depth = self.max_depth)
tree.fit(X, residual)
y_pred = tree.predict(X)
self.estimators.append(tree)
residual -= self.learning_rate * y_pred
return self
def predict(self, X):
y_pred = np.zeros(X.shape[0])
for tree in self.estimators:
y_pred += self.learning_rate * tree.predict(X)
return y_pred
# compare the results between a single decision tree,
# gradient boosting, the lower the mean square
# error, the better
tree = DecisionTreeRegressor(max_depth = 6)
tree.fit(X_train, y_train)
tree_y_pred = tree.predict(X_test)
print('tree: ', mean_squared_error(y_test, tree_y_pred))
# our implementation, 100 trees and learning rate of 0.1
gbm_reg = GBMReg(n_estimators = 100, learning_rate = 0.1, max_depth = 6)
gbm_reg.fit(X_train, y_train)
gbm_reg_y_pred = gbm_reg.predict(X_test)
print('gbm: ', mean_squared_error(y_test, gbm_reg_y_pred))
# gradient boosting for 100 trees and learning rate of 0.1
gbm = GradientBoostingRegressor(n_estimators = 100, learning_rate = 0.1, max_depth = 6)
gbm.fit(X_train, y_train)
gbm_y_pred = gbm.predict(X_test)
print('gbm library: ', mean_squared_error(y_test, gbm_y_pred))
"""
Explanation: Gradient Boosting Machine (GBM)
Just like Random Forest and Extra Trees, Gradient Boosting Machine is also a type of ensemble tree method; the difference is that it stems from the boosting framework. The idea of boosting is to add one weak classifier to the ensemble at a time, where each newly added weak classifier is trained to improve upon the already trained ensemble. Meaning it will pay more attention to examples which are misclassified or have higher errors and focus on mitigating those errors. Boosting is a general framework that can be applied to any sort of weak learner, although decision trees are by far the most commonly used, since they can easily be made into weak learners by simply restricting their depth and they are quite fast to train.
Suppose we are given some dataset $(x_1, y_1), (x_2, y_2), ...,(x_n, y_n)$, and the task is to fit a model $F(x)$ that minimizes square loss. After training the model, we discover it is good but not perfect.
There are some mistakes: $F(x_1) = 0.8$ while $y_1 = 0.9$, and $F(x_2) = 1.4$ while $y_2 = 1.3$ .... Now the question is, how can we improve this model without changing anything in $F(x)$?
How about we simply add an additional model (e.g. a regression tree) $h$ to the already existing $F$, so the new prediction becomes $F(x) + h(x)$. In other words, we wish to improve upon the existing model so that $F(x_1) + h(x_1) = y_1, F(x_2) + h(x_2) = y_2 ...$, or equivalently, we wish to find a new model $h$ such that $h(x_1) = y_1 - F(x_1), h(x_2) = y_2 - F(x_2) ...$. The idea is all well and good, but the bad news is that probably no model $h$ (e.g. regression tree) will be able to do this perfectly. Fortunately, the good news is, some $h$ might be able to do this approximately.
The idea is, we fit the model $h$ to the data using $y_1 - F(x_1), y_2 - F(x_2), ...$ as the response variable. And the intuition for this is: the $y_i - F(x_i)$s are the residuals. These are the areas where the existing model $F$ does not do well, so the role of $h$ is to compensate for the shortcomings of the existing model $F$. And if the model after adding the new model $h$, $F + h$, is still unsatisfactory, we simply add another one.
To make sure we're actually learning the residuals, we'll employ the idea of gradient descent. Say our goal is to minimize $J$, an overall loss function additively calculated from all observations with regard to $F$, a classifier with some parameters. More formally, we're given the formula:
$$J(y, F) = \sum_i^n L\big(y_i, F(x_i)\big)$$
Where:
$L$ is a cost/loss function comparing the response variable's value and the prediction of the model for each observation
Instead of trying to solve it directly, gradient descent is an iterative technique that allows us to approach the solution of an optimization problem. At each step of the algorithm, it will perform the following operations:
$$F_b(x_i) = F_{b-1}(x_i) - \eta \times \nabla L\big(y_i, F(x_i)\big)$$
Where:
$F_b$ is the version of classifier at step/iteration $b$
$\eta$ is the learning rate which controls the size of the learning process
$\nabla$ is the gradient i.e. the first order partial derivative of the cost function with respect to the classifier
The formula above actually refers to stochastic gradient descent as we are only computing the function for a single observation, $x_i$
For example, say we're given sum of squared errors, a well-known quality indicator for regression models, as our loss function. So now our loss function $L\big(y_i, F(x_i)\big)$ is defined as $\frac{1}{2} \big( y_i - F(x_i) \big)^2$ (the 1/2 is simply to make the notation cleaner later). Taking the gradient of this loss function, we get:
$$\frac{ \partial L\big(y_i, F(x_i)\big) }{ \partial F(x_i) } = \frac{ \partial \frac{1}{2} \big( y_i - F(x_i) \big)^2 }{ \partial F(x_i) } = F(x_i) - y_i$$
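As a quick standalone sanity check (a sketch added here, not part of the original notebook code), we can verify this analytic gradient against a central finite-difference approximation, reusing the made-up numbers $F(x_2) = 1.4$ and $y_2 = 1.3$ from above:

```python
def loss(F_x, y):
    # squared error loss for a single observation, with the 1/2 factor
    return 0.5 * (y - F_x) ** 2

def analytic_grad(F_x, y):
    # derivative of the loss with respect to the prediction F(x)
    return F_x - y

F_x, y, eps = 1.4, 1.3, 1e-6
numeric_grad = (loss(F_x + eps, y) - loss(F_x - eps, y)) / (2 * eps)
print(analytic_grad(F_x, y), numeric_grad)  # both close to 0.1
```

Both values agree up to floating point error, confirming that fitting a model to the residuals $y_i - F(x_i)$ amounts to taking a gradient step on the squared loss.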
Tying this back to our original problem, we wish to update our function $F$ at iteration $b$ with a new model $h$:
$$
\begin{align}
F_b(x_i) &= F_{b-1}(x_i) + h(x_i) \nonumber \\
&= F_{b-1}(x_i) + y_i - F_{b-1}(x_i) \nonumber \\
&= F_{b-1}(x_i) - 1 \times \frac{ \partial L\big(y_i, F_{b-1}(x_i)\big) }{ \partial F_{b-1}(x_i) } \nonumber
\end{align}
$$
As we can see, the formula above is almost identical to the gradient descent formula, $F_b(x_i) = F_{b-1}(x_i) - \eta \times \nabla L\big(y_i, F(x_i)\big)$. The only difference is that the learning rate $\eta$ is fixed at 1. Thus, we now have an iterative process for constructing the additive model that minimizes our loss function (the residuals).
In practice though, Gradient Boosting Machine is more prone to overfitting, since the weak learner is tasked with optimally fitting the gradient. This means that boosting will select the optimal learner at each stage of the algorithm; although this strategy generates an optimal solution at the current stage, it has the drawbacks of not finding the optimal global model as well as overfitting the training data. A remedy for this greediness is to constrain the learning process by setting the learning rate $\eta$ (also known as shrinkage). In the above algorithm, instead of adding each tree's predicted value to the prediction directly, we scale it by $\eta$ so that only a fraction of the current predicted value is added. This parameter can take values between 0 and 1 and becomes another tuning parameter for the model. Small values of the learning rate such as 0.1 tend to work better, but the value of the parameter is inversely proportional to the computation time required to find an optimal model, because more iterations are required.
To sum it all up, the process of training a GBM for regression is:
Initialize a predicted value for each observation (e.g. the original response, the average response, or a value that minimizes the loss function). This will be our initial "residuals", $r$. We can call them residuals because we're dealing with a regression task, but this quantity is more often referred to as the negative gradient, $-\nabla L\big(y_i, F(x_i)\big)$; that terminology generalizes to any loss function we might wish to employ. In short, GBM fits to the gradient of the loss function
For step = 1 to $B$ (number of iterations that we specify) do:
Fit a regression tree $F_b$ to the training data $(X, r)$, where we use the residuals as the response variable
Update model $F$ by adding a shrunken version of the newly fitted regression tree. Translating it to code, this means we append the new tree to the array of trees we've already stored:
$F(X) = F(X) + \eta F_{b}(X)$
Update each observation's residual by subtracting the shrunken predicted value from it:
$r_{b + 1} = r_b - \eta F_b(X)$
In the end, our final output boosted model becomes $F(x) = \sum_{b = 1}^B \eta F_b(x)$, where we sum the values that each individual tree gives (times the learning rate)
To hit the notion home, let's consider an example using made-up numbers. Suppose we have 5 observations, with responses 10, 20, 30, 40, 50. The first tree is built and gives predictions of 12, 18, 27, 39, 54 (these predictions are made-up numbers). If our learning rate $\eta$ = 0.1, all trees will have their predictions scaled down by $\eta$, so the first tree will instead "predict" 1.2, 1.8, 2.7, 3.9, 5.4. The response variable passed to the next tree will then have values 8.8, 18.2, 27.3, 36.1, 44.6 (the difference between the true response and the scaled-down prediction). The second round then uses these response values to build another tree - and again the predictions are scaled down by the learning rate $\eta$. So tree 2 predicts say, 7, 18, 25, 40, 40, which, once scaled, become 0.7, 1.8, 2.5, 4.0, 4.0. As before, the third tree will be passed the difference between these values and the previous tree's response variable (so 8.1, 16.4, 24.8, 32.1, 40.6). And we keep iterating this process until we have trained all the trees (a number that we specify); in the end, the sum of the predictions from all trees (each times the learning rate) gives the final prediction.
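The arithmetic in this walkthrough can be reproduced in a few lines (a standalone sketch; the tree predictions are the same made-up numbers as in the text):

```python
import numpy as np

y = np.array([10., 20., 30., 40., 50.])
eta = 0.1  # learning rate / shrinkage

# tree 1's raw predictions (made-up numbers), scaled down by eta
tree1_pred = np.array([12., 18., 27., 39., 54.])
residual1 = y - eta * tree1_pred
print(residual1)  # 8.8, 18.2, 27.3, 36.1, 44.6 as in the text

# tree 2 is fit on residual1; its raw predictions are again scaled by eta
tree2_pred = np.array([7., 18., 25., 40., 40.])
residual2 = residual1 - eta * tree2_pred
print(residual2)  # 8.1, 16.4, 24.8, 32.1, 40.6 as in the text
```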
Implementation
Here, we will use the Wine Quality Data Set to test our implementation. This link should download the .csv file. The task is to predict the quality of the wine (a scale of 1 ~ 10) given some of its features.
End of explanation
"""
def viz_importance(model, feature_names, n_features):
"""Visualize the relative importance of predictors"""
# sort the importance in decreasing order
importances = model.feature_importances_
idx = np.argsort(importances)[-n_features:]
names = feature_names[idx]
scores = importances[idx]
y_pos = np.arange(1, n_features + 1)
plt.barh(y_pos, scores, color = 'lightskyblue', align = 'center')
plt.yticks(y_pos, names)
plt.xlabel('Importance')
plt.title('Feature Importance Plot')
# change default figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
viz_importance(gbm, wine.columns[:-1], X.shape[1])
"""
Explanation: Clearly, Gradient Boosting has some similarities to Random Forests and Extra Trees: the final prediction is based on an ensemble of models, and trees are used as the base learner, so all the tuning parameters for the tree model also control the variability of Gradient Boosting. And for interpretability, we can also access the feature importance attribute.
End of explanation
"""
def compute_softmax(x):
"""compute the softmax of vector"""
exp_x = np.exp(x)
softmax = exp_x / np.sum(exp_x)
return softmax
# this can be interpreted as the probability
# of belonging to the three classes
compute_softmax([1, 2, 3])
"""
Explanation: But the way the ensembles are constructed differs substantially between each model. In Random Forests and Extra Trees, all trees are created independently and each tree contributes equally to the final model. The trees in Gradient Boosting, however, are dependent on past trees and contribute unequally to the final model. Despite these differences, Random Forests, Extra Trees and Gradient Boosting all offer competitive predictive performance (Gradient Boosting often wins when carefully tuned). As for computation time, Gradient Boosting's is often greater than that of Random Forests or Extra Trees, since the latter two can easily be parallelized, given that their individual trees are created independently.
Classification
Gradient Boosting Machine can also be extended to handle classification tasks, as we'll soon see, even in the classification context, the underlying algorithm is still a regression tree. To adapt the algorithm to a classification process, we start by defining a new loss function, cross entropy (also known as multinomial deviance), denoted as:
$$L\big(y_i, F(x_i)\big) = -\sum_k ^ K y_k(x_i) \log p_k(x_i)$$
The notation above says:
We have a total of $K$ output classes (the categorical response variable) ranging from $1, ..., K$
$y_k(x_i)$ is a dummy indicator of the response variable that takes the value of 1 if the $i_{th}$ observation belongs to class $k$ and 0 otherwise
$p_k(x_i)$ is the predicted probability of the $i_{th}$ observation belonging to class $k$
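To make the loss concrete, here is a minimal standalone sketch computing it for a single observation (the predicted probabilities are made-up numbers); note that only the true class's log-probability contributes:

```python
import numpy as np

def cross_entropy(y_onehot, y_proba):
    # -sum_k y_k * log(p_k) for a single observation
    return -np.sum(y_onehot * np.log(y_proba))

y_onehot = np.array([0., 1., 0.])    # observation belongs to class 2 of 3
y_proba = np.array([0.2, 0.7, 0.1])  # made-up predicted probabilities
print(cross_entropy(y_onehot, y_proba))  # -log(0.7), roughly 0.357
```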
So the next question is how do we get $p_k(x_i)$?
Softmax
Softmax function takes an $N$-dimensional vector of arbitrary real values and produces another $N$-dimensional vector with real values in the range (0, 1) that add up to 1. The function's formula can be written as:
$$p_i = \frac{e^{o_i}}{\sum_k^K e^{o_k}}$$
For example, in the following code chunk, we see how the softmax function transforms a 3-element vector 1.0, 2.0, 3.0 into probabilities that sum up to 1, while still preserving the relative size of the original elements.
End of explanation
"""
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
class GBMClass:
"""
Classification gradient boosting machine using scikit learn's
decision tree as the base tree
Parameters
----------
n_estimators: int
number of trees to train
learning_rate: float
learning rate, some calls it shrinkage,
shrinks the contribution of each tree
to prevent overfitting
max_depth: int
controls how deep to grow the tree;
this is more of a decision tree parameter,
it is tune here to make later comparison fair
all the other parameters for a decision tree like
max_features or min_sample_split also applies to GBM,
it is just not used here as that is more
related to a single decision tree
"""
def __init__(self, n_estimators, learning_rate, max_depth):
self.max_depth = max_depth
self.n_estimators = n_estimators
self.learning_rate = learning_rate
def fit(self, X, y):
# encode labels with value between 0 and n_classes - 1,
# so we can easily one-hot encode them
self.le = LabelEncoder()
labels = self.le.fit_transform(y)
Y = self._to_categorical(labels)
del labels
# the predicted probability starts out with
# a value that's uniform over all classes;
# then we compute the residuals (negative gradient),
# which is the difference between the predicted
# probability and the class label
y_proba = np.full(Y.shape, 1 / Y.shape[1])
residuals = Y - y_proba
# train a base decision tree on the residuals
# for every single class, hence we end up with
# n_estimators * n_classes base tree models
self.estimators = []
for i in range(self.n_estimators):
for j in range(self.n_classes):
tree = DecisionTreeRegressor(max_depth = self.max_depth)
tree.fit(X, residuals[:, j])
y_pred = tree.predict(X)
self.estimators.append(tree)
residuals[:, j] -= self.learning_rate * y_pred
return self
def _to_categorical(self, y):
"""one hot encode class vector y"""
self.n_classes = np.amax(y) + 1
Y = np.zeros((y.shape[0], self.n_classes))
for i in range(y.shape[0]):
Y[i, y[i]] = 1.0
return Y
def predict(self, X):
# after predicting the class remember to
# transform it back to the actual class label
y_prob = self.predict_proba(X)
y_pred = np.argmax(y_prob, axis = 1)
y_pred = self.le.inverse_transform(y_pred)
return y_pred
def predict_proba(self, X):
# add up raw score for every class and convert
# it to probability using softmax
y_raw = np.zeros((X.shape[0], self.n_classes))
# obtain the tree for each class and add up the prediction
for c in range(self.n_classes):
class_tree = self.estimators[c::self.n_classes]
for tree in class_tree:
y_raw[:, c] += self.learning_rate * tree.predict(X)
y_proba = self._compute_softmax(y_raw)
return y_proba
def _compute_softmax(self, z):
"""
compute the softmax of matrix z in a numerically stable way,
by substracting each row with the max of each row. For more
information refer to the following link:
https://nolanbconaway.github.io/blog/2017/softmax-numpy
"""
shift_z = z - np.amax(z, axis = 1, keepdims = 1)
exp_z = np.exp(shift_z)
softmax = exp_z / np.sum(exp_z, axis = 1, keepdims = 1)
return softmax
# compare the results between a single decision tree,
# gradient boosting, the higher the accuracy, the better
tree = DecisionTreeClassifier(max_depth = 6)
tree.fit(X_train, y_train)
tree_y_pred = tree.predict(X_test)
print('tree: ', accuracy_score(y_test, tree_y_pred))
# gradient boosting for 150 trees and learning rate of 0.2
# unlike random forest, gradient boosting's base tree can be shallower
# meaning that there depth can be smaller
gbm_class = GBMClass(n_estimators = 150, learning_rate = 0.2, max_depth = 3)
gbm_class.fit(X_train, y_train)
gbm_class_y_pred = gbm_class.predict(X_test)
print('gbm: ', accuracy_score(y_test, gbm_class_y_pred))
# library to confirm results are comparable
gbm = GradientBoostingClassifier(n_estimators = 150, learning_rate = 0.2, max_depth = 3)
gbm.fit(X_train, y_train)
gbm_y_pred = gbm.predict(X_test)
print('gbm library: ', accuracy_score(y_test, gbm_y_pred))
"""
Explanation: Next, we wish to compute the derivative of this function with respect to its inputs $o_j$, so we can use it later when computing the derivative of the loss function. To be explicit, we wish to find:
$$\frac{\partial p_i}{\partial o_j} = \frac{\partial \frac{e^{o_i}}{\sum_{k=1}^{K}e^{o_k}}}{\partial o_j}$$
For any arbitrary output $i$ and input $j$. To do so, we'll be using the quotient rule of derivatives. The rule tells us that for a function $f(x) = \frac{g(x)}{h(x)}$:
$$f'(x) = \frac{g'(x)h(x) - h'(x)g(x)}{[h(x)]^2}$$
In our case, we have:
$$
\begin{align}
g &= e^{o_i} \nonumber \\
h &= \sum_{k=1}^{K}e^{o_k} \nonumber
\end{align}
$$
It's important to notice that no matter which $o_j$ we compute the derivative of $h$ with respect to, the result will always be $e^{o_j}$. However, this is not the case for $g$: its derivative will be $e^{o_j}$ only if $i = j$, because only then does $g$ contain the term $e^{o_j}$. Otherwise, the derivative is simply 0 (the derivative of a constant).
So going back to using our quotient rule, we start with the $i = j$ case. In the following derivation we'll use the $\Sigma$ (Sigma) sign to represent $\sum_{k=1}^{K}e^{o_k}$ for simplicity and to prevent cluttering up the notation.
$$
\begin{align}
\frac{\partial \frac{e^{o_i}}{\sum_{k = 1}^{K} e^{o_k}}}{\partial o_j}
&= \frac{e^{o_i}\Sigma-e^{o_j}e^{o_i}}{\Sigma^2} \nonumber \\
&= \frac{e^{o_i}}{\Sigma}\frac{\Sigma - e^{o_j}}{\Sigma} \nonumber \\
&= p_i(1 - p_j) \nonumber \\
&= p_i(1 - p_i) \nonumber
\end{align}
$$
We can perform the operation in the last line because we're considering the scenario where $i = j$. Similarly, we can handle the case where $i \neq j$.
$$
\begin{align}
\frac{\partial \frac{e^{o_i}}{\sum_{k = 1}^{K} e^{o_k}}}{\partial o_j}
&= \frac{0-e^{o_j}e^{o_i}}{\Sigma^2} \nonumber \\
&= -\frac{e^{o_j}}{\Sigma}\frac{e^{o_i}}{\Sigma} \nonumber \\
&= -p_j p_i \nonumber \\
&= -p_i p_j \nonumber
\end{align}
$$
Just to sum it up, we now have:
$$\frac{\partial p_i}{\partial o_j} = p_i(1 - p_i),\quad i = j$$
$$\frac{\partial p_i}{\partial o_j} = -p_i p_j,\quad i \neq j$$
Now we can tie this back to the original loss function $-\sum_{k=1}^{K} y_k \log p_k$ and compute its gradient.
$$
\begin{align}
\frac{\partial L}{\partial o_i}
&= -\sum_k y_k\frac{\partial \log p_k}{\partial o_i} \nonumber \\
&= -\sum_k y_k\frac{1}{p_k}\frac{\partial p_k}{\partial o_i} \nonumber \\
&= -y_i(1-p_i) - \sum_{k \neq i}y_k\frac{1}{p_k}(-p_kp_i) \nonumber \\
&= -y_i(1 - p_i) + \sum_{k \neq i}y_k p_i \nonumber \\
&= -y_i + y_i p_i + \sum_{k \neq i}y_k p_i \nonumber \\
&= p_i\left(\sum_k y_k\right) - y_i \nonumber \\
&= p_i - y_i \nonumber
\end{align}
$$
Remember that $\sum_k y_k = 1$, since $y$ is a one-hot vector whose single non-zero element is $1$, indicating the observation belongs to the $k$-th class.
After a long journey, we see that for every class $k$ the gradient is $p_k - y_k$; its negation, the difference between the associated dummy variable and the predicted probability of belonging to that class, is essentially the "residual" fitted in classification gradient boosting. Given this, we can now implement the algorithm. The overall process of training a regression tree is unchanged; only now we must deal with the dummy variables $y_k$ and fit a regression tree on the negative gradient for each dummy variable.
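As a quick sanity check on this result, we can compare $p_i - y_i$ against a finite-difference approximation of the loss gradient. This is a standalone sketch using only NumPy; the variable names are ours, not part of the implementation above.

```python
import numpy as np

def softmax(z):
    # shift by the max for numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(z, y):
    # L = -sum_k y_k log(p_k), with y one-hot
    return -np.sum(y * np.log(softmax(z)))

z = np.array([0.3, -1.2, 2.0])   # raw scores o
y = np.array([0.0, 1.0, 0.0])    # one-hot label: the observation is class 1

analytic_grad = softmax(z) - y   # the p_i - y_i result derived above

# central finite differences on the loss
eps = 1e-6
numeric_grad = np.zeros_like(z)
for j in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[j] += eps
    zm[j] -= eps
    numeric_grad[j] = (cross_entropy(zp, y) - cross_entropy(zm, y)) / (2 * eps)

print(np.allclose(analytic_grad, numeric_grad, atol=1e-6))  # True
```

The two gradients agree to within finite-difference error, confirming the derivation.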
Implementation
For the dataset, we'll still use the Wine Quality Data Set that was used for the regression task, except we now treat the quality of the wine (a scale of 1 ~ 10) as categorical instead of numeric.
End of explanation
"""
def ground_truth(x):
"""Ground truth -- function to approximate"""
return x * np.sin(x) + np.sin(2 * x)
def gen_data(low, high, n_samples):
"""generate training and testing data from the ground truth function"""
np.random.seed(15)
X = np.random.uniform(low, high, size = n_samples)
# generate the response from the ground truth function and add
# some random noise to it
y = ground_truth(X) + np.random.normal(scale = 2, size = n_samples)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size = 0.2, random_state = 3)
return X_train, X_test, y_train, y_test
def plot_data(x_plot, X_train, X_test, y_train, y_test):
"""plot training and testing data"""
s = 20
alpha = 0.4
plt.plot(x_plot, ground_truth(x_plot), alpha = alpha, label = 'ground truth')
plt.scatter(X_train, y_train, s = s, alpha = alpha)
plt.scatter(X_test, y_test, s = s, alpha = alpha, color = 'red')
plt.xlim(( 0, 10 ))
plt.ylabel('y')
plt.xlabel('x')
plt.legend(loc = 'upper left')
plt.show()
low = 0
high = 10
x_plot = np.linspace(low, high, 500)
X_train, X_test, y_train, y_test = gen_data(low = low, high = high, n_samples = 100)
plot_data(x_plot, X_train, X_test, y_train, y_test)
"""
Explanation: Understanding Model Complexity
In the following section, we generate a sinusoidal function plus random Gaussian noise, with 80 training samples (blue points) and 20 test samples (red points).
End of explanation
"""
# when using scikit-learn, the training data has to be
# a 2d-array even if it only has 1 features
tree1 = DecisionTreeRegressor(max_depth = 1)
tree1.fit(X_train[:, np.newaxis], y_train)
tree2 = DecisionTreeRegressor(max_depth = 3)
tree2.fit(X_train[:, np.newaxis], y_train)
plt.plot(x_plot, tree1.predict(x_plot[:, np.newaxis]),
label = 'RT max_depth=1', color = 'g', alpha = 0.9, linewidth = 2)
plt.plot(x_plot, tree2.predict(x_plot[:, np.newaxis]),
label = 'RT max_depth=3', color = 'g', alpha = 0.7, linewidth = 1)
plot_data(x_plot, X_train, X_test, y_train, y_test)
"""
Explanation: Recall that in a single regression tree, we can use the max_depth parameter to control how deep to grow the tree; the deeper the tree, the more variance it can explain.
End of explanation
"""
gbm = GradientBoostingRegressor(n_estimators = 300, max_depth = 6, learning_rate = 0.1)
gbm.fit(X_train[:, np.newaxis], y_train)
plt.plot(x_plot, gbm.predict(x_plot[:, np.newaxis]),
label = 'GBM max_depth=6', color = 'r', alpha = 0.9, linewidth = 2)
plot_data(x_plot, X_train, X_test, y_train, y_test)
"""
Explanation: The plot above shows that the decision boundaries made by decision trees are always perpendicular to the $x$ and $y$ axes (due to the fact that they consist of nested if-else statements). Let's see what happens when we use gradient boosting without tuning the parameters (by specifying a fixed max_depth).
End of explanation
"""
param_grid = {
'max_depth': [4, 6],
'min_samples_leaf': [3, 5, 8],
'subsample': [0.9, 1]
# 'max_features': [1.0, 0.3, 0.1] # not possible in this example (there's only 1)
}
gs_gbm = GridSearchCV(gbm, param_grid, scoring = 'neg_mean_squared_error', n_jobs = 4)
gs_gbm.fit(X_train[:, np.newaxis], y_train)
print('Best hyperparameters: %r' % gs_gbm.best_params_)
plt.plot(x_plot, gs_gbm.predict(x_plot[:, np.newaxis]),
label = 'GBM tuned', color = 'r', alpha = 0.9, linewidth = 2)
plot_data(x_plot, X_train, X_test, y_train, y_test)
"""
Explanation: Hopefully, it should be clear that compared with decision trees, the gradient boosting machine is far more susceptible to overfitting the training data, hence it is common to tune parameters including max_depth, max_features, min_samples_leaf, and subsample (explained below) to mitigate overfitting.
The parameter subsample (technically called stochastic gradient boosting) borrows an idea from bagging techniques. While iterating through each individual tree-building process, it randomly selects a fraction of the training data; the residuals and models in the remaining steps of the current iteration are then based only on that sample. It turns out that this simple modification improves the predictive accuracy of boosting while also reducing the required computational resources (provided, of course, that you have enough observations to subsample).
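The per-iteration row sampling can be sketched in a couple of lines of NumPy. This is an illustrative standalone snippet (the names are ours); in a real implementation, each boosting round would fit its tree only on the sampled rows.

```python
import numpy as np

rng = np.random.RandomState(1234)
n_samples = 1000
subsample = 0.9  # fraction of training rows used per boosting round

# drawn fresh at the start of every boosting iteration, without
# replacement; residuals and the new tree use only these rows
sampled_idx = rng.choice(n_samples, size=int(subsample * n_samples), replace=False)
print(sampled_idx.size)  # 900
```

Each round therefore sees a different 90% slice of the training data, which is where the regularizing effect comes from.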
The following section tunes the commonly adjusted parameters, picks the best combination, and draws the fitted curve. The resulting plot should be self-explanatory.
End of explanation
"""
|
biosustain/cameo-notebooks | Advanced-SynBio-for-Cell-Factories-Course/Vanillin Production.ipynb | apache-2.0 | from cameo import models
model = models.bigg.iMM904
"""
Explanation: Vanillin production
In 2010, Brochado et al used heuristic optimization together with flux simulations to design a vanillin producing yeast strain.
Brochado, A. R., Andrejev, S., Maranas, C. D., & Patil, K. R. (2012). Impact of stoichiometry representation on simulation of genotype-phenotype relationships in metabolic networks. PLoS Computational Biology, 8(11), e1002758. doi:10.1371/journal.pcbi.1002758
Genome-scale metabolic model
In their work, the authors used the iFF708 model, but recent insights into yeast metabolism have yielded newer and more complete versions.
Because these algorithms should be agnostic to the model, we implement the same strategy with a newer model.
End of explanation
"""
model.reactions.EX_glc__D_e.lower_bound = -13 #glucose exchange
model.reactions.EX_o2_e.lower_bound = -3 #oxygen exchange
model.medium
model.objective = model.reactions.BIOMASS_SC5_notrace #growth
model.optimize().f
"""
Explanation: Constraints can be set in the model according to data found in the literature. The defined conditions allow the simulation of phenotypes very close to the experimental results.
<img src=http://www.biomedcentral.com/content/figures/1752-0509-7-36-2.jpg/>
Model validation by comparing in silico prediction of the specific growth rate with experimental data. Growth phenotypes were collected from literature and compared to simulated values for chemostat cultivations at four different conditions, nitrogen limited aerobic (green) and anaerobic (red), carbon limited aerobic (blue) and anaerobic (white).
Österlund, T., Nookaew, I., Bordel, S., & Nielsen, J. (2013). Mapping condition-dependent regulation of metabolism in yeast through genome-scale modeling. BMC Systems Biology, 7, 36. doi:10.1186/1752-0509-7-36
End of explanation
"""
from cameo.strain_design.pathway_prediction import PathwayPredictor
predictor = PathwayPredictor(model)
pathways = predictor.run('vanillin', max_predictions=3)
vanillin_pathway = pathways.pathways[0]
from cameo.core.pathway import Pathway
vanillin_pathway = Pathway.from_file("data/vanillin_pathway.tsv")
vanillin_pathway.data_frame
"""
Explanation: Heterologous pathway
Vanillin is not natively produced by S. cerevisiae. In their work, a heterologous pathway is inserted to generate a vanillin-producing strain. The pathway is described as:
<img src=http://static-content.springer.com/image/art%3A10.1186%2F1475-2859-9-84/MediaObjects/12934_2010_Article_474_Fig1_HTML.jpg>
Schematic representation of the de novo VG biosynthetic pathway in S. cerevisiae (as designed by Hansen et al [5]). Metabolites are shown in black, enzymes are shown in black and in italic, cofactors and additional precursors are shown in red. Reactions catalyzed by heterologously introduced enzymes are shown in red. Reactions converting glucose to aromatic amino acids are represented by dashed black arrows. Metabolite secretion is represented by solid black arrows where relative thickness corresponds to relative extracellular accumulation. 3-DSH stands for 3-dedhydroshikimate, PAC stands for protocathechuic acid, PAL stands for protocatechuic aldehyde, SAM stands for S-adenosylmethionine. 3DSD stands for 3-dedhydroshikimate dehydratase, ACAR stands for aryl carboxylic acid reductase, PPTase stands for phosphopantetheine transferase, hsOMT stands for O-methyltransferase, and UGT stands for UDP-glycosyltransferase. Adapted from Hansen et al. [5].
Brochado et al. Microbial Cell Factories 2010 9:84 doi:10.1186/1475-2859-9-84
Using cameo, it is very easy to generate a pathway and add it to a model.
End of explanation
"""
vanillin_pathway.plug_model(model)
from cameo import phenotypic_phase_plane
"""
Explanation: And now we can plug the pathway into the model.
End of explanation
"""
production_envelope = phenotypic_phase_plane(model, variables=[model.reactions.BIOMASS_SC5_notrace],
objective=model.reactions.EX_vnl_b_glu_c)
production_envelope.plot()
"""
Explanation: The Phenotypic phase plane can be used to analyse the theoretical yields at different growth rates.
End of explanation
"""
from cameo.strain_design.heuristic.evolutionary_based import OptGene
from cameo.flux_analysis.simulation import lmoma
optgene = OptGene(model)
results = optgene.run(target="EX_vnl_b_glu_c",
biomass="BIOMASS_SC5_notrace",
substrate="EX_glc__D_e",
simulation_method=lmoma)
results
"""
Explanation: To find gene knockout targets, we use the cameo.strain_design.heuristic package, which implements the OptGene strategy.
The authors used the biomass-product coupled yield (bpcy) as the optimization objective, which is equivalent to running OptGene in non-robust mode. All simulations were computed using MOMA, but because cameo does not implement MOMA we use its linear equivalent (it minimizes the absolute distance instead of the quadratic distance). The linear MOMA version is faster than the original MOMA formulation.
By default, our OptGene implementation will run 20,000 evaluations.
End of explanation
"""
|
GoogleCloudPlatform/ml-design-patterns | 03_problem_representation/multilabel.ipynb | apache-2.0 | import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Embedding, Input, Flatten, Conv2D, MaxPooling2D
from sklearn.utils import shuffle
from sklearn.preprocessing import MultiLabelBinarizer
"""
Explanation: Multilabel Design Pattern
The Multilabel Design Pattern refers to models that can assign more than one label to a given input. This design requires changing the activation function used in the final output layer of your model, and choosing how your application will parse model output. Note that this is different from multiclass classification problems, where a single input is assigned exactly one label from a group of many (> 1) possible classes.
End of explanation
"""
!gsutil cp 'gs://ml-design-patterns/so_data.csv' .
"""
Explanation: Building a multilabel model with sigmoid output
We'll be using a pre-processed version of the Stack Overflow dataset on BigQuery to run this code. You can download it from a publicly available Cloud Storage bucket.
End of explanation
"""
data = pd.read_csv('so_data.csv', names=['tags', 'original_tags', 'text'], header=0)
data = data.drop(columns=['original_tags'])
data = data.dropna()
data = shuffle(data, random_state=22)
data.head()
# Encode top tags to multi-hot
tags_split = [tags.split(',') for tags in data['tags'].values]
print(tags_split[0])
tag_encoder = MultiLabelBinarizer()
tags_encoded = tag_encoder.fit_transform(tags_split)
num_tags = len(tags_encoded[0])
print(data['text'].values[0][:110])
print(tag_encoder.classes_)
print(tags_encoded[0])
# Split our data into train and test sets
train_size = int(len(data) * .8)
print ("Train size: %d" % train_size)
print ("Test size: %d" % (len(data) - train_size))
# Split our labels into train and test sets
train_tags = tags_encoded[:train_size]
test_tags = tags_encoded[train_size:]
train_qs = data['text'].values[:train_size]
test_qs = data['text'].values[train_size:]
from tensorflow.keras.preprocessing import text
VOCAB_SIZE=400 # This is a hyperparameter, try out different values for your dataset
tokenizer = text.Tokenizer(num_words=VOCAB_SIZE)
tokenizer.fit_on_texts(train_qs)
body_train = tokenizer.texts_to_matrix(train_qs)
body_test = tokenizer.texts_to_matrix(test_qs)
# Note we're using sigmoid output with binary_crossentropy loss
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(50, input_shape=(VOCAB_SIZE,), activation='relu'))
model.add(tf.keras.layers.Dense(25, activation='relu'))
model.add(tf.keras.layers.Dense(num_tags, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# Train and evaluate the model
model.fit(body_train, train_tags, epochs=3, batch_size=128, validation_split=0.1)
print('Eval loss/accuracy:{}'.format(
model.evaluate(body_test, test_tags, batch_size=128)))
"""
Explanation: 🥑🥑🥑
We've pre-processed this dataset to remove any uses of the tag within a question and replaced it with the word "avocado". For example, the question: "How do i feed a pandas dataframe to a keras model?" would become "How do I feed a avocado dataframe to a avocado model?" This will help the model learn more nuanced patterns throughout the data, rather than just learning to associate the occurrence of the tag itself in a question.
End of explanation
"""
# Get some test predictions
predictions = model.predict(body_test[:3])
classes = tag_encoder.classes_
for q_idx, probabilities in enumerate(predictions):
print(test_qs[q_idx])
for idx, tag_prob in enumerate(probabilities):
if tag_prob > 0.7:
print(classes[idx], round(tag_prob * 100, 2), '%')
print('')
"""
Explanation: Parsing sigmoid results
Unlike softmax output, we can't simply take the argmax of the output probability array. We need to consider our thresholds for each class. In this case, we'll say that a tag is associated with a question if our model is more than 70% confident.
Below we'll print the original question along with our model's predicted tags.
End of explanation
"""
# First, download the data. We've made it publicly available in Google Cloud Storage
!gsutil cp gs://ml-design-patterns/mushrooms.csv .
mushroom_data = pd.read_csv('mushrooms.csv')
mushroom_data.head()
"""
Explanation: Sigmoid output for binary classification
Typically, binary classification is the only type of single-label classification (each input has exactly one class) where you'd want to use sigmoid output. In this case, a 2-element softmax output is redundant and can increase training time.
To demonstrate this we'll build a model on the UCI mushroom dataset to determine whether a mushroom is edible or poisonous.
End of explanation
"""
# 1 = edible, 0 = poisonous
mushroom_data.loc[mushroom_data['class'] == 'p', 'class'] = 0
mushroom_data.loc[mushroom_data['class'] == 'e', 'class'] = 1
labels = mushroom_data.pop('class')
dummy_data = pd.get_dummies(mushroom_data)
# Split the data
train_size = int(len(mushroom_data) * .8)
train_data = dummy_data[:train_size]
test_data = dummy_data[train_size:]
train_labels = labels[:train_size]
test_labels = labels[train_size:]
model = keras.Sequential([
keras.layers.Dense(32, input_shape=(len(dummy_data.iloc[0]),), activation='relu'),
keras.layers.Dense(8, activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
# Since we're using sigmoid output, we use binary_crossentropy for our loss function
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_data.values.tolist(), train_labels.values.tolist())
model.evaluate(test_data.values.tolist(), test_labels.values.tolist())
"""
Explanation: To keep things simple, we'll first convert the label column to numeric and then use pd.get_dummies() to convert the data to numeric.
End of explanation
"""
# First, transform the label column to one-hot
def to_one_hot(data):
if data == 0:
return [1, 0]
else:
return [0,1]
train_labels_one_hot = train_labels.apply(to_one_hot)
test_labels_one_hot = test_labels.apply(to_one_hot)
model_softmax = keras.Sequential([
keras.layers.Dense(32, input_shape=(len(dummy_data.iloc[0]),), activation='relu'),
keras.layers.Dense(8, activation='relu'),
keras.layers.Dense(2, activation='softmax')
])
model_softmax.summary()
model_softmax.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model_softmax.fit(train_data.values.tolist(), train_labels_one_hot.values.tolist())
model_softmax.evaluate(test_data.values.tolist(), test_labels_one_hot.values.tolist())
"""
Explanation: Sidebar: for comparison, let's train the same model but use a 2-element softmax output layer.
This is an anti-pattern. It's better to use sigmoid for binary classification.
Note the increased number of trainable parameters on the output layer for each model. You can imagine how this could increase training time for larger models.
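To make that difference concrete for this architecture, a fully connected (Dense) layer has (n_inputs + 1) × n_units trainable parameters — one weight per input per unit plus one bias per unit. A quick back-of-the-envelope check for the 8-unit hidden layer used above:

```python
def dense_params(n_inputs, n_units):
    # one weight per input per unit, plus one bias per unit
    return (n_inputs + 1) * n_units

print(dense_params(8, 1))  # sigmoid output layer: 9 parameters
print(dense_params(8, 2))  # 2-unit softmax output layer: 18 parameters
```

The softmax output layer doubles the parameter count here; on larger final layers the waste grows accordingly.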
End of explanation
"""
|
dxl0632/deeplearning_nd_udacity | intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb | mit | import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
reviews.head()
labels.head()
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
from collections import Counter
total_counts = Counter()
for idx, row in reviews.iterrows():
for word in row[0].split(" "):
total_counts[word.lower()] += 1
print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = {word: indx for indx, word in enumerate(vocab)}
## create the word-to-index dictionary here
word2idx['the']
"""
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
"""
def text_to_vector(text):
    vector = np.zeros(len(word2idx))
    for word in text.split(' '):
        # words outside the vocabulary return None and are skipped
        idx = word2idx.get(word.lower(), None)
        if idx is not None:
            vector[idx] += 1
    return vector
"""
Explanation: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
"""
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
"""
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
"""
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
"""
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
"""
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
train_fraction = 0.9
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
"""
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
"""
reviews.shape
trainX.shape
# Network building
def build_model(lr=0.01):
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# input
net = tflearn.input_data([None, 10000])
# hidden
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 20, activation='ReLU')
# output
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=lr,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=40)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
|
davofis/computational_seismology | 05_pseudospectral/ps_derivative.ipynb | gpl-3.0 | # Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
"""
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Numerical derivatives based on the Fourier Transform</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
Fabian Linder (@fablindner)
Heiner Igel (@heinerigel)
David Vargas (@dvargas)
Basic Equations
The derivative of function $f(x)$ with respect to the spatial coordinate $x$ is calculated using the differentiation theorem of the Fourier transform:
\begin{equation}
\frac{d}{dx} f(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} ik F(k) e^{ikx} dk
\end{equation}
In general, this formulation can be extended to compute the n−th derivative of $f(x)$ by considering that $F^{(n)}(k) = D(k)^{n}F(k) = (ik)^{n}F(k)$. Next, the inverse Fourier transform is taken to return to physical space.
\begin{equation}
f^{(n)}(x) = \mathscr{F}^{-1}[(ik)^{n}F(k)] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} (ik)^{n} F(k) e^{ikx} dk
\end{equation}
End of explanation
"""
#################################################################
# IMPLEMENT THE FOURIER DERIVATIVE METHOD HERE!
#################################################################
"""
Explanation: Exercise 1
Define a Python function called "fourier_derivative(f, dx)" that computes the first derivative of a function $f$ using the Fourier transform properties.
End of explanation
"""
# Basic parameters
# ---------------------------------------------------------------
nx = 128
x, dx = np.linspace(2*np.pi/nx, 2*np.pi, nx, retstep=True)
sigma = 0.5
xo = np.pi
#################################################################
# IMPLEMENT YOUR SOLUTION HERE!
#################################################################
"""
Explanation: Exercise 2
Calculate the numerical derivative based on the Fourier transform to show that the derivative is exact. Define an arbitrary function (e.g. a Gaussian) and initialize its analytical derivative on the same spatial grid. Calculate the numerical derivative and the difference to the analytical solution. Vary the wavenumber content of the analytical function. Does it make a difference? Why is the numerical result not entirely exact?
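One possible approach is sketched below; it redefines a fourier_derivative helper (one way Exercise 1 could be solved) so the snippet is self-contained, and uses the same Gaussian parameters as the cell above:

```python
import numpy as np

def fourier_derivative(f, dx):
    # Spectral derivative: multiply by ik in Fourier space
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

nx = 128
x, dx = np.linspace(2 * np.pi / nx, 2 * np.pi, nx, retstep=True)
sigma = 0.5
xo = np.pi

# Gaussian test function and its analytical derivative
f = np.exp(-1.0 / sigma**2 * (x - xo)**2)
df_ana = -2.0 / sigma**2 * (x - xo) * f

df_num = fourier_derivative(f, dx)
print(np.max(np.abs(df_num - df_ana)))  # tiny, but not exactly zero
```

The agreement is only up to round-off because the discrete transform implicitly assumes the function is periodic and band-limited on the grid; a Gaussian with more high-wavenumber content (smaller sigma) degrades it.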
End of explanation
"""
#################################################################
# PLOT YOUR SOLUTION HERE!
#################################################################
"""
Explanation: Exercise 3
Now that the numerical derivative is available, we can visually inspect our results. Make a plot of both, the analytical and numerical derivatives together with the difference error.
End of explanation
"""
|
cougarTech2228/Scouting-2016 | notebooks/Obstacle_scatter.ipynb | mit | import matplotlib
import matplotlib.pyplot as plot
import pandas as pd
import numpy as np
%matplotlib inline
"""
Explanation: Scatter Plots of Rank, OPR, and Obstacle Success
End of explanation
"""
a = [3, 4, 5, 6, 7, 15]
b = [4, 5, 6, 7, 7, 3]
# All argument lists must have the same length as the x/y data
size = [2, 200, 10, 2, 15, 80]
hues = [0.1, 0.2, 0.6, 0.7, 0.9, 0.4]
plot.scatter(a, b, s=size, c=hues, alpha=0.5)
plot.show()
"""
Explanation: Quick test to demonstrate how the arguments in the scatterplots work and what their types are:
End of explanation
"""
team_data = pd.read_csv("fake_features.csv")
team_data
"""
Explanation: Just some fake data in the same format as the actual data will be
End of explanation
"""
ranks = team_data['Rank']
OPR = team_data['NormalOPR']
obstacles = 5 * team_data['Obstacles']
test_hues = ['red', 'red', 'red', 'blue', 'red', 'green', 'red', 'yellow', 'red']
plot.scatter(ranks, OPR, s=obstacles, c=test_hues)
plot.show()
obstacle_list = list(obstacles)
print(obstacle_list)
top_mean = sum(obstacle_list[0:3]) / len(obstacle_list[0:3])
hues = []
for index, obstacle in enumerate(obstacle_list):
if obstacle < top_mean or index <= 3:
hues.append('blue')
elif obstacle >= top_mean:
hues.append('red')
hues
top_mean
obstacle_list
"""
Explanation: Notes
Color code dots red that have an obstacle rating > the median or mean of the top eight or sixteen teams.
Nice shade of red : c = (1,0.2,0.2)
Extract the data from the dataframe so that it can be used as arguments in the scatter function
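A compact version of that colour-coding idea (a sketch with a hypothetical helper name and made-up example scores): a dot goes red when the team's obstacle score meets the mean of the top three and the team sits outside the top ranks.

```python
def obstacle_hues(scores, top_n=3, cutoff_rank=3):
    # Mean obstacle score of the top-ranked teams
    top_mean = sum(scores[:top_n]) / float(top_n)
    # Red for strong obstacle teams outside the top ranks, blue otherwise
    return ['red' if score >= top_mean and i > cutoff_rank else 'blue'
            for i, score in enumerate(scores)]

print(obstacle_hues([50, 40, 30, 20, 45, 10]))
```

The resulting list can be passed directly as the `c` argument of `plot.scatter`.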
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/mri/cmip6/models/sandbox-2/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
#       "STS (supercooled ternary solution aerosol particle)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/9d142d98d094666b7fd2d94155f8b3ec/decoding_unsupervised_spatial_filter.ipynb | bsd-3-clause | # Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Asish Panda <asishrocks95@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.decoding import UnsupervisedSpatialFilter
from sklearn.decomposition import PCA, FastICA
print(__doc__)
# Preprocess data
data_path = sample.data_path()
# Load and filter data, set up epochs
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = dict(aud_l=1, aud_r=2, vis_l=3, vis_r=4)
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
X = epochs.get_data()
"""
Explanation: Analysis of evoked response using ICA and PCA reduction techniques
This example computes PCA and ICA of evoked or epochs data. Then the
PCA / ICA components, a.k.a. spatial filters, are used to transform
the channel data to new sources / virtual channels. The output is
visualized on the average of all the epochs.
End of explanation
"""
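Before applying the MNE wrapper below, here is a rough numpy-only sketch of what a PCA spatial filter does: decompose the centered channel data and project it onto the leading components to obtain "virtual channels". This is an illustration under simplified assumptions, not the MNE or scikit-learn implementation.

```python
import numpy as np

# toy data: 8 channels x 500 time samples
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 500))

# center each channel, then take the SVD of the channel covariance structure
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# the top-3 left singular vectors act as spatial filters;
# projecting yields 3 "virtual channels"
sources = U[:, :3].T @ Xc
print(sources.shape)  # (3, 500)
```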
pca = UnsupervisedSpatialFilter(PCA(30), average=False)
pca_data = pca.fit_transform(X)
ev = mne.EvokedArray(np.mean(pca_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev.plot(show=False, window_title="PCA", time_unit='s')
"""
Explanation: Transform data with PCA computed on the average (i.e., the evoked response)
End of explanation
"""
ica = UnsupervisedSpatialFilter(FastICA(30), average=False)
ica_data = ica.fit_transform(X)
ev1 = mne.EvokedArray(np.mean(ica_data, axis=0),
mne.create_info(30, epochs.info['sfreq'],
ch_types='eeg'), tmin=tmin)
ev1.plot(show=False, window_title='ICA', time_unit='s')
plt.show()
"""
Explanation: Transform data with ICA computed on the raw epochs (no averaging)
End of explanation
"""
|
rastala/mmlspark | notebooks/samples/103 - Before and After MMLSpark.ipynb | mit | import pandas as pd
import mmlspark
from pyspark.sql.types import IntegerType, StringType, StructType, StructField
dataFile = "BookReviewsFromAmazon10K.tsv"
textSchema = StructType([StructField("rating", IntegerType(), False),
StructField("text", StringType(), False)])
import os, urllib
if not os.path.isfile(dataFile):
urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/"+dataFile, dataFile)
raw_data = spark.createDataFrame(pd.read_csv(dataFile, sep="\t", header=None), textSchema)
raw_data.show(5)
"""
Explanation: 103 - Simplifying Machine Learning Pipelines with mmlspark
1. Introduction
<p><img src="https://images-na.ssl-images-amazon.com/images/G/01/img16/books/bookstore/landing-page/1000638_books_landing-page_bookstore-photo-01.jpg" style="width: 500px;" title="Image from https://images-na.ssl-images-amazon.com/images/G/01/img16/books/bookstore/landing-page/1000638_books_landing-page_bookstore-photo-01.jpg" /><br /></p>
In this tutorial, we perform the same classification task in two
diffeerent ways: once using plain pyspark and once using the
mmlspark library. The two methods yield the same performance,
but one of the two libraries is drastically simpler to use and iterate
on (can you guess which one?).
The task is simple: Predict whether a user's review of a book sold on
Amazon is good (rating > 3) or bad based on the text of the review. We
accomplish this by training LogisticRegression learners with different
hyperparameters and choosing the best model.
2. Read the data
We download and read in the data. We show a sample below:
End of explanation
"""
from pyspark.sql.functions import udf
from pyspark.sql.types import LongType, FloatType, DoubleType
def word_count(s):
return len(s.split())
def word_length(s):
import numpy as np
ss = [len(w) for w in s.split()]
return round(float(np.mean(ss)), 2)
word_length_udf = udf(word_length, DoubleType())
word_count_udf = udf(word_count, IntegerType())
data = raw_data \
.select("rating", "text",
word_count_udf("text").alias("wordCount"),
word_length_udf("text").alias("wordLength")) \
.withColumn("label", raw_data["rating"] > 3).drop("rating")
data.show(5)
"""
Explanation: 3. Extract more features and process data
Real data however is more complex than the above dataset. It is common
for a dataset to have features of multiple types: text, numeric,
categorical. To illustrate how difficult it is to work with these
datasets, we add two numerical features to the dataset: the word
count of the review and the mean word length.
End of explanation
"""
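The UDF logic above can be sanity-checked in plain Python, without a Spark session. This is a small illustrative sketch of the same word-count and mean-word-length computations:

```python
# standalone check of the word_count / word_length logic wrapped by the UDFs
def word_count(s):
    return len(s.split())

def word_length(s):
    lengths = [len(w) for w in s.split()]
    return round(sum(lengths) / len(lengths), 2)

print(word_count("a quick test"))   # 3
print(word_length("a quick test"))  # 3.33
```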
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.feature import VectorAssembler
# Featurize text column
tokenizer = Tokenizer(inputCol="text", outputCol="tokenizedText")
numFeatures = 10000
hashingScheme = HashingTF(inputCol="tokenizedText",
outputCol="TextFeatures",
numFeatures=numFeatures)
tokenizedData = tokenizer.transform(data)
featurizedData = hashingScheme.transform(tokenizedData)
# Merge text and numeric features in one feature column
feature_columns_array = ["TextFeatures", "wordCount", "wordLength"]
assembler = VectorAssembler(
inputCols = feature_columns_array,
outputCol="features")
assembledData = assembler.transform(featurizedData)
# Select only columns of interest
# Convert rating column from boolean to int
processedData = assembledData \
.select("label", "features") \
.withColumn("label", assembledData.label.cast(IntegerType()))
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.classification import LogisticRegression
# Prepare data for learning
train, test, validation = processedData.randomSplit([0.60, 0.20, 0.20], seed=123)
# Train the models on the 'train' data
lrHyperParams = [0.05, 0.1, 0.2, 0.4]
logisticRegressions = [LogisticRegression(regParam = hyperParam)
for hyperParam in lrHyperParams]
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction",
metricName="areaUnderROC")
metrics = []
models = []
# Select the best model
for learner in logisticRegressions:
model = learner.fit(train)
models.append(model)
scored_data = model.transform(test)
metrics.append(evaluator.evaluate(scored_data))
best_metric = max(metrics)
best_model = models[metrics.index(best_metric)]
# Save model
best_model.write().overwrite().save("SparkMLExperiment.mmls")
# Get AUC on the validation dataset
scored_val = best_model.transform(validation)
print(evaluator.evaluate(scored_val))
"""
Explanation: 4a. Classify using pyspark
To choose the best LogisticRegression classifier using the pyspark
library, we need to explicitly perform the following steps:
Process the features:
Tokenize the text column
Hash the tokenized column into a vector using hashing
Merge the numeric features with the vector in the step above
Process the label column: cast it into the proper type.
Train multiple LogisticRegression algorithms on the train dataset
with different hyperparameters
Compute the area under the ROC curve for each of the trained models
and select the model with the highest metric as computed on the
test dataset
Evaluate the best model on the validation set
As you can see below, there is a lot of work involved and a lot of
steps where something can go wrong!
End of explanation
"""
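The featurization in step 1 relies on feature hashing. Here is a minimal plain-Python sketch of the idea, for illustration only; HashingTF's actual hash function and defaults differ:

```python
def hashing_tf(tokens, num_features=16):
    # hashing trick: bucket each token by hash and count occurrences,
    # so no vocabulary needs to be stored
    vec = [0] * num_features
    for token in tokens:
        vec[hash(token) % num_features] += 1
    return vec

v = hashing_tf("a good book is a good friend".split())
print(sum(v))  # 7 tokens, distributed over 16 buckets
```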
from mmlspark import TrainClassifier, FindBestModel, ComputeModelStatistics
# Prepare data for learning
train, test, validation = data.randomSplit([0.60, 0.20, 0.20], seed=123)
# Train the models on the 'train' data
lrHyperParams = [0.05, 0.1, 0.2, 0.4]
logisticRegressions = [LogisticRegression(regParam = hyperParam)
for hyperParam in lrHyperParams]
lrmodels = [TrainClassifier(model=lrm, labelCol="label", numFeatures=10000).fit(train)
for lrm in logisticRegressions]
# Select the best model
bestModel = FindBestModel(evaluationMetric="AUC", models=lrmodels).fit(test)
# Save model
bestModel.write().overwrite().save("MMLSExperiment.mmls")
# Get AUC on the validation dataset
predictions = bestModel.transform(validation)
metrics = ComputeModelStatistics().transform(predictions)
print("Best model's AUC on validation set = "
+ "{0:.2f}%".format(metrics.first()["AUC"] * 100))
"""
Explanation: 4b. Classify using mmlspark
Life is a lot simpler when using mmlspark!
The TrainClassifier Estimator featurizes the data internally,
as long as the columns selected in the train, test, validation
dataset represent the features
The FindBestModel Estimator selects the best model from a pool of
trained models by finding the one that performs best on the test
dataset, given the specified metric
The ComputeModelStatistics Transformer computes the different
metrics on a scored dataset (in our case, the validation dataset)
at the same time
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/tutorials/keras/text_classification.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import matplotlib.pyplot as plt
import os
import re
import shutil
import string
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import losses
print(tf.__version__)
"""
Explanation: Text classification of movie reviews
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This tutorial demonstrates how to classify text starting from plain text files stored on disk. You will train a binary classifier to perform sentiment analysis on the IMDB dataset. At the end of the notebook there is an exercise in which you train a multi-class classifier to predict the tag of a programming question on Stack Overflow.
End of explanation
"""
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
"""
Explanation: Sentiment analysis
This notebook trains a sentiment analysis model to classify movie reviews as positive or negative based on the text of the review. This is an example of binary classification, an important and widely applicable kind of machine learning problem.
You will use a large review dataset containing 50,000 movie reviews extracted from the Internet Movie Database. The reviews are split into 25,000 for training and 25,000 for testing. The training and testing sets are <strong>balanced</strong>; in other words, they contain an equal number of positive and negative reviews.
Download and explore the IMDB dataset
Let's download and extract the dataset, then explore the directory structure.
End of explanation
"""
sample_file = os.path.join(train_dir, 'pos/1181_9.txt')
with open(sample_file) as f:
print(f.read())
"""
Explanation: The aclImdb/train/pos and aclImdb/train/neg directories contain many text files, each of which is a single movie review. Let's take a look at one of them.
End of explanation
"""
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
"""
Explanation: Load the dataset
Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the helpful text_dataset_from_directory utility, which expects a directory structure as follows.
main_directory/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
To prepare a dataset for binary classification, you will need two folders on disk, corresponding to class_a and class_b. These will be the positive and negative movie reviews, found in aclImdb/train/pos and aclImdb/train/neg. As the IMDB dataset contains additional folders, you will remove them before using this utility.
End of explanation
"""
batch_size = 32
seed = 42
raw_train_ds = tf.keras.utils.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
"""
Explanation: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. tf.data is a powerful collection of tools for working with data.
When running a machine learning experiment, it is a best practice to divide your dataset into three splits: train, validation, and test.
The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.
End of explanation
"""
for text_batch, label_batch in raw_train_ds.take(1):
for i in range(3):
print("Review", text_batch.numpy()[i])
print("Label", label_batch.numpy()[i])
"""
Explanation: As you can see above, there are 25,000 examples in the training folder, of which you will use 80% (20,000) for training. As shown below, you can train a model by passing a dataset directly to model.fit. If you're new to tf.data, you can also iterate over the dataset and print out a few examples as follows.
End of explanation
"""
print("Label 0 corresponds to", raw_train_ds.class_names[0])
print("Label 1 corresponds to", raw_train_ds.class_names[1])
"""
Explanation: Note that the reviews contain raw text (with punctuation and occasionally HTML tags such as <br/>). The next section shows how to handle these.
The labels are 0 or 1. To see which of these correspond to positive and negative movie reviews, you can check the class_names property on the dataset.
End of explanation
"""
raw_val_ds = tf.keras.utils.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
raw_test_ds = tf.keras.utils.text_dataset_from_directory(
'aclImdb/test',
batch_size=batch_size)
"""
Explanation: Next, you will create the validation and test datasets. You will use the remaining 5,000 reviews from the training set for validation.
Note: When using the validation_split and subset arguments, make sure to either specify a random seed or pass shuffle=False, so that the validation and training splits have no overlap.
End of explanation
"""
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation),
'')
"""
Explanation: Prepare the dataset for training
Next, you will standardize, tokenize, and vectorize the data using the helpful tf.keras.layers.TextVectorization layer.
Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements in order to simplify the dataset. Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace). Vectorization refers to converting tokens into numbers so they can be fed into a neural network. All of these tasks can be accomplished with this layer.
As noted above, the reviews contain various HTML tags such as <br />. These tags are not removed by the default standardizer in the TextVectorization layer (which converts text to lowercase and strips punctuation by default, but does not strip HTML). You will write a custom standardization function to remove the HTML.
Note: To prevent train/test skew (also known as train/serving skew), it is important to preprocess the data identically at train and test time. To facilitate this, the TextVectorization layer can be included directly inside your model, as shown later in this tutorial.
End of explanation
"""
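As an illustrative sketch, the same standardization can be reproduced in plain Python outside the TensorFlow graph. This mirrors, but is not, the custom_standardization function above, which operates on tensors:

```python
import re
import string

def standardize(text):
    # plain-Python mirror of custom_standardization:
    # lowercase, drop <br /> tags, then strip punctuation
    text = text.lower().replace('<br />', ' ')
    return re.sub('[%s]' % re.escape(string.punctuation), '', text)

print(standardize('Great movie!<br />Loved it.'))  # great movie loved it
```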
max_features = 10000
sequence_length = 250
vectorize_layer = layers.TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode='int',
output_sequence_length=sequence_length)
"""
Explanation: Next, you will create a TextVectorization layer. You will use this layer to standardize, tokenize, and vectorize the data. Set output_mode to int to create unique integer indices for each token.
Note that you are using the default split function together with the custom standardization function defined above. You will also define some constants for the model, such as an explicit maximum sequence_length, which causes the layer to pad or truncate sequences to exactly sequence_length values.
End of explanation
"""
# Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda x, y: x)
vectorize_layer.adapt(train_text)
"""
Explanation: Next, call adapt to fit the state of the preprocessing layer to the dataset. This causes the model to build an index from strings to integers.
Note: It is important to only use your training data when calling adapt (using the test set would leak information).
End of explanation
"""
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
# retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", raw_train_ds.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))
"""
Explanation: Let's create a function to see the result of using this layer to preprocess some data.
End of explanation
"""
print("1287 ---> ",vectorize_layer.get_vocabulary()[1287])
print(" 313 ---> ",vectorize_layer.get_vocabulary()[313])
print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))
"""
Explanation: As you can see above, each token has been replaced by an integer. You can look up the token (string) that each integer corresponds to by calling .get_vocabulary() on the layer.
End of explanation
"""
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
"""
Explanation: You are nearly ready to train the model. As a final preprocessing step, apply the TextVectorization layer you created earlier to the train, validation, and test datasets.
End of explanation
"""
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
"""
Explanation: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
.cache() keeps data in memory after it is loaded off disk. This ensures the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
.prefetch() overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk, in the data performance guide.
End of explanation
"""
embedding_dim = 16
model = tf.keras.Sequential([
layers.Embedding(max_features + 1, embedding_dim),
layers.Dropout(0.2),
layers.GlobalAveragePooling1D(),
layers.Dropout(0.2),
layers.Dense(1)])
model.summary()
"""
Explanation: Create the model
It's time to create your neural network:
End of explanation
"""
model.compile(loss=losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=tf.metrics.BinaryAccuracy(threshold=0.0))
"""
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is an Embedding layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array; the resulting dimensions are (batch, sequence, embedding). To learn more about embeddings, check out the word embeddings tutorial.
Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length in the simplest way possible.
This fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), you'll use the losses.BinaryCrossentropy loss function.
This isn't the only choice for a loss function; you could, for instance, choose mean_squared_error. But, generally, binary_crossentropy is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
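For intuition, binary cross-entropy for a single label y and predicted probability p is -(y log p + (1 - y) log(1 - p)). A hand-rolled sketch (not the Keras implementation, which also handles batching and numerical stability):

```python
import math

def binary_crossentropy(y_true, p_pred):
    # -(y*log(p) + (1-y)*log(1-p)): small when confident and correct
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

print(binary_crossentropy(1, 0.9))  # confident and correct: small loss
print(binary_crossentropy(1, 0.1))  # confident and wrong: large loss
```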
End of explanation
"""
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs)
"""
Explanation: Train the model
You will train the model by passing the dataset object to the fit method.
End of explanation
"""
loss, accuracy = model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
"""
Explanation: Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number that represents the error; lower values are better) and accuracy.
End of explanation
"""
history_dict = history.history
history_dict.keys()
"""
Explanation: This fairly naive approach achieves an accuracy of about 86%.
Create a plot of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training:
End of explanation
"""
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
"""
Explanation: There are four entries: one for each monitored metric during training and validation. You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
End of explanation
"""
export_model = tf.keras.Sequential([
vectorize_layer,
model,
layers.Activation('sigmoid')
])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy']
)
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print(accuracy)
"""
Explanation: In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
This isn't the case for the validation loss and accuracy; they seem to peak before the training accuracy. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to test data.
For this particular case, you could prevent overfitting by simply stopping the training when the validation accuracy is no longer increasing. One way to do so is to use the tf.keras.callbacks.EarlyStopping callback.
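The logic of early stopping is simple enough to sketch in plain Python (the idea only, not the Keras callback; the helper name and the accuracy values are made up):

```python
def early_stop_epoch(val_accuracies, patience=2):
    """Return the epoch at which training would stop, or None if it never does."""
    best, best_epoch = float("-inf"), 0
    for epoch, acc in enumerate(val_accuracies):
        if acc > best:
            best, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # no improvement for `patience` epochs: stop here
    return None

# Hypothetical validation accuracies that peak at epoch 2, then plateau
print(early_stop_epoch([0.80, 0.85, 0.87, 0.86, 0.86, 0.85]))  # stops at epoch 4
```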
Export the model
In the code above, you applied the TextVectorization layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the TextVectorization layer inside your model. To do so, you can create a new model using the weights you just trained.
End of explanation
"""
examples = [
"The movie was great!",
"The movie was okay.",
"The movie was terrible..."
]
export_model.predict(examples)
"""
Explanation: Inference on new data
To get predictions for new examples, you can simply call model.predict().
End of explanation
"""
from __future__ import print_function
from rdkit.Chem import AllChem
from rdkit.Chem import rdChemReactions
from rdkit.Chem.AllChem import ReactionFromRxnBlock, ReactionToRxnBlock
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG=True
rxn_data = """$RXN
ISIS 090220091539
2 1
$MOL
-ISIS- 09020915392D
2 1 1 0 0 0 0 0 0 0999 V2000
-2.0744 0.1939 0.0000 L 0 0 0 0 0 0 0 0 0 0 0 0
-2.5440 -0.1592 0.0000 R# 0 0 0 0 0 0 0 0 0 1 0 0
1 2 1 0 0 0 0
1 F 2 17 35
V 1 halogen
M RGP 1 2 1
M ALS 1 2 F Cl Br
M END
$MOL
-ISIS- 09020915392D
2 1 0 0 0 0 0 0 0 0999 V2000
2.8375 -0.2500 0.0000 R# 0 0 0 0 0 0 0 0 0 2 0 0
3.3463 0.0438 0.0000 N 0 0 0 0 0 0 0 0 0 3 0 0
1 2 1 0 0 0 0
V 2 amine.primary
M RGP 1 1 2
M END
$MOL
-ISIS- 09020915392D
3 2 0 0 0 0 0 0 0 0999 V2000
13.5792 0.0292 0.0000 N 0 0 0 0 0 0 0 0 0 3 0 0
14.0880 0.3229 0.0000 R# 0 0 0 0 0 0 0 0 0 1 0 0
13.0704 0.3229 0.0000 R# 0 0 0 0 0 0 0 0 0 2 0 0
1 2 1 0 0 0 0
1 3 1 0 0 0 0
M RGP 2 2 1 3 2
M END"""
rxn = ReactionFromRxnBlock(rxn_data)
rxn
"""
Explanation: RDKit Enumeration Toolkit
RDKit Reaction Enumeration Toolkit tutorial.
Here you will learn how to enumerate reactions with various building blocks.
End of explanation
"""
AllChem.SanitizeRxn(rxn)
"""
Explanation: Sanitizing Reaction Blocks
Reaction blocks come from many different sketchers, and some don't follow the MDL conventions very well. It is always a good idea to sanitize your reaction blocks first. This is also true for Smiles reactions if they are in kekule form.
End of explanation
"""
rxn.Initialize()
nWarn, nError, nReactants, nProducts, labels = AllChem.PreprocessReaction(rxn)
print ("Number of warnings:", nWarn)
print ("Number of preprocessing errors:", nError)
print ("Number of reactants in reaction:", nReactants)
print ("Number of products in reaction:", nProducts)
print ("Preprocess labels added:", labels)
"""
Explanation: Preprocessing Reaction Blocks
You will note that there are some special annotations in the reaction block:
V 1 halogen
V 2 amine.primary
These allow us to specify functional groups with very specific SMARTS patterns.
These SMARTS patterns are preloaded into the RDKit, but require the use of PreprocessReaction
to embed the patterns.
End of explanation
"""
!wget http://www.sigmaaldrich.com/content/dam/sigma-aldrich/docs/Aldrich/General_Information/1/sdf-benzylic-primary-amines.sdf -O amines.sdf
!wget http://www.sigmaaldrich.com/content/dam/sigma-aldrich/docs/Aldrich/General_Information/1/sdf-alkyl-halides.sdf -O halides.sdf
reagents = [
[x for x in AllChem.SDMolSupplier("halides.sdf")],
[x for x in AllChem.SDMolSupplier("amines.sdf")]
]
print ("number of reagents per template:", [len(x) for x in reagents])
"""
Explanation: So now, this scaffold will only match the specified halogens and a primary amine. Let's get some!
End of explanation
"""
library = rdChemReactions.EnumerateLibrary(rxn, reagents)
"""
Explanation: Basic Usage
Creating a library for enumeration
Using the enumerator is simple, simply supply the desired reaction and reagents. The library filters away non-matching reagents by default. The RDKit will log any removed reagents to the info log.
End of explanation
"""
params = rdChemReactions.EnumerationParams()
params.reagentMaxMatchCount = 1
library = rdChemReactions.EnumerateLibrary(rxn, reagents, params=params)
"""
Explanation: If you only want each reactant to match once ( and hence only produce one product per reactant set ) you can adjust the parameters:
End of explanation
"""
enumerator = library.GetEnumerator()
print (enumerator)
print ("Possible number of permutations:", enumerator.GetNumPermutations())
"""
Explanation: Enumerating the library
A library has an enumerator that determines what reagents are selected for purposes of enumeration.
The default enumerator is a CartesianProduct enumerator, which is a fancy way of saying enumerate everything. You can get hold of this enumerator by using the GetEnumerator method.
End of explanation
"""
count = 0
totalMols = 0
for results in library:
for productSet in results:
for mol in productSet:
totalMols += 1
count += 1
print("Number of result sets", count)
print("Number of result molecules", totalMols)
"""
Explanation: Understanding results of enumerations
Each enumeration result may contain multiple resulting molecules. Consider a reaction setup as follows:
A + B >> C + D
There may be multiple result molecules for a number of reasons:
The reactant templates (A and B) match a reagent multiple times.
Each match has to analyzed to form a new product. Hence,
the result has to be a vector of products.
There me be multiple product templates, i.e. C+D as shown above
where C and D are two different result templates. These are
output in a result as follows: result = enumerator.next()
result == [ [results_from_product_template1],
[results_from_product_template2], ... ]
result[0] == [results_from_product_template1]
result[1] == [results_from_Product_template2]
Because there may be multiple product templates specified with
potentially multiple matches, iterating through the results to
get to the final molecules isa bit complicated and requires three loops. Here we use:
result for the result of reacting one set of reagents
productSet for the products for a given product template
mol the actual product
In many reactions, this will result in a single molecule, but the
datastructures have to handle the full set of results:
for result in enumerator:
for productSet in results:
for mol in productSet:
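That three-loop pattern can be wrapped in a small generator (a plain-Python sketch; the nested lists here are stand-ins for RDKit product molecules):

```python
def iter_products(library_results):
    # Flatten the result -> productSet -> mol nesting into one stream of mols
    for result in library_results:
        for product_set in result:
            for mol in product_set:
                yield mol

# Hypothetical nested results: two reagent sets, one product template each
results = [[["mol_A"]], [["mol_B", "mol_C"]]]
print(list(iter_products(results)))  # ['mol_A', 'mol_B', 'mol_C']
```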
End of explanation
"""
import copy
enumerator = copy.copy(library.GetEnumerator())
print(enumerator)
test_enumerator = copy.copy(enumerator)
"""
Explanation: Note: the productSet above may be empty if one of the current reagents did
not match the reaction!
Note: the number of permutations is not the same as the number of molecules. There may be more or less depending on how many times the reagent matched the template, or if the reagent matched
at all.
How does the enumerator work?
As mentioned, you can make a copy of the current enumeration scheme using the GetEnumerator method. Let's make a copy of this enumerator using copy.copy(...); this makes a copy so we don't change the state of the Library.
End of explanation
"""
list(test_enumerator.GetPosition())
"""
Explanation: Let's play with this enumerator.
First: let's understand what the position means (this is the same as library.GetPosition)
End of explanation
"""
reagents[0][111]
reagents[1][130]
"""
Explanation: What this means is make the product from reagents[0][111] and reagents[1][130]
End of explanation
"""
library = rdChemReactions.EnumerateLibrary(rxn, reagents, params=params)
test_enumerator = copy.copy(library.GetEnumerator())
list(test_enumerator.GetPosition())
"""
Explanation: This also appears to be the last product. So let's start over.
End of explanation
"""
test_enumerator.Skip(100)
pos = list(test_enumerator.GetPosition())
print(pos)
reagents[0][pos[0]]
reagents[1][pos[1]]
"""
Explanation: We can Skip to the 100th result
End of explanation
"""
pos = test_enumerator.next()
print(list(pos))
"""
Explanation: Let's advance by one here and see what happens. It's no surprise that for the CartesianProduct strategy the first index is increased by one.
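Conceptually (an illustrative aside in plain Python, not RDKit code), the CartesianProduct strategy walks the full index grid of the reagent pools, much like itertools.product. The iteration order differs (RDKit advances the first index fastest, while itertools.product varies the last one fastest), but the set of positions visited is the same:

```python
import itertools

# Hypothetical pool sizes: 3 halides and 2 amines -> 3 * 2 = 6 positions
positions = list(itertools.product(range(3), range(2)))
print(len(positions))   # 6 permutations in total
print(positions[0])     # the very first position: (0, 0)
```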
End of explanation
"""
library = rdChemReactions.EnumerateLibrary(rxn, reagents, params=params)
# skip the first 100 molecules
library.GetEnumerator().Skip(100)
# get the state
state = library.GetState()
print("State is:\n", repr(state))
result = library.next()
for productSet in result:
for mol in productSet:
smiles = AllChem.MolToSmiles(mol)
break
"""
Explanation: Enumeration States
Enumerations have states as well, so you can come back later using GetState and SetState
GetState returns a text string so you can save this pretty much anywhere you like.
Let's skip to the 100th sample and save both the state and the product at this step.
End of explanation
"""
library.SetState(state)
result = library.next()
for productSet in result:
for mol in productSet:
assert AllChem.MolToSmiles(mol) == smiles
print(AllChem.MolToSmiles(mol), "==", smiles, "!")
"""
Explanation: Now when we go back to this state, the next molecule should be the one we just saved.
End of explanation
"""
library.ResetState()
print(list(library.GetPosition()))
"""
Explanation: Resetting the enumeration back to the beginning
To go back to the beginning, use ResetState; for a CartesianProductStrategy this reverts the position back to [0, 0], indexing the first reagent in each pool.
This is useful because the state of the library is saved when the library
is serialized. See Pickling Libraries below.
End of explanation
"""
s = library.Serialize() # XXX bug need default arg
library2 = rdChemReactions.EnumerateLibrary()
library2.InitFromString(s)
"""
Explanation: Pickling Libraries
The whole library, including all reagents and the current enumeration state reagents is saved when the library is serialized.
End of explanation
"""
for i in range(10):
result = library.next()
for productSet in result:
for mol in productSet:
print("Result library1", AllChem.MolToSmiles(mol))
result = library2.next()
for productSet in result:
for mol in productSet:
print("Result library2", AllChem.MolToSmiles(mol))
"""
Explanation: And the libraries are in lock step.
End of explanation
"""
import sys
print('Python version:', sys.version)
import IPython
print('IPython:', IPython.__version__)
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import pandas
print('pandas:', pandas.__version__)
import sklearn
print('scikit-learn:', sklearn.__version__)
"""
Explanation: 02 - Introduction to Python for Data Analysis
by Alejandro Correa Bahnsen & Iván Torroledo
version 1.2, Feb 2018
Part of the class Machine Learning for Risk Management
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Rick Muller, Sandia National Laboratories
Why Python?
Python is the programming language of choice for many scientists to a large degree because it offers a great deal of power to analyze and model scientific data with relatively little overhead in terms of learning, installation or development time. It is a language you can pick up in a weekend, and use for the rest of one's life.
The Python Tutorial is a great place to start getting a feel for the language. To complement this material, I taught a Python Short Course years ago to a group of computational chemists during a time that I was worried the field was moving too much in the direction of using canned software rather than developing one's own methods. I wanted to focus on what working scientists needed to be more productive: parsing output of other programs, building simple models, experimenting with object oriented programming, extending the language with C, and simple GUIs.
I'm trying to do something very similar here, to cut to the chase and focus on what scientists need. In the last year or so, the Jupyter Project has put together a notebook interface that I have found incredibly valuable. A large number of people have released very good IPython Notebooks that I have taken a huge amount of pleasure reading through. Some ones that I particularly like include:
Rick Muller A Crash Course in Python for Scientists
Rob Johansson's excellent notebooks, including Scientific Computing with Python and Computational Quantum Physics with QuTiP lectures;
XKCD style graphs in matplotlib;
A collection of Notebooks for using IPython effectively
A gallery of interesting IPython Notebooks
I find Jupyter notebooks an easy way both to get important work done in my everyday job, as well as to communicate what I've done, how I've done it, and why it matters to my coworkers. In the interest of putting more notebooks out into the wild for other people to use and enjoy, I thought I would try to recreate some of what I was trying to get across in the original Python Short Course, updated by 15 years of Python, Numpy, Scipy, Pandas, Matplotlib, and IPython development, as well as my own experience in using Python almost every day of this time.
Why Python for Data Analysis?
Python is great for scripting and applications.
The pandas library offers improved support for data analysis.
Scraping, web APIs
Strong High Performance Computation support
Load balancing tasks
MPI, GPU
MapReduce
Strong support for abstraction
Intel MKL
HDF5
Environment
But we already know R
...Which is better? Hard to answer
http://www.kdnuggets.com/2015/05/r-vs-python-data-science.html
http://www.kdnuggets.com/2015/03/the-grammar-data-science-python-vs-r.html
https://www.datacamp.com/community/tutorials/r-or-python-for-data-analysis
https://www.dataquest.io/blog/python-vs-r/
http://www.dataschool.io/python-or-r-for-data-science/
What You Need to Install
There are two branches of current releases in Python: the older-syntax Python 2, and the newer-syntax Python 3. This schizophrenia is largely intentional: when it became clear that some non-backwards-compatible changes to the language were necessary, the Python dev-team decided to go through a five-year (or so) transition, during which the new language features would be introduced and the old language was still actively maintained, to make such a transition as easy as possible.
Nonetheless, I'm going to write these notes with Python 3 in mind, since this is the version of the language that I use in my day-to-day job, and am most comfortable with.
With this in mind, these notes assume you have a Python distribution that includes:
Python version 3.5;
Numpy, the core numerical extensions for linear algebra and multidimensional arrays;
Scipy, additional libraries for scientific programming;
Matplotlib, excellent plotting and graphing libraries;
IPython, with the additional libraries required for the notebook interface.
Pandas, Python version of R dataframe
scikit-learn, Machine learning library!
A good, easy to install option that supports Mac, Windows, and Linux, and that has all of these packages (and much more) is the Anaconda.
Checking your installation
You can run the following code to check the versions of the packages on your system:
(in IPython notebook, press shift and return together to execute the contents of a cell)
End of explanation
"""
2+2
(50-5*6)/4
"""
Explanation: I. Python Overview
This is a quick introduction to Python. There are lots of other places to learn the language more thoroughly. I have collected a list of useful links, including ones to other learning resources, at the end of this notebook. If you want a little more depth, Python Tutorial is a great place to start, as is Zed Shaw's Learn Python the Hard Way.
The lessons that follow make use of the IPython notebooks. There's a good introduction to notebooks in the IPython notebook documentation that even has a nice video on how to use the notebooks. You should probably also flip through the IPython tutorial in your copious free time.
Briefly, notebooks have code cells (that are generally followed by result cells) and text cells. The text cells are the stuff that you're reading now. The code cells start with "In []:" with some number generally in the brackets. If you put your cursor in the code cell and hit Shift-Enter, the code will run in the Python interpreter and the result will print out in the output cell. You can then change things around and see whether you understand what's going on. If you need to know more, see the IPython notebook documentation or the IPython tutorial.
Using Python as a Calculator
Many of the things I used to use a calculator for, I now use Python for:
End of explanation
"""
sqrt(81)
from math import sqrt
sqrt(81)
"""
Explanation: (If you're typing this into an IPython notebook, or otherwise using notebook file, you hit shift-Enter to evaluate a cell.)
In the last few lines, we have sped by a lot of things that we should stop for a moment and explore a little more fully. We've seen, however briefly, two different data types: integers, also known as whole numbers to the non-programming world, and floating point numbers, also known (incorrectly) as decimal numbers to the rest of the world.
We've also seen the first instance of an import statement. Python has a huge number of libraries included with the distribution. To keep things simple, most of these variables and functions are not accessible from a normal Python interactive session. Instead, you have to import the name. For example, there is a math module containing many useful functions. To access, say, the square root function, you can either first
from math import sqrt
and then
End of explanation
"""
import math
math.sqrt(81)
"""
Explanation: or you can simply import the math library itself
End of explanation
"""
radius = 20
pi = math.pi
area = pi * radius ** 2
area
"""
Explanation: You can define variables using the equals (=) sign:
End of explanation
"""
return = 0
"""
Explanation: You can name a variable almost anything you want. It needs to start with an alphabetical character or "_", can contain alphanumeric charcters plus underscores ("_"). Certain words, however, are reserved for the language:
and, as, assert, break, class, continue, def, del, elif, else, except,
exec, finally, for, from, global, if, import, in, is, lambda, not, or,
pass, print, raise, return, try, while, with, yield
Trying to define a variable using one of these will result in a syntax error:
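If you're ever unsure whether a name is reserved, the standard-library keyword module can tell you:

```python
import keyword

# Reserved words cannot be used as variable names
print(keyword.iskeyword("return"))   # True: "return" is reserved
print(keyword.iskeyword("radius"))   # False: fine to use as a variable
```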
End of explanation
"""
'Hello, World!'
"""
Explanation: The Python Tutorial has more on using Python as an interactive shell. The IPython tutorial makes a nice complement to this, since IPython has a much more sophisticated iteractive shell.
Strings
Strings are lists of printable characters, and can be defined using either single quotes
End of explanation
"""
"Hello, World!"
"""
Explanation: or double quotes
End of explanation
"""
greeting = "Hello, World!"
"""
Explanation: Just like the other two data objects we're familiar with (ints and floats), you can assign a string to a variable
End of explanation
"""
print(greeting)
"""
Explanation: The print statement is often used for printing character strings:
End of explanation
"""
print("The area is " + area)
print("The area is " + str(area))
"""
Explanation: But it can also print data types other than strings:
End of explanation
"""
statement = "Hello, " + "World!"
print(statement)
"""
Explanation: In the above snippet, the value stored in the variable "area" is converted into a string with str() before being printed out. (Note that the first print raises a TypeError in Python 3: you can't concatenate a string and a number directly.)
You can use the + operator to concatenate strings together:
Don't forget the space between the strings, if you want one there.
End of explanation
"""
days_of_the_week = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"]
"""
Explanation: If you have a lot of words to concatenate together, there are other, more efficient ways to do this. But this is fine for linking a few strings together.
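One of those more efficient ways is str.join, with str.format to mix in non-string values; a quick sketch:

```python
words = ["Now", "is", "the", "time"]
sentence = " ".join(words)   # join is the efficient way to combine many pieces
print(sentence)

print("The area is {}".format(1256.64))  # format converts the number for you
```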
Lists
Very often in a programming language, one wants to keep a group of similar items together. Python does this using a data type called lists.
End of explanation
"""
days_of_the_week[2]
"""
Explanation: You can access members of the list using the index of that item:
End of explanation
"""
days_of_the_week[-1]
"""
Explanation: Python lists, like C, but unlike Fortran, use 0 as the index of the first element of a list. Thus, in this example, the 0 element is "Sunday", 1 is "Monday", and so on. If you need to access the nth element from the end of the list, you can use a negative index. For example, the -1 element of a list is the last element:
End of explanation
"""
languages = ["Fortran","C","C++"]
languages.append("Python")
print(languages)
"""
Explanation: You can add additional items to the list using the .append() command:
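A couple of related list methods (an aside): extend adds several items at once, and insert places an item at a chosen index:

```python
languages = ["Fortran", "C", "C++", "Python"]
languages.extend(["Julia", "R"])   # add several items at once
languages.insert(0, "Assembly")    # insert at a specific index
print(languages)
```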
End of explanation
"""
list(range(10))
"""
Explanation: The range() command is a convenient way to make sequential lists of numbers:
End of explanation
"""
list(range(2,8))
"""
Explanation: Note that range(n) starts at 0 and gives the sequential list of integers less than n. If you want to start at a different number, use range(start,stop)
End of explanation
"""
evens = list(range(0,20,2))
evens
evens[3]
"""
Explanation: The lists created above with range have a step of 1 between elements. You can also give a fixed step size via a third argument:
End of explanation
"""
["Today",7,99.3,""]
"""
Explanation: Lists do not have to hold the same data type. For example,
End of explanation
"""
help(len)
len(evens)
"""
Explanation: However, it's good (but not essential) to use lists for similar objects that are somehow logically connected. If you want to group different data types together into a composite data object, it's best to use tuples, which we will learn about below.
You can find out how long a list is using the len() command:
End of explanation
"""
for day in days_of_the_week:
print(day)
"""
Explanation: Iteration, Indentation, and Blocks
One of the most useful things you can do with lists is to iterate through them, i.e. to go through each element one at a time. To do this in Python, we use the for statement:
End of explanation
"""
for day in days_of_the_week:
statement = "Today is " + day
print(statement)
"""
Explanation: This code snippet goes through each element of the list called days_of_the_week and assigns it to the variable day. It then executes everything in the indented block (in this case only one line of code, the print statement) using those variable assignments. When the program has gone through every element of the list, it exists the block.
(Almost) every programming language defines blocks of code in some way. In Fortran, one uses END statements (ENDDO, ENDIF, etc.) to define code blocks. In C, C++, and Perl, one uses curly braces {} to define these blocks.
Python uses a colon (":"), followed by indentation level to define code blocks. Everything at a higher level of indentation is taken to be in the same block. In the above example the block was only a single line, but we could have had longer blocks as well:
End of explanation
"""
for i in range(20):
print("The square of ",i," is ",i*i)
"""
Explanation: The range() command is particularly useful with the for statement to execute loops of a specified length:
End of explanation
"""
for letter in "Sunday":
print(letter)
"""
Explanation: Slicing
Lists and strings have something in common that you might not suspect: they can both be treated as sequences. You already know that you can iterate through the elements of a list. You can also iterate through the letters in a string:
End of explanation
"""
days_of_the_week[0]
"""
Explanation: This is only occasionally useful. Slightly more useful is the slicing operation, which you can also use on any sequence. We already know that we can use indexing to get the first element of a list:
End of explanation
"""
days_of_the_week[0:2]
"""
Explanation: If we want the list containing the first two elements of a list, we can do this via
End of explanation
"""
days_of_the_week[:2]
"""
Explanation: or simply
End of explanation
"""
days_of_the_week[-2:]
"""
Explanation: If we want the last items of the list, we can do this with negative slicing:
End of explanation
"""
workdays = days_of_the_week[1:6]
print(workdays)
"""
Explanation: which is somewhat logically consistent with negative indices accessing the last elements of the list.
You can do:
End of explanation
"""
day = "Sunday"
abbreviation = day[:3]
print(abbreviation)
"""
Explanation: Since strings are sequences, you can also do this to them:
End of explanation
"""
numbers = list(range(0,40))
evens = numbers[2::2]
evens
"""
Explanation: Note that in this example I was even able to omit the second argument, so that the slice started at 2, went to the end of the list, and took every second element, to generate the list of even numbers less than 40.
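One handy special case of the step argument (an aside): a step of -1 walks the sequence backwards, reversing it:

```python
numbers = list(range(5))
print(numbers[::-1])     # a step of -1 walks the sequence backwards
print("Sunday"[::-1])    # works on strings too
```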
End of explanation
"""
if day == "Sunday":
print("Sleep in")
else:
print("Go to work")
"""
Explanation: Note that in this example I was even able to omit the second argument, so that the slice started at 2, went to the end of the list, and took every second element, to generate the list of even numbers less that 40.
Booleans and Truth Testing
We have now learned a few data types. We have integers and floating point numbers, strings, and lists to contain them. We have also learned about lists, a container that can hold any data type. We have learned to print things out, and to iterate over items in lists. We will now learn about boolean variables that can be either True or False.
We invariably need some concept of conditions in programming to control branching behavior, to allow a program to react differently to different situations. If it's Monday, I'll go to work, but if it's Sunday, I'll sleep in. To do this in Python, we use a combination of boolean variables, which evaluate to either True or False, and if statements, that control branching based on boolean values.
For example:
End of explanation
"""
day == "Sunday"
"""
Explanation: (Quick quiz: why did the snippet print "Sleep in" here? What is the variable "day" set to?)
Let's take the snippet apart to see what happened. First, note the statement
End of explanation
"""
1 == 2
50 == 2*25
3 < 3.14159
1 == 1.0
1 != 0
1 <= 2
1 >= 1
"""
Explanation: If we evaluate it by itself, as we just did, we see that it returns a boolean value, True. The "==" operator performs equality testing. If the two items are equal, it returns True, otherwise it returns False. In this case, it is comparing two values: the string "Sunday", and whatever is stored in the variable "day", which (from the slicing example above) is also the string "Sunday". Since the two strings are equal, the truth test has the true value.
The if statement that contains the truth test is followed by a code block (a colon followed by an indented block of code). If the boolean is true, it executes the code in that block. Since it is true in the above example, we see "Sleep in" printed.
The first block of code is followed by an else statement, which is executed if nothing in the above if statement is true. Had "day" held some other value, like "Saturday", the test would have been false and we would have seen "Go to work" instead.
You can compare any data types in Python:
End of explanation
"""
1 is 1.0
"""
Explanation: We see a few other boolean operators here, all of which which should be self-explanatory. Less than, equality, non-equality, and so on.
Particularly interesting is the 1 == 1.0 test, which is true, since even though the two objects are different data types (integer and floating point number), they have the same value. There is another boolean operator is, that tests whether two objects are the same object:
End of explanation
"""
[1,2,3] == [1,2,4]
[1,2,3] < [1,2,4]
"""
Explanation: We can do boolean tests on lists as well:
End of explanation
"""
hours = 5
0 < hours < 24
"""
Explanation: Finally, note that you can also string multiple comparisons together, which can result in very intuitive tests:
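A chained comparison is just shorthand for the two tests joined with and:

```python
x = 10
print(1 < x < 100)        # True: both comparisons hold
print(1 < x and x < 100)  # the equivalent spelled-out form
print(0 < x < 5)          # False: the second comparison fails
```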
End of explanation
"""
if day == "Sunday":
print("Sleep in")
elif day == "Saturday":
print("Do chores")
else:
print("Go to work")
"""
Explanation: If statements can have elif parts ("else if"), in addition to if/else parts. For example:
End of explanation
"""
for day in days_of_the_week:
statement = "Today is " + day
print(statement)
if day == "Sunday":
print(" Sleep in")
elif day == "Saturday":
print(" Do chores")
else:
print(" Go to work")
"""
Explanation: Of course we can combine if statements with for loops, to make a snippet that is almost interesting:
End of explanation
"""
bool(1)
bool(0)
bool(["This "," is "," a "," list"])
"""
Explanation: This is something of an advanced topic, but ordinary data types have boolean values associated with them, and, indeed, in early versions of Python there was not a separate boolean object. Essentially, anything that was a 0 value (the integer or floating point 0, an empty string "", or an empty list []) was False, and everything else was true. You can see the boolean value of any data object using the bool() function.
End of explanation
"""
n = 10
sequence = [0,1]
for i in range(2,n): # This is going to be a problem if we ever set n <= 2!
sequence.append(sequence[i-1]+sequence[i-2])
print(sequence)
"""
Explanation: Code Example: The Fibonacci Sequence
The Fibonacci sequence is a sequence in math that starts with 0 and 1, and then each successive entry is the sum of the previous two. Thus, the sequence goes 0,1,1,2,3,5,8,13,21,34,55,89,...
A very common exercise in programming books is to compute the Fibonacci sequence up to some number n. First I'll show the code, then I'll discuss what it is doing.
End of explanation
"""
def fibonacci(sequence_length):
"Return the Fibonacci sequence of length *sequence_length*"
sequence = [0,1]
if sequence_length < 1:
print("Fibonacci sequence only defined for length 1 or greater")
return
if 0 < sequence_length < 3:
return sequence[:sequence_length]
for i in range(2,sequence_length):
sequence.append(sequence[i-1]+sequence[i-2])
return sequence
"""
Explanation: Let's go through this line by line. First, we define the variable n, and set it to the integer 10. n is the length of the sequence we're going to form, and should probably have a better variable name. We then create a variable called sequence, and initialize it to the list with the integers 0 and 1 in it, the first two elements of the Fibonacci sequence. We have to create these elements "by hand", since the iterative part of the sequence requires two previous elements.
We then have a for loop over the list of integers from 2 (the next element of the list) to n (the length of the sequence). After the colon, we see a hash tag "#", and then a comment that if we had set n to some number less than 2 we would have a problem. Comments in Python start with #, and are good ways to make notes to yourself or to a user of your code explaining why you did what you did. Better than the comment here would be to test to make sure the value of n is valid, and to complain if it isn't; we'll try this later.
In the body of the loop, we append to the list an integer equal to the sum of the two previous elements of the list.
After exiting the loop (ending the indentation) we then print out the whole list. That's it!
Functions
We might want to use the Fibonacci snippet with different sequence lengths. We could cut and paste the code into another cell, changing the value of n, but it's easier and more useful to make a function out of the code. We do this with the def statement in Python:
End of explanation
"""
fibonacci(2)
fibonacci(12)
"""
Explanation: We can now call fibonacci() for different sequence_lengths:
End of explanation
"""
help(fibonacci)
"""
Explanation: We've introduced several new features here. First, note that the function itself is defined as a code block (a colon followed by an indented block). This is the standard way that Python delimits things. Next, note that the first line of the function is a single string. This is called a docstring, and is a special kind of comment that is often available to people using the function through the python command line:
End of explanation
"""
t = (1,2,'hi',9.0)
t
"""
Explanation: If you define a docstring for all of your functions, it makes it easier for other people to use them, since they can get help on the arguments and return values of the function.
Next, note that rather than putting a comment in about what input values lead to errors, we have some testing of these values, followed by a warning if the value is invalid, and some conditional code to handle special cases.
Two More Data Structures: Tuples and Dictionaries
Before we end the Python overview, I wanted to touch on two more data structures that are very useful (and thus very common) in Python programs.
A tuple is a sequence object like a list or a string. It's constructed by grouping a sequence of objects together with commas, either without brackets, or with parentheses:
End of explanation
"""
t[1]
"""
Explanation: Tuples are like lists, in that you can access the elements using indices:
End of explanation
"""
t.append(7)
t[1]=77
"""
Explanation: However, tuples are immutable, you can't append to them or change the elements of them:
End of explanation
"""
('Bob',0.0,21.0)
"""
Explanation: Tuples are useful anytime you want to group different pieces of data together in an object, but don't want to create a full-fledged class (see below) for them. For example, let's say you want the Cartesian coordinates of some objects in your program. Tuples are a good way to do this:
End of explanation
"""
positions = [
('Bob',0.0,21.0),
('Cat',2.5,13.1),
('Dog',33.0,1.2)
]
"""
Explanation: Again, it's not a necessary distinction, but one way to distinguish tuples and lists is that tuples are a collection of different things, here a name, and x and y coordinates, whereas a list is a collection of similar things, like if we wanted a list of those coordinates:
End of explanation
"""
def minmax(objects):
minx = 1e20 # These are set to really big numbers
miny = 1e20
for obj in objects:
name,x,y = obj
if x < minx:
minx = x
if y < miny:
miny = y
return minx,miny
x,y = minmax(positions)
print(x,y)
"""
Explanation: Tuples can be used when functions return more than one value. Say we wanted to compute the smallest x- and y-coordinates of the above list of objects. We could write:
End of explanation
"""
mylist = [1,2,9,21]
"""
Explanation: Dictionaries are an object called "mappings" or "associative arrays" in other languages. Whereas a list associates an integer index with a set of objects:
End of explanation
"""
ages = {"Rick": 46, "Bob": 86, "Fred": 21}
print("Rick's age is ",ages["Rick"])
"""
Explanation: The index in a dictionary is called the key, and the corresponding dictionary entry is the value. A dictionary can use (almost) anything as the key. Whereas lists are formed with square brackets [], dictionaries use curly brackets {}:
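For example, since tuples are immutable they can serve as dictionary keys, whereas mutable lists cannot:

```python
# Tuples are hashable, so they work as dictionary keys.
distances = {(0, 0): 0.0, (3, 4): 5.0}
print(distances[(3, 4)])   # 5.0

# Lists are mutable and unhashable, so using one as a key raises a TypeError.
try:
    bad = {[0, 0]: 0.0}
except TypeError as e:
    print("TypeError:", e)
```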
End of explanation
"""
dict(Rick=46,Bob=86,Fred=20)
"""
Explanation: There's also a convenient way to create dictionaries without having to quote the keys.
End of explanation
"""
len(t)
len(ages)
"""
Explanation: The len() command works on both tuples and dictionaries:
End of explanation
"""
import this
"""
Explanation: Conclusion of the Python Overview
There is, of course, much more to the language than I've covered here. I've tried to keep this brief enough so that you can jump in and start using Python to simplify your life and work. My own experience in learning new things is that the information doesn't "stick" unless you try and use it for something in real life.
You will no doubt need to learn more as you go. I've listed several other good references, including the Python Tutorial and Learn Python the Hard Way. Additionally, now is a good time to start familiarizing yourself with the Python Documentation, and, in particular, the Python Language Reference.
Tim Peters, one of the earliest and most prolific Python contributors, wrote the "Zen of Python", which can be accessed via the "import this" command:
End of explanation
"""
import numpy as np
import scipy as sp
array = np.array([1,2,3,4,5,6])
array
"""
Explanation: No matter how experienced a programmer you are, these are words to meditate on.
II. Numpy and Scipy
Numpy contains core routines for doing fast vector, matrix, and linear algebra-type operations in Python. Scipy contains additional routines for optimization, special functions, and so on. Both contain modules written in C and Fortran so that they're as fast as possible. Together, they give Python roughly the same capability that the Matlab program offers. (In fact, if you're an experienced Matlab user, there a guide to Numpy for Matlab users just for you.)
Making vectors and matrices
Fundamental to both Numpy and Scipy is the ability to work with vectors and matrices. You can create vectors from lists using the array command:
End of explanation
"""
array.shape
"""
Explanation: size of the array
End of explanation
"""
mat = np.array([[0,1],[1,0]])
mat
"""
Explanation: To build matrices, you can use the array command with lists of lists:
End of explanation
"""
mat2 = np.c_[mat, np.ones(2)]
mat2
"""
Explanation: Add a column of ones to mat
End of explanation
"""
mat2.shape
"""
Explanation: size of a matrix
End of explanation
"""
np.zeros((3,3))
"""
Explanation: You can also form empty (zero) matrices of arbitrary shape (including vectors, which Numpy represents as one-dimensional arrays), using the zeros command:
End of explanation
"""
np.identity(4)
"""
Explanation: There's also an identity command that behaves as you'd expect:
End of explanation
"""
np.linspace(0,1)
"""
Explanation: as well as a ones command.
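The ones command, not shown above, works just like zeros, filling an array of the requested shape with ones:

```python
import numpy as np

print(np.ones(3))        # a length-3 vector of ones
print(np.ones((2, 3)))   # a 2x3 matrix of ones
```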
Linspace, matrix functions, and plotting
The linspace command makes a linear array of points from a starting to an ending value.
End of explanation
"""
np.linspace(0,1,11)
"""
Explanation: If you provide a third argument, it takes that as the number of points in the space. If you don't provide the argument, it gives a length 50 linear space.
End of explanation
"""
x = np.linspace(0,2*np.pi)
np.sin(x)
"""
Explanation: linspace is an easy way to make coordinates for plotting. Functions in the numpy library (all of which are imported into IPython notebook) can act on an entire vector (or even a matrix) of points at once. Thus,
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.plot(x,np.sin(x))
"""
Explanation: In conjunction with matplotlib, this is a nice way to plot things:
End of explanation
"""
0.125*np.identity(3)
"""
Explanation: Matrix operations
Matrix objects act sensibly when multiplied by scalars:
End of explanation
"""
np.identity(2) + np.array([[1,1],[1,2]])
"""
Explanation: as well as when you add two matrices together. (However, the matrices have to be the same shape.)
End of explanation
"""
np.identity(2)*np.ones((2,2))
"""
Explanation: Something that confuses Matlab users is that the times (*) operator gives element-wise multiplication rather than matrix multiplication:
End of explanation
"""
np.dot(np.identity(2),np.ones((2,2)))
"""
Explanation: To get matrix multiplication, you need the dot command:
End of explanation
"""
v = np.array([3,4])
np.sqrt(np.dot(v,v))
"""
Explanation: dot can also do dot products (duh!):
End of explanation
"""
m = np.array([[1,2],[3,4]])
m.T
np.linalg.inv(m)
"""
Explanation: as well as matrix-vector products.
There are determinant, inverse, and transpose functions that act as you would suppose. Transpose can be abbreviated with ".T" at the end of a matrix object:
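The determinant isn't shown in the cell above; a quick sketch, which also checks that multiplying a matrix by its inverse recovers the identity (up to floating-point error):

```python
import numpy as np

m = np.array([[1., 2.], [3., 4.]])
print(np.linalg.det(m))              # determinant: -2.0
print(np.dot(m, np.linalg.inv(m)))   # approximately the identity matrix
```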
End of explanation
"""
np.diag([1,2,3,4,5])
"""
Explanation: There's also a diag() function that takes a list or a vector and puts it along the diagonal of a square matrix.
End of explanation
"""
raw_data = """\
3.1905781584582433,0.028208609537968457
4.346895074946466,0.007160804747670053
5.374732334047101,0.0046962988461934805
8.201284796573875,0.0004614473299618756
10.899357601713055,0.00005038370219939726
16.295503211991434,4.377451812785309e-7
21.82012847965739,3.0799922117601088e-9
32.48394004282656,1.524776208284536e-13
43.53319057815846,5.5012073588707224e-18"""
"""
Explanation: We'll find this useful later on.
Least squares fitting
Very often we deal with some data that we want to fit to some sort of expected behavior. Say we have the following:
End of explanation
"""
data = []
for line in raw_data.splitlines():
words = line.split(',')
data.append(words)
data = np.array(data, dtype=float)
data
data[:, 0]
plt.title("Raw Data")
plt.xlabel("Distance")
plt.plot(data[:,0],data[:,1],'bo')
"""
Explanation: There's a section below on parsing CSV data. We'll steal the parser from that. For an explanation, skip ahead to that section. Otherwise, just assume that this is a way to parse that text into a numpy array that we can plot and do other analyses with.
End of explanation
"""
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo')
"""
Explanation: Since we expect the data to have an exponential decay, we can plot it using a semi-log plot.
End of explanation
"""
params = np.polyfit(data[:,0],np.log(data[:,1]),1)
a = params[0]
A = np.exp(params[1])
"""
Explanation: For a pure exponential decay like this, we can fit the log of the data to a straight line. The above plot suggests this is a good approximation. Given a function
$$ y = Ae^{-ax} $$
$$ \log(y) = \log(A) - ax$$
Thus, if we fit the log of the data versus x, we should get a straight line with slope $a$, and an intercept that gives the constant $A$.
There's a numpy function called polyfit that will fit data to a polynomial form. We'll use this to fit to a straight line (a polynomial of order 1)
End of explanation
"""
x = np.linspace(1,45)
plt.title("Raw Data")
plt.xlabel("Distance")
plt.semilogy(data[:,0],data[:,1],'bo')
plt.semilogy(x,A*np.exp(a*x),'b-')
"""
Explanation: Let's see whether this curve fits the data.
End of explanation
"""
gauss_data = """\
-0.9902286902286903,1.4065274110372852e-19
-0.7566104566104566,2.2504438576596563e-18
-0.5117810117810118,1.9459459459459454
-0.31887271887271884,10.621621621621626
-0.250997150997151,15.891891891891893
-0.1463309463309464,23.756756756756754
-0.07267267267267263,28.135135135135133
-0.04426734426734419,29.02702702702703
-0.0015939015939017698,29.675675675675677
0.04689304689304685,29.10810810810811
0.0840994840994842,27.324324324324326
0.1700546700546699,22.216216216216214
0.370878570878571,7.540540540540545
0.5338338338338338,1.621621621621618
0.722014322014322,0.08108108108108068
0.9926849926849926,-0.08108108108108646"""
data = []
for line in gauss_data.splitlines():
words = line.split(',')
data.append(words)
data = np.array(data, dtype=float)
plt.plot(data[:,0],data[:,1],'bo')
"""
Explanation: If we have more complicated functions, we may not be able to get away with fitting to a simple polynomial. Consider the following data:
End of explanation
"""
def gauss(x,A,a):
return A*np.exp(a*x**2)
"""
Explanation: This data looks more Gaussian than exponential. If we wanted to, we could use polyfit for this as well, but let's use the curve_fit function from Scipy, which can fit to arbitrary functions. You can learn more using help(curve_fit).
First define a general Gaussian function to fit to.
End of explanation
"""
from scipy.optimize import curve_fit
params,conv = curve_fit(gauss,data[:,0],data[:,1])
x = np.linspace(-1,1)
plt.plot(data[:,0],data[:,1],'bo')
A,a = params
plt.plot(x,gauss(x,A,a),'b-')
"""
Explanation: Now fit to it using curve_fit:
End of explanation
"""
from random import random
rands = []
for i in range(100):
rands.append(random())
plt.plot(rands)
"""
Explanation: The curve_fit routine we just used is built on top of a very good general minimization capability in Scipy. You can learn more at the scipy documentation pages.
Monte Carlo and random numbers
Many methods in scientific computing rely on Monte Carlo integration, where a sequence of (pseudo) random numbers are used to approximate the integral of a function. Python has good random number generators in the standard library. The random() function gives pseudorandom numbers uniformly distributed between 0 and 1:
End of explanation
"""
from random import gauss
grands = []
for i in range(100):
grands.append(gauss(0,1))
plt.plot(grands)
"""
Explanation: random() uses the Mersenne Twister algorithm, which is a highly regarded pseudorandom number generator. There are also functions to generate random integers, to randomly shuffle a list, and functions to pick random numbers from a particular distribution, like the normal distribution:
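A quick sketch of a few of those functions (random integers, in-place shuffling, and normal draws):

```python
import random

random.seed(0)                 # seed for a reproducible run
print(random.randint(1, 6))    # a random integer from 1 to 6, inclusive
deck = list(range(10))
random.shuffle(deck)           # shuffles the list in place
print(deck)                    # same elements, random order
print(random.gauss(0, 1))      # one sample from a normal(0, 1) distribution
```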
End of explanation
"""
plt.plot(np.random.rand(100))
"""
Explanation: It is generally more efficient to generate a list of random numbers all at once, particularly if you're drawing from a non-uniform distribution. Numpy has functions to generate vectors and matrices of particular types of random distributions.
End of explanation
"""
import pandas as pd
import numpy as np
"""
Explanation: III. Introduction to Pandas
End of explanation
"""
ser_1 = pd.Series([1, 1, 2, -3, -5, 8, 13])
ser_1
"""
Explanation: Series
A Series is a one-dimensional array-like object containing an array of data and an associated array of data labels. The data can be any NumPy data type and the labels are the Series' index.
Create a Series:
End of explanation
"""
ser_1.values
"""
Explanation: Get the array representation of a Series:
End of explanation
"""
ser_1.index
"""
Explanation: Index objects are immutable and hold the axis labels and metadata such as names and axis names.
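Immutability means you cannot assign into an existing Index; a quick sketch:

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])
try:
    s.index[0] = 'z'           # Index objects reject item assignment
except TypeError as e:
    print("TypeError:", e)
```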
Get the index of the Series:
End of explanation
"""
ser_2 = pd.Series([1, 1, 2, -3, -5], index=['a', 'b', 'c', 'd', 'e'])
ser_2
"""
Explanation: Create a Series with a custom index:
End of explanation
"""
ser_2[4] == ser_2['e']
"""
Explanation: Get a value from a Series:
End of explanation
"""
ser_2[['c', 'a', 'b']]
"""
Explanation: Get a set of values from a Series by passing in a list:
End of explanation
"""
ser_2[ser_2 > 0]
"""
Explanation: Get values great than 0:
End of explanation
"""
ser_2 * 2
"""
Explanation: Scalar multiply:
End of explanation
"""
np.exp(ser_2)
"""
Explanation: Apply a numpy math function:
End of explanation
"""
dict_1 = {'foo' : 100, 'bar' : 200, 'baz' : 300}
ser_3 = pd.Series(dict_1)
ser_3
"""
Explanation: A Series is like a fixed-length, ordered dict.
Create a series by passing in a dict:
End of explanation
"""
index = ['foo', 'bar', 'baz', 'qux']
ser_4 = pd.Series(dict_1, index=index)
ser_4
"""
Explanation: Re-order a Series by passing in an index (indices not found are NaN):
End of explanation
"""
pd.isnull(ser_4)
"""
Explanation: Check for NaN with the pandas method:
End of explanation
"""
ser_4.isnull()
"""
Explanation: Check for NaN with the Series method:
End of explanation
"""
ser_3 + ser_4
"""
Explanation: Series automatically aligns differently indexed data in arithmetic operations:
End of explanation
"""
ser_4.name = 'foobarbazqux'
"""
Explanation: Name a Series:
End of explanation
"""
ser_4.index.name = 'label'
ser_4
"""
Explanation: Name a Series index:
End of explanation
"""
ser_4.index = ['fo', 'br', 'bz', 'qx']
ser_4
"""
Explanation: Rename a Series' index in place:
End of explanation
"""
data_1 = {'state' : ['VA', 'VA', 'VA', 'MD', 'MD'],
'year' : [2012, 2013, 2014, 2014, 2015],
'pop' : [5.0, 5.1, 5.2, 4.0, 4.1]}
df_1 = pd.DataFrame(data_1)
df_1
df_2 = pd.DataFrame(data_1, columns=['year', 'state', 'pop'])
df_2
"""
Explanation: DataFrame
A DataFrame is a tabular data structure containing an ordered collection of columns. Each column can have a different type. DataFrames have both row and column indices and is analogous to a dict of Series. Row and column operations are treated roughly symmetrically. Columns returned when indexing a DataFrame are views of the underlying data, not a copy. To obtain a copy, use the Series' copy method.
Create a DataFrame:
End of explanation
"""
df_3 = pd.DataFrame(data_1, columns=['year', 'state', 'pop', 'unempl'])
df_3
"""
Explanation: Like Series, columns that are not present in the data are NaN:
End of explanation
"""
df_3['state']
"""
Explanation: Retrieve a column by key, returning a Series:
End of explanation
"""
df_3.year
"""
Explanation: Retrieve a column by attribute, returning a Series:
End of explanation
"""
df_3.iloc[0]
"""
Explanation: Retrieve a row by position:
End of explanation
"""
df_3['unempl'] = np.arange(5)
df_3
"""
Explanation: Update a column by assignment:
End of explanation
"""
unempl = pd.Series([6.0, 6.0, 6.1], index=[2, 3, 4])
df_3['unempl'] = unempl
df_3
"""
Explanation: Assign a Series to a column (note if assigning a list or array, the length must match the DataFrame, unlike a Series):
End of explanation
"""
df_3['state_dup'] = df_3['state']
df_3
"""
Explanation: Assign to a column name that doesn't exist to create a new column:
End of explanation
"""
del df_3['state_dup']
df_3
"""
Explanation: Delete a column:
End of explanation
"""
df_3.T
"""
Explanation: Transpose the DataFrame:
End of explanation
"""
pop = {'VA' : {2013 : 5.1, 2014 : 5.2},
'MD' : {2014 : 4.0, 2015 : 4.1}}
df_4 = pd.DataFrame(pop)
df_4
"""
Explanation: Create a DataFrame from a nested dict of dicts (the keys in the inner dicts are unioned and sorted to form the index in the result, unless an explicit index is specified):
End of explanation
"""
data_2 = {'VA' : df_4['VA'][1:],
'MD' : df_4['MD'][2:]}
df_5 = pd.DataFrame(data_2)
df_5
"""
Explanation: Create a DataFrame from a dict of Series:
End of explanation
"""
df_5.index.name = 'year'
df_5
"""
Explanation: Set the DataFrame index name:
End of explanation
"""
df_5.columns.name = 'state'
df_5
"""
Explanation: Set the DataFrame columns name:
End of explanation
"""
df_5.values
"""
Explanation: Return the data contained in a DataFrame as a 2D ndarray:
End of explanation
"""
df_3.values
"""
Explanation: If the columns are different dtypes, the 2D ndarray's dtype will accommodate all of the columns:
End of explanation
"""
df_3
"""
Explanation: Reindexing
Create a new object with the data conformed to a new index. Any missing values are set to NaN.
End of explanation
"""
df_3.reindex(list(reversed(range(0, 6))))
"""
Explanation: Reindexing rows returns a new frame with the specified index:
End of explanation
"""
df_3.reindex(columns=['state', 'pop', 'unempl', 'year'])
"""
Explanation: Reindex columns:
End of explanation
"""
df_7 = df_3.drop([0, 1])
df_7
df_7 = df_7.drop('unempl', axis=1)
df_7
"""
Explanation: Dropping Entries
Drop rows from a Series or DataFrame:
End of explanation
"""
df_3
"""
Explanation: Indexing, Selecting, Filtering
Pandas supports indexing into a DataFrame.
End of explanation
"""
df_3[['pop', 'unempl']]
"""
Explanation: Select specified columns from a DataFrame:
End of explanation
"""
df_3[:2]
df_3.iloc[1:3]
"""
Explanation: Select a slice from a DataFrame:
End of explanation
"""
df_3[df_3['pop'] > 5]
"""
Explanation: Select from a DataFrame based on a filter:
End of explanation
"""
df_3.loc[0:2, 'pop']
df_3
"""
Explanation: Select a slice of rows from a specific column of a DataFrame:
End of explanation
"""
np.random.seed(0)
df_8 = pd.DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['a', 'b', 'c'])
df_8
np.random.seed(1)
df_9 = pd.DataFrame(np.random.rand(9).reshape((3, 3)),
columns=['b', 'c', 'd'])
df_9
df_8 + df_9
"""
Explanation: Arithmetic and Data Alignment
Adding DataFrame objects results in the union of index pairs for rows and columns if the pairs are not the same, resulting in NaN for indices that do not overlap:
End of explanation
"""
df_10 = df_8.add(df_9, fill_value=0)
df_10
"""
Explanation: Set a fill value instead of NaN for indices that do not overlap:
End of explanation
"""
ser_8 = df_10.iloc[0]
df_11 = df_10 - ser_8
df_11
"""
Explanation: Like NumPy, pandas supports arithmetic operations between DataFrames and Series.
Match the index of the Series on the DataFrame's columns, broadcasting down the rows:
End of explanation
"""
ser_9 = pd.Series(range(3), index=['a', 'd', 'e'])
ser_9
df_11 - ser_9
"""
Explanation: Match the index of the Series on the DataFrame's columns, broadcasting down the rows and union the indices that do not match:
End of explanation
"""
df_11 = np.abs(df_11)
df_11
"""
Explanation: Function Application and Mapping
NumPy ufuncs (element-wise array methods) operate on pandas objects:
End of explanation
"""
df_11.apply(sum)
"""
Explanation: Apply a function on 1D arrays to each column:
End of explanation
"""
df_11.apply(sum, axis=1)
"""
Explanation: Apply a function on 1D arrays to each row:
End of explanation
"""
def func_3(x):
return '%.2f' %x
df_11.applymap(func_3)
"""
Explanation: Apply an element-wise Python function to a DataFrame:
End of explanation
"""
df_12 = pd.DataFrame(np.arange(12).reshape((3, 4)),
index=['three', 'one', 'two'],
columns=['c', 'a', 'b', 'd'])
df_12
"""
Explanation: Sorting
End of explanation
"""
df_12.sort_index()
"""
Explanation: Sort a DataFrame by its index:
End of explanation
"""
df_12.sort_index(axis=1, ascending=False)
"""
Explanation: Sort a DataFrame by columns in descending order:
End of explanation
"""
df_12.sort_values(by=['d', 'c'])
"""
Explanation: Sort a DataFrame's values by column:
End of explanation
"""
df_15 = pd.DataFrame(np.random.randn(10, 3),
columns=['a', 'b', 'c'])
df_15['cat1'] = (np.random.rand(10) * 3).round(0)
df_15['cat2'] = (np.random.rand(10)).round(0)
df_15
"""
Explanation: Summarizing and Computing Descriptive Statistics
Unlike NumPy arrays, Pandas descriptive statistics automatically exclude missing data. NaN values are excluded unless the entire row or column is NA.
End of explanation
"""
df_15.sum()
df_15.sum(axis=1)
df_15.mean(axis=0)
"""
Explanation: Sum and Mean
End of explanation
"""
df_15['a'].describe()
df_15['cat1'].value_counts()
"""
Explanation: Descriptive analysis
End of explanation
"""
pd.pivot_table(df_15, index='cat1', aggfunc=np.mean)
"""
Explanation: Pivot tables
group by cat1 and calculate mean
End of explanation
"""
|
FFIG/ffig | demos/CppLondon_Aug-2017.ipynb | mit | %%file Tree.hpp
#ifndef FFIG_DEMOS_TREE_H
#define FFIG_DEMOS_TREE_H
#include <memory>
class Tree {
std::unique_ptr<Tree> left_;
std::unique_ptr<Tree> right_;
int data_ = 0;
public:
Tree(int children) {
if(children <=0) return;
left_ = std::make_unique<Tree>(children-1);
right_ = std::make_unique<Tree>(children-1);
}
Tree* left() { return left_.get(); }
Tree* right() { return right_.get(); }
int data() const { return data_; }
void set_data(int x) { data_ = x; }
};
#endif // FFIG_DEMOS_TREE_H
"""
Explanation: FFIG - A foreign function interface generator for C++
https://github.com/FFIG/ffig
jonathanbcoe@gmail.com
I want to write C++ code and call it from Python without doing any extra work.
For Project Managers
A language like Python can be written to resemble executable pseudo-code and is a great langauge for defining high-level acceptance criteria.
Developer engagement is easy when requirements are code.
For Clients
It's easy to put together interactive demos with Python. Jupyter notebook allows one to display graphs, images, videos and tabulated data.
The ability to change demos on the fly in response to client queries is seriously impressive.
For Developers
I can break the edit-compile-link-test cycle with an interpreted scripting language.
Once the design is right I can drive it into C++.
Calling C++ from Python
Write some C++ code.
Define a C-API for binary compatibility.
Solving problems with ownership.
Define classes and functions that use a Foreign Function Interface to communicate with C.
Code-generation
Parse C++ code.
Transform parsed source code into an easy-to-use form.
Define a template that transforms parsed source code into C
Define a template that transforms parsed source code into classes, functions and FFI calls.
End of explanation
"""
%%sh
clang++ -std=c++14 -fsyntax-only Tree.hpp
"""
Explanation: Let's compile this to ensure we've not made any mistakes.
End of explanation
"""
%%file Tree_c.h
#ifndef FFIG_DEMOS_TREE_C_H
#define FFIG_DEMOS_TREE_C_H
#define C_API extern "C" __attribute__((visibility("default")))
struct CTree_t;
typedef CTree_t* CTree;
C_API CTree Tree_create(int children);
C_API void Tree_dispose(CTree t);
C_API CTree Tree_left(CTree t);
C_API CTree Tree_right(CTree t);
C_API int Tree_data(CTree t);
C_API void Tree_set_data(CTree t, int x);
#endif // FFIG_DEMOS_TREE_C_H
%%file Tree_c.cpp
#include "Tree_c.h"
#include "Tree.hpp"
CTree Tree_create(int children) {
auto tree = std::make_unique<Tree>(children);
return reinterpret_cast<CTree>(tree.release());
}
void Tree_dispose(CTree t) {
auto* tree = reinterpret_cast<Tree*>(t);
delete tree;
}
CTree Tree_left(CTree t) {
auto* tree = reinterpret_cast<Tree*>(t);
return reinterpret_cast<CTree>(tree->left());
}
CTree Tree_right(CTree t) {
auto* tree = reinterpret_cast<Tree*>(t);
return reinterpret_cast<CTree>(tree->right());
}
int Tree_data(CTree t) {
auto* tree = reinterpret_cast<Tree*>(t);
return tree->data();
}
void Tree_set_data(CTree t, int x) {
auto* tree = reinterpret_cast<Tree*>(t);
tree->set_data(x);
}
"""
Explanation: Defining a C-API
We want a C-API so that we have a well-defined and portable binary interface.
We'll have to re-model our code as C does not support classes.
Free functions with an extra leading argument for this should suffice.
End of explanation
"""
%%file CMakeLists.txt
cmake_minimum_required(VERSION 3.0)
set(CMAKE_CXX_STANDARD 14)
add_compile_options(-fvisibility=hidden)
add_library(Tree_c SHARED Tree_c.cpp)
%%sh
rm -rf CMakeFiles/
rm CMakeCache.txt || echo
cmake . -GNinja
cmake --build .
strip -x libTree_c.dylib
%%sh
nm -U libTree_c.dylib
"""
Explanation: We can build this to create a shared library with our C-API symbols exposed.
End of explanation
"""
import ctypes
ctypes.c_object_p = ctypes.POINTER(ctypes.c_void_p)
tree_lib = ctypes.cdll.LoadLibrary("libTree_c.dylib")
"""
Explanation: Python interop
We can use Python's ctypes module to interact with the C shared-library.
End of explanation
"""
tree_lib.Tree_create.argtypes = [ctypes.c_int]
tree_lib.Tree_create.restype = ctypes.c_object_p
tree_lib.Tree_dispose.argtypes = [ctypes.c_object_p]
tree_lib.Tree_dispose.restype = None
tree_lib.Tree_left.argtypes = [ctypes.c_object_p]
tree_lib.Tree_left.restype = ctypes.c_object_p
tree_lib.Tree_right.argtypes = [ctypes.c_object_p]
tree_lib.Tree_right.restype = ctypes.c_object_p
tree_lib.Tree_data.argtypes = [ctypes.c_object_p]
tree_lib.Tree_data.restype = ctypes.c_int
tree_lib.Tree_set_data.argtypes = [ctypes.c_object_p, ctypes.c_int]
tree_lib.Tree_set_data.restype = None
"""
Explanation: We need to tell ctypes about the arguments and return types of the functions.
By default ctypes assumes functions take no arguments and return an integer.
End of explanation
"""
root = tree_lib.Tree_create(2)
root
tree_lib.Tree_data(root)
tree_lib.Tree_set_data(root, 42)
tree_lib.Tree_data(root)
tree_lib.Tree_dispose(root)
"""
Explanation: We'll leave the string-related methods for now, interop there is not so easy.
Let's see what we can do with the fledgling Python API.
End of explanation
"""
class Tree(object):
def __init__(self, children=None, _p=None):
if _p:
self._ptr = _p
self._owner = False
else:
self._ptr = tree_lib.Tree_create(children)
self._owner = True
def __del__(self):
if self._owner:
tree_lib.Tree_dispose(self._ptr)
def __repr__(self):
return "<Tree data:{}>".format(self.data)
@property
def left(self):
p = tree_lib.Tree_left(self._ptr)
if not p:
return None
return Tree(_p=p)
@property
def right(self):
p = tree_lib.Tree_right(self._ptr)
if not p:
return None
return Tree(_p=p)
@property
def data(self):
return tree_lib.Tree_data(self._ptr)
@data.setter
def data(self, x):
tree_lib.Tree_set_data(self._ptr, x)
t = Tree(2)
t
t.data = 42
t
t.left
t.left.data = 6
t.left
"""
Explanation: So far, so not-very-Pythonic.
We want classes!
End of explanation
"""
# This kills the kernel
#left = Tree(3).left.left
#left.data
"""
Explanation: This looks good, but our crude attempts at memory management will fail if we start working with temporaries.
End of explanation
"""
%%file Tree_2_c.cpp
#include <memory>
#include "Tree_c.h"
#include "Tree.hpp"
using Tree_ptr = std::shared_ptr<Tree>*;
CTree Tree_create(int children) {
auto tree = std::make_unique<Tree>(children);
Tree_ptr p = new std::shared_ptr<Tree>(tree.release());
return reinterpret_cast<CTree>(p);
}
void Tree_dispose(CTree t) {
delete reinterpret_cast<Tree_ptr>(t);
}
CTree Tree_left(CTree t) {
const auto& tree = *reinterpret_cast<Tree_ptr>(t);
auto left = tree->left();
if(!left)
return nullptr;
Tree_ptr p = new std::shared_ptr<Tree>(tree, left);
return reinterpret_cast<CTree>(p);
}
CTree Tree_right(CTree t) {
const auto& tree = *reinterpret_cast<Tree_ptr>(t);
  auto right = tree->right();
if(!right)
return nullptr;
Tree_ptr p = new std::shared_ptr<Tree>(tree, right);
return reinterpret_cast<CTree>(p);
}
int Tree_data(CTree t) {
const auto& tree = *reinterpret_cast<Tree_ptr>(t);
return tree->data();
}
void Tree_set_data(CTree t, int x) {
const auto& tree = *reinterpret_cast<Tree_ptr>(t);
tree->set_data(x);
}
%%file CMakeLists.txt
cmake_minimum_required(VERSION 3.0)
set(CMAKE_CXX_STANDARD 14)
add_compile_options(-fvisibility=hidden)
add_library(Tree_2_c SHARED Tree_2_c.cpp)
%%sh
rm -rf CMakeFiles/
rm -f CMakeCache.txt
cmake . -GNinja
cmake --build .
strip -x libTree_2_c.dylib
"""
Explanation: Our Python classes don't know enough about the underlying C++ to do the memory management.
Our C-API implementation needs a re-think.
Defining a C-API (Again)
We can imbue the pointers passed across the API boundary with object lifetime details.
The aliasing constructor of std::shared_ptr
We can use the aliasing constructor of shared_ptr to keep objects alive while any subobject is exposed across the API boundary.
End of explanation
"""
import ctypes
ctypes.c_object_p = ctypes.POINTER(ctypes.c_void_p)
tree_lib2 = ctypes.cdll.LoadLibrary("libTree_2_c.dylib")
tree_lib2.Tree_create.argtypes = [ctypes.c_int]
tree_lib2.Tree_create.restype = ctypes.c_object_p
tree_lib2.Tree_dispose.argtypes = [ctypes.c_object_p]
tree_lib2.Tree_dispose.restype = None
tree_lib2.Tree_left.argtypes = [ctypes.c_object_p]
tree_lib2.Tree_left.restype = ctypes.c_object_p
tree_lib2.Tree_right.argtypes = [ctypes.c_object_p]
tree_lib2.Tree_right.restype = ctypes.c_object_p
tree_lib2.Tree_data.argtypes = [ctypes.c_object_p]
tree_lib2.Tree_data.restype = ctypes.c_int
tree_lib2.Tree_set_data.argtypes = [ctypes.c_object_p, ctypes.c_int]
tree_lib2.Tree_set_data.restype = None
class Tree2(object):
def __init__(self, children=None, _p=None):
if _p:
self._ptr = _p
else:
self._ptr = tree_lib2.Tree_create(children)
def __del__(self):
tree_lib2.Tree_dispose(self._ptr)
def __repr__(self):
return "<Tree data:{}>".format(self.data)
@property
def left(self):
p = tree_lib2.Tree_left(self._ptr)
if not p:
return None
return Tree2(_p=p)
@property
def right(self):
p = tree_lib2.Tree_right(self._ptr)
if not p:
return None
return Tree2(_p=p)
@property
def data(self):
return tree_lib2.Tree_data(self._ptr)
@data.setter
def data(self, x):
tree_lib2.Tree_set_data(self._ptr, x)
# This no longer kills the kernel
left = Tree2(3).left.left.left
left.data
root = Tree2(3)
left = root.left
left.data = 42
del root
left.data
"""
Explanation: A Safer Python API
End of explanation
"""
import sys
sys.path.insert(0,'..')
import ffig.clang.cindex
index = ffig.clang.cindex.Index.create()
translation_unit = index.parse("Tree.hpp", ['-x', 'c++', '-std=c++14', '-I../ffig/include'])
import asciitree
def node_children(node):
return (c for c in node.get_children() if c.location.file.name == "Tree.hpp")
print(asciitree.draw_tree(translation_unit.cursor,
lambda n: [c for c in node_children(n)],
lambda n: "%s (%s)" % (n.spelling or n.displayname, str(n.kind).split(".")[1])))
"""
Explanation: In addition to memory management, we have string translation and exception handling to think about.
Given time constraints, I won't cover that here.
FFIG
Writing C-API and Python bindings out by hand is time consuming and more than a little error prone.
There's not a lot of creativity required once the approach is worked out.
We want to generate it.
Parsing C++ with libclang
libclang has Python bindings and exposes enough of the AST that we can extract all the information we need.
End of explanation
"""
import ffig.cppmodel
import ffig.clang.cindex
model = ffig.cppmodel.Model(translation_unit)
model
model.classes[-5:]
model.classes[-1].methods
"""
Explanation: We create some simple classes of our own to make handling the relevant AST info easy.
End of explanation
"""
from jinja2 import Template
template = Template(R"""
C++ 17 will bring us:
{%for feature in features%}
* {{feature}}
{% endfor%}
""")
print(template.render(
{'features':['variant',
'optional',
'inline variables',
'fold-expressions',
'mandated-copy-elision']}))
"""
Explanation: Jinja2 templates
Jinja2 is a lightweight web-templating engine used in Flask and we can use it to generate code from AST info.
End of explanation
"""
tree_ast = model.classes[-1]
from jinja2 import Template
template = Template(R"""\
#ifndef {{class.name|upper}}_H
#define {{class.name|upper}}_H
struct {{class.name}}_t;
typedef {{class.name}}_t* {{class.name}};
{{class.name}} {{class.name}}_create();
void {{class.name}}_dispose({{class.name}} my{{class.name}});
{%- for m in class.methods %}{% if not m.arguments %}
{{class.name}} {{class.name}}_{{m.name}}({{class.name}} my{{class.name}});
{%- endif %}{% endfor %}
#else // {{class.name|upper}}_H
#endif // {{class.name|upper}}_H
""")
print(template.render({'class':tree_ast}))
"""
Explanation: Jinja2 and libclang
We can feed the AST info to a Jinja2 template to write some code for us.
End of explanation
"""
%%file Shape.h
#include "ffig/attributes.h"
#include <stdexcept>
#include <string>
struct FFIG_EXPORT Shape {
virtual ~Shape() = default;
virtual double area() const = 0;
virtual double perimeter() const = 0;
virtual const char* name() const = 0;
};
static const double pi = 3.14159;
class Circle : public Shape {
const double radius_;
public:
double area() const override {
return pi * radius_ * radius_;
}
double perimeter() const override {
return 2 * pi * radius_;
}
const char* name() const override {
return "Circle";
}
Circle(double radius) : radius_(radius) {
if ( radius < 0 ) {
std::string s = "Circle radius \""
+ std::to_string(radius_) + "\" must be non-negative.";
throw std::runtime_error(s);
}
}
};
%%sh
cd ..
rm -rf demos/ffig_output
mkdir demos/ffig_output
python -m ffig -b rb.tmpl python -m Shape -i demos/Shape.h -o demos/ffig_output
ls -R demos/ffig_output
%cat ffig_output/Shape_c.h
%%sh
cd ../
python -m ffig -b rb.tmpl python -m Shape -i demos/Shape.h -o demos
%%file CMakeLists.txt
cmake_minimum_required(VERSION 3.0)
set(CMAKE_CXX_STANDARD 14)
include_directories(../ffig/include)
add_compile_options(-fvisibility=hidden)
add_library(Shape_c SHARED Shape_c.cpp)
%%sh
rm -rf CMakeFiles/
rm -f CMakeCache.txt
cmake . -GNinja
cmake --build .
strip -x libShape_c.dylib
import Shape
c = Shape.Circle(5)
print("A {} with radius {} has area {}".format(c.name(), 8, c.area()))
%%script /opt/intel/intelpython27/bin/python
import Shape
Shape.Config.set_library_path(".")
c = Shape.Circle(8)
print("A {} with radius {} has area {}".format(c.name(), 8, c.area()))
Shape.Circle(-5)
%%ruby
load "Shape.rb"
c = Circle.new(8)
puts("A #{c.name()} with radius #{8} has area #{c.area()}")
"""
Explanation: Generating a Python API with FFIG
FFIG can be invoked to generate bindings for us.
FFIG requires that a class is annotated to create bindings for it.
End of explanation
"""
|
jbwhit/WSP-312-Tips-and-Tricks | notebooks/01-Tips-and-tricks.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format='retina'
# Add this to python2 code to make life easier
from __future__ import absolute_import, division, print_function
# standard library imports
from itertools import combinations
import os
import string
import sys
import warnings

# related third party imports
from IPython.display import IFrame, HTML, YouTubeVideo
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.pyplot import GridSpec
import mpld3
import numpy as np
# don't do:
# from numpy import *
import pandas as pd
import seaborn as sns
sns.set();
plt.rcParams['figure.figsize'] = (12, 8)
sns.set_style("darkgrid")
sns.set_context("poster", font_scale=1.3)
warnings.filterwarnings('ignore')
"""
Explanation: Best practices
Let's start with pep8 (https://www.python.org/dev/peps/pep-0008/)
Imports should be grouped in the following order:
standard library imports
related third party imports
local application/library specific imports
You should put a blank line between each group of imports.
Put any relevant `__all__` specification after the imports.
End of explanation
"""
df = pd.read_csv("../data/coal_prod_cleaned.csv")
# !conda install qgrid -y
df.head()
# Check out http://nbviewer.ipython.org/github/quantopian/qgrid/blob/master/qgrid_demo.ipynb for more (including demo)
df.shape
# This broke w/ Notebooks 5.0
import qgrid # Put imports at the top
qgrid.nbinstall(overwrite=True)
qgrid.show_grid(df[['MSHA_ID', 'Year', 'Mine_Name', 'Mine_State', 'Mine_County']], remote_js=True)
ls
"""
Explanation: Look at Pandas Dataframes
this is italicized
End of explanation
"""
# !conda install pivottablejs -y
df = pd.read_csv("../data/mps.csv", encoding="ISO-8859-1")
df.head(10)
"""
Explanation: Pivot Tables w/ pandas
http://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/
End of explanation
"""
# Province, Party, Average, Age, Heatmap
from pivottablejs import pivot_ui
pivot_ui(df)
"""
Explanation: Enhanced Pandas Dataframe Display
End of explanation
"""
# in select mode, shift j/k (to select multiple cells at once)
# split cell with ctrl shift -
first = 1
second = 2
third = 3
"""
Explanation: Keyboard shortcuts
For help, ESC + h
End of explanation
"""
import numpy as np
np.linspace(start=0, stop=10)  # place the cursor inside the parentheses and press shift-tab to see the signature
"""
Explanation: Different heading levels
With text and $\LaTeX$ support.
You can also get monospaced fonts by indenting 4 spaces:
mkdir toc
cd toc
Wrap with triple-backticks and language:
bash
mkdir toc
cd toc
wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
End of explanation
"""
```sql
SELECT first_name,
last_name,
year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
```
%%bash
pwd
for i in *.ipynb
do
du -h $i
done
echo "break"
echo
du -h *ipynb
"""
Explanation: SQL
SELECT *
FROM tablename
End of explanation
"""
%%writefile ../scripts/temp.py
from __future__ import absolute_import, division, print_function
I'm not cheating!
!cat ../scripts/temp.py
"""
Explanation: Other cell-magics
End of explanation
"""
def silly_absolute_value_function(xval):
"""Takes a value and returns the value."""
xval_sq = xval ** 2.0
1 + 4
xval_abs = np.sqrt(xval_sq)
return xval_abs
silly_absolute_value_function?
silly_absolute_value_function??
silly_absolute_value_function()
import numpy as np
# This doesn't work because ufunc
np.linspace??
# Indent/dedent/comment
for _ in range(5):
df["one"] = 1
df["two"] = 2
df["three"] = 3
df["four"] = 4
"""
Explanation: Tab; shift-tab; shift-tab-tab; shift-tab-tab-tab-tab; and more!
End of explanation
"""
df["one_better_name"] = 1
df["two_better_name"] = 2
df["three_better_name"] = 3
df["four_better_name"] = 4
"""
Explanation: Multicursor magic
End of explanation
"""
|
quantopian/research_public | notebooks/data/eventvestor.issue_equity/notebook.ipynb | apache-2.0 | # import the dataset
from quantopian.interactive.data.eventvestor import issue_equity
# or if you want to import the free dataset, use:
# from quantopian.interactive.data.eventvestor import issue_equity_free
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
issue_equity.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
issue_equity.count()
# Let's see what the data looks like. We'll grab the first three rows.
issue_equity[:3]
"""
Explanation: EventVestor: Issue Equity
In this notebook, we'll take a look at EventVestor's Issue Equity dataset, available on the Quantopian Store. This dataset spans January 01, 2007 through the current day, and documents events and announcements covering secondary equity issues by companies.
Blaze
Before we dig into the data, we want to tell you about how you generally access Quantopian Store data sets. These datasets are available through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
from odo import odo
odo(expr, pandas.DataFrame)
Free samples and limits
One other key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free one includes about three years of historical data, though not up to the current day.
With preamble in place, let's get started:
End of explanation
"""
issues = issue_equity[('2014-12-31' < issue_equity['asof_date']) &
(issue_equity['asof_date'] <'2016-01-01') &
(issue_equity.issue_amount < 20)&
(issue_equity.issue_units == "$M")]
# When displaying a Blaze Data Object, the printout is automatically truncated to ten rows.
issues.sort('asof_date')
"""
Explanation: Let's go over the columns:
- event_id: the unique identifier for this event.
- asof_date: EventVestor's timestamp of event capture.
- trade_date: for event announcements made before trading ends, trade_date is the same as event_date. For announcements issued after market close, trade_date is next market open day.
- symbol: stock ticker symbol of the affected company.
- event_type: this should always be Issue Equity.
- event_headline: a brief description of the event
- issue_amount: value of the equity issued in issue_units
- issue_units: units of the issue_amount: most commonly millions of dollars or millions of shares
- issue_stage: phase of the issue process: announcement, closing, pricing, etc. Note: currently, there appear to be unrelated entries in this column. We are speaking with the data vendor to amend this.
- event_rating: this is always 1. The meaning of this is uncertain.
- timestamp: this is our timestamp on when we registered the data.
- sid: the equity's unique identifier. Use this instead of the symbol.
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases.
We can select columns and rows with ease. Below, we'll fetch all 2015 equity issues smaller than $20M.
End of explanation
"""
df = odo(issues, pd.DataFrame)
df = df[df.issue_stage == "Announcement"]
df = df[['sid', 'issue_amount', 'asof_date']].dropna()
# When printing a pandas DataFrame, the head 30 and tail 30 rows are displayed. The middle is truncated.
df
"""
Explanation: Now suppose we want a DataFrame of the Blaze Data Object above, want to filter it further down to the announcements only, and we only want the sid, issue_amount, and the asof_date.
End of explanation
"""
|
PythonFreeCourse/Notebooks | week08/3_Exceptions.ipynb | mit | counter = 0
while counter < 10
print("Stop it!")
counter += 1
"""
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the Python learning project: a snake drawn in yellow and blue, weaving between the letters of the course name 'Learning Python'. The slogan above the course name reads 'a free project for learning programming in Hebrew'.">
<span style="text-align: left; direction: ltr; float: left;">Exceptions</span>
<span style="text-align: left; direction: ltr; float: left; clear: both;">Introduction</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Back in the very first week of the course, we warned you that the computer can be a tough colleague to work with.<br>
It always does exactly what you told it to do, and cannot recover on its own from even the smallest mistake.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Throughout the course you have run into many exceptions ("errors") that Python raised.<br>
Some of them were caused by basic mistakes in the code, such as a missing colon at the end of an <code>if</code> line,<br>
and some were caused by problems discovered later on – such as a file you tried to open that did not exist on the computer.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
In this chapter we will dive deep into exceptions.<br>
We will learn which kinds of exceptions exist, how to decipher them and how they are structured in Python, how to handle them, and how to raise exceptions ourselves.
</p>
<span style="text-align: left; direction: ltr; float: left; clear: both;">Definition</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
An <dfn>exception</dfn> represents a failure that occurred while Python was trying to parse our code or to run it.<br>
When the code crashes and an error message is displayed, we say that Python <dfn>raises an exception</dfn>.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
We will distinguish between two kinds of exceptions: syntax errors and general exceptions.
</p>
<span style="text-align: left; direction: ltr; float: left; clear: both;">Syntax Errors</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Python raises a <dfn>syntax error</dfn> when we write code that it cannot parse.<br>
This usually happens when we did not follow Python's syntax rules and forgot a certain character – a closing parenthesis, a quotation mark, or a colon at the end of a line.<br>
You have surely run into a similar error before:
</p>
End of explanation
"""
names = (
"John Cleese",
"Terry Gilliam",
"Eric Idle",
"Michael Palin",
"Graham Chapman",
"Terry Jones",
for name in names:
print(name)
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
Python tries to give us as much information as possible about the source of the error:<br>
</p>
<ol style="text-align: left; direction: ltr; float: left; clear: both;">
<li>The first line shows where the error is: the file (if there is one) and the number of the line where the error was found.</li>
<li>The second line shows the code in which Python found the error.</li>
<li>The third line shows an arrow that points visually to where the error was found.</li>
<li>On the fourth line Python explains what happened and gives a short description of the error. In this case – <code>SyntaxError</code>.</li>
</ol>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
When it comes to syntax errors, it is worth looking at the reported location with a critical eye.<br>
Sometimes, Python will include inaccurate information about the location of the error:
</p>
End of explanation
"""
a = int(input("Please enter the first number: "))
b = int(input("Please enter the second number: "))
print(a // b)
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
In the code above you can see that we forgot to close the parenthesis we opened on the first line.<br>
The error message Python shows points at the loop as the source of the problem, while the real problem is clearly the unclosed parenthesis.<br>
Python's inaccurate messages around syntax errors often confuse beginner programmers.<br>
Our advice: if you are stuck on such an error, check whether its source is in the surrounding code, not necessarily where Python points.
</p>
<span style="text-align: left; direction: ltr; float: left; clear: both;">General Exceptions</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Even when your code follows Python's syntax rules, problems may still come up while the code is running.<br>
As you have surely already experienced, the range of possible problems is very wide – from division by zero, through a misspelled variable name, to an attempt to open a nonexistent file.<br>
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Unlike syntax errors, Python raises these other exceptions only when it actually reaches and runs the code that causes them.<br>
Let's see an example of such an exception:
</p>
End of explanation
"""
a = 5
b = 0
print(a // b)
"""
Explanation: <div class="align-center" style="display: flex; text-align: left; direction: ltr; clear: both;">
<div style="display: flex; width: 10%; float: left; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Think of all the exceptions a mischievous user could squeeze out of the code in the cell above.
</p>
</div>
<div style="display: flex; width: 20%; border-left: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: ltr; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve before you continue!
</p>
</div>
</div>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
One of the first exceptions we think of is division by zero, which is mathematically undefined.<br>
Let's see what happens when we assign the value 0 to the variable <var>b</var>:
</p>
End of explanation
"""
a = int("5")
b = int("a")
print(a // b)
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
Python raised a <var>ZeroDivisionError</var> on line 3, and once again it printed a fairly detailed message.<br>
Just as with the syntax error, the last line tells us what actually happened: you divided by zero, and that is not allowed.<br>
On the same line we also see the <dfn>exception type</dfn> (<var>ZeroDivisionError</var>) – the general category the exception belongs to.<br>
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
We can see a different kind of exception if we pass a letter instead of a number to one of the variables:
</p>
End of explanation
"""
a = int("5")
b = int("a")
print(a // b)
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
Once again Python raised an exception, but of a different type.<br>
A <var>ValueError</var> indicates that the value we passed was of the right type (class), but did not fit the expression we asked Python to run.<br>
The message Python displayed explains that the string "a" cannot be converted to a base-10 integer.
</p>
<span style="text-align: left; direction: ltr; float: left; clear: both;">Reading the Error Message</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Let's take the last exception we got as an example:
</p>
End of explanation
"""
def division(a, b):
return int(a) // int(b)
division("meow", 5)
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
Let's try to understand in depth the different parts of the message:
</p>
<figure>
<img src="images/exception_parts.svg?v=3" style="margin-right: auto; margin-left: auto; text-align: center;" alt="An image illustrating the different parts of an exception message. The parts, in order: the name of the file where the exception was found, the location of the exception, an arrow pointing to the location of the exception, the line where the exception was detected, the exception's category, and the reason the exception occurred. Visually, one side shows colored boxes describing each part, and the other side shows the error message itself, with each part colored to match."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: ltr;">
A figure explaining the parts of the message shown when an exception is raised.
</figcaption>
</figure>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
When an exception is raised inside a function, the message shows a trace of the chain of calls that led to it:
</p>
End of explanation
"""
def get_file_content(filepath):
with open(filepath) as file_handler:
return file_handler.read()
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
This message contains a <dfn>Traceback</dfn> – a kind of story meant to help us understand why the exception occurred.<br>
In the Traceback we see the line where the exception occurred, and above it the chain of function calls that caused that line to run.<br>
To make better sense of a Traceback, it is customary to read it from the end to the beginning.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
First, look at the last line and read why Python raised the exception.<br>
The message is <q dir="ltr">invalid literal for int() with base 10: 'meow'</q> – we tried to convert the string "meow" to an integer, and that is invalid.<br>
It is also worth looking at the exception type (<var>ValueError</var>) to get a general idea of the creature we are dealing with.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Let's move on to the Traceback.<br>
In the paragraph above the error message, look at the line of code that caused the exception: <code>return int(a) // int(b)</code>.<br>
At this point we have enough information to decipher the exception: we tried an invalid conversion of the string "meow" to an integer on line 2.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
If you are still not satisfied and find it hard to see where the exception came from, you can keep climbing up the Traceback.<br>
Let's move to the code that caused the line <code dir="ltr">return int(a) // int(b)</code> to run: <code dir="ltr">division("meow", 5)</code>.<br>
We can see that this code passes the value "meow" to the first parameter of the function <var>division</var>, which then tries to convert it to an integer.<br>
Now it is entirely clear where the exception came from.
</p>
<span style="text-align: left; direction: ltr; float: left; clear: both;">Handling Exceptions</span>
<span style="text-align: left; direction: ltr; float: left; clear: both;">The Idea</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Sometimes we know in advance that a line of code we wrote may raise an exception.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Opening a file for reading, for example, may fail if the path to the file does not exist on the computer.<br>
Let's write a function that receives a path to a file and returns its content, to demonstrate the idea:
</p>
End of explanation
"""
princess_location = get_file_content('castle.txt')
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
Let's try to retrieve the content of the file castle.txt:
</p>
End of explanation
"""
def get_file_content(filepath):
    try:  # try to run the following lines
with open(filepath) as file_handler:
return file_handler.read()
    except FileNotFoundError:  # ...if that failed with this exception type, run this instead
print(f"Couldn't open the file: {filepath}.")
return ""
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
In the case above we tried to open a file that does not actually exist on the computer, and Python raised a <var>FileNotFoundError</var>.<br>
It follows that the function <var>get_file_content</var> may cause a <var>FileNotFoundError</var> to be raised every time it is passed a nonexistent path.<br>
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
As responsible programmers, it matters to us that the program does not crash every time the user supplies a wrong path to a file.<br>
Python lets us define in advance how to handle cases where we anticipate that an exception will be raised, and thus prevent the program from crashing.<br>
We do this with the keywords <code>try</code> and <code>except</code>.<br>
</p>
<span style="text-align: left; direction: ltr; float: left; clear: both;">The Basic Syntax</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Before we dive into code, let's understand the general idea of <code>try</code> and <code>except</code>.<br>
Our goal is to provide alternative behavior for code that may fail because of a specific exception we anticipated.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Using <code>try</code> and <code>except</code> in Python looks more or less like this:
</p>
<ol style="text-align: left; direction: ltr; float: left; clear: both;">
<li>Try to execute the following lines of code.</li>
<li>If you failed because an exception of type <em>such-and-such</em> was raised, execute the following alternative lines instead.</li>
</ol>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
Let's implement this in code:
</p>
End of explanation
"""
princess_location = get_file_content("castle.txt")
princess_location
"""
Explanation: <p style="text-align: left; direction: ltr; float: left; clear: both;">
And let's try once more to retrieve the content of the file castle.txt:
</p>
End of explanation
"""
princess_location = get_file_content("?")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כפי שאפשר לראות בדוגמה, הפונקציה לא התריעה על חריגת <var>FileNotFoundError</var>, אלא הדפיסה לנו הודעה והחזירה מחרוזת ריקה.<br>
זה קרה כיוון שעטפנו את הקוד שעלול להתריע על חריגה ב־<code>try</code>,
והגדרנו לפייתון בתוך ה־<code>except</code> מה לבצע במקרה של כישלון.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
התחביר של <code>try</code> ... <code>except</code> הוא כדלהלן:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>נתחיל עם שורה שבה כתוב אך ורק <code dir="ltr">try:</code>.</li>
<li>בהזחה, נכתוב את כל מה שאנחנו רוצים לנסות לבצע ועלול לגרום להתרעה על חריגה.</li>
<li>בשורה הבאה, נצא מההזחה ונכתוב <code dir="ltr">except ExceptionType:</code>, כאשר <var>ExceptionType</var> הוא סוג החריגה שנרצה לתפוס.</li>
<li>בהזחה (שוב), נכתוב קוד שנרצה לבצע אם פייתון התריעה על חריגה מסוג <var>ExceptionType</var> בזמן שהקוד המוזח תחת ה־<code>try</code> רץ.</li>
</ol>
<figure>
<img src="images/try_except_syntax.svg?v=2" style="width: 800px; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה יש את הקוד מהדוגמה האחרונה, שלידו כתוב בעברית מה כל חלק עושה. ליד ה־try כתוב 'נסה לבצע...'. ליד שתי השורות המוזחות בתוכו כתוב 'קוד שעשוי לגרום להתרעה על חריגה'. ליד ה־except FileNotFoundError כתוב 'אם הייתה התרעה על חריגה מסוג FileNotFoundError', וליד הקוד המוזח בתוכו כתוב בצע במקום את הפעולות הבאות. הכתוביות מופיעות משמאל לקוד, ולידן קו שמסמן לאיזה חלק בקוד הן שייכות."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">
התחביר של <code>try</code> ... <code>except</code>
</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השורות שב־<code>try</code> ירוצו כרגיל.<br>
אם לא תהיה התרעה על חריגה, פייתון תתעלם ממה שכתוב בתוך ה־<code>except</code>.<br>
אם אחת השורות בתוך ה־<code>try</code> גרמה להתרעה על חריגה מהסוג שכתוב בשורת ה־<code>except</code>,<br>
פייתון תפסיק מייד לבצע את הקוד שכתוב ב־<code>try</code>, ותעבור להריץ את הקוד המוזח בתוך ה־<code>except</code>.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ניקח את דוגמת הקוד שלמעלה, וננסה להבין כיצד פייתון קוראת אותה.<br>
פייתון תתחיל בהרצת השורה <code dir="ltr">with open("castle.txt") as file_handler:</code> ותתריע על חריגה, משום שהקובץ castle.txt לא נמצא.<br>
כיוון שהחריגה היא מסוג <code>FileNotFoundError</code>, היא תחפש את המילים <code dir="ltr">except FileNotFoundError:</code> מייד בסיום ההזחה.<br>
הביטוי הזה קיים בדוגמה שלנו, ולכן פייתון תבצע את מה שכתוב בהזחה שאחריו במקום להתריע על חריגה.
</p>
<figure>
<img src="images/try_except_flow.svg?v=1" style="width: 700px; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה יש תרשים זרימה המציג כיצד פייתון קוראת את הקוד במבנה try-except. התרשים בסגנון קומיקסי עם אימוג'ים. החץ ממנו נכנסים לתרשים הוא 'התחל ב־try' עם סמלון של דגל מרוצים, שמוביל לעיגול שבו כתוב 'הרץ את השורה המוזחת הבאה בתוך ה־try'. מתוך עיגול זה יש חץ לעצמו, שבו כתוב 'אין התראה על חריגה' עם סמלון של וי ירוק, וחץ נוסף שבו כתוב 'אין שורות נוספות ב־try' עם סמלון של דגל מרוצים. החץ האחרון מוביל לעיגול ללא מוצא שבו כתוב 'סיימנו! המשך לקוד שאחרי ה־try וה־except'. מהעיגול הראשון יוצא גם חץ שעליו כתוב 'התרעה על חריגה' עם סמלון של פיצוץ, ומוביל לעיגול שבו כתוב 'חפש except עם סוג החריגה'. מעיגול זה יוצאים שני חצים: הראשון 'לא קיים', עם סמלון של איקס אדום שמוביל לעיגול ללא מוצא בו כתוב 'זרוק התרעה על חריגה'. השני 'קיים' עם סמלון של וי ירוק שמוביל לעיגול 'הרץ את השורות המוזחות בתוך ה־except'. מעיגול זה עצמו יוצא חץ לעיגול עליו סופר מקודם, 'סיימנו! המשך לקוד שאחרי ה־try וה־except' שתואר קודם."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">
תרשים זרימה המציג כיצד פייתון קוראת את הקוד במבנה <code>try</code> ... <code>except</code>
</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
היכולת החדשה שקיבלנו נקראת "<dfn>לתפוס חריגות</dfn>", או "<dfn>לטפל בחריגות</dfn>".<br>
היא מאפשרת לנו לתכנן קוד שיגיב לבעיות שעלולות להתעורר במהלך ריצת הקוד שלנו.
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function that accepts two numbers and divides the first number by the second.<br>
If the second number is zero, return <samp>0</samp> as the result.<br>
Use exceptions.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve before you continue!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Multiple Kinds of Exceptions</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Especially mischievous users will not stop there.<br>
The <var>get_file_content</var> function is protected against attempts to fetch nonexistent files, true,<br>
but an unusually playful user might try to pass the function strings containing characters that paths are not allowed to include:
</p>
End of explanation
"""
def get_file_content(filepath):
try:
with open(filepath) as file_handler:
return file_handler.read()
except FileNotFoundError:
print(f"Couldn't open the file: {filepath}.")
return ""
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Looking at the exception and at the original code, we can see that indeed we never asked anywhere to catch an exception of type <var>OSError</var>.
</p>
End of explanation
"""
def get_file_content(filepath):
try:
with open(filepath) as file_handler:
return file_handler.read()
except (FileNotFoundError, OSError):
print(f"Couldn't open the file: {filepath}.")
return ""
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
From here, we can choose to fix the code in one of two ways.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The first way is to reuse the code we already wrote for handling <var>FileNotFoundError</var> exceptions.<br>
In that case, we change what comes after the <code>except</code> to a tuple whose items are all the exception types we want to handle:
</p>
End of explanation
"""
def get_file_content(filepath):
try:
with open(filepath) as file_handler:
return file_handler.read()
except FileNotFoundError:
print(f"Couldn't open the file: {filepath}.")
return ""
except OSError:
print(f"The path '{filepath}' is invalid.")
return ""
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
In the code above we arranged for both <var>FileNotFoundError</var> exceptions and <var>OSError</var> exceptions to be handled in the same way.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
But what if we want <var>OSError</var> exceptions to be handled differently from <var>FileNotFoundError</var> exceptions?<br>
In that case we turn to the second way, whose implementation is quite simple – in addition to the existing code, we write a new block of code that uses <code>except</code>:
</p>
End of explanation
"""
princess_location = get_file_content("?")
princess_location
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
In the code above we added another piece of code to <var>get_file_content</var>.<br>
The <code>except</code> we added lets Python handle exceptions of type <var>OSError</var>, like the one raised when we put illegal characters into the file path.<br>
Let's see the code in action:
</p>
End of explanation
"""
def a():
print("Dividing by zero...")
return 1 / 0
print("End of a.")
def b():
print("Calling a...")
a()
print("End of b.")
def c():
print("Calling b...")
b()
print("End of c.")
print("Start.")
print("Calling c...")
c()
print("Stop.")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Luckily, we are not limited in the number of <code>except</code> clauses that can be added after the <code>try</code>.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Identifying the Exception Type</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Whenever Python raises an exception, it also displays the category the exception belongs to.<br>
As we have already seen, syntax errors belong to the <var>SyntaxError</var> category, and exceptions caused by a bad value belong to the <var>ValueError</var> category.<br>
We have also met <var>FileNotFoundError</var> and <var>OSError</var> errors, and you have no doubt run into assorted exceptions yourselves during the course.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We can say, then, that Python has many kinds of exceptions that are part of the language.<br>
With so many exception types it is sometimes easy to get lost, and it becomes tricky to figure out which exception Python might raise.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Here are a few useful ways to cope with the problem and find out which exception types your code might trigger:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li><em>Documentation</em> – when using a particular function or method, read its documentation to learn which exceptions Python may raise when it runs.</li>
<li><em>Search</em> – use a search engine to ask which exceptions a general action you performed might raise. Say: <q>python exceptions read file</q>.</li>
<li><em>Try it yourselves</em> – if you want to find out the exception type in a particular case, run that case in the notebook and check which exception type Python raises.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For your convenience, the Python documentation has a page that explains <a href="https://docs.python.org/3/library/exceptions.html">all the exception types Python defines</a>.
</p>
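The try-it-yourself approach can be sketched with a few generic probes (a minimal sketch; the specific expressions below are illustrative and are not taken from this chapter's code):

```python
def exception_type_of(action):
    """Run a zero-argument callable and return the exception class it raises, or None."""
    try:
        action()
    except Exception as err:
        return type(err)
    return None

print(exception_type_of(lambda: int("abc")))   # <class 'ValueError'>
print(exception_type_of(lambda: 1 / 0))        # <class 'ZeroDivisionError'>
print(exception_type_of(lambda: 1 + 1))        # None
```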
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Which exception type will Python raise when we access a list at a nonexistent index?<br>
What about accessing a list at an index that is a string?<br>
Which exception types might Python raise when the <code>index</code> method is called on a list?
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve before you continue!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Interim Exercise: The Division Program</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>super_division</var> that accepts an unlimited number of numeric parameters.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The function divides the first number by the second.<br>
It then divides that result by the third number to get a new result, divides the new result by the fourth number, and so on.<br>
For example: for the call <code dir="ltr">super_division(100, 10, 5, 2)</code> the function returns 1, since the expression $100 / 10 / 5 / 2$ evaluates to 1.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Use exception handling, and try to catch as many mischievous-user cases as you can.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Exception Propagation Up the Call Chain</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Raising an exception makes the program run a little differently from what we have known so far.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In the course of this notebook we have met two possible cases:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>Either the line that raised the exception sits directly under a <code>try-except</code> that matches the exception type, and then the exception is caught.</li>
<li>Or that line does not sit directly under a <code>try-except</code>, and then the exception crashes the program. In such a case, a Traceback is displayed.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
But it turns out the behind-the-scenes story is a little more involved.<br>
If a certain function does not know how to handle a raised exception, it asks the function that called it for help.<br>
It is a bit like borrowing sugar from the neighbors, the exceptions-and-functions edition.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Suppose function <var>A</var> contains a line that raised an exception, and it is not wrapped in a <code>try-except</code>.<br>
Before the program crashes, the exception is passed to the caller, function <var>B</var> – the one that invoked function <var>A</var>, where the exception was raised.<br>
At this point Python gives us another opportunity to catch the exception.<br>
If, inside function <var>B</var>, the line that called function <var>A</var> is wrapped in a <code>try-except</code> that catches the right exception type, the exception will be handled.<br>
If not, the exception is passed on to function <var>C</var>, which called function <var>B</var>, and so on, until we reach the top of the call chain.<br>
If no one up the call chain caught the exception, the program crashes and a Traceback is displayed.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's demonstrate with a not particularly sophisticated piece of code.<br>
Here is the second possibility we talked about – no one in the call chain catches the exception, the program crashes, and a Traceback is printed:
</p>
End of explanation
"""
def a():
print("Dividing by zero...")
try:
return 1 / 0
except ZeroDivisionError:
print("Never Dare Anyone to Divide By Zero!")
print("https://reddit.com/2rkuek/")
print("End of a.")
def b():
print("Calling a...")
a()
print("End of b.")
def c():
print("Calling b...")
b()
print("End of c.")
print("Start.")
print("Calling c...")
c()
print("Stop.")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
And here is an example of the first possibility – in which we catch the exception the moment it occurs:
</p>
End of explanation
"""
def a():
print("Dividing by zero...")
return 1 / 0
print("End of a.")
def b():
print("Calling a...")
a()
print("End of b.")
def c():
print("Calling b...")
try:
b()
except ZeroDivisionError:
print("Never Dare Anyone to Divide By Zero!")
print("https://reddit.com/2rkuek/")
print("End of c.")
print("Start.")
print("Calling c...")
c()
print("Stop.")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
The interesting option is the third one.<br>
What happens if someone up the call chain – say, function <var>c</var> – is the one who decides to catch the exception:
</p>
End of explanation
"""
MONTHS = [
"January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December",
]
def get_month_name(index):
"""Return the month name given its number."""
return MONTHS[index - 1]
def get_month_number(name):
"""Return the month number given its name."""
return MONTHS.index(name) + 1
def is_same_month(index, name):
return (
get_month_name(index) == name
and get_month_number(name) == index
)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Notice that in this case we skipped the lines that print the end-of-run messages of functions <var>a</var> and <var>b</var>.<br>
On line 3 an illegal division by 0 took place, raising a <var>ZeroDivisionError</var>.<br>
Since the code was not wrapped in a <code>try-except</code>, the exception propagated to the <code dir="ltr">a()</code> line located in function <var>b</var>.<br>
There, too, no one handled the exception with a <code>try-except</code>, so it kept propagating to the function that called <var>b</var>, namely <var>c</var>.<br>
In <var>c</var>, at long last, the call to <var>b</var> was wrapped in a <code>try-except</code>, and that is where the exception was handled.<br>
From the point where the exception was handled, the program continued running as usual.
</p>
<figure>
<img src="images/exception_propogation.svg?v=3" style="margin-right: auto; margin-left: auto; text-align: center;" alt="The figure has 6 columns, each representing a stage and consisting of 4 boxes. In each stage an arrow points to the bottom box, labeled 'The program'. From it an arrow leads to the 'function c' box, from it to 'function b', and from it to 'function a'. Above stage 1 it says 'The program calls function c, which calls b, which calls a.' Stage 2's caption reads 'Python raises an exception in function a. Function a does not handle the exception.' Next to function a's box a BOOM icon marks the exception Python will raise. Stage 3's caption reads 'The exception, uncaught in function a, propagates back to b, its caller.' Function a's box carries a STOP sign showing its run has stopped; function b's box carries an explosion icon labeled BOOM showing the exception now sits in function b. An arrow from function a's box in stage 2 to function b's box in stage 3 marks the exception propagating up the call chain. Stage 4's caption reads 'The exception, uncaught in function b, propagates back to c, its caller.' The boxes of functions a and b carry STOP signs, and function c's box carries a BOOM icon, pointed to by an arrow from the BOOM of stage 3. Stage 5's caption reads 'Function c catches the exception and saves the program from crashing.' The icon next to function c's box changes from BOOM to a baseball glove, symbolizing catching the exception. Stage 6's caption reads 'The program continues running as usual from the catch point in function c.' The icons remain as in stage 5, but a new arrow runs from function c's box to the 'The program' box, and an outgoing arrow leaves the 'The program' box."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">
An illustration of how a raised exception propagates up the call chain.
</figcaption>
</figure>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Interim Exercise: Is the Fifth November?</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Read the following code, and solve the items that come after it.
</p>
End of explanation
"""
try:
1 / 0
except ZeroDivisionError as err:
print(type(err))
print('-' * 40)
print(dir(err))
"""
Explanation: <ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>Write two lines that call <code>is_same_month</code>, such that one of them returns <code>True</code> and the other returns <code>False</code>.</li>
<li>Which exceptions might Python raise in the first two functions? Catch them in the relevant functions. Return <code>None</code> if an exception was raised.</li>
<li>Bonus: can you think of a way to make the function crash anyway? If so, fix it so that it returns <code>None</code> in such a case as well.</li>
<li>Bonus: can you construct a call to <code>is_same_month</code> that returns <code>True</code> when it should return <code>False</code>?</li>
<li>Let's change approach: catch the exceptions at the level of <code>is_same_month</code> instead of in the functions that raised them. Return <code>False</code> if an exception was raised.</li>
</ol>
<span style="text-align: right; direction: rtl; float: right; clear: both;">The Relationship Between Exceptions and Classes</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Every raised exception is represented in Python by an instance.<br>
Using the <code>as</code> keyword, we can create a variable that points to that instance.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
After the <code>except</code> and before the colon, we write the expression <code>as VarName</code>, where <var>VarName</var> is a new variable name that will point to the exception instance.<br>
This expression lets us access the <var>VarName</var> variable, which holds the details of the exception, from any line indented under the <code>except</code>:
</p>
End of explanation
"""
try:
1 / 0
except ZeroDivisionError as err:
print(f"The error is '{err}'.")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
The beauty of the newly created variable, <var>err</var>, is that it is a Python instance created from the <var>ZeroDivisionError</var> class.<br>
<var>ZeroDivisionError</var> is therefore a class in every respect: it has an <code>__init__</code>, methods and attributes, as you can see in the code example above.</p>
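For example, every exception instance carries a standard <code>args</code> attribute with the arguments it was created with (a small sketch, independent of the surrounding lesson code):

```python
try:
    1 / 0
except ZeroDivisionError as err:
    caught = err

# The instance remembers the message it was constructed with.
print(caught.args)   # ('division by zero',)
print(str(caught))   # division by zero
```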
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's check whether it has a useful <code>__str__</code>:
</p>
End of explanation
"""
ZeroDivisionError.mro()
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
How convenient!<br>
This is a really good way to print the user an error message describing exactly what caused the error they experienced.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
This is a good moment to stop and think.<br>
There are many exception types, and every exception type has a class that represents it.<br>
Does that mean they all inherit from some abstract "exception" class?<br>
Let's check by looking at the class's <var>Method Resolution Order</var>.
</p>
End of explanation
"""
try:
1 / 0
except Exception as err:
print(f"The error is '{err}'.")
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Wow! That is an inheritance chain whose length would not embarrass the British royal line of succession.<br>
So it seems the division-by-zero exception (<var>ZeroDivisionError</var>) is a special case of an arithmetic exception.<br>
An arithmetic exception (<var>ArithmeticError</var>), in turn, inherits from <var>Exception</var>, which itself inherits from <var>BaseException</var>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Take a look at the <a href="https://docs.python.org/3/library/exceptions.html#exception-hierarchy">full inheritance hierarchy</a> shown in the Python documentation to get an impression:
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Since we are already fairly experienced in matters of inheritance, at this point we can develop a few interesting ideas.<br>
Does the very fact that <var>ZeroDivisionError</var> is a subclass of <var>Exception</var> mean that we can catch it using <var>Exception</var>?<br>
If it turns out we can, we will be able to catch a very large number of exception types this way.<br>
Let's check!
</p>
End of explanation
"""
search_in_directory(r"C:\Projects\Notebooks\week08", ["class", "int"])
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Well then, yes.<br>
We can write code this way that catches the great majority of exception types.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
At this point we should note that, counterintuitively, it is important to stay focused in our exception handling.<br>
Students who have just learned the idea of exception handling sometimes wrap their entire code in a <code>try-except</code>. That is a rather bad idea.<br>
From now on, our rule of thumb will be this:
</p>
<blockquote dir="rtl" style="direction: rtl; text-align: right; float: right; border-right: 5px solid rgba(0,0,0,.05); border-left: 0;">
Whenever you use <code>try-except</code>, minimize the amount of code inside the <code>try</code>, and handle the exception as specifically as possible.
</blockquote>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Exception handling is a mechanism that, given a raised exception, lets us run alternative code, or code that deals with the problem that arose.<br>
If we do not know exactly what the problem is, or do not intend to deal with it in a sensible way – we are better off not catching it.<br>
Needless exception handling can create "silent failures" in our software, which will be very hard to track down later.
</p>
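A hedged before-and-after sketch of that rule of thumb (the function names are made up for illustration):

```python
def average_broad(values):
    # Bad: the try wraps everything and catches everything,
    # so even an unrelated bug inside would be silently swallowed.
    try:
        total = sum(values)
        return total / len(values)
    except Exception:
        return 0

def average_narrow(values):
    # Better: only the risky division sits inside the try,
    # and only the specific exception we expect is caught.
    total = sum(values)
    try:
        return total / len(values)
    except ZeroDivisionError:
        return 0

print(average_narrow([2, 4]))  # 3.0
print(average_narrow([]))      # 0
```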
<span style="text-align: right; direction: rtl; float: right; clear: both;">Summary</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In this notebook we learned what exceptions are, how to read Python's error messages, and how to handle exceptions raised due to failures while the program runs.<br>
We saw how exceptions are represented in Python, how they work behind the scenes, and how to discover which exceptions might pop up while our code runs.<br>
We also learned that it is best to catch an error as locally as possible, and only when we know how to handle it in a sensible way.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Terms</span>
<dl style="text-align: right; direction: rtl; float: right; clear: both;">
<dt>Exception</dt>
<dd>
An instance representing an irregular, usually problematic, state detected while the program's code was running.<br>
Every exception has a type, and every exception type is represented by a Python class.
</dd>
<dt>Raising an exception (raise of an exception)</dt>
<dd>
A situation in which Python reports an error or an irregular condition in the program's run.<br>
Raising an exception changes the program's flow of execution and may cause it to crash.
</dd>
<dt>Traceback</dt>
<dd>
The chain of calls that led to the invocation of the function we are in at a given moment.<br>
In the context of exceptions, this is the chain of calls that led to the exception being raised.<br>
This call chain also appears in the error message displayed when Python raises an exception.
</dd>
<dt>Exception handling</dt>
<dd>
Also called <dfn>catching an exception</dfn>.<br>
When an exception is raised, the program's default behavior is to crash.<br>
A programmer can define in advance what should happen in case an exception is raised.<br>
In that case, the program's crash will be prevented.
</dd>
</dl>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Exercises</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Calculator</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>calc</var> that accepts two numbers and an arithmetic operator sign as parameters, in that order.<br>
The sign can be one of these: <code>+</code>, <code>-</code>, <code>*</code> or <code>/</code>.<br>
The function's goal is to return the result of the arithmetic operation applied to the two numbers.<br>
In your solution, make sure you handle every exception that challenging user input might raise.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Trying to Find Some Order Here</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>search_in_directory</var> that accepts a path and a list of keywords.<br>
The program will try to open every file located at the path, and for each keyword it will print all the files in which it appears.<br>
The program will also run over the subdirectories located at the given path (and their subdirectories, and so on), if there are any.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For example:
</p>
End of explanation
"""
|
stitchfix/d3-jupyter-tutorial | iris_scatterplot.ipynb | mit | from IPython.core.display import display, HTML
from string import Template
import pandas as pd
import json, random
HTML('<script src="lib/d3/d3.min.js"></script>')
"""
Explanation: Iris Scatterplot
A simple example of using a bl.ock as the basis for a D3 visualization in Jupyter
Using this bl.ocks example as a template, we will construct a scatterplot of the canonical Iris dataset.
Notebook Config
End of explanation
"""
filename = 'https://gist.githubusercontent.com/mbostock/3887118/raw/2e68ffbeb23fe4dadd9b0f6bca62e9def6ee9e17/data.tsv'
iris = pd.read_csv(filename,sep="\t")
iris.head()
"""
Explanation: Data
The bl.ocks example uses a tsv file with the iris dataset. If you click on the block number at the top of the bl.ocks post, it will take you to the github gist upon which this bl.ocks entry is based. From there you can navigate to the raw version of the tsv file, and read that into a Pandas dataframe, as below. (Mind you, there are also many many other ways of getting this canonical dataset.)
End of explanation
"""
iris_array_of_dicts = iris.to_dict(orient='records')
iris_array_of_dicts[:5]
"""
Explanation: A trick of the D3 trade is to know that its file readers usually output the data in the form of an array of dictionaries. As such, we will reformat our tabular data that way in preparation for it to be used in the graph below.
End of explanation
"""
css_text = '''
.axis path,
.axis line {
fill: none;
stroke: #000;
shape-rendering: crispEdges;
}
.dot {
stroke: #000;
}
'''
"""
Explanation: CSS and JavaScript based on bl.ocks example
Note that in the below css_text, we have removed the 'body' style reference from the original bl.ocks text. This is to avoid this style changing the rest of the notebook.
End of explanation
"""
js_text_template = Template('''
var margin = {top: 20, right: 20, bottom: 30, left: 40},
// **** width = 960 - margin.left - margin.right, ****
// **** height = 500 - margin.top - margin.bottom; ****
width = 720 - margin.left - margin.right,
height = 375 - margin.top - margin.bottom;
var x = d3.scale.linear()
.range([0, width]);
var y = d3.scale.linear()
.range([height, 0]);
var color = d3.scale.category10();
var xAxis = d3.svg.axis()
.scale(x)
.orient("bottom");
var yAxis = d3.svg.axis()
.scale(y)
.orient("left");
// **** var svg = d3.select("body").append("svg") ****
var svg = d3.select("#$graphdiv").append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");
// **** d3.tsv("data.tsv", function(error, data) { ****
// **** if (error) throw error; ****
var data = $python_data ;
data.forEach(function(d) {
d.sepalLength = +d.sepalLength;
d.sepalWidth = +d.sepalWidth;
});
x.domain(d3.extent(data, function(d) { return d.sepalWidth; })).nice();
y.domain(d3.extent(data, function(d) { return d.sepalLength; })).nice();
svg.append("g")
.attr("class", "x axis")
.attr("transform", "translate(0," + height + ")")
.call(xAxis)
.append("text")
.attr("class", "label")
.attr("x", width)
.attr("y", -6)
.style("text-anchor", "end")
.text("Sepal Width (cm)");
svg.append("g")
.attr("class", "y axis")
.call(yAxis)
.append("text")
.attr("class", "label")
.attr("transform", "rotate(-90)")
.attr("y", 6)
.attr("dy", ".71em")
.style("text-anchor", "end")
.text("Sepal Length (cm)")
svg.selectAll(".dot")
.data(data)
.enter().append("circle")
.attr("class", "dot")
.attr("r", 3.5)
.attr("cx", function(d) { return x(d.sepalWidth); })
.attr("cy", function(d) { return y(d.sepalLength); })
.style("fill", function(d) { return color(d.species); });
var legend = svg.selectAll(".legend")
.data(color.domain())
.enter().append("g")
.attr("class", "legend")
.attr("transform", function(d, i) { return "translate(0," + i * 20 + ")"; });
legend.append("rect")
.attr("x", width - 18)
.attr("width", 18)
.attr("height", 18)
.style("fill", color);
legend.append("text")
.attr("x", width - 24)
.attr("y", 9)
.attr("dy", ".35em")
.style("text-anchor", "end")
.text(function(d) { return d; });
// **** }); ****
''')
"""
Explanation: The javascript below was copied directly from the bl.ocks script text, and then six lines were changed, as noted by // **** (the double slash marks a comment in JavaScript, so these lines will not be executed). The first set of changes is to the width and height of the image. The second change is simply to reference a different DOM element as the starting point. The remaining changes are to replace the data-file reading step with a direct infusion of data into the script. (Note that the $ characters denote replacement points in the Template object.)
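The `$` substitution mechanism is just Python's built-in `string.Template`; a minimal standalone sketch:

```python
from string import Template

# `$name` placeholders are filled in by substitute(); everything else passes through.
js = Template('var data = $python_data; // rendered into #$graphdiv')
rendered = js.substitute({'python_data': '[1, 2, 3]', 'graphdiv': 'graph-div'})
print(rendered)  # var data = [1, 2, 3]; // rendered into #graph-div
```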
End of explanation
"""
html_template = Template('''
<style> $css_text </style>
<div id="graph-div"></div>
<script> $js_text </script>
''')
js_text = js_text_template.substitute({'python_data': json.dumps(iris_array_of_dicts),
'graphdiv': 'graph-div'})
HTML(html_template.substitute({'css_text': css_text, 'js_text': js_text}))
"""
Explanation: And finally, the viz
End of explanation
"""
|
ibm-cds-labs/simple-data-pipe-connector-flightstats | notebook/Flight Predict with Pixiedust.ipynb | apache-2.0 | !pip install --upgrade --user pixiedust
!pip install --upgrade --user pixiedust-flightpredict
"""
Explanation: Flight Delay Predictions with PixieDust
<img style="max-width: 800px; padding: 25px 0px;" src="https://ibm-watson-data-lab.github.io/simple-data-pipe-connector-flightstats/flight_predictor_architecture.png"/>
This notebook features a Spark Machine Learning application that predicts whether a flight will be delayed based on weather data. Read the step-by-step tutorial
The application workflow is as follows:
1. Configure the application parameters
2. Load the training and test data
3. Build the classification models
4. Evaluate the models and iterate
5. Launch a PixieDust embedded application to run the models
Prerequisite
This notebook is a follow-up to Predict Flight Delays with Apache Spark MLlib, FlightStats, and Weather Data. Follow the steps in that tutorial and at a minimum:
Set up a FlightStats account
Provision the Weather Company Data service
Obtain or build the training and test data sets
Learn more about the technology used:
Weather Company Data
FlightStats
Apache Spark MLlib
PixieDust
pixiedust_flightpredict
Install latest pixiedust and pixiedust-flightpredict plugin
Make sure you are running the latest pixiedust and pixiedust-flightpredict versions. After upgrading, restart the kernel before continuing to the next cells.
End of explanation
"""
import pixiedust_flightpredict
pixiedust_flightpredict.configure()
"""
Explanation: <h3>If PixieDust was just installed or upgraded, <span style="color: red">restart the kernel</span> before continuing.</h3>
Import required python package and set Cloudant credentials
Have available your credentials for Cloudant, Weather Company Data, and FlightStats, as well as the training and test data info from Predict Flight Delays with Apache Spark MLlib, FlightStats, and Weather Data
Run this cell to launch and complete the Configuration Dashboard, where you'll load the training and test data. Ensure all <i class="fa fa-2x fa-times" style="font-size:medium"></i> tasks are completed. After editing configuration, you can re-run this cell to see the updated status for each task.
End of explanation
"""
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import Vectors
from numpy import array
import numpy as np
import math
from datetime import datetime
from dateutil import parser
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
logRegModel = LogisticRegressionWithLBFGS.train(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, iterations=1000, validateData=False, intercept=False)
print(logRegModel)
from pyspark.mllib.classification import NaiveBayes
#NaiveBayes requires non negative features, set them to 0 for now
modelNaiveBayes = NaiveBayes.train(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label, \
np.fromiter(map(lambda x: x if x>0.0 else 0.0,lp.features.toArray()),dtype=np.int)\
))\
)
print(modelNaiveBayes)
from pyspark.mllib.tree import DecisionTree
modelDecisionTree = DecisionTree.trainClassifier(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={})
print(modelDecisionTree)
from pyspark.mllib.tree import RandomForest
modelRandomForest = RandomForest.trainClassifier(labeledTrainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={},numTrees=100)
print(modelRandomForest)
"""
Explanation: Train multiple classification models
The following cells train four models: Logistic Regression, Naive Bayes, Decision Tree, and Random Forest.
Feel free to update these models or build your own models.
End of explanation
"""
display(testData)
"""
Explanation: Evaluate the models
pixiedust_flightpredict provides a plugin to the PixieDust display api and adds a menu (look for the plane icon) that computes the accuracy metrics for the models, including the confusion table.
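The metrics themselves are simple to compute by hand; here is a dependency-free sketch of accuracy and confusion counts (independent of Spark and of the plugin's actual implementation):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def confusion_counts(y_true, y_pred):
    """Counter keyed by (actual, predicted) label pairs."""
    return Counter(zip(y_true, y_pred))

truth = [1, 0, 1, 1, 0]   # illustrative labels: 1 = delayed, 0 = on time
preds = [1, 0, 0, 1, 0]
print(accuracy(truth, preds))                  # 0.8
print(confusion_counts(truth, preds)[(1, 1)])  # 2 true positives
```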
End of explanation
"""
import pixiedust_flightpredict
from pixiedust_flightpredict import *
pixiedust_flightpredict.flightPredict("LAS")
"""
Explanation: Run the predictive model application
This cell runs the embedded PixieDust application, which lets users enter flight information. The models run and predict the probability that the flight will be on-time.
End of explanation
"""
import pixiedust_flightpredict
pixiedust_flightpredict.displayMapResults()
"""
Explanation: Get aggregated results for all the flights that have been predicted.
The following cell shows a map with all the airports and flights searched to-date. Each edge represents an aggregated view of all the flights between 2 airports. Click on it to display a group list of flights showing how many users are on the same flight.
End of explanation
"""
|
danijel3/ASRDemos | notebooks/VoxforgeDataPrep.ipynb | apache-2.0 | import sys
sys.path.append('../python')
from voxforge import *
"""
Explanation: Preparing the Voxforge database
This notebook will demonstrate how to prepare the free Voxforge database for training. This database is a medium sized (~80 hours) database available online for free under the GPL license. A much more common database used in most research is the TIMIT, but that costs $250 and is also much smaller (~4h - although much more professionally developed than Voxforge). The best alternative today is the Librispeech database, but that has a few dozen GB of data (almost 1000h) and wouldn't be sensible for a simple demo. So Voxforge it is...
First thing to do is realize what a speech corpus actually is: in its simplest form it is a collection of audio files (containing preferably speech only) with a set of transcripts of the speech. There are a few extensions to this that are worth noting:
* phonemes - transcripts are usually presented as a list of words - although not a rule, it is often easier to start the recognition process with phonemes and go from there. Voxforge defines a list of 39 phonemes (+ silence) and contains a lexicon mapping the words into phonemes (more about that below)
* aligned speech - the transcripts are usually just a sequence of words/phonemes, but they don't denote which word/phoneme occurs when - there are models that can learn from that (seq. learning, e.g. CTC, attention models), but having alignments is usually a big plus. TIMIT was hand-aligned by a group of professionals (which is why it's a popular resource for research), but Voxforge wasn't. Fortunately, we can use one of the many available tools to do this automatically (with a margin of error - more on that below)
* meta-data - each recording session in the Voxforge database contains a readme file with useful information about the speaker and the environment that the recording took place in. When making a serious speech recognizer, this information can be very useful (e.g. for speaker adaptation - taking into account the speaker id, gender, age, etc...)
Downloading the corpus
To start working with the corpus, it needs to be downloaded first. All the files can be found in the download section of the Voxforge website under this URL:
http://www.repository.voxforge1.org/downloads/SpeechCorpus/Trunk/Audio/Main/16kHz_16bit/
There are 2 versions of the main corpus: sampled at 16kHz and 8kHz. The 16 kHz one is of better quality and is known as "desktop quality speech". While the original recordings were made at an even higher quality (44.1 kHz), 16k is completely sufficient for recognizing speech (higher quality doesn't help much). 8 kHz is known as the telephony quality and is a standard value for the old (uncompressed, aka T0) digital telephone signal. If you are making a recognizer that has to work in the telephony environment, you should use this data instead.
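If you only have the 16 kHz data but want to experiment with telephony-like audio, a crude way to get there is to decimate by a factor of two. This is only a sketch (the signal here is a made-up test tone, not a corpus file), and a real pipeline would low-pass filter first:

```python
import numpy as np

# Hypothetical one-second 16 kHz tone standing in for a Voxforge recording
# (not a file from the actual corpus).
fs_16k = 16000
t = np.arange(fs_16k) / float(fs_16k)
x_16k = np.sin(2 * np.pi * 440 * t)

# Naive 2:1 decimation down to the telephony rate. A proper pipeline would
# low-pass filter below the new 4 kHz Nyquist frequency first (for example
# with scipy.signal.decimate) to avoid aliasing.
x_8k = x_16k[::2]
fs_8k = fs_16k // 2
```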
To download the whole dataset, a small program in Python is included in this demo. Be warned, this can take a long time (I think Voxforge is throttling the speed to save on costs) and restarts may be necessary. The Python method does check for failed downloads (compares file sizes) and restarts whatever wasn't downloaded completely, so you can run the method 2-3 times to make sure everything is ok.
Alternatively, you can use a program like wget and enter this command (where "audio" is the dir to save the data to):
wget -P audio -l 1 -N -nd -c -e robots=off -A tgz -r -np http://www.repository.voxforge1.org/downloads/SpeechCorpus/Trunk/Audio/Main/16kHz_16bit
First lets import all the voxforge methods from the python directory. These will need the following libraries installed on your system:
* numpy - for working with data
* random, urllib, lxml, os, tarfile, gzip, re, pickle, shutil - these are standard system libraries and anyone should have them
* scikits.audiolab - to load the audio files from the database (WAV and FLAC files)
* tqdm - a simple library for progressbars that you can install using pip
End of explanation
"""
downloadVoxforgeData('../audio')
"""
Explanation: Ignore any warnings above (I couldn't be bothered to compile audiolab with ALSA). Below you will find the method to download the Voxforge database. You only need to do this once, so you can run it either here or from a console or use wget. Be warned that it takes a long time (as mentioned earlier) so it's a good idea to leave it running overnight.
End of explanation
"""
f=loadFile('../audio/Joel-20080716-qoz.tgz')
print f.props
print f.prompts
print f.data
%xdel f
"""
Explanation: Loading the corpus
Once the data is downloaded and stored in the 'audio' subdir of the main project dir, we can start loading the data into a Python data structure. There are several methods that can be used for that. The following method will load a file and display its contents:
End of explanation
"""
corp=loadBySpeaker('../audio', limit=30)
"""
Explanation: The loadBySpeaker method will load the whole folder and organize its contents by speakers (as a dictionary). Each utterance contains only the data and the prompts. For this demo, only 30 files are read - as this isn't a method we are going to ultimately use.
End of explanation
"""
addPhonemesSpk(corp,'../data/lex.tgz')
print corp.keys()
spk=corp.keys()[0]
print corp[spk]
%xdel corp
"""
Explanation: The corpus can also be extended by the phonetic transcription of the utterances using a lexicon file. Voxforge does provide such a file on its website and it is downloaded automatically (if it doesn't already exist).
Note that a single word can have several transcriptions. In the lexicon, these alternatives will have sequential number suffixes added to the word (word, word2, word3, etc), but this particular function will do nothing about that. Choosing the right pronunciation variant has to be done either manually, or by using a more sophisticated program (a pre-trained ASR system) to choose the right version automatically.
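For illustration, the numeric suffix could be stripped with a small helper like this (hypothetical helper, not part of the voxforge module; it assumes real lexicon entries don't themselves end in digits):

```python
import re

def base_word(w):
    # Strip a trailing variant number ("word2" -> "word") to recover the
    # base lexicon entry. Hypothetical helper for illustration only.
    return re.sub(r'\d+$', '', w)
```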
End of explanation
"""
convertCTMToAli('../data/ali.ctm.gz','../data/phones.list','../audio','../data/ali.pklz')
"""
Explanation: Aligned corpus
As mentioned earlier, this sort of corpus has its downsides. For one, we don't know when each phoneme occurs so we cannot train the system discriminatively. While it's still possible, it would be nice if we could start with a simpler example. Another problem is choosing the right pronunciation variant mentioned above.
To solve these issues, an automatic alignment was created using a different ASR system called Kaldi. This system is a very good ASR solution that implements various types of models. It also contains simple out-of-the-box scripts for training on Voxforge data.
To create the alignments using Kaldi, a working system had to be trained first and, interestingly, the same Voxforge data was used to train the system. How was this done? Well, Kaldi uses (among other things) a classic Gaussian Mixture Model and trains it using the EM algorithm. Initially the alignment is assumed to be spread evenly throughout the file, but as the system is trained iteratively, the model gets better and thus the alignment gets more accurate. The system is trained with gradually better models to achieve even more accurate results and the provided solution here is generated using the "tri3b" model, as described in the scripts.
The alignments in Kaldi are stored in special binary files, but there are simple tools to help convert them into something easier to use. The type of file chosen for this example is the CTM file, which contains a series of lines in a text file, each line describing a single word or phoneme. The description has 5 columns: encoded file name, unused id (always 1), segment start, segment length and segment text (i.e. word or phoneme name/value). This file was generated using Kaldi, compressed using gzip and stored in 'ali.ctm.gz' in the 'data' directory of this project.
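As a rough sketch, a single CTM line could be parsed like this (the example line below is made up, just in the same five-column layout; the real file uses Kaldi-encoded names):

```python
def parse_ctm_line(line):
    # Split one CTM line into its five fields:
    # name, channel id, segment start [s], segment length [s], token.
    name, chan, start, dur, token = line.split()
    return name, int(chan), float(start), float(dur), token

# A made-up line in the same five-column layout (not taken from the file):
fields = parse_ctm_line('Joel-20080716-qoz_0001 1 0.25 0.10 ax')
```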
Please note that the number of files in this aligned set is smaller than the actual count in the whole Voxforge dataset. This is because there is a small percentage of errors in the database (around a 100 files or so) and some recordings are of such poor quality that Kaldi couldn't generate a reasonable alignment for these files. We can simply ignore them here. This, however, doesn't mean that all the alignments present in the CTM are 100% accurate. There can still be mistakes there, but hopefully they are unlikely enough to not cause any issue.
While this file contains everything that we need, it'd be useful to convert it into a data structure that can be easily used in Python. The convertCTMToAli method is used for that:
End of explanation
"""
import gzip
import pickle
with gzip.open('../data/ali.pklz') as f:
ali=pickle.load(f)
print 'Number of utterances: {}'.format(len(ali))
"""
Explanation: We store the generated data structure into a gzipped and pickled file, so we don't need to perform this more than once. This file is already included in the repository, so you can skip the step above.
We can read the file like this:
End of explanation
"""
print ali[100].spk
print ali[100].phones
print ali[100].ph_lens
print ali[100].archive
print ali[100].audiofile
print ali[100].data
"""
Explanation: Here is an example of the structure and its attributes loaded from that file:
End of explanation
"""
import random
#make a list of speaker names
spk=set()
for utt in ali:
spk.add(utt.spk)
print 'Number of speakers: {}'.format(len(spk))
#choose 20 random speakers
tst_spk=list(spk)
random.shuffle(tst_spk)
tst_spk=tst_spk[:20]
#save the list for reference - if anyone else wants to use our list (will be saved in the repo)
with open('../data/test_spk.list', 'w') as f:
for spk in tst_spk:
f.write("{}\n".format(spk))
ali_test=filter(lambda x: x.spk in tst_spk, ali)
ali_train=filter(lambda x: not x.spk in tst_spk, ali)
print 'Number of test utterances: {}'.format(len(ali_test))
print 'Number of train utterances: {}'.format(len(ali_train))
#shuffle the utterances, to make them more uniform
random.shuffle(ali_test)
random.shuffle(ali_train)
#save the data for future use
with gzip.open('../data/ali_test.pklz','wb') as f:
pickle.dump(ali_test,f,pickle.HIGHEST_PROTOCOL)
with gzip.open('../data/ali_train.pklz','wb') as f:
pickle.dump(ali_train,f,pickle.HIGHEST_PROTOCOL)
"""
Explanation: Please note that the audio data is not yet loaded at this step (it's set to None).
Test data
Before we go on, we need to prepare our test set. This needs to be completely independent of the training data and it needs to be the same for all the experiments we want to do, if we want to be able to make them comparable in any way. The test set also needs to be "representative" of the whole data we are working on (so they need to be chosen randomly from all the data).
This isn't the only way we could perform our experiments - very often people use what is known as "k-fold cross-validation", but that would take a lot of time to do for all our experiments, so choosing a single representative evaluation set is a more convenient option.
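If one did want cross-validation at the speaker level, a minimal split could be sketched like this (plain Python, round-robin fold assignment; not used anywhere below):

```python
def speaker_folds(speakers, k):
    # Partition speaker ids into k disjoint folds; each fold would then
    # serve once as the held-out test set while the rest is used for
    # training. Sketch only, with a simple round-robin assignment.
    folds = [[] for _ in range(k)]
    for i, s in enumerate(speakers):
        folds[i % k].append(s)
    return folds

folds = speaker_folds(['spk%d' % i for i in range(10)], 5)
```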
Now, generally most corpora have a designated evaluation set: for example, TIMIT has just such a set of 192 files that is used by most papers on the subject. Voxforge doesn't have anything like that and there aren't many papers out there using it as a resource anyway. One of the most advanced uses of Voxforge is in Kaldi and there they only shuffle the training set and choose 20-30 random speakers from that. To make our experiments at least "comparable" to TIMIT, we will try and do a similar thing here, but we will save the list of speakers (and their files) so anyone can use the same set when conducting their experiments.
WARNING If you want to compare the results of your own experiments to the ones from these notebooks, then don't run the code below and use the files provided in the repo. If you run the code below, you will reset the test ordering and your experiments won't be strictly comparable to the ones from these notebooks.
End of explanation
"""
num=int(len(ali_train)*0.05)
ali_small=ali_train[:num]
with gzip.open('../data/ali_train_small.pklz','wb') as f:
pickle.dump(ali_small,f,pickle.HIGHEST_PROTOCOL)
"""
Explanation: To make things more manageable for this demo, we will take 5% of the training set and work using that instead of the whole 80 hours. 5% should give us an amount similar to TIMIT. If you wish to re-run the experiments using the whole dataset, go to the bottom of this notebook for further instructions.
End of explanation
"""
corp=loadAlignedCorpus('../data/ali_train_small.pklz','../audio')
"""
Explanation: Here we load additional data using the loadAlignedCorpus method. It loads the alignment and the appropriate audio datafile for each utterance (it can take a while for larger corpora):
End of explanation
"""
corp_test=loadAlignedCorpus('../data/ali_test.pklz','../audio')
"""
Explanation: We have to do the same for the test data:
End of explanation
"""
print 'Number of utterances: {}'.format(len(corp))
print 'List of phonemes:\n{}'.format(corp[0].phones)
print 'Lengths of phonemes:\n{}'.format(corp[0].ph_lens)
print 'Audio:\n{}'.format(corp[0].data)
samp_num=0
for utt in corp:
samp_num+=utt.data.size
print 'Length of corpus: {} hours'.format(((samp_num/16000.0)/60.0)/60.0)
"""
Explanation: Now we can check if we have all the necessary data: phonemes, phoneme alignments and data.
End of explanation
"""
import sys
sys.path.append('../PyHTK/python')
import numpy as np
from HTKFeat import MFCC_HTK
import h5py
from tqdm import *
def extract_features(corpus, savefile):
mfcc=MFCC_HTK()
h5f=h5py.File(savefile,'w')
uid=0
for utt in tqdm(corpus):
feat=mfcc.get_feats(utt.data)
delta=mfcc.get_delta(feat)
acc=mfcc.get_delta(delta)
feat=np.hstack((feat,delta,acc))
utt_len=feat.shape[0]
o=[]
for i in range(len(utt.phones)):
num=utt.ph_lens[i]/10
o.extend([utt.phones[i]]*num)
# here we fix an off-by-one error that happens very infrequently
if utt_len-len(o)==1:
o.append(o[-1])
assert len(o)==utt_len
uid+=1
#instead of a proper name, we simply use a unique identifier: utt00001, utt00002, ..., utt99999
g=h5f.create_group('/utt{:05d}'.format(uid))
g['in']=feat
g['out']=o
h5f.flush()
h5f.close()
"""
Explanation: Feature extraction
To perform a simple test, we will use a standard set of audio features used in many, if not most papers on speech recognition. This set of features will first split each file into a bunch of small chunks of equal size, giving about 100 such frames per second. Each chunk will then be converted into a vector of 39 real values. Furthermore, each vector will be assigned a phonetic class (value from 0..39) thanks to the alignment created above. The problem can then be solved as a simple classification problem that maps a real vector to a phonetic class.
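As a sanity check on the frame count, assuming the common 25 ms window with a 10 ms hop (the exact defaults of the HTK-style extractor used below may differ slightly):

```python
def n_frames(n_samples, rate=16000, win_ms=25, hop_ms=10):
    # How many fixed-size analysis frames fit in a signal of n_samples
    # samples. The 25 ms window / 10 ms hop values are assumptions for
    # this sketch, not read from the MFCC_HTK configuration.
    win = rate * win_ms // 1000
    hop = rate * hop_ms // 1000
    if n_samples < win:
        return 0
    return 1 + (n_samples - win) // hop

frames_per_second = n_frames(16000)  # close to the "about 100" quoted above
```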
This particular set of features is calculated to match a specification developed in a classic toolkit known as HTK. All the details on this feature set can be found under the linked repository here. If you want to experiment with this feature set (highly encouraged) please read the description there.
In the code below, we extract the set of features for each utterance and store the results, together with the classification decision for each frame. For performance reasons, we will store all the files in HDF5 format using the h5py library. This will allow us to read data directly from the drive without wasting too much RAM. This isn't as important when doing small experiments, but it will become relevant for the large ones.
The structure of the HDF5 file is broken into utterances. The file contains a list of utterances stored as groups in the root and each utterance has 2 datasets: inputs and outputs. Normalized inputs are also added later.
Since we intend to use this procedure more than once, we will encapsulate it into a function:
End of explanation
"""
extract_features(corp,'../data/mfcc_train_small.hdf5')
extract_features(corp_test,'../data/mfcc_test.hdf5')
"""
Explanation: Now let's process the small training and test datasets:
End of explanation
"""
def normalize(corp_file):
h5f=h5py.File(corp_file)
b=0
for utt in tqdm(h5f):
f=h5f[utt]['in']
n=f-np.mean(f)
n/=np.std(n)
h5f[utt]['norm']=n
h5f.flush()
h5f.close()
normalize('../data/mfcc_train_small.hdf5')
normalize('../data/mfcc_test.hdf5')
"""
Explanation: Normalization
While usable as-is, many machine learning models will perform badly if the data isn't standardized. Standardization or normalization means making sure that all the samples are distributed to a reasonable scale - usually centered around 0 (with a mean of 0) and spread to a standard deviation of 1. The reason for this is that the data can come from various sources - some people are louder, some are quieter, some have higher pitched voices, some lower, some used more sensitive microphones than others, etc. That is why the audio between sessions can have various ranges of values. Normalization makes sure that all the recordings are tuned to a similar scale before processing.
To perform normalization we simply need to compute the mean and standard deviation of the given signal and then subtract the mean and divide by the standard deviation (thus making the new mean 0 and new stdev 1). A common question is what signal do we use to perform these calculations? Do we calculate it once for the whole corpus, or once per utterance? Maybe once per speaker or once per session? Or maybe several times per utterance?
Generally, the longer the signal we do this on, the better (the statistics get more accurate), but performing it only once on the whole corpus doesn't make much sense because of what is written above. The reason we normalize the data is to remove the differences between recording sessions, so at minimum we should normalize each session separately. In practice, it's easier to just normalize each utterance as they are long enough on their own. This is known as "batch normalization" (where each utterance is one batch).
But this makes one assumption: that the recording conditions don't change significantly throughout the whole utterance. In certain cases, it may actually be a good idea to split the utterance into several parts and normalize them separately, in case the volume changes throughout the recording, or maybe there is more than one speaker in a single file. This is solved best by using a technique known as "online normalization", which uses a sliding window to compute the statistics and can react to rapid changes in the values. This is, however, beyond the scope of this simple demo (and shouldn't really be necessary for this corpus anyway).
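A rough sketch of what such a sliding-window scheme could look like on a 1-D signal (the window length here is an arbitrary choice, and a real implementation would be vectorized):

```python
import numpy as np

def online_normalize(x, win=200):
    # Normalize every sample by the mean/std of a window centered on it;
    # a very simple take on "online normalization". Sketch only.
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo = max(0, i - win // 2)
        hi = min(len(x), i + win // 2 + 1)
        seg = x[lo:hi]
        out[i] = (x[i] - seg.mean()) / (seg.std() + 1e-8)
    return out

# A signal whose level jumps halfway through, as if the volume changed:
y = online_normalize(np.concatenate([np.ones(500), 5.0 * np.ones(500)]))
```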
End of explanation
"""
!h5ls ../data/mfcc_test.hdf5/utt00001
"""
Explanation: To see what's inside we can run the following command in the terminal:
End of explanation
"""
from data import Corpus
import numpy as np
train=Corpus('../data/mfcc_train_small.hdf5',load_normalized=True)
test=Corpus('../data/mfcc_test.hdf5',load_normalized=True)
g=train.get()
tr_in=np.vstack(g[0])
tr_out=np.concatenate(g[1])
print 'Training input shape: {}'.format(tr_in.shape)
print 'Training output shape: {}'.format(tr_out.shape)
g=test.get()
tst_in=np.vstack(g[0])
tst_out=np.concatenate(g[1])
print 'Test input shape: {}'.format(tst_in.shape)
print 'Test output shape: {}'.format(tst_out.shape)
train.close()
test.close()
"""
Explanation: Simple classification example
To finish here, we will use a simple SGD classifier from the scikit-learn library to classify the phonemes from the database. We have all the datasets prepared above, so all we need to do is load the prepared arrays. We will use a special class called Corpus included in the data.py file. In the constructor we provide the path to the file and say that we wish to load the normalized inputs. Next we use the get() method to load the list of all the input and output values. This method returns a tuple - one value for inputs and one for outputs. Each of these is a list of arrays corresponding to individual utterances. We can then convert it into a single contiguous array using the concatenate and vstack methods:
End of explanation
"""
import sklearn
print sklearn.__version__
from sklearn.linear_model import SGDClassifier
model=SGDClassifier(loss='log',n_jobs=-1,verbose=0,n_iter=100)
"""
Explanation: Here we create the SGD classifier model. Please note that the settings below work on the version 0.17 of scikit-learn, so it's recommended to upgrade. If you can't, then feel free to modify the settings to something that works for you. You may also turn on verbose to get more information on the training process. Here it's off to preserve space in the notebook.
End of explanation
"""
%time model.fit(tr_in,tr_out)
"""
Explanation: Here we train the model. It took 4 minutes for me:
End of explanation
"""
acc=model.score(tst_in,tst_out)
print 'Accuracy: {:%}'.format(acc)
"""
Explanation: Here we get ~52% accuracy, which is pretty bad for phoneme recognition. In other notebooks, we will try to improve on that.
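A single accuracy number also hides which phonemes get confused with which. One could tabulate per-class confusions; here is a minimal sketch on made-up labels (not the actual phoneme predictions, and a stand-in for sklearn.metrics.confusion_matrix):

```python
import numpy as np

def confusion_counts(y_true, y_pred, n_classes):
    # Count how often each true class (rows) is predicted as each class
    # (columns). Minimal illustration, not an sklearn call.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Tiny made-up example with 3 classes (not actual phoneme predictions):
cm = confusion_counts([0, 0, 1, 2, 2], [0, 1, 1, 2, 0], 3)
accuracy = np.trace(cm) / float(cm.sum())
```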
End of explanation
"""
corp=loadAlignedCorpus('../data/ali_train.pklz','../audio')
extract_features(corp,'../data/mfcc_train.hdf5')
normalize('../data/mfcc_train.hdf5')
"""
Explanation: Other data
Here we will also prepare the rest of the data to perform other experiments. If you only wish to run simple experiments and don't want to spend a lot of time preparing large datasets, feel free to skip these steps. Be warned that the dataset for the full 80 hours of training data takes up to 10 GB, so you will need that much space in RAM as well as on your drive to make it work using the code present in this notebook.
End of explanation
"""
|
moble/MatchedFiltering | GW150914/AdjustCoM.ipynb | mit | import scri
import scri.SpEC
import numpy as np
data_dir = '/Users/boyle/Research/Data/SimulationAnnex/Incoming/BBH_SKS_d13.4_q1.23_sA_0_0_0.320_sB_0_0_-0.580/Lev5/'
w_N2 = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/Extrapolated_N2.dir', file_write_mode='w')
w_N3 = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/Extrapolated_N3.dir', file_write_mode='a')
w_N4 = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/Extrapolated_N4.dir', file_write_mode='a')
w_No = scri.SpEC.remove_avg_com_motion(data_dir + 'rhOverM_Asymptotic_GeometricUnits.h5/OutermostExtraction.dir', file_write_mode='a')
"""
Explanation: Adjust center of mass position and velocity of simulation for GW150914
End of explanation
"""
x_i = np.array([0.0254846374656213, -0.051270560984526176, 3.328532865089032e-06])
v_i = np.array([-1.4420901467875399e-06, 6.341746857347185e-06, -3.412200633855404e-08])
x_f = x_i + w_No.t[-1]*v_i
print(x_f)
print(np.linalg.norm(x_f))
"""
Explanation: Those displacements look pretty large. I wonder how far the system wanders...
End of explanation
"""
w_No.t[-1]
"""
Explanation: That's not very far. I guess it's not a very long simulation...
End of explanation
"""
scri.SpEC.metadata.read_metadata_into_object?
"""
Explanation: Indeed it's pretty short, so the system doesn't get very far. I start to worry when we need to be careful with higher modes, or when the displacements are a few times larger than this.
End of explanation
"""
|
SheffieldML/notebook | compbio/periodic/figure1.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import GPy
np.random.seed(1)
"""
Explanation: Supplementary materials : Details on generating Figure 1
This document is a supplementary material of the article Detecting periodicities with Gaussian
processes by N. Durrande, J. Hensman, M. Rattray and N. D. Lawrence.
The first step is to import the required packages. This tutorial has been written with GPy 0.8.8 which includes the kernels discussed in the article. The latter can be downloaded on the SheffieldML github page.
End of explanation
"""
# domain boundaries
lower = 0.
upper = 3.
x = np.linspace(lower,upper,500)
# grid for function evaluations and plots
n_pts = 50
X = np.linspace(lower,upper,n_pts+1)
X = X[0:-1]+X[1]/2
# test functions
def f1(x,tau2=0.1):
return(np.cos(x*2*np.pi) + np.sqrt(tau2)*np.random.normal(size=x.shape))
def f2(x,tau2=0.1):
return(1./2*np.cos(x*2*np.pi)+1./2*np.cos(x*4*np.pi) + np.sqrt(tau2)*np.random.normal(size=x.shape))
def f3(x,tau2=0.1):
alpha = 0.2
return(2*(x - np.trunc(x) < alpha)-1 + np.sqrt(tau2)*np.random.normal(size=x.shape))
def f4(x,tau2=0.1):
return(4*np.abs(x - np.trunc(x) - 0.5)-1 + np.sqrt(tau2)*np.random.normal(size=x.shape))
def f5(x,tau2=0.1):
return(2*np.trunc(x) - 2*x +1 + np.sqrt(tau2)*np.random.normal(size=x.shape))
def f6(x,tau2=0.1):
return(np.zeros((len(x),)) + np.sqrt(tau2)*np.random.normal(size=x.shape))
"""
Explanation: Test functions
We now introduce 1-periodic test functions that are defined over $[0,1)$ as:
\begin{equation}
\begin{split}
f_1(x) & = \cos(2 \pi x) + \varepsilon\\
f_2(x) & = 1/2 \cos(2 \pi x) + 1/2 \cos(4 \pi x) + \varepsilon\\
f_3(x) & = \left\{
\begin{matrix}
1 + \varepsilon \text{ if } x \in [0,0.2] \\
-1 + \varepsilon \text{ if } x \in (0.2,1) \\
\end{matrix}
\right. \\
f_4(x) & = 4 |x-0.5| - 1 + \varepsilon\\
f_5(x) & = 1 - 2x + \varepsilon\\
f_6(x) & = \varepsilon
\end{split}
\end{equation}
where $\varepsilon$ is a $\mathcal{N}(0,\tau^2)$ random variable.
End of explanation
"""
names = ['f1','f2','f3','f4','f5','f6']
fig, axs = plt.subplots(2,3,figsize=(12,7), sharex=True, sharey=True)
for i, (ax,testfunc) in enumerate(zip(axs.flat, [f1,f2,f3,f4,f5,f6])):
ax.plot(x,testfunc(x,0.),'r', linewidth=1.5)
ax.plot(X,testfunc(X),'kx',mew=1)
ax.legend(['$\\mathrm{'+names[i]+'}$'],prop={'size':18},borderaxespad=0.)
ax.set_ylim((-1.5,1.8))
"""
Explanation: The associated graphs are:
End of explanation
"""
def fit_cosopt(X,Y):
X = X[:,None]
Y = Y[:,None]
period = np.linspace(0.15,2,100)
phase = np.linspace(-np.pi,np.pi,100)
MSE = np.zeros((100,100))
for i,per in enumerate(period):
for j,pha in enumerate(phase):
B = np.hstack((np.ones(X.shape),np.cos(X*2*np.pi/per+pha)))
C = np.dot(np.linalg.inv(np.dot(B.T,B)),np.dot(B.T,Y))
MSE[i,j] = np.mean((np.dot(B,C)-Y)**2)
i,j = np.unravel_index(MSE.argmin(), MSE.shape)
B = np.hstack((np.ones(X.shape),np.cos(X*2*np.pi/period[i]+phase[j])))
C = np.dot(np.linalg.inv(np.dot(B.T,B)),np.dot(B.T,Y))
return((C,period[i],phase[j]))
def pred_cosopt(x,m_cosopt):
C,per,pha = m_cosopt
Bx = np.hstack((0*x[:,None]+1, np.cos(x[:,None]*2*np.pi/per+pha)))
P = np.dot(Bx,C)
return(P.flatten())
"""
Explanation: Models
COSOPT: We consider here the following implementation
End of explanation
"""
def B(x):
# function returning the matrix of basis functions evaluated at x
#input: x, np.array with d columns
#output: a matrix (b_j(x_i))_{i,j}
B = np.ones((x.shape[0],1))
for i in range(1,20):
B = np.hstack((B,np.sin(2*np.pi*i*x[:,None]),np.cos(2*np.pi*i*x[:,None])))
return(B)
def LR(X,F,B,tau2):
#input: X, np.array with d columns representing the DoE
# F, np.array with 1 column representing the observations
# B, a function returning the (p) basis functions evaluated at x
# tau2, noise variance
#output: beta, estimate of coefficients np.array of shape (p,1)
# covBeta, cov matrix of beta, np.array of shape (p,p)
BX = B(X)
covBeta = np.linalg.inv(np.dot(BX.T,BX))
beta = np.dot(covBeta,np.dot(BX.T,F))
return(beta,tau2*covBeta)
def predLR(x,B,beta,covBeta):
#function returning predicted mean and variance
#input: x, np.array with d columns representing m prediction points
# B, a function returning the (p) basis functions evaluated at x
# beta, estimate of the regression coefficients
# covBeta, covariance matrix of beta
#output: m, predicted mean at x, np.array of shape (m,1)
# v, predicted variance, np.array of shape (m,1)
m = np.dot(B(x),beta)
v = np.dot(B(x),np.dot(covBeta,B(x).T))
return(m,v)
"""
Explanation: Linear regression
End of explanation
"""
def fit_gp(X,Y):
#input: X, np.array with d columns representing the DoE
# Y, np.array with 1 column representing the observations
#output: a GPy gaussian process model object
X = X[:,None]
Y = Y[:,None]
k = GPy.kern.PeriodicMatern32(1,variance=1.,lengthscale=1., period=1., n_freq=20,lower=lower,upper=upper)
bias = GPy.kern.Bias(1,variance=1.)
m32 = GPy.models.GPRegression(X,Y,k+bias)
m32.unconstrain('') # remove positivity constrains to avoids warnings
m32.likelihood.constrain_bounded(0.001,3., warning=False) # boundaries for the observation noise
m32.kern.periodic_Matern32.constrain_bounded(0.01,3., warning=False) # boundaries for the periodic variance and lengthscale
m32.kern.periodic_Matern32.period.constrain_bounded(0.15,2., warning=False) # boundaries for the period
m32.randomize()
m32.optimize_restarts(5,robust=True)
return(m32)
def pred_gp(x,m_gp):
x = x[:,None]
mu,var = m_gp.predict(x)
return(mu.flatten())
"""
Explanation: Gaussian Process model
End of explanation
"""
def RMSE(Ypred,Yreal):
return( np.sqrt(np.mean((Yreal - Ypred)**2)) )
"""
Explanation: Definition of the criterion for assessing the quality of a model prediction:
End of explanation
"""
M_COS = []
M_GP = []
M_LR = []
for i,testfunc in enumerate([f1,f2,f3,f4,f5,f6]):
Y = testfunc(X)
Yreal = testfunc(x,0)
M_COS += [fit_cosopt(X,Y)]
M_GP += [fit_gp(X,Y)]
M_LR += [LR(X,Y,B,0.1)]
"""
Explanation: Fit models
End of explanation
"""
lower = 0.
upper = 3.
xRMSE = np.linspace(lower,upper,500)
fig, axes = plt.subplots(2,3,figsize=(12,7), sharex=True, sharey=True, tight_layout=False)
for i,testfunc in enumerate([f1,f2,f3,f4,f5,f6]):
Y = testfunc(X)
Yreal = testfunc(x,0)
ax = axes.flat[i]
# test func
plreal, = ax.plot(x,Yreal, '-r', linewidth=1.5,label="test function")
realRMSE = testfunc(xRMSE,0)
# COSOPT
cosopt_pred = pred_cosopt(x,M_COS[i])
plcos, = ax.plot(x,cosopt_pred, '--g', linewidth=1.5,label="COSOPT", dashes=(7,3))
cRMSE = RMSE(pred_cosopt(xRMSE,M_COS[i]),realRMSE)
#GP 32
gp_pred = pred_gp(x,M_GP[i])
plgp, = ax.plot(x,gp_pred, '--b', linewidth=1.5,label="periodic GP", dashes=(15,3))
gpRMSE = RMSE(pred_gp(xRMSE,M_GP[i]),realRMSE)
# Lin Reg
lr_pred = predLR(x,B,M_LR[i][0],M_LR[i][1])
pllr, = ax.plot(x,lr_pred[0], ':k', linewidth=2,label="periodic GP")
lrRMSE = RMSE(predLR(xRMSE,B,M_LR[i][0],M_LR[i][1])[0],realRMSE)
#
ax.plot(X,testfunc(X),'kx',mew=1, alpha=0.5)
ax.set_xlim((0.9,2.1))
ax.set_ylim((-1.5,2.))
## RMSE
ax.text(1.5, 1.8, 'RMSE=(%.2f, %.2f, %.2f)'%(cRMSE,gpRMSE,lrRMSE),
verticalalignment='center', horizontalalignment='center', fontsize=13)
fig.suptitle(' ')
l = fig.legend((plreal,plcos,plgp,pllr),("test function","COSOPT","periodic GP","Lin. Reg."),'upper center',ncol=4,handlelength=3,fancybox=True,columnspacing=3)
l.draw_frame(False)
"""
Explanation: Generate figure
End of explanation
"""
|
hektor-monteiro/python-notebooks | aula-12_Monte-carlo.ipynb | gpl-2.0 | import numpy as np
import matplotlib.pyplot as plt
s = np.random.uniform(8,10., 100000)
count, bins, ignored = plt.hist(s, 30)
#print (count, bins, ignored)
import numpy as np
import matplotlib.pyplot as plt
mean = [0, 0]
cov = [[1, 10], [5, 10]] # covariance matrix (should be symmetric)
x, y = np.random.multivariate_normal(mean, cov, 500).T
plt.plot(x, y, 'x')
plt.axis('equal')
"""
Explanation: Monte Carlo Techniques
The numpy package comes with a complete collection of random number generators based on the best algorithms:
http://docs.scipy.org/doc/numpy/reference/routines.random.html
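As an aside, newer NumPy releases (1.17 and later) also ship the Generator API, which is the recommended interface for new code; a small sketch:

```python
import numpy as np

# The Generator API replaces the legacy np.random.* functions in new code
# (requires NumPy >= 1.17).
rng = np.random.default_rng(42)
sample = rng.uniform(8.0, 10.0, 1000)
```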
End of explanation
"""
import numpy as np
p = np.random.random()
if p > 0.2:
print ('cara', p)
else:
print ('coroa', p)
"""
Explanation: Simulating a dice game
End of explanation
"""
NL1 = 1000
NPb = 0
tau = 3.053*60.
h = 1
tmax = 1000
p = 1 - 2**(-h/tau)
tlist = np.arange(0.,tmax,h)
Tl1list = []
Pblist = []
for t in tlist:
Tl1list.append(NL1)
Pblist.append(NPb)
decay = 0
for i in range(NL1):
if np.random.random() < p:
decay += 1
NL1 -= decay
NPb += decay
plt.plot(tlist,Tl1list)
plt.plot(tlist,Pblist)
plt.show()
print (p)
"""
Explanation: Simulating radioactive decay
Exponential decay is one of the classic problems of modern physics. For a practical summary of the usual mathematical formalism for solving this kind of problem, see:
https://amsi.org.au/ESA_Senior_Years/SeniorTopic3/3e/3e_2content_1.html
On average, we know that the number of atoms decays in time according to:
$$ N(t) = N(0) * 2^{-t/\tau} $$
from which we know that the fraction of atoms remaining is:
$$ N(t)/N(0) = 2^{-t/\tau} $$
and therefore, since the probability of a decay is given by the decay frequency, $N_{decay}/N_{total}$, we can write:
$$ p(t) = 1 - 2^{-t/\tau} $$
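As an aside, the atom-by-atom loop in the simulation can be replaced by a single binomial draw, since the number of decays among N atoms, each decaying with probability p in one step, is Binomial(N, p). A vectorized sketch of one time step:

```python
import numpy as np

tau = 3.053 * 60.0   # half-life in seconds (same value as the simulation)
h = 1.0              # time step in seconds
p = 1 - 2 ** (-h / tau)

np.random.seed(0)
N = 1000
# One binomial draw replaces looping over all N atoms:
decays = np.random.binomial(N, p)
N_after = N - decays
```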
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
dt = 1.0 # minutes
tmax = 1200.
tempo = np.arange(0.,tmax,dt)
# starting point
x0 = 0.
y0 = 0.
posicao = []
for t in tempo:
# draw a random number between 0 and 1
passo = np.random.random()
sentido = np.random.random()
if (passo < 0.9):
if(sentido <0.5):
posicao.append([x0+0.5,y0+0.])
x0 += 0.5
else:
posicao.append([x0-0.5,y0+0.])
x0 -= 0.5
else:
if(sentido <0.5):
posicao.append([x0,y0+0.5])
y0 += 0.5
else:
posicao.append([x0,y0-0.5])
y0 -= 0.5
posicao = np.array(posicao)
plt.plot(posicao[:,0],posicao[:,1])
"""
Explanation: Simulating a drunkard's walk (2D random walk)
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
def easy_function(x):
    return 3 * x**2
def hard_function(x):
    return (1 / np.sqrt(2 * np.pi)) * np.exp(-(x**2) / 2)
X=np.linspace(-20,20,1000)
plt.plot(X,easy_function(X))
plt.show()
plt.plot(X,hard_function(X))
plt.show()
def integrate(x1,x2,func=easy_function,n=100000):
X=np.linspace(x1,x2,1000)
y1=0
y2=max((func(X)))+1
print(x1,x2,y1,y2)
area=(x2-x1)*(y2-y1)
check=[]
xs=[]
ys=[]
for i in range(n):
x=np.random.uniform(x1,x2,1)
xs.append(x)
y=np.random.uniform(y1,y2,1)
ys.append(y)
if abs(y)>abs(func(x)) or y<0:
check.append(0)
else:
check.append(1)
return(np.mean(check)*area,xs,ys,check)
print(integrate(0.3,2.5)[0])
print(integrate(0.3,2.5,hard_function)[0])
_,x,y,c=integrate(0.3,2.5,n=100)
df=pd.DataFrame()
df['x']=x
df['y']=y
df['c']=c
X=np.linspace(0.3,2.5,1000)
plt.plot(X,easy_function(X))
plt.scatter(df[df['c']==0]['x'],df[df['c']==0]['y'],color='red')
plt.scatter(df[df['c']==1]['x'],df[df['c']==1]['y'],color='blue')
plt.show()
import numpy as np
np.random.seed(32)
print(np.random.random(5))
print(np.random.random(5))
print('')
np.random.seed(32)
print(np.random.random(10))
"""
Explanation: Monte Carlo Integration
See details here: https://en.wikipedia.org/wiki/Monte_Carlo_integration
End of explanation
"""
|
Autodesk/molecular-design-toolkit | moldesign/_notebooks/Example 2. UV-vis absorption spectra.ipynb | apache-2.0 | %matplotlib inline
import numpy as np
from matplotlib.pylab import *
try: import seaborn #optional, makes plots look nicer
except ImportError: pass
import moldesign as mdt
from moldesign import units as u
"""
Explanation: <span style="float:right"><a href="http://moldesign.bionano.autodesk.com/" target="_blank" title="About">About</a> <a href="https://github.com/autodesk/molecular-design-toolkit/issues" target="_blank" title="Issues">Issues</a> <a href="http://bionano.autodesk.com/MolecularDesignToolkit/explore.html" target="_blank" title="Tutorials">Tutorials</a> <a href="http://autodesk.github.io/molecular-design-toolkit/" target="_blank" title="Documentation">Documentation</a></span>
<br>
<center><h1>Example 2: Using MD sampling to calculate UV-Vis spectra</h1> </center>
This notebook uses basic quantum chemical calculations to calculate the absorption spectra of a small molecule.
Author: Aaron Virshup, Autodesk Research<br>
Created on: September 23, 2016
Tags: excited states, CASSCF, absorption, sampling
End of explanation
"""
qmmol = mdt.from_name('benzene')
qmmol.set_energy_model(mdt.models.CASSCF, active_electrons=6,
active_orbitals=6, state_average=6, basis='sto-3g')
properties = qmmol.calculate()
"""
Explanation: Contents
Single point
Sampling
Post-processing
Create spectrum
Single point
Let's start with calculating the vertical excitation energy and oscillator strengths at the ground state minimum (aka Franck-Condon) geometry.
Note that the active space and number of included states here are system-specific.
End of explanation
"""
for fstate in range(1, len(qmmol.properties.state_energies)):
excitation_energy = properties.state_energies[fstate] - properties.state_energies[0]
print('--- Transition from S0 to S%d ---' % fstate )
print('Excitation wavelength: %s' % excitation_energy.to('nm', 'spectroscopy'))
print('Oscillator strength: %s' % qmmol.properties.oscillator_strengths[0,fstate])
"""
Explanation: This cell prints a summary of the possible transitions.
Note: you can convert excitation energies directly to nanometers using Pint by calling energy.to('nm', 'spectroscopy').
End of explanation
"""
mdmol = mdt.Molecule(qmmol)
mdmol.set_energy_model(mdt.models.GaffSmallMolecule)
mdmol.minimize()
mdmol.set_integrator(mdt.integrators.OpenMMLangevin, frame_interval=250*u.fs,
timestep=0.5*u.fs, constrain_hbonds=False, remove_rotation=True,
remove_translation=True, constrain_water=False)
mdtraj = mdmol.run(5.0 * u.ps)
"""
Explanation: Sampling
Of course, molecular spectra aren't just a set of discrete lines - they're broadened by several mechanisms. We'll treat vibrations here by sampling the molecule's motion on the ground state at 300 Kelvin.
To do this, we'll sample its geometries as it moves on the ground state by:
1. Creating a copy of the molecule
2. Assigning a forcefield (GAFF2/AM1-BCC)
3. Running dynamics for 5 ps, taking a snapshot every 250 fs, for a total of 20 separate geometries.
End of explanation
"""
post_traj = mdt.Trajectory(qmmol)
for frame in mdtraj:
qmmol.positions = frame.positions
qmmol.calculate()
post_traj.new_frame()
"""
Explanation: Post-processing
Next, we calculate the spectrum at each sampled geometry. Depending on your computer speed and if PySCF is installed locally, this may take up to several minutes to run.
End of explanation
"""
wavelengths_to_state = []
oscillators_to_state = []
for i in range(1, len(qmmol.properties.state_energies)):
wavelengths_to_state.append(
(post_traj.state_energies[:,i] - post_traj.potential_energy).to('nm', 'spectroscopy'))
oscillators_to_state.append([o[0,i] for o in post_traj.oscillator_strengths])
for istate, (w,o) in enumerate(zip(wavelengths_to_state, oscillators_to_state)):
plot(w,o, label='S0 -> S%d'%(istate+1),
marker='o', linestyle='none')
xlabel('wavelength / nm'); ylabel('oscillator strength'); legend()
"""
Explanation: This cell plots the results - wavelength vs. oscillator strength at each geometry for each transition:
End of explanation
"""
from itertools import chain
all_wavelengths = u.array(list(chain(*wavelengths_to_state)))
all_oscs = u.array(list(chain(*oscillators_to_state)))
hist(all_wavelengths, weights=all_oscs, bins=50)
xlabel('wavelength / nm')
"""
Explanation: Create spectrum
We're finally ready to calculate a spectrum - we'll create a histogram of all calculated transition wavelengths over all states, weighted by the oscillator strengths.
End of explanation
"""
|
lukasmerten/CRPropa3 | doc/pages/example_notebooks/propagation_comparison/Propagation_Comparison_CK_BP.ipynb | gpl-3.0 | def analytical_solution(max_trajectory, p_z, r_g_0, number_steps):
# calculate the time stamps similar to that used in the numerical simulation
t = np.linspace(0, max_trajectory/pc, int(number_steps+1))
# shift the phase so that the analytical solution
# also starts at (0,0,0) with in the direction (p_x,p_y,p_z)
d = t[1:]/r_g_0-3*math.pi/4.
# for parallel motion corrected gyro radius
r_g = r_g_0*(1-p_z**2)**0.5
# at these trajectory lengths, the numerical solutions are known
x_ana = r_g*np.cos(d)
y_ana = -r_g*np.sin(d)
z_ana = p_z*t[1:]
return x_ana, y_ana, z_ana
"""
Explanation: Comparison of Propagation Modules (BP - CK)
Numerical simulations that propagate charged particles through magnetic fields by solving the equation of motion can in principle use many different algorithms. An increasing number of studies have found, however, that two algorithms generally work best for propagating charged particles within a magnetic field [1,2,3]. These two algorithms are compared and evaluated here.
Both the Boris push [4] and the Cash-Karp [3] algorithms solve the Lorentz equation and thus enable the propagation of charged particles through magnetic fields. For the simplest case of a homogeneous background field, the particle trajectory can be derived analytically. This enables comparison with the numerical integrators and provides information on the errors of the algorithms for the corresponding parameters being used. The aim is to find a relationship between the error and the parameters applied. The pitch angle and the step length are suitable parameters for quantifying this influence.
Consequently, we want to understand which algorithm is suitable for each simulation setup.
Analytic Solution
The trajectory of a charged particle in a homogeneous background field can be derived analytically. We assume that the particle starts with a non-zero momentum component ($p_z > 0$) along the background magnetic field. The particle then moves in a helix whose gyration radius is given by $r_g = \frac{E}{B\cdot q \cdot c}$.
The velocity component parallel to the background field remains constant. The analytical solution for the direction = Vector3d(p_x, p_y, p_z) with $p_x^2+p_y^2+p_z^2 = 1$ and the position = Vector3d(0, 0, 0) yields:
End of explanation
"""
import numpy as np
from crpropa import *
import math
# Gyro radius should be R_g = 10.810076 parsecs for p_z = 0 and B = 10nG and E = 100 TeV
def larmor_radius(c, field):
p = c.current.getMomentum()
try:
B = field.getRegularField(c.current.getPosition())
except:
B = field.getField(c.current.getPosition())
q = c.current.getCharge()
p_perp = p.getPerpendicularTo(B)
try:
r = abs(p_perp.getR() / (B.getR()*q))
except ZeroDivisionError:
r = 1000000
return r
# Calculate gyro radius
def larmor_circumference(c, field):
r_g_0 = larmor_radius(c, field)/pc # gyro radius for p_z = 0
    l_gyration = 2*math.pi*r_g_0*pc # trajectory length of one gyration
return r_g_0, l_gyration
# Trajectory length of particle in number of gyrations
def maximum_trajectory(steps_per_gyrations, number_of_gyrations, c, field, p_z):
r_g_0, l_gyration = larmor_circumference(c, field)
max_trajectory = l_gyration*number_of_gyrations
steplength = l_gyration/np.array(steps_per_gyrations)
return max_trajectory, steplength, r_g_0
"""
Explanation: Helper Functions
We define helper functions to calculate the gyration radius and circumference. We then only need to specify how many steps we require per gyration and how many gyrations should be performed; the last helper function calculates the maximum trajectory length and the step size so that these requirements are fulfilled.
End of explanation
"""
import time as Time
# We use only a background magnetic field in the z-direction.
# We could add more complex magnetic fields to our MagneticFieldList.
B = 10*nG
direction_B = Vector3d(0, 0, 1)
const_mag_vec = direction_B * B
reg_field = UniformMagneticField(const_mag_vec)
### Running the simulation with either CK or BP
def run_simulation(module, steps_per_gyrations, number_of_gyrations, p_z):
# Initial condition of candidate
p_x = (1-p_z**2)**(1/2.)/2**0.5
p_y = p_x
E = 100 * TeV
direction = Vector3d(p_x, p_y, p_z)
position = Vector3d(0, 0, 0)
c = Candidate(nucleusId(1, 1), E, position, direction)
max_trajectory, steplength, r_g_0 = maximum_trajectory(steps_per_gyrations, number_of_gyrations, c, reg_field, p_z)
sim = ModuleList()
if module == 'CK':
sim.add(PropagationCK(reg_field,1e-4,steplength, steplength))
output = TextOutput('trajectory_CK.txt', Output.Trajectory3D)
elif module == 'BP':
sim.add(PropagationBP(reg_field, steplength))
output = TextOutput('trajectory_BP.txt', Output.Trajectory3D)
else:
print('no module found. Use either BP or CK.')
return
# we only want to simulate a certain trajectory length
sim.add(MaximumTrajectoryLength(max_trajectory))
# the output information will be saved in pc instead of the default which is Mpc
output.setLengthScale(pc)
# each particle position will be saved in the above specified text field.
sim.add(output)
# compare the simulation time of both propagation methods
t0 = Time.time()
# run the simulation
sim.run(c, True)
t1 = Time.time()
output.close()
print('Simulation time with module '+ str(module)+' is '+str(t1-t0)+'s.')
Time.sleep(0.5)
return max_trajectory, p_z, r_g_0
"""
Explanation: Run Simulation
We can now compare the analytical solution with both particle integrators in CRPropa, namely the Boris push and the Cash-Karp.
First we have to add our propagation module with the above specified background magnetic field to our module list. Afterwards we can specify that the particle information is collected at each step along the particle trajectory. With the output module, we can specify where this information will be saved.
Now that we have our module list ready we can fire up our simulation with both propagation algorithms and hope that something visually interesting is going to happen.
End of explanation
"""
import pandas as pd
def load_data(text, r_g):
data = pd.read_csv(text,
names=['D','ID','E','X','Y','Z','Px','Py','Pz'], delimiter='\t', comment='#',
usecols=["D", "X", "Y", "Z","Px","Py","Pz"])
### distances are saved in units of pc
### transform so that the center of the gyromotion is at (0,0)
data.X = data.X.values-r_g/2**0.5
data.Y = data.Y.values+r_g/2**0.5
### convert disctance in kpc
data.D = data.D.values/1000.
### calcualte gyro radius
data['R'] = (data.X**2+data.Y**2)**0.5
return data
"""
Explanation: Load Simulation Data
There are several ways to load the simulation data. Pandas is helpful to load files. To illustrate this, we will load and process the data with pandas.
End of explanation
"""
import matplotlib.pyplot as plt
steps_per_gyrations = 10
number_gyrations = 40000
number_of_steps = steps_per_gyrations * number_gyrations
def plot_subplots(ax1, ax2, ax3, data, x_ana, y_ana, r_g, module, color):
# numerical calculated positions
ax1.scatter(data.X,data.Y, s=1,color = color, label = module)
# analytical solution shwon in black squares!
if module == 'CK':
ax1.scatter(x_ana, y_ana, color = 'k', marker = 's', s = 15)
# for the legend
ax1.scatter(x_ana, y_ana, color = 'k', marker = 's', s = 1, label = 'Analytical solution')
ax1.legend(markerscale=5)
# numerical solutions
ax2.scatter(data.D,(data.R-r_g)/r_g*100., s=1,color = color, label= module)
ax2.legend(markerscale=5)
ax3.scatter(data.D,((x_ana-data.X)**2+(y_ana-data.Y)**2)**0.5, s=1,color = color, label=module)
ax3.legend(markerscale=5)
# We use this function to plot the whole figure for the particle motion in the xy-plane
def plot_figure_perp(max_trajectory, p_z, r_g_0, number_of_steps):
fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3,figsize=(12,4))
# for parallel motion corrected gyro radius
r_g = r_g_0*(1-p_z**2)**0.5
# Initial condition of candidate
p_x = (1-p_z**2)**(1/2.)/2**0.5
p_y = p_x
x_ana, y_ana, z_ana = analytical_solution(max_trajectory, p_z, r_g_0, number_of_steps)
data = load_data('trajectory_BP.txt', r_g)
plot_subplots(ax1, ax2, ax3, data, x_ana, y_ana, r_g, 'BP', 'brown')
data = load_data('trajectory_CK.txt', r_g)
plot_subplots(ax1, ax2, ax3, data, x_ana, y_ana, r_g, 'CK', 'dodgerblue')
ax1.set_xlabel('$x$ [pc]')
ax1.set_ylabel('$y$ [pc]')
ax1.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
ax2.set_xlabel('distance [kpc]')
ax2.set_ylabel('relative error in $r_\mathrm{g}$ [%]')
ax2.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
ax3.set_xlabel('distance [kpc]')
ax3.set_ylabel('$xy-$deviation from ana. solution [pc]')
ax3.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
fig.tight_layout()
plt.show()
"""
Explanation: Compare the Propagation in the Perpendicular Plane
Here, we can compare the numerical results of both modules with the analytical solution of the particle trajectory. First, we want to show the trajectories in the $xy$-plane (left plot). The black squares present the positions of the analytical solution for the chosen step size. In the middle plot, we can display the relative error of the gyro radius as a function of time. Finally, we can compute the distance of the numerical solutions to the analytical solution in the $xy$-plane (right plot).
End of explanation
"""
p_z = 0.01
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations/2, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations/2, p_z)
plot_figure_perp(max_trajectory, p_z, r_g_0, number_of_steps/2)
"""
Explanation: We can study different pitch angles of the particle with respect to the background magnetic field and start with $p_z/p = 0.01$ which represents a strong perpendicular component:
End of explanation
"""
p_z = 0.5
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_perp(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.99
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_perp(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.9999
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_perp(max_trajectory, p_z, r_g_0, number_of_steps)
"""
Explanation: We increase the component of the momentum along the parallel component:
End of explanation
"""
def plot_subplots_para(ax1, ax2, ax3, data, z_ana, module, color):
ax1.plot(data.X,data.Z, markersize=0.01, marker='o',color = color, label=module)
ax1.legend(markerscale=5)
ax2.scatter(data.D,data.Z,s=1, color = color, label=module)
ax2.legend(markerscale=5)
# compare with analytical solution
ax3.scatter(data.D, (data.D-z_ana),s=1, color = color, label=module)
ax3.legend(markerscale=5)
def plot_figure_para(max_trajectory, p_z, r_g_0, number_of_steps):
fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3,figsize=(12,4))
# for parallel motion corrected gyro radius
r_g = r_g_0*(1-p_z**2)**0.5
x_ana, y_ana, z_ana = analytical_solution(max_trajectory, p_z, r_g_0, number_of_steps)
data = load_data('trajectory_BP.txt', r_g)
plot_subplots_para(ax1, ax2, ax3, data, z_ana, 'BP', 'brown')
data = load_data('trajectory_CK.txt', r_g)
plot_subplots_para(ax1, ax2, ax3, data, z_ana, 'CK', 'dodgerblue')
ax1.set_xlabel('$x$ [pc]')
ax1.set_ylabel('$z$ [pc]')
ax1.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
ax2.set_xlabel('distance [kpc]')
ax2.set_ylabel('$z$ [pc]')
ax2.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
ax3.set_xlabel('distance [kpc]')
ax3.set_ylabel('difference in $z$ [pc]')
ax3.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
fig.tight_layout()
plt.show()
"""
Explanation: Conclusions:
- the Boris push propagates the particle in the perpendicular plane, as expected, on a circle with a radius that is in great agreement with the gyro radius. The problem is that the gyration frequency deviates from the analytical solution. Consequently, even though the numerical gyro radius agrees with the analytical one, the deviation in the $xy$-plane oscillates between 0 and $2r_g$. The global error is bounded.
- the Cash-Karp algorithm results in a relatively small local error in the perpendicular plane for small components of the momentum along the magnetic field. The problem is that the small local error accumulates to a large global error. In addition, the error depends on the pitch angle (angle between particle direction and background magnetic field) of the particle and thus may artificially pollute the results if we use a background field in combination with an isotropic source of particles.
Note that other implementations of the Cash-Karp algorithm can lead to different results: CRPropa prevents the candidate from losing energy due to propagation alone, which is not true in general for other implementations.
Compare the Propagation in the Parallel Direction
We now turn to the parallel direction of the particle trajectory with respect to the background magnetic field. Along this direction a constant momentum is expected, resulting in a linear increase of the $z$-position. This time behavior of the $z$-position is shown in the middle plots below. The left plot presents the movement in the $xz$-plane, where a rectangle is expected (as can be seen for the Boris push). The right plot shows the difference between the numerically calculated and theoretically expected time behavior of the distance along the $z$-axis. Zero deviation in the right plot is preferable. While the Boris push has no deviation from the prediction, the Cash-Karp algorithm shows either a rapid increase in the $z$-direction or a decelerated movement, which can result in a stagnation of the motion along the $z$-direction. The behavior depends on the ratio $p_\mathrm{\parallel} / p$.
End of explanation
"""
p_z = 0.01
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_para(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.5
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_para(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.99
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_para(max_trajectory, p_z, r_g_0, number_of_steps)
"""
Explanation: We can again study different pitch angles of the particle with respect to the background magnetic field and start with $p_z/p = 0.01$ which represents a strong perpendicular component:
End of explanation
"""
def plot_subplots_momentum(ax1, ax2, ax3, data, module, color):
ax1.scatter(data.D,((data.Px**2+data.Py**2)**0.5),s=1, color = color, label=module)
ax1.legend(markerscale=5)
ax2.scatter(data.D,data.Pz,s=1, color = color, label=module)
ax2.legend(markerscale=5)
ax3.scatter(data.D,(data.Pz-p_z)/p_z*100,s=1, color = color, label=module)
ax3.legend(markerscale=5)
def plot_figure_momentum(max_trajectory, p_z, r_g_0, number_of_steps):
fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3,figsize=(12,4))
# for parallel motion corrected gyro radius
r_g = r_g_0*(1-p_z**2)**0.5
x_ana, y_ana, z_ana = analytical_solution(max_trajectory, p_z, r_g_0, number_of_steps)
# Initial condition of candidate
p_x = (1-p_z**2)**(1/2.)/2**0.5
p_y = p_x
data = load_data('trajectory_BP.txt', r_g)
plot_subplots_momentum(ax1, ax2, ax3, data, 'BP', 'brown')
data = load_data('trajectory_CK.txt', r_g)
plot_subplots_momentum(ax1, ax2, ax3, data, 'CK', 'dodgerblue')
ax1.set_xlabel('distance [kpc]')
ax1.set_ylabel('$\sqrt{p_x^2+p_y^2}/p$')
ax1.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
ax2.set_xlabel('distance [kpc]')
ax2.set_ylabel('$p_z/p$')
ax2.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
ax3.set_xlabel('distance [kpc]')
ax3.set_ylabel('relative error in $p_z$ [%]')
ax3.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
fig.tight_layout()
plt.show()
"""
Explanation: Conclusions:
- the Boris push propagates the particle along the parallel component without any errors and is in great agreement with the analytical solution.
- the Cash-Karp algorithm, on the other hand, again has a small local error that accumulates to a large global error. Again, the error depends on the pitch angle of the particle and thus may artificially pollute the results if we use a background field in combination with an isotropic source (which would emit particles with all possible pitch angles).
If we are interested in the motion along the magnetic field lines, we should use the Boris push method instead of the Cash-Karp method.
Conservation of Momentum
CRPropa conserves the total energy and momentum of each particle during the propagation. However, only the Boris push conserves energy by construction. Urging the Cash-Karp algorithm to preserve energy and momentum leads to a shift of the energy from the perpendicular to the parallel component (or the other way round) as observed in this example of a pure background magnetic field. We have seen that the particle accelerates or slows down in the parallel direction.
We can also study this effect directly by plotting the components of the momentum as functions of time (or distance, which is equivalent for relativistic particles, $v=c$).
We expect to have a constant momentum in both the parallel ($p_z/p$) and the perpendicular ($\sqrt{p_x^2+p_y^2}/p$) direction.
End of explanation
"""
p_z = 0.01
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_momentum(max_trajectory, p_z, r_g_0, number_of_steps)
"""
Explanation: We can start with a small parallel component ($p_z$ = 0.01)...
End of explanation
"""
p_z = 0.5
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_momentum(max_trajectory, p_z, r_g_0, number_of_steps)
"""
Explanation: ...and increase it so that is already dominating $p_z/p = 0.5 > p_x/p = p_y/p$...
End of explanation
"""
p_z = 0.99
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_momentum(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.9999
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_momentum(max_trajectory, p_z, r_g_0, number_of_steps)
"""
Explanation: ...and finally study two cases of really strong parallel motion ($p_z/p = 0.99$ and $p_z/p = 0.999)$):
End of explanation
"""
p_z = 0.99
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations/5, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations/5, number_gyrations, p_z)
plot_figure_momentum(max_trajectory, p_z, r_g_0, number_of_steps/5)
"""
Explanation: Conclusions:
- The Boris push conserves the components of the momentum because the algorithm already conserves energy by construction.
- The Cash-Karp does not conserve energy by construction. CRPropa urges the propagation to conserve energy, resulting in a shift of the momentum between its components. This effect depends on the geometry of the magnetic field and the pitch angle (see above) of the particle as well as the step size (see below).
In the following two plots we can investigate the influence of the step size. We can consider for example 2 steps per gyration instead of 10 in the first figure and 50 steps per gyration in the second figure.
End of explanation
"""
p_z = 0.99
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations*5, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations*5, number_gyrations, p_z)
plot_figure_momentum(max_trajectory, p_z, r_g_0, number_of_steps*5)
"""
Explanation: If we increase the number of steps per gyration, we expect to minimize the error of the Cash-Karp algorithm. 50 steps per gyration lead to:
End of explanation
"""
def plot_subplots_3d(ax, data, x_ana, y_ana, z_ana, module, color):
ax.scatter(data.D,((x_ana-data.X)**2+(y_ana-data.Y)**2+(z_ana-data.Z)**2)**0.5, s=1,color=color, label=module)
ax.legend(markerscale=5)
def plot_figure_3d(max_trajectory, p_z, r_g_0, number_of_steps):
fig, ax = plt.subplots(figsize=(12,5))
# for parallel motion corrected gyro radius
r_g = r_g_0*(1-p_z**2)**0.5
# Initial condition of candidate
p_x = (1-p_z**2)**(1/2.)/2**0.5
p_y = p_x
x_ana, y_ana, z_ana = analytical_solution(max_trajectory, p_z, r_g_0, number_of_steps)
data = load_data('trajectory_BP.txt', r_g)
plot_subplots_3d(ax, data, x_ana, y_ana, z_ana, 'BP', 'brown')
data = load_data('trajectory_CK.txt', r_g)
plot_subplots_3d(ax, data, x_ana, y_ana, z_ana, 'CK', 'dodgerblue')
ax.set_xlabel('distance [kpc]')
ax.set_ylabel('$xyz-$deviation from analytical solution [pc]')
ax.set_title('$p_z/p$ = '+str(p_z), fontsize=18)
fig.tight_layout()
plt.show()
p_z = 0.01
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_3d(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.5
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_3d(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.99
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_3d(max_trajectory, p_z, r_g_0, number_of_steps)
p_z = 0.9999
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations, number_gyrations, p_z)
run_simulation('BP', steps_per_gyrations, number_gyrations, p_z)
plot_figure_3d(max_trajectory, p_z, r_g_0, number_of_steps)
"""
Explanation: The conservation of the momentum with the Boris push is independent of the number of steps per gyration and always holds, while it strongly depends on this number for the Cash-Karp algorithm. As expected, only the Boris push preserves the components of the momentum correctly. Two steps per gyration immediately lead to a complete change of the momentum components: instead of the parallel motion expected from the initial condition ($p_z/p$ = 0.99), the particle only moves in the perpendicular plane ($p_z/p$ = 0) after a short distance.
The behavior is much better for 50 steps per gyration (the simulation time is, however, high). The error is small at the investigated distances. Feel free to test different numbers of gyrations and different numbers of steps per gyration.
Comparison with 3D Analytic Solution
We can study the complete deviation of the numerical trajectories from the analytical solution. The following code calculates the distance between the positions calculated with both algorithms and the analytical particle position.
End of explanation
"""
p_z = 0.01
max_trajectory, p_z, r_g_0 = run_simulation('CK', steps_per_gyrations*4, number_gyrations/4, p_z)
run_simulation('BP', steps_per_gyrations*4, number_gyrations/4, p_z)
plot_figure_3d(max_trajectory, p_z, r_g_0, number_of_steps)
"""
Explanation: For long simulations, the Boris push is better and faster than the Cash-Karp algorithm in all presented cases. If we are only interested in short distances and do not mind longer simulation times, the Cash-Karp algorithm outperforms the Boris push, as presented below (40 steps per gyration and $p_z/p = 0.01$):
End of explanation
"""
### Setup turbulent magnetic field
randomSeed = 42
lMin = 0.1*pc
lMax = 5.*pc
N_grid = 256
b = 100*nG
spacing = lMin/2.
#spacing = R / 2 *pc # lMin > 2 * spacing
#vgrid = Grid3f(Vector3d(0), N_grid, spacing)
#initTurbulence(vgrid, b, lMin, lMax, -11./3., randomSeed)
#turb_field = MagneticFieldGrid(vgrid)
turbSpectrum = SimpleTurbulenceSpectrum(Brms=b, lMin=lMin, lMax=lMax, sIndex=5./3.)
gridprops = GridProperties(Vector3d(0), N_grid, spacing)
turb_field = SimpleGridTurbulence(turbSpectrum, gridprops, randomSeed)
"""
Explanation: The main findings of this tutorial are highlighted:
- The Cash-Karp algorithm has a small local error, which accumulates over time to a large global error. The components of the momentum are not conserved. Due to the small local error (for small step sizes), the module PropagationCK is, however, useful for short simulations, where the exact position is crucial.
- The Boris push has a comparably large local error, which does not accumulate over time, resulting in a small global error. The accuracy does not depend much on the step size, especially not in the direction where the magnetic field should have no influence based on analytical arguments (the $z$-axis in this example).
Therefore, for almost all simulation scenarios, the PropagationBP module outperforms the PropagationCK module in both simulation time and accuracy.
Comparison of Simulation Time
We have already compared the simulation times for a background magnetic field. Now we can compare the simulation time of both modules for a different magnetic field, namely the turbulent magnetic field. The Boris push evaluates the magnetic field only once per step, whereas the Cash-Karp algorithm evaluates it six times per step. For analytical fields, where the field is known at every location (for example the uniform background field above), the time difference is much smaller than for fields where the magnetic field vector has to be interpolated first. The latter is especially relevant for turbulent magnetic fields initialized on grids.
Turbulent Magnetic Field
The initialisation of the turbulent magnetic field vectors on the grid points takes some time:
End of explanation
"""
# ### Running the simulation with either CK or BP
def runSimulation(module):
sim = ModuleList()
steplength = 1.*pc
if module == 'CK':
sim.add(PropagationCK(turb_field,1e-4,steplength, steplength))
elif module == 'BP':
sim.add(PropagationBP(turb_field, steplength))
else:
print('no module found. Use either BP or CK.')
return
sim.add(MaximumTrajectoryLength(10*Mpc))
# Proton
c = Candidate(nucleusId(1, 1), 100*TeV, Vector3d(0,0,0), Vector3d(1,0,0))
# compare the simulation time of both propagation methods
t0 = Time.time()
sim.run(c, True)
t1 = Time.time()
print('Simulation time with module {} is {:4.4} s.'.format(module, t1-t0))
Time.sleep(4)
runSimulation('BP')
runSimulation('CK')
"""
Explanation: Finally, we can run both propagation modules and compare their simulation times:
End of explanation
"""
from crpropa import *
import time as Time
# magnetic field setup
B = JF12Field()
randomSeed = 691342
B.randomStriated(randomSeed)
B.randomTurbulent(randomSeed)
# simulation setup for fixed step size
sim_CK = ModuleList()
sim_CK.add(PropagationCK(B, 1e-4, 0.1 * parsec, 0.1 * parsec))
sim_CK.add(SphericalBoundary(Vector3d(0), 20 * kpc))
sim_BP = ModuleList()
sim_BP.add(PropagationBP(B, 0.1 * parsec))
sim_BP.add(SphericalBoundary(Vector3d(0), 20 * kpc))
class MyTrajectoryOutput(Module):
"""
Custom trajectory output: i, x, y, z
where i is a running cosmic ray number
and x,y,z are the galactocentric coordinates in [kpc].
"""
def __init__(self, fname):
Module.__init__(self)
self.fout = open(fname, 'w')
self.fout.write('#i\tX\tY\tZ\n')
self.i = 0
def process(self, c):
v = c.current.getPosition()
x = v.x / kpc
y = v.y / kpc
z = v.z / kpc
self.fout.write('%i\t%.3f\t%.3f\t%.3f\n'%(self.i, x, y, z))
if not(c.isActive()):
self.i += 1
def close(self):
self.fout.close()
output_CK = MyTrajectoryOutput('galactic_trajectories_CK.txt')
output_BP = MyTrajectoryOutput('galactic_trajectories_BP.txt')
sim_CK.add(output_CK)
sim_BP.add(output_BP)
# source setup
source = Source()
source.add(SourcePosition(Vector3d(-8.5, 0, 0) * kpc))
source.add(SourceIsotropicEmission())
source.add(SourceParticleType(-nucleusId(1,1)))
source.add(SourceEnergy(1 * EeV))
t0 = Time.time()
sim_CK.run(source, 10) # backtrack 10 random cosmic rays
t1 = Time.time()
print('Simulation time with module CK is {:4.4} s.'.format(t1-t0))
output_CK.close() # flush particles to output file
t2 = Time.time()
sim_BP.run(source, 10) # backtrack 10 random cosmic rays
t3 = Time.time()
print('Simulation time with module BP is {:4.4} s.'.format(t3-t2))
output_BP.close() # flush particles to output file
"""
Explanation: The difference in simulation time is substantial: PropagationBP is much faster than PropagationCK for the turbulent magnetic field.
Time Comparison for Galactic Trajectories
For fixed step sizes
Here, we test the time difference for the example of galactic trajectories presented in:
https://github.com/CRPropa/CRPropa3-notebooks/blob/master/galactic_trajectories/galactic_trajectories.v4.ipynb.
First, we want to compare both modules with a fixed step size:
End of explanation
"""
# magnetic field setup
B = JF12Field()
randomSeed = 691342
B.randomStriated(randomSeed)
B.randomTurbulent(randomSeed)
# simulation setup for adaptive step size for the Cash-Karp algorithm
sim_CK = ModuleList()
sim_CK.add(PropagationCK(B, 1e-4, 0.1 * parsec, 100 * parsec))
sim_CK.add(SphericalBoundary(Vector3d(0), 20 * kpc))
# simulation setup for adaptive step size for the Boris push algorithm with a higher tolerance
# so that it is as fast as the Cash-Karp algorithm.
sim_BP = ModuleList()
sim_BP.add(PropagationBP(B, 2e-3, 0.1 * parsec, 100 * parsec))
sim_BP.add(SphericalBoundary(Vector3d(0), 20 * kpc))
class MyTrajectoryOutput(Module):
"""
Custom trajectory output: i, x, y, z
where i is a running cosmic ray number
and x,y,z are the galactocentric coordinates in [kpc].
"""
def __init__(self, fname):
Module.__init__(self)
self.fout = open(fname, 'w')
self.fout.write('#i\tX\tY\tZ\n')
self.i = 0
def process(self, c):
v = c.current.getPosition()
x = v.x / kpc
y = v.y / kpc
z = v.z / kpc
self.fout.write('%i\t%.3f\t%.3f\t%.3f\n'%(self.i, x, y, z))
if not(c.isActive()):
self.i += 1
def close(self):
self.fout.close()
output_CK = MyTrajectoryOutput('galactic_trajectories_CK.txt')
output_BP = MyTrajectoryOutput('galactic_trajectories_BP.txt')
sim_CK.add(output_CK)
sim_BP.add(output_BP)
# source setup
source = Source()
source.add(SourcePosition(Vector3d(-8.5, 0, 0) * kpc))
source.add(SourceIsotropicEmission())
source.add(SourceParticleType(-nucleusId(1,1)))
source.add(SourceEnergy(1 * EeV))
t0 = Time.time()
sim_CK.run(source, 10) # backtrack 10 random cosmic rays
t1 = Time.time()
print('Simulation time with module CK is {:4.4} s.'.format(t1-t0))
output_CK.close() # flush particles to output file
t2 = Time.time()
sim_BP.run(source, 10) # backtrack 10 random cosmic rays
t3 = Time.time()
print('Simulation time with module BP is {:4.4} s.'.format(t3-t2))
output_BP.close() # flush particles to output file
"""
Explanation: PropagationBP is faster than PropagationCK for the same step sizes.
Adaptive step sizes
Finally, we can test exactly the example presented in:
https://github.com/CRPropa/CRPropa3-notebooks/blob/master/galactic_trajectories/galactic_trajectories.v4.ipynb.
End of explanation
"""
|
deepchem/deepchem | examples/tutorials/Introducing_JaxModel_and_PINNModel.ipynb | mit | !pip install --pre deepchem[jax]
import numpy as np
import functools
try:
import jax
import jax.numpy as jnp
import haiku as hk
import optax
from deepchem.models import PINNModel, JaxModel
from deepchem.data import NumpyDataset
from deepchem.models.optimizers import Adam
from jax import jacrev
has_haiku_and_optax = True
except:
has_haiku_and_optax = False
"""
Explanation: <a href="https://colab.research.google.com/github/VIGNESHinZONE/Beginners-level-ML-projects/blob/master/Presentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
import matplotlib.pyplot as plt
give_size = 10
in_given = np.linspace(-2 * np.pi, 2 * np.pi, give_size)
out_given = np.cos(in_given) + 0.1*np.random.normal(loc=0.0, scale=1, size=give_size)
# red for numpy.sin()
plt.figure(figsize=(13, 7))
plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Supervised Data"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
"""
Explanation: Given Physical Data
We have 10 random points between $x\in [-2\pi, 2\pi]$ and their corresponding values $f(x)$.
We know that the data follows an underlying physical rule:
$\frac{df(x)}{dx} = -\sin(x) $
End of explanation
"""
import matplotlib.pyplot as plt
test = np.expand_dims(np.linspace(-2.5 * np.pi, 2.5 * np.pi, 100), 1)
out_array = np.cos(test)
plt.figure(figsize=(13, 7))
plt.plot(test, out_array, color = 'blue', alpha = 0.5)
plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Actual data" ,"Supervised Data"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
"""
Explanation: By simple integration, we can solve the differential equation; the solution is
$f(x) = \cos(x)$
End of explanation
"""
# defining the Haiku model
# A neural network is defined as a function of its weights & operations.
# NN(x) = F(x, W)
# forward function defines F, which describes the mathematical operations like matrix & dot products, sigmoid functions, etc.
# W is the init_params
def f(x):
net = hk.nets.MLP(output_sizes=[256, 128, 1], activation=jax.nn.softplus)
val = net(x)
return val
init_params, forward_fn = hk.transform(f)
rng = jax.random.PRNGKey(500)
params = init_params(rng, np.random.rand(1000, 1))
"""
Explanation: Building a Simple Neural Network Model -
We define a simple Feed-forward Neural Network with 2 hidden layers of size 256 & 128 neurons.
End of explanation
"""
train_dataset = NumpyDataset(np.expand_dims(in_given, axis=1), np.expand_dims(out_given, axis=1))
rms_loss = lambda pred, tar, w: jnp.mean(optax.l2_loss(pred, tar))
# JaxModel Working
nn_model = JaxModel(
forward_fn,
params,
rms_loss,
batch_size=100,
learning_rate=0.001,
log_frequency=2)
nn_model.fit(train_dataset, nb_epochs=10000, deterministic=True)
dataset_test = NumpyDataset(test)
nn_output = nn_model.predict(dataset_test)
plt.figure(figsize=(13, 7))
plt.plot(test, out_array, color = 'blue', alpha = 0.5)
plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.plot(test, nn_output, color = 'red', marker = "o", alpha = 0.7)
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Actual data", "Vanilla NN", "Supervised Data"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
"""
Explanation: Fitting a simple Neural Network solution to the Physical Data
End of explanation
"""
def create_eval_fn(forward_fn, params):
"""
Calls the function to evaluate the model
"""
@jax.jit
def eval_model(x, rng=None):
bu = forward_fn(params, rng, x)
return jnp.squeeze(bu)
return eval_model
def gradient_fn(forward_fn, loss_outputs, initial_data):
"""
This function calls the gradient function, to implement the backpropagation
"""
boundary_data = initial_data['X0']
boundary_target = initial_data['u0']
@jax.jit
def model_loss(params, target, weights, rng, x_train):
@functools.partial(jax.vmap, in_axes=(None, 0))
def periodic_loss(params, x):
"""
diffrential equation => grad(f(x)) = - sin(x)
minimize f(x) := grad(f(x)) + sin(x)
"""
x = jnp.expand_dims(x, 0)
u_x = jacrev(forward_fn, argnums=(2))(params, rng, x)
return u_x + jnp.sin(x)
u_pred = forward_fn(params, rng, boundary_data)
loss_u = jnp.mean((u_pred - boundary_target)**2)
f_pred = periodic_loss(params, x_train)
loss_f = jnp.mean((f_pred**2))
return loss_u + loss_f
return model_loss
initial_data = {
'X0': jnp.expand_dims(in_given, 1),
'u0': jnp.expand_dims(out_given, 1)
}
opt = Adam(learning_rate=1e-3)
pinn_model= PINNModel(
forward_fn=forward_fn,
params=params,
initial_data=initial_data,
batch_size=1000,
optimizer=opt,
grad_fn=gradient_fn,
eval_fn=create_eval_fn,
deterministic=True,
log_frequency=1000)
# defining our training data: 1000 points between [-3pi, 3pi] without labels,
# which will be used for the differential-equation loss (regulariser)
X_f = np.expand_dims(np.linspace(-3 * np.pi, 3 * np.pi, 1000), 1)
dataset = NumpyDataset(X_f)
pinn_model.fit(dataset, nb_epochs=3000)
import matplotlib.pyplot as plt
pinn_output = pinn_model.predict(dataset_test)
plt.figure(figsize=(13, 7))
plt.plot(test, out_array, color = 'blue', alpha = 0.5)
plt.scatter(in_given, out_given, color = 'green', marker = "o")
# plt.plot(test, nn_output, color = 'red', marker = "x", alpha = 0.3)
plt.scatter(test, pinn_output, color = 'red', marker = "o", alpha = 0.7)
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Actual data" ,"Supervised Data", "PINN"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
"""
Explanation: Learning to fit the Data using the underlying Differential equation
Let's ensure that the final output of the neural network satisfies the differential equation, enforced as a loss term:
End of explanation
"""
plt.figure(figsize=(13, 7))
# plt.plot(test, out_array, color = 'blue', alpha = 0.5)
# plt.scatter(in_given, out_given, color = 'green', marker = "o")
plt.scatter(test, nn_output, color = 'blue', marker = "x", alpha = 0.3)
plt.scatter(test, pinn_output, color = 'red', marker = "o", alpha = 0.7)
plt.xlabel("x --> ", fontsize=18)
plt.ylabel("f (x) -->", fontsize=18)
plt.legend(["Vanilla NN", "PINN"], prop={'size': 16}, loc ="lower right")
plt.title("Data of our physical system", fontsize=18)
"""
Explanation: Comparing the results between PINN & Vanilla NN model
End of explanation
"""
|
rnikutta/rhocube | rhocube.ipynb | bsd-3-clause | from rhocube import *
from models import *
import warnings
import pylab as p
import matplotlib
%matplotlib inline
def myplot(images,titles='',interpolation='bicubic'):
if not isinstance(images,list):
images = [images]
n = len(images)
if not isinstance(titles,list):
titles = [titles]
assert (len(titles)==n)
side = 2.5
fig = p.figure(figsize=((side+0.4)*n,side))
for j,image in enumerate(images):
ax = fig.add_subplot(1,n,j+1)
im = ax.imshow(image,origin='lower',extent=[-1,1,-1,1],cmap=matplotlib.cm.Blues_r,interpolation=interpolation)
if titles is not None:
ax.set_title(titles[j],fontsize=9)
"""
Explanation: rhocube
Synopsis
rhocube is a Python code that computes 3D density models, and integrates them over $dz$, creating a 2D map.
In the case of optically thin emission, this 2D density map is proportional to the brightness map. It is ideal for fitting, e.g., observed images of LBV shells, planetary nebulae, HII regions, and many more. Its flexibility allows for applications well beyond astronomy.
Below we showcase the flexibility and the capabilities of rhocube. Follow the examples, clone the code, use it, and come up with your own 3D density models and density field transforms!
Authors
Robert Nikutta, Claudia Agliozzo
Licence and attribution
Please see README.md
Imports, setup, simple plotting function
End of explanation
"""
model = PowerLawShell(101,exponent=0.) # npix, radial exponent in 1/r^exponent (0 = constant-density shell)
args = (0.5,0.6,0,0,None) # rin, rout, xoff, yoff, weight
model(*args) # call the model instance with these parameter values
myplot(model.image) # the computed rho(x,y,z) in in model.rho; the z-integrated rho_surface(x,y) is in model.image
"""
Explanation: Cube class
All models, whether provided with rhocube or own, inherit basic functionality from the Cube class. The class provides a npix^3 cube of voxels, their 3D coordinate arrays X,Y,Z, and several methods for constructing the density field $\rho(x,y,z)$. It also provides methods for shifting the model in $x$ and $y$ directions, and, for models where this is appropriate, for rotating the model about some of the principal axes.
Available models
Several common geometrical models for the density distribution come with rhocube. Below we demonstrate how to instantiate and to use them. Further down we will show how rhocube can be easily extended with own models.
PowerLawShell
The PowerLawShell model is spherically symmetric, has an inner and outer radius, and the density falls off in radial direction as $1/r^{\rm exponent}$. Let's instantiate a constant-density shell model with inner radius = 0.5, outer radius = 0.6, in a 101^2 pixel image (=101^3 cube of computed voxels).
End of explanation
"""
args = (0.2,0.4,-0.2,0.3,None) # same model instance, but different parameter values, and shifted left and up
model(*args)
myplot(model.image)
"""
Explanation: The shell model can also be shifted in the $x$ and $y$ directions on the image. The $x$-axis points right, the $y$-axis points up.
End of explanation
"""
args = (0.2,0.8,0,0,None)
powers = (0,-1,-2) # the exponents in 1/r^exponent
images, titles = [], []
for j,pow in enumerate(powers):
mod = PowerLawShell(101,exponent=pow)
with warnings.catch_warnings(): # we're just silencing a runtime warning here
warnings.simplefilter("ignore")
mod(*args)
images.append(mod.image)
titles.append("exponent = %d" % pow)
myplot(images,titles)
"""
Explanation: Let's plot shells with different radial powers.
End of explanation
"""
model = TruncatedNormalShell(101)
args = (0.6,0.15,0.4,0.9,0,0,None) # r, sigma, rlo, rup, xoff, yoff, weight
model(*args)
myplot(model.image,("Gaussian shell",))
p.axhline(0) # indicate profile line
fig2 = p.figure()
p.plot(N.linspace(-1,1,model.npix),model.image[model.npix/2,:]) # plot the profile across the image
p.title("Profile across a central line")
"""
Explanation: TruncatedNormalShell
Another model is a Gaussian ("Normal") shell with radius r, width sigma around r, and lower & upper clip radii rlo and rup.
End of explanation
"""
model = ConstantDensityTorus(101)
args = (0.6,0.2,0,0,30,40,None) # radius, tube radius, xoff, yoff, tiltx, tiltz, weight
model(*args)
myplot(model.image)
"""
Explanation: Constant-density torus
The torus model has a torus radius and a tube radius, can be offset in $x$ and $y$, and can be tilted about the $x$-axis (points right) and $z$-axis (points to the observer). The tilts are given in degrees.
End of explanation
"""
model = ConstantDensityDualCone(101)
tilts = ([0,0],[90,0],[0,90],[50,30]) # about [x,z] axes
args = [[0.8,45]+tilts_+[0,0,None] for tilts_ in tilts] # height,theta,tiltx,tiltz,xoff,yoff,weight
images, titles = [],[]
for j,args_ in enumerate(args):
model(*args_)
images.append(model.image)
titles.append('tiltx = %d deg, tiltz = %d deg' % tuple(args[j][2:4]))
myplot(images,titles)
"""
Explanation: Constant-density dual cone
The dual cone model has a height (from center) and an opening angle theta (in degrees). Like the torus, the dual cone can be tilted about the $x$ and $z$ axes, and shifted in $x$ and $y$.
End of explanation
"""
model = Helix3D(101,envelope='dualcone')
args = (0.8,1,0.2,0,0,0,0,0,None) # height from center (= radius at top), number of turns, tube cross-section radius, x/y/z tilts, x/y offsets, weight
model(*args)
myplot(model.image)
"""
Explanation: Helix3D
The density fields computed by rhocube need not be spherically or axially symmetric. A model provided with rhocube is a 3D helical tube, i.e. a parametric curve with a given tube cross-section radius. Let's define a helix that spirals outwards along the surface of a dual cone:
End of explanation
"""
tilts = ([0,0,0],[90,0,0],[0,90,0],[0,0,90],[70,30,40]) # tilt angles about the [x,y,z] axes
args = [[0.8,1,0.2]+tilts_+[0,0,None] for tilts_ in tilts] # height, nturns, tube radius, x/y/z tilts, x/y offsets, weight
images, titles = [],[]
for j,args_ in enumerate(args):
model(*args_)
images.append(model.image)
titles.append('tilt x,y,z = %d,%d,%d deg' % (args[j][3],args[j][4],args[j][5]))
myplot(images,titles)
"""
Explanation: Caution: This class of models internally computes a k-d tree to speed up subsequent computations of the density field. Note that the k-d tree construction time is an expensive function of npix. The subsequent computations are then much faster. In other words: you might not have the patience to wait for a 301^3 voxel tree to be constructed.
End of explanation
"""
model = Helix3D(101,envelope='cylinder')
args = (0.6,1,0.2,0,0,0,0,0,None) # height, nturns, tube cross-section radius, 3 tilts, 2 offsets, weight
model(*args)
myplot(model.image)
"""
Explanation: The plot above shows the same helix rotated in 3D to five different points of view.
The helix can also spiral along the surface of a cylinder:
End of explanation
"""
class CosineShell(Cube):
def __init__(self,npix):
Cube.__init__(self,npix,computeR=True) # 'computeR' is a built-in facility, and by default True; check out also 'buildkdtree'
def __call__(self):
self.get_rho()
def get_rho(self):
self.rho = N.cos(self.R*N.pi)
self.apply_rho_ops() # shift, rotate3d, smoothing
"""
Explanation: Adding custom model classes
Custom models can be very easily added to rhocube. Just write a small Python class. Below we show in a simple example how one can construct a custom 3D density model that computes a spherically symmetric $\rho(R)$ which varies as the cosine of distance, i.e. $\rho(R)\propto\cos(R)$.
End of explanation
"""
model = CosineShell(101) # 101 pixels along each cube axis
model() # this model has no free parameters
myplot(model.image)
p.axhline(0) # indicate profile line
fig2 = p.figure()
p.plot(N.linspace(-1,1,model.npix),model.image[model.npix/2,:]) # plot the profile across the image
p.title("Profile across a central line")
"""
Explanation: You can then use it simply like this:
End of explanation
"""
model1 = PowerLawShell(101,exponent=-1.,smoothing=None) # no transform of rho(x,y,z)
model2 = PowerLawShell(101,exponent=-1.,smoothing=None,transform=PowerTransform(2.)) # i.e. rho^2(x,y,z) will be computed
args = (0.3,0.8,0,0,None) # rin, rout, xoff, yoff, weight
model1(*args)
model2(*args)
myplot([model1.image,model2.image],\
[r'no transf., $\rho[50,40,30]=$%.4f' % model1.rho[50,40,30],\
r'$\rho^2$ transf., $\rho[50,40,30]$=%.4f' % model2.rho[50,40,30]])
"""
Explanation: Special functions and methods
transform class
An optional transform function $f(\rho)$ can be passed as an argument when creating a model instance. Transform functions are implemented as simple Python classes. The transform will be applied to the density field before $z$-integration, i.e. $\int dz\ f(\rho(x,y,z))$ will be computed. For instance, squaring of the density field $\rho(x,y,z)$ before integration can be achived via:
End of explanation
"""
# make a model with density transforming as rho-->rho^2
model = PowerLawShell(101,exponent=-1,transform=PowerTransform(2.))
args = (0.3,0.8,0,0,None)
model(*args)
print "transform and _inverse functions of the model:"
print model.transform
print model.transform._inverse
"""
Explanation: Note how the $\rho$ value of voxel [50,40,30] (as an example) on the right is exactly the squared value of the model on the left.
inverse function
If the supplied $f(\rho)$ class also provides an inverse function _inverse, e.g. $f^{-1} = \sqrt{\cdot}$ when $f = (\cdot)^2$, then from a scaled image the entire cube of $\rho(x,y,z)$ voxels can be computed with correct scaling. This is very useful if the application of rhocube is to model the $z$-integrated image of some observations for instance, but the 3D density distribution is not known. For example:
End of explanation
"""
scale = 16. # this is the image scaling factor you got from fitting...
# ... and since the model knows the _inverse transform function, it can compute the required scaling of rho(x,y,z)
print "Image scale = %.3f requires rho to be scaled by %.5f" % (scale,model.transform._inverse(scale))
"""
Explanation: Now imagine that fitting the map to a data map told you that you must scale your 2D model image by a factor scale to match the 2D data image... By how much should you multiply the underlying 3D density field $\rho(x,y,z)$ in order to achieve the desired scaling?
End of explanation
"""
model = PowerLawShell(101,exponent=-1,transform=PowerTransform(3.))
args = (0.3,0.8,0,0,None)
model(*args)
scale = 16. # this is the image scaling factor you got from fitting...
print "Image scale = %.3f requires rho to be scaled by %.5f" % (scale,model.transform._inverse(scale))
"""
Explanation: Let's try the same model but with a different transform
End of explanation
"""
model0 = PowerLawShell(31,exponent=-2,smoothing=None) # no smoothing
model1 = PowerLawShell(31,exponent=-2,smoothing=1.0) # Gaussian 3D kernel smoothing, width=1sigma
model2 = PowerLawShell(31,exponent=-2,smoothing=2.0) # Gaussian 3D kernel smoothing, width=2sigma
args = (0.3,0.8,0,0,None)
model0(*args)
model1(*args)
model2(*args)
myplot([model0.image,model1.image,model2.image],\
['no smoothing','Gaussian 3D smoothing 1 sigma','Gaussian 3D smoothing 2 sigma'],interpolation='none')
print "Sum over all voxels\nwithout smoothing: %.2f\nwith smoothing=1.0: %.2f\nwith smoothing=2.0: %.2f" % (model0.rho.sum(), model1.rho.sum(), model2.rho.sum())
"""
Explanation: Several common transform classes are provided with rhocube, e.g. PowerTransform, which we used above, or LogTransform, which computes a base-base logarithm of $\rho(x,y,z)$. Another provided transform is GenericTransform, which can take any parameter-free numpy function and an inverse function (the defaults are func='sin' and inversefunc='arcsin'). Custom transform functions can be easily added.
Smoothing of the 3D density field
Upon instantiating a model, the smoothing parameter can be specified. If smoothing is a float value, it is the width (in standard deviations) of a 3D Gaussian kernel that $\rho(x,y,z)$ will be convolved with, resulting in a smoothed 3D density distribution. Smoothing does preserve the total $\sum_i (x,y,z)$, where $i$ runs over all voxels, but need not preserve $\rho$ within a single voxel. smoothing=1.0 is the default, and does not alter the resulting structure significantly. If smoothing=None, no smoothing will be performed.
End of explanation
"""
def myprint(t,model):
print "%s, weight = %r --> model.rho.sum() = %.1f" % (t,model.weight,model.rho.sum())
model = PowerLawShell(101,exponent=-1)
model(*(0.3,0.8,0,0,None))
myprint("No transform",model)
model(*(0.3,0.8,0,0,0.8))
myprint("No transform",model)
model = PowerLawShell(101,exponent=-1,transform=PowerTransform(2.))
model(*(0.3,0.8,0,0,None))
myprint("PowerTransform(2)",model)
model(*(0.3,0.8,0,0,0.8))
myprint("PowerTransform(2)",model)
"""
Explanation: Smoothing indeed preserves the total mass.
The weight parameter
When calling a model instance with parameter values, a weight argument can be given. If weight=None (the default), $\rho(x,y,z)$ will be computed by the model function as-is. If weight is a floating-point number, then the $\rho(x,y,z)$ cube will be normalized such that the sum of all voxels equals weight. Any transform function (if given) will be applied to $\rho(x,y,z)$ before normalizing to weight. Examples:
End of explanation
"""
|
matthewzhenggong/fiwt | workspace_py/RigFreeRollId-Copy1.ipynb | lgpl-3.0 | %run matt_startup
%run -i matt_utils
button_qtconsole()
#import other needed modules in all used engines
#with dview.sync_imports():
# import os
"""
Explanation: Parameter Estimation of RIG Roll Experiments
Setup and descriptions
Without ACM model
Turn off wind tunnel
Only 1DoF for RIG roll movement
Release the rig at an arbitrary angle and observe the free motion
Consider the RIG roll angle and its derivative as the States (in radians)
$$X = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Observe the RIG roll angle and its derivative as the Outputs (in degrees)
$$Z = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Use the output error method based on maximum likelihood (ML) to estimate
$$ \theta = \begin{pmatrix} F_{c} \\ f \end{pmatrix} $$
Startup computation engines
End of explanation
"""
filename = 'FIWT_Exp014_20150601105647.dat.npz'
def loadData():
# Read and parse raw data
global exp_data
exp_data = np.load(filename)
# Select colums
global T_rig, phi_rig
T_rig = exp_data['data44'][:,0]
phi_rig = exp_data['data44'][:,2]
loadData()
text_loadData()
"""
Explanation: Data preparation
Load raw data
End of explanation
"""
def checkInputOutputData():
#check inputs/outputs
fig, ax = plt.subplots(2,1,True)
ax[1].plot(T_rig,phi_rig, 'b', picker=2)
ax[1].set_ylabel('$\phi \/ / \/ ^o/s$')
ax[1].set_xlabel('$T \/ / \/ s$', picker=True)
ax[0].set_title('Output', picker=True)
fig.canvas.mpl_connect('pick_event', onPickTime)
fig.show()
display(fig)
button_CheckData()
"""
Explanation: Check time sequence and inputs/outputs
Click 'Check data' button to show the raw data.
Click on the curves to select time points and push them into the queue; click the 'T/s' label to pop the last point from the queue; and click the 'Output' title to print the time-sequence table.
End of explanation
"""
# Pick up focused time ranges
time_marks = [[341.47, 350, "free drop 1"],
[369.83, 377, "free drop 2"],
[392.38, 400, "free drop 3"],
[413.57, 421, "free drop 4"],
[439.80, 450, "free drop 5"],
[463.46, 474, "free drop 6"],
[488.48, 498, "free drop 7"],
[509.40, 520, "free drop 8"],
[531.80, 543, "free drop 9"],
[557.84, 569, "free drop 10"],
[587.60, 599, "free drop 11"],
[615.00, 627, "free drop 12"]
]
# Decide DT,U,Z and their processing method
DT=0.01
process_set = {
'U':[],
'Z':[(T_rig, phi_rig,3),],
'cutoff_freq': 10 #Hz
}
U_names = []
Y_names = Z_names = ['$\phi_{a,rig} \, / \, ^o$',
'$\dot{\phi}_{a,rig} \, / \, ^o/s$',]
display_data_prepare()
"""
Explanation: Input $\delta_T$ and focused time ranges
For each section,
* Select a time range and shift it to start from zero;
* Resample Time, Inputs, and Outputs at a uniform step $\delta_T$;
* Smooth input/observed data if flag bit0 is set;
* Take derivatives of the observed data if flag bit1 is set.
End of explanation
"""
resample(True);
"""
Explanation: Resample and filter data in sections
End of explanation
"""
%%px --local
#update common const parameters in all engines
#problem size
Nx = 2
Nu = 0
Ny = 2
Npar = 7
#reference
S_c = 0.1254 #S_c(m2)
b_c = 0.7 #b_c(m)
g = 9.81 #g(m/s2)
#other parameters
v_th = 0.5/57.3 #v_th(rad/s)
v_th2 = 0.5/57.3 #v_th(rad/s)
def x0(Z,T,U,params):
return Z[0,:]/57.3
def xdot(X,t,U,params):
F_c = params[0]
f = params[1]
Ixx = params[2]
m_T = params[3]
l_z_T = params[4]
kBrk = params[5]
phi0 = params[6]
phi = X[0]
phi_dot = X[1]
moments = -(m_T*l_z_T)*g*math.sin(phi-phi0)
abs_phi_dot = abs(phi_dot)
F = f*phi_dot
if abs_phi_dot > v_th+v_th2:
F += math.copysign(F_c, phi_dot)
elif abs_phi_dot > v_th:
F += math.copysign(F_c*(kBrk-(kBrk-1)*(abs_phi_dot-v_th)/v_th2), phi_dot)
else:
F += phi_dot/v_th*(F_c*kBrk)
moments -= F
phi_dot2 = moments/Ixx
return [phi_dot, phi_dot2]
def obs(X,T,U,params):
return X*57.3
display(HTML('<b>Constant Parameters</b>'))
table = ListTable()
table.append(['Name','Value','unit'])
table.append(['$S_c$',S_c,'$m^2$'])
table.append(['$b_c$',b_c,'$m$'])
table.append(['$g$',g,'$m/s^2$'])
display(table)
"""
Explanation: Define dynamic model to be estimated
$$\left\{\begin{matrix}
\ddot{\phi}_{rig} = \frac{M_{x,rig}}{I_{xx,rig}} \\
M_{x,rig} = M_{x,f} + M_{x,cg} \\
M_{x,f} = -F_c \, \mathrm{sign}(\dot{\phi}_{rig}) - f\dot{\phi}_{rig} \\
M_{x,cg} = -m_T g l_{zT} \sin \left ( \phi - \phi_0 \right )
\end{matrix}\right.$$
End of explanation
"""
#initial guess
param0 = [
0.05, #F_c(N*m)
0.03, #f(N*m/(rad/s))
0.1678, #Ixx(kg*m2)
7.5588, #m_T(kg)
0.0526, #l_z_T(m)
1.01, #kBrk
0, #phi0(rad)
]
param_name = ['$F_c$','$f$', '$I_{xx,rig}$', '$m_T$',r'$l_{zT}$','$k_{Brk}$','$phi_0$']
param_unit = ['$Nm$',r'$\frac{Nm}{rad/s}$', '$kg\,m^2$', 'kg', 'm','1', 'rad']
NparID = 4
opt_idx = [0,1,2,6]
opt_param0 = [param0[i] for i in opt_idx]
par_del = [0.3*1e-3, 0.3*1e-3, 0.2*1e-3, 0.0174]
bounds = [(1e-6,0.15),(1e-6,0.1), (1e-6,0.5), (-0.1,0.1)]
display_default_params()
#select sections for training
section_idx = range(8)
display_data_for_train()
#push parameters to engines
push_opt_param()
# select 2 section from training data
idx = random.sample(section_idx, 2)
interact_guess();
"""
Explanation: Initial guess
Input default values and ranges for parameters
Select sections for training
Adjust parameters based on simulation results
Decide start values of parameters for optimization
End of explanation
"""
display_preopt_params()
if True:
InfoMat = None
method = 'trust-ncg'
def hessian(opt_params, index):
global InfoMat
return InfoMat
dview['enable_infomat']=True
options={'gtol':1}
opt_bounds = None
else:
method = 'L-BFGS-B'
hessian = None
dview['enable_infomat']=False
options={'ftol':1e-4,'maxfun':200}
opt_bounds = bounds
cnt = 0
tmp_rslt = None
T0 = time.time()
print('#cnt, Time, |R|')
%time res = sp.optimize.minimize(fun=costfunc, x0=opt_param0, \
args=(opt_idx,), method=method, jac=True, hess=hessian, \
bounds=opt_bounds, options=options)
"""
Explanation: Optimize using ML
End of explanation
"""
display_opt_params()
# show result
idx = range(len(time_marks))
display_data_for_test();
update_guess();
toggle_inputs()
button_qtconsole()
"""
Explanation: Show and test results
End of explanation
"""
|
fastai/course-v3 | docs/production/lesson-1-export-jit.ipynb | apache-2.0 | %reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import io
import tarfile
import PIL
import boto3
from fastai.vision import *
path = untar_data(URLs.PETS); path
path_anno = path/'annotations'
path_img = path/'images'
fnames = get_image_files(path_img)
np.random.seed(2)
pat = re.compile(r'/([^/]+)_\d+.jpg$')
bs=64
img_size=299
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=img_size, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
"""
Explanation: fast.ai lesson 1 - training on Notebook Instance and export to torch.jit model
Overview
This notebook shows how to use the SageMaker Python SDK to train your fast.ai model on a SageMaker notebook instance then export it as a torch.jit model to be used for inference on AWS Lambda.
Set up the environment
You will need a Jupyter notebook with the boto3 and fastai libraries installed. You can do this with the command pip install boto3 fastai
This notebook was created and tested on a single ml.p3.2xlarge notebook instance.
Train your model
We are going to train a fast.ai model as per Lesson 1 of the fast.ai MOOC course locally on the SageMaker Notebook instance. We will then save the model weights and upload them to S3.
End of explanation
"""
save_texts(path_img/'models/classes.txt', data.classes)
"""
Explanation: Export model and upload to S3
Now that we have trained our model we need to export it, create a tarball of the artefacts and upload to S3.
First we need to export the class names from the data object into a text file.
End of explanation
"""
trace_input = torch.ones(1,3,img_size,img_size).cuda()
jit_model = torch.jit.trace(learn.model.float(), trace_input)
model_file='resnet50_jit.pth'
output_path = str(path_img/f'models/{model_file}')
torch.jit.save(jit_model, output_path)
"""
Explanation: Now we need to export the model in the PyTorch TorchScript format so we can load into an AWS Lambda function.
End of explanation
"""
tar_file=path_img/'models/model.tar.gz'
classes_file='classes.txt'
with tarfile.open(tar_file, 'w:gz') as f:
f.add(path_img/f'models/{model_file}', arcname=model_file)
f.add(path_img/f'models/{classes_file}', arcname=classes_file)
"""
Explanation: The next step is to create a tarfile of the exported classes file and model weights.
End of explanation
"""
s3 = boto3.resource('s3')
s3.meta.client.upload_file(str(tar_file), 'REPLACE_WITH_YOUR_BUCKET_NAME', 'fastai-models/lesson1/model.tar.gz')
"""
Explanation: Now we need to upload the model tarball to S3.
End of explanation
"""
|
zerothi/ts-tbt-sisl-tutorial | TS_01/run.ipynb | gpl-3.0 | graphene = sisl.geom.graphene(1.44, orthogonal=True)
graphene.write('STRUCT_ELEC_SMALL.fdf')
graphene.write('STRUCT_ELEC_SMALL.xyz')
elec = graphene.tile(2, axis=0)
elec.write('STRUCT_ELEC.fdf')
elec.write('STRUCT_ELEC.xyz')
"""
Explanation: First TranSiesta example.
This example will only create the structures for input into TranSiesta. I.e. sisl's capability of creating geometries with different species is a core functionality which is handy for creating geometries for Siesta/TranSiesta.
This example will teach you one of the most important aspects of performing a successful DFT+NEGF calculation: namely, that an electrode should only couple to its nearest neighbouring cell.
End of explanation
"""
device = elec.tile(3, axis=0)
device.write('STRUCT_DEVICE.fdf')
device.write('STRUCT_DEVICE.xyz')
"""
Explanation: The above two code blocks write two different electrodes that we will test in TranSiesta. In this example we will have the transport direction along the 1st lattice vector (0th index in Python). Note how TranSiesta does not limit your choice of orientation. Any direction may be used as a semi-infinite direction, just as in TBtrans.
End of explanation
"""
|
vahidpartovinia/pythonworkshop | jupyter/chapter03.ipynb | gpl-2.0 | import numpy as np
n=200
x_tr = np.linspace(0.0, 2.0, n)
y_tr = np.exp(3*x_tr)
mu, sigma = 0, 50
# seed NumPy's generator, since np.random.normal is used below
np.random.seed(1)
y = y_tr + np.random.normal(loc=mu, scale= sigma, size=len(x_tr))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(x_tr,y,".",mew=3);
plt.plot(x_tr, y_tr,"--r",lw=3);
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
"""
Explanation: Linear Regression
In the nature (or in a real life situation), it is unusual to observe a variable, let's say $y$, and its exact mathematical relationship with another variables $x$. Suppose now that we would like to model the (linear) relationship between a dependent variable $y$ and the explanatory variable $x$. Usually, a first modeling would take this form
\begin{equation}
y = \beta_{0} + \beta_{1}x + \varepsilon \ \ ,
\end{equation}
where $\varepsilon$ is a random variable that we CAN NOT observe and who adds noise to the linear relationship between the dependent and independent variables. Altough the noise $\varepsilon$ is unobservable, we can still estimate the real model. The relationship between the dependent and independent variables will be now estimated by
\begin{equation}
\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x \ \ .
\end{equation}
Q. This is great, but how can we estimate $\hat{y}$?
A. By estimating $\hat{\beta}$.
Q. How can we estimate $\hat{\beta}$?
A. Good question! But first, let's create some artificial data :D
The dependent variable $x$ will variate between 0 and 2. The TRUE relationship between $y$ and $x$ will take this form
\begin{equation}
y = e^{\ 3 x} + \varepsilon \ \ \text{where} \ \ \varepsilon \sim \mathcal{N}(0,50^2) \ \ ,
\end{equation}
where the noise $\varepsilon$ will follow a normal distribution of mean $\mu=0$ and standard deviation $\sigma=50$. Let's produce $n=200$ observations defined by the above equation.
End of explanation
"""
ignored=plt.hist(y,30, color="g")
"""
Explanation: The red curve is defined by the function
\begin{equation}
\ f(x) = e^{\ 3 x} \ \ ,
\end{equation}
and the blue dots are actually the dependent variable defined by
\begin{equation}
y = e^{\ 3 x} + \varepsilon \ \ \text{where} \ \ \varepsilon \sim \mathcal{N}(0,50^2) \ \ .
\end{equation}
Here are the available Line2D properties (see the matplotlib tutorial for more pyplot options here). For the list of random number generation functions, see here.
We can check the histogram of y (for histograms and other matplotlib plots, see here).
End of explanation
"""
import sklearn.linear_model as lm
lr=lm.LinearRegression()
#We can see that the indicated dimensions are different
#In fact, the data in the second expression is reshaped
#This is necessary if we want to use the linear regression command with scikit-learn
#Otherwise, Python raises an error
print(np.shape(x_tr))
print(np.shape(x_tr[:, np.newaxis]))
#We regress y on x, then estimate y
lr.fit(x_tr[:, np.newaxis],y)
y_hat=lr.predict(x_tr[:, np.newaxis])
plt.plot(x_tr,y,".",mew=2)
plt.plot(x_tr, y_hat,"-g",lw=4, label='Estimations with linear regression')
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
plt.legend(bbox_to_anchor=(1.8, 1.03))
"""
Explanation: Let's fit a simple linear model on $y$ and $x$.
First of all, we need to import the library scikit-learn. There are a lot of algorithms, and each of them is well explained. Stop wasting your time with cat videos, be a data scientist. We won't talk about how the model is fitted theoretically, but you may find the information here.
End of explanation
"""
#And then fit the model
lr.fit(x_tr[:, np.newaxis]**2,y)
y_hat2=lr.predict(x_tr[:, np.newaxis]**2)
#Let's check it out
plt.plot(x_tr,y,".",mew=2);
plt.plot(x_tr, y_hat,"-g",lw=4, label='Estimations with linear regression')
plt.plot(x_tr, y_hat2,"-r",lw=4, label='Estimations with linear regression (Quadratic term)');
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
plt.legend(bbox_to_anchor=(2.1, 1.03))
"""
Explanation: Well, that's not really good... We can do better!
Exercise
Replace $x$ with $x^2$ and regress $y$ as defined earlier on $x^2$.
The 'new' fitted model will be
\begin{equation}
\hat{y}=\hat{\beta}_0+ \hat{\beta}_1 x^2 \ .
\end{equation}
End of explanation
"""
index=y>90
z=(1*(y>90)-0.5)*2
#print index, z
#The tilde symbol ~ below means the logical negation of the boolean value
plt.figure()
plt.plot(x_tr[index],z[index],".r",mew=3)
plt.plot(x_tr[~index],z[~index],".b",mew=3)
plt.ylim(-1.5,1.5)
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
"""
Explanation: Question
Which one do you prefer?
Classification
In the last example, the dependent variable was continuous. Now suppose that the dependent variable $y$ is binary. This is the crossroads between regression and classification. First, let's create the binary outcome.
End of explanation
"""
lr.fit(x_tr[:, np.newaxis],z)
z_hat=lr.predict(x_tr[:, np.newaxis])
#We define a threshold above which the z estimate is classified as 1
threshold = 0
z_class= 2*(z_hat>threshold) - 1
"""
Explanation: Linear regression
Now that the new dependent variable $z$ takes binary values (-1 or 1), we can still think of it as a real-valued variable on which to do standard linear regression! The Gaussian noise model on $z$ no longer makes sense, but we can still do least-squares approximation to estimate the parameters of a linear decision boundary.
End of explanation
"""
#This function plots the classifications and the classification errors
def plotbc(x, y, z):
#Plot the classification
plt.plot(x[z==1],z[z==1],".r", markersize=3, label='True positive')
plt.plot(x[z==-1],z[z==-1],".b", markersize=3, label='True negative')
#Plot the classification errors
plt.plot(x[(z==-1) & (y==1)],z[(z==-1) & (y==1)],"^y", markersize=10, label='False negative')
plt.plot(x[(z==1) & (y==-1)],z[(z==1) & (y==-1)],"^c", markersize=10, label='False positive')
plt.legend(bbox_to_anchor=(1.55, 1.03))
plt.ylim(-1.5,1.5)
#This function simply calculates the classification rate on the training set
def precision(y, z):
print "The classification rate is :"
print np.mean(y==z)
"""
Explanation: We create two functions. The first one, called plotbc, plots the predictions made by the linear regression (and their accuracy). The second one calculates the classification rate.
End of explanation
"""
plotbc(x_tr, z, z_class)
plt.plot(x_tr,z_hat,"-g",lw=1, label='Predictions by the linear regression model');
plt.legend(bbox_to_anchor=(2, 1.03))
plt.xlabel('Explanatory variable (x)')
plt.ylabel('Dependent variable (y)')
"""
Explanation: We now call the functions previously defined.
End of explanation
"""
precision(z_class, z)
"""
Explanation: Let's compute the classification rate.
End of explanation
"""
from sklearn.metrics import confusion_matrix
confusion_matrix(z,z_class)/float(len(z))
"""
Explanation: But maybe we could get more information with the confusion matrix!
End of explanation
"""
from sklearn import linear_model, datasets
#The C parameter (Strictly positive) controls the regularization strength
#Smaller values specify stronger regularization
logreg = linear_model.LogisticRegression(C=1e5)
logreg.fit(x_tr[:, np.newaxis], z)
z_hat=logreg.predict(x_tr[:, np.newaxis])
plotbc(x_tr, z, z_hat)
plt.plot(x_tr,z_hat,"-g",lw=1, label='Predictions by the logistic regression');
plt.legend(bbox_to_anchor=(2.3, 1.03))
confusion_matrix(z,z_hat)/float(len(z))
"""
Explanation: Logistic regression
We will now perform classification using logistic regression. The modelisation behind the logistic regression is
\begin{equation}
\mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)}) = \frac{1}{1+e^{-\beta_{0}-\beta_{1}x_{1}^{(i)}}}
\end{equation}
where $\boldsymbol{\beta}$ is estimated with maximum likelihood
\begin{equation}
\widehat{\boldsymbol{\beta}} = \arg\max_{\boldsymbol{\beta}} \prod_{i=1}^{n} \mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)})^{y^{(i)}} \big(1-\mathbb{P}(y^{(i)} = 1 \ | \ \ x^{(i)})\big)^{1-y^{(i)}} \ \ .
\end{equation}
End of explanation
"""
from sklearn.model_selection import train_test_split
x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=3)
"""
Explanation: The classification rate seems slightly better...
Cross-validation and estimation of generalization error
In a classification problem, where we try to predict a discrete outcome given some observed information, the classification error rate calculated on the dataset which we used to fit (or train) the model should be lower than the error rate calculated on an independent, external dataset. Thus, we cannot rely on the classification error rate based on the dataset on which we fitted the model, because it may be overoptimistic. In machine learning, once we have trained the model on a specific dataset (the TRAIN set), we wish to test its reactions to new observations. This specific set of new observations is called the VALIDATION set. One lazy way to get a new set of observations is to split the original dataset in two parts: the training set (composed of 80% of the observations) and the validation set.
Question
What percentage of the original dataset's observations is included in the validation set?
Now, let's just split the original dataset with the train_test_split function.
End of explanation
"""
clf = logreg.fit(x_train[:, np.newaxis], z_train)
#z_hat_train=logreg.predict(x_train[:, np.newaxis])
#z_hat_test=logreg.predict(x_test[:, np.newaxis])
score_train = clf.score(x_train[:, np.newaxis], z_train)
score_valid = clf.score(x_valid[:, np.newaxis], z_valid)
print("The prediction error rate on the train set is : ")
print(score_train)
print("The prediction error rate on the test set is : ")
print(score_valid)
"""
Explanation: This being said, we can now calculate the classification accuracy on the train and the validation sets.
Question
a) Which set should present the best performance?
b) Can we rely on these results? Why?
End of explanation
"""
#Number of iterations
n=1000
score_train_vec_log = np.zeros(n)
score_valid_vec_log = np.zeros(n)
#Loop of iterations
for k in np.arange(n):
x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=k)
clf = logreg.fit(x_train[:, np.newaxis], z_train)
score_train_vec_log[k] = clf.score(x_train[:, np.newaxis], z_train)
score_valid_vec_log[k] = clf.score(x_valid[:, np.newaxis], z_valid)
print("The average prediction error rate on the train set is : ")
print(np.mean(score_train_vec_log))
print("The average prediction error rate on the test set is : ")
print(np.mean(score_valid_vec_log))
"""
Explanation: We created the train and validation sets randomly. Hence, considering that the original dataset has a small number of observations (200), the division of the data may favor either the train or the validation dataset.
One way to counteract the randomness of the data division is to simply iterate the above command. Here are the main steps:
1) Repeat the division of the original dataset into train and validation sets a large number of times.
2) For each division (or iteration), fit the model and then calculate the classification performance on the corresponding train and validation sets.
3) Average the performances over all train and validation sets.
End of explanation
"""
img = plt.imread("../data/hyperplanes.png")
plt.imshow(img)
plt.axis("off")
"""
Explanation: Support Vector Machines (SVM)
We just learned 2 linear classifiers : univariate regression and logistic regression. Linear classifiers are a famous algorithms family where classification of the data is done with a descriminative hyperplane. In the present section, we will talk about the Support Vector Machine (SVM), a widely used (linear) classifier in machine learning. Let's start with some some textbook problems...
There are two classes of observations, shown in blue and in purple, each of which has measurements on two variables. Three separating hyperplanes, out of many possible, are shown in black. Which one should we use?
End of explanation
"""
img = plt.imread("../data/maximal.margin.png")
plt.imshow(img)
plt.axis("off")
"""
Explanation: The maximal margin hyperplane is shown as a solid black line. The margin is the distance from the solid line to either of the dashed lines. The two blue points and the purple point that lie on the dashed lines are the support vectors. The blue and the purple grid indicates the decision rule made by a classifier based on this separating hyperplane.
End of explanation
"""
img = plt.imread("../data/non.separable.png")
plt.imshow(img)
plt.axis("off")
"""
Explanation: Some motivations behind the SVM method have their roots in the concept of linear separability. Sometimes the data is not linearly separable, and thus we can't use a maximal margin classifier.
End of explanation
"""
img = plt.imread("../data/support.vector.png")
plt.imshow(img)
plt.axis("off")
"""
Explanation: A good strategy could be to consider a classifier based on a hyperplane that does not perfectly separate the two classes. Thus, it could be worthwhile to misclassify some observations in order to do a better job in classifying the remaining observations. We call this technique the support vector classifier (with soft margin).
End of explanation
"""
img = plt.imread("../data/kernel.example.1.png")
plt.imshow(img)
plt.axis("off")
"""
Explanation: Sometimes, good margins don't even exist and support vector classifiers are useless.
End of explanation
"""
img = plt.imread("../data/kernel.example.2.png")
plt.imshow(img)
plt.axis("off")
"""
Explanation: In this specific case, a smart strategy would be to enlarge the feature space with a non-linear transformation. Then, find a good margin.
End of explanation
"""
img = plt.imread("../data/kernel.example.3.png")
plt.imshow(img)
plt.axis("off")
"""
Explanation: The new margin (in $\mathbb{R}^2$) corresponds to the following margins in $\mathbb{R}$.
End of explanation
"""
n=100
np.random.seed(0)
X=np.vstack((np.random.multivariate_normal([1,1],[[1,0],[0,1]] ,n), np.random.multivariate_normal([3,3],[[1,0],[0,1]] ,n)))
Y =np.array([0] * n + [1] * n)
index=(Y==0)
plt.scatter(X[index,0], X[index,1], color="r", label='X1 distribution')
plt.scatter(X[~index,0], X[~index,1], color="b", label='X2 distribution')
plt.xlabel('First dimension')
plt.ylabel('Second dimension')
plt.legend(bbox_to_anchor=(1.5, 1.03))
from sklearn import svm
clf = svm.SVC(kernel="rbf", gamma=2 ,C=10).fit(X,Y)
Z=clf.predict(X)
index=(Z==0)
plt.scatter(X[index,0], X[index,1], edgecolors="b")
plt.scatter(X[~index,0], X[~index,1], edgecolors="r")
xx, yy = np.meshgrid(np.linspace(-3, 6, 500), np.linspace(-3, 6, 500))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=1, cmap=plt.cm.seismic)
plt.scatter(X[:, 0], X[:, 1], c=Y, s=2, alpha=0.9, cmap=plt.cm.spectral)
"""
Explanation: Support Vector Machines (SVM) with RBF kernel
We will now use the Support Vector Machines (SVM) method to perform classification. But first, let's create some new artificial data. The explanatory variable's distribution is a gaussian mixture where
\begin{equation}
X_{1} \sim \mathcal{N}_{2}(\mu = (1,1) , \sigma^{2} = I_{2}) \\
X_{2} \sim \mathcal{N}_{2}(\mu = (3,3) , \sigma^{2} = I_{2}) \ \ .
\end{equation}
We finally plot the data. We can observe that each distribution is associated with a specific color.
End of explanation
"""
#Number of iterations
n=1000
score_train_vec_svm = np.zeros(n)
score_valid_vec_svm = np.zeros(n)
#Loop of iterations
for k in np.arange(n):
x_train, x_valid, z_train, z_valid = train_test_split(x_tr, z, test_size=0.2, random_state=k)
#Command for the SVM
clf = svm.SVC(kernel='rbf', C=.1, gamma=3.2).fit(x_train[:, np.newaxis], z_train)
score_train_vec_svm[k] = clf.score(x_train[:, np.newaxis], z_train)
score_valid_vec_svm[k] = clf.score(x_valid[:, np.newaxis], z_valid)
print("The SVM's average prediction error rate on the train set is : ")
print(np.mean(score_train_vec_svm))
print("The SVM's average prediction error rate on the test set is : ")
print(np.mean(score_valid_vec_svm))
print("The logistic regression's average prediction error rate on the train set is : ")
print(np.mean(score_train_vec_log))
print("The logistic regression's average prediction error rate on the test set is : ")
print(np.mean(score_valid_vec_log))
"""
Explanation: for the list of colormap options see colormap help and to learn more about SVM and related options check svm tutorial and support vector classification (svc) examples.
Back to our original problem
Finally, we can compare the logistic regression's and the SVM's performances on the first dataset that we created earlier.
End of explanation
"""
|
davebshow/DH3501 | class5.ipynb | mit | # Usually we import networkx as nx.
import networkx as nx
# Instantiate a graph.
g = nx.Graph()
# Add a node.
g.add_node(1)
# Add a list of nodes.
g.add_nodes_from([2, 3, 4, 5])
# Add an edge.
g.add_edge(1, 2)
# Add a list of edges.
g.add_edges_from([(2, 3), (3, 4)])
# Remove a node.
g.remove_node(5)
"""
Explanation: <div align="left">
<h4><a href="index.ipynb">RETURN TO INDEX</a></h4>
</div>
<div align="center">
<h1><a href="index.ipynb">DH3501: Advanced Social Networks</a><br/><br/><em>Class 5</em>: NetworkX and Centrality</h1>
</div>
<div style="float:left">
<b>Western University</b><br/>
<b>Department of Modern Languages and Literatures</b><br/>
<b>Digital Humanities – DH 3501</b><br/>
<br/>
<b>Instructor</b>: David Brown<br/>
<b>E-mail</b>: <a href="mailto:dbrow52@uwo.ca">dbrow52@uwo.ca</a><br/>
<b>Office</b>: AHB 1R14<br/>
</div>
<div style="float:left">
<img style="width:200px; margin-left:100px" src="http://www.bsr.org/images/blog/networks.jpg" />
</div>
So...impressions on the NetworkX API...Hard? Easy? Let's go over the basics.
End of explanation
"""
# Your code goes here.
"""
Explanation: What about removing an edge? Multiple nodes? Multiple edges? All nodes and edges? Use the following cell to figure out how to delete nodes and edges and to clear the entire graph.
End of explanation
"""
g.add_edges_from([(1,2),(1,3)])
g.add_node(1)
g.add_edge(1,2)
g.add_node("spam") # adds node "spam"
g.add_nodes_from("spam") # adds 4 nodes: 's', 'p', 'a', 'm'
# Do you remember how to look at the nodes in the graph? How about the edges?
# Your code goes here.
"""
Explanation: What happens if we add multiple nodes with the same name?
End of explanation
"""
g.add_node(4, {"name": "Joebob"})
g.node[4]
g.node[4]["name"] = "Dave"
g.node[4]["job"] = "Ph.D. Student"
g.node[4]
# Add an edge with attributes.
g.add_edge("s", "p", {"type": "knows"})
g["s"]["p"]
g["s"]["p"]["type"] = "follows"
g["s"]["p"]["weight"] = 1
g["s"]["p"]
"""
Explanation: Node and edge attributes
End of explanation
"""
rand = nx.gnp_random_graph(20, 0.25)
sf = nx.scale_free_graph(20)
"""
Explanation: Graph generators
End of explanation
"""
# Config environment visualization.
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 17, 12
nx.draw_networkx(rand)
nx.draw(sf)
"""
Explanation: Drawing
End of explanation
"""
g = nx.scale_free_graph(50)
dc = nx.degree_centrality(g)
idc = nx.in_degree_centrality(g)
odc = nx.out_degree_centrality(g)
"""
Explanation: Analytics
NetworkX has tons of analytics algorithms ready to go out of the box. There are too many to go over in class, but we'll be seeing them throughout the term. Today, we will focus on the concpet of centrality, how it is measured, and how it is interpreted.
Centrality
Centrality is a micro measure that compares a node to all of the other nodes in the network. It is a way to measure the power, influence, and overall importance of a node in a network. However, it is important to remember that measures of centrality must be interpreted within the context of the network i.e., a measure of centrality does not have the same connotations for every network.
There are four main groups or types of centrality measurement:
Degree - how connected is a node (how many adjacent edges does it posses).
Closeness - how close is a node to all of the other nodes in the graph, how easily can it access them.
Betweenness - how important a node is in terms of connecting other nodes.
Neighbors characteristics - how important or central a node's neighbors are.
In small groups, discuss the different measures of centrality in terms of how they were presented in T & K. What do these measures mean? What examples did they use in the book? Can you think of some of your own examples of how these measures could be used and how you would interpret them?
<img style="float:left; width: 500px" src="http://4.bp.blogspot.com/-TiK9BLqwncU/T2xAGQVMiRI/AAAAAAAAYQ4/6X66VqxiO54/s1600/0familyphoto.png" />
Degree Centrality
End of explanation
"""
cc = nx.closeness_centrality(g)
"""
Explanation: Closeness Centrality
End of explanation
"""
bc = nx.betweenness_centrality(g)
"""
Explanation: Betweenness Centrality
End of explanation
"""
import pandas as pd
cent_df = pd.DataFrame({"deg": dc, "indeg": idc, "outdeg": odc, "close": cc, "betw": bc})
cent_df.describe()
"""
Explanation: Centrality summary
End of explanation
"""
cent_df.hist()
"""
Explanation: What's a histogram?
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/27d6cff3f645408158cdf4f3f05a21b6/30_eeg_erp.ipynb | bsd-3-clause | import os
import numpy as np
import matplotlib.pyplot as plt
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, preload=False)
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw-eve.fif')
events = mne.read_events(sample_data_events_file)
raw.crop(tmax=90) # in seconds; happens in-place
# discard events >90 seconds (not strictly necessary: avoids some warnings)
events = events[events[:, 0] <= raw.last_samp]
"""
Explanation: EEG processing and Event Related Potentials (ERPs)
This tutorial shows how to perform standard ERP analyses in MNE-Python. Most of
the material here is covered in other tutorials too, but for convenience the
functions and methods most useful for ERP analyses are collected here, with
links to other tutorials where more detailed information is given.
As usual we'll start by importing the modules we need and loading some example
data. Instead of parsing the events from the raw data's :term:stim channel
(like we do in this tutorial <tut-events-vs-annotations>), we'll load
the events from an external events file. Finally, to speed up computations so
our documentation server can handle them, we'll crop the raw data from ~4.5
minutes down to 90 seconds.
End of explanation
"""
raw.pick(['eeg', 'eog']).load_data()
raw.info
"""
Explanation: The file that we loaded has already been partially processed: 3D sensor
locations have been saved as part of the .fif file, the data have been
low-pass filtered at 40 Hz, and a common average reference is set for the
EEG channels, stored as a projector (see section-avg-ref-proj in the
tut-set-eeg-ref tutorial for more info about when you may want to do
this). We'll discuss how to do each of these below.
Since this is a combined EEG+MEG dataset, let's start by restricting the data
to just the EEG and EOG channels. This will cause the other projectors saved
in the file (which apply only to magnetometer channels) to be removed. By
looking at the measurement info we can see that we now have 59 EEG channels
and 1 EOG channel.
End of explanation
"""
channel_renaming_dict = {name: name.replace(' 0', '').lower()
for name in raw.ch_names}
_ = raw.rename_channels(channel_renaming_dict) # happens in-place
"""
Explanation: Channel names and types
In practice it's quite common to have some channels labelled as EEG that are
actually EOG channels. ~mne.io.Raw objects have a
~mne.io.Raw.set_channel_types method that you can use to change a channel
that is labeled as eeg into an eog type. You can also rename channels
using the ~mne.io.Raw.rename_channels method. Detailed examples of both of
these methods can be found in the tutorial tut-raw-class. In this data
the channel types are all correct already, so for now we'll just rename the
channels to remove a space and a leading zero in the channel names, and
convert to lowercase:
End of explanation
"""
raw.plot_sensors(show_names=True)
fig = raw.plot_sensors('3d')
"""
Explanation: Channel locations
The tutorial tut-sensor-locations describes MNE-Python's handling of
sensor positions in great detail. To briefly summarize: MNE-Python
distinguishes :term:montages <montage> (which contain sensor positions in
3D: x, y, z, in meters) from :term:layouts <layout> (which
define 2D arrangements of sensors for plotting approximate overhead diagrams
of sensor positions). Additionally, montages may specify idealized sensor
positions (based on, e.g., an idealized spherical headshape model) or they
may contain realistic sensor positions obtained by digitizing the 3D
locations of the sensors when placed on the actual subject's head.
This dataset has realistic digitized 3D sensor locations saved as part of the
.fif file, so we can view the sensor locations in 2D or 3D using the
~mne.io.Raw.plot_sensors method:
End of explanation
"""
for proj in (False, True):
fig = raw.plot(n_channels=5, proj=proj, scalings=dict(eeg=50e-6))
fig.subplots_adjust(top=0.9) # make room for title
ref = 'Average' if proj else 'No'
fig.suptitle(f'{ref} reference', size='xx-large', weight='bold')
"""
Explanation: If you're working with a standard montage like the 10-20 <ten_twenty_>_
system, you can add sensor locations to the data like this:
raw.set_montage('standard_1020'). See tut-sensor-locations for
info on what other standard montages are built-in to MNE-Python.
If you have digitized realistic sensor locations, there are dedicated
functions for loading those digitization files into MNE-Python; see
reading-dig-montages for discussion and dig-formats for a list
of supported formats. Once loaded, the digitized sensor locations can be
added to the data by passing the loaded montage object to
raw.set_montage().
Setting the EEG reference
As mentioned above, this data already has an EEG common average reference
added as a :term:projector. We can view the effect of this on the raw data
by plotting with and without the projector applied:
End of explanation
"""
raw.filter(l_freq=0.1, h_freq=None)
"""
Explanation: The referencing scheme can be changed with the function
mne.set_eeg_reference (which by default operates on a copy of the data)
or the raw.set_eeg_reference() <mne.io.Raw.set_eeg_reference> method (which
always modifies the data in-place). The tutorial tut-set-eeg-ref shows
several examples of this.
Filtering
MNE-Python has extensive support for different ways of filtering data. For a
general discussion of filter characteristics and MNE-Python defaults, see
disc-filtering. For practical examples of how to apply filters to your
data, see tut-filter-resample. Here, we'll apply a simple high-pass
filter for illustration:
End of explanation
"""
np.unique(events[:, -1])
"""
Explanation: Evoked responses: epoching and averaging
The general process for extracting evoked responses from continuous data is
to use the ~mne.Epochs constructor, and then average the resulting epochs
to create an ~mne.Evoked object. In MNE-Python, events are represented as
a :class:NumPy array <numpy.ndarray> of sample numbers and integer event
codes. The event codes are stored in the last column of the events array:
End of explanation
"""
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'face': 5, 'buttonpress': 32}
"""
Explanation: The tut-event-arrays tutorial discusses event arrays in more detail.
Integer event codes are mapped to more descriptive text using a Python
:class:dictionary <dict> usually called event_id. This mapping is
determined by your experiment code (i.e., it reflects which event codes you
chose to use to represent different experimental events or conditions). For
the sample-dataset, the data has the following mapping:
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.3, tmax=0.7,
preload=True)
fig = epochs.plot()
"""
Explanation: Now we can extract epochs from the continuous data. An interactive plot
allows you to click on epochs to mark them as "bad" and drop them from the
analysis (it is not interactive on the documentation website, but will be
when you run epochs.plot() <mne.Epochs.plot> in a Python console).
End of explanation
"""
reject_criteria = dict(eeg=100e-6, # 100 µV
eog=200e-6) # 200 µV
_ = epochs.drop_bad(reject=reject_criteria)
"""
Explanation: It is also possible to automatically drop epochs, when first creating them or
later on, by providing maximum peak-to-peak signal value thresholds (pass to
the ~mne.Epochs constructor as the reject parameter; see
tut-reject-epochs-section for details). You can also do this after
the epochs are already created, using the ~mne.Epochs.drop_bad method:
End of explanation
"""
epochs.plot_drop_log()
"""
Explanation: Next we generate a barplot of which channels contributed most to epochs
getting rejected. If one channel is responsible for lots of epoch rejections,
it may be worthwhile to mark that channel as "bad" in the ~mne.io.Raw
object and then re-run epoching (fewer channels w/ more good epochs may be
preferable to keeping all channels but losing many epochs). See
tut-bad-channels for more info.
End of explanation
"""
l_aud = epochs['auditory/left'].average()
l_vis = epochs['visual/left'].average()
"""
Explanation: Another way in which epochs can be automatically dropped is if the
~mne.io.Raw object they're extracted from contains :term:annotations that
begin with either bad or edge ("edge" annotations are automatically
inserted when concatenating two separate ~mne.io.Raw objects together). See
tut-reject-data-spans for more information about annotation-based
epoch rejection.
Now that we've dropped the bad epochs, let's look at our evoked responses for
some conditions we care about. Here the ~mne.Epochs.average method will
create an ~mne.Evoked object, which we can then plot. Notice that we
select which condition we want to average using the square-bracket indexing
(like a :class:dictionary <dict>); that returns a smaller epochs object
containing just the epochs from that condition, to which we then apply the
~mne.Epochs.average method:
End of explanation
"""
fig1 = l_aud.plot()
fig2 = l_vis.plot(spatial_colors=True)
"""
Explanation: These ~mne.Evoked objects have their own interactive plotting method
(though again, it won't be interactive on the documentation website):
click-dragging a span of time will generate a scalp field topography for that
time span. Here we also demonstrate built-in color-coding the channel traces
by location:
End of explanation
"""
l_aud.plot_topomap(times=[-0.2, 0.1, 0.4], average=0.05)
"""
Explanation: Scalp topographies can also be obtained non-interactively with the
~mne.Evoked.plot_topomap method. Here we display topomaps of the average
field in 50 ms time windows centered at -200 ms, 100 ms, and 400 ms.
End of explanation
"""
l_aud.plot_joint()
"""
Explanation: Considerable customization of these plots is possible, see the docstring of
~mne.Evoked.plot_topomap for details.
There is also a built-in method for combining "butterfly" plots of the
signals with scalp topographies, called ~mne.Evoked.plot_joint. Like
~mne.Evoked.plot_topomap you can specify times for the scalp topographies
or you can let the method choose times automatically, as is done here:
End of explanation
"""
for evk in (l_aud, l_vis):
evk.plot(gfp=True, spatial_colors=True, ylim=dict(eeg=[-12, 12]))
"""
Explanation: Global field power (GFP)
Global field power :footcite:Lehmann1980,Lehmann1984,Murray2008 is,
generally speaking, a measure of agreement of the signals picked up by all
sensors across the entire scalp: if all sensors have the same value at a
given time point, the GFP will be zero at that time point; if the signals
differ, the GFP will be non-zero at that time point. GFP
peaks may reflect "interesting" brain activity, warranting further
investigation. Mathematically, the GFP is the population standard
deviation across all sensors, calculated separately for every time point.
You can plot the GFP using evoked.plot(gfp=True) <mne.Evoked.plot>. The GFP
trace will be black if spatial_colors=True and green otherwise. The EEG
reference does not affect the GFP:
End of explanation
"""
l_aud.plot(gfp='only')
"""
Explanation: To plot the GFP by itself you can pass gfp='only' (this makes it easier
to read off the GFP data values, because the scale is aligned):
End of explanation
"""
gfp = l_aud.data.std(axis=0, ddof=0)
# Reproducing the MNE-Python plot style seen above
fig, ax = plt.subplots()
ax.plot(l_aud.times, gfp * 1e6, color='lime')
ax.fill_between(l_aud.times, gfp * 1e6, color='lime', alpha=0.2)
ax.set(xlabel='Time (s)', ylabel='GFP (µV)', title='EEG')
"""
Explanation: As stated above, the GFP is the population standard deviation of the signal
across channels. To compute it manually, we can leverage the fact that
evoked.data <mne.Evoked.data> is a :class:NumPy array <numpy.ndarray>,
and verify by plotting it using matplotlib commands:
End of explanation
"""
left = ['eeg17', 'eeg18', 'eeg25', 'eeg26']
right = ['eeg23', 'eeg24', 'eeg34', 'eeg35']
left_ix = mne.pick_channels(l_aud.info['ch_names'], include=left)
right_ix = mne.pick_channels(l_aud.info['ch_names'], include=right)
"""
Explanation: Analyzing regions of interest (ROIs): averaging across channels
Since our sample data is responses to left and right auditory and visual
stimuli, we may want to compare left versus right ROIs. To average across
channels in a region of interest, we first find the channel indices we want.
Looking back at the 2D sensor plot above, we might choose the following for
left and right ROIs:
End of explanation
"""
roi_dict = dict(left_ROI=left_ix, right_ROI=right_ix)
roi_evoked = mne.channels.combine_channels(l_aud, roi_dict, method='mean')
print(roi_evoked.info['ch_names'])
roi_evoked.plot()
"""
Explanation: Now we can create a new Evoked with 2 virtual channels (one for each ROI):
End of explanation
"""
evokeds = dict(auditory=l_aud, visual=l_vis)
picks = [f'eeg{n}' for n in range(10, 15)]
mne.viz.plot_compare_evokeds(evokeds, picks=picks, combine='mean')
"""
Explanation: Comparing conditions
If we wanted to compare our auditory and visual stimuli, a useful function is
mne.viz.plot_compare_evokeds. By default this will combine all channels in
each evoked object using global field power (or RMS for MEG channels); here
instead we specify to combine by averaging, and restrict it to a subset of
channels by passing picks:
End of explanation
"""
evokeds = dict(auditory=list(epochs['auditory/left'].iter_evoked()),
visual=list(epochs['visual/left'].iter_evoked()))
mne.viz.plot_compare_evokeds(evokeds, combine='mean', picks=picks)
"""
Explanation: We can also easily get confidence intervals by treating each epoch as a
separate observation using the ~mne.Epochs.iter_evoked method. A confidence
interval across subjects could also be obtained, by passing a list of
~mne.Evoked objects (one per subject) to the
~mne.viz.plot_compare_evokeds function.
End of explanation
"""
aud_minus_vis = mne.combine_evoked([l_aud, l_vis], weights=[1, -1])
aud_minus_vis.plot_joint()
"""
Explanation: We can also compare conditions by subtracting one ~mne.Evoked object from
another using the mne.combine_evoked function (this function also allows
pooling of epochs without subtraction).
End of explanation
"""
grand_average = mne.grand_average([l_aud, l_vis])
print(grand_average)
"""
Explanation: <div class="alert alert-danger"><h4>Warning</h4><p>The code above yields an **equal-weighted difference**. If you have
imbalanced trial numbers, you might want to equalize the number of events
per condition first by using `epochs.equalize_event_counts()
<mne.Epochs.equalize_event_counts>` before averaging.</p></div>
Grand averages
To compute grand averages across conditions (or subjects), you can pass a
list of ~mne.Evoked objects to mne.grand_average. The result is another
~mne.Evoked object.
End of explanation
"""
list(event_dict)
"""
Explanation: For combining conditions it is also possible to make use of :term:HED
tags in the condition names when selecting which epochs to average. For
example, we have the condition names:
End of explanation
"""
epochs['auditory'].average()
"""
Explanation: We can select the auditory conditions (left and right together) by passing:
End of explanation
"""
|
enakai00/jupyter_NikkeiLinux | No5/Figure11 - derivative_animation.ipynb | apache-2.0 | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
%matplotlib nbagg
"""
Explanation: [4-1] Import the modules used to create the animation and switch matplotlib into a mode that can display animations.
End of explanation
"""
def derivative(f, filename):
fig = plt.figure(figsize=(4,4))
images = []
x0, d = 0.5, 0.5
for _ in range(10):
subplot = fig.add_subplot(1,1,1)
subplot.set_xlim(0, 1)
subplot.set_ylim(0, 1)
slope = (f(x0+d)-f(x0)) / d
linex = np.linspace(0, 1, 100)
        image1, = subplot.plot(linex, f(linex), color='blue')
        image2 = subplot.scatter([x0, x0+d], [f(x0), f(x0+d)])
        def g(x):
            # tangent-line approximation through (x0, f(x0)) with the current slope
            return f(x0) + slope * (x-x0)
        image3, = subplot.plot([0, 1], [g(0), g(1)],
                               linewidth=1, color='red')
        image4 = subplot.text(0.3, 1.05, 'slope = %f' % slope)
        images.append([image1, image2, image3, image4])
d *= 0.5
ani = animation.ArtistAnimation(fig, images, interval=1000)
ani.save(filename, writer='imagemagick', fps=1)
return ani
"""
Explanation: [4-2] Define the function derivative, which draws the tangent line at x=0.5 and finds its slope.
End of explanation
"""
def f(x):
y = x*x
return y
derivative(f, 'derivative01.gif')
"""
Explanation: [4-3] Define the quadratic function y = x*x and call the derivative function.
This creates the animated GIF file "derivative01.gif".
End of explanation
"""
|
google/applied-machine-learning-intensive | content/04_classification/03_classification_with_tensorflow/colab.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/04_classification/03_classification_with_tensorflow/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
"""
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
"""
Explanation: Classification with TensorFlow
By now you should be familiar with classification in scikit-learn. In this Colab we will explore another commonly used tool for classification and machine learning: TensorFlow.
The Dataset
The dataset that we'll be using is the UCI Heart Disease dataset. The dataset contains health information about patients, as well as a "presence of heart disease" indicator.
The original dataset contains over 70 different attributes and five heart disease classifications. For this lab we'll use a simplified version of the dataset hosted on Kaggle.
This simplified version of the dataset contains 13 attributes and a yes/no indicator for the presence or absence of heart disease.
The columns are below:
Feature | Description
--------|--------------
age | age in years
sex | sex<br>0 = female<br>1 = male
cp | chest pain type<br>1 = typical angina<br>2 = atypical angina<br>3 = non-anginal pain<br>4 = asymptomatic
trestbps | resting blood pressure in Hg
chol | serum cholesterol in mg/dl
fbs | is fasting blood sugar > 120 mg/dl<br>0 = false<br>1 = true
restecg | results of a resting electrocardiograph<br>0 = normal<br>1 = ST-T wave abnormality<br>2 = left ventricular hypertrophy
thalach | max heart rate
exang | exercise induced angina<br>0 = no<br>1 = yes
oldpeak | measurement of an abnormal ST depression
slope | slope of peak of exercise ST segment<br>1 = upslope<br>2 = flat<br>3 = downslope
ca | count of major blood vessels colored by fluoroscopy<br>0, 1, 2, 3, or 4
thal | presence heart condition<br>0 = unknown<br>1 = normal<br>2 = fixed defect<br>3 = reversible defect
The heart disease indicator is a 0 for no disease and a 1 for heart disease.
Let's assume we have been given this dataset by the Cleveland Clinic and have been asked to build a model that can predict if their patients have heart disease or not. The purpose of the model is to assist doctors in making diagnostic decisions faster.
Exercise 1: Ethical Considerations
Before we dive in, let's take a moment to think about the dataset and the larger problem that we are trying to solve. We have 13 data attributes related to an individual's health, as well as an indicator that determines if the patient has heart disease.
Question 1
Are there any attributes in the data that we should pay special attention to? Imagine a case where the data is unbalanced in some way. How might that affect the model and the doctor/patient experience?
Student Solution
Your answer goes here
Question 2
Assuming we can get a reasonably well-performing model deployed, is there potential for problems with how the predictions from this model are interpreted and used?
Student Solution
Your answer goes here
Exploratory Data Analysis
Let's download the data and take a look at what we are working with.
Upload your kaggle.json file and run the code below.
End of explanation
"""
!kaggle datasets download ronitf/heart-disease-uci
!ls
"""
Explanation: And then download the dataset.
End of explanation
"""
import pandas as pd
df = pd.read_csv('heart-disease-uci.zip')
df.sample(5)
"""
Explanation: And load the data into a DataFrame and take a peek.
End of explanation
"""
df.describe()
"""
Explanation: We can see that all of the data is numeric, but varies a bit in scale.
Let's describe the data:
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(10,10))
_ = sns.heatmap(df.corr(), cmap='coolwarm', annot=True)
"""
Explanation: No missing data. Only 303 rows of data, though, so we aren't working with a huge dataset.
Now we'll dig deeper into the data in a few of the columns. If you were working with a dataset for a real-world model you would want to explore each column.
We'll start by mapping out the correlations in the data.
End of explanation
"""
df['sex'].hist()
"""
Explanation: There are no obviously strong correlations.
Let's now see how balanced our data is by sex:
End of explanation
"""
df['target'].hist()
"""
Explanation: In this data female maps to 0 and male maps to 1, so there are over twice as many men in the dataset.
Let's also check out the target.
End of explanation
"""
df['age'].hist()
"""
Explanation: In this case the dataset looks more balanced.
And finally we'll look at age.
End of explanation
"""
FEATURES = df.columns.values[:-1]
TARGET = df.columns.values[-1]
FEATURES, TARGET
"""
Explanation: The dataset seems to be pretty heavily skewed toward individuals in their 50s and 60s.
There isn't a lot of actionable information from our analysis. We might want to stratify our data by sex when we train and test our model, but there are no data repairs that seem to need to be done.
If you were building this model for a real world application, you would also want to ensure that the values in the numeric columns are realistic.
The Model
Let's build and train our model. We'll build a deep neural network that takes our input features and returns a 0 if it predicts that the patient doesn't have heart disease and a 1 if it predicts that the patient does have heart disease.
First let's create a list of features to make coding easier.
End of explanation
"""
df.loc[:, FEATURES] = ((df[FEATURES] - df[FEATURES].min()) / (df[FEATURES].max() - df[FEATURES].min()))
df.describe()
"""
Explanation: We'll also want to normalize our feature data before feeding it into the model.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_validate, y_train, y_validate = train_test_split(
df[FEATURES], df[TARGET], test_size=0.2, stratify=df['sex'])
X_train.shape, X_validate.shape
"""
Explanation: We can also now split off a validation set from our data. Since we have so many more men than women in this dataset, we will stratify on sex.
End of explanation
"""
import tensorflow as tf
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation=tf.nn.relu,
input_shape=(FEATURES.size,)),
tf.keras.layers.Dense(32, activation=tf.nn.relu),
tf.keras.layers.Dense(16, activation=tf.nn.relu),
tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)
])
model.summary()
"""
Explanation: We'll use the TensorFlow Keras Sequential model. The input size needs to be equal to the number of input features that we have. The output size needs to be 1 since we are predicting a yes/no value. The number and width of layers in between are an area for experimentation, as are the activation functions.
We start with an initial hidden layer 64 nodes wide and funnel down to 32, 16, and finally, to the output layer of 1 node.
End of explanation
"""
model.compile(
loss='binary_crossentropy',
optimizer='Adam',
metrics=['accuracy']
)
"""
Explanation: We can now compile the model. We use 'binary_crossentropy' loss since this is a binary classification model.
End of explanation
"""
history = model.fit(X_train, y_train, epochs=500, verbose=0)
history.history['accuracy'][-1]
"""
Explanation: And finally, we can actually fit the model. We'll start with a run of 500 training epochs. Once we are done, we'll print out the final accuracy the model achieved.
End of explanation
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(16,5))
plt.subplot(1,2,1)
plt.plot(history.history['accuracy'])
plt.title('Training Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train_accuracy'], loc='best')
plt.subplot(1,2,2)
plt.plot(history.history['loss'])
plt.title('Training Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train_loss'], loc='best')
"""
Explanation: We got perfect accuracy in our model. Let's see how the accuracy improves and the loss is reduced over epochs.
End of explanation
"""
predictions = model.predict(X_validate)
predictions[:10]
"""
Explanation: We seem to have kept training this model far too long. The accuracy reaches perfection, and the loss moves to 0.0 after a few hundred epochs.
Let's see if we overfit by using our validation holdout data. In order to do that, we need to convert our predictions back into a binary representation.
End of explanation
"""
import matplotlib.pyplot as plt
_ = plt.hist(predictions)
"""
Explanation: As you can see, our predictions are continuous numbers, not the 1 or 0 values that we expected. These values are confidences that the value is 1. Let's look at them in a histogram.
End of explanation
"""
predictions = [round(x[0]) for x in predictions]
_ = plt.hist(predictions)
"""
Explanation: Here we can see that the model is highly confident yes or no in many cases, but there are some cases where the model was unsure.
How do we convert these confidences into a yes/no decision?
One way is to simply round:
End of explanation
"""
from sklearn.metrics import accuracy_score
accuracy_score(y_validate, predictions)
"""
Explanation: This puts the cut-off threshold for a yes/no decision at 0.5. Let's think about the implications of this.
Also note that the choice of a sigmoid activation function was no coincidence: we wanted an activation function that keeps the output values between 0.0 and 1.0, so rounding maps them cleanly to 0 or 1.
Now let's check our accuracy.
End of explanation
"""
# Your Code Goes Here
"""
Explanation: When we ran this model, our score was in the low 80s, which is not great. Yours is likely similar.
Exercise 2: Adjusting the Threshold
Question 1
We decided to round for our classification, which puts the threshold for the decision at 0.5. This decision was made somewhat arbitrarily. Let's think about our problem space a bit more. We are making a model that predicts if an individual has heart disease. Would it be better if we set the threshold for predicting heart disease higher or lower than 0.5? Or is 0.5 okay? Explain your reasoning.
Student Solution
Your solution goes here
Question 2
Write code to make yes/no predictions using a higher or lower threshold based on the argument you made in the first question of this exercise. If you chose to keep the threshold at 0.5, then just pick higher or lower and write the code for that. Print out the accuracy for the new threshold.
Student Solution
End of explanation
"""
# Your code goes here
"""
Explanation: Exercise 3: Early Stopping
Five hundred epochs turned out to be a bit too many. Use the EarlyStopping class to stop when the loss doesn't improve over the course of five epochs. Print your accuracy score so you can see if it stayed reasonably close to your earlier model. Be sure to also make model fitting verbosity 1 or 2 so you can see at which epoch your model stopped.
Student Solution
End of explanation
"""
|
axm108/Rydberg | model_fitting/rabi/rabi_fit.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
from scipy.optimize import curve_fit
%matplotlib inline
"""
Explanation: Rabi model fitting
End of explanation
"""
def rabiModel(time, rabiFreq, T1, Tdec, phi, a0, a1, a2, detuning=0.0):
    phi_rad = phi*(np.pi/180)  # phi is supplied in degrees; convert to radians
    W = np.sqrt(rabiFreq**2 + detuning**2)
    ampl = rabiFreq**2 / W**2
    osci = np.cos(2*np.pi*W*time + phi_rad)
    expo = np.exp(-time/T1)
    decay = np.exp(-time/Tdec)
    return a0 + a1*ampl*(1 - (expo * osci)) + a2 * (1 - decay)
"""
Explanation: The model
According to the Rabi model, the probability of being in the excited state, $p_e$ is given by,
\begin{equation}
P_e = a_0 + a_1 \frac{\Omega^2}{W^2} \left[ 1 - \left( e^{-\frac{t}{T_1}} \cos \left( 2\pi Wt + \phi \right) \right) \right] + a_2 \left( 1 - e^{-\frac{t}{T_{decay}}} \right)
\end{equation}
where $W = \sqrt{\Omega^2 + \delta^2}$, $\Omega$ is the Rabi frequency, $\delta$ is the detuning, $T_1$ is the relaxational coherence time, and $t$ is time.
This gives a form which looks like the plot below.
End of explanation
"""
time_start = 0.0
time_end = 7.0
time_steps = 1000
time_fit = np.linspace(time_start,time_end,time_steps)
rabiFreq = 3
T1 = 2
Tdec = 20
phi = -65
a0 = 0
a1 = 0.5
a2 = -1.0
p_e_fit = rabiModel(time_fit, rabiFreq, T1, Tdec, phi, a0, a1, a2, detuning=0)
plt.plot(time_fit, p_e_fit, 'g-', label='$P_e (\Delta = 0)$')
plt.plot(time_fit, np.exp(-time_fit/T1), 'r--', label='$e^{-t/T_1}$')
plt.xlabel("Time, ($\mu s$)")
plt.ylabel("Prob. excited state, $P_e$")
plt.title("Rabi model")
plt.legend()
plt.ylim([0.0, 1.0])
plt.grid()
"""
Explanation: Plot example model
End of explanation
"""
d0,d1,d2,d3,d4,d5,d6,d7,d8,d9 = np.loadtxt('SR080317_026.dat',delimiter="\t",unpack=True)
time_exp = d1*1e6
p_e_exp = d4-min(d4)
p_e_exp = p_e_exp/max(p_e_exp)
cropNum = 170
time_exp = time_exp[1:len(time_exp)-cropNum]
p_e_exp = p_e_exp[1:len(p_e_exp)-cropNum]
time_start = 0.0
time_end = 6.0
time_steps = 1000
time_fit = np.linspace(time_start,time_end,time_steps)
# Initial guess
rabiFreq = 3
T1 = 2
Tdec = 20
phi = -65
a0 = 0
a1 = 0.5
a2 = -1.0
p_e_fit = rabiModel(time_fit, rabiFreq, T1, Tdec, phi, a0, a1, a2, detuning=0)
plt.plot(time_fit, p_e_fit, 'g-', label='$P_e (\Delta = 0)$')
plt.plot(time_exp,p_e_exp, 'b-', label='data')
plt.xlabel("Time, ($\mu s$)")
plt.ylabel("Prob. excited state, $P_e$")
plt.title("Initial guess")
plt.legend()
plt.ylim([0.0, 1.0])
plt.grid()
"""
Explanation: Plot data and initial guess
End of explanation
"""
guess = [rabiFreq, T1, Tdec, phi, a0, a1, a2]
popt,pcov = curve_fit(rabiModel, time_exp, p_e_exp, p0=guess)
perr = np.sqrt(np.diag(pcov))
params = ['rabiFreq', 'T1', 'Tdec', 'phi', 'a0', 'a1', 'a2']
for idx in range(len(params)):
print( "The fitted value of ", params[idx], " is ", popt[idx], " with error ", perr[idx] )
p_e_fit = rabiModel(time_fit,*popt)
plt.plot(time_fit, p_e_fit, 'g-', label='$P_e (\Delta = 0)$')
plt.plot(time_exp,p_e_exp, 'b-', label='data')
plt.xlabel("Time, ($\mu s$)")
plt.ylabel("Prob. excited state, $P_e$")
plt.title("Rabi model fit")
plt.legend()
plt.ylim([0.0, 1.0])
plt.grid()
"""
Explanation: Curve fitting
End of explanation
"""
|
transientskp/notebooks | trap movie.ipynb | mit | import matplotlib
# remove this inline statement to stop the previews in the notebook
%matplotlib inline
#matplotlib.use('Agg')
import logging
from tkp.db.model import Image, Extractedsource
from tkp.db import Database
from pymongo import MongoClient
from gridfs import GridFS
from astropy.io import fits
from astropy import log as astrolog
from matplotlib import pyplot
from aplpy import FITSFigure
import os
"""
Explanation: Extract all images from a TraP dataset with extracted sources as overlay
When executed it will try to download all images from the dataset, plot them with APLpy, overlay the extracted sources, and write the result as a set of PNG files inside the configured output folder. You can copy the database and image cache settings from your pipeline config.
For this you need:
pip install astropy APLpy pymongo matplotlib
and install TKP 3.0 (not released yet)
End of explanation
"""
# colors for the extracted types
# 0: blind fit, 1: forced fit, 2: manual monitoring
source_colors = ['red', 'lightgreen', 'cyan']
loglevel = logging.WARNING # Set to INFO to see queries, otherwise WARNING
image_size = 10
output_folder = 'output'
"""
Explanation: general settings
End of explanation
"""
engine = 'postgresql'
host = 'localhost'
port = 5432
user = 'gijs'
password = 'gijs'
database = 'gijs'
dataset_id = 2
"""
Explanation: Database settings
End of explanation
"""
mongo_host = "localhost"
mongo_port = 27017
mongo_db = "tkp"
"""
Explanation: Image cache settings
End of explanation
"""
# configure logging
logging.getLogger('sqlalchemy.engine').setLevel(loglevel)
astrolog.setLevel(loglevel)
# make output folder if it doesn't exist
if not os.access(output_folder, os.X_OK):
os.mkdir(output_folder)
# connect to the databases
db = Database(engine=engine, host=host, port=port, user=user,
password=password, database=database)
db.connect()
session = db.Session()
mongo_connection = MongoClient(mongo_host, mongo_port)
gfs = GridFS(mongo_connection[mongo_db])
# get all images from the database that belong to a dataset, sorted by start time
images = session.query(Image).filter(Image.dataset_id==dataset_id).order_by(Image.taustart_ts).all()
# open the files
fitss = [fits.open(gfs.get_last_version(i.url)) for i in images]
# flag each HDUList as open so the plotting code below can keep using it
for f in fitss:
    f.closed = False
# get the sources for all images
sourcess = [session.query(Extractedsource).filter(Extractedsource.image==image).all() for image in images]
combined = zip(images, fitss, sourcess)
# 'fitsfile' avoids shadowing the astropy.io.fits module imported above
for index, (image, fitsfile, sources) in enumerate(combined):
    fig = pyplot.figure(figsize=(image_size, image_size))
    plot = FITSFigure(fitsfile, subplot=[0, 0, 1, 1], figure=fig)
# so here you can tweak the scale if you want, maybe change contrast or color schema
#
# http://aplpy.readthedocs.org/en/stable/normalize.html
#
plot.show_grayscale(stretch='sqrt')
#plot.show_colorscale(stretch='sqrt')
# you probably don't want to change this
plot.axis_labels.hide()
plot.tick_labels.hide()
plot.ticks.hide()
ra = [source.ra for source in sources]
dec = [source.decl for source in sources]
semimajor = [source.semimajor / 900 for source in sources]
semiminor = [source.semiminor / 900 for source in sources]
pa = [source.pa + 90 for source in sources]
# this adds the extracted sources, you can configure the colors with the settings above
color = [source_colors[source.extract_type] for source in sources]
plot.show_ellipses(ra, dec, semimajor, semiminor, pa, facecolor='none',
edgecolor=color, linewidth=2)
# you can change the bottom text here
plot.add_label(.23, .02, image.url.split('/')[-1], relative=True, color='white')
plot.save(os.path.join(output_folder, str(index) + '.png'))
"""
Explanation: Now let's get funky
End of explanation
"""
|
psychemedia/ou-robotics-vrep | robotVM/notebooks/Demo - Square N - Functions.ipynb | apache-2.0 | import time
def myFunction():
print("Hello...")
#Pause awhile...
time.sleep(2)
print("...world!")
#call the function - note the brackets!
myFunction()
"""
Explanation: Traverse a Square - Part N - Functions
This gets tricky because of variable scoping, so we need to think carefully about how variables are shared between the functions and the main program.
In the previous notebook on this topic, we described how to use a loop that runs the same block of code multiple times, so that we could avoid repeating ourselves when constructing a program to drive a mobile robot along a square-shaped trajectory.
One possible form of the program was as follows - note the use of variables to specify several parameter values:
```python
import time
side_speed=2
side_length_time=1
turn_speed=1.8
turn_time=0.45
number_of_sides=4
for side in range(number_of_sides):
#side
robot.move_forward(side_speed)
time.sleep(side_length_time)
#turn
robot.rotate_left(turn_speed)
time.sleep(turn_time)
```
Looking at the program, we have grouped the lines of code inside the loop into two separate meaningful groups:
one group of lines to move the robot in a straight line along one side of the square;
one group of lines to turn the robot through ninety degrees.
We can further abstract the program into one in which we define some custom functions that can be called by name and that will execute a code block captured within the function definition.
Here's an example:
End of explanation
"""
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
%run 'vrep_models/PioneerP3DX.ipynb'
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
#Your code - using functions - here
"""
Explanation: The function definition takes the following, minimal form:
python
def NAME_OF_FUNCTION():
#Code block - there must be at least one line of code
#That said, we can use a null (do nothing) statement
pass
Set up the notebook to use the simulator and see if you can think of a way to use functions to call the lines of code that control the robot.
The function definitions should appear before the loop.
End of explanation
"""
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
side_speed=2
side_length_time=1
turn_speed=1.8
turn_time=0.45
number_of_sides=4
def traverse_side():
    #side
    robot.move_forward(side_speed)
    time.sleep(side_length_time)

def turn():
    #turn
    robot.rotate_left(turn_speed)
    time.sleep(turn_time)

for side in range(number_of_sides):
    traverse_side()
    turn()
"""
Explanation: How did you get on? Could you work out how to use the functions?
Here's how I used them:
End of explanation
"""
|