loujine/musicbrainz-dataviz | 0-introduction.ipynb | mit
%load_ext watermark
%watermark --python -r
%watermark --date --updated
"""
Explanation: Visualizing MusicBrainz data with Python/JS, an introduction
This introductory notebook will explain how I get the database from MusicBrainz and how I transform it into Python formats for display in tables or plots.
A static HTML version of this notebook and the next ones should be available on github.io.
Prerequisites: having PostgreSQL to store the database (or being able to create virtual machines that will run PostgreSQL). I will use Python to manipulate the data but you can probably do the same in other languages. I will not go into details on how I build the SQL queries to fetch the data, you will need to look into the MusicBrainz schema if you try something too different from my examples.
Getting the MusicBrainz data
The first step is to get a local copy of the MusicBrainz database in order to make direct queries to it without going through the website or webservice (which doesn't give the possibility to write complex queries).
The raw data itself is available for download and the files are updated twice a week. As of early 2017 the database zipped files to download are close to 2.5Gb.
Several possibilities exist to build the database locally, using the raw data above. I'm only explaining the basics here:
if you already have or can have PostgreSQL installed (MusicBrainz uses version 9.5 for the moment) on your machine, you can use the mbslave project that will recreate the database structure on your machine. You will also be able to synchronise your database and fetch the latest changes when you want.
another possibility is to use virtual machines to store the database and create a local copy of the website also (this is not required for what I intend to show here). I'm using the musicbrainz-docker project that uses Docker to create several machines for the different MusicBrainz components (database, website, search)
In both cases you should expect to download several Gb of data and to need several Gb of RAM to keep the PostgreSQL database running smoothly.
Customize the database
Note: this step is again absolutely not required. It also greatly increases the space you need to run the database (the new dump you need to download is 4Gb large).
In my case, I want to explore metadata about the data modifications, i.e. the edits performed by MusicBrainz contributors. In order to do so I also had to download the mbdump-edit.tar.bz2 and mbdump-editor.tar.bz2 files and add them to the local database build process (I did that by patching the createdb.sh script in musicbrainz-docker).
Python toolbox
For data analysis I will use Python3 libraries:
- pandas for manipulating data as tables
- psycopg2 and sqlalchemy to access the SQL database
- plotly for plots
End of explanation
"""
import os
import psycopg2
# define global variables to store our DB credentials
PGHOST = 'localhost'
PGDATABASE = os.environ.get('PGDATABASE', 'musicbrainz')
PGUSER = os.environ.get('PGUSER', 'musicbrainz')
PGPASSWORD = os.environ.get('PGPASSWORD', 'musicbrainz')
"""
Explanation: Accessing the database from Python
Once the local database is set I can access it using e.g. Python with the psycopg2 library to perform SQL queries. Let's try a simple query.
With musicbrainz-docker, my database is on a virtual machine. I can access it from my main machine by setting the following parameters:
End of explanation
"""
sql_beethoven = """
SELECT gid, name, begin_date_year, end_date_year
FROM artist
WHERE name='Ludwig van Beethoven'
"""
"""
Explanation: Of course your parameters (especially IP) might be different from mine.
In order to simplify this procedure I developed a new branch in the musicbrainz-docker project that creates a Jupyter VM. If you use this branch, you don't need to set the parameters above; they are set when you start your notebook.
We need to define a SQL query as a Python string that psycopg2 will send to our database:
End of explanation
"""
with psycopg2.connect(host=PGHOST, database=PGDATABASE,
user=PGUSER, password=PGPASSWORD) as cnx:
crs = cnx.cursor()
crs.execute(sql_beethoven)
for result in crs:
print(result)
"""
Explanation: Let's apply our query
End of explanation
"""
# pandas SQL queries require an sqlalchemy engine object
# rather than the direct psycopg2 connection
import sqlalchemy
import pandas
engine = sqlalchemy.create_engine(
'postgresql+psycopg2://{PGUSER}:{PGPASSWORD}@{PGHOST}/{PGDATABASE}'.format(**locals()),
isolation_level='READ UNCOMMITTED'
)
pandas.read_sql(sql_beethoven, engine)
"""
Explanation: We got one result! So that means the correct Ludwig van Beethoven (1770-1827) exists in the MusicBrainz database. I also extracted his MBID (unique identifier) so that you can check that Beethoven's page is available on the main MusicBrainz server.
If you only want to manipulate basic data as Python strings and numbers, that's all you need, and you can start writing other queries.
But in my case I want to do more complex things with the data, so I will use another Python library that will help me manipulate and plot it. I'm going to use pandas for that.
Using pandas to manipulate data
The pandas library allows manipulation of complex data in Python as Series or DataFrames. It also integrates some of the matplotlib plotting capabilities directly on DataFrame objects. Let's do the same query as earlier using pandas:
End of explanation
"""
phoebe-project/phoebe2-docs | 2.0/examples/minimal_synthetic.ipynb | gpl-3.0
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
"""
Explanation: Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,201), dataset='mylc')
"""
Explanation: Adding Datasets
Now we'll create an empty lc dataset:
End of explanation
"""
b.run_compute(irrad_method='none')
"""
Explanation: Running Compute
Now we'll compute synthetics at the times provided, using the default options.
End of explanation
"""
axs, artists = b['mylc@model'].plot()
axs, artists = b['mylc@model'].plot(x='phases')
"""
Explanation: Plotting
Now we can simply plot the resulting synthetic light curve.
End of explanation
"""
h-mayorquin/time_series_basic | presentations/.ipynb_checkpoints/2015-august-checkpoint.ipynb | bsd-3-clause
# Scientific Python libraries
import numpy as np
import matplotlib.pyplot as plt
import mpld3
import seaborn as sn
mpld3.enable_notebook()
import sys
sys.path.append("../")
# Nexa in-house libraries
from signals.time_series_class import MixAr
from signals.aux_functions import sidekick
from input.sensors import PerceptualSpace, Sensor
from nexa.nexa import Nexa
"""
Explanation: Nexa And Time Series
So this is a brief overview of the work with time series so far. The first thing that we have to do is to import the classical libraries. We also do a little trick to work with the libraries in the directory above.
Main Libraries
End of explanation
"""
dt = 0.1
Tmax = 100
"""
Explanation: Now we have a couple of imports here. I will explain what each library does:
Signals
This is the module for time series. In this case I am importing a class that allows us to build an autoregressive process (MixAr) that can be mixed in space with a simpler series (sidekick), as Pawell suggested.
Input
This module takes care of the input side of Nexa, that is, a group of sensors. Here I created some classes that allow easy organization and handling of data with a time dimension. In particular, the Sensor class represents a single receptive unit that reads a time series from the exterior, whereas PerceptualSpace allows us to deal with a group of them and their interactions.
Nexa
Finally, Nexa. Building on Benjaminsson's previous work, I implemented (in far simpler terms; there is still testing and optimization to be done) a Nexa framework. The Nexa object contains a perceptual space, which represents, as stated before, a group of sensors with information in time. The object contains all the operations that allow the creation of vector codes from the ground up:
Formation of a Spatio-Temporal Distance Matrix (STDM) that captures the cross-correlations of a perceptual space.
Clustering / Vector quantization in the vector space
Index creation. That is, utilities to transform the data from the whole preceptual space to the particular set of indexes of a cluster and the other way around.
Clustering / Vector quantization in the data / time space.
Code creation.
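A rough sketch of the first step, turning correlations between lagged copies of a signal into a distance matrix (illustrative only, not the actual Nexa implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)
nlags = 3

# One row per lagged copy of the signal (a stand-in for the (sensor, lag) pairs)
lagged = np.array([signal[lag:lag + 900] for lag in range(nlags)])

# Correlation matrix between all lagged copies, turned into a distance:
# identical signals map to 0, perfectly anticorrelated ones to 2
distance = 1.0 - np.corrcoef(lagged)
print(distance.shape)
```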
Program Execution and Workflow
So first we declare and discuss the parameters and setup required for a run of Nexa. We declare the time resolution of the system and the total amount of time that our system will be simulated. In a real data analysis task this would be determined by the domain of the problem, but given that we are in the development, toy-example phase, we determine those quantities ourselves.
End of explanation
"""
# Let's get the sideckick function
amplitude = 1
w1 = 1
w2 = 5
beta = sidekick(w1, w2, dt, Tmax, amplitude)
# Now we will get the AR proccess mixed with the sidekick
# First we need the phi's vector
phi0 = 0.0
phi1 = -0.8
phi2 = 0.3
phi = np.array((phi0, phi1, phi2))
# Now we need the initial conditions
x0 = 1
x1 = 1
x2 = 0
initial_conditions = np.array((x0, x1, x2))
# Second we construct the series with the mix
A = MixAr(phi, dt=dt, Tmax=Tmax, beta=beta)
A.initial_conditions(initial_conditions)
mix_series = A.construct_series()
# mix_series = beta
time = A.time
"""
Explanation: Time series to analyze
Now we input the necessary setup for our time series. We present the code here and explain it below, together with a visualization of both series.
End of explanation
"""
%matplotlib inline
plt.plot(time, beta)
plt.show()
"""
Explanation: First we describe the sidekick function; it is specified by two frequencies and the amplitude. Under the hood it is simply a mix of two sine waves with the given frequencies. We visualize it below.
End of explanation
"""
plt.plot(time, mix_series)
"""
Explanation: Now we will visualize the autoregressive process, which is a little bit more complicated. In order to specify an autoregressive process we need as many initial conditions as the order of the process. In concrete terms, our AR is:
$$x(t) = \phi_0 + x(t - 1) * \phi_1 + x(t - 2) * \phi_2 $$
It is easy to imagine how to generalize this to any order. Now, the particularity that we introduce is to also add a spatial term to this equation.
$$x(t) = \phi_0 + x(t - 1) * \phi_1 + x(t - 2) * \phi_2 + \beta(t)$$
Where beta is our sidekick function.
Our AR class therefore takes in its constructor three initial conditions and the corresponding values of phi. We show the plot below, where we see the characteristic behaviour of an AR process.
End of explanation
"""
# Here we will calculate correlations
Nlags = 100
Nspatial_clusters = 2 # Number of spatial clusters
Ntime_clusters = 2 # Number of time clusters
Nembedding = 3 # Dimension of the embedding space
# We create the here perceptual space
aux_sensors = [Sensor(mix_series, dt), Sensor(beta, dt)]
perceptual_space = PerceptualSpace(aux_sensors, Nlags)
# Now the Nexa object
nexa_object = Nexa(perceptual_space, Nlags, Nspatial_clusters,
Ntime_clusters, Nembedding)
"""
Explanation: Nexa workflow
Now we present the Nexa workflow, but first we need to initialize a couple of parameters and the setup.
End of explanation
"""
# Calculate all the quantities
nexa_object.calculate_all()
"""
Explanation: We execute the whole Nexa workflow with a single routine.
End of explanation
"""
# Build the code vectors
code_vectors = nexa_object.build_code_vectors()
"""
Explanation: I decided to implement the routine that calculates the code vectors separately, however (discuss this!)
End of explanation
"""
from visualization.sensor_clustering import visualize_cluster_matrix
from visualization.sensors import visualize_SLM
from visualization.sensors import visualize_STDM_seaborn
from visualization.time_cluster import visualize_time_cluster_matrix
from visualization.code_vectors import visualize_code_vectors
"""
Explanation: Visualization
Now, in order to discuss this in more detail and show how the whole process looks at the graph level, I present the plots.
First we import all the required libraries.
End of explanation
"""
%matplotlib inline
fig = visualize_SLM(nexa_object)
plt.show(fig)
"""
Explanation: Visualize SLM
First we present the plot of the Sensor Lagged Matrix, which simply represents the sensors in our system and all the possible lags up to the Nlags quantity, in order to show the overall structure of the time series.
End of explanation
"""
%matplotlib qt
# fig = visualize_STDM(nexa_object)
fig = visualize_STDM_seaborn(nexa_object)
plt.show(fig)
"""
Explanation: Visualize STDM (Spatio Temporal Distance Matrix)
Now we get the usual correlation matrix between the data, with the novelty that we also calculate the correlation between all the possible pairs of lags and sensors.
End of explanation
"""
%matplotlib inline
fig = visualize_cluster_matrix(nexa_object)
"""
Explanation: Visualize of Sensor Clusterings
Now we show how the lagged sensors cluster
End of explanation
"""
%matplotlib inline
cluster = 0
time_center = 1
fig = visualize_time_cluster_matrix(nexa_object, cluster, time_center,
cmap='coolwarm', inter='none',
origin='upper', fontsize=16)
"""
Explanation: Visualize the time cluster
This one is a little bit more tricky. Here we take one of the centers (in this case the second center of the first cluster) and show how that center (code vector) looks. So here we have a center of the first cluster, in other words.
End of explanation
"""
%matplotlib inline
fig = visualize_code_vectors(code_vectors)
"""
Explanation: Visualize the Code Vectors
Here we visualize the code vectors. We show the different clusters in the sensor space as different cells, and as different colors the particular code vector that encodes the signal at each particular moment in time.
End of explanation
"""
np.corrcoef(code_vectors, rowvar=0)
"""
Explanation: Statistics of the Code Vectors
Now we calculate the correlation between the two clusters. We expect them to have a very low correlation coefficient.
End of explanation
"""
ebridge2/FNGS_website | sic/sic_ndmg.ipynb | apache-2.0
%%bash
ndmg_demo-func
"""
Explanation: SIC for NDMG Pipeline
The NDMG pipeline estimates connectomes from M3r (multi-modal MRI) scans. The NDMG pipeline is designed to operate with:
NDMG-d
+ 1xdiffusion-weighted image (DWI) for a particular subject.
+ 1xbval file giving the magnitude of diffusion vectors in the DWI.
+ 1xbvec file giving the direction of diffusion vectors in the DWI.
+ the corresponding T1w anatomical reference for the same subject.
NDMG-f
+ 1xfunctional magnetic-resonance image (fMRI) for a particular subject.
+ the slice-timing encoding for the fMRI scan.
+ the corresponding T1w anatomical reference for the same subject.
Connectome Estimation
The FNGS pipeline must first translate our raw M3r images into connectomes. Pressing "run" on the cell below will call the ndmg demo scripts.
Figure 1: The workflow for the NDMG Pipeline.
If you get "process interrupted" as an output for any step of this notebook, refresh the page and start again from the top; this is due to the server rebooting, which it is scheduled to do every few hours.
NDMG-F Tutorial
Now that we are acquainted with the basics of the pipeline, feel free to click the cell below, and click the "play" button in the bar at the top. Alternatively, press shift + Enter from inside of the cell:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
ts = np.load('/tmp/ndmg_demo/outputs/func/roi-timeseries/desikan-res-4x4x4/sub-0025864_ses-1_bold_desikan-res-4x4x4_variant-mean_timeseries.npz')['roi']
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(ts)
ax.set_xlabel('TR')
ax.set_ylabel('Intensity')
ax.set_title('Subject 0025864 session 1 ROI timeseries')
fig.show()
"""
Explanation: To estimate our functional connectomes, we leverage many tools along the way, notably:
NDMG-f Tools
Step | Tool(s) leveraged
-----------------|-------------------------
Preprocessing | mcflirt (FSL), slicetimer (FSL), 3dSkullStrip (AFNI)
Registration | FLIRT (FSL), FLIRT-bbr (FSL), FNIRT (FSL), MNI 152 Template (MNI)
Nuisance | Neurodata Code (Neurodata)
Timeseries Extraction/Connectome Estimation | Parcellation Atlases
Figure 2: The tools leveraged by the NDMG-f Pipeline.
View Timeseries Estimation Results
Your fMRI timeseries can be viewed with the following code. This demo is on very small data, so it is not necessarily guaranteed to produce high-quality outputs as it was given drastically simpler data than most neuroscience applications, but it gives users a feel for the overall workflow. Click in the block, and press the "play" button at the top to execute it.
End of explanation
"""
import networkx as nx
g = nx.read_gpickle('/tmp/ndmg_demo/outputs/func/connectomes/desikan-res-4x4x4/sub-0025864_ses-1_bold_desikan-res-4x4x4_measure-correlation.gpickle')
mtx = nx.to_numpy_matrix(g)
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111)
cax = ax.imshow(mtx, interpolation='None')
ax.set_xlabel('ROI')
ax.set_ylabel('ROI')
ax.set_title('Subject 0025864 session 1 Functional Connectome')
fig.colorbar(cax)
fig.show()
"""
Explanation: View Connectome Estimation Results
You can view your functional connectome with the following code. Again, click anywhere in the block and press the "play" button to execute it.
End of explanation
"""
%%bash
ndmg_bids /tmp/ndmg_demo/outputs/func/connectomes /tmp/ndmg_demo/outputs/func/group group func --atlas desikan-res-4x4x4
cp /tmp/ndmg_demo/outputs/func/group/connectomes/desikan-res-4x4x4/plot.html ./qc_desikan_fmri_plot.html
"""
Explanation: Group Level Summary Statistics
End of explanation
"""
%%bash
ndmg_demo-dwi
"""
Explanation: View Summary Statistics
The NDMG-f pipeline produces a plot which tells you about your functional connectomes.
Click this link to view the result!
NDMG-d Tutorial
The cell below will similarly run the DWI pipeline demo:
End of explanation
"""
%matplotlib inline
g = nx.read_gpickle('/tmp/ndmg_demo/outputs/graphs/desikan-res-4x4x4/sub-0025864_ses-1_dwi_desikan-res-4x4x4.gpickle')
mtx = nx.to_numpy_matrix(g)
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111)
cax = ax.imshow(mtx, interpolation='None')
ax.set_xlabel('ROI')
ax.set_ylabel('ROI')
ax.set_title('Subject 0025864 session 1 Diffusion Connectome')
fig.colorbar(cax)
fig.show()
"""
Explanation: NDMG-d Tools
Step | Tool(s) leveraged
-----------------|-------------------------
Registration | FLIRT (FSL), MNI 152 Template (MNI)
Tensor Estimation and Tractography | Dipy, MNI152 Template (MNI)
Connectome Estimation | Parcellation Atlases
Figure 3: The tools leveraged by the NDMG-d Pipeline.
View Connectome Estimation Results
You can view your structural connectome with the following code. Again, click anywhere in the block and press the "play" button to execute it.
End of explanation
"""
%%bash
ndmg_bids /tmp/ndmg_demo/outputs/graphs /tmp/ndmg_demo/outputs/dwi/group group dwi
cp /tmp/ndmg_demo/outputs/dwi/group/connectomes/desikan-res-4x4x4/plot.html ./qc_desikan_dwi_plot.html
"""
Explanation: Group Level Analysis
The FNGS pipeline produces numerous statistical reports and quality assurance summaries, both qualitative and quantitative, at both the subject-specific and group level. The subject-level qa will be alongside your local outputs in the qa folder. To generate the group-level qa, we can use the following command:
End of explanation
"""
cypherai/PySyft | notebooks/Syft - Testing - Benchmark Tests.ipynb | apache-2.0
from syft.test.benchmark import Benchmark
Benchmark(str)
"""
Explanation: Testing: Benchmark Tests
One goal of the OpenMined project is to efficiently train Deep Learning models in a homomorphically encrypted state. Therefore it is very important to benchmark new and existing features in order to achieve better and faster implementations.
Installation
Simply run pip install -r test-requirements.txt instead of the regular requirements.txt to get all testing tools.
Usage
Before using the Benchmark Testing Suite, you have to import it from syft.test.benchmark. After that, you can pass in the function which needs benchmark testing.
End of explanation
"""
import time
def wait_a_second(seconds=3): # Define a function for testing or use an existing one
time.sleep(seconds)
# Call function without params
exec_time = Benchmark(wait_a_second).exec_time()
print("EXECUTION TIME: {} SECONDS".format(exec_time))
"""
Explanation: Compute a function's execution time
The exec_time method is a very basic tool to calculate a function's execution time. The method can be used as follows.
End of explanation
"""
# Call functions with params
exec_time = Benchmark(wait_a_second, seconds=1).exec_time() # Pass function and params to be tested into class
print("(2) EXECUTION TIME: {} SECONDS".format(exec_time))
"""
Explanation: So as we see, the function returns the execution time of the function in seconds. Additional params can be added to the function call as follows.
End of explanation
"""
def some_function(count):
a = 6*8
b = 6**3
c = a + b
x = [a, b, c]
for i in range(count):
a += x[0] * i + x[1] + x [2]
Benchmark(some_function, count=5).profile_lines()
"""
Explanation: Profile Execution Times Per Line
It is possible to get the execution time per line using the profile_lines() method.
End of explanation
"""
resendislab/cobrame-docker | getting_started.ipynb | apache-2.0
import pickle
with open("me_models/iLE1678.pickle", "rb") as model_file:
ecoli = pickle.load(model_file)
"""
Explanation: Building and solving the E. coli ME model
The image includes the COBRAme and ECOLIme Python packages to get you started quickly. The Docker image includes a prebuilt version of the E. coli ME model iLE1678 at me_models/iLE1678.pickle.
If you need more info about the construction process you can find it in the build_ME_model.ipynb notebook.
End of explanation
"""
print(ecoli)
print("Reactions:", len(ecoli.reactions))
print("Metabolites:", len(ecoli.metabolites))
"""
Explanation: This will read the saved model into the variable ecoli.
End of explanation
"""
from cobrame.solve.algorithms import binary_search
%time binary_search(ecoli, min_mu=0.1, max_mu=1.0, debug=True, mu_accuracy=1e-2)
"""
Explanation: We can now run the optimization for the model. This will take around 10 minutes.
End of explanation
"""
import escher
view = escher.Builder("iJO1366.Central metabolism")
view.reaction_data = ecoli.get_metabolic_flux()
view.display_in_notebook()
"""
Explanation: If we want to we could also visualize the model fluxes on a map of the E. coli central carbon metabolism obtained from iJO1366.
End of explanation
"""
Ragnamus/sci-comp | notebooks/matthew.truscott.ipynb | mit
from prettytable import PrettyTable
import numpy as np
def root_finder():
a = 1
b = 1
crray = np.geomspace(0.1, (10 ** (-200)), num=200, endpoint=False)
t = PrettyTable(['root1', 'root2', 'root3', 'root4'])
for c in crray:
root = np.sqrt((b * b) - (4 * a * c))
#print(root)
r1 = (-b + root) / (2 * a)
r2 = (-b - root) / (2 * a)
r3 = -2 * c / (b + root)
if b - root == 0:
r4 = 'div0'
else:
r4 = -2 * c / (b - root)
t.add_row([r1, r2, r3, r4])
print(t)
root_finder()
"""
Explanation: 1: Subtractive Cancellation<a id="1"></a>
Consider the quadratic equation
$$\tag*{1}
a x^{2} + b x + c = 0.$$
This has two roots that can be calculated in two different ways
$$\tag*{2}
x_{1,2} = \frac{-b \pm \sqrt{b^{2} -4ac}}{2a}\quad\mbox{or}\quad
x_{1,2}' = \frac{-2c}{b \pm \sqrt{b^{2} -4ac}}.$$
By keeping a and b constant while decreasing the value of c, and investigating the ability of the computer to evaluate the algorithms above, one can see the effect of subtractive cancellation error propagation. The square root and its preceding term almost cancel out, but according to error propagation the error of this value gets large. Consider the fractional error,
$$\tag*{3}
\begin{align}
\frac{r_{c} - r} {r} \simeq \frac{s}{r}\;
\mbox{max}(|\epsilon_{s}|,|\epsilon_{t}|).
\end{align}$$
If r ends up small but s is relatively large, the fractional error is also large. The first equation in (2) divides a small number carrying an error of order 1 by a quantity of order 1, and therefore doesn't suffer too much from the effects of subtractive cancellation, but the second equation involves division of a small number by a smaller number carrying an error of order 1, and therefore will diverge.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
def sum_1(n):
outsum = 0
outarray = np.zeros((n,))
ii = (i for i in range(1, 2*n+1) if i % 2 != 0)
for i in ii:
outsum += ((-1) ** i) * i / (i + 1)
outsum += ((-1) ** (i+1)) * (i+1) / (i+1 + 1)
b = (int((i+1)/2))-1
outarray[b] = outsum
return outarray
def sum_2(n):
outsum = 0
outarray = np.zeros((n,))
for i in range(1, n+1):
outsum -= (2 * i - 1) / (2 * i)
outsum += (2 * i) / (2 * i + 1)
outarray[(i-1)] = outsum
return outarray
def sum_3(n):
outsum = 0
outarray = np.zeros((n,))
for i in range(1, n+1):
outsum += 1 / (2 * i * (2 * i + 1))
outarray[(i-1)] = outsum
return outarray
n = 1000000
method_1 = sum_1(n)
method_2 = sum_2(n)
method_3 = sum_3(n)
error_1 = abs(np.subtract(method_1, method_3))
error_2 = abs(np.subtract(method_2, method_3))
#print(method_1)
#print(method_2)
#print(method_3)
relerror_1 = np.divide(error_1, method_3)
relerror_2 = np.divide(error_2, method_3)
mil = np.arange(1, n+1, 1)
#print(mil)
logerror_1 = np.log10(relerror_1)
logerror_2 = np.log10(relerror_2)
logmil = np.log10(mil)
plt.xlabel('logN')
plt.ylabel('SN')
#plt.plot(logmil, logerror_1, '-b', lw=1)
plt.plot(logmil, logerror_2, '-r', lw=1)
plt.grid(True)
plt.show()
"""
Explanation: The code above shows this effect (compare columns 2 and 4). Notice how the last column diverges. Additionally, the second equation appears to be better at calculating the 'non-cancelling' root, returning a small value rather than zero, because of the nature of error propagation when dividing.
Next consider three ways of representing an expression in terms of sums.
$$\tag*{4}
S_{N}^{(1)} = \sum_{n=1}^{2N} (-1)^{n} \frac{n}{n+1}.$$
$$\tag*{5}
S_{N}^{(2)} = - \sum_{n=1}^{N} \frac{2n-1}{2n} + \sum_{n=1}^{N}
\frac{2n}{2n+1} .$$
$$\tag*{6}
S_{N}^{(3)} = \sum_{n=1}^{N} \frac{1}{2n(2n+1)} .$$
If we assume the last one is correct and calculate how the other two differ, we run into subtractive cancellation issues, causing increasingly large errors for large N.
End of explanation
"""
BrainIntensive/OnlineBrainIntensive | resources/matplotlib/AnatomyOfMatPlotLib/AnatomyOfMatplotlib-Part3-HowToSpeakMPL.ipynb | mit
%load exercises/3.1-colors.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, t, t**2, t, t**3)
plt.show()
"""
Explanation: How to speak "MPL"
In the previous parts, you learned how Matplotlib organizes plot-making by figures and axes. We broke down the components of a basic figure and learned how to create them. You also learned how to add one or more axes to a figure, and how to tie them together. You even learned how to change some of the basic appearances of the axes. Finally, we went over some of the many plotting methods that Matplotlib has to draw on those axes. With all that knowledge, you should be off making great and wonderful figures.
Why are you still here?
"We don't know how to control our plots and figures!" says some random voice in the back of the room.
Of course! While the previous sections may have taught you some of the structure and syntax of Matplotlib, they did not describe much of the substance and vocabulary of the library. This section will go over many of the properties that are used throughout the library. Note that while many of the examples in this section may show one way of setting a particular property, that property may be applicable elsewhere in a completely different context. This is the "language" of Matplotlib.
Colors
This is, perhaps, the most important piece of vocabulary in Matplotlib. Given that Matplotlib is a plotting library, colors are associated with everything that is plotted in your figures. Matplotlib supports a very robust language for specifying colors that should be familiar to a wide variety of users.
Colornames
First, colors can be given as strings. For very basic colors, you can even get away with just a single letter:
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
Other colornames that are allowed are the HTML/CSS colornames such as "burlywood" and "chartreuse". See the full list of the 147 colornames. For the British speakers and poor spellers among us (I am not implying that British speakers are poor spellers!), we allow "grey" wherever "gray" appears in that list of colornames. All of these colornames are case-insensitive.
Hex values
Colors can also be specified by supplying a HTML/CSS hex string, such as '#0000FF' for blue. Support for an optional alpha channel was added for v2.0.
256 Shades of Gray
A gray level can be given instead of a color by passing a string representation of a number between 0 and 1, inclusive. '0.0' is black, while '1.0' is white. '0.75' would be a light shade of gray.
RGB[A] tuples
You may come upon instances where the previous ways of specifying colors do not work. This can sometimes happen in some of the deeper, stranger levels of the library. When all else fails, the universal language of colors for matplotlib is the RGB[A] tuple. This is the "Red", "Green", "Blue", and sometimes "Alpha" tuple of floats in the range of [0, 1]. One means full saturation of that channel, so a red RGBA tuple would be (1.0, 0.0, 0.0, 1.0), whereas a partly transparent green RGBA tuple would be (0.0, 1.0, 0.0, 0.75). The documentation will usually specify whether it accepts RGB or RGBA tuples. Sometimes, a list of tuples would be required for multiple colors, and you can even supply a Nx3 or Nx4 numpy array in such cases.
In functions such as plot() and scatter(), while it may appear that they can take a color specification, what they really need is a "format specification", which includes color as part of the format. Unfortunately, such specifications are string only and so RGB[A] tuples are not supported for such arguments (but you can still pass an RGB[A] tuple for a "color" argument).
Oftentimes there is a separate argument for "alpha" wherever you can specify a color. The value for "alpha" will usually take precedence over the alpha value in the RGBA tuple. There is no easy way around this inconsistency.
Cycle references
With the advent of fancier color cycles coming from the many available styles, users needed a way to reference those colors in the style without explicitly knowing what they are. So, in v2.0, the ability to reference the first 10 iterations of the color cycle was added. Wherever one could specify a color, you can supply a 2-character string of 'C#'. So, 'C0' would be the first color, 'C1' would be the second, and so on and so forth up to 'C9'.
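For example, assuming the default property cycle is active:

```python
import matplotlib.pyplot as plt
from matplotlib.colors import to_rgba

t = range(5)
plt.plot(t, t, color='C0')                   # first color of the cycle
plt.plot(t, [x**2 for x in t], color='C1')   # second color of the cycle

# 'C0' resolves to a concrete RGBA tuple when drawn
print(to_rgba('C0'))
```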
Exercise 3.1
Try out some different string representations of colors (you can't do RGB[A] tuples here).
End of explanation
"""
xs, ys = np.mgrid[:4, 9:0:-1]
markers = [".", "+", ",", "x", "o", "D", "d", "", "8", "s", "p", "*", "|", "_", "h", "H", 0, 4, "<", "3",
1, 5, ">", "4", 2, 6, "^", "2", 3, 7, "v", "1", "None", None, " ", ""]
descripts = ["point", "plus", "pixel", "cross", "circle", "diamond", "thin diamond", "",
"octagon", "square", "pentagon", "star", "vertical bar", "horizontal bar", "hexagon 1", "hexagon 2",
"tick left", "caret left", "triangle left", "tri left", "tick right", "caret right", "triangle right", "tri right",
"tick up", "caret up", "triangle up", "tri up", "tick down", "caret down", "triangle down", "tri down",
"Nothing", "Nothing", "Nothing", "Nothing"]
fig, ax = plt.subplots(1, 1, figsize=(7.5, 4))
for x, y, m, d in zip(xs.T.flat, ys.T.flat, markers, descripts):
ax.scatter(x, y, marker=m, s=100)
ax.text(x + 0.1, y - 0.1, d, size=14)
ax.set_axis_off()
plt.show()
"""
Explanation: Markers
Markers are commonly used in plot() and scatter() plots, but also show up elsewhere. There is a wide set of markers available, and custom markers can even be specified.
marker | description ||marker | description ||marker | description ||marker | description
:----------|:--------------||:---------|:--------------||:---------|:--------------||:---------|:--------------
"." | point ||"+" | plus ||"," | pixel ||"x" | cross
"o" | circle ||"D" | diamond ||"d" | thin_diamond || |
"8" | octagon ||"s" | square ||"p" | pentagon ||"*" | star
"|" | vertical line||"_" | horizontal line ||"h" | hexagon1 ||"H" | hexagon2
0 | tickleft ||4 | caretleft ||"<" | triangle_left ||"3" | tri_left
1 | tickright ||5 | caretright ||">" | triangle_right||"4" | tri_right
2 | tickup ||6 | caretup ||"^" | triangle_up ||"2" | tri_up
3 | tickdown ||7 | caretdown ||"v" | triangle_down ||"1" | tri_down
"None" | nothing ||None | nothing ||" " | nothing ||"" | nothing
End of explanation
"""
%load exercises/3.2-markers.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, 'mo', t, t**2, 'k^', t, t**3, 'c+')  # one possible answer -- try your own format strings
plt.show()
"""
Explanation: Exercise 3.2
Try out some different markers and colors
End of explanation
"""
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, '-', t, t**2, '--', t, t**3, '-.', t, -t, ':')
plt.show()
"""
Explanation: Linestyles
Line styles are about as commonly used as colors. There are a few predefined linestyles available to use. Note that there are some advanced techniques to specify some custom line styles. Here is an example of a custom dash pattern.
linestyle | description
-------------------|------------------------------
'-' | solid
'--' | dashed
'-.' | dashdot
':' | dotted
'None' | draw nothing
' ' | draw nothing
'' | draw nothing
Also, don't mix up ".-" (line with dot markers) and "-." (dash-dot line) when using the plot function!
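As a sketch of the custom dash pattern mentioned above (not one of the original cells), a line accepts a dashes sequence of on/off ink lengths in points:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0.0, 5.0, 0.2)
fig, ax = plt.subplots()
# 10 points on, 2 off, 2 on, 2 off: a long-dash/short-dash pattern
line, = ax.plot(t, t**2, dashes=[10, 2, 2, 2])
```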
End of explanation
"""
fig, ax = plt.subplots(1, 1)
ax.bar([1, 2, 3, 4], [10, 20, 15, 13], ls='dashed', ec='r', lw=5)
plt.show()
"""
Explanation: It is a bit confusing, but the line styles mentioned above are only valid for lines. Whenever you are dealing with the linestyles of the edges of "Patch" objects, you will need to use words instead of the symbols. So "solid" instead of "-", and "dashdot" instead of "-.". This issue will be fixed in the v2.1 release, allowing these specifications to be used interchangeably.
End of explanation
"""
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
"""
Explanation: Plot attributes
With just about any plot you can make, there are many attributes that can be modified to make the lines and markers suit your needs. Note that for many plotting functions, Matplotlib will cycle the colors for each dataset you plot. However, you are free to explicitly state which colors you want used for which plots. For plt.plot(), you can mix the specification for the colors, linestyles, and markers in a single string.
End of explanation
"""
%load exercises/3.3-properties.py
t = np.arange(0.0, 5.0, 0.1)
a = np.exp(-t) * np.cos(2*np.pi*t)
plt.plot(t, a, 'r:', marker='D', markersize=12, markerfacecolor='yellow', markeredgecolor='green')  # one possible answer
plt.show()
"""
Explanation: | Property | Value Type
|------------------------|-------------------------------------------------
|alpha                   | float
|color or c              | any matplotlib color
|dash_capstyle           | ['butt', 'round', 'projecting']
|dash_joinstyle          | ['miter', 'round', 'bevel']
|dashes                  | sequence of on/off ink in points
|drawstyle               | ['default', 'steps', 'steps-pre',
|                        |  'steps-mid', 'steps-post']
|linestyle or ls         | ['-', '--', '-.', ':', 'None', ' ', '']
|                        | and any drawstyle in combination with a
|                        | linestyle, e.g. 'steps--'.
|linewidth or lw         | float value in points
|marker                  | [0, 1, 2, 3, 4, 5, 6, 7, 'o', 'd', 'D', 'h', 'H',
|                        |  '', 'None', ' ', None, '8', 'p', ',',
|                        |  '+', 'x', '.', 's', '*', '_', '|',
|                        |  '1', '2', '3', '4', 'v', '<', '>', '^']
|markeredgecolor or mec  | any matplotlib color
|markeredgewidth or mew  | float value in points
|markerfacecolor or mfc  | any matplotlib color
|markersize or ms        | float
|solid_capstyle          | ['butt', 'round', 'projecting']
|solid_joinstyle         | ['miter', 'round', 'bevel']
|visible                 | [True, False]
|zorder                  | any number
Exercise 3.3
Make a plot that has a dotted red line, with large yellow diamond markers that have a green edge
End of explanation
"""
# %load http://matplotlib.org/mpl_examples/color/colormaps_reference.py
"""
==================
Colormap reference
==================
Reference for colormaps included with Matplotlib.
This reference example shows all colormaps included with Matplotlib. Note that
any colormap listed here can be reversed by appending "_r" (e.g., "pink_r").
These colormaps are divided into the following categories:
Sequential:
These colormaps are approximately monochromatic colormaps varying smoothly
between two color tones---usually from low saturation (e.g. white) to high
saturation (e.g. a bright blue). Sequential colormaps are ideal for
representing most scientific data since they show a clear progression from
low-to-high values.
Diverging:
These colormaps have a median value (usually light in color) and vary
smoothly to two different color tones at high and low values. Diverging
colormaps are ideal when your data has a median value that is significant
(e.g. 0, such that positive and negative values are represented by
different colors of the colormap).
Qualitative:
These colormaps vary rapidly in color. Qualitative colormaps are useful for
choosing a set of discrete colors. For example::
color_list = plt.cm.Set3(np.linspace(0, 1, 12))
gives a list of RGB colors that are good for plotting a series of lines on
a dark background.
Miscellaneous:
Colormaps that don't fit into the categories above.
"""
import numpy as np
import matplotlib.pyplot as plt
# Have colormaps separated into categories:
# http://matplotlib.org/examples/color/colormaps_reference.html
cmaps = [('Perceptually Uniform Sequential', [
'viridis', 'plasma', 'inferno', 'magma']),
('Sequential', [
'Greys', 'Purples', 'Blues', 'Greens', 'Oranges', 'Reds',
'YlOrBr', 'YlOrRd', 'OrRd', 'PuRd', 'RdPu', 'BuPu',
'GnBu', 'PuBu', 'YlGnBu', 'PuBuGn', 'BuGn', 'YlGn']),
('Sequential (2)', [
'binary', 'gist_yarg', 'gist_gray', 'gray', 'bone', 'pink',
'spring', 'summer', 'autumn', 'winter', 'cool', 'Wistia',
'hot', 'afmhot', 'gist_heat', 'copper']),
('Diverging', [
'PiYG', 'PRGn', 'BrBG', 'PuOr', 'RdGy', 'RdBu',
'RdYlBu', 'RdYlGn', 'Spectral', 'coolwarm', 'bwr', 'seismic']),
('Qualitative', [
'Pastel1', 'Pastel2', 'Paired', 'Accent',
'Dark2', 'Set1', 'Set2', 'Set3',
'tab10', 'tab20', 'tab20b', 'tab20c']),
('Miscellaneous', [
'flag', 'prism', 'ocean', 'gist_earth', 'terrain', 'gist_stern',
'gnuplot', 'gnuplot2', 'CMRmap', 'cubehelix', 'brg', 'hsv',
'gist_rainbow', 'rainbow', 'jet', 'nipy_spectral', 'gist_ncar'])]
nrows = max(len(cmap_list) for cmap_category, cmap_list in cmaps)
gradient = np.linspace(0, 1, 256)
gradient = np.vstack((gradient, gradient))
def plot_color_gradients(cmap_category, cmap_list, nrows):
fig, axes = plt.subplots(nrows=nrows)
fig.subplots_adjust(top=0.95, bottom=0.01, left=0.2, right=0.99)
axes[0].set_title(cmap_category + ' colormaps', fontsize=14)
for ax, name in zip(axes, cmap_list):
ax.imshow(gradient, aspect='auto', cmap=plt.get_cmap(name))
pos = list(ax.get_position().bounds)
x_text = pos[0] - 0.01
y_text = pos[1] + pos[3]/2.
fig.text(x_text, y_text, name, va='center', ha='right', fontsize=10)
# Turn off *all* ticks & spines, not just the ones with colormaps.
for ax in axes:
ax.set_axis_off()
for cmap_category, cmap_list in cmaps:
plot_color_gradients(cmap_category, cmap_list, nrows)
plt.show()
"""
Explanation: Colormaps
Another very important property of many figures is the colormap. The job of a colormap is to relate a scalar value to a color. In addition to the regular portion of the colormap, an "over", "under" and "bad" color can be optionally defined as well. NaNs will trigger the "bad" part of the colormap.
As we all know, we create figures in order to convey information visually to our readers. Much care and consideration have gone into the design of these colormaps. Your choice of colormap depends on what you are displaying. In mpl, the "jet" colormap was historically the default, but it is often not the colormap you would want to use. Much discussion has taken place on the mailing lists about what the default colormap should be. The v2.0 release of Matplotlib adopted a new default colormap, 'viridis', along with some other stylistic changes to the defaults.
I want to acknowledge Nicolas Rougier and Tony Yu for putting significant effort into educating users in proper colormap selection. Furthermore, thanks go to Nathaniel Smith and Stéfan van der Walt for developing the new perceptually uniform colormaps such as viridis. Here is the talk they gave at SciPy 2015 that does an excellent job explaining colormaps.
Here is the full gallery of all the pre-defined colormaps, organized by the types of data they are usually used for.
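The "over", "under" and "bad" colors mentioned above can be set on a copy of any registered colormap. This sketch is not in the original notebook, and the Colormap.copy() method assumes a reasonably recent Matplotlib (3.4 or later):

```python
import numpy as np
import matplotlib.pyplot as plt

# Copy the registered colormap so the original stays untouched.
cmap = plt.get_cmap("viridis").copy()
cmap.set_over("red")    # values above vmax
cmap.set_under("blue")  # values below vmin
cmap.set_bad("gray")    # NaNs trigger the "bad" color

z = np.random.random((10, 10))
z[0, 0] = np.nan  # will be drawn in gray
fig, ax = plt.subplots()
im = ax.imshow(z, cmap=cmap, vmin=0.2, vmax=0.8)
fig.colorbar(im, extend="both")
```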
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(1, 2)
z = np.random.random((10, 10))
ax1.imshow(z, interpolation='none', cmap='gray')
ax2.imshow(z, interpolation='none', cmap='coolwarm')
plt.show()
"""
Explanation: When colormaps are created in mpl, they get "registered" with a name. This allows one to specify a colormap to use by name.
End of explanation
"""
plt.scatter([1, 2, 3, 4], [4, 3, 2, 1])
plt.title(r'$\sigma_i=15$', fontsize=20)
plt.show()
"""
Explanation: Mathtext
Oftentimes, you simply need a superscript or some other bit of math text in your labels. Matplotlib provides a very easy way to do this for those familiar with LaTeX. Any text that is surrounded by dollar signs will be treated as "mathtext". Do note that because backslashes are prevalent in LaTeX, it is often a good idea to prepend an r to your string literal so that Python will not treat the backslashes as escape characters.
End of explanation
"""
import matplotlib as mpl
from matplotlib.rcsetup import cycler
mpl.rc('axes', prop_cycle=cycler('color', 'rgc') +
cycler('lw', [1, 4, 6]) +
cycler('linestyle', ['-', '-.', ':']))
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t)
plt.plot(t, t**2)
plt.plot(t, t**3)
plt.show()
"""
Explanation: Hatches
A Patch object can have a hatching defined for it.
/ - diagonal hatching
\ - back diagonal
| - vertical
- - horizontal
+ - crossed
x - crossed diagonal
o - small circle
O - large circle (upper-case 'o')
. - dots
* - stars
Letters can be combined, in which case all the specified
hatchings are done. Repeating a character increases the
density of hatching of that pattern.
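Since hatches apply to Patch objects, a quick sketch (not one of the original cells) is to hatch the rectangles of a bar plot:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Each bar is a Patch, so it accepts a hatch specification.
bars = ax.bar([1, 2, 3], [4, 6, 5], fill=False, hatch="//")
```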
Property Cycles
In case you haven't noticed, when you make multiple plot calls in the same axes -- and don't specify any colors -- the color of each plot is different! The default style in Matplotlib will cycle through a list of colors if you don't specify any. This feature has been in Matplotlib for a long time and is similar to MATLAB behavior.
In v1.5, this feature was extended so that one can cycle through other properties besides just color. Now, you can cycle linestyles, markers, hatch styles -- just about any property that can be specified is now possible to be cycled.
This feature is still being refined, and there have been significant improvements in its usability since v1.5, but here is a basic example that will work for v2.0 or greater (for v1.5, you may need to have this cycler expression quoted).
End of explanation
"""
mpl.rc('axes', prop_cycle=cycler('color', ['r', 'orange', 'c', 'y']) +
cycler('hatch', ['x', 'xx-', '+O.', '*']))
x = np.array([0.4, 0.2, 0.5, 0.8, 0.6])
y = [0, -5, -6, -5, 0]
plt.fill(x+1, y)
plt.fill(x+2, y)
plt.fill(x+3, y)
plt.fill(x+4, y)
plt.show()
"""
Explanation: Ugly tie contest!
End of explanation
"""
import matplotlib
print(matplotlib.matplotlib_fname())
"""
Explanation: Transforms
The topic of transforms in Matplotlib, that is the ability to map the coordinates specified by your data to the coordinates of your figure, is very advanced and will not be covered in this tutorial. For those who are interested in learning about them, see the transformation tutorial. For those who are really daring, there are the developer guides to transforms and scales. While most users will never, ever need to understand Matplotlib transforms to the level described in those links, it is important to be aware of them, and their critical role in figure-making.
In a figure, there are four coordinate systems: display, figure, axes, and data. Transforms are used to convert coordinates in one system into another system for various uses. This is how Matplotlib knows exactly where to place the ticks and ticklabels, even when you change the axis limits. The ticker says that the tick and label "1.5", for example, are to go at data x-coordinate 1.5. The transform says that location is at 0.4 in axes x-coordinate space. Meanwhile, the xlabel of "Distance" is placed at axes x-coordinate space of 0.5 (half-way). Meanwhile, a legend might be placed at a location relative to the figure coordinates.
Furthermore, the transform system is what is used to allow various scales to work, such as log scales. The transform system is what is used to make the polar plots work seamlessly. Whether you realize it or not, you use the transforms system in Matplotlib all the time. Everything drawn in Matplotlib has a transform associated with it. Most of the time, you will never notice this, and will happily operate within the data coordinate system. But when you want to do some more advanced plots, with some eye-catching visual tricks, the transform system will be there for you.
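As a small illustration of these coordinate systems (a sketch, not from the original tutorial), transData maps data coordinates into display coordinates, and the inverse of transAxes maps display coordinates into axes coordinates:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(0, 3)
ax.set_ylim(0, 3)

# Data -> display (pixels), then display -> axes fraction.
display_xy = ax.transData.transform((1.5, 1.5))
axes_xy = ax.transAxes.inverted().transform(display_xy)
print(axes_xy)  # the data point (1.5, 1.5) sits at (0.5, 0.5) in axes coordinates
```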
Managing the unmanageable -- Introducing matplotlibrc
Matplotlib's greatest strength is its ability to give you complete control over every single aspect of your plots and figures. Matplotlib's second greatest strength is its ability to take over as much of your plots and figures as you want. You, as the user, would never consider using Matplotlib if you had to specify all of these things for every single plot. Most of the time, the defaults are exactly what you want them to be.
Matplotlib uses the matplotlibrc configuration file to define the plethora of defaults found in the library. You can control the defaults of almost every property in Matplotlib: figure size and dpi, line width, color and style, axes, axis and grid properties, text and font properties and so on. Just modify your rc file and re-run your scripts to produce your improved figures.
To display where the currently active matplotlibrc file was loaded from, one can do the following:
End of explanation
"""
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcdefaults() # for when re-running this cell
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([1, 2, 3, 4])
mpl.rc('lines', linewidth=2, linestyle='-.')
# Equivalent older, but still valid syntax
#mpl.rcParams['lines.linewidth'] = 2
#mpl.rcParams['lines.linestyle'] = '-.'
ax2.plot([1, 2, 3, 4])
plt.show()
"""
Explanation: You can also change the rc settings during runtime within a python script or interactively from the python shell. All of the rc settings are stored in a dictionary-like variable called matplotlib.rcParams, which is global to the matplotlib package. rcParams can be modified directly. Newer versions of matplotlib can use rc(), for example:
End of explanation
"""
landlab/landlab | notebooks/tutorials/overland_flow/linear_diffusion_overland_flow/linear_diffusion_overland_flow_router.ipynb | mit
import numpy as np
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components.overland_flow import LinearDiffusionOverlandFlowRouter
from landlab.io.esri_ascii import read_esri_ascii
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
The Linear Diffusion Overland Flow Router
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
Overview
This notebook demonstrates the LinearDiffusionOverlandFlowRouter Landlab component. The component implements a two-dimensional model of overland flow, based on a linearization of the diffusion-wave approximation of the shallow-water equations.
Theory
Flow direction, depth, and velocity
The diffusion-wave equations are a simplified form of the 2D shallow-water equations in which energy slope is assumed to equal water-surface slope. Conservation of water mass is expressed in terms of the time derivative of the local water depth, $H$, and the spatial derivative (divergence) of the unit discharge vector $\mathbf{q} = UH$ (where $U$ is the 2D depth-averaged velocity vector):
$$\frac{\partial H}{\partial t} = R - \nabla\cdot \mathbf{q}$$
where $R$ is the local runoff rate [L/T] and $\mathbf{q}$ has dimensions of volume flow per time per width [L$^2$/T]. The flow velocity is calculated using a linearized form of the Manning friction law:
$$\mathbf{U} = \frac{H^{4/3}}{n^2 u_c} \nabla w$$
$$w = \eta + H$$
Here $\eta(x,y,t)$ is ground-surface elevation, $w(x,y,t)$ is water-surface elevation, $n$ is the Manning friction coefficient, and $u_c$ is a characteristic scale velocity (see, e.g., Mariotti, 2018). Thus, there are two parameters governing flow speed: $n$ and $u_c$. They may, however, be treated as a single lumped parameter $n^2 u_c$.
Rainfall and infiltration
Runoff rate is calculated as the difference between the rates of precipitation, $P$, and infiltration, $I$. The user specifies a precipitation rate (which is a public variable that can be modified after instantiation), and a maximum infiltration rate, $I_c$. The actual infiltration rate depends on the available surface water, and is calculated in a way that allows it to approach zero as the surface-water depth approaches zero:
$$I = I_c \left( 1 - e^{-H/H_i} \right)$$
where $H_i$ is a characteristic water depth, defined such that the actual infiltration rate is about 95% of $I_c$ when $H = 3 H_i$.
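To make the behavior of this infiltration law concrete, here is a small numerical sketch (the value of $H_i$ below is illustrative, not taken from the component's defaults):

```python
import numpy as np

def infiltration_rate(H, infilt_cap, H_i):
    """Actual infiltration rate for surface-water depth H (SI units)."""
    return infilt_cap * (1.0 - np.exp(-H / H_i))

I_c = 10.0 / (3600.0 * 1000.0)  # capacity: 10 mm/hr converted to m/s
H_i = 0.001                     # characteristic depth, m (illustrative)

print(infiltration_rate(0.0, I_c, H_i))            # no water -> no infiltration
print(infiltration_rate(3 * H_i, I_c, H_i) / I_c)  # about 0.95 when H = 3 H_i
```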
Numerical Methods
Finite-volume representation
The component uses an explicit, forward-Euler finite-volume method. The solution for water depth at a new time step $k+1$ is calculated from:
$$H_i^{k+1} = H_i^k + \Delta t \left[ \frac{dH_i}{dt} \right]_i^k$$
The time derivative at step $k$ is calculated as:
$$\left[ \frac{dH_i}{dt} \right]_i^k = R - I - \frac{1}{\Lambda_i} \sum_{j=1}^{N_i} \lambda_{ij} q_{ij}$$
where $R$ is rainfall rate, $I$ is infiltration rate (calculated as above), $\Lambda_i$ is the horizontal surface area of the cell enclosing node $i$, $\lambda_{ij}$ is the length of the cell face between node $i$ and its $j$-th neighbor, and $q_{ij}$ is the specific water discharge along the link that connects node $i$ and its $j$-th neighbor.
For a raster grid, this treatment is equivalent to a centered-in-space finite-difference arrangement. For more on finite-difference solutions to diffusion problems, see for example Slingerland and Kump (2011) and Press et al. (1986).
Time-step limiter
Because of the linearization described above, the flow model is effectively a diffusion problem with a space- and time-varying diffusivity, because the effective diffusivity $D$ depends on water depth:
$$D = \frac{H^{7/3}}{n^2 u_c}$$
One numerical challenge is that, according to the Courant-Friedrichs-Lewy (CFL) criterion, the maximum stable time step will depend on water depth, which varies in space and time. To prevent instability, the solution algorithm calculates at every iteration a maximum value for $D$ using the current maximum water depth (or a very small minimum value, whichever is larger, to prevent blowup in the case of zero water depth). The maximum step size is then calculated as:
$$\Delta t_\text{max} = \alpha \frac{L_\text{min}^2}{2 D}$$
where $L_\text{min}$ is the length of the shortest link in the grid (which is just equal to node spacing, $\Delta x$, for a uniform raster or hex grid). The stability factor $\alpha$ is a user-controllable parameter that defaults to 0.2, and must be $\le 1$.
If $\Delta t_\text{max}$ is less than the user-specified "global" time step duration, the algorithm iterates repeatedly with time steps of size $\Delta t_\text{max}$ (or the remaining time in the global step, whichever is smaller) until the global time step duration has been completed.
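The adaptive step-size logic can be sketched as follows (a simplified stand-alone version, not the component's actual implementation; the depth floor value is illustrative):

```python
def max_stable_dt(max_depth, roughness, vel_scale, min_link_length, alpha=0.2):
    """Largest stable explicit step for the depth-dependent diffusivity."""
    h = max(max_depth, 1.0e-6)  # floor the depth to avoid division by zero
    D = h ** (7.0 / 3.0) / (roughness**2 * vel_scale)
    return alpha * min_link_length**2 / (2.0 * D)

# e.g. 10 m node spacing, n = 0.1, u_c = 1 m/s, 10 cm of water:
print(max_stable_dt(0.1, 0.1, 1.0, 10.0))
```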
The component
Import the needed libraries, then inspect the component's docstring:
End of explanation
"""
help(LinearDiffusionOverlandFlowRouter)
"""
Explanation: Use the help function to get a description of the LinearDiffusionOverlandFlowRouter component. If you scroll down to the __init__ section, you will see a list of parameters.
End of explanation
"""
# Process parameters
n = 0.01 # roughness coefficient, (s/m^(1/3))
vel_scale = 1.0 # velocity scale, m/s
R = 72.0 / (3600.0 * 1000.0)  # runoff rate, m/s (converted from 72 mm/hr)
# Run-control parameters
run_time = 360.0 # duration of run, (s)
nrows = 3 # number of node rows
ncols = 3 # number of node columns
dx = 2.0 # node spacing, m
dt = 20.0 # time-step size, s
plot_every = 60.0 # plot interval, s
# Derived parameters
num_steps = int(run_time / dt)
"""
Explanation: Example 1: downpour on a single cell
The first example tests that the component can reproduce the expected steady flow depth for rain on a single cell. The input water flow rate, in $m^3 / s$, is:
$$Q_\text{in} = P \Delta x^2$$
The output flow rate is
$$Q_\text{out} = \frac{\Delta x}{n^2 u_c} H^{7/3} S_w$$
where $S_w$ is the water surface slope. We can write the water-surface slope in terms of the water height of the (one) core node (which just equals $H$, because the ground elevation is zero) and the water height at the adjacent open-boundary node, which is zero, so
$$S_w = \frac{H}{\Delta x}$$
We can therefore plug this into the equation for $Q_\text{out}$ and solve for the expected equilibrium depth:
$$H = \left(\Delta x^2 P n^2 u_c \right)^{3/10}$$
Pick the initial and run conditions
End of explanation
"""
# create and set up grid
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
grid.set_closed_boundaries_at_grid_edges(False, True, True, True) # open only on east
# add required field
elev = grid.add_zeros("topographic__elevation", at="node")
# Instantiate the component
olflow = LinearDiffusionOverlandFlowRouter(
grid, rain_rate=R, roughness=n, velocity_scale=vel_scale
)
# Helpful function to plot the profile
def plot_flow_profile(grid, olflow):
"""Plot the middle row of topography and water surface
for the overland flow model olflow."""
nc = grid.number_of_node_columns
nr = grid.number_of_node_rows
startnode = nc * (nr // 2) + 1
midrow = np.arange(startnode, startnode + nc - 1, dtype=int)
topo = grid.at_node["topographic__elevation"]
plt.plot(
grid.x_of_node[midrow],
topo[midrow] + grid.at_node["surface_water__depth"][midrow],
"b",
)
plt.plot(grid.x_of_node[midrow], topo[midrow], "k")
plt.xlabel("Distance (m)")
plt.ylabel("Ground and water surface height (m)")
"""
Explanation: Create grid and fields:
End of explanation
"""
next_plot = plot_every
HH = [] # keep track of depth through time
for i in range(num_steps):
olflow.run_one_step(dt)
if (i + 1) * dt >= next_plot:
plot_flow_profile(grid, olflow)
next_plot += plot_every
HH.append(grid.at_node["surface_water__depth"][4])
# Compare with analytical solution for depth
expected_depth = (dx * dx * R * n * n * vel_scale) ** 0.3
computed_depth = grid.at_node["surface_water__depth"][4]
print(f"Expected depth = {expected_depth} m")
print(
f"Computed depth = {computed_depth} m",
)
plt.plot(np.linspace(0, run_time, len(HH)), HH)
plt.xlabel("Time (s)")
plt.ylabel("Water depth (m)")
"""
Explanation: Run the component forward in time, plotting the output in the form of a profile:
End of explanation
"""
# Process parameters
n = 0.1 # roughness coefficient, (s/m^(1/3))
uc = 1.0 # characteristic velocity scale (m/s)
R1 = 72.0 / (3600.0 * 1000.0) # initial rainfall rate, m/s (converted from mm/hr)
R2 = 0.0 / (3600.0 * 1000.0) # later rainfall rate, m/s (converted from mm/hr)
infilt_cap = 10.0 / (3600 * 1000.0) # infiltration capacity, m/s (converted from mm/hr)
# Run-control parameters
heavy_rain_duration = 300.0 # duration of heavy rainfall, s
run_time = 1200.0 # duration of run, s
dt = 20.0 # time-step size, s
dem_filename = "../hugo_site.asc"
# Derived parameters
num_steps = int(run_time / dt)
# set up arrays to hold discharge, time, and rain rate
time_since_storm_start = np.linspace(0.0, run_time, num_steps + 1)
discharge = np.zeros(num_steps + 1)
rain_rate = np.zeros(num_steps + 1)
rain_rate[:] = R1
rain_rate[time_since_storm_start >= heavy_rain_duration] = R2
# Read the DEM file as a grid with a 'topographic__elevation' field
(grid, elev) = read_esri_ascii(dem_filename, name="topographic__elevation")
# Configure the boundaries: valid right-edge nodes will be open;
# all NODATA (= -9999) nodes will be closed.
grid.status_at_node[grid.nodes_at_right_edge] = grid.BC_NODE_IS_FIXED_VALUE
grid.status_at_node[np.isclose(elev, -9999.0)] = grid.BC_NODE_IS_CLOSED
# display the topography
imshow_grid(grid, elev, colorbar_label="Elevation (m)", cmap="pink")
"""
Explanation: Example 2: overland flow on a DEM
For this example, we'll import a small digital elevation model (DEM) for a site in New Mexico, USA, with 10 m cells.
End of explanation
"""
indices = np.where(elev[grid.nodes_at_right_edge] > 0.0)[0]
outlet_nodes = grid.nodes_at_right_edge[indices]
print(f"Outlet nodes: {outlet_nodes}")
print(f"Elevations of the outlet nodes: {elev[outlet_nodes]}")
links_at_outlets = grid.links_at_node[outlet_nodes]
links_to_track = links_at_outlets[
grid.status_at_link[links_at_outlets] == grid.BC_LINK_IS_ACTIVE
].flatten()
print(f"Links at which to track discharge: {links_to_track}")
# Instantiate the component
olflow = LinearDiffusionOverlandFlowRouter(
    grid, rain_rate=R1, infilt_rate=infilt_cap, roughness=n, velocity_scale=uc
)
def cfl():
    """Illustrative CFL time-step estimate based on the current max depth."""
    hmax = np.amax(grid.at_node["surface_water__depth"])
    D = hmax ** (7.0 / 3.0) / (n * n * uc)
    return 0.5 * grid.dx * grid.dx / D
q = grid.at_link["water__specific_discharge"]
for i in range(num_steps):
olflow.rain_rate = rain_rate[i]
olflow.run_one_step(dt)
discharge[i + 1] = np.sum(q[links_to_track]) * dx
plt.plot(time_since_storm_start / 60.0, discharge)
plt.xlabel("Time (min)")
plt.ylabel("Discharge (cms)")
plt.grid(True)
imshow_grid(
grid,
grid.at_node["surface_water__depth"],
cmap="Blues",
colorbar_label="Water depth (m)",
)
"""
Explanation: It would be nice to track discharge at the watershed outlet, but how do we find the outlet location? We actually have several valid nodes along the right-hand edge. Then we'll keep track of the field water__specific_discharge at the active links that connect to these boundary nodes. We can identify the nodes by the fact that they are (a) at the right-hand edge of the grid, and (b) have positive elevations (the ones with -9999 are outside of the watershed). We can identify the relevant active links as those connected to the outlet nodes that have active status (meaning they do not connect to any closed boundary nodes).
End of explanation
"""
ryan-leung/PHYS4650_Python_Tutorial | notebooks/06-Python-Matplotlib.ipynb | bsd-3-clause
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
"""
Explanation: Matplotlib
<img src="images/matplotlib.svg" alt="matplotlib" style="width: 600px;"/>
<a href="https://colab.research.google.com/github/ryan-leung/PHYS4650_Python_Tutorial/blob/master/notebooks/06-Python-Matplotlib.ipynb"><img align="right" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory">
</a>
Using matplotlib in Jupyter notebook
End of explanation
"""
x = np.arange(-np.pi,np.pi,0.01) # Create an array of x values from -pi to pi with 0.01 interval
y = np.sin(x) # Apply sin function on all x
plt.plot(x,y)
plt.plot(y)
"""
Explanation: Line Plots
plt.plot Plot lines and/or markers:
* plot(x, y)
* plot x and y using default line style and color
* plot(x, y, 'bo')
* plot x and y using blue circle markers
* plot(y)
* plot y using x as index array 0..N-1
* plot(y, 'r+')
* Similar, but with red plusses
run
%pdoc plt.plot
for more details
End of explanation
"""
x = np.arange(0,10,1) # x = 1,2,3,4,5...
y = x*x # Squared x
plt.plot(x,y,'bo') # plot x and y using blue circle markers
plt.plot(x,y,'r+') # plot x and y using red plusses
"""
Explanation: Scatter Plots
plt.plot can also plot markers.
End of explanation
"""
x = np.arange(-np.pi,np.pi,0.001)
plt.plot(x,np.sin(x))
plt.title('y = sin(x)') # title
plt.xlabel('x (radians)') # x-axis label
plt.ylabel('y') # y-axis label
# To render the axis labels with LaTeX, we can run
from matplotlib import rc
## For sans-serif font:
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
rc('text', usetex=True)
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
plt.plot(x,np.sin(x))
plt.title(r'T = sin($\theta$)') # title, the `r` in front of the string means raw string
plt.xlabel(r'$\theta$ (radians)') # x-axis label, LaTeX syntax is enclosed in $...$
plt.ylabel('T') # y-axis label
"""
Explanation: Plot properties
Add x-axis and y-axis
End of explanation
"""
x1 = np.linspace(0.0, 5.0)
x2 = np.linspace(0.0, 2.0)
y1 = np.cos(2 * np.pi * x1) * np.exp(-x1)
y2 = np.cos(2 * np.pi * x2)
plt.subplot(2, 1, 1)
plt.plot(x1, y1, '.-')
plt.title('Plot 2 graph at the same time')
plt.ylabel('Amplitude (Damped)')
plt.subplot(2, 1, 2)
plt.plot(x2, y2, '.-')
plt.xlabel('time (s)')
plt.ylabel('Amplitude (Undamped)')
"""
Explanation: Multiple plots
End of explanation
"""
plt.plot(x,np.sin(x))
plt.savefig('plot.pdf')
plt.savefig('plot.png')
# To load image into this Jupyter notebook
from IPython.display import Image
Image("plot.png")
"""
Explanation: Save figure
End of explanation
"""
ekaakurniawan/iPyMacLern | ML-W1/Linear Regression With One Variable.ipynb | gpl-3.0
import sys
print("Python %d.%d.%d" % (sys.version_info.major, \
sys.version_info.minor, \
sys.version_info.micro))
import numpy as np
print("NumPy %s" % np.__version__)
import matplotlib
import matplotlib.pyplot as plt
print("matplotlib %s" % matplotlib.__version__)
"""
Explanation: Part of iPyMacLern project.
Copyright (C) 2015 by Eka A. Kurniawan
eka.a.kurniawan(ta)gmail(tod)com
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see http://www.gnu.org/licenses/.
Tested On
End of explanation
"""
# Display graph inline
%matplotlib inline
# Display graph in 'retina' format for Mac with retina display. Others, use PNG or SVG format.
%config InlineBackend.figure_format = 'retina'
#%config InlineBackend.figure_format = 'PNG'
#%config InlineBackend.figure_format = 'SVG'
# For displaying 3D graph
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
"""
Explanation: Display Settings
End of explanation
"""
filename = 'ex1data1.txt'
data = np.recfromcsv(filename, delimiter=',', names=['population', 'profit'])
plt.scatter(data['population'], data['profit'], color='red', s=2)
plt.title('Restaurant Sales')
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.show()
# Total training samples
m = len(data)
# Add column of ones to variable x
X = np.append(np.ones([m,1]),data['population'].reshape(m,1),1)
# Output
y = data['profit']
# Initialize fitting parameters
theta = np.zeros([2, 1])
"""
Explanation: Linear Regression With One Variable[1]
The data we use is from different outlets across multiple cities and consists of the profit of each outlet together with the population of the city in which the outlet is located. Using this known data, we can perform linear regression analysis between the two variables (profit vs. population) and then use the result to predict the profit of a new outlet, given the population of the city where it will be located.
Get and Plot Data
End of explanation
"""
def compute_cost(X, y, theta):
# Total training samples
m = len(y)
h = theta[0] + (theta[1] * X[:,1])
J = (1 / (2*m)) * sum((h - y) ** 2)
return J
J = compute_cost(X, y, theta)
print(J)
"""
Explanation: Cost Function
For $x$ as input variable (feature) and $y$ as output/target variable, we have cost function $J$ of parameter $\theta$ as follow.
$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$
Where hypothesis $h_{\theta}(x)$ is the linear model as follow.
$$h_{\theta}(x) = \theta^Tx = \theta_0 + \theta_1x$$
And $m$ is total training samples and $i$ is the index.
End of explanation
"""
def perform_gradient_descent(X, y, theta, alpha, iterations):
# Total training samples
m = len(y)
# History of costs
J_history = np.zeros([iterations, 1])
for i in range(iterations):
h = theta[0] + (theta[1] * X[:,1])
error = h - y
theta_tmp1 = theta[0] - ((alpha / m) * sum(error))
theta_tmp2 = theta[1] - ((alpha / m) * sum(error * X[:,1]))
theta[0] = theta_tmp1
theta[1] = theta_tmp2
J_history[i] = compute_cost(X, y, theta)
return theta, J_history
iterations = 1500
alpha = 0.01
theta, J_history = perform_gradient_descent(X, y, theta, alpha, iterations)
theta
J_history
"""
Explanation: Gradient Descent Algorithm
Our objective is to have the parameters ($\theta_0$ and $\theta_1$) to produce a hypothesis $h_{\theta}(x)$ as close as possible to target variable $y$. As we have defined our cost function $J(\theta)$ as the error of the two therefore all we need to do is to minimize the cost function $\left(\underset{\theta_0, \theta_1}{\text{minimize}} \hspace{2mm} J(\theta_0, \theta_1)\right)$.
Following is the algorithm where $\alpha$ is the learning rate.
repeat until convergence {
$\hspace{10mm} \theta_j := \theta_j - \alpha \frac{\partial}{\partial\theta_j} J(\theta_0, \theta_1) \hspace{10mm} (\text{for } j=0 \text{ and } j=1)$
}
Derive cost function $J(\theta_j)$ for both $j=0$ and $j=1$.
$j = 0$ :
$$\frac{\partial}{\partial\theta_0} J(\theta_0, \theta_1) =$$
$$\frac{\partial}{\partial\theta_0} \frac{1}{2m} \sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)^2 =$$
$$\frac{\partial}{\partial\theta_0} \frac{1}{2m} \sum_{i=1}^{m}\left(\theta_0 + \theta_1x^{(i)}-y^{(i)}\right)^2 =$$
$$\frac{1}{m} \sum_{i=1}^{m}\left(\theta_0 + \theta_1x^{(i)} - y^{(i)}\right) =$$
$$\frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)$$
$j = 1$ :
$$\frac{\partial}{\partial\theta_1} J(\theta_0, \theta_1) =$$
$$\frac{\partial}{\partial\theta_1} \frac{1}{2m} \sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)^2 =$$
$$\frac{\partial}{\partial\theta_1} \frac{1}{2m} \sum_{i=1}^{m}\left(\theta_0 + \theta_1x^{(i)}-y^{(i)}\right)^2 =$$
$$\frac{1}{m} \sum_{i=1}^{m}\left(\theta_0 + \theta_1x^{(i)} - y^{(i)}\right) . x^{(i)} =$$
$$\frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right) . x^{(i)}$$
For this linear case, the algorithm would be as follow.
repeat until convergence {
$\hspace{10mm} \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right)$
$\hspace{10mm} \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m}\left(h_{\theta}(x^{(i)}) - y^{(i)}\right) . x^{(i)}$
}
End of explanation
"""
plt.plot(J_history)
plt.title('Cost History')
plt.xlabel('Iterations')
plt.ylabel(r'Cost $J(\theta)$')
plt.show()
"""
Explanation: Plot Cost History
End of explanation
"""
plt.scatter(data['population'], data['profit'], color='red', s=2)
plt.plot(data['population'], np.dot(X, theta), color='blue')
plt.title('Restaurant Sales')
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plt.show()
"""
Explanation: Plot Linear Fit
End of explanation
"""
ttl_population = 35000
print('For population = 35,000, we predict a profit of %s' % \
int(np.dot([1, ttl_population/10000], theta) * 10000))
ttl_population = 70000
print('For population = 70,000, we predict a profit of %s' % \
int(np.dot([1, ttl_population/10000], theta) * 10000))
"""
Explanation: Make Prediction
Predict profit for cities with population of 35,000 and 70,000 people.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.view_init(azim=-120)
theta0_vals = np.linspace(-10, 10, 100)
theta1_vals = np.linspace(-1, 4, 100)
theta0_mesh, theta1_mesh = np.meshgrid(theta0_vals, theta1_vals)
zs = np.array([compute_cost(X, y, [theta0_val, theta1_val])
               for theta0_val, theta1_val in zip(np.ravel(theta0_mesh), np.ravel(theta1_mesh))])
Z = zs.reshape(theta0_mesh.shape)
ax.plot_surface(theta0_mesh, theta1_mesh, Z, cmap=cm.coolwarm)
ax.set_xlabel('Theta 0')
ax.set_ylabel('Theta 1')
ax.set_zlabel(r'Cost $J(\theta)$')
plt.show()
"""
Explanation: Visualize Cost for Different Range of Parameters
End of explanation
"""
# repo: turi-code/tutorials | path: strata-sj-2016/time-series/anomaly_detection.ipynb | license: apache-2.0
import graphlab as gl
okla_daily = gl.load_timeseries('working_data/ok_daily_stats.ts')
print "Number of rows:", len(okla_daily)
print "Start:", okla_daily.min_time
print "End:", okla_daily.max_time
okla_daily.print_rows(3)
import matplotlib.pyplot as plt
%matplotlib notebook
plt.style.use('ggplot')
fig, ax = plt.subplots()
ax.plot(okla_daily['time'], okla_daily['count'], color='dodgerblue')
ax.set_ylabel('Number of quakes')
ax.set_xlabel('Date')
fig.autofmt_xdate()
fig.show()
"""
Explanation: 1. Load and inspect the data: Oklahoma earthquake stats
End of explanation
"""
from graphlab.toolkits import anomaly_detection
model = anomaly_detection.create(okla_daily, features=['count'])
print model
"""
Explanation: 2. Let the toolkit choose the model
End of explanation
"""
threshold = 5
anomaly_mask = okla_daily['count'] >= threshold
anomaly_scores = okla_daily[['count']]
anomaly_scores['threshold_score'] = anomaly_mask
anomaly_scores.tail(8).print_rows()
"""
Explanation: 3. The simple thresholding model
End of explanation
"""
from graphlab.toolkits.anomaly_detection import moving_zscore
zscore_model = moving_zscore.create(okla_daily, feature='count',
window_size=30,
min_observations=15)
print zscore_model
zscore_model.scores.tail(3)
zscore_model.scores.head(3)
anomaly_scores['outlier_score'] = zscore_model.scores['anomaly_score']
anomaly_scores.tail(5).print_rows()
fig, ax = plt.subplots(2, sharex=True)
ax[0].plot(anomaly_scores['time'], anomaly_scores['count'], color='dodgerblue')
ax[0].set_ylabel('# quakes')
ax[1].plot(anomaly_scores['time'], anomaly_scores['outlier_score'], color='orchid')
ax[1].set_ylabel('outlier score')
ax[1].set_xlabel('Date')
fig.autofmt_xdate()
fig.show()
"""
Explanation: 4. The moving Z-score model
End of explanation
"""
from graphlab.toolkits.anomaly_detection import bayesian_changepoints
changept_model = bayesian_changepoints.create(okla_daily, feature='count',
expected_runlength=2000, lag=7)
print changept_model
anomaly_scores['changepoint_score'] = changept_model.scores['changepoint_score']
anomaly_scores.head(5)
fig, ax = plt.subplots(3, sharex=True)
ax[0].plot(anomaly_scores['time'], anomaly_scores['count'], color='dodgerblue')
ax[0].set_ylabel('# quakes')
ax[1].plot(anomaly_scores['time'], anomaly_scores['outlier_score'], color='orchid')
ax[1].set_ylabel('outlier score')
ax[2].plot(anomaly_scores['time'], anomaly_scores['changepoint_score'], color='orchid')
ax[2].set_ylabel('changepoint score')
ax[2].set_xlabel('Date')
fig.autofmt_xdate()
fig.show()
"""
Explanation: 5. The Bayesian changepoint model
End of explanation
"""
threshold = 0.5
anom_mask = anomaly_scores['changepoint_score'] >= threshold
anomalies = anomaly_scores[anom_mask]
print "Number of anomalies:", len(anomalies)
anomalies.head(5)
"""
Explanation: 6. How to use the anomaly scores
Option 1: choose an anomaly threshold a priori
Slightly better than choosing a threshold in the original feature space.
For Bayesian changepoint detection, where the scores are probabilities, there is a natural threshold of 0.5.
End of explanation
"""
anomalies = anomaly_scores.to_sframe().topk('changepoint_score', k=5)
print "Number of anomalies:", len(anomalies)
anomalies.head(5)
"""
Explanation: Option 2: choose the top-k anomalies
If you have a fixed budget for investigating and acting on anomalies, this is a good way to go.
End of explanation
"""
anomaly_scores['changepoint_score'].show()
threshold = 0.072
anom_mask = anomaly_scores['changepoint_score'] >= threshold
anomalies = anomaly_scores[anom_mask]
print "Number of anomalies:", len(anomalies)
anomalies.head(5)
"""
Explanation: Option 3: look at the anomaly distribution and choose a threshold
End of explanation
"""
from interactive_plot import LineDrawer
fig, ax = plt.subplots(3, sharex=True)
guide_lines = []
threshold_lines = []
p = ax[0].plot(anomaly_scores['time'], anomaly_scores['count'],
color='dodgerblue')
ax[0].set_ylabel('# quakes')
line, = ax[0].plot((anomaly_scores.min_time, anomaly_scores.min_time),
ax[0].get_ylim(), lw=1, ls='--', color='black')
guide_lines.append(line)
ax[1].plot(anomaly_scores['time'], anomaly_scores['outlier_score'],
color='orchid')
ax[1].set_ylabel('outlier score')
line, = ax[1].plot((anomaly_scores.min_time, anomaly_scores.min_time),
ax[1].get_ylim(), lw=1, ls='--', color='black')
guide_lines.append(line)
ax[2].plot(anomaly_scores['time'], anomaly_scores['changepoint_score'],
color='orchid')
ax[2].set_ylabel('changepoint score')
ax[2].set_xlabel('Date')
line, = ax[2].plot((anomaly_scores.min_time, anomaly_scores.min_time), (0., 1.),
lw=1, ls='--', color='black')
guide_lines.append(line)
for a in ax:
line, = a.plot(anomaly_scores.range, (0., 0.), lw=1, ls='--',
color='black')
threshold_lines.append(line)
plot_scores = anomaly_scores[['count', 'outlier_score', 'changepoint_score']]
interactive_thresholder = LineDrawer(plot_scores, guide_lines, threshold_lines)
interactive_thresholder.connect()
fig.autofmt_xdate()
fig.show()
interactive_thresholder.anoms.print_rows(10)
"""
Explanation: Option 4: get fancy with plotting
End of explanation
"""
okla_new = gl.load_timeseries('working_data/ok_daily_update.ts')
okla_new.print_rows(20)
"""
Explanation: 7. Updating the model with new data
End of explanation
"""
changept_model2 = changept_model.update(okla_new)
print changept_model2
changept_model2.scores.print_rows(20)
"""
Explanation: Why do we want to update the model, rather than training a new one?
1. Because we've updated our parameters using the data we've seen already.
2. Updating simplifies the drudgery of prepending the data to get a final score for the lags in the previous data set.
End of explanation
"""
# repo: tritemio/multispot_paper | path: Multi-spot Gamma Fitting.ipynb | license: mit
from fretbursts import fretmath
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from cycler import cycler
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import matplotlib as mpl
from cycler import cycler
bmap = sns.color_palette("Set1", 9)
colors = np.array(bmap)[(1,0,2,3,4,8,6,7), :]
mpl.rcParams['axes.prop_cycle'] = cycler('color', colors)
colors_labels = ['blue', 'red', 'green', 'violet', 'orange', 'gray', 'brown', 'pink', ]
for c, cl in zip(colors, colors_labels):
locals()[cl] = tuple(c) # assign variables with color names
sns.palplot(colors)
sns.set_style('whitegrid')
"""
Explanation: Multi-spot Gamma Fitting
End of explanation
"""
leakage_coeff_fname = 'results/Multi-spot - leakage coefficient KDE wmean DexDem.csv'
leakageM = float(np.loadtxt(leakage_coeff_fname, ndmin=1))
print('Multispot Leakage Coefficient:', leakageM)
"""
Explanation: Load Data
Multispot
Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analyis - Leakage coefficient fit):
End of explanation
"""
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_t beta.csv'
dir_ex_t = float(np.loadtxt(dir_ex_coeff_fname, ndmin=1))
print('Direct excitation coefficient (dir_ex_t):', dir_ex_t)
"""
Explanation: Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter):
End of explanation
"""
mspot_filename = 'results/Multi-spot - dsDNA - PR - all_samples all_ch.csv'
E_pr_fret = pd.read_csv(mspot_filename, index_col=0)
E_pr_fret
"""
Explanation: Multispot PR for FRET population:
End of explanation
"""
data_file = 'results/usALEX-5samples-E-corrected-all-ph.csv'
data_alex = pd.read_csv(data_file).set_index('sample')#[['E_pr_fret_kde']]
data_alex.round(6)
E_alex = data_alex.E_gauss_w
E_alex
"""
Explanation: usALEX
Corrected $E$ from μs-ALEX data:
End of explanation
"""
import lmfit
def residuals(params, E_raw, E_ref):
gamma = params['gamma'].value
# NOTE: leakageM and dir_ex_t are globals
return E_ref - fretmath.correct_E_gamma_leak_dir(E_raw, leakage=leakageM, gamma=gamma, dir_ex_t=dir_ex_t)
params = lmfit.Parameters()
params.add('gamma', value=0.5)
E_pr_fret_mean = E_pr_fret.mean(1)
E_pr_fret_mean
m = lmfit.minimize(residuals, params, args=(E_pr_fret_mean, E_alex))
lmfit.report_fit(m.params, show_correl=False)
E_alex['12d'], E_pr_fret_mean['12d']
m = lmfit.minimize(residuals, params, args=(np.array([E_pr_fret_mean['12d']]), np.array([E_alex['12d']])))
lmfit.report_fit(m.params, show_correl=False)
print('Fitted gamma(multispot):', m.params['gamma'].value)
multispot_gamma = m.params['gamma'].value
multispot_gamma
E_fret_mch = fretmath.correct_E_gamma_leak_dir(E_pr_fret, leakage=leakageM, dir_ex_t=dir_ex_t,
gamma=multispot_gamma)
E_fret_mch = E_fret_mch.round(6)
E_fret_mch
E_fret_mch.to_csv('results/Multi-spot - dsDNA - Corrected E - all_samples all_ch.csv')
'%.5f' % multispot_gamma
with open('results/Multi-spot - gamma factor.csv', 'wt') as f:
f.write('%.5f' % multispot_gamma)
norm = (E_fret_mch.T - E_fret_mch.mean(1))#/E_pr_fret.mean(1)
norm_rel = (E_fret_mch.T - E_fret_mch.mean(1))/E_fret_mch.mean(1)
norm.plot()
norm_rel.plot()
"""
Explanation: Multi-spot gamma fitting
End of explanation
"""
sns.set_style('whitegrid')
CH = np.arange(8)
CH_labels = ['CH%d' % i for i in CH]
dist_s_bp = [7, 12, 17, 22, 27]
fontsize = 16
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(dist_s_bp, E_fret_mch, '+', lw=2, mew=1.2, ms=10, zorder=4)
ax.plot(dist_s_bp, E_alex, '-', lw=3, mew=0, alpha=0.5, color='k', zorder=3)
plt.title('Multi-spot smFRET dsDNA, Gamma = %.2f' % multispot_gamma)
plt.xlabel('Distance in base-pairs', fontsize=fontsize);
plt.ylabel('E', fontsize=fontsize)
plt.ylim(0, 1); plt.xlim(0, 30)
plt.grid(True)
plt.legend(['CH1','CH2','CH3','CH4','CH5','CH6','CH7','CH8', u'μsALEX'],
fancybox=True, prop={'size':fontsize-1},
loc='best');
"""
Explanation: Plot FRET vs distance
End of explanation
"""
# repo: jamesfolberth/NGC_STEM_camp_AWS | path: notebooks/data8_notebooks/project3/project3.ipynb | license: bsd-3-clause
# Run this cell to set up the notebook, but please don't change it.
import numpy as np
import math
from datascience import *
# These lines set up the plotting functionality and formatting.
import matplotlib
matplotlib.use('Agg', warn=False)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
# These lines load the tests.
from client.api.assignment import load_assignment
tests = load_assignment('project3.ok')
"""
Explanation: Project 3 - Classification
Welcome to the third project of Data 8! You will build a classifier that guesses whether a song is hip-hop or country, using only the numbers of times words appear in the song's lyrics. By the end of the project, you should know how to:
Build a k-nearest-neighbors classifier.
Test a classifier on data.
Administrivia
Piazza
While collaboration is encouraged on this and other assignments, sharing answers is never okay. In particular, posting code or other assignment answers publicly on Piazza (or elsewhere) is academic dishonesty. It will result in a reduced project grade at a minimum. If you wish to ask a question and include your code or an answer to a written question, you must make it a private post.
Partners
You may complete the project with up to one partner. Partnerships are an exception to the rule against sharing answers. If you have a partner, one person in the partnership should submit your project on Gradescope and include the other partner in the submission. (Gradescope will prompt you to fill this in.)
For this project, you can partner with anyone in the class.
Due Date and Checkpoint
Part of the project will be due early. Parts 1 and 2 of the project (out of 4) are due Tuesday, November 22nd at 7PM. Unlike the final submission, this early checkpoint will be graded for completion. It will be worth approximately 10% of the total project grade. Simply submit your partially-completed notebook as a PDF, as you would submit any other notebook. (See the note above on submitting with a partner.)
The entire project (parts 1, 2, 3, and 4) will be due Tuesday, November 29th at 7PM. (Again, see the note above on submitting with a partner.)
On to the project!
Run the cell below to prepare the automatic tests. Passing the automatic tests does not guarantee full credit on any question. The tests are provided to help catch some common errors, but it is your responsibility to answer the questions correctly.
End of explanation
"""
# Just run this cell.
lyrics = Table.read_table('lyrics.csv')
# The first 5 rows and 8 columns of the table:
lyrics.where("Title", are.equal_to("In Your Eyes"))\
.select("Title", "Artist", "Genre", "i", "the", "like", "love")\
.show()
"""
Explanation: 1. The Dataset
Our dataset is a table of songs, each with a name, an artist, and a genre. We'll be trying to predict each song's genre.
To predict a song's genre, we have some attributes: the lyrics of the song, in a certain format. We have a list of approximately 5,000 words that might occur in a song. For each song, our dataset tells us how frequently each of these words occurs in that song.
Run the cell below to read the lyrics table. It may take up to a minute to load.
End of explanation
"""
title_index = lyrics.index_by('Title')
def row_for_title(title):
return title_index.get(title)[0]
"""
Explanation: That cell prints a few columns of the row for the song "In Your Eyes". The song contains 168 words. The word "like" appears twice: $\frac{2}{168} \approx 0.0119$ of the words in the song. Similarly, the word "love" appears 10 times: $\frac{10}{168} \approx 0.0595$ of the words.
Our dataset doesn't contain all information about a song. For example, it doesn't include the total number of words in each song, or information about the order of words in the song, let alone the melody, instruments, or rhythm. Nonetheless, you may find that word counts alone are sufficient to build an accurate genre classifier.
All titles are unique. The row_for_title function provides fast access to the one row for each title.
End of explanation
"""
# Set row_sum to a number that's the (approximate) sum of each row of word proportions.
expected_row_sum = ...
"""
Explanation: <div class="hide">\pagebreak</div>
Question 1.1
Set expected_row_sum to the number that you expect will result from summing all proportions in each row, excluding the first three columns.
End of explanation
"""
# Run this cell to display a histogram of the sums of proportions in each row.
# This computation might take up to a minute; you can skip it if it's too slow.
Table().with_column('sums', lyrics.drop([0, 1, 2]).apply(sum)).hist(0)
"""
Explanation: <div class="hide">\pagebreak</div>
You can draw the histogram below to check that the actual row sums are close to what you expect.
End of explanation
"""
# Just run this cell.
vocab_mapping = Table.read_table('mxm_reverse_mapping_safe.csv')
stemmed = np.take(lyrics.labels, np.arange(3, len(lyrics.labels)))
vocab_table = Table().with_column('Stem', stemmed).join('Stem', vocab_mapping)
vocab_table.take(np.arange(900, 910))
"""
Explanation: This dataset was extracted from the Million Song Dataset (http://labrosa.ee.columbia.edu/millionsong/). Specifically, we are using the complementary datasets from musiXmatch (http://labrosa.ee.columbia.edu/millionsong/musixmatch) and Last.fm (http://labrosa.ee.columbia.edu/millionsong/lastfm).
The counts of common words in the lyrics for all of these songs are provided by the musiXmatch dataset (called a bag-of-words format). Only the top 5000 most common words are represented. For each song, we divided the number of occurrences of each word by the total number of word occurrences in the lyrics of that song.
The Last.fm dataset contains multiple tags for each song in the Million Song Dataset. Some of the tags are genre-related, such as "pop", "rock", "classic", etc. To obtain our dataset, we first extracted songs with Last.fm tags that included the words "country", or "hip" and "hop". These songs were then cross-referenced with the musiXmatch dataset, and only songs with musiXmatch lyrics were placed into our dataset. Finally, inappropriate words and songs with naughty titles were removed, leaving us with 4976 words in the vocabulary and 1726 songs.
1.1. Word Stemming
The columns other than Title, Artist, and Genre in the lyrics table are all words that appear in some of the songs in our dataset. Some of those names have been stemmed, or abbreviated heuristically, in an attempt to make different inflected forms of the same base word into the same string. For example, the column "manag" is the sum of proportions of the words "manage", "manager", "managed", and "managerial" (and perhaps others) in each song.
Stemming makes it a little tricky to search for the words you want to use, so we have provided another table that will let you see examples of unstemmed versions of each stemmed word. Run the code below to load it.
End of explanation
"""
# The staff solution took 3 lines.
unchanged = ...
print(str(round(unchanged)) + '%')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 1.1.1
Assign unchanged to the percentage of words in vocab_table that are the same as their stemmed form (such as "coup" above).
Hint: Try to use where. Start by computing an array of boolean values, one for each row in vocab_table, indicating whether the word in that row is equal to its stemmed form.
End of explanation
"""
# Set stemmed_message to the stemmed version of "message" (which
# should be a string). Use vocab_table.
stemmed_message = ...
stemmed_message
_ = tests.grade('q1_1_1_2')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 1.1.2
Assign stemmed_message to the stemmed version of the word "message".
End of explanation
"""
# Set unstemmed_singl to the unstemmed version of "singl" (which
# should be a string).
unstemmed_singl = ...
unstemmed_singl
_ = tests.grade('q1_1_1_3')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 1.1.3
Assign unstemmed_singl to the word in vocab_table that has "singl" as its stemmed form. (Note that multiple English words may stem to "singl", but only one example appears in vocab_table.)
End of explanation
"""
# In our solution, we found it useful to first make an array
# called shortened containing the number of characters that was
# chopped off of each word in vocab_table, but you don't have
# to do that.
shortened = ...
most_shortened = ...
# This will display your answer and its shortened form.
vocab_table.where('Word', most_shortened)
_ = tests.grade('q1_1_1_4')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 1.1.4
What word in vocab_table was shortened the most by this stemming process? Assign most_shortened to the word. It's an example of how heuristic stemming can collapse two unrelated words into the same stem (which is bad, but happens a lot in practice anyway).
End of explanation
"""
# Here we have defined the proportion of our data
# that we want to designate for training as 11/16ths
# of our total dataset. 5/16ths of the data is
# reserved for testing.
training_proportion = 11/16
num_songs = lyrics.num_rows
num_train = int(num_songs * training_proportion)
num_valid = num_songs - num_train
train_lyrics = lyrics.take(np.arange(num_train))
test_lyrics = lyrics.take(np.arange(num_train, num_songs))
print("Training: ", train_lyrics.num_rows, ";",
"Test: ", test_lyrics.num_rows)
"""
Explanation: 1.2. Splitting the dataset
We're going to use our lyrics dataset for two purposes.
First, we want to train song genre classifiers.
Second, we want to test the performance of our classifiers.
Hence, we need two different datasets: training and test.
The purpose of a classifier is to classify unseen data that is similar to the training data. Therefore, we must ensure that there are no songs that appear in both sets. We do so by splitting the dataset randomly. The dataset has already been permuted randomly, so it's easy to split. We just take the top for training and the rest for test.
Run the code below (without changing it) to separate the datasets into two tables.
End of explanation
"""
def country_proportion(table):
"""Return the proportion of songs in a table that have the Country genre."""
return ...
# The staff solution took 4 lines. Start by creating a table.
...
"""
Explanation: <div class="hide">\pagebreak</div>
Question 1.2.1
Draw a horizontal bar chart with two bars that show the proportion of Country songs in each dataset. Complete the function country_proportion first; it should help you create the bar chart.
End of explanation
"""
# Just run this cell to define genre_colors.
def genre_color(genre):
"""Assign a color to each genre."""
if genre == 'Country':
return 'gold'
elif genre == 'Hip-hop':
return 'blue'
else:
return 'green'
"""
Explanation: 2. K-Nearest Neighbors - a Guided Example
K-Nearest Neighbors (k-NN) is a classification algorithm. Given some attributes (also called features) of an unseen example, it decides whether that example belongs to one or the other of two categories based on its similarity to previously seen examples.
A feature we have about each song is the proportion of times a particular word appears in the lyrics, and the categories are two music genres: hip-hop and country. The algorithm requires many previously seen examples for which both the features and categories are known: that's the train_lyrics table.
We're going to visualize the algorithm, instead of just describing it. To get started, let's pick colors for the genres.
End of explanation
"""
# Just run this cell.
def plot_with_two_features(test_song, training_songs, x_feature, y_feature):
"""Plot a test song and training songs using two features."""
test_row = row_for_title(test_song)
distances = Table().with_columns(
x_feature, make_array(test_row.item(x_feature)),
y_feature, make_array(test_row.item(y_feature)),
'Color', make_array(genre_color('Unknown')),
'Title', make_array(test_song)
)
for song in training_songs:
row = row_for_title(song)
color = genre_color(row.item('Genre'))
distances.append([row.item(x_feature), row.item(y_feature), color, song])
distances.scatter(x_feature, y_feature, colors='Color', labels='Title', s=200)
training = make_array("Sangria Wine", "Insane In The Brain")
plot_with_two_features("In Your Eyes", training, "like", "love")
"""
Explanation: 2.1. Classifying a song
In k-NN, we classify a song by finding the k songs in the training set that are most similar according to the features we choose. We call those songs with similar features the "neighbors". The k-NN algorithm assigns the song to the most common category among its k neighbors.
Let's limit ourselves to just 2 features for now, so we can plot each song. The features we will use are the proportions of the words "like" and "love" in the lyrics. Taking the song "In Your Eyes" (in the test set), 0.0119 of its words are "like" and 0.0595 are "love". This song appears in the test set, so let's imagine that we don't yet know its genre.
First, we need to make our notion of similarity more precise. We will say that the dissimilarity, or distance between two songs is the straight-line distance between them when we plot their features in a scatter diagram. This distance is called the Euclidean ("yoo-KLID-ee-un") distance.
For example, in the song Insane in the Brain (in the training set), 0.0203 of all the words in the song are "like" and 0 are "love". Its distance from In Your Eyes on this 2-word feature set is $\sqrt{(0.0119 - 0.0203)^2 + (0.0595 - 0)^2} \approx 0.06$. (If we included more or different features, the distance could be different.)
A third song, Sangria Wine (in the training set), is 0.0044 "like" and 0.0925 "love".
The function below creates a plot to display the "like" and "love" features of a test song and some training songs. As you can see in the result, In Your Eyes is more similar to Sangria Wine than to Insane in the Brain.
End of explanation
"""
in_your_eyes = row_for_title("In Your Eyes")
sangria_wine = row_for_title("Sangria Wine")
country_distance = ...
country_distance
_ = tests.grade('q1_2_1_1')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 2.1.1
Compute the distance between the two country songs, In Your Eyes and Sangria Wine, using the like and love features only. Assign it the name country_distance.
Note: If you have a row object, you can use item to get an element from a column by its name. For example, if r is a row, then r.item("foo") is the element in column "foo" in row r.
End of explanation
"""
training = make_array("Sangria Wine", "Lookin' for Love", "Insane In The Brain")
plot_with_two_features("In Your Eyes", training, "like", "love")
"""
Explanation: The plot_with_two_features function can show the positions of several training songs. Below, we've added one that's even closer to In Your Eyes.
End of explanation
"""
def distance_two_features(title0, title1, x_feature, y_feature):
"""Compute the distance between two songs, represented as rows.
Only the features named x_feature and y_feature are used when computing the distance."""
row0 = ...
row1 = ...
...
for song in make_array("Lookin' for Love", "Insane In The Brain"):
song_distance = distance_two_features(song, "In Your Eyes", "like", "love")
print(song, 'distance:\t', song_distance)
_ = tests.grade('q1_2_1_2')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 2.1.2
Complete the function distance_two_features that computes the Euclidean distance between any two songs, using two features. The last two lines call your function to show that Lookin' for Love is closer to In Your Eyes than Insane In The Brain.
End of explanation
"""
def distance_from_in_your_eyes(title):
"""The distance between the given song and "In Your Eyes", based on the features "like" and "love".
This function takes a single argument:
* title: A string, the name of a song.
"""
...
"""
Explanation: <div class="hide">\pagebreak</div>
Question 2.1.3
Define the function distance_from_in_your_eyes so that it works as described in its documentation.
End of explanation
"""
# The staff solution took 4 lines.
close_songs = ...
close_songs
_ = tests.grade('q1_2_1_4')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 2.1.4
Using the features "like" and "love", what are the names and genres of the 7 songs in the training set closest to "In Your Eyes"? To answer this question, make a table named close_songs containing those 7 songs with columns "Title", "Artist", "Genre", "like", and "love", as well as a column called "distance" that contains the distance from "In Your Eyes". The table should be sorted in ascending order by distance.
End of explanation
"""
def most_common(column_label, table):
"""The most common element in a column of a table.
This function takes two arguments:
* column_label: The name of a column, a string.
* table: A table.
It returns the most common value in that column of that table.
In case of a tie, it returns one of the most common values, selected
arbitrarily."""
...
# Calling most_common on your table of 7 nearest neighbors classifies
# "In Your Eyes" as a country song, 4 votes to 3.
most_common('Genre', close_songs)
_ = tests.grade('q1_2_1_5')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 2.1.5
Define the function most_common so that it works as described in its documentation.
End of explanation
"""
def distance(features1, features2):
"""The Euclidean distance between two arrays of feature values."""
...
distance_first_to_first = ...
distance_first_to_first
_ = tests.grade('q1_3_1')
"""
Explanation: Congratulations are in order -- you've classified your first song!
3. Features
Now, we're going to extend our classifier to consider more than two features at a time.
Euclidean distance still makes sense with more than two features. For n different features, we compute the difference between corresponding feature values for two songs, square each of the n differences, sum up the resulting numbers, and take the square root of the sum.
<div class="hide">\pagebreak</div>
Question 3.1
Write a function to compute the Euclidean distance between two arrays of features of arbitrary (but equal) length. Use it to compute the distance between the first song in the training set and the first song in the test set, using all of the features. (Remember that the title, artist, and genre of the songs are not features.)
End of explanation
"""
# Set my_20_features to an array of 20 features (strings that are column labels)
my_20_features = ...
train_20 = train_lyrics.select(my_20_features)
test_20 = test_lyrics.select(my_20_features)
_ = tests.grade('q1_3_1_1')
"""
Explanation: 3.1. Creating your own feature set
Unfortunately, using all of the features has some downsides. One clear downside is computational -- computing Euclidean distances just takes a long time when we have lots of features. You might have noticed that in the last question!
So we're going to select just 20. We'd like to choose features that are very discriminative, that is, which lead us to correctly classify as much of the test set as possible. This process of choosing features that will make a classifier work well is sometimes called feature selection, or more broadly feature engineering.
<div class="hide">\pagebreak</div>
Question 3.1.1
Look through the list of features (the labels of the lyrics table after the first three). Choose 20 that you think will let you distinguish pretty well between country and hip-hop songs. You might want to come back to this question later to improve your list, once you've seen how to evaluate your classifier. The first time you do this question, spend some time looking through the features, but not more than 15 minutes.
End of explanation
"""
test_lyrics.take(0).select('Title', 'Artist', 'Genre')
test_20.take(0)
"""
Explanation: <div class="hide">\pagebreak</div>
Question 3.1.2
In a few sentences, describe how you selected your features.
Write your answer here, replacing this text.
Next, let's classify the first song from our test set using these features. You can examine the song by running the cells below. Do you think it will be classified correctly?
End of explanation
"""
# Just run this cell to define fast_distances.
def fast_distances(test_row, train_rows):
"""An array of the distances between test_row and each row in train_rows.
Takes 2 arguments:
* test_row: A row of a table containing features of one
test song (e.g., test_20.row(0)).
* train_rows: A table of features (for example, the whole
table train_20)."""
counts_matrix = np.asmatrix(train_rows.columns).transpose()
diff = np.tile(np.array(test_row), [counts_matrix.shape[0], 1]) - counts_matrix
distances = np.squeeze(np.asarray(np.sqrt(np.square(diff).sum(1))))
return distances
"""
Explanation: As before, we want to look for the songs in the training set that are most similar to our test song. We will calculate the Euclidean distances from the test song (using the 20 selected features) to all songs in the training set. You could do this with a for loop, but to make it computationally faster, we have provided a function, fast_distances, to do this for you. Read its documentation to make sure you understand what it does. (You don't need to read the code in its body unless you want to.)
End of explanation
"""
# The staff solution took 4 lines of code.
genre_and_distances = ...
genre_and_distances
_ = tests.grade('q1_3_1_3')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 3.1.3
Use the fast_distances function provided above to compute the distance from the first song in the test set to all the songs in the training set, using your set of 20 features. Make a new table called genre_and_distances with one row for each song in the training set and two columns:
* The "Genre" of the training song
* The "Distance" from the first song in the test set
Ensure that genre_and_distances is sorted in increasing order by distance to the first test song.
End of explanation
"""
# Set my_assigned_genre to the most common genre among these.
my_assigned_genre = ...
# Set my_assigned_genre_was_correct to True if my_assigned_genre
# matches the actual genre of the first song in the test set.
my_assigned_genre_was_correct = ...
print("The assigned genre, {}, was{}correct.".format(my_assigned_genre, " " if my_assigned_genre_was_correct else " not "))
_ = tests.grade('q1_3_1_4')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 3.1.4
Now compute the 5-nearest neighbors classification of the first song in the test set. That is, decide on its genre by finding the most common genre among its 5 nearest neighbors, according to the distances you've calculated. Then check whether your classifier chose the right genre. (Depending on the features you chose, your classifier might not get this song right, and that's okay.)
End of explanation
"""
def classify(test_row, train_rows, train_classes, k):
"""Return the most common class among k nearest neigbors to test_row."""
distances = ...
genre_and_distances = ...
...
_ = tests.grade('q1_3_2_1')
"""
Explanation: 3.2. A classifier function
Now we can write a single function that encapsulates the whole process of classification.
<div class="hide">\pagebreak</div>
Question 3.2.1
Write a function called classify. It should take the following four arguments:
* A row of features for a song to classify (e.g., test_20.row(0)).
* A table with a column for each feature (for example, train_20).
* An array of classes that has as many items as the previous table has rows, and in the same order.
* k, the number of neighbors to use in classification.
It should return the class a k-nearest neighbor classifier picks for the given row of features (the string 'Country' or the string 'Hip-hop').
End of explanation
"""
# The staff solution first defined a row object called grandpa_features.
grandpa_features = ...
grandpa_genre = ...
grandpa_genre
_ = tests.grade('q1_3_2_2')
"""
Explanation: <div class="hide">\pagebreak</div>
Question 3.2.2
Assign grandpa_genre to the genre predicted by your classifier for the song "Grandpa Got Runned Over By A John Deere" in the test set, using 9 neighbors and using your 20 features.
End of explanation
"""
def classify_one_argument(row):
...
# When you're done, this should produce 'Hip-hop' or 'Country'.
classify_one_argument(test_20.row(0))
_ = tests.grade('q1_3_2_3')
"""
Explanation: Finally, when we evaluate our classifier, it will be useful to have a classification function that is specialized to use a fixed training set and a fixed value of k.
<div class="hide">\pagebreak</div>
Question 3.2.3
Create a classification function that takes as its argument a row containing your 20 features and classifies that row using the 5-nearest neighbors algorithm with train_20 as its training set.
End of explanation
"""
test_guesses = ...
proportion_correct = ...
proportion_correct
_ = tests.grade('q1_3_3_1')
"""
Explanation: 3.3. Evaluating your classifier
Now that it's easy to use the classifier, let's see how accurate it is on the whole test set.
Question 3.3.1. Use classify_one_argument and apply to classify every song in the test set. Name these guesses test_guesses. Then, compute the proportion of correct classifications.
End of explanation
"""
# To start you off, here's a list of possibly-useful features:
staff_features = make_array("come", "do", "have", "heart", "make", "never", "now", "wanna", "with", "yo")
train_staff = train_lyrics.select(staff_features)
test_staff = test_lyrics.select(staff_features)
def another_classifier(row):
return ...
"""
Explanation: At this point, you've gone through one cycle of classifier design. Let's summarize the steps:
1. From available data, select test and training sets.
2. Choose an algorithm you're going to use for classification.
3. Identify some features.
4. Define a classifier function using your features and the training set.
5. Evaluate its performance (the proportion of correct classifications) on the test set.
4. Extra Explorations
Now that you know how to evaluate a classifier, it's time to build a better one.
<div class="hide">\pagebreak</div>
Question 4.1
Find a classifier with better test-set accuracy than classify_one_argument. (Your new function should have the same arguments as classify_one_argument and return a classification. Name it another_classifier.) You can use more or different features, or you can try different values of k. (Of course, you still have to use train_lyrics as your training set!)
End of explanation
"""
#####################
# Custom Classifier #
#####################
"""
Explanation: Ungraded and optional
Try to create an even better classifier. You're not restricted to using only word proportions as features. For example, given the data, you could compute various notions of vocabulary size or estimated song length. If you're feeling very adventurous, you could also try other classification methods, like logistic regression. If you think you built a classifier that works well, post on Piazza and let us know.
End of explanation
"""
|
tensorflow/model-remediation | docs/min_diff/guide/customizing_min_diff_model.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install --upgrade tensorflow-model-remediation
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # Avoid TF warnings.
from tensorflow_model_remediation import min_diff
from tensorflow_model_remediation.tools.tutorials_utils import uci as tutorials_utils
"""
Explanation: Customizing MinDiffModel
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/responsible_ai/model_remediation/min_diff/guide/customizing_min_diff_model">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-remediation/blob/master/docs/min_diff/guide/customizing_min_diff_model.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-remediation/blob/master/docs/min_diff/guide/customizing_min_diff_model.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
<td>
<a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/model-remediation/docs/min_diff/guide/customizing_min_diff_model.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table></div>
Introduction
In most cases, using MinDiffModel directly as described in the "Integrating MinDiff with MinDiffModel" guide is sufficient. However, it is possible that you will need customized behavior. The two primary reasons for this are:
The keras.Model you are using has custom behavior that you want to preserve.
You want the MinDiffModel to behave differently from the default.
In either case, you will need to subclass MinDiffModel to achieve the desired results.
Setup
End of explanation
"""
# Original Dataset for training, sampled at 0.3 for reduced runtimes.
train_df = tutorials_utils.get_uci_data(split='train', sample=0.3)
train_ds = tutorials_utils.df_to_dataset(train_df, batch_size=128)
# Dataset needed to train with MinDiff.
train_with_min_diff_ds = (
tutorials_utils.get_uci_with_min_diff_dataset(split='train', sample=0.3))
"""
Explanation: First, download the data. For succinctness, the input preparation logic has been factored out into helper functions as described in the input preparation guide. You can read the full guide for details on this process.
End of explanation
"""
class CustomModel(tf.keras.Model):
# Customized train_step
def train_step(self, *args, **kwargs):
self.used_custom_train_step = True # Marker that we can check for.
return super(CustomModel, self).train_step(*args, **kwargs)
"""
Explanation: Preserving Original Model Customizations
tf.keras.Model is designed to be easily customized via subclassing as described here. If your model has customized implementations that you wish to preserve when applying MinDiff, you will need to subclass MinDiffModel.
Original Custom Model
To see how you can preserve customizations, create a custom model that sets an attribute to True when its custom train_step is called. This is not a useful customization but will serve to illustrate behavior.
End of explanation
"""
model = tutorials_utils.get_uci_model(model_class=CustomModel) # Use CustomModel.
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(train_ds.take(1), epochs=1, verbose=0)
# Model has used the custom train_step.
print('Model used the custom train_step:')
print(hasattr(model, 'used_custom_train_step')) # True
"""
Explanation: Training such a model would look the same as a normal Sequential model.
End of explanation
"""
model = tutorials_utils.get_uci_model(model_class=CustomModel)
model = min_diff.keras.MinDiffModel(model, min_diff.losses.MMDLoss())
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(train_with_min_diff_ds.take(1), epochs=1, verbose=0)
# Model has not used the custom train_step.
print('Model used the custom train_step:')
print(hasattr(model, 'used_custom_train_step')) # False
"""
Explanation: Subclassing MinDiffModel
If you were to try and use MinDiffModel directly, the model would not use the custom train_step.
End of explanation
"""
class CustomMinDiffModel(min_diff.keras.MinDiffModel, CustomModel):
pass # No need for any further implementation.
"""
Explanation: In order to use the correct train_step method, you need a custom class that subclasses both MinDiffModel and CustomModel.
Note: Make sure to inherit from MinDiffModel first. This is important since you need to make sure that certain functions such as __init__ and call still come from MinDiffModel.
End of explanation
"""
model = tutorials_utils.get_uci_model(model_class=CustomModel)
model = CustomMinDiffModel(model, min_diff.losses.MMDLoss())
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(train_with_min_diff_ds.take(1), epochs=1, verbose=0)
# Model has used the custom train_step.
print('Model used the custom train_step:')
print(hasattr(model, 'used_custom_train_step')) # True
"""
Explanation: Training this model will use the train_step from CustomModel.
End of explanation
"""
def _reformat_input(inputs, original_labels):
min_diff_data = min_diff.keras.utils.unpack_min_diff_data(inputs)
original_inputs = min_diff.keras.utils.unpack_original_inputs(inputs)
return ({
'min_diff_data': min_diff_data,
'original_inputs': original_inputs}, original_labels)
customized_train_with_min_diff_ds = train_with_min_diff_ds.map(_reformat_input)
"""
Explanation: Customizing default behaviors of MinDiffModel
In other cases, you may want to change specific default behaviors of MinDiffModel. The most common use case of this is changing the default unpacking behavior to properly handle your data if you don't use pack_min_diff_data.
When packing the data into a custom format, this might appear as follows.
End of explanation
"""
for x, _ in customized_train_with_min_diff_ds.take(1):
print('Type of x:', type(x)) # dict
print('Keys of x:', x.keys()) # 'min_diff_data', 'original_inputs'
"""
Explanation: The customized_train_with_min_diff_ds dataset returns batches composed of tuples (x, y) where x is a dict containing min_diff_data and original_inputs and y is the original_labels.
End of explanation
"""
class CustomUnpackingMinDiffModel(min_diff.keras.MinDiffModel):
def unpack_min_diff_data(self, inputs):
return inputs['min_diff_data']
def unpack_original_inputs(self, inputs):
return inputs['original_inputs']
"""
Explanation: This data format is not what MinDiffModel expects by default and passing customized_train_with_min_diff_ds to it would result in unexpected behavior. To fix this you will need to create your own subclass.
End of explanation
"""
model = tutorials_utils.get_uci_model()
model = CustomUnpackingMinDiffModel(model, min_diff.losses.MMDLoss())
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(customized_train_with_min_diff_ds, epochs=1)
"""
Explanation: With this subclass, you can train as with the other examples.
End of explanation
"""
|
CentroGeo/Analisis-Espacial | muap/MUAP II.ipynb | gpl-2.0 | from geopandas import GeoDataFrame
datos = GeoDataFrame.from_file('data/distritos_variables.shp')
datos.head()
"""
Explanation: Part II: the aggregation effect
In the first part of this lab we saw how the scale of analysis influences its results. In this second part we will see how different ways of aggregating the data, at the same scale, can also influence the results of an analysis.
For this second part we will again work with the land-use mix data for Mexico City and will try to estimate its effect on trip generation. The land-use data are computed from the DENUE, while the trip information comes from the 2007 Origin-Destination Survey, so the scale of analysis will be that survey's traffic districts.
As always, the first step is to read the data:
End of explanation
"""
import numpy as np
intensidad = datos['comercio'] + datos['viv'] + datos['ocio'] + datos['servicios']
prop_comercio = datos['comercio'] / intensidad
prop_viv = datos['viv'] / intensidad
prop_ocio = datos['ocio'] / intensidad
prop_servicios = datos['servicios'] / intensidad
entropia = (prop_comercio*np.log(prop_comercio) + prop_viv*np.log(prop_viv) + prop_ocio*np.log(prop_ocio)
+ prop_servicios*np.log(prop_servicios))/np.log(4)
entropia.head()
"""
Explanation: The shapefile we read has columns for each of the land-use types of interest (housing, commerce, services, and leisure); it additionally has three columns with trip information:
entrada: trips that end in the district and started in a different one
salida: trips that start in the district and end in a different one
loop: trips that start and end in the same district.
As you may recall from the previous exercise in which we used these data, land-use mix is measured with the entropy index; this time we will compute it according to the following formula:
\begin{equation}
E = \sum\limits_{j}{\frac{p_{j}\ln(p_{j})}{\ln(J)}}
\end{equation}
where $p_{j}$ is the proportion of the $j$-th land use with respect to the total and $J$ is the number of land uses.
End of explanation
"""
datos['entropia'] = entropia
datos.head()
"""
Explanation: What we did to compute the entropy is relatively easy. First we created the variable intensidad, which is the sum of all land uses; then we computed each of the proportions; finally we summed them and divided by the natural logarithm of the number of land uses. What we have now is a Series containing the entropy values, so what we need next is to join that series to our original data:
End of explanation
"""
corr = datos[['ocio','comercio','servicios','viv','entropia']].corr()
w, v = np.linalg.eig(corr)
print(w)
"""
Explanation: Now that we have all the data, let's try a model that explains, for example, the number of trips ending in each district. The first thing we will do is explore the collinearity of the variables we have:
End of explanation
"""
print(corr)
"""
Explanation: It seems that, if we use the variables above, we are going to have collinearity problems, so to select the variables let's look at the correlation matrix:
End of explanation
"""
variables = datos[['entrada','viv','entropia']]
vars_norm = (variables - variables.mean())/variables.std()
print(vars_norm.mean())
print(vars_norm.std())
"""
Explanation: As you can see, entropy is fairly correlated with the other variables, especially with commerce and leisure, so for now let's select entropy and housing as explanatory variables.
Now, before running our model, we will normalize the variables, because they vary on very different scales and we already know that can cause problems:
End of explanation
"""
import statsmodels.formula.api as sm
model = sm.ols(formula="entrada ~ viv + entropia",data=vars_norm).fit()
print(model.summary())
"""
Explanation: Now that we have selected and normalized the variables, let's run a first model:
End of explanation
"""
import clusterpy
datos_cp = clusterpy.importArcData('data/distritos_variables')
datos_cp.cluster('random',[datos_cp.fieldNames[0]],50,dissolve=0)
"""
Explanation: Exercise:
Specify different models to try to explain the three trip variables.
Changing scale
Now we will change the scale of the analysis by aggregating the districts into larger units. To do this, we will first create random regionalizations from the data:
End of explanation
"""
datos_cp.fieldNames
"""
Explanation: Remember that to see the resulting regions you can export the results as a shapefile using datos_cp.exportArcData('path/to/save'). For now we will skip that step and instead join the cluster results to the original data so we can perform some operations.
The first step is to find out in which column the algorithm stored the regionalization results; for this, let's see what columns our object has:
End of explanation
"""
resultado = datos_cp.getVars(['cve_dist','random_20160406101421'])
print(resultado)
"""
Explanation: As you can see, the algorithm assigns a random name to the column where it stores the results (random_20160405130236, in this case).
Now what we will do is extract only the district identifier and the region identifier, which is what we need to merge with our original data:
End of explanation
"""
import pandas as pd
df = pd.DataFrame(resultado)
df.head()
"""
Explanation: The result is a dictionary, that is, a set of key : value pairs; in this case the keys are the polygon ids and the values are the district code and the region identifier. Since we want to join these results with our original data, we need to convert them into a DataFrame; fortunately, this is relatively easy:
End of explanation
"""
df = df.transpose()
df.head()
"""
Explanation: We are almost there; the only problem is that the data are "upside down", that is, rows and columns are swapped. What we need is to transpose them:
End of explanation
"""
df.columns=['cve_dist','id_region']
df.head()
"""
Explanation: Now let's name the columns: the first is the district identifier and the second the region identifier:
End of explanation
"""
region_1 = datos.merge(df,how='inner',on='cve_dist')
region_1.head()
"""
Explanation: Now we can merge with the original data:
End of explanation
"""
agregados = region_1.groupby(by='id_region').sum()
agregados.head()
"""
Explanation: We now have almost everything we need to run a new model, this time on the variables aggregated into regions; the only thing left is, precisely, to compute the aggregated variables. For this, we will do a "group by":
End of explanation
"""
intensidad = agregados['comercio'] + agregados['viv'] + agregados['ocio']
prop_comercio = agregados['comercio'] / intensidad
prop_viv = agregados['viv'] / intensidad
prop_ocio = agregados['ocio'] / intensidad
entropia = (prop_comercio*np.log(prop_comercio) + prop_viv*np.log(prop_viv) + prop_ocio*np.log(prop_ocio))/np.log(3)
agregados.loc[:, 'entropia'] = entropia
agregados.head()
"""
Explanation: Here we simply grouped the data by region id and computed the sum of the variables over each group.
There is just one problem: the entropy ended up as the sum of the individual entropies, and that does not work for us, so we need to recompute it:
End of explanation
"""
variables = agregados[['entrada','viv','entropia']]
vars_norm = (variables - variables.mean())/variables.std()
model = sm.ols(formula="entrada ~ viv + entropia",data=vars_norm).fit()
print(model.summary())
"""
Explanation: Now we can run the same model as before, but on our aggregated data (remember to normalize it first):
End of explanation
"""
def regionaliza_agrega(dataframe,shp_path='data/distritos_variables',n_regiones=50):
datos = clusterpy.importArcData(shp_path)
    datos.cluster('random',[datos.fieldNames[0]],n_regiones,dissolve=0)
resultado = datos.getVars(['cve_dist',datos.fieldNames[-1]])
df = pd.DataFrame(resultado)
df = df.transpose()
df.columns=['cve_dist','id_region']
region = dataframe.merge(df,how='inner',on='cve_dist')
agregados = region.groupby(by='id_region').sum()
intensidad = agregados['comercio'] + agregados['viv'] + agregados['ocio']
prop_comercio = agregados['comercio'] / intensidad
prop_viv = agregados['viv'] / intensidad
prop_ocio = agregados['ocio'] / intensidad
entropia = (prop_comercio*np.log(prop_comercio) + prop_viv*np.log(prop_viv) + prop_ocio*np.log(prop_ocio))/np.log(3)
    agregados.loc[:, 'entropia'] = entropia
return agregados
"""
Explanation: Now the goal is to compare results using different random regionalizations of the same data. The process of turning the data into regions is a bit long, but don't worry: to ease that work I wrote a function that encapsulates the whole process:
End of explanation
"""
agregados = regionaliza_agrega(datos)
variables = agregados[['entrada','viv','entropia']]
vars_norm = (variables - variables.mean())/variables.std()
model = sm.ols(formula="entrada ~ viv + entropia",data=vars_norm).fit()
print(model.summary())
"""
Explanation: The function returns the data aggregated into random regions; all we need to do is select the variables we are going to use, normalize them, and run a model:
End of explanation
"""
datos_nuevos = GeoDataFrame.from_file('data/variables_regiones.shp')
datos_nuevos.head()
"""
Explanation: As you can see, every time you run the previous cell you will get a slightly different result; that is, each model yields different results at the same scale. In this case we can consider it not too serious: we know the aggregations are random, and we have some certainty that the observed deviations in the results are random. But what happens when we have no control over the aggregations?
Before examining that question, there is an...
Exercise
Find a model that explains the incoming trips reasonably well at the aggregated scale (for this, use a single random regionalization; that is, do not use the function just explained). Starting from that model, now use the regionaliza_agrega function to run different experiments and record the most important parameters: the $R^2$ and the variable coefficients. With the results of several repetitions, try to show that the differences are random (that is, that they follow a normal distribution).
Epilogue: what happens when we have no control over the aggregations?
Now we will work with three different aggregations of the data, at the same scale as the ones we built, but over which we have no control. They are simply the units we are given (more or less like AGEBs, for example).
The first thing we will do is read the new data:
End of explanation
"""
r_1 = datos_nuevos.groupby(by='region_1').sum()
intensidad = r_1['comercio'] + r_1['viv'] + r_1['ocio']
prop_comercio = r_1['comercio'] / intensidad
prop_viv = r_1['viv'] / intensidad
prop_ocio = r_1['ocio'] / intensidad
entropia = (prop_comercio*np.log(prop_comercio) + prop_viv*np.log(prop_viv) + prop_ocio*np.log(prop_ocio))/np.log(3)
r_1.loc[:, 'entropia'] = entropia
variables = r_1[['entrada','viv','entropia']]
vars_norm = (variables - variables.mean())/variables.std()
model_1 = sm.ols(formula="entrada ~ viv + entropia",data=vars_norm).fit()
print(model_1.summary())
"""
Explanation: As you can see, they are exactly the same as the originals, except that they have three extra columns: region_1, region_2, and region_3. These are, precisely, the new aggregations of the data; everything else is exactly the same as before.
So, let's build three aggregations, one for each regionalization, and fit our linear model to see what happens. Let's start with the first regionalization:
End of explanation
"""
r_2 = datos_nuevos.groupby(by='region_2').sum()
intensidad = r_2['comercio'] + r_2['viv'] + r_2['ocio']
prop_comercio = r_2['comercio'] / intensidad
prop_viv = r_2['viv'] / intensidad
prop_ocio = r_2['ocio'] / intensidad
entropia = (prop_comercio*np.log(prop_comercio) + prop_viv*np.log(prop_viv) + prop_ocio*np.log(prop_ocio))/np.log(3)
r_2.loc[:, 'entropia'] = entropia
variables = r_2[['entrada','viv','entropia']]
vars_norm = (variables - variables.mean())/variables.std()
model_2 = sm.ols(formula="entrada ~ viv + entropia",data=vars_norm).fit()
print(model_2.summary())
"""
Explanation: Now with the second one:
End of explanation
"""
|
metpy/MetPy | v1.0/_downloads/cdca3e0cb8a2930cccab0e29b97ef52a/upperair_soundings.ipynb | bsd-3-clause | import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import Hodograph, SkewT
from metpy.units import units
"""
Explanation: Upper Air Sounding Tutorial
Upper air analysis is a staple of many synoptic and mesoscale analysis
problems. In this tutorial we will gather weather balloon data, plot it,
perform a series of thermodynamic calculations, and summarize the results.
To learn more about the Skew-T diagram and its use in weather analysis and
forecasting, check out this Air Weather Service guide:
http://www.pmarshwx.com/research/manuals/AF_skewt_manual.pdf
End of explanation
"""
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('nov11_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'
), how='all').reset_index(drop=True)
# We will pull the data out of the example dataset into individual variables and
# assign units.
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
"""
Explanation: Getting Data
Upper air data can be obtained using the siphon package, but for this tutorial we will use
some of MetPy's sample data. This event is the Veterans Day tornado outbreak in 2002.
End of explanation
"""
# Calculate the LCL
lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])
print(lcl_pressure, lcl_temperature)
# Calculate the parcel profile.
parcel_prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
"""
Explanation: Thermodynamic Calculations
Often we will want to calculate some thermodynamic parameters of a
sounding. The MetPy calc module has many such calculations already implemented!
Lifting Condensation Level (LCL) - The level at which an air parcel's
relative humidity becomes 100% when lifted along a dry adiabatic path.
Parcel Path - Path followed by a hypothetical parcel of air, beginning
at the surface temperature/pressure and rising dry adiabatically until
reaching the LCL, then rising moist adiabatically.
End of explanation
"""
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r', linewidth=2)
skew.plot(p, Td, 'g', linewidth=2)
skew.plot_barbs(p, u, v)
# Show the plot
plt.show()
"""
Explanation: Basic Skew-T Plotting
The Skew-T (log-P) diagram is the standard way to view rawinsonde data. The
y-axis is height in pressure coordinates and the x-axis is temperature. The
y coordinates are plotted on a logarithmic scale and the x coordinate system
is skewed. An explanation of skew-T interpretation is beyond the scope of this
tutorial, but here we will plot one that can be used for analysis or
publication.
The most basic skew-T can be plotted with only five lines of Python.
These lines perform the following tasks:
Create a Figure object and set the size of the figure.
Create a SkewT object
Plot the pressure and temperature (note that the pressure,
the independent variable, is first even though it is plotted on the y-axis).
Plot the pressure and dewpoint temperature.
Plot the wind barbs at the appropriate pressure using the u and v wind
components.
End of explanation
"""
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=30)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Plot LCL temperature as black dot
skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')
# Plot the parcel profile as a black line
skew.plot(p, parcel_prof, 'k', linewidth=2)
# Shade areas of CAPE and CIN
skew.shade_cin(p, T, parcel_prof, Td)
skew.shade_cape(p, T, parcel_prof)
# Plot a zero degree isotherm
skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Show the plot
plt.show()
"""
Explanation: Advanced Skew-T Plotting
Fiducial lines indicating dry adiabats, moist adiabats, and mixing ratio are
useful when performing further analysis on the Skew-T diagram. Often the
0C isotherm is emphasized and areas of CAPE and CIN are shaded.
End of explanation
"""
# Create a new figure. The dimensions here give a good aspect ratio
fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig, rotation=30)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Plot LCL as black dot
skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')
# Plot the parcel profile as a black line
skew.plot(p, parcel_prof, 'k', linewidth=2)
# Shade areas of CAPE and CIN
skew.shade_cin(p, T, parcel_prof, Td)
skew.shade_cape(p, T, parcel_prof)
# Plot a zero degree isotherm
skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Create a hodograph
# Create an inset axes object that is 40% width and height of the
# figure and put it in the upper right hand corner.
ax_hod = inset_axes(skew.ax, '40%', '40%', loc=1)
h = Hodograph(ax_hod, component_range=80.)
h.add_grid(increment=20)
h.plot_colormapped(u, v, wind_speed) # Plot a line colored by wind speed
# Show the plot
plt.show()
"""
Explanation: Adding a Hodograph
A hodograph is a polar representation of the wind profile measured by the rawinsonde.
Winds at different levels are plotted as vectors with their tails at the origin, the angle
from the vertical axes representing the direction, and the length representing the speed.
The line plotted on the hodograph is a line connecting the tips of these vectors,
which are not drawn.
End of explanation
"""
|
kyleabeauchamp/mdtraj | examples/rmsd-benchmark.ipynb | lgpl-2.1 | t = md.Trajectory(xyz=np.random.randn(1000, 100, 3), topology=None)
print(t)
"""
Explanation: To benchmark the speed of the RMSD calculation, it's not really
necessary to use 'real' coordinates, so let's just generate
some random numbers from a normal distribution for the cartesian
coordinates.
End of explanation
"""
import time
start = time.time()
for i in range(100):
md.rmsd(t, t, 0)
print('md.rmsd(): %.2f rmsds / s' % ((t.n_frames * 100) / (time.time() - start)))
"""
Explanation: The Theobald QCP method requires centering the invidual conformations
the origin. That's done on the fly when we call md.rmsd().
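Conceptually, centering just subtracts each frame's geometric center; a minimal NumPy sketch of the idea (random stand-in coordinates, not the actual mdtraj implementation):

```python
import numpy as np

# Stand-in trajectory: 10 frames, 100 atoms, 3 Cartesian coordinates
xyz = np.random.randn(10, 100, 3)

# Subtract each frame's geometric center (mean over atoms)
centered = xyz - xyz.mean(axis=1, keepdims=True)

# Every frame now sits at the origin
print(np.allclose(centered.mean(axis=1), 0.0))  # True
```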
End of explanation
"""
t.center_coordinates()
start = time.time()
for i in range(100):
md.rmsd(t, t, 0, precentered=True)
print('md.rmsd(precentered=True): %.2f rmsds / s' % ((t.n_frames * 100) / (time.time() - start)))
"""
Explanation: But for some applications like clustering, we want to run many
rmsd() calculations per trajectory frame. Under these circumstances,
the centering of the trajectories is going to be done many times, which
leads to a slight slowdown. If we manually center the trajectory
and then inform the rmsd() function that the trajectory has been
precentered, we can achieve a ~2x speedup, depending on your machine
and the number of atoms.
End of explanation
"""
from mdtraj.geometry.alignment import rmsd_qcp
start = time.time()
for k in range(t.n_frames):
rmsd_qcp(t.xyz[0], t.xyz[k])
print('pure numpy rmsd_qcp(): %.2f rmsds / s' % (t.n_frames / (time.time() - start)))
"""
Explanation: Just for fun, let's compare this code to the straightforward
numpy implementation of the same algorithm, which mdtraj has
(mostly for testing) in the mdtraj.geometry.alignment subpackage
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/guide/gpu.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
"""
Explanation: Use a GPU
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/gpu"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/gpu.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/gpu.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/gpu.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
TensorFlow code, and tf.keras models, will transparently run on a single GPU with no code changes required.
Note: Use tf.config.list_physical_devices('GPU') to confirm that TensorFlow is using the GPU.
The simplest way to run on multiple GPUs, on one or many machines, is using distribution strategies.
This guide is for users who have tried these approaches and found that they need fine-grained control over how TensorFlow uses the GPU. To learn how to debug performance issues for single- and multi-GPU scenarios, see the Optimize TensorFlow GPU performance guide.
Setup
Ensure you have the latest TensorFlow GPU release installed.
End of explanation
"""
tf.debugging.set_log_device_placement(True)
# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
"""
Explanation: Overview
TensorFlow supports running computations on a variety of device types, including CPU and GPU. They are represented with string identifiers, for example:
"/device:CPU:0": The CPU of your machine.
"/GPU:0": Short-hand notation for the first GPU of your machine that is visible to TensorFlow.
"/job:localhost/replica:0/task:0/device:GPU:1": Fully qualified name of the second GPU of your machine that is visible to TensorFlow.
If a TensorFlow operation has both CPU and GPU implementations, the GPU device is prioritized by default when the operation is assigned to a device. For example, tf.matmul has both CPU and GPU kernels. On a system with devices CPU:0 and GPU:0, the GPU:0 device is selected to run tf.matmul unless you explicitly request running it on another device.
If a TensorFlow operation has no corresponding GPU implementation, then the operation falls back to the CPU device. For example, since tf.cast only has a CPU kernel, on a system with devices CPU:0 and GPU:0, the CPU:0 device is selected to run tf.cast, even if requested to run on the GPU:0 device.
Logging device placement
To find out which devices your operations and tensors are assigned to, put tf.debugging.set_log_device_placement(True) as the first statement of your program. Enabling device placement logging causes any tensor allocations or operations to be printed.
"""
tf.debugging.set_log_device_placement(True)
# Place tensors on the CPU
with tf.device('/CPU:0'):
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
"""
Explanation: The code above will print an indication that the MatMul op was executed on GPU:0.
Manual device placement
If you would like a particular operation to run on a device of your choice instead of the one automatically selected for you, you can use with tf.device to create a device context; all the operations within that context will run on the same designated device.
End of explanation
"""
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only use the first GPU
try:
tf.config.set_visible_devices(gpus[0], 'GPU')
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
except RuntimeError as e:
# Visible devices must be set before GPUs have been initialized
print(e)
"""
Explanation: You will see that a and b are now assigned to CPU:0. Since a device was not explicitly specified for the MatMul operation, the TensorFlow runtime will choose one based on the operation and the available devices (GPU:0 in this example) and automatically copy tensors between devices if required.
Limiting GPU memory growth
By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to CUDA_VISIBLE_DEVICES) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. To limit TensorFlow to a specific set of GPUs, use the tf.config.set_visible_devices method.
End of explanation
"""
gpus = tf.config.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
"""
Explanation: In some cases it is desirable for the process to allocate only a subset of the available memory, or to grow the memory usage only as needed by the process. TensorFlow provides two methods to control this.
The first option is to turn on memory growth by calling tf.config.experimental.set_memory_growth, which attempts to allocate only as much GPU memory as needed for the runtime allocations: it starts out allocating very little memory, and as the program runs and more GPU memory is needed, the GPU memory region allocated to the TensorFlow process is extended. Memory is not released, since that could lead to memory fragmentation. To turn on memory growth for a specific GPU, use the following code before allocating any tensors or executing any ops.
End of explanation
"""
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only allocate 1GB of memory on the first GPU
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
"""
Explanation: Another way to enable this option is to set the environment variable TF_FORCE_GPU_ALLOW_GROWTH to true. This configuration is platform specific.
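For instance, the variable can be set from Python before TensorFlow initializes the GPUs (a sketch; it must run before `import tensorflow` in a fresh process):

```python
import os

# Must be set before TensorFlow initializes the GPUs,
# i.e. before `import tensorflow` runs in this process.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"
```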
The second method is to configure a virtual GPU device with tf.config.set_logical_device_configuration and set a hard limit on the total memory to allocate on the GPU.
End of explanation
"""
tf.debugging.set_log_device_placement(True)
try:
# Specify an invalid GPU device
with tf.device('/device:GPU:2'):
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
except RuntimeError as e:
print(e)
"""
Explanation: This is useful if you want to truly bound the amount of GPU memory available to the TensorFlow process. This is common practice for local development when the GPU is shared with other applications, such as a workstation GUI.
Using a single GPU on a multi-GPU system
If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default. To run on a different GPU, you will need to specify the preference explicitly.
End of explanation
"""
tf.config.set_soft_device_placement(True)
tf.debugging.set_log_device_placement(True)
# Creates some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
"""
Explanation: If the device you have specified does not exist, you will get RuntimeError: .../device:GPU:2 unknown device.
If you would like TensorFlow to automatically choose an existing and supported device to run the operations in case the specified one doesn't exist, you can call tf.config.set_soft_device_placement(True).
End of explanation
"""
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Create 2 virtual GPUs with 1GB memory each
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=1024),
tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
"""
Explanation: Using multiple GPUs
Developing for multiple GPUs allows a model to scale with the additional resources. If developing on a system with a single GPU, you can simulate multiple GPUs with virtual devices. This enables easy testing of multi-GPU setups without requiring additional resources.
End of explanation
"""
tf.debugging.set_log_device_placement(True)
gpus = tf.config.list_logical_devices('GPU')
strategy = tf.distribute.MirroredStrategy(gpus)
with strategy.scope():
inputs = tf.keras.layers.Input(shape=(1,))
predictions = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
model.compile(loss='mse',
optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
"""
Explanation: Once there are multiple logical GPUs available to the runtime, you can utilize the multiple GPUs with tf.distribute.Strategy or with manual placement.
With tf.distribute.Strategy
The best practice for using multiple GPUs is to use tf.distribute.Strategy. Here is a simple example:
End of explanation
"""
tf.debugging.set_log_device_placement(True)
gpus = tf.config.list_logical_devices('GPU')
if gpus:
# Replicate your computation on multiple GPUs
c = []
for gpu in gpus:
with tf.device(gpu.name):
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c.append(tf.matmul(a, b))
with tf.device('/CPU:0'):
matmul_sum = tf.add_n(c)
print(matmul_sum)
"""
Explanation: This program will run a copy of your model on each GPU, splitting the input data between them. This is also known as "data parallelism".
For more information about distribution strategies, check out the guide here.
Manual placement
tf.distribute.Strategy works under the hood by replicating computation across devices. You can manually implement replication by constructing your model on each GPU. For example:
End of explanation
"""
|
AbeHandler/minhash_tutorial | start.ipynb | mit | import itertools
import string
import functools
letters = string.ascii_lowercase
vocab = list(map(''.join, itertools.product(letters, repeat=2)))
from random import choices
def zipf_pdf(k):
return 1/k**1.07
def exponential_pdf(k, base):
return base**k
def new_document(n_words, pdf):
return set(
choices(
vocab,
weights=map(pdf, range(1, 1+len(vocab))),
k=n_words
)
)
def new_documents(n_documents, n_words, pdf):
return [new_document(n_words, pdf) for _ in range(n_documents)]
def jaccard(a, b):
return len(a & b) / len(a | b)
def all_pairs(documents):
return list(itertools.combinations(documents, 2))
def filter_similar(pairs, cutoff=0.9):
return list(filter(
lambda docs: jaccard(docs[0], docs[1]) > cutoff,
pairs
))
documents = new_documents(1000, 1000, functools.partial(exponential_pdf, base=1.1))
pairs = all_pairs(documents)
"""
Explanation: In data science, it's common to have lots of nearly duplicate data. For instance, you'll find lots of nearly duplicate web pages in crawling the internet; you'll find lots of pictures of the same cat or dog overlaid with slightly different text posted on Twitter as memes.
Near duplicate data just means data that is almost the same. The sentences "Congress returned from recess" and "Congress returned from recess last week" are near duplicates. They are not exactly the same, but they're similar.
Sometimes its helpful to find all such duplicates: either to remove them from the dataset or analyze the dupes themselves. For instance, you might want to find all the groups of memes on Twitter, or delete online comments from bots.
In order to find duplicates, you need a formal way to represent similiarity so that a computer can understand. One commonly used metric is the Jaccard similarity, which is the size of the intersection of two sets divided by the union. We can express the Jaccard in symbols as follows. If $A$ and $B$ are sets and $|A|$ and $|B|$ show the sizes of those sets (sometimes this is called the "cardinality") then:
$$Jaccard(A,B) = \frac{|A \cap B|}{|A \cup B|}$$
If you try calculating a few similarities with pen and paper, you'll quickly get a good intuition for the Jaccard. For instance, say $A = \{1,2,3\}$ and $B=\{2,3,4\}$. $A \cap B$ is just the elements which are in both $A$ and $B$, i.e. $\{2,3\}$, which has a size of 2. Similarly, $A \cup B$ is the elements which are in $A$ or $B$, which is equal to $\{1,2,3,4\}$ and has a size of 4. Thus, the Jaccard is 2/4 = 0.5.
Exercise: Try calculating the Jaccard when $A = \{1,2,3\}$ and $B=\{1,2,3\}$. What happens? How about when $A =\{1,2,3\}$ and $B=\{4,5,6\}$? Is it possible to have a Jaccard that is lower? How about a Jaccard that is higher?
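The exercise answers can be checked in a couple of lines; this re-implements the same `jaccard` helper used in the code cell above:

```python
def jaccard(a, b):
    """Jaccard similarity: size of the intersection over size of the union."""
    return len(a & b) / len(a | b)

print(jaccard({1, 2, 3}, {1, 2, 3}))  # identical sets -> 1.0
print(jaccard({1, 2, 3}, {4, 5, 6}))  # disjoint sets -> 0.0
print(jaccard({1, 2, 3}, {2, 3, 4}))  # the worked example -> 0.5
```

So 0 and 1 are the extremes: the Jaccard can never be lower than 0 (no overlap) or higher than 1 (identical sets).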
Now let's say you are trying to find documents that are almost duplicates in your set. You can represent each document as a set of words, assigning each word a unique number. Then you find the Jaccard similarity between all pairs of documents, and find those pairs that have a value greater than, say, $0.9$.
End of explanation
"""
len(filter_similar(pairs))
"""
Explanation: Based on the way we are choosing words, we see that 410 pairs out of 1000 documents have a high enough Jaccard to call them similar. This seems realistic enough. We can fiddle with this if we want, by changing the base
End of explanation
"""
jaccards = list(map(lambda docs: jaccard(docs[0], docs[1]), pairs))
%matplotlib inline
import seaborn as sns
sns.distplot(jaccards)
"""
Explanation: We can also see that the Jaccard values look roughly normally distributed
End of explanation
"""
def create_and_filter(n_documents):
documents = new_documents(n_documents, 500, functools.partial(exponential_pdf, base=1.1))
pairs = all_pairs(documents)
return filter_similar(pairs)
import timeit
def time_create_and_filter(n_documents):
return timeit.timeit(
'create_and_filter(n)',
globals={
"n": n_documents,
"create_and_filter": create_and_filter
},
number=1
)
import pandas as pd
from tqdm import tnrange, tqdm_notebook
def create_timing_df(ns):
return pd.DataFrame({
'n': ns,
'time': list(map(time_create_and_filter, tqdm_notebook(ns)))
})
df = create_timing_df([2 ** e for e in range(1, 13)])
sns.lmplot(x="n", y="time", data=df, order=2, )
"""
Explanation: Now let's time this, to see how it increases with the number of documents $n$. We expect it to be $\Theta(n^2)$, because each document is compared to every other document.
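The quadratic growth falls straight out of the number of comparisons: `all_pairs` produces $\binom{n}{2} = n(n-1)/2$ pairs. A quick sanity check:

```python
import itertools

def n_pairs(n):
    # Count the pairs produced by itertools.combinations, as all_pairs does
    return len(list(itertools.combinations(range(n), 2)))

for n in (10, 100, 1000):
    assert n_pairs(n) == n * (n - 1) // 2

print(n_pairs(1000))  # 499500 comparisons for just 1000 documents
```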
End of explanation
"""
|
ajgpitch/qutip-notebooks | examples/photon_birth_death.ipynb | lgpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
"""
Explanation: QuTiP Example: Birth and Death of Photons in a Cavity
J.R. Johansson and P.D. Nation
For more information about QuTiP see http://qutip.org
End of explanation
"""
N=5
a=destroy(N)
H=a.dag()*a # Simple oscillator Hamiltonian
psi0=basis(N,1) # Initial Fock state with one photon
kappa=1.0/0.129 # Coupling rate to heat bath
nth= 0.063 # Temperature with <n>=0.063
# Build collapse operators for the thermal bath
c_ops = []
c_ops.append(np.sqrt(kappa * (1 + nth)) * a)
c_ops.append(np.sqrt(kappa * nth) * a.dag())
"""
Explanation: Overview
Here we aim to reproduce the experimental results from:
<blockquote>
Gleyzes et al., "Quantum jumps of light recording the birth and death of a photon in a cavity", [Nature **446**, 297 (2007)](http://dx.doi.org/10.1038/nature05589).
</blockquote>
In particular, we will simulate the creation and annihilation of photons inside the optical cavity due to the thermal environment when the initial cavity is a single-photon Fock state $ |1\rangle$, as presented in Fig. 3 from the article.
System Setup
End of explanation
"""
ntraj = [1,5,15,904] # number of MC trajectories
tlist = np.linspace(0,0.8,100)
mc = mcsolve(H,psi0,tlist,c_ops,[a.dag()*a],ntraj)
me = mesolve(H,psi0,tlist,c_ops, [a.dag()*a])
"""
Explanation: Run Simulation
End of explanation
"""
fig = plt.figure(figsize=(8, 8), frameon=False)
plt.subplots_adjust(hspace=0.0)
# Results for a single trajectory
ax1 = plt.subplot(4,1,1)
ax1.xaxis.tick_top()
ax1.plot(tlist,mc.expect[0][0],'b',lw=2)
ax1.set_xticks([0,0.2,0.4,0.6])
ax1.set_yticks([0,0.5,1])
ax1.set_ylim([-0.1,1.1])
ax1.set_ylabel(r'$\langle P_{1}(t)\rangle$')
# Results for five trajectories
ax2 = plt.subplot(4,1,2)
ax2.plot(tlist,mc.expect[1][0],'b',lw=2)
ax2.set_yticks([0,0.5,1])
ax2.set_ylim([-0.1,1.1])
ax2.set_ylabel(r'$\langle P_{1}(t)\rangle$')
# Results for fifteen trajectories
ax3 = plt.subplot(4,1,3)
ax3.plot(tlist,mc.expect[2][0],'b',lw=2)
ax3.plot(tlist,me.expect[0],'r--',lw=2)
ax3.set_yticks([0,0.5,1])
ax3.set_ylim([-0.1,1.1])
ax3.set_ylabel(r'$\langle P_{1}(t)\rangle$')
# Results for 904 trajectories
ax4 = plt.subplot(4,1,4)
ax4.plot(tlist,mc.expect[3][0],'b',lw=2)
ax4.plot(tlist,me.expect[0],'r--',lw=2)
plt.xticks([0,0.2,0.4,0.6])
plt.yticks([0,0.5,1])
ax4.set_xlim([0,0.8])
ax4.set_ylim([-0.1,1.1])
ax4.set_xlabel(r'Time (s)')
ax4.set_ylabel(r'$\langle P_{1}(t)\rangle$')
xticklabels = ax2.get_xticklabels()+ax3.get_xticklabels()
plt.setp(xticklabels, visible=False);
"""
Explanation: Plot Results
End of explanation
"""
from qutip.ipynbtools import version_table
version_table()
"""
Explanation: Versions
End of explanation
"""
|
lileiting/goatools | notebooks/report_depth_level.ipynb | bsd-2-clause | # Get http://geneontology.org/ontology/go-basic.obo
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
"""
Explanation: Report counts of GO terms at various levels and depths
Reports the number of GO terms at each level and depth.
Level refers to the length of the shortest path from the top.
Depth refers to the length of the longest path from the top.
See the Gene Ontology Consortium's (GOC) advice regarding
levels and depths of a GO term
GO level and depth reporting
GO terms reported can be all GO terms in an ontology.
Or subsets of GO terms can be reported.
GO subset examples include all GO terms annotated for a species or all GO terms in a study.
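To make the level/depth distinction concrete, here is a small sketch computing both on a toy DAG (made-up node names, not real GO terms): level is the shortest root-to-term path length, depth the longest.

```python
# Toy DAG as term -> list of parents; "root" is the top term.
# "d" has a short path (root -> d) and a long path (root -> a -> c -> d).
parents = {
    "root": [],
    "a": ["root"],
    "b": ["root"],
    "c": ["a", "b"],
    "d": ["c", "root"],
}

def level(term):
    """Length of the shortest path from the root (GO 'level')."""
    if not parents[term]:
        return 0
    return 1 + min(level(p) for p in parents[term])

def depth(term):
    """Length of the longest path from the root (GO 'depth')."""
    if not parents[term]:
        return 0
    return 1 + max(depth(p) for p in parents[term])

print(level("d"), depth("d"))  # 1 3
```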
Example report on full Ontology from Ontologies downloaded April 27, 2016.
```
Dep <-Depth Counts-> <-Level Counts->
Lev BP MF CC BP MF CC
00 1 1 1 1 1 1
01 24 19 24 24 19 24
02 125 132 192 223 155 336
03 950 494 501 1907 738 1143
04 1952 1465 561 4506 1815 1294
05 3376 3861 975 7002 4074 765
06 4315 1788 724 7044 1914 274
07 4646 1011 577 4948 906 60
08 4150 577 215 2017 352 6
09 3532 309 106 753 110 1
10 2386 171 24 182 40 0
11 1587 174 3 37 22 0
12 1032 70 1 1 0 0
13 418 53 0 0 0 0
14 107 17 0 0 0 0
15 33 4 0 0 0 0
16 11 0 0 0 0 0
```
1. Download Ontologies, if necessary
End of explanation
"""
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
gene2go = download_ncbi_associations()
"""
Explanation: 2. Download Associations, if necessary
End of explanation
"""
from goatools.obo_parser import GODag
obodag = GODag("go-basic.obo")
"""
Explanation: 3. Initialize GODag object
End of explanation
"""
from goatools.rpt_lev_depth import RptLevDepth
rptobj = RptLevDepth(obodag)
"""
Explanation: 4. Initialize Reporter class
End of explanation
"""
rptobj.write_summary_cnts_all()
"""
Explanation: 5. Generate depth/level report for all GO terms
End of explanation
"""
|
lepmik/nest-simulator | doc/model_details/aeif_models_implementation.ipynb | gpl-2.0 | import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 6)
"""
Explanation: NEST implementation of the aeif models
Hans Ekkehard Plesser and Tanguy Fardet, 2016-09-09
This notebook provides a reference solution for the Adaptive Exponential Integrate and Fire
(AEIF) neuronal model and compares it with several numerical implementations using simpler solvers.
In particular this justifies the change of implementation in September 2016 to make the simulation
closer to the reference solution.
Position of the problem
Basics
The equations governing the evolution of the AEIF model are
$$\left\lbrace\begin{array}{rcl}
C_m\dot{V} &=& -g_L(V-E_L) + g_L \Delta_T e^{\frac{V-V_T}{\Delta_T}} + I_e + I_s(t) -w\
\tau_s\dot{w} &=& a(V-E_L) - w
\end{array}\right.$$
when $V < V_{peak}$ (threshold/spike detection).
Once a spike occurs, we apply the reset conditions:
$$V = V_r \quad \text{and} \quad w = w + b$$
Divergence
In the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before the threshold crossing, the argument of the exponential can become very large.
This can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\Delta_T$ is small.
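The overflow is easy to reproduce: in double precision, $e^x$ overflows to infinity for $x \gtrsim 710$, so a small $\Delta_T$ quickly pushes the spike current out of range (toy numbers for illustration):

```python
import numpy as np

# exp overflows double precision for arguments above ~709.78
with np.errstate(over="ignore"):
    print(np.exp(700.0))  # still finite, ~1e304
    print(np.exp(800.0))  # inf

# With Delta_T = 0.1, a potential only 80 units above V_T already
# gives an exponential argument of 800 -> infinite spike current
Delta_T, V, V_T = 0.1, 80.0, 0.0
with np.errstate(over="ignore"):
    print(np.isinf(np.exp((V - V_T) / Delta_T)))  # True
```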
Tested solutions
Old implementation (before September 2016)
The original solution that was adopted was to bound the exponential argument to be smaller than 10 (an ad hoc value chosen to be close to the original implementation in BRIAN).
As will be shown in the notebook, this solution does not converge to the reference LSODAR solution.
New implementation
The new implementation does not bind the argument of the exponential, but the potential itself, since according to the theoretical model, $V$ should never get larger than $V_{peak}$.
We will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller.
Reference solution
The reference solution is implemented using the LSODAR solver which is described and compared in the following references:
http://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one)
http://www.sciencedirect.com/science/article/pii/S0377042712000684
http://www.radford.edu/~thompson/RP/rootfinding.pdf
https://computation.llnl.gov/casc/nsde/pubs/u88007.pdf
http://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf
http://www.sciencedirect.com/science/article/pii/0377042789903348
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf
https://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf
Technical details and requirements
Implementation of the functions
The old and new implementations are reproduced using Scipy and are called by the scipy_aeif function
The NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver.
The reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package.
Requirements
To run this notebook, you need:
numpy and scipy
assimulo
matplotlib
End of explanation
"""
def rhs_aeif_new(y, _, p):
'''
New implementation bounding V < V_peak
Parameters
----------
y : list
Vector containing the state variables [V, w]
_ : unused var
p : Params instance
Object containing the neuronal parameters.
Returns
-------
dv : double
Derivative of V
dw : double
Derivative of w
'''
v = min(y[0], p.Vpeak)
w = y[1]
Ispike = 0.
if p.DeltaT != 0.:
Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT)
dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm
dw = (p.a * (v-p.EL) - w) / p.tau_w
return dv, dw
def rhs_aeif_old(y, _, p):
'''
Old implementation bounding the argument of the
exponential function (e_arg < 10.).
Parameters
----------
y : list
Vector containing the state variables [V, w]
_ : unused var
p : Params instance
Object containing the neuronal parameters.
Returns
-------
dv : double
Derivative of V
dw : double
Derivative of w
'''
v = y[0]
w = y[1]
Ispike = 0.
if p.DeltaT != 0.:
e_arg = min((v-p.vT)/p.DeltaT, 10.)
Ispike = p.gL * p.DeltaT * np.exp(e_arg)
dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm
dw = (p.a * (v-p.EL) - w) / p.tau_w
return dv, dw
"""
Explanation: Scipy functions mimicking the NEST code
Right hand side functions
End of explanation
"""
def scipy_aeif(p, f, simtime, dt):
'''
Complete aeif model using scipy `odeint` solver.
Parameters
----------
p : Params instance
Object containing the neuronal parameters.
f : function
Right-hand side function (either `rhs_aeif_old`
or `rhs_aeif_new`)
simtime : double
Duration of the simulation (will run between
0 and tmax)
dt : double
Time increment.
Returns
-------
t : list
Times at which the neuronal state was evaluated.
y : list
State values associated to the times in `t`
s : list
Spike times.
vs : list
Values of `V` just before the spike.
ws : list
Values of `w` just before the spike
fos : list
List of dictionaries containing additional output
information from `odeint`
'''
t = np.arange(0, simtime, dt) # time axis
n = len(t)
y = np.zeros((n, 2)) # V, w
y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.)
y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.)
s = [] # spike times
vs = [] # membrane potential at spike before reset
ws = [] # w at spike before step
fos = [] # full output dict from odeint()
# imitate NEST: update time-step by time-step
for k in range(1, n):
# solve ODE from t_k-1 to t_k
d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True)
y[k, :] = d[1, :]
fos.append(fo)
# check for threshold crossing
if y[k, 0] >= p.Vpeak:
s.append(t[k])
vs.append(y[k, 0])
ws.append(y[k, 1])
y[k, 0] = p.Vreset # reset
y[k, 1] += p.b # step
return t, y, s, vs, ws, fos
"""
Explanation: Complete model
End of explanation
"""
from assimulo.solvers import LSODAR
from assimulo.problem import Explicit_Problem
class Extended_Problem(Explicit_Problem):
# need variables here for access
sw0 = [ False ]
ts_spikes = []
ws_spikes = []
Vs_spikes = []
def __init__(self, p):
self.p = p
self.y0 = [self.p.EL, 5.] # V, w
# reset variables
self.ts_spikes = []
self.ws_spikes = []
self.Vs_spikes = []
#The right-hand-side function (rhs)
def rhs(self, t, y, sw):
"""
This is the function we are trying to simulate (aeif model).
"""
V, w = y[0], y[1]
Ispike = 0.
if self.p.DeltaT != 0.:
Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT)
dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm
dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w
return np.array([dotV, dotW])
# Sets a name to our function
name = 'AEIF_nosyn'
# The event function
def state_events(self, t, y, sw):
"""
This is our function that keeps track of our events. When the sign
of any of the events has changed, we have an event.
"""
event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike
if event_0 < 0:
if not self.ts_spikes:
self.ts_spikes.append(t)
self.Vs_spikes.append(y[0])
self.ws_spikes.append(y[1])
elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01):
self.ts_spikes.append(t)
self.Vs_spikes.append(y[0])
self.ws_spikes.append(y[1])
return np.array([event_0])
#Responsible for handling the events.
def handle_event(self, solver, event_info):
"""
Event handling. This function is called when Assimulo finds an event as
specified by the event functions.
"""
ev = event_info
event_info = event_info[0] # only look at the state events information.
if event_info[0] > 0:
solver.sw[0] = True
solver.y[0] = self.p.Vreset
solver.y[1] += self.p.b
else:
solver.sw[0] = False
def initialize(self, solver):
solver.h_sol=[]
solver.nq_sol=[]
def handle_result(self, solver, t, y):
Explicit_Problem.handle_result(self, solver, t, y)
# Extra output for algorithm analysis
if solver.report_continuously:
h, nq = solver.get_algorithm_data()
solver.h_sol.extend([h])
solver.nq_sol.extend([nq])
"""
Explanation: LSODAR reference solution
Setting assimulo class
End of explanation
"""
def reference_aeif(p, simtime):
'''
Reference aeif model using LSODAR.
Parameters
----------
p : Params instance
Object containing the neuronal parameters.
simtime : double
Duration of the simulation (will run between
0 and simtime)
Returns
-------
t : list
Times at which the neuronal state was evaluated.
y : list
State values associated to the times in `t`
s : list
Spike times.
vs : list
Values of `V` just before the spike.
ws : list
Values of `w` just before the spike
h : list
List of the minimal time increment at each step.
'''
#Create an instance of the problem
exp_mod = Extended_Problem(p) #Create the problem
exp_sim = LSODAR(exp_mod) #Create the solver
exp_sim.atol=1.e-8
exp_sim.report_continuously = True
exp_sim.store_event_points = True
exp_sim.verbosity = 30
#Simulate
t, y = exp_sim.simulate(simtime) # simulate for simtime milliseconds
return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol
"""
Explanation: LSODAR reference model
End of explanation
"""
# Regular spiking
aeif_param = {
'V_reset': -58.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 420.,
'g_L': 11.,
'tau_w': 300.,
'E_L': -70.,
'Delta_T': 2.,
'a': 3.,
'b': 0.,
'C_m': 200.,
'V_m': -70., #! must be equal to E_L
'w': 5., #! must be equal to 5.
'tau_syn_ex': 0.2
}
# Bursting
aeif_param2 = {
'V_reset': -46.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 500.0,
'g_L': 10.,
'tau_w': 120.,
'E_L': -58.,
'Delta_T': 2.,
'a': 2.,
'b': 100.,
'C_m': 200.,
'V_m': -58., #! must be equal to E_L
'w': 5., #! must be equal to 5.
}
# Close to chaos (use resol < 0.005 and simtime = 200)
aeif_param3 = {
'V_reset': -48.,
'V_peak': 0.0,
'V_th': -50.,
'I_e': 160.,
'g_L': 12.,
'tau_w': 130.,
'E_L': -60.,
'Delta_T': 2.,
'a': -11.,
'b': 30.,
'C_m': 100.,
'V_m': -60., #! must be equal to E_L
'w': 5., #! must be equal to 5.
}
class Params(object):
'''
Class giving access to the neuronal
parameters.
'''
def __init__(self):
self.params = aeif_param
self.Vpeak = aeif_param["V_peak"]
self.Vreset = aeif_param["V_reset"]
self.gL = aeif_param["g_L"]
self.Cm = aeif_param["C_m"]
self.EL = aeif_param["E_L"]
self.DeltaT = aeif_param["Delta_T"]
self.tau_w = aeif_param["tau_w"]
self.a = aeif_param["a"]
self.b = aeif_param["b"]
self.vT = aeif_param["V_th"]
self.Ie = aeif_param["I_e"]
p = Params()
"""
Explanation: Set the parameters and simulate the models
Params (choose a dictionary)
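A hedged sketch (not part of the original notebook) of a `Params` variant that takes the chosen dictionary as an argument, so `aeif_param2` or `aeif_param3` could be swapped in without editing the class:

```python
# Hypothetical variant of the Params class above: same attribute names,
# but bound to whichever parameter dictionary is passed in.
class ParamsFromDict(object):
    """Expose a neuronal parameter dictionary as attributes."""
    # attribute name -> dictionary key (assumed, mirrors the Params class)
    _keys = {'Vpeak': 'V_peak', 'Vreset': 'V_reset', 'gL': 'g_L',
             'Cm': 'C_m', 'EL': 'E_L', 'DeltaT': 'Delta_T',
             'tau_w': 'tau_w', 'a': 'a', 'b': 'b',
             'vT': 'V_th', 'Ie': 'I_e'}

    def __init__(self, param_dict):
        self.params = param_dict
        for attr, key in self._keys.items():
            setattr(self, attr, param_dict[key])

# usage with a minimal dictionary (same keys as aeif_param above)
demo_param = {'V_peak': 0.0, 'V_reset': -58., 'g_L': 11., 'C_m': 200.,
              'E_L': -70., 'Delta_T': 2., 'tau_w': 300., 'a': 3., 'b': 0.,
              'V_th': -50., 'I_e': 420.}
p2 = ParamsFromDict(demo_param)
```

With this variant, switching to the bursting or near-chaotic regime is just `ParamsFromDict(aeif_param2)` or `ParamsFromDict(aeif_param3)`.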
End of explanation
"""
# Parameters of the simulation
simtime = 100.
resol = 0.01
t_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resol)
t_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resol)
t_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime)
"""
Explanation: Simulate the 3 implementations
End of explanation
"""
fig, ax = plt.subplots()
ax2 = ax.twinx()
# Plot the potentials
ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.")
ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old")
ax.plot(t_new, y_new[:,0], linestyle="--", label="V new")
# Plot the adaptation variables
ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.")
ax2.plot(t_old, y_old[:,1], linestyle="-.", c="m", label="w old")
ax2.plot(t_new, y_new[:,1], linestyle="--", c="y", label="w new")
# Show
ax.set_xlim([0., simtime])
ax.set_ylim([-65., 40.])
ax.set_xlabel("Time (ms)")
ax.set_ylabel("V (mV)")
ax2.set_ylim([-20., 20.])
ax2.set_ylabel("w (pA)")
ax.legend(loc=6)
ax2.legend(loc=2)
plt.show()
"""
Explanation: Plot the results
Zoom out
End of explanation
"""
fig, ax = plt.subplots()
ax2 = ax.twinx()
# Plot the potentials
ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.")
ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old")
ax.plot(t_new, y_new[:,0], linestyle="--", label="V new")
# Plot the adaptation variables
ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.")
ax2.plot(t_old, y_old[:,1], linestyle="-.", c="y", label="w old")
ax2.plot(t_new, y_new[:,1], linestyle="--", c="m", label="w new")
ax.set_xlim([90., 92.])
ax.set_ylim([-65., 40.])
ax.set_xlabel("Time (ms)")
ax.set_ylabel("V (mV)")
ax2.set_ylim([17.5, 18.5])
ax2.set_ylabel("w (pA)")
ax.legend(loc=5)
ax2.legend(loc=2)
plt.show()
"""
Explanation: Zoom in
End of explanation
"""
print("spike times:\n-----------")
print("ref", np.around(s_ref, 3)) # ref lsodar
print("old", np.around(s_old, 3))
print("new", np.around(s_new, 3))
print("\nV at spike time:\n---------------")
print("ref", np.around(vs_ref, 3)) # ref lsodar
print("old", np.around(vs_old, 3))
print("new", np.around(vs_new, 3))
print("\nw at spike time:\n---------------")
print("ref", np.around(ws_ref, 3)) # ref lsodar
print("old", np.around(ws_old, 3))
print("new", np.around(ws_new, 3))
"""
Explanation: Compare properties at spike times
End of explanation
"""
plt.semilogy(t_ref, h_ref, label='Reference')
plt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old')
plt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New')
plt.legend(loc=6)
plt.show();
"""
Explanation: Size of minimal integration timestep
End of explanation
"""
plt.plot(t_ref, y_ref[:,0], label="V ref.")
resolutions = (0.1, 0.01, 0.001)
di_res = {}
for resol in resolutions:
t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resol)
t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resol)
di_res[resol] = (t_old, y_old, t_new, y_new)
plt.plot(t_old, y_old[:,0], linestyle=":", label="V old, r={}".format(resol))
plt.plot(t_new, y_new[:,0], linestyle="--", linewidth=1.5, label="V new, r={}".format(resol))
plt.xlim(0., simtime)
plt.xlabel("Time (ms)")
plt.ylabel("V (mV)")
plt.legend(loc=2)
plt.show();
"""
Explanation: Convergence towards LSODAR reference with step size
Zoom out
End of explanation
"""
plt.plot(t_ref, y_ref[:,0], label="V ref.")
for resol in resolutions:
t_old, y_old = di_res[resol][:2]
t_new, y_new = di_res[resol][2:]
plt.plot(t_old, y_old[:,0], linestyle="--", label="V old, r={}".format(resol))
plt.plot(t_new, y_new[:,0], linestyle="-.", linewidth=2., label="V new, r={}".format(resol))
plt.xlim(90., 92.)
plt.ylim([-62., 2.])
plt.xlabel("Time (ms)")
plt.ylabel("V (mV)")
plt.legend(loc=2)
plt.show();
"""
Explanation: Zoom in
End of explanation
"""
|
nimagh/CNN_Implementations | Notebooks/VAE.ipynb | gpl-3.0 | %load_ext autoreload
%autoreload 2
import os, sys
sys.path.append('../')
sys.path.append('../common')
sys.path.append('../GenerativeModels')
from tools_general import tf, np
from IPython.display import Image
from tools_train import get_train_params, OneHot, vis_square
from tools_config import data_dir
from tools_train import plot_latent_variable
import matplotlib.pyplot as plt
import imageio
from tensorflow.examples.tutorials.mnist import input_data
from tools_train import get_demo_data
# define parameters
networktype = 'VAE_MNIST'
work_dir = '../trained_models/%s/' %networktype
if not os.path.exists(work_dir): os.makedirs(work_dir)
"""
Explanation: Variational Autoencoders
Auto-Encoding Variational Bayes - Kingma and Welling, 2013
Efficient inference and learning in a directed probabilistic model, in the presence of latent variables with intractable posterior distributions, and with large datasets.
Use this code with no warranty and please respect the accompanying license.
End of explanation
"""
from VAE import create_encoder, create_decoder, create_vae_trainer
"""
Explanation: Network definitions
End of explanation
"""
#Image(url=work_dir+'posterior_likelihood_evolution.gif')
"""
Explanation: Training VAE
You can either get the fully trained models from Google Drive or train your own models using the VAE.py script.
Evolution of the latent variable during the training phase
Evolution of approximate posterior Q(z|X) and likelihood P(X|z) during training with 2D latent space.
End of explanation
"""
iter_num = 37752
best_model = work_dir + "Model_Iter_%.3d.ckpt"%iter_num
best_img = work_dir + 'Rec_Iter_%d.jpg'%iter_num
Image(filename=best_img)
latentD = 2
batch_size = 128
tf.reset_default_graph()
demo_sess = tf.InteractiveSession()
is_training = tf.placeholder(tf.bool, [], 'is_training')
Zph = tf.placeholder(tf.float32, [None, latentD])
Xph = tf.placeholder(tf.float32, [None, 28, 28, 1])
z_mu_op, z_log_sigma_sq_op = create_encoder(Xph, is_training, latentD, reuse=False, networktype=networktype + '_Enc')
Z_op = tf.add(z_mu_op, tf.multiply(tf.sqrt(tf.exp(z_log_sigma_sq_op)), Zph))
Xrec_op = create_decoder(Z_op, is_training, latentD, reuse=False, networktype=networktype + '_Dec')
Xgen_op = create_decoder(Zph, is_training, latentD, reuse=True, networktype=networktype + '_Dec')
tf.global_variables_initializer().run()
Enc_varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_Enc')
Dec_varlist = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=networktype + '_Dec')
print('List of vars in Encoder:%d -- %s\n' % (len(Enc_varlist), '; '.join([var.name.split('/')[1] for var in Enc_varlist])))
print('List of vars in Decoder:%d -- %s\n' % (len(Dec_varlist), '; '.join([var.name.split('/')[1] for var in Dec_varlist])))
saver = tf.train.Saver(var_list=Enc_varlist+Dec_varlist)
saver.restore(demo_sess, best_model)
#Get uniform samples over the labels
spl = 800 # sample_per_label
data = input_data.read_data_sets(data_dir, one_hot=False, reshape=False)
Xdemo, Xdemo_labels = get_demo_data(data, spl)
decoded_data = demo_sess.run(z_mu_op, feed_dict={Xph:Xdemo, is_training:False})
plot_latent_variable(decoded_data, Xdemo_labels)
"""
Explanation: Visualization of the 2D latent variable of a convolutional variational autoencoder during training on the MNIST dataset (handwritten digits). The image on the left is the mean of the approximate posterior Q(z|X), where each color represents a class of digits within the dataset. The image on the right shows samples from the decoder (likelihood) P(X|z). The title shows the iteration number and the total loss [reconstruction + KL] of the model at the point the images were produced. One can observe that as the generated outputs (right image) get better, the points in the latent space (posterior) also form better separated clusters. Also note that the points draw closer together, because the KL part of the total loss imposes a zero-mean Gaussian distribution on the latent variable, which is gradually realized as training proceeds.
Experiments
Create demo networks and restore weights
End of explanation
"""
Zdemo = np.random.normal(size=[128, latentD], loc=0.0, scale=1.).astype(np.float32)
gen_sample = demo_sess.run(Xgen_op, feed_dict={Zph: Zdemo , is_training:False})
vis_square(gen_sample[:121], [11, 11], save_path=work_dir + 'sample.jpg')
Image(filename=work_dir + 'sample.jpg')
"""
Explanation: Generate new data
Generate new images by sampling latent variables z from the standard normal prior and decoding them with the likelihood P(X|z)
End of explanation
"""
|
jsharpna/DavisSML | lectures/lecture8/lecture8.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## open wine data
wine = pd.read_csv('../../data/winequality-red.csv',delimiter=';')
Y = wine.values[:,-1]
X = wine.values[:,:-1]
n,p = X.shape
X = n**0.5 * (X - X.mean(axis=0)) / X.std(axis=0)
## Look at LROnline.py
from LROnline import *
learner = LROnline(p,loss='sqr',decay=-1.)
help(learner.update_beta) # why we do docstrings
yx_it = zip(Y,X) # iterator giving data
y,x = next(yx_it) # first datum
learner.beta, y, x # init beta, first datum
learner.update_beta(x,y) # return loss
learner.beta, y, x # new beta, first datum
losses = [learner.update_beta(x,y) for y,x in yx_it] # run online learning
plt.plot(losses)
_ = plt.title('Losses with sqr error gradient descent')
"""
Explanation: Online Learning
DavisSML: Lecture 8
Prof. James Sharpnack
Online Learning
Data is streaming, and we need to predict the new points sequentially
For each t:
- $x_t$ is revealed
- learner predicts $\hat y_t$
- $y_t$ is revealed and loss $\ell(\hat y_t,y_t)$ is incurred
- learner updates parameters based on the experience
Naive "batch" method
For each t:
- Learner fits on $\{x_i,y_i\}_{i=1}^{t-1}$
- Learner predicts $\hat y_t$ from $x_t$
If the fit at step $t$ takes $O(A_t)$ time, then overall the method takes
$$
O\left(\sum_{t=1}^T A_t\right)
$$
time; for $A_t = t$ (linear-time fits) this gives $O(\sum_{t=1}^T A_t) = O(T^2)$.
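As a toy illustration (added here, not from the lecture code), summing a linear per-step refit cost reproduces the quadratic total:

```python
# Toy cost model: if refitting from scratch at step t costs A_t = t
# operations, the naive batch method pays sum_{t=1}^{T} t = T(T+1)/2,
# i.e. O(T^2) overall, while a true online learner paying O(1) per
# step totals only O(T).
def naive_batch_cost(T):
    return sum(range(1, T + 1))

def online_cost(T):
    return T  # one constant-time update per step

print(naive_batch_cost(1000), online_cost(1000))  # 500500 vs 1000
```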
Recall Risk and Empirical Risk
Given a loss $\ell(\theta; X,Y)$, for parameters $\theta$, the risk is
$$
R(\theta) = \mathbb E \ell(\theta; X,Y).
$$
And given training data $\{x_i,y_i\}_{i=1}^{n}$ (drawn iid from the same distribution as $X,Y$), the empirical risk is
$$
R_n(\theta) = \frac 1n \sum_{i=1}^n \ell(\theta; x_i, y_i).
$$
Notice that $\mathbb E R_n(\theta) = R(\theta)$ for fixed $\theta$.
For a class of parameters $\Theta$, the empirical risk minimizer (ERM) is the
$$
\hat \theta = \arg \min_{\theta \in \Theta} R_n(\theta)
$$
(may not be unique).
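A minimal numpy sketch of the empirical risk for the squared-error loss (illustrative; the course's actual implementation lives in `LROnline.py`, which is not reproduced in this notebook):

```python
# Empirical risk R_n(theta) = (1/n) * sum_i (x_i^T theta - y_i)^2,
# vectorized over the rows of X.
import numpy as np

def empirical_risk(theta, X, Y):
    """Mean squared-error loss over the training data."""
    return np.mean((X @ theta - Y) ** 2)

rng = np.random.default_rng(0)
Xd = rng.normal(size=(50, 3))
theta_star = np.array([1., -2., 0.5])
Yd = Xd @ theta_star            # noiseless data for illustration
# the empirical risk vanishes at the data-generating parameter
print(empirical_risk(theta_star, Xd, Yd))  # 0.0
```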
Ideal gradient descent
Suitable for the unconstrained/regularized form. The risk is
$$
R(\theta) = \mathbb E \ell(\theta; X,Y).
$$
Suppose that we had access to $R(\theta)$ the true risk. Then to minimize $R$ we could do gradient descent,
$$
\theta \gets \theta - \eta \nabla R(\theta)
$$
To do this we only need access to $\nabla R(\theta)$
ERM for convex opt
Gradient for empirical risk:
$$
\nabla R_n(\theta) = \frac 1n \sum_{i=1}^n \nabla \ell(\theta; x_i, y_i)
$$
and
$$
\mathbb E \nabla \ell(\theta; x_i, y_i) = \nabla \mathbb E \ell(\theta; x_i, y_i) = \nabla R(\theta)
$$
So, gradient descent for ERM moves $\theta$ in direction of $- \nabla R_n(\theta)$
$$
\theta \gets \theta - \eta \nabla R_n(\theta)
$$
where
$$
\mathbb E \nabla R_n(\theta) = \nabla R(\theta)
$$
Minibatch gradient descent
A minibatch is a random subsample $(x_1,y_1), \ldots, (x_m,y_m)$ of the full training data.
Then the minibatch gradient is
$$
\nabla R_m(\theta) = \frac 1m \sum_{i=1}^m \nabla \ell(\theta; x_i, y_i)
$$
we also have that
$$
\mathbb E \nabla R_m(\theta) = \nabla R(\theta)
$$
the downside is that $R_m(\theta)$ is noisier.
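A quick numerical check (an illustrative sketch, not lecture code) that the minibatch gradient is an unbiased estimate of the full-data gradient:

```python
# For squared-error loss, average many random minibatch gradients and
# compare with the full-data gradient: the average should match.
import numpy as np

def grad_sqr(theta, X, Y):
    """Gradient of (1/m) sum_i (x_i^T theta - y_i)^2 over the rows of X."""
    return 2.0 * X.T @ (X @ theta - Y) / len(X)

rng = np.random.default_rng(1)
Xd = rng.normal(size=(200, 3))
Yd = rng.normal(size=200)
theta = np.zeros(3)

full_grad = grad_sqr(theta, Xd, Yd)
# average minibatch gradients (m = 20) over many random subsamples
mb_grads = [grad_sqr(theta, Xd[idx], Yd[idx])
            for idx in (rng.choice(200, size=20, replace=False)
                        for _ in range(2000))]
est = np.mean(mb_grads, axis=0)
print(np.abs(est - full_grad).max())  # small; each minibatch is noisier
```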
Stochastic gradient descent
Assumes that $(x_t,y_t)$ are drawn iid from some population. SGD uses a minibatch size of $m=1$.
For each t:
- $x_t$ is revealed
- learner predicts $\hat y_t$ with $f_\theta$
- $y_t$ is revealed and loss $\ell(\hat y_t,y_t)$ is incurred
- learner updates parameters with update,
$$
\theta \gets \theta - \eta \nabla \ell(\theta; x_t,y_t)
$$
Loss: $$\ell(\hat y_i,y_i) = \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i \right)^2$$
Gradient: $$\frac{\partial}{\partial \beta_j} \ell(\hat y_i,y_i) = 2 \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) x_{i,j} = \delta_i x_{i,j}$$
$$\frac{\partial}{\partial \beta_0} \ell(\hat y_i,y_i) = 2 \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) = \delta_i$$
$$ \delta_i = 2 \left(\hat y_i - y_i \right)$$
Update: $$\beta \gets \beta - \eta \delta_i x_i$$
$$\beta_0 \gets \beta_0 - \eta \delta_i$$
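The update above can be sketched as a single function; this is a minimal stand-in for the `update_beta` method of `LROnline`, whose source is not reproduced in this notebook:

```python
# One squared-error SGD step: delta = 2*(yhat - y),
# beta <- beta - eta*delta*x, beta_0 <- beta_0 - eta*delta.
import numpy as np

def sgd_step_sqr(beta, beta_zero, x, y, eta):
    """Return updated (beta, beta_zero) and the incurred squared loss."""
    yhat = beta_zero + x @ beta
    delta = 2.0 * (yhat - y)
    return beta - eta * delta * x, beta_zero - eta * delta, (yhat - y) ** 2

beta, beta_zero, loss = sgd_step_sqr(np.zeros(2), 0.0,
                                     np.array([1.0, 2.0]), 1.0, eta=0.1)
# yhat was 0, so delta = -2: beta = [0.2, 0.4], beta_zero = 0.2, loss = 1.0
```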
Exercise 8.1
Suppose $t$ is drawn uniformly at random from $1,\ldots,n$. What is $\mathbb E_t \nabla \ell(\theta; x_t, y_t)$ where the expectation is taken only with respect to the random draw of $t$?
For the cell above, let $\beta, \beta_0$ be fixed. Suppose that $y_i = \beta_0^* + x_i^\top \beta^* + \epsilon_i$ where $\epsilon_i$ is zero mean and independent of $x_i$ (this is called exogeneity). What are the expected gradients for a random draw of $x_i,y_i$,
$$ \mathbb E \delta_i x_i = ?$$
$$ \mathbb E \delta_i = ?$$
Try to get these expressions as reduced as possible.
Exercise 8.1 Answers
$$ \mathbb E_t \nabla \ell(\theta; x_t, y_t) = \frac 1n \sum_{i=1}^n \nabla \ell(\theta; x_i, y_i) = \nabla R_n(\theta)$$
Because $\hat y_i = \beta_0 + \beta^\top x_i$, $$\mathbb E \delta_i = 2 \mathbb E (\beta_0 + \beta^\top x_i - y_i) = 2 (\beta - \beta^*)^\top \mathbb E [x_i] + 2(\beta_0 - \beta_0^*).$$
Also,
$$ \delta_i x_i = 2(\beta_0 + \beta^\top x_i - y_i) x_i = 2(\beta_0 - \beta_0^* + \beta^\top x_i - \beta^{*,\top} x_i - \epsilon_i) x_i$$
So,
$$ \mathbb E \delta_i x_i = 2 \mathbb E (\beta_0 - \beta_0^* + \beta^\top x_i - \beta^{*,\top} x_i) x_i + 2 \mathbb E \epsilon_i x_i = 2 \left( \mathbb E [x_i x_i^\top] (\beta - \beta^*) + (\beta_0 - \beta_0^*) \mathbb E [x_i] \right)$$
by the exogeneity.
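A quick numerical check of the first identity (an illustrative sketch): drawing $t$ uniformly at random makes the single-example gradient an unbiased estimate of $\nabla R_n(\theta)$.

```python
# Monte Carlo check for the squared-error loss: the average of many
# uniformly drawn single-example gradients approaches nabla R_n(theta).
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 3
Xd = rng.normal(size=(n, d))
Yd = rng.normal(size=n)
theta = rng.normal(size=d)

grad_i = 2.0 * Xd * (Xd @ theta - Yd)[:, None]  # per-example gradients, (n, d)
grad_Rn = grad_i.mean(axis=0)                   # nabla R_n(theta)
# Monte Carlo average over many uniform draws of t
mc = grad_i[rng.integers(0, n, size=20000)].mean(axis=0)
print(np.abs(mc - grad_Rn).max())  # close to zero
```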
End of explanation
"""
learner = LROnline(p,loss='abs',decay=-1.)
losses = [learner.update_beta(x,y) for y,x in zip(Y,X)]
plt.plot(losses)
_ = plt.title('Losses with abs error SGD')
"""
Explanation: Loss: $$\ell(\hat y_i,y_i) = \left| \beta_0 + \sum_j \beta_j x_{i,j} - y_i \right|$$
(sub-)Gradient: $$\frac{\partial}{\partial \beta_j} \ell(\hat y_i,y_i) = {\rm sign} \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) x_{i,j} = \delta_i x_{i,j}$$
$$\frac{\partial}{\partial \beta_0} \ell(\hat y_i,y_i) = {\rm sign} \left(\beta_0 + \sum_j \beta_j x_{i,j} - y_i\right) = \delta_i$$
$$ \delta_i = {\rm sign} \left(\hat y_i - y_i \right)$$
Update: $$\beta \gets \beta - \eta \delta_i x_i$$
$$\beta_0 \gets \beta_0 - \eta \delta_i$$
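As with the squared loss, the subgradient update can be sketched as one function (a hedged stand-in for `LROnline` with `loss='abs'`; the actual class is not shown in this notebook):

```python
# One absolute-error SGD step using the subgradient delta = sign(yhat - y).
import numpy as np

def sgd_step_abs(beta, beta_zero, x, y, eta):
    """Return updated (beta, beta_zero) and the incurred absolute loss."""
    yhat = beta_zero + x @ beta
    delta = np.sign(yhat - y)
    return beta - eta * delta * x, beta_zero - eta * delta, abs(yhat - y)

beta, beta_zero, loss = sgd_step_abs(np.zeros(2), 0.0,
                                     np.array([1.0, -1.0]), 2.0, eta=0.5)
# yhat = 0 < y, so delta = -1: both beta and beta_zero move up by eta steps
```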
End of explanation
"""
class Perceptron:
"""
Rosenblatt's perceptron, online learner
Attributes:
eta: learning rate
beta: coefficient vector
p: dimension of X
beta_zero: intercept
"""
def __init__(self,eta,dim,
beta_init=None,beta_zero_init=None):
"""initialize and set beta"""
self.eta = eta
self.p = dim
if beta_init is not None:
self.beta = beta_init
else:
self.beta = np.zeros(dim)
if beta_zero_init is not None:
self.beta_zero = beta_zero_init
else:
self.beta_zero = 0.
...
class Perceptron:
...
def predict(self,x):
"""predict y with x"""
s = x @ self.beta + self.beta_zero
yhat = 2*(s > 0) - 1
return yhat
def update_beta(self,x,y):
"""single step update output 0/1 loss"""
yhat = self.predict(x)
if yhat != y:
self.beta += self.eta * y * x
self.beta_zero += self.eta * y
return yhat != y
perc = Perceptron(eta=1., dim=X.shape[1]) # instantiate the learner (missing in the original cell)
# note: the perceptron expects labels in {-1, +1}
loss = []
for x, y in zip(X, Y):
loss.append(perc.update_beta(x, y))
"""
Explanation: Exercise 8.2
Look at LROnline.py and determine what the decay argument is doing. Play with the arguments and see when you achieve convergence and when you do not.
Perceptron
Recall SVM for $y_i \in \{-1,1\}$,
$$
\min_\theta \frac 1n \sum_i (1 - y_i x_i^\top \theta)_+ + \lambda \| \theta \|^2.
$$
Then the subdifferential of $(1 - y x^\top\theta)_+$ is
$\{- y x\}$ if $1 - y x^\top \theta > 0$
$[0,-yx]$ if $1 - y x^\top \theta = 0$
$\{0\}$ if $1 - y x^\top \theta < 0$
Choose subgradient $0$ when we can.
Perceptron
Our subgradient of $\ell(\theta; x, y) = (1 - y x^\top\theta)_+ + \lambda \| \theta \|^2$ is
$-yx + \lambda \theta$ if $1 - y x^\top \theta > 0$
$\lambda \theta$ otherwise
SGD makes update
$$
\theta \gets (1 - \lambda \eta) \theta + \eta y_t x_t 1\{1 - y_t x_t^\top \theta > 0\}
$$
Perceptron
Recall that as $\lambda \rightarrow 0$ the margin becomes narrower, equivalent to reducing the 1 in $1 - y x^\top \theta > 0$.
In the limit as $\lambda \rightarrow 0$ and with $\eta = 1$,
$$
\theta \gets \theta + y_t x_t 1\{y_t x_t^\top \theta \le 0\}
$$
which is Rosenblatt's perceptron.
The update for the intercept is simpler
$$
\theta_0 \gets \theta_0 + y_t 1\{y_t x_t^\top \theta \le 0\}
$$
End of explanation
"""
|
openearth/notebooks | unstrucgridplot.ipynb | gpl-3.0 | # Create split locations
if not hasattr(netelemnode, 'mask'):
netelemnode = np.ma.masked_array(netelemnode, mask=False)
splitidx = np.cumsum(np.r_[(~netelemnode.mask).sum(1)][:-1])
# Convert to 1d filled idx
idx = netelemnode[(~netelemnode.mask)]-1
xpoly = np.split(X[idx],splitidx) # x vector per poly
ypoly = np.split(Y[idx],splitidx)
zcell = np.split(Z[idx],splitidx)
zcellmean = np.array([z.mean() for z in zcell])
polycoords = [np.c_[xy[0],xy[1]] for xy in np.c_[xpoly,ypoly]]
fig, ax = plt.subplots(1,1)
# Plot the cells as polygons
polys = matplotlib.collections.PolyCollection(polycoords, linewidth=2, edgecolor=(0.5,0.5,0.5), cmap=matplotlib.cm.Accent)
# Show the number of elements
polys.set_array((~netelemnode.mask).sum(1))
ax.add_collection(polys)
ax.autoscale()
# Add nodes manually so they don't scale... (could just use ax.plot)
nodes = matplotlib.lines.Line2D(X, Y, marker='.', linestyle='none', markerfacecolor='black', markeredgecolor='none')
ax.add_line(nodes)
"""
Explanation: This is an example of how to use the elemnode variable to connect the cells.
End of explanation
"""
# linecoords has shape (n links, 2 endpoints, 2 coords): start and end of each link
linecoords = np.concatenate([X[netlink-1][...,np.newaxis], Y[netlink-1][...,np.newaxis]], axis=2)
fig, ax = plt.subplots(1,1, figsize=(10,6))
# Black is the new purple
ax.set_axis_bgcolor('black')
# Regenerate the polycollection in a different color
polys = matplotlib.collections.PolyCollection(polycoords, linewidth=0,cmap=matplotlib.cm.Pastel2)
polys.set_array((~netelemnode.mask).sum(1))
ax.add_collection(polys)
# Now add the lines on top
lines = matplotlib.collections.LineCollection(linecoords, linewidth=2, edgecolor=(0.1,0.5,0.8))
ax.add_collection(lines)
ax.autoscale()
"""
Explanation: This is an example of how to use the netlink variable to connect the links.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/08_image_keras/labs/mnist_models.ipynb | apache-2.0 | import os
PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID
BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE = "linear" # "linear", "dnn", "dnn_dropout", or "cnn"
# do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "1.13" # Tensorflow version
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
"""
Explanation: MNIST Image Classification with TensorFlow on Cloud ML Engine
This notebook demonstrates how to implement different image models on MNIST using Estimator.
Note the MODEL_TYPE; change it to try out different models
End of explanation
"""
%%bash
rm -rf mnistmodel.tar.gz mnist_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
-- \
--output_dir=${PWD}/mnist_trained \
--train_steps=100 \
--learning_rate=0.01 \
--model=$MODEL_TYPE
"""
Explanation: Run as a Python module
In the previous notebook (mnist_linear.ipynb) we ran our code directly from the notebook.
Now since we want to run our code on Cloud ML Engine, we've packaged it as a python module.
The model.py and task.py files containing the model code are in <a href="mnistmodel/trainer">mnistmodel/trainer</a>
Complete the TODOs in model.py before proceeding!
Once you've completed the TODOs, set MODEL_TYPE and run it locally for a few steps to test the code.
End of explanation
"""
%%bash
OUTDIR=gs://${BUCKET}/mnist/trained_${MODEL_TYPE}
JOBNAME=mnist_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/mnistmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=10000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE --batch_norm
"""
Explanation: Now, let's do it on Cloud ML Engine so we can train on GPU: --scale-tier=BASIC_GPU
Note the GPU speed up depends on the model type. You'll notice the more complex CNN model trains significantly faster on GPU, however the speed up on the simpler models is not as pronounced.
End of explanation
"""
from google.datalab.ml import TensorBoard
TensorBoard().start("gs://{}/mnist/trained_{}".format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()["pid"]:
TensorBoard().stop(pid)
print("Stopped TensorBoard with pid {}".format(pid))
"""
Explanation: Monitoring training with TensorBoard
Use this cell to launch tensorboard
End of explanation
"""
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/mnist/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
"""
Explanation: Here are my results:
Model | Accuracy | Time taken | Model description | Run time parameters
--- | :---: | :---: | --- | ---
linear | 91.53 | 3 min | linear | 100 steps, LR=0.01, Batch=512
linear | 92.73 | 8 min | linear | 1000 steps, LR=0.01, Batch=512
linear | 92.29 | 18 min | linear | 10000 steps, LR=0.01, Batch=512
dnn | 98.14 | 15 min | 300-100-30 nodes fully connected | 10000 steps, LR=0.01, Batch=512
dnn | 97.99 | 48 min | 300-100-30 nodes fully connected | 100000 steps, LR=0.01, Batch=512
dnn_dropout | 97.84 | 29 min | 300-100-30-DL(0.1)- nodes | 20000 steps, LR=0.01, Batch=512
cnn | 98.97 | 35 min | maxpool(10 5x5 cnn, 2)-maxpool(20 5x5 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 98.93 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25) | 20000 steps, LR=0.01, Batch=512
cnn | 99.17 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits only) | 20000 steps, LR=0.01, Batch=512
cnn | 99.27 | 35 min | maxpool(10 11x11 cnn, 2)-maxpool(20 3x3 cnn, 2)-300-DL(0.25), batch_norm (logits, deep) | 10000 steps, LR=0.01, Batch=512
cnn | 99.48 | 12 hr | as-above but nfil1=20, nfil2=27, dprob=0.1, lr=0.001, batchsize=233 | (hyperparameter optimization)
Create a table to keep track of your own results as you experiment with model type and hyperparameters!
Deploying and predicting with model
Deploy the model:
End of explanation
"""
import json, codecs
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
# Get mnist data
mnist = tf.keras.datasets.mnist
(_, _), (x_test, _) = mnist.load_data()
# Scale our features between 0 and 1
x_test = x_test / 255.0
IMGNO = 5 # CHANGE THIS to get different images
jsondata = {"image": x_test[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open("test.json", 'w', encoding = "utf-8"))
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
"""
Explanation: To predict with the model, let's take one of the example images.
End of explanation
"""
%%bash
gcloud ml-engine predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
"""
Explanation: Send it to the prediction service
End of explanation
"""
|
noammor/coursera-machinelearning-python | ex1/ml-ex1-multivariate.ipynb | mit | import pandas
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Exercise 1: Linear regression with multiple variables
End of explanation
"""
data = pandas.read_csv('ex1data2.txt', header=None, names=['x1', 'x2', 'y'])
data.head()
data.shape
X = data[['x1', 'x2']].values
Y = data['y'].values
m = len(data)
"""
Explanation: Load and explore data:
End of explanation
"""
def feature_normalize(X):
# FEATURENORMALIZE Normalizes the features in X
# FEATURENORMALIZE(X) returns a normalized version of X where
# the mean value of each feature is 0 and the standard deviation
# is 1. This is often a good preprocessing step to do when
# working with learning algorithms.
# You need to set these values correctly
X_norm = X
mu = np.zeros(X.shape[1])
sigma = np.zeros(X.shape[1])
# ====================== YOUR CODE HERE ======================
# Instructions: First, for each feature dimension, compute the mean
# of the feature and subtract it from the dataset,
# storing the mean value in mu. Next, compute the
# standard deviation of each feature and divide
# each feature by it's standard deviation, storing
# the standard deviation in sigma.
#
# Note that X is a matrix where each column is a
# feature and each row is an example. You need
# to perform the normalization separately for
# each feature.
#
# Hint: You might find the 'np.mean' and 'np.std' functions useful.
#
# ============================================================
return X_norm, mu, sigma
"""
Explanation: Part 1: Feature Normalization
End of explanation
"""
X_norm, mu, sigma = feature_normalize(X)
"""
Explanation: Scale features and set them to zero mean:
End of explanation
"""
X_norm = np.insert(X_norm, 0, 1, 1)
X_norm[:2]
# choose some alpha value
alpha = 0.01
# Init Theta
theta = np.zeros(3)
iterations = 400
"""
Explanation: Add intercept term to X:
End of explanation
"""
def compute_cost_multi(X, y, theta):
# COMPUTECOSTMULTI Compute cost for linear regression
# J = COMPUTECOSTMULTI(X, y, theta) computes the cost of using theta as the
# parameter for linear regression to fit the data points in X and y
# some useful values
m = len(X)
# You need to return this value correctly:
J = 0
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the cost of a particular choice of theta
# You should set J to the cost.
# ============================================================
return J
"""
Explanation: Part 2: Gradient Descent
Make sure your implementations of compute_cost and gradient_descent work when X has more than 2 columns!
End of explanation
"""
compute_cost_multi(X_norm, Y, theta)
def gradient_descent_multi(X, y, theta, alpha, num_iters):
# GRADIENTDESCENT Performs gradient descent to learn theta
# theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
# taking num_iters gradient steps with learning rate alpha
# Initialize
J_history = np.zeros(num_iters)
T_history = np.zeros((num_iters,X.shape[1]))
for i in range(num_iters):
T_history[i] = theta
### ========= YOUR CODE HERE ============
# Instructions: Perform a single gradient step on the parameter vector theta.
### =====================================
J_history[i] = compute_cost_multi(X, y, theta)
return theta, J_history, T_history
"""
Explanation: Cost at initial theta:
End of explanation
"""
theta, J_history, T_history = gradient_descent_multi(X_norm, Y, theta, alpha, iterations)
"""
Explanation: Run gradient descent:
End of explanation
"""
theta
"""
Explanation: The theta values found by gradient descent should be [ 340412.65957447, 109447.79646964, -6578.35485416].
End of explanation
"""
pandas.Series(J_history).plot()
"""
Explanation: Convergence graph:
End of explanation
"""
# Estimate the price of a 1650 sq-ft, 3 br house
# ====================== YOUR CODE HERE ======================
# Recall that the first column of X is all-ones. Thus, it does
# not need to be normalized.
price = 0
# ============================================================
price
"""
Explanation: Estimate the price of a 1650 sqft, 3 bedroom house:
End of explanation
"""
data = pandas.read_csv('ex1data2.txt', header=None, names=['x1', 'x2', 'y'])
X = data[['x1', 'x2']].values
Y = data['y'].values
X = np.insert(X, 0, 1, 1)
def normal_eqn(X, y):
#NORMALEQN Computes the closed-form solution to linear regression
# NORMALEQN(X,y) computes the closed-form solution to linear
# regression using the normal equations.
theta = np.zeros(X.shape[1])
# ====================== YOUR CODE HERE ======================
# Instructions: Complete the code to compute the closed form solution
# to linear regression and put the result in theta.
#
# ============================================================
return theta
theta = normal_eqn(X, Y)
"""
Explanation: Part 3: Normal Equations
The following code computes the closed form
solution for linear regression using the normal
equations. You should complete the code in
normal_eqn().
After doing so, you should complete this code
to predict the price of a 1650 sq-ft, 3 br house.
End of explanation
"""
theta
"""
Explanation: Theta found using the normal equations:
End of explanation
"""
# ====================== YOUR CODE HERE ======================
0
# ============================================================
"""
Explanation: Price estimation of a 1650 sq-ft house with 3 bedrooms, using theta from the normal equations:
End of explanation
"""
|
IS-ENES-Data/submission_forms | test/forms/CMIP6/.ipynb_checkpoints/CMIP6_ki_123-checkpoint.ipynb | apache-2.0 | # initialize your CORDEX submission form template
from dkrz_forms import form_handler
from dkrz_forms import checks
"""
Explanation: DKRZ CMIP6 submission form for ESGF data publication
General Information
Data to be submitted for ESGF data publication must follow the rules outlined in the CMIP6 Archive Design Document <br /> (https://...)
Thus file names have to follow the pattern:<br />
VariableName_Domain_GCMModelName_CMIP5ExperimentName_CMIP5EnsembleMember_RCMModelName_RCMVersionID_Frequency[_StartTime-EndTime].nc <br />
Example: tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
The directory structure in which these files are stored follow the pattern:<br />
activity/product/Domain/Institution/
GCMModelName/CMIP5ExperimentName/CMIP5EnsembleMember/
RCMModelName/RCMVersionID/Frequency/VariableName <br />
Example: CORDEX/output/AFR-44/MPI-CSC/MPI-M-MPI-ESM-LR/rcp26/r1i1p1/MPI-CSC-REMO2009/v1/mon/tas/tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
Notice: If your model is not yet registered, please contact ....
This 'data submission form' is used to improve initial information exchange between data providers and the data center. The form has to be filled before the publication process can be started. In case you have questions please contact the individual data center:
o DKRZ: cmip6@dkrz.de
Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate
evaluation of cells is done by selecting the cell and then pressing the keys "Shift" + "Enter"
<br /> please evaluate the following cell to initialize your form
End of explanation
"""
my_email = "..." # example: sf.email = "Mr.Mitty@yahoo.com"
my_first_name = "..." # example: sf.first_name = "Harold"
my_last_name = "..." # example: sf.last_name = "Mitty"
my_keyword = "..." # example: sf.keyword = "mymodel_myrunid"
sf = form_handler.init_form("CORDEX",my_first_name,my_last_name,my_email,my_keyword)
"""
Explanation: please provide information on the contact person for this CORDEX data submission request
End of explanation
"""
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
"""
Explanation: Type of submission
please specify the type of this data submission:
- "initial_version" for first submission of data
- "new _version" for a re-submission of previousliy submitted data
- "retract" for the request to retract previously submitted data
End of explanation
"""
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
"""
Explanation: Requested general information
... to be finalized as soon as CMIP6 specification is finalized ....
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
End of explanation
"""
sf.institute_id = "..." # example: sf.institute_id = "AWI"
"""
Explanation: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
End of explanation
"""
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
"""
Explanation: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed of the 'institute_id' followed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
End of explanation
"""
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
"""
Explanation: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
End of explanation
"""
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
"""
Explanation: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other attributes.
End of explanation
"""
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
"""
Explanation: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
End of explanation
"""
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
"""
Explanation: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
End of explanation
"""
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
"""
Explanation: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliance checker that can be downloaded at http://cordex.dmi.dk.
'QC2' refers to the quality checker developed at DKRZ.
If your answer is 'other', give some information.
End of explanation
"""
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
"""
Explanation: Terms of use
Please give the terms of use that shall be assigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http://cordex.dmi.dk/joomla/images/CORDEX/cordex_terms_of_use.pdf
End of explanation
"""
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
"""
Explanation: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
End of explanation
"""
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transfered to the data center ... "
"""
Explanation: Give the path where the data reside, for example:
blizzard.dkrz.de:/scratch/b/b364034/. If not applicable write N/A and give data access information in the data_information string
End of explanation
"""
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
"""
Explanation: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
End of explanation
"""
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
"""
Explanation: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that this is not the case?
Reply 'yes'; otherwise adapt the new file versions.
End of explanation
"""
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
"""
Explanation: Variable list
list of variables submitted -- please remove the ones you do not provide:
End of explanation
"""
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub['status_flag_validity'] = res['valid_submission']
form_handler.DictTable(res)
"""
Explanation: Check your submission before submission
End of explanation
"""
form_handler.form_save(sf)
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
"""
Explanation: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
End of explanation
"""
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
"""
Explanation: officially submit your form
the form will be submitted to the DKRZ team to process
you also receive a confirmation email with a reference to your online form for future modifications
End of explanation
"""
|
ericmjl/Network-Analysis-Made-Simple | notebooks/03-practical/01-io.ipynb | mit | from IPython.display import YouTubeVideo
YouTubeVideo(id="3sJnTpeFXZ4", width="100%")
"""
Explanation: Introduction
End of explanation
"""
from pyprojroot import here
"""
Explanation: In order to get you familiar with graph ideas,
I have deliberately chosen to steer away from
the more pedantic matters
of loading graph data to and from disk.
That said, the following scenario will eventually happen,
where a graph dataset lands on your lap,
and you'll need to load it in memory
and start analyzing it.
Thus, we're going to go through graph I/O,
specifically the APIs on how to convert
graph data that comes to you
into that magical NetworkX object G.
Let's get going!
Graph Data as Tables
Let's recall what we've learned in the introductory chapters.
Graphs can be represented using two sets:
Node set
Edge set
Node set as tables
Let's say we had a graph with 3 nodes in it: A, B, C.
We could represent it in plain text, computer-readable format:
csv
A
B
C
Suppose the nodes also had metadata.
Then, we could tag on metadata as well:
csv
A, circle, 5
B, circle, 7
C, square, 9
Does this look familiar to you?
Yes, node sets can be stored in CSV format,
with one of the columns being node ID,
and the rest of the columns being metadata.
Edge set as tables
If, between the nodes, we had 4 edges (this is a directed graph),
we can also represent those edges in plain text, computer-readable format:
csv
A, C
B, C
A, B
C, A
And let's say we also had other metadata,
we can represent it in the same CSV format:
csv
A, C, red
B, C, orange
A, B, yellow
C, A, green
If you've been in the data world for a while,
this should not look foreign to you.
Yes, edge sets can be stored in CSV format too!
Two of the columns represent the nodes involved in an edge,
and the rest of the columns represent the metadata.
Combined Representation
In fact, one might also choose to combine
the node set and edge set tables together in a merged format:
n1, n2, colour, shape1, num1, shape2, num2
A, C, red, circle, 5, square, 9
B, C, orange, circle, 7, square, 9
A, B, yellow, circle, 5, circle, 7
C, A, green, square, 9, circle, 5
In this chapter, the datasets that we will be looking at
are going to be formatted in both ways.
Let's get going.
Dataset
We will be working with the Divvy bike sharing dataset.
Divvy is a bike sharing service in Chicago.
Since 2013, Divvy has released their bike sharing dataset to the public.
The 2013 dataset is comprised of two files:
- Divvy_Stations_2013.csv, containing the stations in the system, and
- DivvyTrips_2013.csv, containing the trips.
Let's dig into the data!
End of explanation
"""
import zipfile
import os
from nams.load_data import datasets
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir(datasets):
print('Unzipping the divvy_2013.zip file in the datasets folder.')
with zipfile.ZipFile(datasets / "divvy_2013.zip","r") as zip_ref:
zip_ref.extractall(datasets)
"""
Explanation: Firstly, we need to unzip the dataset:
End of explanation
"""
import pandas as pd
stations = pd.read_csv(datasets / 'divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], encoding='utf-8')
stations.head()
stations.describe()
"""
Explanation: Now, let's load in both tables.
First is the stations table:
End of explanation
"""
trips = pd.read_csv(datasets / 'divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'])
trips.head()
import janitor
trips_summary = (
trips
.groupby(["from_station_id", "to_station_id"])
.count()
.reset_index()
.select_columns(
[
"from_station_id",
"to_station_id",
"trip_id"
]
)
.rename_column("trip_id", "num_trips")
)
trips_summary.head()
"""
Explanation: Now, let's load in the trips table.
End of explanation
"""
import networkx as nx
G = nx.from_pandas_edgelist(
df=trips_summary,
source="from_station_id",
target="to_station_id",
edge_attr=["num_trips"],
create_using=nx.DiGraph
)
"""
Explanation: Graph Model
Given the data, if we wished to use a graph as a data model
for the number of trips between stations,
then naturally, nodes would be the stations,
and edges would be trips between them.
This graph would be directed,
as one could have more trips from station A to B
and less in the reverse.
With this definition,
we can begin graph construction!
Create NetworkX graph from pandas edgelist
NetworkX provides an extremely convenient way
to load data from a pandas DataFrame:
End of explanation
"""
print(nx.info(G))
"""
Explanation: Inspect the graph
Once the graph is in memory,
we can inspect it to get out summary graph statistics.
End of explanation
"""
list(G.edges(data=True))[0:5]
"""
Explanation: You'll notice that the edge metadata have been added correctly: we have recorded in there the number of trips between stations.
End of explanation
"""
list(G.nodes(data=True))[0:5]
"""
Explanation: However, the node metadata is not present:
End of explanation
"""
stations.head()
"""
Explanation: Annotate node metadata
We have rich station data on hand,
such as the longitude and latitude of each station,
and it would be a pity to discard it,
especially when we can potentially use it as part of the analysis
or for visualization purposes.
Let's see how we can add this information in.
Firstly, recall what the stations dataframe looked like:
End of explanation
"""
for node, metadata in stations.set_index("id").iterrows():
for key, val in metadata.items():
G.nodes[node][key] = val
"""
Explanation: The id column gives us the node ID in the graph,
so if we set id to be the index,
and then loop over each row,
we can treat the rest of the columns as dictionary keys
and values as dictionary values,
and add the information into the graph.
Let's see this in action.
End of explanation
"""
list(G.nodes(data=True))[0:5]
"""
Explanation: Now, our node metadata should be populated.
End of explanation
"""
def filter_graph(G, minimum_num_trips):
"""
Filter the graph such that
only edges that have minimum_num_trips or more
are present.
"""
G_filtered = G.____()
for _, _, _ in G._____(data=____):
if d[___________] < ___:
G_________.___________(_, _)
return G_filtered
from nams.solutions.io import filter_graph
G_filtered = filter_graph(G, 50)
"""
Explanation: In nxviz, a GeoPlot object is available
that allows you to quickly visualize
a graph that has geographic data.
However, being matplotlib-based,
it is going to be quickly overwhelmed
by the sheer number of edges.
As such, we are going to first filter the edges.
Exercise: Filter graph edges
Leveraging what you know about how to manipulate graphs,
now try filtering edges.
Hint: NetworkX graph objects can be deep-copied using G.copy():
python
G_copy = G.copy()
Hint: NetworkX graph objects also let you remove edges:
python
G.remove_edge(node1, node2) # does not return anything
End of explanation
"""
import nxviz as nv
c = nv.geo(G_filtered, node_color_by="dpcapacity")
"""
Explanation: Visualize using GeoPlot
nxviz provides a GeoPlot object
that lets you quickly visualize geospatial graph data.
A note on geospatial visualizations:
As the creator of nxviz,
I would recommend using proper geospatial packages
to build custom geospatial graph viz,
such as pysal.
That said, nxviz can probably do what you need
for a quick-and-dirty view of the data.
End of explanation
"""
nx.write_gpickle(G, "/tmp/divvy.pkl")
"""
Explanation: Does that look familiar to you? Looks quite a bit like Chicago, I'd say :)
Jesting aside, this visualization does help illustrate
that the majority of trips occur between stations that are
near the city center.
Pickling Graphs
Since NetworkX graphs are Python objects,
the canonical way to save them is by pickling them.
You can do this using:
python
nx.write_gpickle(G, file_path)
Here's an example in action:
End of explanation
"""
G_loaded = nx.read_gpickle("/tmp/divvy.pkl")
"""
Explanation: And just to show that it can be loaded back into memory:
End of explanation
"""
def test_graph_integrity(G):
"""Test integrity of raw Divvy graph."""
# Your solution here
pass
from nams.solutions.io import test_graph_integrity
test_graph_integrity(G)
"""
Explanation: Exercise: checking graph integrity
If you get a graph dataset as a pickle,
you should always check it against reference properties
to make sure of its data integrity.
Write a function that tests that the graph
has the correct number of nodes and edges inside it.
End of explanation
"""
from nams.solutions import io
import inspect
print(inspect.getsource(io))
"""
Explanation: Other text formats
CSV files and pandas DataFrames
give us a convenient way to store graph data,
and if possible, do insist with your data collaborators
that they provide you with graph data that are in this format.
If they don't, however, no sweat!
After all, Python is super versatile.
In this ebook, we have loaded data in
from non-CSV sources,
sometimes by parsing text files raw,
sometimes by treating special characters as delimiters in a CSV-like file,
and sometimes by resorting to parsing JSON.
You can see other examples of how we load data
by browsing through the source file of load_data.py
and studying how we construct graph objects.
Solutions
The solutions to this chapter's exercises are below
End of explanation
"""
|
coolharsh55/advent-of-code | 2016/python3/Day05.ipynb | mit | with open('../inputs/day05.txt', 'r') as f:
door_id = f.readline().strip()
"""
Explanation: Day 5: How About a Nice Game of Chess?
author: Harshvardhan Pandit
license: MIT
link to problem statement
You are faced with a security door designed by Easter Bunny engineers that seem to have acquired most of their security knowledge by watching hacking movies.
The eight-character password for the door is generated one character at a time by finding the MD5 hash of some Door ID (your puzzle input) and an increasing integer index (starting with 0).
A hash indicates the next character in the password if its hexadecimal representation starts with five zeroes. If it does, the sixth character in the hash is the next character of the password.
For example, if the Door ID is abc:
- The first index which produces a hash that starts with five zeroes is `3231929`, which we find by hashing `abc3231929`; the sixth character of the hash, and thus the first character of the password, is `1`.
- `5017308` produces the next interesting hash, which starts with `000008f82`..., so the second character of the password is `8`.
- The third time a hash starts with five zeroes is for `abc5278568`, discovering the character f.
In this example, after continuing this search a total of eight times, the password is 18f47a30.
Given the actual Door ID, what is the password?
Solution logic
The password is eight characters long, so our loop needs to keep running until it has produced that many characters. Python has a handy way to calculate the MD5 hash using the hashlib module.
import hashlib
md5 = hashlib.md5()
md5.update('string-here'.encode('ascii'))
md5.hexdigest()
Next, we need to keep track of the integers we are suffixing the door ID with. Starting from 0, without any upper limit. Once the MD5 is calculated, it is of interest only when it starts with 5 zeroes. So we check if the string starts with '00000'
md5.startswith('00000')
If it does, we append the sixth character as the password.
Algorithm
- set password to empty string
- set hash_suffix to 0
- while password length is not 8:
- append hash_suffix to door ID
- take MD5
- check if it starts with '00000'
- if it does, append sixth character to password
- increment hash_suffix by 1
Input
The input for this puzzle is a single line, but I still store it in an input file so as to make it available to any other scripts written in the future.
End of explanation
"""
import hashlib
password = ''
hash_suffix = 0
while len(password) != 8:
string = door_id + str(hash_suffix)
md5 = hashlib.md5()
md5.update(string.encode('ascii'))
md5hash = md5.hexdigest()
if md5hash.startswith('00000'):
password += md5hash[5]
hash_suffix += 1
"""
Explanation: Initialising variables, importing libraries
End of explanation
"""
password_char_count = 0
password = [None for i in range(0, 8)]
hash_suffix = 0
while password_char_count != 8:
string = door_id + str(hash_suffix)
hash_suffix += 1
md5 = hashlib.md5()
md5.update(string.encode('ascii'))
md5hash = md5.hexdigest()
if not md5hash.startswith('00000'):
continue
if not md5hash[5].isdigit():
continue
position_char = int(md5hash[5])
if 0 <= position_char <= 7 and password[position_char] is None:
password[position_char] = md5hash[6]
password_char_count += 1
''.join(password)
"""
Explanation: Part Two
As the door slides open, you are presented with a second door that uses a slightly more inspired security mechanism. Clearly unimpressed by the last version (in what movie is the password decrypted in order?!), the Easter Bunny engineers have worked out a better solution.
Instead of simply filling in the password from left to right, the hash now also indicates the position within the password to fill. You still look for hashes that begin with five zeroes; however, now, the sixth character represents the position (0-7), and the seventh character is the character to put in that position.
A hash result of 000001f means that f is the second character in the password. Use only the first result for each position, and ignore invalid positions.
For example, if the Door ID is abc:
- The first interesting hash is from `abc3231929`, which produces `0000015`...; so, `5` goes in position `1`: `_5______`.
- In the previous method, `5017308` produced an interesting hash; however, it is ignored, because it specifies an invalid position `(8)`.
- The second interesting hash is at index `5357525`, which produces `000004e`...; so, e goes in position `4`: `_5__e___`.
You almost choke on your popcorn as the final character falls into place, producing the password 05ace8e3.
Given the actual Door ID and this new method, what is the password? Be extra proud of your solution if it uses a cinematic "decrypting" animation.
Solution logic
Not much has changed from the last puzzle. We still calculate the MD5 hash as before. However, we now have an extra condition to check whether the hash is valid. The sixth character now represents the position of the character in the password. The seventh character is the actual password character. So we need some way to store password characters with positions. An easy approach would be a list pre-filled with eight None values. This makes it trivial to store the password characters by index.
password = [None for i in range(0, 8)]
To check whether the position parameter is valid, we need to ensure that it is in the range 0..7 (8 total).
'0' <= ch <= '7'
And also whether the character at that position has already been filled
password[int(ch)] is not None
Here int(ch) converts the digit character ch to its numerical value.
Instead of using the length of the list, we now explicitly keep track of the password character count, since len(password) will always be 8.
End of explanation
"""
|
xgrg/alfa | notebooks/Miscellaneous/Box-Cox transformation.ipynb | mit | # Generate data
x = stats.loggamma.rvs(5, size=500) + 5
# Plot it
fig = plt.figure(figsize=(6,9))
ax1 = fig.add_subplot(211)
prob = stats.probplot(x, dist=stats.norm, plot=ax1)
ax1.set_title('Probplot against normal distribution')
# Plot an histogram
ax2 = fig.add_subplot(212)
ax2.hist(x)
ax2.set_title('Histogram')
"""
Explanation: We generate some random variates from a non-normal distribution and make a
probability plot for it, to show it is non-normal in the tails:
End of explanation
"""
xt, _ = stats.boxcox(x)
# Plot the results
fig = plt.figure(figsize=(6,9))
ax1 = fig.add_subplot(211)
prob = stats.probplot(xt, dist=stats.norm, plot=ax1)
ax1.set_title('Probplot after Box-Cox transformation')
# Plot an histogram
ax2 = fig.add_subplot(212)
ax2.hist(xt)
ax2.set_title('Histogram')
"""
Explanation: We now use boxcox to transform the data so it's closest to normal:
End of explanation
"""
|
xebia-france/luigi-airflow | Luigi_airflow_004.ipynb | apache-2.0 | raw_dataset = pd.read_csv(source_path + "Speed_Dating_Data.csv",encoding = "ISO-8859-1")
"""
Explanation: Import data
End of explanation
"""
raw_dataset.head(3)
raw_dataset_copy = raw_dataset
check1 = raw_dataset_copy[raw_dataset_copy["iid"] == 1]
check1_sel = check1[["iid", "pid", "match","gender","date","go_out","sports","tvsports","exercise","dining",
"museums","art","hiking","gaming","clubbing","reading","tv","theater","movies",
"concerts","music","shopping","yoga"]]
check1_sel.drop_duplicates().head(20)
#merged_datasets = raw_dataset.merge(raw_dataset_copy, left_on="pid", right_on="iid")
#merged_datasets[["iid_x","gender_x","pid_y","gender_y"]].head(5)
#same_gender = merged_datasets[merged_datasets["gender_x"] == merged_datasets["gender_y"]]
#same_gender.head()
columns_by_types = raw_dataset.columns.to_series().groupby(raw_dataset.dtypes).groups
raw_dataset.dtypes.value_counts()
raw_dataset.isnull().sum().head(3)
summary = raw_dataset.describe() #.transpose()
print (summary.head())
#raw_dataset.groupby("gender").agg({"iid": pd.Series.nunique})
raw_dataset.groupby('gender').iid.nunique()
raw_dataset.groupby('career').iid.nunique().sort_values(ascending=False).head(5)
raw_dataset.groupby(["gender","match"]).iid.nunique()
"""
Explanation: Data exploration
Shape, types, distribution, modalities and potential missing values
End of explanation
"""
local_path = "/Users/sandrapietrowska/Documents/Trainings/luigi/data_source/"
local_filename = "Speed_Dating_Data.csv"
my_variables_selection = ["iid", "pid", "match","gender","date","go_out","sports","tvsports","exercise","dining",
"museums","art","hiking","gaming","clubbing","reading","tv","theater","movies",
"concerts","music","shopping","yoga"]
class RawSetProcessing(object):
"""
This class aims to load and clean the dataset.
"""
def __init__(self,source_path,filename,features):
self.source_path = source_path
self.filename = filename
self.features = features
# Load data
def load_data(self):
raw_dataset_df = pd.read_csv(self.source_path + self.filename,encoding = "ISO-8859-1")
return raw_dataset_df
# Select variables to process and include in the model
def subset_features(self, df):
sel_vars_df = df[self.features]
return sel_vars_df
@staticmethod
# Remove ids with missing values
def remove_ids_with_missing_values(df):
sel_vars_filled_df = df.dropna()
return sel_vars_filled_df
@staticmethod
def drop_duplicated_values(df):
df = df.drop_duplicates()
return df
# Combine processing stages
def combiner_pipeline(self):
raw_dataset = self.load_data()
subset_df = self.subset_features(raw_dataset)
subset_no_dup_df = self.drop_duplicated_values(subset_df)
subset_filled_df = self.remove_ids_with_missing_values(subset_no_dup_df)
return subset_filled_df
raw_set = RawSetProcessing(local_path, local_filename, my_variables_selection)
dataset_df = raw_set.combiner_pipeline()
dataset_df.head(3)
# Number of unique participants
dataset_df.iid.nunique()
dataset_df.shape
"""
Explanation: Data processing
End of explanation
"""
suffix_me = "_me"
suffix_partner = "_partner"
def get_partner_features(df, suffix_1, suffix_2, ignore_vars=True):
#print df[df["iid"] == 1]
df_partner = df.copy()
if ignore_vars is True:
df_partner = df_partner.drop(['pid','match'], 1).drop_duplicates()
else:
df_partner = df_partner.copy()
#print df_partner.shape
merged_datasets = df.merge(df_partner, how = "inner",left_on="pid", right_on="iid",suffixes=(suffix_1,suffix_2))
#print merged_datasets[merged_datasets["iid_me"] == 1]
return merged_datasets
feat_eng_df = get_partner_features(dataset_df,suffix_me,suffix_partner)
feat_eng_df.head(3)
"""
Explanation: Feature engineering
End of explanation
"""
import sklearn
print (sklearn.__version__)
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
import subprocess
import warnings
"""
Explanation: Modelling
This model aims to answer the question: what is the profile, in terms of interests, of the persons who got the most matches?
Variables:
* gender
* date: In general, how frequently do you go on dates?
* go_out: How often do you go out (not necessarily on dates)?
* sports: Playing sports/ athletics
* tvsports: Watching sports
* exercise: Body building/exercising
* dining: Dining out
* museums: Museums/galleries
* art: Art
* hiking: Hiking/camping
* gaming: Gaming
* clubbing: Dancing/clubbing
* reading: Reading
* tv: Watching TV
* theater: Theater
* movies: Movies
* concerts: Going to concerts
* music: Music
* shopping: Shopping
* yoga: Yoga/meditation
End of explanation
"""
#features = list(["gender","age_o","race_o","goal","samerace","imprace","imprelig","date","go_out","career_c"])
features = ["gender","date","go_out","sports","tvsports","exercise","dining","museums","art",
"hiking","gaming","clubbing","reading","tv","theater","movies","concerts","music",
"shopping","yoga"]
label = "match"
#add suffix to each element of list
def process_features_names(features, suffix_1, suffix_2):
features_me = [feat + suffix_1 for feat in features]
features_partner = [feat + suffix_2 for feat in features]
features_all = features_me + features_partner
print (features_all)
return features_all
features_model = process_features_names(features, suffix_me, suffix_partner)
feat_eng_df.head(5)
explanatory = feat_eng_df[features_model]
explained = feat_eng_df[label]
"""
Explanation: Variables selection
End of explanation
"""
clf = tree.DecisionTreeClassifier(min_samples_split=20,min_samples_leaf=10,max_depth=4)
clf = clf.fit(explanatory, explained)
# Download http://www.graphviz.org/
with open("data.dot", 'w') as f:
f = tree.export_graphviz(clf, out_file=f, feature_names=features_model, class_names=["no match", "match"])
subprocess.call(['dot', '-Tpdf', 'data.dot', '-o', 'data.pdf'])
"""
Explanation: Decision Tree
End of explanation
"""
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(explanatory, explained, test_size=0.3, random_state=0)
parameters = [
{'criterion': ['gini','entropy'], 'max_depth': [4,6,10,12,14],
'min_samples_split': [10,20,30], 'min_samples_leaf': [10,15,20]
}
]
scores = ['precision', 'recall']
dtc = tree.DecisionTreeClassifier()
clf = GridSearchCV(dtc, parameters,n_jobs=3, cv=5, refit=True)
warnings.filterwarnings("ignore")
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print("")
clf = GridSearchCV(dtc, parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print("")
print(clf.best_params_)
print("")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print("")
best_param_dtc = tree.DecisionTreeClassifier(criterion="entropy",min_samples_split=10,min_samples_leaf=10,max_depth=14)
best_param_dtc = best_param_dtc.fit(explanatory, explained)
best_param_dtc.feature_importances_
raw_dataset.rename(columns={"age_o":"age_of_partner","race_o":"race_of_partner"},inplace=True)
"""
Explanation: Tuning Parameters
End of explanation
"""
import unittest
from pandas.util.testing import assert_frame_equal
"""
Explanation: Test
End of explanation
"""
class FeatureEngineeringTest(unittest.TestCase):
def test_get_partner_features(self):
"""
:return:
"""
# Given
raw_data_a = {
'iid': ['1', '2', '3', '4', '5','6'],
'first_name': ['Sue', 'Maria', 'Sandra', 'Bill', 'Brian','Bruce'],
'sport':['foot','run','volley','basket','swim','tv'],
'pid': ['4', '5', '6', '1', '2','3'],}
df_a = pd.DataFrame(raw_data_a, columns = ['iid', 'first_name', 'sport','pid'])
expected_output_values = pd.DataFrame({
'iid_me': ['1', '2', '3', '4', '5','6'],
'first_name_me': ['Sue', 'Maria', 'Sandra', 'Bill', 'Brian','Bruce'],
'sport_me': ['foot','run','volley','basket','swim','tv'],
'pid_me': ['4', '5', '6', '1', '2','3'],
'iid_partner': ['4', '5', '6', '1', '2','3'],
'first_name_partner': ['Bill', 'Brian','Bruce','Sue', 'Maria', 'Sandra'],
'sport_partner': ['basket','swim','tv','foot','run','volley'],
'pid_partner':['1', '2', '3', '4', '5','6']
}, columns = ['iid_me','first_name_me','sport_me','pid_me',
'iid_partner','first_name_partner','sport_partner','pid_partner'])
# When
output_values = get_partner_features(df_a, "_me","_partner",ignore_vars=False)
# Then
assert_frame_equal(output_values, expected_output_values)
suite = unittest.TestLoader().loadTestsFromTestCase(FeatureEngineeringTest)
unittest.TextTestRunner(verbosity=2).run(suite)
"""
Explanation: One quirk: the self.assertXX helpers (I tried self.assertEqual) did not work for comparing DataFrames here, which is why the test uses pandas' assert_frame_equal instead.
End of explanation
"""
|
buntyke/DataAnalysis | startup.ipynb | mit | # Hit shift + enter or use the run button to run this cell and see the results
print('hello world')
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
"""
Explanation: Startup IPy notebook for Intro to Data Analysis Course
Text Using Markdown
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
End of explanation
"""
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
"""
Explanation: Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
End of explanation
"""
class_name = "Nishanth Koganti"
message = class_name + " is awesome!"
message
"""
Explanation: Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
End of explanation
"""
|
feststelltaste/software-analytics | cheatbooks/groupby.ipynb | gpl-3.0 | import pandas as pd
df = pd.DataFrame({
"file" : ['hello.java', 'tutorial.md', 'controller.java', "build.sh", "deploy.sh"],
"dir" : ["src", "docs", "src", "src", "src"],
"bytes" : [54, 124, 36, 78, 62]
})
df
"""
Explanation: groupby
With groupby, you can group data in a DataFrame and apply calculations on those groups in various ways.
This Cheatbook (Cheatsheet + Notebook) introduces you to the core functionality of pandas' groupby function. Here you can find the executable Jupyter Notebook version to directly play around with it!
References
Here you can find out more about this function.
API Reference
Pandas Grouper and Agg Functions Explained
Understanding the Transform Function in Pandas
Example Scenario
This is an excerpt of a file list from a directory with the following information as separate columns / Series:
file: The name of the file
dir: The name of the directory where the file lives in
bytes: The size of the file in bytes
This data is stored into a pandas' DataFrame named df.
End of explanation
"""
df.groupby('dir')
"""
Explanation: When to use it
groupby is a great way to summarize data and build a higher-level view on your data (e.g., to go from code level to module level).
E.g., in our scenario, we could count the number of files per directory.
Let's take a look at this use case step by step.
Basic Principles
You can use the groupby function on our DataFrame df.
As parameter, you can put in the name (or a list of names) of the Series you want to group.
In our case, we want to group the directories / the Series dir.
End of explanation
"""
df.groupby('dir').groups
"""
Explanation: This gives you a GroupBy object. We can take a look at the built groups by inspecting the groups object of the GroupBy object.
End of explanation
"""
df.groupby('dir').count()
"""
Explanation: The groups object shows you the groups and their members, using their indexes.
Aggregating Values
Explanation: The groups object shows you the groups and their members, using their indexes.
Aggregating Values
Now we have built some groups, but now what? The next step is to decide what we want to do with the values that belong to a group. This means we need to tell the GroupBy object how we want to aggregate the values. We can apply a multitude of aggregating functions here, e.g.
count: count the number of entries of each group
End of explanation
"""
df.groupby('dir').first()
"""
Explanation: first: take the first entry of each group
End of explanation
"""
df.groupby('dir').max()
"""
Explanation: max: take the entry with the highest value
End of explanation
"""
df.groupby('dir').sum()
"""
Explanation: sum: sum up all values within one group
End of explanation
"""
df['ext'] = df["file"].str.split(".").str[-1]
df
"""
Explanation: This gives us the number of bytes of all files that reside in a directory. Note that there is no more file Series because it doesn't contain any values we could sum up. So this Series was thrown away.
We can also apply dedicated functions on each group using e.g.,
agg: apply a variety of aggregating functions on the groups (e.g., building the sum as well as counting the values at once)
apply: apply a custom function on each group to execute calculations as you like
transform: calculate summarizing values for each group (e.g., the sum of all entries for each group)
We'll see these operations later on!
More Advanced Use Cases
Let's dig deeper into our example scenario.
We want to find out which kind of files occupy what space in which directory.
For this, we extract the files' extensions from the file series.
We use the string split function to split by the . sign and keep just the last piece of the split file name (which is the file's extension).
End of explanation
"""
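Since agg and apply only get a brief mention above, here is a minimal, hedged sketch of agg on the same toy data (the DataFrame is rebuilt here so the snippet is self-contained; the combination of functions is just an illustration):

```python
import pandas as pd

# Same toy data as above, rebuilt so the sketch stands on its own
files = pd.DataFrame({
    "file": ['hello.java', 'tutorial.md', 'controller.java', "build.sh", "deploy.sh"],
    "dir": ["src", "docs", "src", "src", "src"],
    "bytes": [54, 124, 36, 78, 62]
})

# agg applies several aggregating functions at once:
# the total bytes and the number of files per directory.
summary = files.groupby('dir')['bytes'].agg(['sum', 'count'])
print(summary)
```

agg also accepts callables or a dict mapping columns to functions, so the same call generalizes to mixed summaries per column.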
dir_ext_bytes = df.groupby(['dir', 'ext']).sum()
dir_ext_bytes
"""
Explanation: We can then group this data in a more sophisticated way by using two Series for our groups.
We sum up the numeric values (= the bytes) for each file for each group.
End of explanation
"""
bytes_per_dir = dir_ext_bytes.groupby('dir').transform('sum')
bytes_per_dir
"""
Explanation: Last, we want to calculate the ratio of the files' bytes for each extension.
We first calculate the overall size of each directory by using transform.
Unlike the aggregations above, transform doesn't reduce each group to a single result.
Instead, it broadcasts the group-level result back to every row of the group, so the output has the same index as the input.
End of explanation
"""
dir_ext_bytes['all'] = bytes_per_dir
dir_ext_bytes
"""
Explanation: In our case, we summed up all the files' bytes of the file extensions per directory.
We can add this new information to our existing DataFrame.
End of explanation
"""
dir_ext_bytes['ratio'] = dir_ext_bytes['bytes'] / dir_ext_bytes['all']
dir_ext_bytes
"""
Explanation: Now we are able to calculate the ratio.
End of explanation
"""
|
dolittle007/dolittle007.github.io | notebooks/bayesian_neural_network_advi.ipynb | gpl-3.0 | %matplotlib inline
import theano
floatX = theano.config.floatX
import pymc3 as pm
import theano.tensor as T
import sklearn
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)
X = scale(X)
X = X.astype(floatX)
Y = Y.astype(floatX)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)
fig, ax = plt.subplots()
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')
sns.despine(); ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');
"""
Explanation: Variational Inference: Bayesian Neural Networks
(c) 2016 by Thomas Wiecki
Original blog post: http://twiecki.github.io/blog/2016/06/01/bayesian-deep-learning/
Current trends in Machine Learning
There are currently three big trends in machine learning: Probabilistic Programming, Deep Learning and "Big Data". Inside of PP, a lot of innovation is in making things scale using Variational Inference. In this blog post, I will show how to use Variational Inference in PyMC3 to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.
Probabilistic Programming at scale
Probabilistic Programming allows very flexible creation of custom probabilistic models and is mainly concerned with insight and learning from your data. The approach is inherently Bayesian so we can specify priors to inform and constrain our models and get uncertainty estimation in the form of a posterior distribution. Using MCMC sampling algorithms we can draw samples from this posterior to very flexibly estimate these models. PyMC3 and Stan are the current state-of-the-art tools to construct and estimate these models. One major drawback of sampling, however, is that it's often very slow, especially for high-dimensional models. That's why more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms instead fit a distribution (e.g. normal) to the posterior, turning a sampling problem into an optimization problem. ADVI -- Automatic Differentiation Variational Inference -- is implemented in PyMC3 and Stan, as well as a new package called Edward which is mainly concerned with Variational Inference.
Unfortunately, when it comes to traditional ML problems like classification or (non-linear) regression, Probabilistic Programming often plays second fiddle (in terms of accuracy and scalability) to more algorithmic approaches like ensemble learning (e.g. random forests or gradient boosted regression trees.
Deep Learning
Now in its third renaissance, deep learning has been making headlines repeatedly by dominating almost any object recognition benchmark, kicking ass at Atari games, and beating the world-champion Lee Sedol at Go. From a statistical point of view, Neural Networks are extremely good non-linear function approximators and representation learners. While mostly known for classification, they have been extended to unsupervised learning with AutoEncoders and in all sorts of other interesting ways (e.g. Recurrent Networks, or MDNs to estimate multimodal distributions). Why do they work so well? No one really knows as the statistical properties are still not fully understood.
A large part of the innovation in deep learning is the ability to train these extremely complex models. This rests on several pillars:
* Speed: utilizing the GPU allowed for much faster processing.
* Software: frameworks like Theano and TensorFlow allow flexible creation of abstract models that can then be optimized and compiled to CPU or GPU.
* Learning algorithms: training on sub-sets of the data -- stochastic gradient descent -- allows us to train these models on massive amounts of data. Techniques like drop-out avoid overfitting.
* Architectural: A lot of innovation comes from changing the input layers, like for convolutional neural nets, or the output layers, like for MDNs.
Bridging Deep Learning and Probabilistic Programming
On one hand we have Probabilistic Programming which allows us to build rather small and focused models in a very principled and well-understood way to gain insight into our data; on the other hand we have deep learning which uses many heuristics to train huge and highly complex models that are amazing at prediction. Recent innovations in variational inference allow probabilistic programming to scale model complexity as well as data size. We are thus at the cusp of being able to combine these two approaches to hopefully unlock new innovations in Machine Learning. For more motivation, see also Dustin Tran's recent blog post.
While this would allow Probabilistic Programming to be applied to a much wider set of interesting problems, I believe this bridging also holds great promise for innovations in Deep Learning. Some ideas are:
* Uncertainty in predictions: As we will see below, the Bayesian Neural Network informs us about the uncertainty in its predictions. I think uncertainty is an underappreciated concept in Machine Learning as it's clearly important for real-world applications. But it could also be useful in training. For example, we could train the model specifically on samples it is most uncertain about.
* Uncertainty in representations: We also get uncertainty estimates of our weights which could inform us about the stability of the learned representations of the network.
* Regularization with priors: Weights are often L2-regularized to avoid overfitting, this very naturally becomes a Gaussian prior for the weight coefficients. We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm).
* Transfer learning with informed priors: If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like GoogLeNet.
* Hierarchical Neural Networks: A very powerful approach in Probabilistic Programming is hierarchical modeling that allows pooling of things that were learned on sub-groups to the overall population (see my tutorial on Hierarchical Linear Regression in PyMC3). Applied to Neural Networks, in hierarchical data sets, we could train individual neural nets to specialize on sub-groups while still being informed about representations of the overall population. For example, imagine a network trained to classify car models from pictures of cars. We could train a hierarchical neural network where a sub-neural network is trained to tell apart models from only a single manufacturer. The intuition being that all cars from a certain manufacturer share certain similarities so it would make sense to train individual networks that specialize on brands. However, due to the individual networks being connected at a higher layer, they would still share information with the other specialized sub-networks about features that are useful to all brands. Interestingly, different layers of the network could be informed by various levels of the hierarchy -- e.g. early layers that extract visual lines could be identical in all sub-networks while the higher-order representations would be different. The hierarchical model would learn all that from the data.
* Other hybrid architectures: We can more freely build all kinds of neural networks. For example, Bayesian non-parametrics could be used to flexibly adjust the size and shape of the hidden layers to optimally scale the network architecture to the problem at hand during training. Currently, this requires costly hyper-parameter optimization and a lot of tribal knowledge.
Bayesian Neural Networks in PyMC3
Generating data
First, lets generate some toy data -- a simple binary classification problem that's not linearly separable.
End of explanation
"""
# Trick: Turn inputs and outputs into shared variables.
# It's still the same thing, but we can later change the values of the shared variable
# (to switch in the test-data later) and pymc3 will just use the new data.
# Kind-of like a pointer we can redirect.
# For more info, see: http://deeplearning.net/software/theano/library/compile/shared.html
ann_input = theano.shared(X_train)
ann_output = theano.shared(Y_train)
n_hidden = 5
# Initialize random weights between each layer
init_1 = np.random.randn(X.shape[1], n_hidden).astype(floatX)
init_2 = np.random.randn(n_hidden, n_hidden).astype(floatX)
init_out = np.random.randn(n_hidden).astype(floatX)
with pm.Model() as neural_network:
# Weights from input to hidden layer
weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
shape=(X.shape[1], n_hidden),
testval=init_1)
# Weights from 1st to 2nd layer
weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
shape=(n_hidden, n_hidden),
testval=init_2)
# Weights from hidden layer to output
weights_2_out = pm.Normal('w_2_out', 0, sd=1,
shape=(n_hidden,),
testval=init_out)
# Build neural-network using tanh activation function
act_1 = pm.math.tanh(pm.math.dot(ann_input,
weights_in_1))
act_2 = pm.math.tanh(pm.math.dot(act_1,
weights_1_2))
act_out = pm.math.sigmoid(pm.math.dot(act_2,
weights_2_out))
# Binary classification -> Bernoulli likelihood
out = pm.Bernoulli('out',
act_out,
observed=ann_output)
"""
Explanation: Model specification
A neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.
End of explanation
"""
%%time
with neural_network:
# Run ADVI which returns posterior means, standard deviations, and the evidence lower bound (ELBO)
v_params = pm.variational.advi(n=50000)
"""
Explanation: That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.
Variational Inference: Scaling model complexity
We could now just run an MCMC sampler like NUTS, which works pretty well in this case, but as I already mentioned, this will become very slow as we scale our model up to deeper architectures with more layers.
Instead, we will use the brand-new ADVI variational inference algorithm which was recently added to PyMC3. This is much faster and will scale better. Note, that this is a mean-field approximation so we ignore correlations in the posterior.
End of explanation
"""
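As an aside, the deterministic computation this model encodes is just a stack of matrix products and non-linearities. Below is a numpy-only sketch of that forward pass, including the bias terms b that were omitted from the model above to keep the code cleaner. All weights here are arbitrary random stand-ins, not the fitted posterior:

```python
import numpy as np

rng = np.random.RandomState(0)
n_in, n_hidden = 2, 5

# Arbitrary stand-ins for the network parameters
w_in_1 = rng.randn(n_in, n_hidden)
b_1 = rng.randn(n_hidden)            # bias terms omitted in the PyMC3 model
w_1_2 = rng.randn(n_hidden, n_hidden)
b_2 = rng.randn(n_hidden)
w_2_out = rng.randn(n_hidden)
b_out = rng.randn()

def forward(X):
    # Same structure as the model above: tanh -> tanh -> sigmoid
    act_1 = np.tanh(X.dot(w_in_1) + b_1)
    act_2 = np.tanh(act_1.dot(w_1_2) + b_2)
    return 1.0 / (1.0 + np.exp(-(act_2.dot(w_2_out) + b_out)))

probs = forward(rng.randn(4, n_in))  # class-1 probabilities, one per row
```

In the PyMC3 model, the biases would simply be additional pm.Normal variables (e.g. with shape=(n_hidden,)) added before each activation.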
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
"""
Explanation: < 20 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.
As samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so not at all the same as MCMC):
End of explanation
"""
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
"""
Explanation: Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.
End of explanation
"""
# Replace shared variables with testing set
ann_input.set_value(X_test)
ann_output.set_value(Y_test)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
# Use probability of > 0.5 to assume prediction of class 1
pred = ppc['out'].mean(axis=0) > 0.5
fig, ax = plt.subplots()
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
sns.despine()
ax.set(title='Predicted labels in testing set', xlabel='X', ylabel='Y');
print('Accuracy = {}%'.format((Y_test == pred).mean() * 100))
"""
Explanation: Now that we trained our model, lets predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
End of explanation
"""
grid = np.mgrid[-3:3:100j,-3:3:100j].astype(floatX)
grid_2d = grid.reshape(2, -1).T
dummy_out = np.ones(grid_2d.shape[0], dtype=np.int8)
ann_input.set_value(grid_2d)
ann_output.set_value(dummy_out)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
"""
Explanation: Hey, our neural network did all right!
Lets look at what the classifier has learned
For this, we evaluate the class probability predictions on a grid over the whole input space.
End of explanation
"""
cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');
"""
Explanation: Probability surface
End of explanation
"""
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');
"""
Explanation: Uncertainty in predicted value
So far, everything I showed we could have done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like:
End of explanation
"""
from six.moves import zip
# Set back to original data to retrain
ann_input.set_value(X_train)
ann_output.set_value(Y_train)
# Tensors and RV that will be using mini-batches
minibatch_tensors = [ann_input, ann_output]
minibatch_RVs = [out]
# Generator that returns mini-batches in each iteration
def create_minibatch(data):
rng = np.random.RandomState(0)
while True:
# Return random data samples of set size 100 each iteration
ixs = rng.randint(len(data), size=50)
yield data[ixs]
minibatches = zip(
create_minibatch(X_train),
create_minibatch(Y_train),
)
total_size = len(Y_train)
"""
Explanation: We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.
Mini-batch ADVI: Scaling data size
So far, we have trained our model on all data at once. Obviously this won't scale to something like ImageNet. Moreover, training on mini-batches of data (stochastic gradient descent) avoids local minima and can lead to faster convergence.
Fortunately, ADVI can be run on mini-batches as well. It just requires some setting up:
End of explanation
"""
%%time
with neural_network:
# Run advi_minibatch
v_params = pm.variational.advi_minibatch(
n=50000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-2, epsilon=1.0
)
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
sns.despine()
"""
Explanation: While the above might look a bit daunting, I really like the design. In particular, the fact that you define a generator allows for great flexibility. In principle, we could just pull from a database there and not have to keep all the data in RAM.
Let's pass those to advi_minibatch():
End of explanation
"""
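The database idea mentioned above could look roughly like the following sketch: a generator that pulls random mini-batches from a SQLite table, so only one batch lives in memory at a time. Table and column names here are made up for illustration; this is not part of the original post:

```python
import sqlite3
import random

# Build a toy store; in practice this would be a real, on-disk database
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE samples (x1 REAL, x2 REAL, y INTEGER)')
rows = [(random.random(), random.random(), i % 2) for i in range(500)]
conn.executemany('INSERT INTO samples VALUES (?, ?, ?)', rows)

def db_minibatches(conn, batch_size=50):
    # Each iteration fetches a fresh random batch from the database,
    # so the full data set never has to sit in RAM.
    while True:
        cur = conn.execute(
            'SELECT x1, x2, y FROM samples ORDER BY RANDOM() LIMIT ?',
            (batch_size,))
        yield cur.fetchall()

batch = next(db_minibatches(conn))
print(len(batch))  # 50
```

In principle, such a generator could be zipped into minibatches just like create_minibatch above.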
pm.traceplot(trace);
"""
Explanation: As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.
For fun, we can also look at the trace. The point is that we also get uncertainty estimates of our Neural Network weights.
End of explanation
"""
|
RyRose/College-Projects | lab4/Lab 4.ipynb | mit | ## Imports!
%matplotlib inline
import os
import re
import string
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from matplotlib.mlab import PCA
from scipy.cluster.vq import kmeans, vq
"""
Explanation: Lab 4
Ryan Rose
Scientific Computing
9/21/2016
End of explanation
"""
os.chdir("/home/ryan/School/scientific_computing/labs/lab4/books")
filenames = os.listdir()
books = []
for name in filenames:
with open(name) as f:
books.append(f.read())
"""
Explanation: Loading Fifty Books
First, we load all fifty books from their text files.
End of explanation
"""
def get_title(text):
pattern = "\*\*\*\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\*\*\*"
m = re.search(pattern, text)
if m:
return m.group(2).strip()
return None
def remove_gutenberg_info(text):
pattern = "\*\*\*\s*START OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\*\*\*"
start = re.search(pattern, text).end()
pattern = "\*\*\*\s*END OF (THIS|THE) PROJECT GUTENBERG EBOOK ([A-Z,;' ]*)\*\*\*"
end = re.search(pattern, text).start()
return text[start:end]
cut_off_books = { get_title(book):remove_gutenberg_info(book) for book in books}
pd.DataFrame(cut_off_books, index=["Book's Text"]).T.head()
"""
Explanation: Cleaning up the Data
Next, we create a mapping from each title to its book's text, removing the Project Gutenberg header and footer along the way.
End of explanation
"""
def strip_word(word, alphabet):
ret = ""
for c in word:
if c in alphabet:
ret += c.lower()
if len(ret) == 0:
return None
else:
return ret
def get_words(book):
alphabet = set(string.ascii_letters)
b = book.split()
words = []
for word in b:
w = strip_word(word, alphabet)
if w:
words.append(w)
return words
cut_books = {name:get_words(book) for name, book in cut_off_books.items()}
"""
Explanation: Next, we iterate through all of the words, stripping out all characters that are not upper- or lower-case ASCII letters. If the resulting word is empty, we throw it out; otherwise, we add the word, in all lowercase and stripped of non-letter characters, to our list of words for that book.
This is useful to determine word frequencies.
End of explanation
"""
def get_word_freq(words):
word_counts = {}
for word in words:
if word in word_counts:
word_counts[word] += 1
else:
word_counts[word] = 1
return word_counts
book_freqs = {}
for name, words in cut_books.items():
book_freqs[name] = get_word_freq(words)
"""
Explanation: Determining Frequencies
Now, we determine the frequencies for each word and put them in a dictionary for each book.
End of explanation
"""
total_word_count = {}
for dicts in book_freqs.values():
for word, count in dicts.items():
if word in total_word_count:
total_word_count[word] += count
else:
total_word_count[word] = count
a, b = zip(*total_word_count.items())
tuples = list(zip(b, a))
tuples.sort()
tuples.reverse()
tuples[:20]
_, top_20_words = zip(*tuples[:20])
top_20_words
"""
Explanation: Top 20 Words
Now, let's determine the top 20 words across the whole corpus
End of explanation
"""
def filter_frequencies(frequencies, words):
d = {}
for word, freq in frequencies.items():
if word in words:
d[word] = freq
return d
labels = {}
for name, freqs in book_freqs.items():
labels[name] = filter_frequencies(freqs, top_20_words)
df = pd.DataFrame(labels).fillna(0)
df = (df / df.sum()).T
df.head()
"""
Explanation: Creating the 20-dimensional vectors
Using the top 20 words above, let's determine the book vectors.
End of explanation
"""
kvals = []
dists = []
for k in range(2, 11):
centroids, distortion = kmeans(df, k)
kvals.append(k)
dists.append(distortion)
plt.plot(kvals, dists)
plt.show()
"""
Explanation: Creating the Elbow Graph
Let's try each k and see what makes the sharpest elbow.
End of explanation
"""
centroids, _ = kmeans(df, 3)
idx, _ = vq(df, centroids)
clusters = {}
for i, cluster in enumerate(idx):
if cluster in clusters:
clusters[cluster].append(df.iloc[i].name)
else:
clusters[cluster] = [df.iloc[i].name]
clusters
"""
Explanation: We can see that the best k is 3 or 6.
Clustering
Let's cluster based on k = 3 and plot the clusters.
End of explanation
"""
m = PCA(df)
fig, ax = plt.subplots()
for i in range(len(idx)):
plt.plot(m.Y[idx==i, 0], m.Y[idx==i, 1], "o", alpha=.75)
for index, (x, y) in enumerate(zip(m.Y[:, 0], m.Y[:, 1])):
plt.text(x, y, df.index[index])
fig.set_size_inches(36,40)
plt.show()
m.sigma.sort_values()[-2:]
"""
Explanation: Do the clusters make sense?
Yes. For instance, we can see that The Republic and The Iliad of Homer are in the same cluster.
Performing PCA
Now, let's perform PCA and determine the most important elements and plot the clusters.
End of explanation
"""
with open("../pg45.txt") as f:
anne = f.read()
get_title(anne)
anne_cut = remove_gutenberg_info(anne)
anne_words = get_words(anne_cut)
anne_freq = {get_title(anne):filter_frequencies(get_word_freq(anne_words), top_20_words)}
anne_frame = pd.DataFrame(anne_freq).fillna(0)
anne_frame = (anne_frame / anne_frame.sum()).T
anne_frame
"""
Explanation: We can see the data clusters well, and the most important words are i and the, since they have the highest standard deviations. This follows from PCA.fracs being proportional to the variance, per this documentation: https://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml. Since PCA.sigma is the square root of the variance, the highest standard deviations correspond to the highest values of PCA.fracs, so i and the are the most important words.
New Book
So, we continue as before by loading Anne of Green Gables, parsing it, creating an array, and normalizing the book vector.
End of explanation
"""
df_with_anne = df.append(anne_frame).sort_index()
centroids, _ = kmeans(df_with_anne, 3)
idx2, _ = vq(df_with_anne, centroids)
clusters = {}
for i, cluster in enumerate(idx2):
if cluster in clusters:
clusters[cluster].append(df_with_anne.iloc[i].name)
else:
clusters[cluster] = [df_with_anne.iloc[i].name]
clusters
coords = m.project(np.array(anne_frame).flatten())
fig, _ = plt.subplots()
plt.plot(coords[0], coords[1], "s", markeredgewidth=5)
for i in range(len(idx)):
plt.plot(m.Y[idx==i, 0], m.Y[idx==i, 1], "o", alpha=.75)
for index, (x, y) in enumerate(zip(m.Y[:, 0], m.Y[:, 1])):
plt.text(x, y, df.index[index])
fig.set_size_inches(36,40)
plt.show()
"""
Explanation: Now, let's do k-means based on the previously determined k.
End of explanation
"""
stop_words_text = open("../common-english-words.txt").read()
stop_words = stop_words_text.split(",")
stop_words[:5]
word_counts_without_stop = [t for t in tuples if t[1] not in stop_words]
word_counts_without_stop[:20]
_, top_20_without_stop = zip(*word_counts_without_stop[:20])
top_20_without_stop
no_stop_labels = {}
for name, freqs in book_freqs.items():
no_stop_labels[name] = filter_frequencies(freqs, top_20_without_stop)
df_without_stop = pd.DataFrame(no_stop_labels).fillna(0)
df_without_stop = (df_without_stop / df_without_stop.sum()).T
df_without_stop.head()
kvals = []
dists = []
for k in range(2, 11):
centroids, distortion = kmeans(df_without_stop, k)
kvals.append(k)
dists.append(distortion)
plt.plot(kvals, dists)
plt.show()
"""
Explanation: We can see that the new book is the black square above. In addition, it makes sense that it fits into that cluster, especially when we compare it to Jane Eyre.
Stop Words
End of explanation
"""
centroids, _ = kmeans(df_without_stop, 7)
idx3, _ = vq(df_without_stop, centroids)
clusters = {}
for i, cluster in enumerate(idx3):
if cluster in clusters:
clusters[cluster].append(df_without_stop.iloc[i].name)
else:
clusters[cluster] = [df_without_stop.iloc[i].name]
clusters
m2 = PCA(df_without_stop)
fig, _ = plt.subplots()
for i in range(len(idx3)):
plt.plot(m2.Y[idx3==i, 0], m2.Y[idx3==i, 1], "o", alpha=.75)
for index, (x, y) in enumerate(zip(m2.Y[:, 0], m2.Y[:, 1])):
plt.text(x, y, df_without_stop.index[index])
fig.set_size_inches(36,40)
plt.show()
m2.sigma.sort_values()[-2:]
"""
Explanation: We can see that our k could be 3 or 7. Let's choose 7.
End of explanation
"""
|
jpcofr/svgpathtools | README.ipynb | mit | from __future__ import division, print_function
# Coordinates are given as points in the complex plane
from svgpathtools import Path, Line, QuadraticBezier, CubicBezier, Arc
seg1 = CubicBezier(300+100j, 100+100j, 200+200j, 200+300j) # A cubic beginning at (300, 100) and ending at (200, 300)
seg2 = Line(200+300j, 250+350j) # A line beginning at (200, 300) and ending at (250, 350)
path = Path(seg1, seg2) # A path traversing the cubic and then the line
# We could alternatively created this Path object using a d-string
from svgpathtools import parse_path
path_alt = parse_path('M 300 100 C 100 100 200 200 200 300 L 250 350')
# Let's check that these two methods are equivalent
print(path)
print(path_alt)
print(path == path_alt)
# On a related note, the Path.d() method returns a Path object's d-string
print(path.d())
print(parse_path(path.d()) == path)
"""
Explanation: svgpathtools
svgpathtools is a collection of tools for manipulating and analyzing SVG Path objects and Bézier curves.
Features
svgpathtools contains functions designed to easily read, write and display SVG files as well as a large selection of geometrically-oriented tools to transform and analyze path elements.
Additionally, the submodule bezier.py contains tools for working with general nth-order Bézier curves stored as n-tuples.
Some included tools:
read, write, and display SVG files containing Path (and other) SVG elements
convert Bézier path segments to numpy.poly1d (polynomial) objects
convert polynomials (in standard form) to their Bézier form
compute tangent vectors and (right-hand rule) normal vectors
compute curvature
break discontinuous paths into their continuous subpaths.
efficiently compute intersections between paths and/or segments
find a bounding box for a path or segment
reverse segment/path orientation
crop and split paths and segments
smooth paths (i.e. smooth away kinks to make paths differentiable)
transition maps from path domain to segment domain and back (T2t and t2T)
compute area enclosed by a closed path
compute arc length
compute inverse arc length
convert RGB color tuples to hexadecimal color strings and back
Prerequisites
numpy
svgwrite
Setup
If not already installed, you can install the prerequisites using pip.
bash
$ pip install numpy
bash
$ pip install svgwrite
Then install svgpathtools:
bash
$ pip install svgpathtools
Alternative Setup
You can download the source from Github and install by using the command (from inside the folder containing setup.py):
bash
$ python setup.py install
Credit where credit's due
Much of the core of this module was taken from the svg.path (v2.0) module. Interested svg.path users should see the compatibility notes at bottom of this readme.
Basic Usage
Classes
The svgpathtools module is primarily structured around four path segment classes: Line, QuadraticBezier, CubicBezier, and Arc. There is also a fifth class, Path, whose objects are sequences of (connected or disconnected<sup id="a1">1</sup>) path segment objects.
Line(start, end)
Arc(start, radius, rotation, large_arc, sweep, end) Note: See docstring for a detailed explanation of these parameters
QuadraticBezier(start, control, end)
CubicBezier(start, control1, control2, end)
Path(*segments)
See the relevant docstrings in path.py or the official SVG specifications for more information on what each parameter means.
<u id="f1">1</u> Warning: Some of the functionality in this library has not been tested on discontinuous Path objects. A simple workaround is provided, however, by the Path.continuous_subpaths() method. ↩
End of explanation
"""
# Let's append another to the end of it
path.append(CubicBezier(250+350j, 275+350j, 250+225j, 200+100j))
print(path)
# Let's replace the first segment with a Line object
path[0] = Line(200+100j, 200+300j)
print(path)
# You may have noticed that this path is connected and now is also closed (i.e. path.start == path.end)
print("path is continuous? ", path.iscontinuous())
print("path is closed? ", path.isclosed())
# The curve the path follows is not, however, smooth (differentiable)
from svgpathtools import kinks, smoothed_path
print("path contains non-differentiable points? ", len(kinks(path)) > 0)
# If we want, we can smooth these out (Experimental and only for line/cubic paths)
# Note: smoothing will always works (except on 180 degree turns), but you may want
# to play with the maxjointsize and tightness parameters to get pleasing results
# Note also: smoothing will increase the number of segments in a path
spath = smoothed_path(path)
print("spath contains non-differentiable points? ", len(kinks(spath)) > 0)
print(spath)
# Let's take a quick look at the path and its smoothed relative
# The following commands will open two browser windows to display path and spaths
from svgpathtools import disvg
from time import sleep
disvg(path)
sleep(1) # needed when not giving the SVGs unique names (or not using timestamp)
disvg(spath)
print("Notice that path contains {} segments and spath contains {} segments."
"".format(len(path), len(spath)))
"""
Explanation: The Path class is a mutable sequence, so it behaves much like a list.
So segments can be appended, inserted, set by index, deleted, enumerated, sliced out, etc.
End of explanation
"""
# Read SVG into a list of path objects and list of dictionaries of attributes
from svgpathtools import svg2paths, wsvg
paths, attributes = svg2paths('test.svg')
# Update: You can now also extract the svg-attributes by setting
# return_svg_attributes=True, or with the convenience function svg2paths2
from svgpathtools import svg2paths2
paths, attributes, svg_attributes = svg2paths2('test.svg')
# Let's print out the first path object and the color it was in the SVG
# We'll see it is composed of two CubicBezier objects and, in the SVG file it
# came from, it was red
redpath = paths[0]
redpath_attribs = attributes[0]
print(redpath)
print(redpath_attribs['stroke'])
"""
Explanation: Reading SVGSs
The svg2paths() function converts an SVG file to a list of Path objects and a separate list of dictionaries containing the attributes of each path.
Note: Line, Polyline, Polygon, and Path SVG elements can all be converted to Path objects using this function.
End of explanation
"""
# Let's make a new SVG that's identical to the first
wsvg(paths, attributes=attributes, svg_attributes=svg_attributes, filename='output1.svg')
"""
Explanation: Writing SVGSs (and some geometric functions and methods)
The wsvg() function creates an SVG file from a list of paths. This function can do many things (see the docstring in paths2svg.py for more information) and is meant to be quick and easy to use.
Note: Use the convenience function disvg() (or set 'openinbrowser=True') to automatically attempt to open the created svg file in your default SVG viewer.
End of explanation
"""
# Example:
# Let's check that the first segment of redpath starts
# at the same point as redpath
firstseg = redpath[0]
print(redpath.point(0) == firstseg.point(0) == redpath.start == firstseg.start)
# Let's check that the last segment of redpath ends on the same point as redpath
lastseg = redpath[-1]
print(redpath.point(1) == lastseg.point(1) == redpath.end == lastseg.end)
# This next boolean should return False as redpath is composed multiple segments
print(redpath.point(0.5) == firstseg.point(0.5))
# If we want to figure out which segment of redpoint the
# point redpath.point(0.5) lands on, we can use the path.T2t() method
k, t = redpath.T2t(0.5)
print(redpath[k].point(t) == redpath.point(0.5))
"""
Explanation: There will be many more examples of writing and displaying path data below.
The .point() method and transitioning between path and path segment parameterizations
SVG Path elements and their segments have official parameterizations.
These parameterizations can be accessed using the Path.point(), Line.point(), QuadraticBezier.point(), CubicBezier.point(), and Arc.point() methods.
All these parameterizations are defined over the domain 0 <= t <= 1.
Note: In this document and in inline documentation and docstrings, I use a capital T when referring to the parameterization of a Path object and a lower-case t when referring to path segment objects (i.e. Line, QuadraticBezier, CubicBezier, and Arc objects).
Given a T value, the Path.T2t() method can be used to find the corresponding segment index, k, and segment parameter, t, such that path.point(T)=path[k].point(t).
There is also a Path.t2T() method to solve the inverse problem.
End of explanation
"""
# Example:
b = CubicBezier(300+100j, 100+100j, 200+200j, 200+300j)
p = b.poly()
# p(t) == b.point(t)
print(p(0.235) == b.point(0.235))
# What is p(t)? It's just the cubic b written in standard form.
bpretty = "{}*(1-t)^3 + 3*{}*(1-t)^2*t + 3*{}*(1-t)*t^2 + {}*t^3".format(*b.bpoints())
print("The CubicBezier, b.point(x) = \n\n" +
bpretty + "\n\n" +
"can be rewritten in standard form as \n\n" +
str(p).replace('x','t'))
"""
Explanation: Bezier curves as NumPy polynomial objects
Another great way to work with the parameterizations for Line, QuadraticBezier, and CubicBezier objects is to convert them to numpy.poly1d objects. This is done easily using the Line.poly(), QuadraticBezier.poly() and CubicBezier.poly() methods.
There's also a polynomial2bezier() function in the pathtools.py submodule to convert polynomials back to Bezier curves.
Note: cubic Bezier curves are parameterized as $$\mathcal{B}(t) = P_0(1-t)^3 + 3P_1(1-t)^2t + 3P_2(1-t)t^2 + P_3t^3$$
where $P_0$, $P_1$, $P_2$, and $P_3$ are the control points start, control1, control2, and end, respectively, that svgpathtools uses to define a CubicBezier object. The CubicBezier.poly() method expands this polynomial to its standard form
$$\mathcal{B}(t) = c_0t^3 + c_1t^2 + c_2t + c_3$$
where
$$\begin{bmatrix}c_0\\c_1\\c_2\\c_3\end{bmatrix} =
\begin{bmatrix}
-1 & 3 & -3 & 1\\
3 & -6 & 3 & 0\\
-3 & 3 & 0 & 0\\
1 & 0 & 0 & 0\\
\end{bmatrix}
\begin{bmatrix}P_0\\P_1\\P_2\\P_3\end{bmatrix}$$
QuadraticBezier.poly() and Line.poly() are defined similarly.
End of explanation
"""
t = 0.5
### Method 1: the easy way
u1 = b.unit_tangent(t)
### Method 2: another easy way
# Note: This way will fail if it encounters a removable singularity.
u2 = b.derivative(t)/abs(b.derivative(t))
### Method 3: a third easy way
# Note: This way will also fail if it encounters a removable singularity.
dp = p.deriv()
u3 = dp(t)/abs(dp(t))
### Method 4: the removable-singularity-proof numpy.poly1d way
# Note: This is roughly how Method 1 works
from svgpathtools import real, imag, rational_limit
dx, dy = real(dp), imag(dp) # dp == dx + 1j*dy
p_mag2 = dx**2 + dy**2 # p_mag2(t) = |p(t)|**2
# Note: abs(dp) isn't a polynomial, but abs(dp)**2 is, and,
# the limit_{t->t0}[f(t) / abs(f(t))] ==
# sqrt(limit_{t->t0}[f(t)**2 / abs(f(t))**2])
from cmath import sqrt
u4 = sqrt(rational_limit(dp**2, p_mag2, t))
print("unit tangent check:", u1 == u2 == u3 == u4)
# Let's do a visual check
mag = b.length()/4 # so it's not hard to see the tangent line
tangent_line = Line(b.point(t), b.point(t) + mag*u1)
disvg([b, tangent_line], 'bg', nodes=[b.point(t)])
"""
Explanation: The ability to convert between Bezier objects to NumPy polynomial objects is very useful. For starters, we can take turn a list of Bézier segments into a NumPy array
Numpy Array operations on Bézier path segments
Example available here
To further illustrate the power of being able to convert our Bezier curve objects to numpy.poly1d objects and back, let's compute the unit tangent vector of the above CubicBezier object, b, at t=0.5 in four different ways.
Tangent vectors (and more on NumPy polynomials)
End of explanation
"""
# Speaking of tangents, let's add a normal vector to the picture
n = b.normal(t)
normal_line = Line(b.point(t), b.point(t) + mag*n)
disvg([b, tangent_line, normal_line], 'bgp', nodes=[b.point(t)])
# and let's reverse the orientation of b!
# the tangent and normal lines should be sent to their opposites
br = b.reversed()
# Let's also shift b_r over a bit to the right so we can view it next to b
# The simplest way to do this is br = br.translated(3*mag), but let's use
# the .bpoints() instead, which returns a Bezier's control points
br.start, br.control1, br.control2, br.end = [3*mag + bpt for bpt in br.bpoints()] #
tangent_line_r = Line(br.point(t), br.point(t) + mag*br.unit_tangent(t))
normal_line_r = Line(br.point(t), br.point(t) + mag*br.normal(t))
wsvg([b, tangent_line, normal_line, br, tangent_line_r, normal_line_r],
'bgpkgp', nodes=[b.point(t), br.point(t)], filename='vectorframes.svg',
text=["b's tangent", "br's tangent"], text_path=[tangent_line, tangent_line_r])
"""
Explanation: Translations (shifts), reversing orientation, and normal vectors
End of explanation
"""
# Let's take a Line and an Arc and make some pictures
top_half = Arc(start=-1, radius=1+2j, rotation=0, large_arc=1, sweep=1, end=1)
midline = Line(-1.5, 1.5)
# First let's make our ellipse whole
bottom_half = top_half.rotated(180)
decorated_ellipse = Path(top_half, bottom_half)
# Now let's add the decorations
for k in range(12):
decorated_ellipse.append(midline.rotated(30*k))
# Let's move it over so we can see the original Line and Arc object next
# to the final product
decorated_ellipse = decorated_ellipse.translated(4+0j)
wsvg([top_half, midline, decorated_ellipse], filename='decorated_ellipse.svg')
"""
Explanation: Rotations and Translations
End of explanation
"""
# First we'll load the path data from the file test.svg
paths, attributes = svg2paths('test.svg')
# Let's mark the parametric midpoint of each segment
# I say "parametric" midpoint because Bezier curves aren't
# parameterized by arclength
# If they're also the geometric midpoint, let's mark them
# purple and otherwise we'll mark the geometric midpoint green
min_depth = 5
error = 1e-4
dots = []
ncols = []
nradii = []
for path in paths:
for seg in path:
parametric_mid = seg.point(0.5)
seg_length = seg.length()
if seg.length(0.5)/seg.length() == 1/2:
dots += [parametric_mid]
ncols += ['purple']
nradii += [5]
else:
t_mid = seg.ilength(seg_length/2)
geo_mid = seg.point(t_mid)
dots += [parametric_mid, geo_mid]
ncols += ['red', 'green']
nradii += [5] * 2
# In 'output2.svg' the paths will retain their original attributes
wsvg(paths, nodes=dots, node_colors=ncols, node_radii=nradii,
attributes=attributes, filename='output2.svg')
"""
Explanation: arc length and inverse arc length
Here we'll create an SVG that shows off the parametric and geometric midpoints of the paths from test.svg. We'll need to use the Path.length(), Line.length(), QuadraticBezier.length(), CubicBezier.length(), and Arc.length() methods, as well as the related inverse-arc-length method .ilength(), to do this.
End of explanation
"""
# Let's find all intersections between redpath and the other
redpath = paths[0]
redpath_attribs = attributes[0]
intersections = []
for path in paths[1:]:
for (T1, seg1, t1), (T2, seg2, t2) in redpath.intersect(path):
intersections.append(redpath.point(T1))
disvg(paths, filename='output_intersections.svg', attributes=attributes,
nodes = intersections, node_radii = [5]*len(intersections))
"""
Explanation: Intersections between Bezier curves
End of explanation
"""
from svgpathtools import parse_path, Line, Path, wsvg
def offset_curve(path, offset_distance, steps=1000):
"""Takes in a Path object, `path`, and a distance,
`offset_distance`, and outputs a piecewise-linear approximation
of the 'parallel' offset curve."""
nls = []
for seg in path:
for k in range(steps):
t = k / steps
offset_vector = offset_distance * seg.normal(t)
nl = Line(seg.point(t), seg.point(t) + offset_vector)
nls.append(nl)
connect_the_dots = [Line(nls[k].end, nls[k+1].end) for k in range(len(nls)-1)]
if path.isclosed():
connect_the_dots.append(Line(nls[-1].end, nls[0].end))
offset_path = Path(*connect_the_dots)
return offset_path
# Examples:
path1 = parse_path("m 288,600 c -52,-28 -42,-61 0,-97 ")
path2 = parse_path("M 151,395 C 407,485 726.17662,160 634,339").translated(300)
path3 = parse_path("m 117,695 c 237,-7 -103,-146 457,0").translated(500+400j)
paths = [path1, path2, path3]
offset_distances = [10*k for k in range(1,51)]
offset_paths = []
for path in paths:
for distance in offset_distances:
offset_paths.append(offset_curve(path, distance))
# Note: This will take a few moments
wsvg(paths + offset_paths, 'g'*len(paths) + 'r'*len(offset_paths), filename='offset_curves.svg')
"""
Explanation: An Advanced Application: Offsetting Paths
Here we'll find the offset curve for a few paths.
End of explanation
"""
|
jjehl/poppy_education | python-divers/python_language_objet.ipynb | gpl-2.0 | objet1 = 'bol'
"""
Explanation: Sequence 1 - Discovering object-oriented programming and the Python language
Activity 1 - Manipulating Python objects
Skills targeted by this activity:
Know how to create variables of the string and list types. Use a method bound to an object via the object.method() syntax.
Mathematics curriculum - seconde (grade 10):
The use of software (calculator or computer), of tools for visualization and representation, for (numerical or symbolic) computation, for simulation, and for programming develops the ability to experiment, opens wide the dialectic between observation and proof, and profoundly changes the nature of teaching.
http://cache.media.education.gouv.fr/file/30/52/3/programme_mathematiques_seconde_65523.pdf (page 2)
When writing algorithms and small programs, students should be given good habits of rigor and trained in systematic practices of verification and checking.
http://cache.media.education.gouv.fr/file/30/52/3/programme_mathematiques_seconde_65523.pdf (page 9)
Mathematics curriculum - première (grade 11):
Algorithmics: in grade 10, students designed and implemented a few algorithms. This training continues throughout the final cycle.
http://cache.media.education.gouv.fr/file/special_9/21/1/mathsS_155211.pdf (page 6)
ISN curriculum - terminale (grade 12): discovering the digital world
SI curriculum - etc.
Modern programming languages are all so-called "object-oriented" languages. It is therefore better to say that we do not write lines of code but rather that we manipulate objects. This principle is fundamental and strongly structures how a program is written.
The activity that follows aims to introduce some of the vocabulary and punctuation of the Python language used to program our robot.
But all this is a bit abstract, so let's move on to examples.
Imagine that we want to define an object that is a bowl ('bol'). The syntax to use is the following:
End of explanation
"""
print objet1
"""
Explanation: We say that you have assigned the value 'bol' to the variable objet1.
Now, if I want to know what objet1 is, I do:
End of explanation
"""
# Write your code below and run it by clicking the play button in the menu bar:
objet2 = 'assiette'
print objet2
"""
Explanation: Your turn: create an objet2 that is a plate ('assiette'), then display this object:
End of explanation
"""
placard = [objet1,objet2]
"""
Explanation: We now want to group our objects into a list of objects that we will call a cupboard ('placard'); the syntax to build it is the following:
End of explanation
"""
# Write your code below and run it by clicking the play button in the menu bar:
print placard
"""
Explanation: Now it's your turn to display the contents of placard using the print statement.
End of explanation
"""
objet3 = 'fourchette'
placard.append(objet3)
"""
Explanation: To create an objet3 that is a fork ('fourchette') and add it to my placard, I do:
End of explanation
"""
# Write your code below and run it by clicking the play button in the menu bar:
objet4 = 'verre'
placard.append(objet4)
print placard
# !!!!!!!! Result subject to assessment !!!!!!!!
"""
Explanation: Now, display the contents of placard again by re-running a previous cell.
To add the fork to the placard, we used a method of the placard object. This method is called "append" and lets you add a value to a list. When I want the "append" method to act on the placard object, I put a dot "." between placard and append.
The dot is a very important punctuation mark in Python; it gives access to what is inside an object.
Your turn: create a new object verre (a glass) and add it to our placard list. Then display the contents of placard:
End of explanation
"""
|
idc9/law-net | vertex_metrics_experiment/data_pipline_scotus.ipynb | mit | setup_data_dir(data_dir)
make_subnetwork_directory(data_dir, network_name)
"""
Explanation: set up the data directory
End of explanation
"""
download_op_and_cl_files(data_dir, network_name)
"""
Explanation: data download
get opinion and cluster files from CourtListener
opinions/cluster files are saved in data_dir/raw/court/
End of explanation
"""
download_master_edgelist(data_dir)
"""
Explanation: get the master edgelist from CL
master edgelist is saved in data_dir/raw/
End of explanation
"""
download_scdb(data_dir)
"""
Explanation: download scdb data from SCDB
scdb data is saved in data_dir/scdb
End of explanation
"""
# create the raw case metadata data frame in the raw/ folder
make_subnetwork_raw_case_metadata(data_dir, network_name)
# create clean case metadata and edgelist from raw data
clean_metadata_and_edgelist(data_dir, network_name)
"""
Explanation: network data
make the case metadata and edgelist
add the raw case metadata data frame to the raw/ folder
remove cases missing scdb ids
remove detroit lumber case
get edgelist of cases within desired subnetwork
save case metadata and edgelist to the experiment_dir/
End of explanation
"""
make_graph(subnet_dir, network_name)
"""
Explanation: make graph
creates the network with the desired case metadata and saves it as a .graphml file in experiment_dir/
End of explanation
"""
%%time
make_network_textfiles(data_dir, network_name)
"""
Explanation: NLP data
make case text files
grabs the opinion text for each case in the network and saves them as a text file in experiment_dir/textfiles/
End of explanation
"""
%%time
make_tf_idf(text_dir, subnet_dir + 'nlp/')
"""
Explanation: make tf-idf matrix
creates the tf-idf matrix for the corpus of cases in the network and saves it to subnet_dir + 'nlp/'
End of explanation
"""
# load the graph
G = ig.Graph.Read_GraphML(subnet_dir + network_name +'_network.graphml')
G.summary()
"""
Explanation: Load network
End of explanation
"""
vertex_metrics = ['indegree', 'outdegree', 'degree',
'd_pagerank','u_pagerank','rev_pagerank',
'authorities', 'hubs',
'd_eigen', 'u_eigen',
'd_betweenness', 'u_betweenness',
'd_in_closeness', 'd_out_closeness',
'd_all_closeness', 'u_closeness']
# add recent citations
vertex_metrics += ['recentcite_' + str(t) for t in np.arange(1, 10 + 1)]
vertex_metrics += ['recentcite_' + str(t) for t in [15, 20, 25, 30, 35, 40]]
vertex_metrics += ['citerank_' + str(t) for t in [1, 2, 5, 10, 20, 50]]
vertex_metrics += ['polyrank_' + str(t) for t in [1, 2, 5, 10, 20, 50]]
vertex_metrics += ['pagerank_' + str(t * 10) for t in range(1, 9 + 1)]
vertex_metrics += ['num_words']
active_years = range(1900, 2015 + 1)
%%time
make_snapshot_vertex_metrics(G, active_years, vertex_metrics, subnet_dir)
"""
Explanation: compute snapshots
End of explanation
"""
# choose which metrics to add before running the update, e.g.:
to_add = ['rev_pagerank', 'num_words']
# to_add += ['citerank_' + str(t) for t in [1, 2, 5, 10, 20, 50]]
# to_add += ['polyrank_' + str(t) for t in [1, 2, 5, 10, 20, 50]]
# to_add += ['d_in_closeness', 'd_out_closeness', 'd_all_closeness', 'd_eigen']
# to_add += ['pagerank_' + str(t * 10) for t in range(1, 9 + 1)]
%%time
update_snapshot_vertex_metrics(G, active_years, to_add, subnet_dir)
"""
Explanation: update snapshots
End of explanation
"""
G.vs['num_words'] = [0] * len(G.vs)
for op_id in G.vs['name']:
text = open(text_dir + op_id +'.txt', 'r').read()
num_words = len(text.split())
G.vs.find(name=op_id)['num_words'] = num_words
G.write_graphml(subnet_dir + network_name +'_network.graphml')
"""
Explanation: add text length
adds word count as a vertex attribute
End of explanation
"""
|
dnc1994/MachineLearning-UW | ml-regression/blank/week-4-ridge-regression-assignment-1-blank.ipynb | mit | import graphlab
"""
Explanation: Regression Week 4: Ridge Regression (interpretation)
In this notebook, we will run ridge regression multiple times with different L2 penalties to see which one produces the best fit. We will revisit the example of polynomial regression as a means to see the effect of L2 regularization. In particular, we will:
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression
* Use matplotlib to visualize polynomial regressions
* Use a pre-built implementation of regression (GraphLab Create) to run polynomial regression, this time with L2 penalty
* Use matplotlib to visualize polynomial regressions under L2 regularization
* Choose best L2 penalty using cross-validation.
* Assess the final fit using test data.
We will continue to use the House data from previous notebooks. (In the next programming assignment for this module, you will implement your own ridge regression learning algorithm using gradient descent.)
Fire up graphlab create
End of explanation
"""
def polynomial_sframe(feature, degree):
    # One possible Week 3 implementation: build columns power_1 .. power_degree.
    poly_sframe = graphlab.SFrame()
    poly_sframe['power_1'] = feature
    for power in range(2, degree + 1):
        # default argument pins the current power for the lazy apply
        poly_sframe['power_' + str(power)] = feature.apply(lambda x, p=power: x ** p)
    return poly_sframe
"""
Explanation: Polynomial regression, revisited
We build on the material from Week 3, where we wrote the function to produce an SFrame with columns containing the powers of a given input. Copy and paste the function polynomial_sframe from Week 3:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Let's use matplotlib to visualize what a polynomial regression looks like on the house data.
End of explanation
"""
sales = sales.sort(['sqft_living','price'])
"""
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
"""
l2_small_penalty = 1e-5
"""
Explanation: Let us revisit the 15th-order polynomial model using the 'sqft_living' input. Generate polynomial features up to degree 15 using polynomial_sframe() and fit a model with these features. When fitting the model, use an L2 penalty of 1e-5:
End of explanation
"""
(semi_split1, semi_split2) = sales.random_split(.5,seed=0)
(set_1, set_2) = semi_split1.random_split(0.5, seed=0)
(set_3, set_4) = semi_split2.random_split(0.5, seed=0)
"""
Explanation: Note: When we have so many features and so few data points, the solution can become highly numerically unstable, which can sometimes lead to strange unpredictable results. Thus, rather than using no regularization, we will introduce a tiny amount of regularization (l2_penalty=1e-5) to make the solution numerically stable. (In lecture, we discussed the fact that regularization can also help with numerical stability, and here we are seeing a practical example.)
With the L2 penalty specified above, fit the model and print out the learned weights.
Hint: make sure to add 'price' column to the new SFrame before calling graphlab.linear_regression.create(). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set=None in this call.
QUIZ QUESTION: What's the learned value for the coefficient of feature power_1?
Observe overfitting
Recall from Week 3 that the polynomial fit of degree 15 changed wildly whenever the data changed. In particular, when we split the sales data into four subsets and fit the model of degree 15, the result came out to be very different for each subset. The model had a high variance. We will see in a moment that ridge regression reduces such variance. But first, we must reproduce the experiment we did in Week 3.
First, split the sales data into four subsets of roughly equal size and call them set_1, set_2, set_3, and set_4. Use the .random_split function and make sure you set seed=0.
End of explanation
"""
(train_valid, test) = sales.random_split(.9, seed=1)
train_valid_shuffled = graphlab.toolkits.cross_validation.shuffle(train_valid, random_seed=1)
"""
Explanation: Next, fit a 15th degree polynomial on set_1, set_2, set_3, and set_4, using 'sqft_living' to predict prices. Print the weights and make a plot of the resulting model.
Hint: When calling graphlab.linear_regression.create(), use the same L2 penalty as before (i.e. l2_small_penalty). Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
The four curves should differ from one another a lot, as should the coefficients you learned.
QUIZ QUESTION: For the models learned in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Ridge regression comes to rescue
Generally, whenever we see weights change so much in response to change in data, we believe the variance of our estimate to be large. Ridge regression aims to address this issue by penalizing "large" weights. (Weights of model15 looked quite small, but they are not that small because 'sqft_living' input is in the order of thousands.)
With the argument l2_penalty=1e5, fit a 15th-order polynomial model on set_1, set_2, set_3, and set_4. Other than the extra parameter, the code should be the same as the experiment above. Also, make sure GraphLab Create doesn't create its own validation set by using the option validation_set = None in this call.
These curves should vary a lot less, now that you applied a high degree of regularization.
QUIZ QUESTION: For the models learned with the high level of regularization in each of these training sets, what are the smallest and largest values you learned for the coefficient of feature power_1? (For the purpose of answering this question, negative numbers are considered "smaller" than positive numbers. So -5 is smaller than -3, and -3 is smaller than 5 and so forth.)
Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set<br>
...<br>
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
To estimate the generalization error well, it is crucial to shuffle the training data before dividing them into segments. GraphLab Create has a utility function for shuffling a given SFrame. We reserve 10% of the data as the test set and shuffle the remainder. (Make sure to use seed=1 to get a consistent answer.)
End of explanation
"""
n = len(train_valid_shuffled)
k = 10 # 10-fold cross-validation
for i in xrange(k):
start = (n*i)/k
end = (n*(i+1))/k-1
print i, (start, end)
"""
Explanation: Once the data is shuffled, we divide it into equal segments. Each segment should receive n/k elements, where n is the number of observations in the training set and k is the number of segments. Since the segment 0 starts at index 0 and contains n/k elements, it ends at index (n/k)-1. The segment 1 starts where the segment 0 left off, at index (n/k). With n/k elements, the segment 1 ends at index (n*2/k)-1. Continuing in this fashion, we deduce that the segment i starts at index (n*i/k) and ends at (n*(i+1)/k)-1.
With this pattern in mind, we write a short loop that prints the starting and ending indices of each segment, just to make sure you are getting the splits right.
End of explanation
"""
train_valid_shuffled[0:10] # rows 0 to 9
"""
Explanation: Let us familiarize ourselves with array slicing with SFrame. To extract a continuous slice from an SFrame, use colon in square brackets. For instance, the following cell extracts rows 0 to 9 of train_valid_shuffled. Notice that the first index (0) is included in the slice but the last index (10) is omitted.
End of explanation
"""
print int(round(validation4['price'].mean(), 0))
"""
Explanation: Now let us extract individual segments with array slicing. Consider the scenario where we group the houses in the train_valid_shuffled dataframe into k=10 segments of roughly equal size, with starting and ending indices computed as above.
Extract the fourth segment (segment 3) and assign it to a variable called validation4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the fourth segment. When rounded to nearest whole number, the average should be $536,234.
End of explanation
"""
n = len(train_valid_shuffled)
first_two = train_valid_shuffled[0:2]
last_two = train_valid_shuffled[n-2:n]
print first_two.append(last_two)
"""
Explanation: After designating one of the k segments as the validation set, we train a model using the rest of the data. To choose the remainder, we slice (0:start) and (end+1:n) of the data and paste them together. SFrame has an append() method that pastes together two disjoint sets of rows originating from a common dataset. For instance, the following cell pastes together the first and last two rows of the train_valid_shuffled dataframe.
End of explanation
"""
print int(round(train4['price'].mean(), 0))
"""
Explanation: Extract the remainder of the data after excluding fourth segment (segment 3) and assign the subset to train4.
To verify that we have the right elements extracted, run the following cell, which computes the average price of the data with fourth segment excluded. When rounded to nearest whole number, the average should be $539,450.
End of explanation
"""
def k_fold_cross_validation(k, l2_penalty, data, output_name, features_list):
    # One possible implementation following the steps described below.
    n = len(data)
    validation_errors = []
    for i in xrange(k):
        start = (n * i) / k
        end = (n * (i + 1)) / k - 1
        validation_set = data[start:end + 1]
        training_set = data[0:start].append(data[end + 1:n])
        model = graphlab.linear_regression.create(training_set,
                                                  target=output_name,
                                                  features=features_list,
                                                  l2_penalty=l2_penalty,
                                                  validation_set=None,
                                                  verbose=False)
        residuals = model.predict(validation_set) - validation_set[output_name]
        validation_errors.append((residuals * residuals).sum())
    return sum(validation_errors) / k
"""
Explanation: Now we are ready to implement k-fold cross-validation. Write a function that computes k validation errors by designating each of the k segments as the validation set. It accepts as parameters (i) k, (ii) l2_penalty, (iii) dataframe, (iv) name of output column (e.g. price) and (v) list of feature names. The function returns the average validation error using k segments as validation sets.
For each i in [0, 1, ..., k-1]:
Compute the starting and ending indices of segment i and call them 'start' and 'end'
Form validation set by taking a slice (start:end+1) from the data.
Form training set by appending slice (end+1:n) to the end of slice (0:start).
Train a linear model using training set just formed, with a given l2_penalty
Compute validation error using validation set just formed
End of explanation
"""
# Plot the l2_penalty values in the x axis and the cross-validation error in the y axis.
# Using plt.xscale('log') will make your plot more intuitive.
"""
Explanation: Once we have a function to compute the average validation error for a model, we can write a loop to find the model that minimizes the average validation error. Write a loop that does the following:
* We will again be aiming to fit a 15th-order polynomial model using the sqft_living input
* For l2_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, you can use this Numpy function: np.logspace(1, 7, num=13).)
* Run 10-fold cross-validation with l2_penalty
* Report which L2 penalty produced the lowest average validation error.
Note: since the degree of the polynomial is now fixed to 15, to make things faster, you should generate polynomial features in advance and re-use them throughout the loop. Make sure to use train_valid_shuffled when generating polynomial features!
QUIZ QUESTIONS: What is the best value for the L2 penalty according to 10-fold validation?
You may find it useful to plot the k-fold cross-validation errors you have obtained to better understand the behavior of the method.
End of explanation
"""
OpenDSA/Analysis | Yusuf/Test_Clustering_Results_22.ipynb | mit
clusters_df_22 = pd.read_csv("Clustered_Sessions_FCM.csv")
clusters_df_22.columns
framesets_credit_seek = clusters_df_22[clusters_df_22['cluster']=='Credit Seeking']['curr_frameset_name'].unique()
framesets_normal = clusters_df_22[clusters_df_22['cluster']=='Normal']['curr_frameset_name'].unique()
for framename in framesets_credit_seek:
if (framename not in framesets_normal):
print(framename)
clusters_df_all = pd.concat([clusters_df_20_21, clusters_df_22])
clusters_df_all_credit_seeking = clusters_df_all[clusters_df_all['cluster']=='Credit Seeking']
clusters_df_all_normal = clusters_df_all[clusters_df_all['cluster']=='Normal']
clusters_df_all_credit_seeking.mean()
clusters_df_all_credit_seeking['n_backs'].describe()
clusters_df_all_normal.mean()
clusters_df_all_normal['n_backs'].describe()
# Adding time to the sessions clustered
time = pd.read_excel("Sessions_1D_22.xlsx")
time.columns
frames_we_want = clusters_df.session_number.unique()
time = time[time['session_number'].isin(frames_we_want)]
time = time.groupby(["session_number"], as_index=False).agg(
timestamp=("timestamp", "min"),
)
clustered_df_with_time = (
pd.merge(
clusters_df, time, left_on="session_number", right_on="session_number", how="inner"
)
)
"""
Explanation: Counting Credit Seeking times per student Up to END --> "Clustered_Users.csv"
End of explanation
"""
all_scores = pd.read_excel("Grades_Users_22.xlsx")
all_scores.columns
all_scores['Total Homeworks'] = all_scores[['Homework 1', 'Homework 2', 'Homework 3', 'Homework 4']].sum(axis=1)
all_scores['email'] = all_scores['SIS Login ID'].astype(str) + '@vt.edu'
email_opendsa_ID = pd.read_csv("email_opendsa_ID.csv")
idd = email_opendsa_ID['id'].tolist()
email = email_opendsa_ID["email"].tolist()
email_mapper = dict(zip(email, idd))
all_scores['odsa_ID'] = all_scores['email'].map(email_mapper)
all_scores['odsa_ID'].dropna(inplace=True)
all_scores = all_scores[['Student', 'odsa_ID', 'Total Homeworks', 'Final Score',
                         'Midterm 1', 'Homework 1', 'Homework 2', 'Homework 3', 'Homework 4']]
all_scores["Is A"] = all_scores["Final Score"] >= 90
# https://courses.cs.vt.edu/~cs1604/grading.html
letter_grade = []
for grade in all_scores["Final Score"]:
if grade >= 90:
letter_grade.append("A")
elif grade >= 80:
letter_grade.append("B")
elif grade >= 70:
letter_grade.append("C")
elif grade >= 60:
letter_grade.append("D")
else:
letter_grade.append("F")
all_scores["Letter Grade"] = letter_grade
all_scores = pd.read_csv("all_scores.csv")
all_scores = all_scores.drop('Student',axis=1)
all_scores = all_scores.fillna(0)
fig, axs = plt.subplots(2, 2, figsize=(13, 10.4))
plt.style.use("default")
sns.ecdfplot(data=all_scores["Total Homeworks"], legend=True, color="red", ax=axs[0, 0])
sns.ecdfplot(data=all_scores["Midterm 1"], legend=True, color="orange", ax=axs[0, 1])
sns.ecdfplot(data=all_scores["Final Score"], legend=True, color="purple", ax=axs[1, 1])
fig, axs = plt.subplots(3, 2, figsize=(10, 9.4))
plt.style.use("default")
sns.histplot(x=all_scores["Total Homeworks"], kde=True, ax=axs[0, 0])
sns.histplot(x=all_scores["Midterm 1"], kde=True, ax=axs[0, 1])
sns.histplot(x=all_scores["Final Score"], kde=True, ax=axs[1, 0])
fig.savefig("HistPlot Grades", dpi=500, facecolor="white")
"""
Explanation: Importing Grades --> "all_scores.csv"
End of explanation
"""
clusters_df = pd.read_csv("Clustered_Sessions_FCM.csv")
clusters_df.columns
all_scores = pd.read_csv("all_scores.csv")
all_scores.columns
Clustered_Users = clusters_df.groupby(["user_id", "cluster"], as_index=False).agg(
Cluster_Count=("cluster", "count"),
FramesetName_nunique=("curr_frameset_name", "nunique"),
)
# Cond on Cluster Type --> Drop Cluster Type --> Rename Count to new Name to be able to merge
credit_seeking_student_count = (
Clustered_Users[Clustered_Users["cluster"] == "Credit Seeking"]
.drop(labels=["cluster"], axis=1)
.rename(
columns={
"Cluster_Count": "# CrSk Sessions",
"FramesetName_nunique": "# CrSk Framesets",
}
)
)
normal_student_count = (
Clustered_Users[Clustered_Users["cluster"] == "Normal"]
.drop(labels=["cluster"], axis=1)
.rename(
columns={
"Cluster_Count": "# Nrml Sessions",
"FramesetName_nunique": "# Nrml Framesets",
}
)
)
Clustered_Users = pd.merge(credit_seeking_student_count,normal_student_count,left_on='user_id',right_on='user_id',how='outer')
Clustered_Users = (
pd.merge(
Clustered_Users, all_scores, left_on="user_id", right_on="odsa_ID", how="inner"
)
)
Clustered_Users['# CrSk Sessions'] = Clustered_Users['# CrSk Sessions'].fillna(0)
Clustered_Users['# Nrml Sessions'] = Clustered_Users['# Nrml Sessions'].fillna(0)
Clustered_Users['# CrSk Framesets'] = Clustered_Users['# CrSk Framesets'].fillna(0)
Clustered_Users['# Nrml Framesets'] = Clustered_Users['# Nrml Framesets'].fillna(0)
Clustered_Users["% of CrSk Sessions"] = Clustered_Users["# CrSk Sessions"] / (
Clustered_Users["# CrSk Sessions"] + Clustered_Users["# Nrml Sessions"]
)
Clustered_Users["% of Nrml Sessions"] = Clustered_Users["# Nrml Sessions"] / (
Clustered_Users["# CrSk Sessions"] + Clustered_Users["# Nrml Sessions"]
)
Clustered_Users["% of CrSk Framesets"] = Clustered_Users["# CrSk Framesets"] / (
Clustered_Users["# CrSk Framesets"] + Clustered_Users["# Nrml Framesets"]
)
Clustered_Users["Is CrSk"] = Clustered_Users["% of CrSk Sessions"] >= 0.5
Clustered_Users["Is CrSk2"] = Clustered_Users["% of CrSk Framesets"] >= 0.5
Clustered_Users.columns, all_scores.columns
len(Clustered_Users[Clustered_Users["Is CrSk"]==True]), len(Clustered_Users[Clustered_Users["Is CrSk"]==False])
Clustered_Users = Clustered_Users.fillna(0)
Clustered_Users["Final Score"].describe()
"""
Explanation: Clustered_Users
End of explanation
"""
# Negative correlations imply that as x increases, y decreases.
# The closer to 1, the better the regression line fits the data
Y = "Midterm 1"
X = "% of CrSk Sessions"
res = stats.linregress(Clustered_Users[X], Clustered_Users[Y])
rvalue, pvalue = stats.pearsonr(x=Clustered_Users[X], y=Clustered_Users[Y])
print(f"R: {res.rvalue}")
print("P-value", pvalue)
fig, axs = plt.subplots(figsize=(6, 6))
sns.scatterplot(data=Clustered_Users, x=X, y=Y)
gfg = sns.lineplot(x=Clustered_Users[X], y=res.intercept + res.slope * Clustered_Users[X])
gfg.set(
xlabel= "Percentage of credit seeking sessions per student",
ylabel="Midterm 1 score",
)
plt.legend(loc='upper right', frameon=True, shadow=True, title='r = -0.418\np-value = 0.0003')
plt.show()
fig.savefig("Midterm-1-r-22.pdf", facecolor="white",dpi=500)
Y = "Final Score"
X = "% of CrSk Sessions"
res = stats.linregress(Clustered_Users[X], Clustered_Users[Y])
rvalue, pvalue = stats.pearsonr(x=Clustered_Users[X], y=Clustered_Users[Y])
print(f"R: {res.rvalue}")
print("P-value", pvalue)
fig, axs = plt.subplots(figsize=(6, 6))
sns.scatterplot(data=Clustered_Users, x=X, y=Y)
gfg = sns.lineplot(x=Clustered_Users[X], y=res.intercept + res.slope * Clustered_Users[X])
gfg.set(
xlabel="Percentage of credit seeking sessions per student",
ylabel="Final score",
)
plt.show()
fig.savefig("Final-Score-r-22.pdf", facecolor="white",dpi=500)
print(
"% of CrSk Framesets vs. Total Homeworks: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Framesets"], y=Clustered_Users["Total Homeworks"]),
)
print(
"% of CrSk Framesets vs. Final Exam: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Framesets"], y=Clustered_Users["Final Exam"]),
)
print(
"% of CrSk Framesets vs. Total Exams: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Framesets"], y=Clustered_Users["Total Exams"]),
)
print(
"% of CrSk Framesets vs. Final Score: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Framesets"], y=Clustered_Users["Final Score"]),
)
print(
"% of CrSk Framesets vs. Midterm 1: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Framesets"], y=Clustered_Users["Midterm 1"]),
)
print(
"% of Credit Seeking Vs. Total Homeworks: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Sessions"], y=Clustered_Users["Total Homeworks"]),
)
print(
"% of Credit Seeking Vs. Final Exam: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Sessions"], y=Clustered_Users["Final Exam"]),
)
print(
"% of Credit Seeking Vs. Total Exams: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Sessions"], y=Clustered_Users["Total Exams"]),
)
print(
"% of Credit Seeking Vs. Final Score: ",
stats.pearsonr(x=Clustered_Users["% of CrSk Sessions"], y=Clustered_Users["Final Score"]),
)
"""
Explanation: This is how to select specific MultiIndex columns ---> data.loc[:, (['one', 'two'], ['a', 'b'])], where 'one' and 'two' are level-1 values of columns 1 and 2, and 'a' and 'b' are level 2
This is how to select specific rows based on the values of the MultiIndex (each row here is represented by two indexes, UserID and Cluster):
l0 = user_clustered.index.get_level_values(0); l1 = user_clustered.index.get_level_values(1); cond = (l1=='a') | (l0=='b'); df[cond]
To reset a MultiIndex index ==> .reset_index()
To reset MultiIndex columns ==> df.columns = [' '.join(col).strip() for col in df.columns.values]
Correlation and Regression
End of explanation
"""
sns.displot(data=Clustered_Users, x="Final Score", hue="Is CrSk", multiple="stack")
sns.displot(data=Clustered_Users, x="Midterm 1", hue="Is CrSk", multiple="stack")
sns.displot(
data=Clustered_Users, x="Final Score", hue="Is CrSk", multiple="stack", kind="kde", legend=True,
)
fig.savefig("Final-Score-22-KDE.pdf", facecolor="white",dpi=500)
sns.displot(
data=Clustered_Users, x="Midterm 1", hue="Is CrSk", multiple="stack", kind="kde"
)
"""
Explanation:
End of explanation
"""
temp = Clustered_Users[Clustered_Users["Is CrSk"] == True]["Final Score"]
print("Credit Seeking: ", temp.mean(), temp.std(), len(temp), temp.median())
print()
temp = Clustered_Users[Clustered_Users["Is CrSk"] == False]["Final Score"]
print("Normal: ", temp.mean(), temp.std(), len(temp), temp.median())
temp = Clustered_Users[Clustered_Users["Is CrSk"] == True]["Midterm 1"]
print("Credit Seeking: ", temp.mean(), temp.std(), len(temp), temp.median())
print()
temp = Clustered_Users[Clustered_Users["Is CrSk"] == False]["Midterm 1"]
print("Normal: ", temp.mean(), temp.std(), len(temp), temp.median())
"""
Explanation: t-test for comparing Scores between Students
End of explanation
"""
OceanPARCELS/parcels | parcels/examples/parcels_tutorial.ipynb | mit
%matplotlib inline
from parcels import FieldSet, ParticleSet, Variable, JITParticle, AdvectionRK4, plotTrajectoriesFile
import numpy as np
import math
from datetime import timedelta
from operator import attrgetter
"""
Explanation: Parcels Tutorial
Welcome to a quick tutorial on Parcels. This is meant to get you started with the code, and give you a flavour of some of the key features of Parcels.
In this tutorial, we will first cover how to run a set of particles in a very simple idealised field, and show how easy it is to run them in time-backward mode. We will then show how to add custom behaviour to the particles, how to run particles in a set of NetCDF files from external data, how to use particles to sample a field such as temperature or sea surface height, and finally how to write a kernel that tracks the distance travelled by the particles.
Let's start with importing the relevant modules. The key ones are all in the parcels package.
End of explanation
"""
fieldset = FieldSet.from_parcels("MovingEddies_data/moving_eddies")
"""
Explanation: Running particles in an idealised field
The first step to running particles with Parcels is to define a FieldSet object, which is simply a collection of hydrodynamic fields. In this first case, we use a simple flow of two idealised moving eddies. That field is saved in NetCDF format in the directory examples/MovingEddies_data. Since we know that the files are in what's called Parcels FieldSet format, we can call these files using the function FieldSet.from_parcels().
End of explanation
"""
fieldset.U.show()
"""
Explanation: The fieldset can then be visualised with the show() function. To show the zonal velocity (U), give the following command
End of explanation
"""
pset = ParticleSet.from_list(fieldset=fieldset, # the fields on which the particles are advected
pclass=JITParticle, # the type of particles (JITParticle or ScipyParticle)
lon=[3.3e5, 3.3e5], # a vector of release longitudes
lat=[1e5, 2.8e5]) # a vector of release latitudes
"""
Explanation: The next step is to define a ParticleSet. In this case, we start 2 particles at locations (330km, 100km) and (330km, 280km) using the from_list constructor method, that are advected on the fieldset we defined above. Note that we use JITParticle as pclass, because we will be executing the advection in JIT (Just-In-Time) mode. The alternative is to run in scipy mode, in which case pclass is ScipyParticle
End of explanation
"""
print(pset)
"""
Explanation: Print the ParticleSet to see where they start
End of explanation
"""
pset.show(field=fieldset.U)
"""
Explanation: This output shows for each particle the (longitude, latitude, depth, time). Note that in this case the time is not_yet_set, that is because we didn't specify a time when we defined the pset.
To plot the positions of these particles on the zonal velocity, use the following command
End of explanation
"""
output_file = pset.ParticleFile(name="EddyParticles.nc", outputdt=timedelta(hours=1)) # the file name and the time step of the outputs
pset.execute(AdvectionRK4, # the kernel (which defines how particles move)
runtime=timedelta(days=6), # the total length of the run
dt=timedelta(minutes=5), # the timestep of the kernel
output_file=output_file)
"""
Explanation: The final step is to run (or 'execute') the ParticleSet. We run the particles using the AdvectionRK4 kernel, which is a 4th-order Runge-Kutta implementation that comes with Parcels. We run the particles for 6 days (using the timedelta function from datetime), at an RK4 timestep of 5 minutes. We store the trajectory information at an interval of 1 hour in a file called EddyParticles.nc. Because time was not_yet_set, the particles will be advected from the first date available in the fieldset, which is the default behaviour.
End of explanation
"""
print(pset)
pset.show(field=fieldset.U)
"""
Explanation: The code should have run, which can be confirmed by printing and plotting the ParticleSet again
End of explanation
"""
output_file.export()
plotTrajectoriesFile('EddyParticles.nc');
"""
Explanation: Note that both the particles (the black dots) and the U field have moved in the plot above. Also, the time of the particles is now 518400 seconds, which is 6 days.
The trajectory information of the particles can be written to the EddyParticles.nc file by using the .export() method on the output file. The trajectory can then be quickly plotted using the plotTrajectoriesFile function.
End of explanation
"""
plotTrajectoriesFile('EddyParticles.nc', mode='movie2d_notebook')
"""
Explanation: The plotTrajectoriesFile function can also be used to show the trajectories as an animation, by specifying that it has to run in movie2d_notebook mode. If we pass this to our function above, we can watch the particles go!
End of explanation
"""
plotTrajectoriesFile('EddyParticles.nc', mode='hist2d', bins=[30, 20]);
"""
Explanation: The plotTrajectoriesFile can also be used to display 2-dimensional histograms (mode=hist2d) of the number of particle observations per bin. Use the bins argument to control the number of bins in the longitude and latitude direction. See also the matplotlib.hist2d page.
End of explanation
"""
# THIS DOES NOT WORK IN THIS IPYTHON NOTEBOOK, BECAUSE OF THE INLINE PLOTTING.
# THE 'SHOW_MOVIE' KEYWORD WILL WORK ON MOST MACHINES, THOUGH
# pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=[3.3e5, 3.3e5], lat=[1e5, 2.8e5])
# pset.execute(AdvectionRK4,
# runtime=timedelta(days=6),
# dt=timedelta(minutes=5),
# moviedt=timedelta(hours=1),
# movie_background_field=fieldset.U)
"""
Explanation: Now one of the neat features of Parcels is that the particles can be plotted as a movie during execution, which is great for debugging. To rerun the particles while plotting them on top of the zonal velocity field (fieldset.U), first reinitialise the ParticleSet and then re-execute. However, now rather than saving the output to a file, display a movie using the moviedt display frequency, in this case with the zonal velocity fieldset.U as background
End of explanation
"""
output_file = pset.ParticleFile(name="EddyParticles_Bwd.nc", outputdt=timedelta(hours=1)) # the file name and the time step of the outputs
pset.execute(AdvectionRK4,
dt=-timedelta(minutes=5), # negative timestep for backward run
runtime=timedelta(days=6), # the run time
output_file=output_file)
"""
Explanation: Running particles in backward time
Running particles in backward time is extremely simple: just provide a dt < 0.
End of explanation
"""
print(pset)
pset.show(field=fieldset.U)
"""
Explanation: Now print the particles again, and see that they (except for some round-off errors) returned to their original position
End of explanation
"""
def WestVel(particle, fieldset, time):
    if time > 86400:
        uvel = -2.
        particle.lon += uvel * particle.dt
"""
Explanation: Adding a custom behaviour kernel
A key feature of Parcels is the ability to quickly create very simple kernels, and add them to the execution. Kernels are little snippets of code that are run during exection of the particles.
In this example, we'll create a simple kernel where particles obtain an extra 2 m/s westward velocity after 1 day. Of course, this is not very realistic scenario, but it nicely illustrates the power of custom kernels.
End of explanation
"""
pset = ParticleSet.from_list(fieldset=fieldset, pclass=JITParticle, lon=[3.3e5, 3.3e5], lat=[1e5, 2.8e5])
k_WestVel = pset.Kernel(WestVel) # casting the WestVel function to a kernel object
output_file = pset.ParticleFile(name="EddyParticles_WestVel.nc", outputdt=timedelta(hours=1))
pset.execute(AdvectionRK4 + k_WestVel, # simply add kernels using the + operator
runtime=timedelta(days=2),
dt=timedelta(minutes=5),
output_file=output_file)
"""
Explanation: Now reset the ParticleSet again, and re-execute. Note that we have now changed kernel to be AdvectionRK4 + k_WestVel, where k_WestVel is the WestVel function as defined above cast into a Kernel object (via the pset.Kernel call).
End of explanation
"""
output_file.export()
plotTrajectoriesFile('EddyParticles_WestVel.nc');
"""
Explanation: And now plot this new trajectory file
End of explanation
"""
filenames = {'U': "GlobCurrent_example_data/20*.nc",
'V': "GlobCurrent_example_data/20*.nc"}
"""
Explanation: Reading in data from arbritrary NetCDF files
In most cases, you will want to advect particles within pre-computed velocity fields. If these velocity fields are stored in NetCDF format, it is fairly easy to load them into the FieldSet.from_netcdf() function.
The examples directory contains a set of GlobCurrent files of the region around South Africa.
First, define the names of the files containing the zonal (U) and meridional (V) velocities. You can use wildcards (*) and the filenames for U and V can be the same (as in this case)
End of explanation
"""
variables = {'U': 'eastward_eulerian_current_velocity',
'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat',
'lon': 'lon',
'time': 'time'}
"""
Explanation: Then, define a dictionary of the variables (U and V) and dimensions (lon, lat and time; note that in this case there is no depth because the GlobCurrent data is only for the surface of the ocean)
End of explanation
"""
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
"""
Explanation: Finally, read in the fieldset using the FieldSet.from_netcdf function with the above-defined filenames, variables and dimensions
End of explanation
"""
pset = ParticleSet.from_line(fieldset=fieldset, pclass=JITParticle,
size=5, # releasing 5 particles
start=(28, -33), # releasing on a line: the start longitude and latitude
finish=(30, -33)) # releasing on a line: the end longitude and latitude
"""
Explanation: Now define a ParticleSet, in this case with 5 particles starting on a line between (28E, 33S) and (30E, 33S) using the ParticleSet.from_line constructor method
End of explanation
"""
output_file = pset.ParticleFile(name="GlobCurrentParticles.nc", outputdt=timedelta(hours=6))
pset.execute(AdvectionRK4,
runtime=timedelta(days=10),
dt=timedelta(minutes=5),
output_file=output_file)
"""
Explanation: And finally execute the ParticleSet for 10 days using 4th order Runge-Kutta
End of explanation
"""
output_file.export()
plotTrajectoriesFile('GlobCurrentParticles.nc',
tracerfile='GlobCurrent_example_data/20020101000000-GLOBCURRENT-L4-CUReul_hs-ALT_SUM-v02.0-fv01.0.nc',
tracerlon='lon',
tracerlat='lat',
tracerfield='eastward_eulerian_current_velocity');
"""
Explanation: Now visualise this simulation using the plotParticles script again. Note you can plot the particles on top of one of the velocity fields using the tracerfile, tracerfield, etc keywords.
End of explanation
"""
fieldset = FieldSet.from_parcels("Peninsula_data/peninsula", extra_fields={'P': 'P'}, allow_time_extrapolation=True)
"""
Explanation: Sampling a Field with Particles
One typical use case of particle simulations is to sample a Field (such as temperature, vorticity or sea surface height) along a particle trajectory. In Parcels, this is very easy to do with a custom Kernel.
Let's read in another example, the flow around a Peninsula (see Fig 2.2.3 in this document), and this time also load the Pressure (P) field, using extra_fields={'P': 'P'}. Note that, because this flow does not depend on time, we need to set allow_time_extrapolation=True when reading in the fieldset.
End of explanation
"""
class SampleParticle(JITParticle): # Define a new particle class
p = Variable('p', initial=fieldset.P) # Variable 'p' initialised by sampling the pressure
"""
Explanation: Now define a new Particle class that has an extra Variable: the pressure. We initialise this by sampling the fieldset.P field.
End of explanation
"""
pset = ParticleSet.from_line(fieldset=fieldset, pclass=SampleParticle,
start=(3000, 3000), finish=(3000, 46000), size=5, time=0)
pset.show(field='vector')
print('p values before execution:', [p.p for p in pset])
"""
Explanation: Now define a ParticleSet using the from_line method also used above in the GlobCurrent data. Plot the pset and print their pressure values p
End of explanation
"""
def SampleP(particle, fieldset, time): # Custom function that samples fieldset.P at particle location
particle.p = fieldset.P[time, particle.depth, particle.lat, particle.lon]
k_sample = pset.Kernel(SampleP) # Casting the SampleP function to a kernel.
"""
Explanation: Now create a custom function that samples the fieldset.P field at the particle location. Cast this function to a Kernel.
End of explanation
"""
pset.execute(AdvectionRK4 + k_sample, # Add kernels using the + operator.
runtime=timedelta(hours=20),
dt=timedelta(minutes=5))
pset.show(field=fieldset.P, show_time=0)
print('p values after execution:', [p.p for p in pset])
"""
Explanation: Finally, execute the pset with a combination of the AdvectionRK4 and SampleP kernels, plot the pset and print their new pressure values p
End of explanation
"""
class DistParticle(JITParticle): # Define a new particle class that contains three extra variables
distance = Variable('distance', initial=0., dtype=np.float32) # the distance travelled
prev_lon = Variable('prev_lon', dtype=np.float32, to_write=False,
initial=attrgetter('lon')) # the previous longitude
prev_lat = Variable('prev_lat', dtype=np.float32, to_write=False,
initial=attrgetter('lat')) # the previous latitude.
"""
Explanation: And see that these pressure values p are (within roundoff errors) the same as the pressure values before the execution of the kernels. The particles thus stay on isobars!
Note that there is a crucial subtlety in how to sample time-evolving Fields. In that case, it is important to sample at fieldset.P[time+particle.dt, ...] and to first advect the particles before sampling them. See also this part of the interpolation tutorial for more background.
Calculating distance travelled
As a second example of what custom kernels can do, we will now show how to create a kernel that logs the total distance that particles have travelled.
First, we need to create a new Particle class that includes three extra variables. The distance variable will be written to output, but the auxiliary variables prev_lon and prev_lat won't be written to output (can be controlled using the to_write keyword)
End of explanation
"""
def TotalDistance(particle, fieldset, time):
# Calculate the distance in latitudinal direction (using 1.11e2 kilometer per degree latitude)
lat_dist = (particle.lat - particle.prev_lat) * 1.11e2
# Calculate the distance in longitudinal direction, using cosine(latitude) - spherical earth
lon_dist = (particle.lon - particle.prev_lon) * 1.11e2 * math.cos(particle.lat * math.pi / 180)
# Calculate the total Euclidean distance travelled by the particle
particle.distance += math.sqrt(math.pow(lon_dist, 2) + math.pow(lat_dist, 2))
particle.prev_lon = particle.lon # Set the stored values for next iteration.
particle.prev_lat = particle.lat
"""
Explanation: Now define a new function TotalDistance that calculates the sum of Euclidean distances between the old and new locations in each RK4 step
End of explanation
"""
filenames = {'U': "GlobCurrent_example_data/20*.nc",
'V': "GlobCurrent_example_data/20*.nc"}
variables = {'U': 'eastward_eulerian_current_velocity',
'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat',
'lon': 'lon',
'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
pset = ParticleSet.from_line(fieldset=fieldset,
pclass=DistParticle,
size=5, start=(28, -33), finish=(30, -33))
"""
Explanation: Note: here it is assumed that the latitude and longitude are measured in degrees North and East, respectively. However, some datasets (e.g. the MovingEddies used above) give them measured in (kilo)meters, in which case we must not include the factor 1.11e2.
We will run the TotalDistance function on a ParticleSet containing the five particles within the GlobCurrent fieldset from above. Note that pclass=DistParticle in this case.
End of explanation
"""
k_dist = pset.Kernel(TotalDistance) # Casting the TotalDistance function to a kernel.
pset.execute(AdvectionRK4 + k_dist, # Add kernels using the + operator.
runtime=timedelta(days=6),
dt=timedelta(minutes=5),
output_file=pset.ParticleFile(name="GlobCurrentParticles_Dist.nc", outputdt=timedelta(hours=1)))
"""
Explanation: Again define a new kernel to include the function written above and execute the ParticleSet.
End of explanation
"""
print([p.distance for p in pset]) #the distances in km travelled by the particles
"""
Explanation: And finally print the distance in km that each particle has travelled (note that this is also stored in the GlobCurrentParticles_Dist.nc file)
End of explanation
"""
|
GoogleCloudPlatform/mlops-on-gcp | model_serving/bqml-caip/02-bqml-to-caip-pipeline.ipynb | apache-2.0 | import kfp
import kfp.components as comp
import random
import os
#Common Parameters
PROJECT_ID=[] #enter your project name
KFPHOST=[] #enter your KFP hostname
#Parameters for BQML
DATASET=[] #name of dataset to create or use if exists
VIEW= [] #name of view to be created for BQML create model
MODEL=[] #model name for both BQML and AI Platform
ALGO=[] #e.g. 'linear_reg'
#Parameters for AI Platform Prediction
REGION=[] #e.g. 'us-central1'
MODEL_VERSION=[] #e.g. 'v1'
RUNTIME_VERSION='1.15' #do not change
PYTHON_VERSION='3.7' #do not change
MODEL_BUCKET='gs://{0}-{1}'.format(PROJECT_ID,str(random.randrange(1000,10000)))
MODEL_PATH=os.path.join(MODEL_BUCKET,'bqml/model/export/',MODEL,MODEL_VERSION)
#Parameters for KF Pipeline
KFP_EXPERIMENT_NAME='Natality Pipeline'
"""
Explanation: Kubeflow Pipeline: Exporting BQML Models to Online AI Platform Prediction
The notebook "Tutorial: Exporting BQML Models to Online AI Platform Prediction" walks through the concepts of training model in BQML and exporting the model to be served in AI Platform. This notebook takes that process and implements a Kubeflow Pipeline to automate the steps.
End of explanation
"""
mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/ml_engine/deploy/component.yaml')
"""
Explanation: Creating KFP Ops
Each step in a Kubeflow Pipeline is a container operation. If the operation you would like to accomplish is already available in the components library in the KFP repo, creating an op is as simple as loading it. AI Platform provides such an op for model deployment.
End of explanation
"""
def gcp_command_func(project_id: str, command_string: str) -> str:
import subprocess
config_string="gcloud config set project {}".format(project_id)
config=subprocess.run(config_string, shell=True, check=True, stdout=subprocess.PIPE, universal_newlines=True)
print("Running command: {}".format(command_string))
response=subprocess.run(command_string, shell=True, check=True, stdout=subprocess.PIPE, universal_newlines=True)
print("Command response: {}".format(response.stdout))
return project_id
gcp_command_op=comp.func_to_container_op(func=gcp_command_func, base_image="google/cloud-sdk:latest")
"""
Explanation: If you cannot find the op in the component library to support your need, you will need to create that container. An easy way to do this is via kfp.components.func_to_container_op. In our case, we would like to execute a number of bq commands. To do this, we will create a Python function as a general command executor, then convert the function to a container op via kfp.components.func_to_container_op.
End of explanation
"""
def make_bucket(bucket):
return "gsutil ls {0} || gsutil mb {0}".format(bucket)
def create_dataset(dataset):
return "bq show {0} || bq mk {0}".format(dataset)
def create_view(dataset, view):
query = """
SELECT
weight_pounds,
is_male,
gestation_weeks,
mother_age,
CASE
WHEN MOD(CAST(ROUND(weight_pounds*100) as int64), 10) < 8 THEN "training"
WHEN MOD(CAST(ROUND(weight_pounds*100) as int64), 10) = 8 THEN "evaluation"
WHEN MOD(CAST(ROUND(weight_pounds*100) as int64), 10) = 9 THEN "prediction"
END AS datasplit
FROM
`bigquery-public-data.samples.natality`
WHERE
weight_pounds IS NOT NULL
""".format(dataset, view)
return "bq show {1}.{2} || bq mk --use_legacy_sql=false --view '{0}' {1}.{2}".format(query, dataset, view)
def create_model(dataset, view, model, algo):
query = """
CREATE OR REPLACE MODEL
`{0}.{2}`
OPTIONS
(model_type="{3}",
input_label_cols=["weight_pounds"]) AS
SELECT
weight_pounds,
is_male,
gestation_weeks,
mother_age
FROM
{0}.{1}
WHERE
datasplit = "training"
""".format(dataset, view, model, algo)
return "bq show {1}.{3} || bq query --use_legacy_sql=false '{0}'".format(query, dataset, view, model, algo)
def export_model(dataset, model, export_path):
return "bq extract -m {0}.{1} {2}".format(dataset, model, export_path)
"""
Explanation: Prepare bq commands
There are four bq operations in our pipeline:
1. Create Dataset
2. Create View
3. Create Model
4. Export Model
The following will create the commands that will be executed by the gcp_command_op created above.
End of explanation
"""
@kfp.dsl.pipeline(
name='BQML Model Export to AI Platform Prediction',
description='This pipeline trains a BQML model and exports to GCS, then loads into AI Platform Prediction.'
)
def bqml_to_caip(project_id = PROJECT_ID,
bucket=MODEL_BUCKET,
model_path=MODEL_PATH,
dataset=DATASET,
view=VIEW,
model=MODEL,
model_version=MODEL_VERSION,
algo=ALGO,
export_path=MODEL_PATH,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
region=REGION
):
#Prepare commands for gcp_command_op
make_bucket_command=make_bucket(bucket)
create_dataset_command=create_dataset(dataset)
create_view_command=create_view(dataset, view)
create_model_command=create_model(dataset, view, model, algo)
export_model_command=export_model(dataset, model, export_path)
#Create ops in pipeline
make_bucket_op=gcp_command_op(project_id=project_id,
command_string=make_bucket_command)
create_dataset_op=gcp_command_op(project_id=project_id,
command_string=create_dataset_command)
create_view_op=gcp_command_op(project_id=project_id,
command_string=create_view_command)
create_model_op=gcp_command_op(project_id=project_id,
command_string=create_model_command)
export_model_op=gcp_command_op(project_id=project_id,
command_string=export_model_command)
model_deploy_op=mlengine_deploy_op(model_uri=export_path,
project_id=project_id,
model_id=model,
version_id=model_version,
runtime_version=runtime_version,
python_version=python_version)
#Set op dependencies
create_dataset_op.after(make_bucket_op)
create_view_op.after(create_dataset_op)
create_model_op.after(create_view_op)
export_model_op.after(create_model_op)
model_deploy_op.after(export_model_op)
"""
Explanation: Ops in a Kubeflow Pipeline are organized into a Directed Acyclic Graph (DAG). The execution order of the ops is controlled by their dependencies: ops with no dependencies, or whose dependencies are satisfied, will run. Dependencies arise naturally when one op's input depends on another op's output. If your ops do not have that kind of dependency, you can still enforce ordering manually with this pattern:
current_op.after(previous_op) where current_op depends on previous_op
End of explanation
"""
pipeline_func = bqml_to_caip
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
arguments = {}
"""
Explanation: Compile and Run the Pipeline
End of explanation
"""
client = kfp.Client(KFPHOST)
experiment = client.create_experiment(KFP_EXPERIMENT_NAME)
"""
Explanation: Create an experiment name. If an experiment with this name already exists, this step will simply reuse the existing experiment.
End of explanation
"""
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(
experiment_id=experiment.id,
job_name=run_name,
pipeline_package_path=pipeline_filename,
params=arguments)
"""
Explanation: Submit a Pipeline run under the experiment name above.
End of explanation
"""
|
Esri/gis-stat-analysis-py-tutor | notebooks/PythonFeatureIO.ipynb | apache-2.0 | import arcpy as ARCPY
import numpy as NUM
import SSDataObject as SSDO
"""
Explanation: Using the Spatial Statistics Data Object (SSDataObject) Makes Feature IO Simple
SSDataObject does the read/write and accounting of feature/attribute and NumPy Array order
Write/Utilize methods that take NumPy Arrays
Using NumPy as the common denominator
Could use the ArcPy Data Access Module directly, but there is a host of issues one must take into account:
How to deal with projections and other environment settings?
How Cursors affect the accounting of features?
How to deal with bad records/bad data and error handling?
How to honor/account for full field object control?
How do I create output features that correspond to my inputs?
Points are easy, what about Polygons and Polylines?
Spatial Statistics Data Object (SSDataObject)
Almost 30 Spatial Statistics Tools written in Python that ${\bf{must}}$ behave like traditional GP Tools
Use SSDataObject and your code will adhere to the same conventions
The Data Analysis Python Modules
PANDAS (Python Data Analysis Library)
SciPy (Scientific Python)
PySAL (Python Spatial Analysis Library)
Basic Imports
End of explanation
"""
inputFC = r'../data/CA_Polygons.shp'
ssdo = SSDO.SSDataObject(inputFC)
ssdo.obtainData("MYID", ['GROWTH', 'LOGPCR69', 'PERCNOHS', 'POP1969'])
df = ssdo.getDataFrame()
print(df.head())
"""
Explanation: Initialize and Load Fields into the Spatial Statistics Data Object
The Unique ID Field ("MYID" in this example) will keep track of the order of your features
You can use ssdo.oidName as your Unique ID Field
You have no control over Object ID Fields. It is quick, assures "uniqueness", but can't assume they will not get "scrambled" during copies.
To assure full control I advocate the "Add Field (LONG)" --> "Calculate Field (From Object ID)" workflow.
End of explanation
"""
pop69 = ssdo.fields['POP1969']
nativePop69 = pop69.data
floatPop69 = pop69.returnDouble()
print(floatPop69[0:5])
"""
Explanation: You can get your data using the core NumPy Arrays
Use .data to get the native data type
Use the returnDouble() function to cast explicitly to float
End of explanation
"""
df = ssdo.getDataFrame()
print(df.head())
"""
Explanation: You can get your data in a PANDAS Data Frame
Note the Unique ID Field is used as the Index
End of explanation
"""
df['XCoords'] = ssdo.xyCoords[:,0]
df['YCoords'] = ssdo.xyCoords[:,1]
print(df.head())
"""
Explanation: By default the SSDataObject only stores the centroids of the features
End of explanation
"""
ssdo = SSDO.SSDataObject(inputFC)
ssdo.obtainData("MYID", ['GROWTH', 'LOGPCR69', 'PERCNOHS', 'POP1969'],
requireGeometry = True)
df = ssdo.getDataFrame()
shapes = NUM.array(ssdo.shapes, dtype = object)
df['shapes'] = shapes
print(df.head())
"""
Explanation: You can get the core ArcPy Geometries if desired
Set requireGeometry = True
End of explanation
"""
import numpy.random as RAND
import os as OS
ARCPY.env.overwriteOutput = True
outArray = RAND.normal(0,1, (ssdo.numObs,))
outDict = {}
outField = SSDO.CandidateField('STDNORM', 'DOUBLE', outArray, alias = 'Standard Normal')
outDict[outField.name] = outField
outputFC = OS.path.abspath(r'../data/testMyOutput.shp')
ssdo.output2NewFC(outputFC, outDict, appendFields = ['GROWTH', 'PERCNOHS', 'NEW_NAME'])
"""
Explanation: Coming Soon... ArcPy Geometry Data Frame Integration
In conjunction with the ArcGIS Python SDK
Spatial operators on ArcGIS Data Frames: selection, clip, intersection etc.
Creating Output Feature Classes
Simple Example: Adding a field of random standard normal values to your input/output
appendFields can be used to copy over any fields from the input whether you read them into the SSDataObject or not.
E.g. 'NEW_NAME' was never read into Python but it will be copied to the output. This can save you a lot of memory.
End of explanation
"""
|
marco-olimpio/ufrn | IMD0104 - PROGRAMAÇÃO ORIENTADA A OBJETOS E MAPEAMENTO OBJETO-RELACIONAL/assignments/3/all-that-you-need-to-know-about-the-android-market.ipynb | gpl-3.0 | print('Number of apps in the dataset : ' , len(df))
df.sample(7)
"""
Explanation: Sneak peek at the dataset
The dataset was loaded and pre-cleaned as follows: read the CSV, inspect a duplicated entry, drop duplicate apps, and filter out rows with invalid 'Android Ver' and 'Installs' values.
df = pd.read_csv('../input/googleplaystore.csv')
print(df.dtypes)
df.loc[df.App=='Tiny Scanner - PDF Scanner App']
df[df.duplicated(keep='first')]
df.drop_duplicates(subset='App', inplace=True)
df = df[df['Android Ver'] != np.nan]
df = df[df['Android Ver'] != 'NaN']
df = df[df['Installs'] != 'Free']
df = df[df['Installs'] != 'Paid']
print(len(df))
(Note: df['Android Ver'] != np.nan is always True, since NaN compares unequal to everything; the effective null filter is the string comparison against 'NaN'.)
End of explanation
"""
# - Installs : Remove + and ,
df['Installs'] = df['Installs'].apply(lambda x: x.replace('+', '') if '+' in str(x) else x)
df['Installs'] = df['Installs'].apply(lambda x: x.replace(',', '') if ',' in str(x) else x)
df['Installs'] = df['Installs'].apply(lambda x: int(x))
#print(type(df['Installs'].values))
# - Size : Remove 'M', Replace 'k' and divide by 10^-3
#df['Size'] = df['Size'].fillna(0)
df['Size'] = df['Size'].apply(lambda x: str(x).replace('Varies with device', 'NaN') if 'Varies with device' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: str(x).replace('M', '') if 'M' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: str(x).replace(',', '') if 'M' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: float(str(x).replace('k', '')) / 1000 if 'k' in str(x) else x)
df['Size'] = df['Size'].apply(lambda x: float(x))
df['Installs'] = df['Installs'].apply(lambda x: float(x))
df['Price'] = df['Price'].apply(lambda x: str(x).replace('$', '') if '$' in str(x) else str(x))
df['Price'] = df['Price'].apply(lambda x: float(x))
df['Reviews'] = df['Reviews'].apply(lambda x: int(x))
#df['Reviews'] = df['Reviews'].apply(lambda x: 'NaN' if int(x) == 0 else int(x))
#print(df.loc[df.Size == 0.713]) #index = 3384
#df.loc[df.col1 == '']['col2']
# 0 - Free, 1 - Paid
# df['Type'] = pd.factorize(df['Type'])[0]
#print(df.dtypes)
"""
Explanation: Data Cleaning
Convert all app sizes to MB
Remove '+' from 'Number of Installs' to make it numeric
Convert all review text to English using the Google Translator library
End of explanation
"""
#print(df.dtypes)
x = df['Rating'].dropna()
y = df['Size'].dropna()
z = df['Installs'][df.Installs!=0].dropna()
p = df['Reviews'][df.Reviews!=0].dropna()
t = df['Type'].dropna()
price = df['Price']
p = sns.pairplot(pd.DataFrame(list(zip(x, y, np.log(z), np.log10(p), t, price)),
columns=['Rating','Size', 'Installs', 'Reviews', 'Type', 'Price']), hue='Type', palette="Set2")
"""
Explanation: Basic EDA
End of explanation
"""
number_of_apps_in_category = df['Category'].value_counts().sort_values(ascending=True)
data = [go.Pie(
labels = number_of_apps_in_category.index,
values = number_of_apps_in_category.values,
hoverinfo = 'label+value'
)]
plotly.offline.iplot(data, filename='active_category')
"""
Explanation: This is the basic exploratory analysis to look for any evident patterns or relationships between the features.
Android market breakdown
Which category has the highest share of (active) apps in the market?
End of explanation
"""
data = [go.Histogram(
x = df.Rating,
xbins = {'start': 1, 'size': 0.1, 'end' :5}
)]
print('Average app rating = ', np.mean(df['Rating']))
plotly.offline.iplot(data, filename='overall_rating_distribution')
"""
Explanation: Family and Game apps have the highest market prevalence.
Interestingly, Tools, Business and Medical apps are also catching up.
Average rating of apps
Do any apps perform really well or really badly?
End of explanation
"""
import scipy.stats as stats
f = stats.f_oneway(df.loc[df.Category == 'BUSINESS']['Rating'].dropna(),
df.loc[df.Category == 'FAMILY']['Rating'].dropna(),
df.loc[df.Category == 'GAME']['Rating'].dropna(),
df.loc[df.Category == 'PERSONALIZATION']['Rating'].dropna(),
df.loc[df.Category == 'LIFESTYLE']['Rating'].dropna(),
df.loc[df.Category == 'FINANCE']['Rating'].dropna(),
df.loc[df.Category == 'EDUCATION']['Rating'].dropna(),
df.loc[df.Category == 'MEDICAL']['Rating'].dropna(),
df.loc[df.Category == 'TOOLS']['Rating'].dropna(),
df.loc[df.Category == 'PRODUCTIVITY']['Rating'].dropna()
)
print(f)
print('\nThe p-value is extremely small, hence we reject the null hypothesis in favor of the alternate hypothesis.\n')
#temp = df.loc[df.Category.isin(['BUSINESS', 'DATING'])]
groups = df.groupby('Category').filter(lambda x: len(x) > 286).reset_index()
array = groups['Rating'].hist(by=groups['Category'], sharex=True, figsize=(20,20))
"""
Explanation: Generally, most apps do well with an average rating of 4.17.
Let's break this down and check whether any categories perform exceptionally well or poorly.
App ratings across categories - One Way Anova Test
End of explanation
"""
groups = df.groupby('Category').filter(lambda x: len(x) >= 170).reset_index()
#print(type(groups.item.['BUSINESS']))
print('Average rating = ', np.nanmean(list(groups.Rating)))
#print(len(groups.loc[df.Category == 'DATING']))
c = ['hsl('+str(h)+',50%'+',50%)' for h in np.linspace(0, 720, len(set(groups.Category)))]
#df_sorted = df.groupby('Category').agg({'Rating':'median'}).reset_index().sort_values(by='Rating', ascending=False)
#print(df_sorted)
layout = {'title' : 'App ratings across major categories',
'xaxis': {'tickangle':-40},
'yaxis': {'title': 'Rating'},
'plot_bgcolor': 'rgb(250,250,250)',
'shapes': [{
'type' :'line',
'x0': -.5,
'y0': np.nanmean(list(groups.Rating)),
'x1': 19,
'y1': np.nanmean(list(groups.Rating)),
'line': { 'dash': 'dashdot'}
}]
}
data = [{
'y': df.loc[df.Category==category]['Rating'],
'type':'violin',
'name' : category,
'showlegend':False,
#'marker': {'color': 'Set2'},
} for i,category in enumerate(list(set(groups.Category)))]
plotly.offline.iplot({'data': data, 'layout': layout})
"""
Explanation: The average app rating differs significantly across categories.
Best performing categories
End of explanation
"""
groups = df.groupby('Category').filter(lambda x: len(x) >= 50).reset_index()
# sns.set_style('ticks')
# fig, ax = plt.subplots()
# fig.set_size_inches(8, 8)
sns.set_style("darkgrid")
ax = sns.jointplot(df['Size'], df['Rating'])
#ax.set_title('Rating Vs Size')
"""
Explanation: Almost all app categories perform decently. Health and Fitness and Books and Reference produce the highest-quality apps, with 50% of their apps rated above 4.5. This is extremely high!
On the contrary, 50% of apps in the Dating category are rated below the overall average.
A few junk apps also exist in the Lifestyle, Family and Finance categories.
Sizing Strategy - Light Vs Bulky?
How do app sizes impact the app rating?
End of explanation
"""
c = ['hsl('+str(h)+',50%'+',50%)' for h in np.linspace(0, 360, len(list(set(groups.Category))))]
subset_df = df[df.Size > 40]
groups_temp = subset_df.groupby('Category').filter(lambda x: len(x) >20)
# for category in enumerate(list(set(groups_temp.Category))):
# print (category)
data = [{
'x': groups_temp.loc[subset_df.Category==category[1]]['Rating'],
'type':'scatter',
'y' : subset_df['Size'],
'name' : str(category[1]),
'mode' : 'markers',
'showlegend': True,
#'marker': {'color':c[i]}
#'text' : df['rating'],
} for category in enumerate(['GAME', 'FAMILY'])]
layout = {'title':"Rating vs Size",
'xaxis': {'title' : 'Rating'},
'yaxis' : {'title' : 'Size (in MB)'},
'plot_bgcolor': 'rgb(0,0,0)'}
plotly.offline.iplot({'data': data, 'layout': layout})
# heavy_categories = [ 'ENTERTAINMENT', 'MEDICAL', 'DATING']
# data = [{
# 'x': groups.loc[df.Category==category]['Rating'],
# 'type':'scatter',
# 'y' : df['Size'],
# 'name' : category,
# 'mode' : 'markers',
# 'showlegend': True,
# #'text' : df['rating'],
# } for category in heavy_categories]
"""
Explanation: Most top-rated apps are optimally sized between ~2 MB and ~40 MB - neither too light nor too heavy.
End of explanation
"""
paid_apps = df[df.Price>0]
p = sns.jointplot( "Price", "Rating", paid_apps)
"""
Explanation: Most bulky apps (>50 MB) belong to the Game and Family categories. Despite this, these bulky apps are fairly highly rated, indicating that they are bulky for a purpose.
Pricing Strategy - Free Vs Paid?
How do app prices impact app rating?
End of explanation
"""
subset_df = df[df.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY', 'MEDICAL', 'TOOLS', 'FINANCE',
'LIFESTYLE','BUSINESS'])]
sns.set_style('darkgrid')
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
p = sns.stripplot(x="Price", y="Category", data=subset_df, jitter=True, linewidth=1)
title = ax.set_title('App pricing trend across categories')
"""
Explanation: Most top-rated apps are optimally priced between ~1\$ and ~30\$. Only a few apps are priced above 20\$.
Current pricing trend - How to price your app?
End of explanation
"""
#print('Junk apps priced above 350$')
df[['Category', 'App']][df.Price > 200]
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
subset_df_price = subset_df[subset_df.Price<100]
p = sns.stripplot(x="Price", y="Category", data=subset_df_price, jitter=True, linewidth=1)
title = ax.set_title('App pricing trend across categories - after filtering for junk apps')
"""
Explanation: Shocking... apps priced above 250\$!!! Let's quickly examine what these junk apps are.
End of explanation
"""
# Stacked bar graph for top 5-10 categories - Ratio of paid and free apps
#fig, ax = plt.subplots(figsize=(15,10))
new_df = df.groupby(['Category', 'Type']).agg({'App' : 'count'}).reset_index()
#print(new_df)
# outer_group_names = df['Category'].sort_values().value_counts()[:5].index
# outer_group_values = df['Category'].sort_values().value_counts()[:5].values
outer_group_names = ['GAME', 'FAMILY', 'MEDICAL', 'TOOLS']
outer_group_values = [len(df.App[df.Category == category]) for category in outer_group_names]
a, b, c, d=[plt.cm.Blues, plt.cm.Reds, plt.cm.Greens, plt.cm.Purples]
inner_group_names = ['Paid', 'Free'] * 4
inner_group_values = []
#inner_colors = ['#58a27c','#FFD433']
for category in outer_group_names:
for t in ['Paid', 'Free']:
x = new_df[new_df.Category == category]
try:
#print(x.App[x.Type == t].values[0])
inner_group_values.append(int(x.App[x.Type == t].values[0]))
except:
#print(x.App[x.Type == t].values[0])
inner_group_values.append(0)
explode = (0.025,0.025,0.025,0.025)
# First Ring (outside)
fig, ax = plt.subplots(figsize=(10,10))
ax.axis('equal')
mypie, texts, _ = ax.pie(outer_group_values, radius=1.2, labels=outer_group_names, autopct='%1.1f%%', pctdistance=1.1,
labeldistance= 0.75, explode = explode, colors=[a(0.6), b(0.6), c(0.6), d(0.6)], textprops={'fontsize': 16})
plt.setp( mypie, width=0.5, edgecolor='black')
# Second Ring (Inside)
mypie2, _ = ax.pie(inner_group_values, radius=1.2-0.5, labels=inner_group_names, labeldistance= 0.7,
textprops={'fontsize': 12}, colors = [a(0.4), a(0.2), b(0.4), b(0.2), c(0.4), c(0.2), d(0.4), d(0.2)])
plt.setp( mypie2, width=0.5, edgecolor='black')
plt.margins(0,0)
# show it
plt.tight_layout()
plt.show()
#ax = sns.countplot(x="Category", hue="Type", data=new_df)
#df.groupby(['Category', 'Type']).count()['App'].unstack().plot(kind='bar', stacked=True, ax=ax)
#ylabel = plt.ylabel('Number of apps')
"""
Explanation: Clearly, Medical and Family apps are the most expensive. Some medical apps cost up to 80\$.
All other apps are priced under 30\$.
Surprisingly, all game apps are reasonably priced below 20\$.
Distribution of paid and free apps across categories
End of explanation
"""
trace0 = go.Box(
y=np.log10(df['Installs'][df.Type=='Paid']),
name = 'Paid',
marker = dict(
color = 'rgb(214, 12, 140)',
)
)
trace1 = go.Box(
y=np.log10(df['Installs'][df.Type=='Free']),
name = 'Free',
marker = dict(
color = 'rgb(0, 128, 128)',
)
)
layout = go.Layout(
title = "Number of downloads of paid apps Vs free apps",
yaxis= {'title': 'Number of downloads (log-scaled)'}
)
data = [trace0, trace1]
plotly.offline.iplot({'data': data, 'layout': layout})
"""
Explanation: Distribution of free and paid apps across major categories
Are paid apps downloaded as much as free apps?
End of explanation
"""
temp_df = df[df.Type == 'Paid']
temp_df = temp_df[temp_df.Size > 5]
#type_groups = df.groupby('Type')
data = [{
#'x': type_groups.get_group(t)['Rating'],
'x' : temp_df['Rating'],
'type':'scatter',
'y' : temp_df['Size'],
#'name' : t,
'mode' : 'markers',
#'showlegend': True,
'text' : df['Size'],
} for t in set(temp_df.Type)]
layout = {'title':"Rating vs Size",
'xaxis': {'title' : 'Rating'},
'yaxis' : {'title' : 'Size (in MB)'},
'plot_bgcolor': 'rgb(0,0,0)'}
plotly.offline.iplot({'data': data, 'layout': layout})
"""
Explanation: Paid apps have a relatively lower number of downloads than free apps. However, it is not too bad.
How do the sizes of paid apps and free apps vary?
End of explanation
"""
#df['Installs'].corr(df['Reviews'])#df['Insta
#print(np.corrcoef(l, rating))
corrmat = df.corr()
#f, ax = plt.subplots()
p =sns.heatmap(corrmat, annot=True, cmap=sns.diverging_palette(220, 20, as_cmap=True))
df_copy = df.copy()
df_copy = df_copy[df_copy.Reviews > 10]
df_copy = df_copy[df_copy.Installs > 0]
df_copy['Installs'] = np.log10(df['Installs'])
df_copy['Reviews'] = np.log10(df['Reviews'])
sns.lmplot("Reviews", "Installs", data=df_copy)
ax = plt.gca()
_ = ax.set_title('Number of Reviews Vs Number of Downloads (Log scaled)')
"""
Explanation: The majority of highly rated paid apps have small sizes. This means that most paid apps are designed and developed to cater to specific functionalities and hence are not bulky.
Users prefer to pay for apps that are lightweight. A paid app that is bulky may not perform well in the market.
Exploring Correlations
End of explanation
"""
reviews_df = pd.read_csv('../input/googleplaystore_user_reviews.csv')
merged_df = pd.merge(df, reviews_df, on = "App", how = "inner")
merged_df = merged_df.dropna(subset=['Sentiment', 'Translated_Review'])
grouped_sentiment_category_count = merged_df.groupby(['Category', 'Sentiment']).agg({'App': 'count'}).reset_index()
grouped_sentiment_category_sum = merged_df.groupby(['Category']).agg({'Sentiment': 'count'}).reset_index()
new_df = pd.merge(grouped_sentiment_category_count, grouped_sentiment_category_sum, on=["Category"])
#print(new_df)
new_df['Sentiment_Normalized'] = new_df.App/new_df.Sentiment_y
new_df = new_df.groupby('Category').filter(lambda x: len(x) ==3)
# new_df = new_df[new_df.Category.isin(['HEALTH_AND_FITNESS', 'GAME', 'FAMILY', 'EDUCATION', 'COMMUNICATION',
# 'ENTERTAINMENT', 'TOOLS', 'SOCIAL', 'TRAVEL_AND_LOCAL'])]
new_df
trace1 = go.Bar(
x=list(new_df.Category[::3])[6:-5],
y= new_df.Sentiment_Normalized[::3][6:-5],
name='Negative',
marker=dict(color = 'rgb(209,49,20)')
)
trace2 = go.Bar(
x=list(new_df.Category[::3])[6:-5],
y= new_df.Sentiment_Normalized[1::3][6:-5],
name='Neutral',
marker=dict(color = 'rgb(49,130,189)')
)
trace3 = go.Bar(
x=list(new_df.Category[::3])[6:-5],
y= new_df.Sentiment_Normalized[2::3][6:-5],
name='Positive',
marker=dict(color = 'rgb(49,189,120)')
)
data = [trace1, trace2, trace3]
layout = go.Layout(
title = 'Sentiment analysis',
barmode='stack',
xaxis = {'tickangle': -45},
yaxis = {'title': 'Fraction of reviews'}
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.iplot({'data': data, 'layout': layout})
"""
Explanation: A moderate positive correlation of 0.63 exists between the number of reviews and number of downloads. This means that customers tend to download a given app more if it has been reviewed by a larger number of people.
This also means that many active users who download an app usually also leave back a review or feedback.
So, getting your app reviewed by more people may be a good idea to increase your app's capture in the market!
Basic sentiment analysis - User reviews
End of explanation
"""
#merged_df.loc[merged_df.Type=='Free']['Sentiment_Polarity']
sns.set_style('ticks')
sns.set_style("darkgrid")
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 8.27)
ax = sns.boxplot(x='Type', y='Sentiment_Polarity', data=merged_df)
title = ax.set_title('Sentiment Polarity Distribution')
"""
Explanation: Health and Fitness apps perform the best, having more than 85% positive reviews.
On the contrary, many Game and Social apps perform badly, with roughly 50% positive and 50% negative reviews.
End of explanation
"""
from wordcloud import WordCloud
wc = WordCloud(background_color="white", max_words=200, colormap="Set2")
# generate word cloud
from nltk.corpus import stopwords
stop = stopwords.words('english')
stop = stop + ['app', 'APP' ,'ap', 'App', 'apps', 'application', 'browser', 'website', 'websites', 'chrome', 'click', 'web', 'ip', 'address',
'files', 'android', 'browse', 'service', 'use', 'one', 'download', 'email', 'Launcher']
#merged_df = merged_df.dropna(subset=['Translated_Review'])
merged_df['Translated_Review'] = merged_df['Translated_Review'].apply(lambda x: " ".join(x for x in str(x).split(' ') if x not in stop))
#print(any(merged_df.Translated_Review.isna()))
merged_df.Translated_Review = merged_df.Translated_Review.apply(lambda x: x if 'app' not in x.split(' ') else np.nan)
merged_df.dropna(subset=['Translated_Review'], inplace=True)
free = merged_df.loc[merged_df.Type=='Free']['Translated_Review'].apply(lambda x: '' if x=='nan' else x)
wc.generate(''.join(str(free)))
plt.figure(figsize=(10, 10))
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
"""
Explanation: Free apps receive a lot of harsh comments which are indicated as outliers on the negative Y-axis.
Users are more lenient and tolerant while reviewing paid apps - moderate choice of words. They are never extremely negative while reviewing a paid app.
WORDCLOUD - A quick look on reviews
End of explanation
"""
paid = merged_df.loc[merged_df.Type=='Paid']['Translated_Review'].apply(lambda x: '' if x=='nan' else x)
wc.generate(''.join(str(paid)))
plt.figure(figsize=(10, 10))
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
"""
Explanation: FREE APPS
Negative words: ads, bad, hate
Positive words: good, love, best, great
End of explanation
"""
|
taliamo/Final_Project | organ_pitch/.ipynb_checkpoints/upload_pitch_data-checkpoint.ipynb | mit | # I import useful libraries (with functions) so I can visualize my data
# I use Pandas because this dataset has word/string column titles and I like the readability features of commands and finish visual products that Pandas offers
import pandas as pd
import matplotlib.pyplot as plt
import re
import numpy as np
%matplotlib inline
#I want to be able to easily scroll through this notebook so I limit the length of the appearance of my dataframes
from pandas import set_option
set_option('display.max_rows', 10)
"""
Explanation: T. Martz-Oberlander, 2015-11-12, CO2 and Speed of Sound
Formatting PITCH pipe organ data for Python operations
The entire script looks for mathematical relationships between CO2 concentration changes and pitch changes from a pipe organ. This script uploads, cleans data and organizes new dataframes, creates figures, and performs statistical tests on the relationships between variable CO2 and frequency of sound from a note played on a pipe organ.
This uploader script:
1) Uploads organ note pitch data files
2) Munges it (creates a Date Time column for the time stamps), establishes column contents as floats
Here I pursue data analysis route 1 (as mentioned in my notebook.md file), which involves comparing one pitch dataframe with one dataframe of environmental characteristics taken at one sensor location. Both dataframes are compared by the time of data recorded.
End of explanation
"""
#I import a pitch data file
#Comment by Nick: changed the path you upload the data from, making it compatible with clone copies of your project
pitch=pd.read_table('../Data/pitches.csv', sep=',')
#assigning columns names
pitch.columns = ['date_time','section','note','freq1','freq2','freq3', 'freq4', 'freq5', 'freq6', 'freq7', 'freq8', 'freq9']
#I display my dataframe
pitch
#Tell python that my date_time column has a "datetime" values, so it won't read as a string or object
pitch['date_time'] = pd.to_datetime(pitch['date_time'])
#print the new table and the type of data to check that all columns are in line with the column names
print(pitch)
#Check the type of data in each column. This shows there are integers and floats, and datetime. This is good for analysing.
pitch.dtypes
"""
Explanation: Uploading data into Python
First I upload my data sets. I am working with two: one for pitch measurements and another for environmental characteristics (CO2, temperature (deg C), and relative humidity (RH) (%) measurements). My data comes from environmental sensing logger devices in the "Choir Division" section of the organ console.
End of explanation
"""
|
phoebe-project/phoebe2-docs | development/tutorials/ETV.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.4,<2.5"
"""
Explanation: ETV Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
"""
ps, constraints = phoebe.dataset.etv(component='mycomponent')
print(ps)
"""
Explanation: Dataset Parameters
Let's create the ParameterSet which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach this ParameterSet for us.
End of explanation
"""
print(ps['Ns'])
"""
Explanation: Currently, none of the available etv methods actually compute fluxes. But if one is added that computes a light-curve and actually finds the time of mid-eclipse, then the passband-dependent parameters will be added here.
For information on these passband-dependent parameters, see the section on the lc dataset
Ns
End of explanation
"""
print(ps['time_ephems'])
"""
Explanation: time_ephems
NOTE: this parameter will be constrained when added through add_dataset
End of explanation
"""
print(ps['time_ecls'])
"""
Explanation: time_ecls
End of explanation
"""
print(ps['etvs'])
"""
Explanation: etvs
NOTE: this parameter will be constrained when added through add_dataset
End of explanation
"""
print(ps['sigmas'])
"""
Explanation: sigmas
End of explanation
"""
ps_compute = phoebe.compute.phoebe()
print(ps_compute)
"""
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to the ETV dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
End of explanation
"""
print(ps_compute['etv_method'])
"""
Explanation: etv_method
End of explanation
"""
print(ps_compute['etv_tol'])
"""
Explanation: etv_tol
End of explanation
"""
b.add_dataset('etv', Ns=np.linspace(0,10,11), dataset='etv01')
b.add_compute()
b.run_compute()
b['etv@model'].twigs
print(b['time_ephems@primary@etv@model'])
print(b['time_ecls@primary@etv@model'])
print(b['etvs@primary@etv@model'])
"""
Explanation: Synthetics
End of explanation
"""
axs, artists = b['etv@model'].plot()
"""
Explanation: Plotting
By default, ETV datasets plot as etv vs time_ephem. Of course, a simple binary with no companion or apsidal motion won't show much of a signal (this is essentially flat with some noise). To see more ETV examples see:
Apsidal Motion
Minimial Hierarchical Triple
LTTE ETVs in a Hierarchical Triple
End of explanation
"""
axs, artists = b['etv@model'].plot(x='time_ecls', y=2)
"""
Explanation: Alternatively, especially when overplotting with a light curve, it's sometimes handy to just plot ticks at each of the eclipse times. This can easily be done by passing a single value for 'y'.
For other examples with light curves as well see:
* Apsidal Motion
* LTTE ETVs in a Hierarchical Triple
End of explanation
"""
|
thinkingmachines/deeplearningworkshop | codelab_3_tensorflow_nn.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
N = 100 # points per class
D = 2 # dimensionality at 2 so we can eyeball it
K = 3 # number of classes
X = np.zeros((N*K, D)) # generate an empty matrix to hold X features
y = np.zeros(N*K, dtype='int32') # switching this to int32
# for 3 classes, evenly generates spiral arms
for j in range(K):
ix = range(N*j, N*(j+1))
r = np.linspace(0.0,1,N) #radius
t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2 # theta
X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
y[ix] = j
plt.scatter(X[:,0], X[:,1], c=y, s=20, cmap=plt.cm.Spectral)
plt.show()
"""
Explanation: Building a Two Layer Neural Network in TensorFlow
Use built-in Deep Learning Classifier
Recall our Spiral Dataset
from earlier today
End of explanation
"""
import tensorflow as tf
# what should the classifier expect in terms of features
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=D)]
# defining the actual classifier
dnn_spiral_classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
activation_fn = tf.nn.softmax, # softmax activation
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01), #GD with LR of 0.01
hidden_units = [10], # one hidden layer, containing 10 neurons
n_classes = K, # K target classes
model_dir="/tmp/spiral_model") # directory for saving model checkpoints
# turn data into tensors to feed into the computational graph
# honestly input_fn could also handle these as np.arrays but this is here to show you that the tf.constant operation can run on np.array input
def get_inputs():
X_tensor = tf.constant(X)
y_tensor = tf.constant(y)
return X_tensor, y_tensor
# fit the model
dnn_spiral_classifier.fit(input_fn=get_inputs, steps=200)
# interestingly, you can continue training the model by continuing to call fit
dnn_spiral_classifier.fit(input_fn=get_inputs, steps=300)
#evaluating the accuracy
accuracy_score = dnn_spiral_classifier.evaluate(input_fn=get_inputs,
steps=1)["accuracy"]
print("\n Accuracy: {0:f}\n".format(accuracy_score))
"""
Explanation: TensorFlow
Let's create a DNNClassifier using TF's built-in classifier, and evaluate its accuracy.
End of explanation
"""
%ls '/tmp/spiral_model/'
"""
Explanation: Notice the following:
The higher level library vastly simplified the following mechanics:
tf.session management
training the model
running evaluation loops
feeding data into the model
generating predictions from the model
saving the model in a checkpoint file
For most use cases, it's likely that the many common models built into tf will be able to solve your problem. You'll have to do model tuning by figuring out the correct parameters. Building your computational graph node by node isn't likely to be needed unless you're doing academic research or working with very specialized datasets where default performance plateaus.
Look at the checkpoints
Poke inside the /tmp/spiral_model/ directory to see how the checkpoint data is stored. What's contained in these files?
End of explanation
"""
def new_points():
    return np.array([[1.0, 1.0],
                     [-1.5, -1.0]], dtype=np.float32)  # float32; int32 would truncate -1.5 to -1
predictions = list(dnn_spiral_classifier.predict(input_fn=new_points))
print(
"New Samples, Class Predictions: {}\n"
.format(predictions))
"""
Explanation: Predicting on a new value
Let's classify a new point
End of explanation
"""
# watch out for this, tf.classifier.evaluate is going to be deprecated, so keep an eye out for a long-term solution to calculating accuracy
accuracy_score = dnn_spiral_classifier.evaluate(input_fn=get_inputs,
steps=1)["accuracy"]
"""
Explanation: Digging into the DNNClassifier
The DNNClassifier is one fo the (Estimators)[https://www.tensorflow.org/api_guides/python/contrib.learn#Estimators] available in the tf.contrib.learn libary. Other estimators include:
KMeansClustering
DNNRegressor
LinearClassifier
LinearRegressor
LogisticRegressor
Each one of these can perform various actions on the graph, including:
evaluate
infer
train
Each one of these can read in batched input data with types including:
pandas data
real valued columns from an input
real valued columns from an input function (what we used above)
batches
Keep an eye on the documentation and updates. TensorFlow is under constant development. Things change very quickly!
End of explanation
"""
# sample code to use for the gold star challenge from https://www.tensorflow.org/get_started/get_started
import numpy as np
import tensorflow as tf
# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
sess.run(train, {x:x_train, y:y_train})
# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x:x_train, y:y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
"""
Explanation: Exercises
Get into the [TensorFlow API docs](https://www.tensorflow.org/api_docs/python/tf). Try the following and see how each change impacts the final scores
Change the activation function to a ReLU
Change the optimization function to stochastic gradient descent, then change it again to Adagrad
Add more steps to training
Add more layers
Increase the number of neurons in each hidden layer
Change the learning rate to huge and tiny values
Gold Star Challenge
Reimplement the Spiral Classifier as a 2 Layer Neural Network in TensorFlow Core
End of explanation
"""
|
LimeeZ/phys292-2015-work | assignments/assignment08/InterpolationEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
"""
Explanation: Interpolation Exercise 2
End of explanation
"""
# Integer-spaced boundary points of the square, plus the single interior point
side = np.arange(-5, 6)  # 11 integer points from -5 to 5
x = np.hstack([side, side, np.full(9, -5.0), np.full(9, 5.0), [0.0]])
y = np.hstack([np.full(11, -5.0), np.full(11, 5.0), side[1:-1], side[1:-1], [0.0]])
f = np.hstack([np.zeros(40), [1.0]])  # f is zero on the boundary and 1 at (0, 0)
"""
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
"""
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
"""
Explanation: The following plot should show the points on the boundary and the single point in the interior:
End of explanation
"""
xnew = np.linspace(-5.0, 5.0, 100)
ynew = np.linspace(-5.0, 5.0, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x, y), f, (Xnew, Ynew), method='cubic')
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
"""
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
End of explanation
"""
plt.contourf(Xnew, Ynew, Fnew, 20, cmap='viridis')
plt.colorbar(label='$f(x,y)$')
plt.xlabel('x'); plt.ylabel('y')
plt.title('Cubic spline interpolation of $f(x,y)$');
assert True # leave this to grade the plot
"""
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation
"""
|
drphilmarshall/StatisticalMethods | lessons/templates/RISE_example.ipynb | gpl-2.0 | # This is a code block within slide #2.
b = 1
# Obviously, its the same python instance under the hood.
b
"""
Explanation: This is slide #1.
Space and ctrl-space (or the clickable arrows) are used to move between slides.
Up/down arrows (or mouse) are used to move between cells.
Shift-enter executes a cell, as usual.
Cell contents can be changed in slideshow mode.
A "sub-slide" seems to be a new slide that doesn't increment the "major" number in the corner, but does make nagivation using the clickable arrows confusing. I'm not seeing the benefit, to be honest.
This is slide #2.
End of explanation
"""
# This is a code block within slide #3.
a = 1
# This is a "fragment" within slide #3. What is a fragment???
a
# We can still access `b` here, of course.
b
"""
Explanation: This is slide #3.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 8.0)
import numpy as np
c = np.arange(10)
plt.plot(c);
"""
Explanation: What about displaying static images?
<table width="50%">
<tr>
<td><img src="sunrisezoom.jpeg" width=320></td>
</tr>
</table>
It works! Take my word for it, or change this to point to your own file.
What about matplotlib (and vertical scrolling)?
End of explanation
"""
|
dtamayo/rebound | ipython_examples/VariationalEquationsWithChainRule.ipynb | gpl-3.0 | import rebound
import numpy as np
"""
Explanation: Using Variational Equations With the Chain Rule
For a complete introduction to variational equations, please read the paper by Rein and Tamayo (2016).
Variational equations can be used to calculate derivatives in an $N$-body simulation. More specifically, given a set of initial conditions $\alpha_i$ and a set of variables at the end of the simulation $v_k$, we can calculate all first order derivatives
$$\frac{\partial v_k}{\partial \alpha_i}$$
as well as all second order derivates
$$\frac{\partial^2 v_k}{\partial \alpha_i\partial \alpha_j}$$
For this tutorial, we work with a two planet system.
We first chose the semi-major axis $a$ of the outer planet as an initial condition (this is our $\alpha_i$). At the end of the simulation we output the velocity of the star in the $x$ direction (this is our $v_k$).
To do that, let us first import REBOUND and numpy.
End of explanation
"""
def calculate_vx(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
return sim.particles[0].vx # return star's velocity in the x direction
calculate_vx(a=1.5) # initial semi-major axis of the outer planet is 1.5
"""
Explanation: The following function takes $a$ as a parameter, then integrates the two planet system and returns the velocity of the star at the end of the simulation.
End of explanation
"""
calculate_vx(a=1.51) # initial semi-major axis of the outer planet is 1.51
"""
Explanation: If we run the simulation again, with a different initial $a$, we get a different velocity:
End of explanation
"""
def calculate_vx_derivative(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
v1 = sim.add_variation() # add a set of variational particles
v1.vary(2,"a") # initialize the variational particles
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
return sim.particles[0].vx, v1.particles[0].vx # return star's velocity and its derivative
"""
Explanation: We could now run many different simulations to map out the parameter space. This is a very simple example of a typical use case: the fitting of a radial velocity datapoint.
However, we can be smarter than simply running an almost identical simulation over and over again by using variational equations. These will allow us to calculate the derivative of the stellar velocity at the end of the simulation. We can take the derivative with respect to any of the initial conditions, e.g. a particle's mass, semi-major axis, x-coordinate, etc. Here, we want to take the derivative with respect to the semi-major axis of the outer planet. The following function does exactly that:
End of explanation
"""
calculate_vx_derivative(a=1.5)
"""
Explanation: Note the two new functions. sim.add_variation() adds a set of variational particles to the simulation. All variational particles are by default initialized to zero. We use the vary() function to initialize them to a variation that we are interested in. Here, we initialize the variational particles corresponding to a change in the semi-major axis, $a$, of the particle with index 2 (the outer planet).
End of explanation
"""
a0=1.5
va0, dva0 = calculate_vx_derivative(a=a0)
def v(a):
return va0 + (a-a0)*dva0
print(v(1.51))
"""
Explanation: We can use the derivative to construct a Taylor series expansion of the velocity around $a_0=1.5$:
$$v(a) \approx v(a_0) + (a-a_0) \frac{\partial v}{\partial a}$$
End of explanation
"""
def calculate_vx_derivative_2ndorder(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
v1 = sim.add_variation()
v1.vary(2,"a")
# The following lines add and initialize second order variational particles
v2 = sim.add_variation(order=2, first_order=v1)
v2.vary(2,"a")
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
# return star's velocity and its first and second derivatives
return sim.particles[0].vx, v1.particles[0].vx, v2.particles[0].vx
"""
Explanation: Compare this value with the explicitly calculated one above. They are almost the same! But we can do even better, by using second order variational equations to calculate second order derivatives.
End of explanation
"""
a0=1.5
va0, dva0, ddva0 = calculate_vx_derivative_2ndorder(a=a0)
def v(a):
return va0 + (a-a0)*dva0 + 0.5*(a-a0)**2*ddva0
print(v(1.51))
"""
Explanation: Using a Taylor series expansion to second order gives a better estimate of v(1.51).
End of explanation
"""
def calculate_w_derivative(a):
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=a) # outer planet
v1 = sim.add_variation() # add a set of variational particles
v1.vary(2,"a") # initialize the variational particles
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
c = 1.02 # some constant
w = (sim.particles[0].vx-c)**2
dwda = 2.*v1.particles[0].vx * (sim.particles[0].vx-c)
return w, dwda # return w and its derivative
calculate_w_derivative(1.5)
"""
Explanation: Now that we know how to calculate first and second order derivatives of positions and velocities of particles, we can simply use the chain rule to calculate more complicated derivatives. For example, instead of the velocity $v_x$, you might be interested in the quantity $w\equiv(v_x - c)^2$ where $c$ is a constant. This is something that typically appears in a $\chi^2$ fit. The chain rule gives us:
$$ \frac{\partial w}{\partial a} = 2 \cdot (v_x-c)\cdot \frac{\partial v_x}{\partial a}$$
The variational equations provide the $\frac{\partial v_x}{\partial a}$ part, the ordinary particles provide $v_x$.
End of explanation
"""
def calculate_vx_derivative_h():
h, k = 0.1, 0.2
e = float(np.sqrt(h**2+k**2))
omega = np.arctan2(k,h)
sim = rebound.Simulation()
sim.add(m=1.) # star
sim.add(primary=sim.particles[0],m=1e-3, a=1) # inner planet
sim.add(primary=sim.particles[0],m=1e-3, a=1.5, e=e, omega=omega) # outer planet
v1 = sim.add_variation()
dpde = rebound.Particle(simulation=sim, particle=sim.particles[2], variation="e")
dpdomega = rebound.Particle(simulation=sim, particle=sim.particles[2], m=1e-3, a=1.5, e=e, omega=omega, variation="omega")
v1.particles[2] = h/e * dpde - k/(e*e) * dpdomega
sim.integrate(2.*np.pi*10.) # integrate for ~10 orbits
# return star's velocity and its first derivatives
return sim.particles[0].vx, v1.particles[0].vx
calculate_vx_derivative_h()
"""
Explanation: Similarly, you can also use the chain rule to vary initial conditions of particles in a way that is not supported by REBOUND by default. For example, suppose you want to work in some fancy coordinate system, using $h\equiv e\sin(\omega)$ and $k\equiv e \cos(\omega)$ variables instead of $e$ and $\omega$. You might want to do that because $h$ and $k$ variables are often better behaved near $e\sim0$. In that case the chain rule gives us:
$$\frac{\partial p(e(h, k), \omega(h, k))}{\partial h} = \frac{\partial p}{\partial e}\frac{\partial e}{\partial h} + \frac{\partial p}{\partial \omega}\frac{\partial \omega}{\partial h}$$
where $p$ is any of the particle's initial coordinates. In our case the derivatives of $e$ and $\omega$ with respect to $h$ are:
$$\frac{\partial \omega}{\partial h} = -\frac{k}{e^2}\quad\text{and}\quad \frac{\partial e}{\partial h} = \frac{h}{e}$$
With REBOUND, you can easily implement this. The following function calculates the derivative of the star's velocity with respect to the outer planet's $h$ variable.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_mixed_norm_inverse.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.inverse_sparse import mixed_norm
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.viz import plot_sparse_source_estimates
print(__doc__)
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left Auditory'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0, tmax=0.3)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname, surf_ori=True)
ylim = dict(eeg=[-10, 10], grad=[-400, 400], mag=[-600, 600])
evoked.plot(ylim=ylim, proj=True)
"""
Explanation: ================================================================
Compute sparse inverse solution with mixed norm: MxNE and irMxNE
================================================================
Runs (ir)MxNE (L1/L2 [1] or L0.5/L2 [2] mixed norm) inverse solver.
L0.5/L2 is done with irMxNE which allows for sparser
source estimates with less amplitude bias due to the non-convexity
of the L0.5/L2 mixed norm penalty.
References
.. [1] Gramfort A., Kowalski M. and Hamalainen, M.
"Mixed-norm estimates for the M/EEG inverse problem using accelerated
gradient methods", Physics in Medicine and Biology, 2012.
https://doi.org/10.1088/0031-9155/57/7/1937.
.. [2] Strohmeier D., Haueisen J., and Gramfort A.
"Improved MEG/EEG source localization with reweighted mixed-norms",
4th International Workshop on Pattern Recognition in Neuroimaging,
Tuebingen, 2014. 10.1109/PRNI.2014.6858545
End of explanation
"""
alpha = 50 # regularization parameter between 0 and 100 (100 is high)
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
n_mxne_iter = 10 # if > 1 use L0.5/L2 reweighted mixed norm solver
# if n_mxne_iter > 1 dSPM weighting can be avoided.
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=None, depth=depth, fixed=True)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute (ir)MxNE inverse solution
stc, residual = mixed_norm(
evoked, forward, cov, alpha, loose=loose, depth=depth, maxit=3000,
tol=1e-4, active_set_size=10, debias=True, weights=stc_dspm,
weights_min=8., n_mxne_iter=n_mxne_iter, return_residual=True)
residual.plot(ylim=ylim, proj=True)
"""
Explanation: Run solver
End of explanation
"""
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
fig_name="MxNE (cond %s)" % condition,
opacity=0.1)
# and on the fsaverage brain after morphing
stc_fsaverage = stc.morph(subject_from='sample', subject_to='fsaverage',
grade=None, sparse=True, subjects_dir=subjects_dir)
src_fsaverage_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
src_fsaverage = mne.read_source_spaces(src_fsaverage_fname)
plot_sparse_source_estimates(src_fsaverage, stc_fsaverage, bgcolor=(1, 1, 1),
opacity=0.1)
"""
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation
"""
|
belteki/alarms | Alarms_GitHub.ipynb | gpl-3.0 | import IPython
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import os
import sys
import pickle
import scipy as sp
from scipy import stats
from pandas import Series, DataFrame
from datetime import datetime, timedelta
%matplotlib inline
matplotlib.style.use('classic')
matplotlib.rcParams['figure.facecolor'] = 'w'
pd.set_option('display.max_rows', 100)
pd.set_option('display.max_columns', 100)
pd.set_option('mode.chained_assignment', None)
print("Python version: {}".format(sys.version))
print("IPython version: {}".format(IPython.__version__))
print("pandas version: {}".format(pd.__version__))
print("matplotlib version: {}".format(matplotlib.__version__))
print("NumPy version: {}".format(np.__version__))
print("SciPy version: {}".format(sp.__version__))
"""
Explanation: Analysis of neonatal ventilator alarms
Author: Dr Gusztav Belteki
This Notebook contains the code used for data processing, statistical analysis and
visualization described in the following paper:
Belteki G, Morley CJ. Frequency, duration and cause of ventilator alarms on a
neonatal intensive care unit. Arch Dis Child Fetal Neonatal Ed.
2018 Jul;103(4):F307-F311. doi: 10.1136/archdischild-2017-313493.
Epub 2017 Oct 27. PubMed PMID: 29079651.
Link to the paper: https://fn.bmj.com/content/103/4/F307.long
Contact: gusztav.belteki@addenbrookes.nhs.uk; gbelteki@aol.com
Importing the required libraries and setting options
End of explanation
"""
from gb_loader import *
from gb_stats import *
from gb_transform import *
"""
Explanation: Import modules containing own functions
End of explanation
"""
# Topic of the Notebook which will also be the name of the subfolder containing results
TOPIC = 'alarms_2'
# Name of the external hard drive
DRIVE = 'GUSZTI'
# Directory containing clinical and blood gas data
CWD = '/Users/guszti/ventilation_data'
# Directory on external drive to read the ventilation data from
DIR_READ = '/Volumes/%s/ventilation_data' % DRIVE
# Directory to write results and selected images to
if not os.path.isdir('%s/%s/%s' % (CWD, 'Analyses', TOPIC)):
os.makedirs('%s/%s/%s' % (CWD, 'Analyses', TOPIC))
DIR_WRITE = '%s/%s/%s' % (CWD, 'Analyses', TOPIC)
# Images and raw data will be written on an external hard drive
if not os.path.isdir('/Volumes/%s/data_dump/%s' % (DRIVE, TOPIC)):
os.makedirs('/Volumes/%s/data_dump/%s' % (DRIVE, TOPIC))
DATA_DUMP = '/Volumes/%s/data_dump/%s' % (DRIVE, TOPIC)
os.chdir(CWD)
os.getcwd()
DIR_READ
DIR_WRITE
DATA_DUMP
"""
Explanation: List and set the working directory and the directories to write out data
End of explanation
"""
# One recording from each patient, all of them 24 hours old or longer
# The sub folders containing the individual recordings have the same names within cwd
recordings = ['DG001', 'DG002_1', 'DG003', 'DG004', 'DG005_1', 'DG006_2', 'DG007', 'DG008', 'DG009', 'DG010',
'DG011', 'DG013', 'DG014', 'DG015', 'DG016', 'DG017', 'DG018_1', 'DG020',
'DG021', 'DG022', 'DG023', 'DG025', 'DG026', 'DG027', 'DG028', 'DG029', 'DG030',
'DG031', 'DG032_2', 'DG033', 'DG034', 'DG035', 'DG037', 'DG038_1', 'DG039', 'DG040_1', 'DG041',
'DG042', 'DG043', 'DG044', 'DG045', 'DG046_2', 'DG047', 'DG048', 'DG049', 'DG050']
"""
Explanation: List of the recordings
End of explanation
"""
clinical_details = pd.read_excel('%s/data_grabber_patient_data_combined.xlsx' % CWD)
clinical_details.index = clinical_details['Recording']
clinical_details.info()
current_weights = {}
for recording in recordings:
    current_weights[recording] = clinical_details.loc[recording, 'Current weight'] / 1000
"""
Explanation: Import clinical details
End of explanation
"""
slow_measurements = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = slow_measurement_finder(flist)
print('Loading recording %s' % recording)
print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
slow_measurements[recording] = data_loader(fnames)
# 46 recordings from 46 patients (4 recordings excluded as they were < 24 hours long)
len(slow_measurements)
"""
Explanation: Import ventilator parameters retrieved with 1/sec frequency
End of explanation
"""
for recording in recordings:
try:
a = slow_measurements[recording]
a['VT_kg'] = a['5001|VT [mL]'] / current_weights[recording]
a['VTi_kg'] = a['5001|VTi [mL]'] / current_weights[recording]
a['VTe_kg'] = a['5001|VTe [mL]'] / current_weights[recording]
a['VTmand_kg'] = a['5001|VTmand [mL]'] / current_weights[recording]
a['VTspon_kg'] = a['5001|VTspon [mL]'] / current_weights[recording]
a['VTimand_kg'] = a['5001|VTimand [mL]'] / current_weights[recording]
a['VTemand_kg'] = a['5001|VTemand [mL]'] / current_weights[recording]
a['VTispon_kg'] = a['5001|VTispon [mL]'] / current_weights[recording]
a['VTespon_kg'] = a['5001|VTespon [mL]'] / current_weights[recording]
except KeyError:
# print('%s does not have all of the parameters' % recording)
pass
for recording in recordings:
try:
a = slow_measurements[recording]
a['VThf_kg'] = a['5001|VThf [mL]'] / current_weights[recording]
a['DCO2_corr_kg'] = a['5001|DCO2 [10*mL^2/s]'] * 10 / (current_weights[recording]) ** 2
except KeyError:
# print('%s does not have all of the parameters' % recording)
pass
for recording in recordings:
try:
a = slow_measurements[recording]
a['MV_kg'] = a['5001|MV [L/min]'] / current_weights[recording]
a['MVi_kg'] = a['5001|MVi [L/min]'] / current_weights[recording]
a['MVe_kg'] = a['5001|MVe [L/min]'] / current_weights[recording]
a['MVemand_kg'] = a['5001|MVemand [L/min]'] / current_weights[recording]
a['MVespon_kg'] = a['5001|MVespon [L/min]'] / current_weights[recording]
a['MVleak_kg'] = a['5001|MVleak [L/min]'] / current_weights[recording]
except KeyError:
# print('%s does not have all of the parameters' % recording)
pass
"""
Explanation: Calculating parameters / body weight kg
End of explanation
"""
# 1/sec data are retrieved in two parts which need to be joined
# This resampling step combines the two parts
for recording in recordings:
slow_measurements[recording] = slow_measurements[recording].resample('1S').mean()
# Example
slow_measurements['DG003'].head();
"""
Explanation: Resampling to remove half-empty rows
End of explanation
"""
len(recordings)
rec1 = recordings[:15]; rec2 = recordings[15:30]; rec3 = recordings[30:40]; rec4 = recordings[40:]
"""
Explanation: Save processed slow_measurements DataFrames to pickle archive
End of explanation
"""
# Time stamps are obtained from 'slow measurements'
recording_duration = {}
for recording in recordings:
recording_duration[recording] = slow_measurements[recording].index[-1] - slow_measurements[recording].index[0]
recording_duration_seconds = {}
recording_duration_hours = {}
for recording in recordings:
temp = recording_duration[recording]
recording_duration_seconds[recording] = temp.total_seconds()
recording_duration_hours[recording] = temp.total_seconds() / 3600
"""
Explanation: Import processed 'slow_measurements' data from pickle archive
Calculate recording durations
End of explanation
"""
v = list(range(1, len(recordings)+1))
w = [value for key, value in sorted(recording_duration_hours.items()) if key in recordings]
fig = plt.figure()
fig.set_size_inches(20, 10)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
ax1.bar(v, w, color = 'blue')
plt.xlabel("Recordings", fontsize = 22)
plt.ylabel("Hours", fontsize = 22)
plt.title("Recording periods" , fontsize = 22)
plt.yticks(fontsize = 22)
plt.xticks([i+1.5 for i, _ in enumerate(recordings)], recordings, fontsize = 22, rotation = 'vertical');
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'recording_durations.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
"""
Explanation: Visualising recording durations
End of explanation
"""
recording_times_frame = DataFrame([recording_duration, recording_duration_hours, recording_duration_seconds],
index = ['days', 'hours', 'seconds'])
recording_times_frame
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'recording_periods.xlsx'))
recording_times_frame.to_excel(writer,'rec_periods')
writer.save()
"""
Explanation: Write recording times out into files
End of explanation
"""
vent_settings = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = slow_setting_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
vent_settings[recording] = data_loader(fnames)
# remove less important ventilator settings to simplify the table
vent_settings_selected = {}
for recording in recordings:
vent_settings_selected[recording] = vent_settings_cleaner(vent_settings[recording])
# Create another dictionary of DataFrames with some of the ventilation settings (set VT, set RR, set Pmax)
lsts = [(['VT_weight'], ['VTi', 'VThf']), (['RR_set'], ['RR']), (['Pmax'], ['Pmax', 'Ampl hf max'])]
vent_settings_2 = {}
for recording in recordings:
frmes = []
for name, pars in lsts:
if pars in [['VTi', 'VThf']]:
ind = []
val = []
for index, row in vent_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'] / current_weights[recording])
frmes.append(DataFrame(val, index = ind, columns = name))
else:
ind = []
val = []
for index, row in vent_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'])
frmes.append(DataFrame(val, index = ind, columns = name))
vent_settings_2[recording] = pd.concat(frmes)
vent_settings_2[recording].drop_duplicates(inplace = True)
"""
Explanation: Import ventilator modes and settings
Import ventilation settings
End of explanation
"""
vent_modes = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = slow_text_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
vent_modes[recording] = data_loader(fnames)
# remove less important ventilator mode settings to simplify the table
vent_modes_selected = {}
for recording in recordings:
vent_modes_selected[recording] = vent_mode_cleaner(vent_modes[recording])
"""
Explanation: Import ventilation modes
End of explanation
"""
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_settings.xlsx'))
for recording in recordings:
vent_settings[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_settings_selected.xlsx'))
for recording in recordings:
vent_settings_selected[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_settings_2.xlsx'))
for recording in recordings:
vent_settings_2[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_modes.xlsx'))
for recording in recordings:
vent_modes[recording].to_excel(writer,'%s' % recording)
writer.save()
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'ventilator_modes_selected.xlsx'))
for recording in recordings:
vent_modes_selected[recording].to_excel(writer,'%s' % recording)
writer.save()
"""
Explanation: Save ventilation modes and settings into Excel files
End of explanation
"""
alarm_settings = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = alarm_setting_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
alarm_settings[recording] = data_loader(fnames)
# Remove etCO2 limits which were not used
alarm_settings_selected = {}
for recording in recordings:
alarm_settings_selected[recording] = alarm_setting_cleaner(alarm_settings[recording])
# Create another dictionary of DataFrames with some of the alarm settings
lsts = [(['MV_high_weight'], ['MVe_HL']), (['MV_low_weight'], ['MVe_LL']),
(['PIP_high'], ['PIP_HL']), (['RR_high'], ['RR_HL'])]
alarm_settings_2 = {}
for recording in recordings:
frmes = []
for name, pars in lsts:
if pars in [['MVe_HL'], ['MVe_LL']]:
ind = []
val = []
for index, row in alarm_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'] / current_weights[recording])
frmes.append(DataFrame(val, index = ind, columns = name))
else:
ind = []
val = []
for index, row in alarm_settings_selected[recording].iterrows():
if row['Id'] in pars:
ind.append(index)
val.append(row['Value New'])
frmes.append(DataFrame(val, index = ind, columns = name))
alarm_settings_2[recording] = pd.concat(frmes)
alarm_settings_2[recording].drop_duplicates(inplace = True)
# Write DataFrames containing alarm settings to a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_settings.xlsx'))
for recording in recordings:
alarm_settings[recording].to_excel(writer,'%s' % recording)
writer.save()
# Write DataFrames containing alarm settings to a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_settings_2.xlsx'))
for recording in recordings:
alarm_settings_2[recording].to_excel(writer,'%s' % recording)
writer.save()
"""
Explanation: Import alarm settings
End of explanation
"""
alarm_states = {}
for recording in recordings:
flist = os.listdir('%s/%s' % (DIR_READ, recording))
flist = [file for file in flist if not file.startswith('.')] # There are some hidden
# files on the hard drive starting with '.'; this step is necessary to ignore them
files = alarm_state_finder(flist)
# print('Loading recording %s' % recording)
# print(files)
fnames = ['%s/%s/%s' % (DIR_READ, recording, filename) for filename in files]
alarm_states[recording] = data_loader(fnames)
# Write DataFrames containing alarm states to a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_states.xlsx'))
for recording in recordings:
alarm_states[recording].to_excel(writer,'%s' % recording)
writer.save()
"""
Explanation: Import alarm states
End of explanation
"""
total_recording_time = timedelta(0)
for recording in recordings:
total_recording_time += recording_duration[recording]
total_recording_time
mean_recording_time = total_recording_time / len(recordings)
mean_recording_time
"""
Explanation: Calculate the total and average time of all recordings
End of explanation
"""
# Define function to retrieve alarm events from alarm timing data
def alarm_events_calculator(dframe, al):
'''
DataFrame, str -> DataFrame
dframe: DataFrame containing alarm states
al: alarm category (string)
    Returns a pd.DataFrame with the time stamps at which alarm 'al' went off and the
    duration (in seconds) of each alarm event
'''
alarms = dframe
alarm = alarms[alarms.Name == al]
length = len(alarm)
delta = np.array([(alarm.Date_Time[i] - alarm.Date_Time[i-1]).total_seconds()
for i in range(1, length) if alarm['State New'][i] == 'NotActive' and alarm['State New'][i-1] == 'Active'])
stamp = np.array([alarm.index[i-1]
for i in range(1, length) if alarm['State New'][i] == 'NotActive' and alarm['State New'][i-1] == 'Active'])
data = {'duration_seconds': delta, 'time_went_off': stamp,}
alarm_t = DataFrame(data, columns = ['time_went_off', 'duration_seconds'])
return alarm_t
"""
Explanation: Generate alarm events from alarm states
End of explanation
"""
# Create a list of alarms occurring during each recording
alarm_list = {}
for recording in recordings:
alarm_list[recording] = sorted(set(alarm_states[recording].Name))
alarm_events = {}
for recording in recordings:
alarm_events[recording] = {}
for alarm in alarm_list[recording]:
alarm_events[recording][alarm] = alarm_events_calculator(alarm_states[recording], alarm)
# Write Dataframes containing the alarm events in Excel files,
# one Excel file for each recording
for recording in recordings:
writer = pd.ExcelWriter('%s/%s%s' % (DIR_WRITE, recording, '_alarm_events.xlsx'))
for alarm in alarm_list[recording]:
alarm_events[recording][alarm].to_excel(writer, alarm[:20])
writer.save()
"""
Explanation: Using the files containing the alarm states, for each alarm category in each recording create a DataFrame with the timestamps the alarm went off and the duration of the alarm and store them in a dictionary of dictionaries
End of explanation
"""
def alarm_stats_calculator(dframe, rec, al):
'''
dframe: DataFrame containing alarm events
rec: recording (string)
al: alarm (string)
    Returns detailed statistics about a particular alarm (al) in a particular recording (rec):
    - number of times the alarm went off, and this count normalized to 24-hour periods
    - mean, median, standard deviation, minimum, 25th centile, 75th centile and maximum
      of the alarm durations
    - the total time the alarm was active, in seconds and as a percentage of the total recording time
'''
alarm = dframe[al].duration_seconds
return (alarm.size, round((alarm.size / (recording_duration_hours[rec] / 24)), 1),
round(alarm.mean() , 1), round(alarm.median(), 1), round(alarm.std(), 1), round(alarm.min() , 1),
round(alarm.quantile(0.25), 1), round(alarm.quantile(0.75), 1), round(alarm.max(), 1),
round(alarm.sum(), 1), round(alarm.sum() * 100 / recording_duration_seconds[rec] ,3))
alarm_stats = {}
for recording in recordings:
alarm_stats[recording] = {}
for alarm in alarm_list[recording]:
data = alarm_stats_calculator(alarm_events[recording], recording, alarm)
frame = DataFrame([data], columns = ['number of events', 'number of event per 24h',
'mean duration (s)', 'median duration (s)', 'SD duration (s)',
'miminum duration (s)',
'duration 25th centile (s)', 'duration 75th centile (s)',
'maximum duration (s)', 'cumulative duration (s)',
'percentage of recording length (%)'], index = [alarm])
alarm_stats[recording][alarm] = frame
# Write descriptive statistics in a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats.xlsx'))
for recording in recordings:
stats = []
for alarm in alarm_stats[recording]:
stats.append(alarm_stats[recording][alarm])
stats_all = pd.concat(stats)
stats_all.to_excel(writer, recording)
writer.save()
"""
Explanation: Calculate descriptive statistics for each alarm in each recording and write them to file
End of explanation
"""
# Generates a plot with the cumulative times (in seconds) of the various alarms occurring during recording (rec).
# Displays the plot
def alarm_plot_1(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['cumulative duration (s)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("seconds", fontsize = 24)
plt.title("Recording %s : How long was the alarm active over the %d seconds of recording?" % (rec,
recording_duration_seconds[rec]), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
# Generates a plot with the cumulative times (in seconds) of the various alarms occurring during recording (rec).
# Does not display the plot but writes it into a jpg file.
# NB: the resolution of the image is only 100 dpi - for publication quality higher is needed
def alarm_plot_1_write(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['cumulative duration (s)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("seconds", fontsize = 24)
plt.title("Recording %s : How long was the alarm active over the %d seconds of recording?" % (rec,
recording_duration_seconds[rec]), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
    fig.savefig('%s/%s_%s.jpg' % (DIR_WRITE, 'alarm_durations_1', rec), dpi=100, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
# Generates a plot with the cumulative times (expressed as percentage of the total recording time)
# of the various alarms occurring during recording (rec).
# Displays the plot
def alarm_plot_2(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 24)
plt.title("Recording %s: How long the alarm active over the %s hours of recording?" % (rec,
str(recording_duration[rec])), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
# Generates a plot with the cumulative times (expressed as percentage of the total recording time)
# of the various alarms occurring during recording (rec).
# Does not display the plot but writes it into a jpg file.
# NB: the resolution of the image is only 100 dpi - for publication quality higher is needed
def alarm_plot_2_write(rec):
fig = plt.figure()
fig.set_size_inches(25, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 24)
plt.title("Recording %s: How long the alarm active over the %s hours of recording?" % (rec,
str(recording_duration[rec])), fontsize = 22)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 22)
plt.xticks(fontsize = 20)
    fig.savefig('%s/%s_%s.jpg' % (DIR_WRITE, 'alarm_durations_2', rec), dpi=100, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
# Displays the individual alarm events of the recording (rec) along the time axis
# Displays the plot
def alarm_plot_3(rec):
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(17, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 16, markeredgewidth = 1 )
plt.xlabel("Time", fontsize = 20)
plt.title("Alarm events during recording %s" % rec , fontsize = 24)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 18);
plt.xticks(fontsize = 14, rotation = 30)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5);
# Plots the individual alarm events of recording (rec) along the time axis
# Does not display the plot but writes it into a pdf file.
# NB: the resolution of the image is only 100 dpi - for publication quality higher is needed
def alarm_plot_3_write(rec):
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(17, 8)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 16, markeredgewidth = 1 )
plt.xlabel("Time", fontsize = 20)
plt.title("Alarm events during recording %s" % rec , fontsize = 24)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 18);
plt.xticks(fontsize = 14, rotation = 30)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5)
    fig.savefig('%s/%s_%s.pdf' % (DIR_WRITE, 'individual_alarms', rec), dpi=100, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='pdf',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
"""
Explanation: Visualise alarm statistics for the individual alarms in the individual recording
End of explanation
"""
alarm_plot_1('DG032_2')
alarm_plot_2('DG032_2')
alarm_plot_3('DG032_2')
"""
Explanation: Example plots
End of explanation
"""
total_alarm_number_recordings = {} # dictionary containing the total number of alarm events in each recording
for recording in recordings:
total = 0
for alarm in alarm_list[recording]:
total += len(alarm_events[recording][alarm].index)
total_alarm_number_recordings[recording] = total
total_alarm_number_recordings_24H = {} # dictionary containing the total number of alarm events in each recording
# corrected for 24 hour period
for recording in recordings:
total_alarm_number_recordings_24H[recording] = (total_alarm_number_recordings[recording] /
(recording_duration[recording].total_seconds() / 86400))
"""
Explanation: Write all graphs to files
Generate cumulative descriptive statistics of all alarms combined in each recording
For each recording, what was the total number of alarm events and the number of events normalized for 24 hour periods
End of explanation
"""
alarm_durations_recordings = {} # a dictionary of Series. Each series contains all the alarm durations of a recording
for recording in recordings:
durations = []
for alarm in alarm_list[recording]:
durations.append(alarm_events[recording][alarm]['duration_seconds'])
durations = pd.concat(durations)
alarm_durations_recordings[recording] = durations
# Dictionaries containing various descriptive statistics for each recording
mean_alarm_duration_recordings = {}
median_alarm_duration_recordings = {}
sd_alarm_duration_recordings = {}
mad_alarm_duration_recordings = {}
min_alarm_duration_recordings = {}
pc25_alarm_duration_recordings = {}
pc75_alarm_duration_recordings = {}
max_alarm_duration_recordings = {}
for recording in recordings:
mean_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].mean(), 4)
median_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].median(), 4)
sd_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].std(), 4)
mad_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].mad(), 4)
min_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].min(), 4)
pc25_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].quantile(0.25), 4)
pc75_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].quantile(0.75), 4)
max_alarm_duration_recordings[recording] = round(alarm_durations_recordings[recording].max(), 4)
# Create DataFrame containing cumulative alarm statistics for each recording
alarm_stats_cum_rec = DataFrame([total_alarm_number_recordings,
total_alarm_number_recordings_24H,
mean_alarm_duration_recordings,
median_alarm_duration_recordings,
sd_alarm_duration_recordings,
mad_alarm_duration_recordings,
min_alarm_duration_recordings,
pc25_alarm_duration_recordings,
pc75_alarm_duration_recordings,
max_alarm_duration_recordings],
index = ['count', 'count per 24h', 'mean duration (sec)', 'median duration (sec)', 'sd duration (sec)',
'mad duration (sec)', 'min duration (sec)', '25th cent duration (sec)', '75th cent duration (sec)',
'max duration (sec)'])
alarm_stats_cum_rec.round(2)
# Write statistics to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats_cum_rec.xlsx'))
alarm_stats_cum_rec.round(2).to_excel(writer, 'cumulative_stats')
writer.save()
"""
Explanation: In each recording, what was the mean, median, sd, mad, min, 25pc, 75pc, max of alarm durations
End of explanation
"""
# Plot the absolute number of alarm events for each recording
fig = plt.figure()
fig.set_size_inches(12, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(recordings)+1)), alarm_stats_cum_rec.loc['count', :], color = 'blue')
plt.ylabel("Recordings", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(recordings)], recordings, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_rec.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of alarm events in each recording normalised for 24 hour periods
fig = plt.figure()
fig.set_size_inches(12, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(recordings)+1)), alarm_stats_cum_rec.loc['count per 24h', :], color = 'blue')
plt.ylabel("Recordings", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events per 24 hours" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(recordings)], recordings, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_24H_rec.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Median duration of alarm events
fig = plt.figure()
fig.set_size_inches(12, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(recordings)+1)), alarm_stats_cum_rec.loc['median duration (sec)', :], color = 'blue')
plt.ylabel("Recordings", fontsize = 22)
plt.xlabel("seconds", fontsize = 22)
plt.title("Median duration of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(recordings)], recordings, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'median_duration_rec.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
"""
Explanation: Visualize cumulative statistics of recordings
End of explanation
"""
# Create a list of all alarms occurring in any recording
total_alarm_list = set()
for recording in recordings:
total_alarm_list.update(alarm_list[recording])
total_alarm_list = sorted(total_alarm_list)
# A list of all alarms occurring during the service evaluation
total_alarm_list
# Write alarm list to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'total_alarm_list.xlsx'))
DataFrame(total_alarm_list, columns = ['alarm categories']).to_excel(writer, 'total_alarm_list')
writer.save()
"""
Explanation: Generate cumulative statistics of each alarm in all recordings combined
End of explanation
"""
total_alarm_number_alarms = {} # dictionary containing the number of alarm events in all recordings for the
# various alarm categories
for alarm in total_alarm_list:
total = 0
for recording in recordings:
if alarm in alarm_list[recording]:
total += len(alarm_events[recording][alarm].index)
total_alarm_number_alarms[alarm] = total
# Write alarm list to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'total_alarm_list_numbers.xlsx'))
DataFrame([total_alarm_number_alarms]).T.to_excel(writer, 'total_alarm_list')
writer.save()
total_alarm_number_alarms_24H = {} # dictionary containing the number of alarm events in all recordings for the
# various alarm categories normalized for 24 hour recording periods
for alarm in total_alarm_list:
total_alarm_number_alarms_24H[alarm] = round(((total_alarm_number_alarms[alarm] /
(total_recording_time.total_seconds() / 86400))), 4)
"""
Explanation: For each alarm, what was the number of alarm events across all recordings, and the number of events normalized per 24 hours of recording time
End of explanation
"""
alarm_durations_alarms = {} # a dictionary of Series. Each Series contains all durations of a particular alarm
# in all recordings
for alarm in total_alarm_list:
durations = []
for recording in recordings:
if alarm in alarm_list[recording]:
durations.append(alarm_events[recording][alarm]['duration_seconds'])
durations = pd.concat(durations)
alarm_durations_alarms[alarm] = durations
cum_alarm_duration_alarms = {} # dictionary containing the total duration of alarms in all recordings for the
# various alarm categories
for alarm in total_alarm_list:
cum_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].sum(), 4)
cum_alarm_duration_alarms_24H = {} # dictionary containing the total duration of alarms in all recordings for the
# various alarm categories normalized for 24 hour recording periods
for alarm in total_alarm_list:
cum_alarm_duration_alarms_24H[alarm] = round(((cum_alarm_duration_alarms[alarm] /
(total_recording_time.total_seconds() / 86400))), 4)
"""
Explanation: For each alarm, what was the total duration of alarm events across all recordings, and the duration normalized per 24 hours of recording time
End of explanation
"""
# Dictionaries containing various descriptive statistics for each alarm
mean_alarm_duration_alarms = {}
median_alarm_duration_alarms = {}
sd_alarm_duration_alarms = {}
mad_alarm_duration_alarms = {}
min_alarm_duration_alarms = {}
pc25_alarm_duration_alarms = {}
pc75_alarm_duration_alarms = {}
max_alarm_duration_alarms = {}
for alarm in total_alarm_list:
mean_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].mean(), 4)
median_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].median(), 4)
sd_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].std(), 4)
mad_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].mad(), 4)
min_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].min(), 4)
pc25_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].quantile(0.25), 4)
pc75_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].quantile(0.75), 4)
max_alarm_duration_alarms[alarm] = round(alarm_durations_alarms[alarm].max(), 4)
# Create DataFrame containing cumulative alarm statistics for each alarm
alarm_stats_cum_al = DataFrame([total_alarm_number_alarms,
total_alarm_number_alarms_24H,
cum_alarm_duration_alarms,
cum_alarm_duration_alarms_24H,
mean_alarm_duration_alarms,
median_alarm_duration_alarms,
sd_alarm_duration_alarms,
mad_alarm_duration_alarms,
min_alarm_duration_alarms,
pc25_alarm_duration_alarms,
pc75_alarm_duration_alarms,
max_alarm_duration_alarms],
index = ['count', 'count per 24h', 'total alarm duration (sec)', 'total alarm duration per 24 hours (sec)',
'mean duration (sec)', 'median duration (sec)', 'sd duration (sec)', 'mad duration (sec)',
'min duration (sec)', '25th cent duration (sec)', '75th cent duration (sec)',
'max duration (sec)'])
# Dataframe containing cumulative alarm statistics for each alarm
alarm_stats_cum_al.round(2)
# Write Dataframe containing cumulative alarm statistics for each alarm to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats_cum_al.xlsx'))
alarm_stats_cum_al.round(2).to_excel(writer, 'cumulative_stats')
writer.save()
"""
Explanation: For each alarm what was the mean, median, sd, mad, min, 25pc, 75pc, max of alarm durations
End of explanation
"""
# Reduce a too long alarm name
total_alarm_list[0] = 'A setting, alarm limit or vent...'
# Total number of alarm events in all recordings
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(total_alarm_list)+1)), alarm_stats_cum_al.loc['count', :], color = 'blue')
plt.ylabel("Alarms", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(total_alarm_list)], total_alarm_list, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_al.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Total number of alarm events in all recordings normalized for 24 hour periods
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(total_alarm_list)+1)), alarm_stats_cum_al.loc['count per 24h', :], color = 'blue')
plt.ylabel("Alarms", fontsize = 22)
plt.xlabel("", fontsize = 22)
plt.title("Number of alarm events per 24 hour" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(total_alarm_list)], total_alarm_list, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'number_events_24H_al.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Median duration of alarm events in all recordings
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(total_alarm_list)+1)), alarm_stats_cum_al.loc['median duration (sec)', :], color = 'blue')
plt.ylabel("Alarms", fontsize = 22)
plt.xlabel("seconds", fontsize = 22)
plt.title("Median duration of alarm events" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(total_alarm_list)], total_alarm_list, rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'median_events_al.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
"""
Explanation: Visualising cumulative statistics of alarms
End of explanation
"""
all_durations = [] # Durations of all alarm events in all the recordings (list of Series, concatenated below)
for recording in recordings:
for alarm in alarm_list[recording]:
all_durations.append(alarm_events[recording][alarm]['duration_seconds'])
all_durations = pd.concat(all_durations)
# The total number of alarm events in all the recordings
total_count = len(all_durations)
total_count
# The total number of alarm events in all the recordings per 24 hours
total_count_24H = total_count / (total_recording_time.total_seconds() / 86400)
total_count_24H
# Calculate descriptive statistics (expressed in seconds)
mean_duration_total = round(all_durations.mean(), 4)
median_duration_total = round(all_durations.median(), 4)
sd_duration_total = round(all_durations.std(), 4)
mad_duration_total = round(all_durations.mad(), 4)
min_duration_total = round(all_durations.min(), 4)
pc25_duration_total = round(all_durations.quantile(0.25), 4)
pc75_duration_total = round(all_durations.quantile(0.75), 4)
max_duration_total = round(all_durations.max(), 4)
alarm_stats_cum_total = DataFrame([ total_count, total_count_24H,
mean_duration_total, median_duration_total,
sd_duration_total, mad_duration_total, min_duration_total,
pc25_duration_total, pc75_duration_total, max_duration_total],
columns = ['all alarms in all recordings'],
index = ['total alarm events', 'total alarm events per 24 hours',
'mean alarm duration (sec)', 'median alarm duration (sec)',
'sd alarm duration (sec)', 'mad alarm duration (sec)',
'min alarm duration (sec)', '25 centile alarm duration (sec)',
'75 centile alarm duration (sec)', 'max alarm duration (sec)'])
# Cumulative statistics of the whole dataset
alarm_stats_cum_total.round(2)
# Write cumulative statistics to Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'alarm_stats_cum_total.xlsx'))
alarm_stats_cum_total.to_excel(writer, 'cumulative_stats')
writer.save()
"""
Explanation: Calculate cumulative descriptive statistics of all alarms in all recording together
End of explanation
"""
# Histogram showing the number of alarms which were shorter than 1 minute
fig = plt.figure()
fig.set_size_inches(12, 6)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 60))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 20)
plt.ylabel('Number of events', fontsize = 20)
plt.xticks(range(0,60,2), fontsize = 10)
plt.yticks(fontsize = 10)
plt.title('Histogram of alarm durations', fontsize = 20)
fig.savefig('%s/%s' % (DIR_WRITE, 'alarm_duration_hist_1.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Histogram showing the number of alarms which were shorter than 10 minutes
fig = plt.figure()
fig.set_size_inches(12, 6)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 600))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 20)
plt.ylabel('Number of events', fontsize = 20)
plt.xticks(range(0, 600, 60), fontsize = 10)
plt.yticks(fontsize = 10)
plt.title('Histogram of alarm durations', fontsize = 20)
fig.savefig('%s/%s' % (DIR_WRITE, 'alarm_duration_hist_2.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Histogram showing all data with a bin size of 1 minute and log-scaled X and Y axes
fig = plt.figure()
fig.set_size_inches(12, 6)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 50000, 60))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 20)
plt.ylabel('Number of events', fontsize = 20)
plt.xticks(range(0, 50000, 600), fontsize = 10)
plt.yticks(fontsize = 10)
plt.xscale('log')
plt.yscale('log')
plt.title('Histogram of alarm durations', fontsize = 20)
fig.savefig('%s/%s' % (DIR_WRITE, 'alarm_duration_hist_3.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
"""
Explanation: Visualise the duration of all alarm events as histogram
End of explanation
"""
under_10_sec = sorted([al for al in all_durations if al < 10])
len(under_10_sec)
under_1_min = sorted([al for al in all_durations if al <= 60])
len(under_1_min)
under_10_sec_MV_low = sorted([al for al in alarm_durations_alarms['Minute volume < low limit'] if al < 10])
under_10_sec_MV_high = sorted([al for al in alarm_durations_alarms['Minute volume > high limit'] if al < 10])
under_10_sec_RR_high = sorted([al for al in alarm_durations_alarms['Respiratory rate > high limit'] if al < 10])
len(under_10_sec_MV_low), len(under_10_sec_MV_high), len(under_10_sec_RR_high)
# Short alarms (<10 sec) in the categories where the user sets the limits
len(under_10_sec_MV_low) + len(under_10_sec_MV_high) + len(under_10_sec_RR_high)
"""
Explanation: How many short alarms occurred?
End of explanation
"""
# How many alarm events are longer than 1 hour?
over_1_hour = sorted([al for al in all_durations if al > 3600])
len(over_1_hour)
# Which alarms were longer than one hour?
alarms_over_1_hour = []
for recording in recordings:
for alarm in alarm_list[recording]:
for event in alarm_events[recording][alarm]['duration_seconds']:
if event > 3600:
alarms_over_1_hour.append((recording, alarm, event))
alarms_over_1_hour = DataFrame(sorted(alarms_over_1_hour, key = lambda x: x[2], reverse = True),
columns = ['recording', 'alarm', 'duration (seconds)'])
alarms_over_1_hour
alarms_over_1_hour.groupby('alarm').count()
"""
Explanation: Identify the longest alarms
End of explanation
"""
over_10_minutes = sorted([al for al in all_durations if al > 600 and al <= 3600])
len(over_10_minutes)
alarms_over_10_min = []
# Which alarms were longer than 10 minutes but shorter than 1 hour?
for recording in recordings:
for alarm in alarm_list[recording]:
for event in alarm_events[recording][alarm]['duration_seconds']:
if event > 600 and event <= 3600:
alarms_over_10_min.append((recording, alarm, event))
alarms_over_10_min = DataFrame(sorted(alarms_over_10_min, key = lambda x: x[2], reverse = True),
columns = ['recording', 'alarm', 'duration (seconds)'])
alarms_over_10_min.groupby('alarm').count()
"""
Explanation: How many alarm events are longer than 10 minutes but shorter than 1 hour?
End of explanation
"""
over_1_minutes = sorted([al for al in all_durations if al > 60 and al <= 600])
len(over_1_minutes)
alarms_over_1_min = []
# which alarms were longer than 1 minutes but shorter than 10 minutes
for recording in recordings:
for alarm in alarm_list[recording]:
for event in alarm_events[recording][alarm]['duration_seconds']:
if event > 60 and event <= 600:
alarms_over_1_min.append((recording, alarm, event))
alarms_over_1_min = DataFrame(sorted(alarms_over_1_min, key = lambda x: x[2], reverse = True),
columns = ['recording', 'alarm', 'duration (seconds)'])
alarms_over_1_min.groupby('alarm').count()
# Write long alarms into a multisheet Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'long_alarms.xlsx'))
alarms_over_1_hour.to_excel(writer, 'over_1hour')
alarms_over_10_min.to_excel(writer, '10min_to_1hour')
alarms_over_1_min.to_excel(writer, '1min_to_10min')
writer.save()
"""
Explanation: How many alarm events are longer than 1 minute but shorter than 10 minutes?
End of explanation
"""
# Identify the most frequent alarm events
frequent_alarms = alarm_stats_cum_al.loc['count'].sort_values(inplace = False, ascending = False)
# The eight most frequent alarms
frequent_alarms[:8]
# How many percent of all alarms were these 8 frequent alarms?
round(frequent_alarms[:8].sum() / frequent_alarms.sum(), 3) * 100
# Write frequent alarm in an Excel file
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'frequent_alarms.xlsx'))
DataFrame(frequent_alarms[:8]).to_excel(writer, 'frequent_alarms')
writer.save()
# Number of alarms where the user sets the limits
user_set_alarms = (frequent_alarms['Minute volume < low limit'] + frequent_alarms['Minute volume > high limit'] +
frequent_alarms['Respiratory rate > high limit'])
int(user_set_alarms)
# What proportion of all alarms were these 3 user-set alarms?
print('%.3f' % (user_set_alarms / frequent_alarms.sum()))
# Frequent alarms related to VT not achieved
other_frequent_alarms = (frequent_alarms['Tidal volume < low Limit'] + frequent_alarms['Volume not constant'] +
frequent_alarms['Tube obstructed'])
int(other_frequent_alarms)
# What proportion of all alarms were alarms related to VT not achieved?
print('%.3f' % (other_frequent_alarms / frequent_alarms.sum()))
"""
Explanation: Identify the most frequent alarms
End of explanation
"""
MV_low_count = {}
for recording in recordings:
try:
MV_low_count[recording] = alarm_stats[recording]['Minute volume < low limit']['number of events'].iloc[0]
except KeyError:
# print('No "MV_low" alarm in recording %s' % recording)
pass
MV_low_count_24H = {}
for recording in recordings:
try:
MV_low_count_24H[recording] = \
alarm_stats[recording]['Minute volume < low limit']['number of event per 24h'].iloc[0]
except KeyError:
# print('No "MV_low" alarm in recording %s' % recording)
pass
MV_high_count = {}
for recording in recordings:
try:
MV_high_count[recording] = alarm_stats[recording]['Minute volume > high limit']['number of events'].iloc[0]
except KeyError:
# print('No "MV_high" alarm in recording %s' % recording)
pass
MV_high_count_24H = {}
for recording in recordings:
try:
MV_high_count_24H[recording] = alarm_stats[recording]['Minute volume > high limit']['number of event per 24h'].iloc[0]
except KeyError:
# print('No "MV_high" alarm in recording %s' % recording)
pass
RR_high_count = {}
for recording in recordings:
try:
RR_high_count[recording] = alarm_stats[recording]['Respiratory rate > high limit']['number of events'].iloc[0]
except KeyError:
# print('No "RR_high" alarm in recording %s' % recording)
pass
RR_high_count_24H = {}
for recording in recordings:
try:
RR_high_count_24H[recording] = alarm_stats[recording]['Respiratory rate > high limit']['number of event per 24h'].iloc[0]
except KeyError:
# print('No "RR_high" alarm in recording %s' % recording)
pass
# Plot the number of MV < low limit alarm events and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_low_count)+1)), MV_low_count.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events", fontsize = 16)
plt.title("MV < low limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_low_count.keys())], MV_low_count.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_low.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of MV < low limit alarm events normalized for 24 hours and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_low_count_24H)+1)), MV_low_count_24H.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events per 24 hours", fontsize = 16)
plt.title("MV < low limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_low_count_24H.keys())], MV_low_count_24H.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_low_24H.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of MV > high limit alarm events and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_high_count)+1)), MV_high_count.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events", fontsize = 16)
plt.title("MV > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_high_count.keys())], MV_high_count.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_high.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of MV > high limit alarm events normalized for 24 hours and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(MV_high_count_24H)+1)), MV_high_count_24H.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events per 24 hours", fontsize = 16)
plt.title("MV > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(MV_high_count_24H.keys())], MV_high_count_24H.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'MV_high_24H.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of RR > high limit alarm events and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(RR_high_count)+1)), RR_high_count.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events", fontsize = 16)
plt.title("RR > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(RR_high_count.keys())], RR_high_count.keys(),
rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'RR_high.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
# Plot the number of RR > high limit alarm events normalized for 24 hours and write graph to file
fig = plt.figure()
fig.set_size_inches(17, 12)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
ax1.barh(list(range(1, len(RR_high_count_24H)+1)), RR_high_count_24H.values(), color = 'blue')
plt.ylabel("Recordings", fontsize = 16)
plt.xlabel("number of alarm events per 24 hours", fontsize = 16)
plt.title("RR > high limit" , fontsize = 26)
ax1.tick_params(which = 'both', labelsize=14)
plt.yticks([i+1.5 for i, _ in enumerate(RR_high_count_24H.keys())],
RR_high_count_24H.keys(), rotation = 'horizontal')
plt.grid()
fig.savefig('%s/%s' % (DIR_WRITE, 'RR_high_24H.jpg'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
"""
Explanation: Visualise MV and RR limit alarms
Generate dictionaries with the alarm counts (absolute and per 24-hour recording period) for the MV low, MV high and RR high alarms, for the recordings in which they occur
End of explanation
"""
for recording in recordings:
slow_measurements[recording] = pd.concat([slow_measurements[recording],
vent_settings_2[recording], alarm_settings_2[recording]], axis = 0, join = 'outer')
slow_measurements[recording].sort_index(inplace = True)
for recording in recordings:
slow_measurements[recording] = slow_measurements[recording].fillna(method = 'pad')
def minute_volume_plotter(rec, ylim = False):
'''
Plots the total minute volume (using the data obtained with a 1/sec sampling rate)
together with the "MV low" and "MV high" alarm limits
Displays the plot
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['MV_low_weight'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['MV_high_weight'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Minute volume - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('L/kg/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
minute_volume_plotter('DG003')
def minute_volume_plotter_2(rec, ylim = False, version = ''):
'''
Plots the total minute volume (using the data obtained with a 1/sec sampling rate)
together with the "MV low" and "MV high" alarm limits
Writes the plot to file (does not display the plot)
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['alarm_MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['alarm_MV_low_weight'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['alarm_MV_high_weight'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Minute volume - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('L/kg/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
fig.savefig('%s/%s_%s%s.jpg' % (dir_write, 'minute_volume', rec, version), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
def resp_rate_plotter(rec, ylim = False):
'''
Plots the respiratory rate (using the data obtained with a 1/sec sampling rate)
together with the set backup rate and "RR high" alarm limits
Displays the plot
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 10
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Respiratory rate - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('1/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set']);
resp_rate_plotter('DG003')
def resp_rate_plotter_2(rec, ylim = False, version = ''):
'''
Plots the respiratory rate (using the data obtained with a 1/sec sampling rate)
together with the set backup rate and "RR high" alarm limits
Writes the plots to files (does not display the plot)
'''
if ylim:
ymax = ylim
else:
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 10
fig = plt.figure()
fig.set_size_inches(12, 8)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['alarm_RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
ax1.set_title('Respiratory rate - %s' % rec, size = 22, color = 'black')
ax1.set_xlabel('Time', size = 22, color = 'black')
ax1.set_ylabel('1/min', size = 22, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'])
fig.savefig('%s/%s_%s%s.jpg' % (dir_write, 'resp_rate', rec, version), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='jpg',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
plt.close(fig)
"""
Explanation: Investigate the relationship between the MV and RR parameter readings, the ventilation settings and the alarm settings
End of explanation
"""
clinical_details_for_paper = clinical_details[['Gestation', 'Birth weight', 'Current weight', 'Main diagnoses']]
clinical_details_for_paper = clinical_details_for_paper.loc[recordings]
# clinical_details_for_paper
vent_modes_all = {}
for recording in recordings:
vent_modes_all[recording] = vent_modes_selected[recording].Text.unique()
vent_modes_all[recording] = [mode[5:] for mode in vent_modes_all[recording] if mode.startswith(' Mode')]
vent_modes_all = DataFrame([vent_modes_all]).T
vent_modes_all.columns = ['Ventilation modes']
vent_modes_all = vent_modes_all.loc[recordings]
# vent_modes_all
recording_duration_hours_all = DataFrame([recording_duration_hours]).T
recording_duration_hours_all.columns = ['Recording duration (hours)']
Table_1 = recording_duration_hours_all.join([clinical_details_for_paper, vent_modes_all])
Table_1
writer = pd.ExcelWriter('%s/%s' % (DIR_WRITE, 'Table_1.xlsx'))
Table_1.to_excel(writer)
writer.save()
"""
Explanation: Create the tables and figures of the paper
Table 1
End of explanation
"""
rec = 'DG032_2'
filetype = 'jpg'
dpi = 300
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(10, 4)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 14, markeredgewidth = 0.5 )
plt.xlabel("Time", fontsize = 14)
plt.title(rec)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 14);
plt.xticks(fontsize = 8)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5)
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_1a'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG032_2'
filetype = 'jpg'
dpi = 300
fig = plt.figure()
fig.set_size_inches(8, 4)
fig.subplots_adjust(left=0.5, bottom=None, right=None, top=None, wspace=None, hspace= None)
ax1 = fig.add_subplot(1, 1, 1)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 14)
plt.title(rec)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 14)
plt.xticks(fontsize = 14);
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_1b'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG032_2'
filetype = 'tiff'
dpi = 300
alarm_state = alarm_states[rec]
numbered = Series(np.zeros(len(alarm_state)), index = alarm_state.index)
for i in range(1, len(alarm_state)):
if alarm_state.iloc[i]['State New'] == 'Active':
numbered[i] = alarm_list[rec].index(alarm_state.iloc[i]['Id']) + 1
fig = plt.figure()
fig.set_size_inches(9, 7)
fig.subplots_adjust(left=0.4, bottom=None, right=None, top=None, wspace=None, hspace=0.3)
ax1 = fig.add_subplot(2, 1, 1);
ax1.plot(alarm_state.index, numbered, '|', color = 'red', markersize = 10, markeredgewidth = 0.5 )
plt.xlabel("Time", fontsize = 12)
plt.title(rec)
plt.yticks([i+1 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 12);
plt.xticks(fontsize = 8)
plt.ylim(0.5, len(alarm_list[rec]) + 0.5)
ax1 = fig.add_subplot(2, 1, 2)
xs = [i + 0.1 for i, _ in enumerate(alarm_list[rec])]
stats = []
for alarm in alarm_list[rec]:
stats.append(alarm_stats[rec][alarm]['percentage of recording length (%)'])
stats_all = pd.concat(stats)
plt.barh(xs, stats_all, color = 'red')
plt.xlabel("% of total recording time", fontsize = 12)
plt.title(rec)
plt.yticks([i + 0.5 for i, _ in enumerate(alarm_list[rec])], alarm_list[rec], fontsize = 12)
plt.xticks(fontsize = 8);
fig.savefig('%s/%s.tiff' % (DIR_WRITE, 'Figure_1'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
"""
Explanation: Figure 1
End of explanation
"""
rec = 'DG003'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['MV_low_weight'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['MV_high_weight'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('L/min/kg', size = 14, color = 'black')
ax1.tick_params(which = 'both', labelsize=12)
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2a_color'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG003'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['MV_high_weight'].max() + 0.3
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['MV_kg'].plot(ax = ax1, color = 'black', alpha = 0.6, ylim = [0, ymax] );
slow_measurements[rec]['MV_low_weight'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['MV_high_weight'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('L/min/kg', size = 14, color = 'black')
ax1.tick_params(which = 'both', labelsize=12)
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['MV_kg', 'alarm_low', 'alarm_high']);
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2a_bw'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG041'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 15
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax] );
slow_measurements[rec]['RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('1/min', size = 14, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'])
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2b_color'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec = 'DG041'
filetype = 'jpg'
dpi = 300
ymax = slow_measurements[rec]['5001|RR [1/min]'].max() + 15
fig = plt.figure()
fig.set_size_inches(8, 6)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.7)
ax1 = fig.add_subplot(1, 1, 1);
slow_measurements[rec]['5001|RR [1/min]'].plot(ax = ax1, color = 'black', alpha = 0.6, ylim = [0, ymax] );
slow_measurements[rec]['RR_high'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
slow_measurements[rec]['RR_set'].plot(ax = ax1, color = 'black', linewidth = 3, ylim = [0, ymax] );
ax1.set_title(rec, size = 14, color = 'black')
ax1.set_xlabel('Time', size = 14, color = 'black')
ax1.set_ylabel('1/min', size = 14, color = 'black')
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'])
fig.savefig('%s/%s.jpg' % (DIR_WRITE, 'Figure_2b_bw'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
rec0 = 'DG003'
rec1 = 'DG041'
filetype = 'tiff'
dpi = 300
ymax0 = slow_measurements[rec0]['MV_high_weight'].max() + 0.3
ymax1 = slow_measurements[rec1]['5001|RR [1/min]'].max() + 15
fig = plt.figure()
fig.set_size_inches(6, 9)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=0.3)
ax0 = fig.add_subplot(2, 1, 1);
slow_measurements[rec0]['MV_kg'].plot(ax = ax0, color = 'blue', ylim = [0, ymax0] );
slow_measurements[rec0]['MV_low_weight'].plot(ax = ax0, color = 'green', linewidth = 3, ylim = [0, ymax0] );
slow_measurements[rec0]['MV_high_weight'].plot(ax = ax0, color = 'red', linewidth = 3, ylim = [0, ymax0] );
ax0.set_title(rec0, size = 12, color = 'black')
ax0.set_xlabel('', size = 12, color = 'black')
ax0.set_ylabel('L/min/kg', size = 12, color = 'black')
ax0.tick_params(which = 'both', labelsize=10)
ax0.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax0.legend(['MV_kg', 'alarm_low', 'alarm_high']);
ax1 = fig.add_subplot(2, 1, 2);
slow_measurements[rec1]['5001|RR [1/min]'].plot(ax = ax1, color = 'blue', ylim = [0, ymax1] );
slow_measurements[rec1]['RR_high'].plot(ax = ax1, color = 'red', linewidth = 3, ylim = [0, ymax1] );
slow_measurements[rec1]['RR_set'].plot(ax = ax1, color = 'green', linewidth = 3, ylim = [0, ymax1] );
ax1.set_title(rec1, size = 12, color = 'black')
ax1.set_xlabel('Time', size = 12, color = 'black')
ax1.set_ylabel('1/min', size = 12, color = 'black')
ax1.tick_params(which = 'both', labelsize=10)
ax1.grid('on', linestyle='-', linewidth=0.5, color = 'gray')
ax1.legend(['RR', 'alarm_high', 'RR_set'], loc = 4)
fig.savefig('%s/%s.tiff' % (DIR_WRITE, 'Figure_2'), dpi=dpi, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format= filetype,
transparent=False, bbox_inches=None, pad_inches=0.1, frameon=True)
"""
Explanation: Figure 2
End of explanation
"""
# Histogram showing the number of alarms which were shorter than 1 minute
fig = plt.figure()
fig.set_size_inches(7, 5)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=None, hspace=None)
ax1 = fig.add_subplot(1, 1, 1)
n, bins, patches = plt.hist(all_durations, bins = range(0, 60))
plt.grid(True)
plt.xlabel('Alarm duration (seconds)', fontsize = 12)
plt.ylabel('Number of alarm events', fontsize = 12)
plt.xticks(range(0,60,4), fontsize = 12)
plt.yticks(fontsize = 12)
plt.title('Histogram of alarm durations', fontsize = 12)
fig.savefig('%s/%s' % (DIR_WRITE, 'Figure_3.tiff'), dpi=300, facecolor='w', edgecolor='w',
orientation='portrait', papertype=None, format='tiff',
transparent=False, bbox_inches=None, pad_inches=0.1,
frameon=True)
"""
Explanation: Figure 3
End of explanation
"""
|
European-XFEL/h5tools-py | docs/apply_geometry.ipynb | bsd-3-clause |
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import h5py
from karabo_data import RunDirectory, stack_detector_data
from karabo_data.geometry2 import LPD_1MGeometry
run = RunDirectory('/gpfs/exfel/exp/FXE/201830/p900020/proc/r0221/')
run.info()
# Find a train with some data in
empty = np.asarray([])
for tid, train_data in run.trains():
module_imgs = sum(d.get('image.data', empty).shape[0] for d in train_data.values())
if module_imgs:
print(tid, module_imgs)
break
tid, train_data = run.train_from_id(54861797)
print(tid)
for dev in sorted(train_data.keys()):
print(dev, end='\t')
try:
print(train_data[dev]['image.data'].shape)
except KeyError:
print("No image.data")
"""
Explanation: Assembling detector data into images
The X-ray detectors at XFEL are made up of a number of small pieces. To get an image from the data, or analyse it spatially, we need to know where each piece is located.
This example reassembles some commissioning data from LPD, a detector which has 4 quadrants, 16 modules, and 256 tiles.
Elements (especially the quadrants) can be repositioned; talk to the detector group to ensure that you have the right
geometry information for your data.
End of explanation
"""
modules_data = stack_detector_data(train_data, 'image.data')
modules_data.shape
"""
Explanation: Extract the detector images into a single Numpy array:
End of explanation
"""
def clip(array, min=-10000, max=10000):
x = array.copy()
finite = np.isfinite(x)
# Suppress warnings comparing numbers to nan
with np.errstate(invalid='ignore'):
x[finite & (x < min)] = np.nan
x[finite & (x > max)] = np.nan
return x
plt.figure(figsize=(10, 5))
a = modules_data[5][2]
plt.subplot(1, 2, 1).hist(a[np.isfinite(a)])
a = clip(a, min=-400, max=400)
plt.subplot(1, 2, 2).hist(a[np.isfinite(a)]);
"""
Explanation: To show the images, we sometimes need to 'clip' extreme high and low values, otherwise the colour map makes everything else the same colour.
End of explanation
"""
plt.figure(figsize=(8, 8))
clipped_mod = clip(modules_data[10][2], -400, 500)
plt.imshow(clipped_mod, origin='lower')
"""
Explanation: Let's look at the image from a single module. You can see where it's divided up into tiles:
End of explanation
"""
splitted = LPD_1MGeometry.split_tiles(clipped_mod)
plt.figure(figsize=(8, 8))
plt.imshow(splitted[11])
"""
Explanation: Here's a single tile:
End of explanation
"""
# From March 18; converted to XFEL standard coordinate directions
quadpos = [(11.4, 299), (-11.5, 8), (254.5, -16), (278.5, 275)] # mm
geom = LPD_1MGeometry.from_h5_file_and_quad_positions('lpd_mar_18_axesfixed.h5', quadpos)
"""
Explanation: Load the geometry from a file, along with the quadrant positions used here.
In the future, geometry information will be stored in the calibration catalogue.
End of explanation
"""
geom.plot_data_fast(clip(modules_data[12], max=5000))
"""
Explanation: Reassemble and show a detector image using the geometry:
End of explanation
"""
res, centre = geom.position_modules_fast(modules_data)
print(res.shape)
plt.figure(figsize=(8, 8))
plt.imshow(clip(res[12, 250:750, 450:850], min=-400, max=5000), origin='lower')
"""
Explanation: Reassemble detector data into a numpy array for further analysis. The areas without data have the special value nan to mark them as missing.
End of explanation
"""
|
timkpaine/lantern | experimental/ipysheet.ipynb | apache-2.0 |
import ipysheet
sheet = ipysheet.sheet()
sheet
"""
Explanation: Spreadsheet widget for the Jupyter Notebook
Installation
To install use pip:
$ pip install ipysheet
To make it work for Jupyter lab:
$ jupyter labextension install ipysheet
If you have notebook 5.2 or below, you also need to execute:
$ jupyter nbextension enable --py --sys-prefix ipysheet
$ jupyter nbextension enable --py --sys-prefix ipysheet.renderer_nbext
Getting started
Although ipysheet contains an object oriented interface, we recommend using the "state machine" based interface, similar to matplotlib's pyplot/pylab interface. Comparable to the matplotlib pylab interface, this interface keeps track of the current sheet. Using the cell function, Cell widgets are added to the current sheet.
Importing ipysheet and invoking the sheet function will create the default spreadsheet widget. The function returns a Sheet instance; leaving that expression as the last statement of a code cell will display it, otherwise use display(sheet).
Note that this documentation is a Jupyter notebook, and you can try it out directly on Binder:
End of explanation
"""
sheet = ipysheet.sheet(rows=3, columns=4)
cell1 = ipysheet.cell(0, 0, 'Hello')
cell2 = ipysheet.cell(2, 0, 'World')
cell_value = ipysheet.cell(2,2, 42.)
sheet
"""
Explanation: Using the cell function, we can create Cell widgets that are directly added to the current sheet.
End of explanation
"""
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
# changes in a or b should trigger this function
def calculate(change):
cell_sum.value = cell_a.value + cell_b.value
cell_a.observe(calculate, 'value')
cell_b.observe(calculate, 'value')
widgets.VBox([sheet, slider])
"""
Explanation: Events
Using link or observe we can link widgets together, or attach event handlers
<div class="alert alert-warning">
**Note:** The examples below contain event handler written in Python that needs a running kernel, they will not work in the pure html documentation. They do work in binder!
</div>
End of explanation
"""
sheet = ipysheet.sheet(rows=5, columns=4)
row = ipysheet.row(0, [0, 1, 2, 3], background_color="red")
column = ipysheet.column(1, ["a", "b", "c", "d"], row_start=1, background_color="green")
cells = ipysheet.cell_range([["hi", "ola"], ["ciao", "bonjour"], ["hallo", "guten tag"]],
row_start=1, column_start=2, background_color="yellow")
sheet
"""
Explanation: Cell ranges
Instead of referring to a single cell, we can also refer to cell ranges, rows and columns.
End of explanation
"""
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
@ipysheet.calculation(inputs=[cell_a, cell_b], output=cell_sum)
def calculate(a, b):
return a + b
widgets.VBox([sheet, slider])
"""
Explanation: Calculations
Since this is such a common pattern, a helper decorator calculation is provided, shortening the above code considerably.
End of explanation
"""
jscode_renderer_negative = """
function (instance, td, row, col, prop, value, cellProperties) {
Handsontable.renderers.TextRenderer.apply(this, arguments);
if (value < 0)
td.style.backgroundColor = 'red'
else
td.style.backgroundColor = 'green'
}
"""
ipysheet.renderer(code=jscode_renderer_negative, name='negative');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative')
s
"""
Explanation: Renderers
ipysheet is built on Handsontable, which allows custom renderers, which we also support. Note that this means ipysheet allows arbitrary JavaScript injection (TODO: make this part optional)
End of explanation
"""
def renderer_negative(instance, td, row, col, prop, value, cellProperties):
Handsontable.renderers.TextRenderer.apply(this, arguments);
if value < 0:
td.style.backgroundColor = 'orange'
else:
td.style.backgroundColor = ''
ipysheet.renderer(code=renderer_negative, name='negative_transpiled');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative_transpiled')
s
"""
Explanation: If flexx is installed, Python code can be transpiled to JavaScript at runtime.
End of explanation
"""
|
julienchastang/unidata-python-workshop | notebooks/Declarative_Plotting/Satellite_Declarative.ipynb | mit |
from siphon.catalog import TDSCatalog
from datetime import datetime
# Create variables for URL generation
image_date = datetime.utcnow().date()
region = 'Mesoscale-1'
channel = 8
# Create the URL to provide to siphon
data_url = ('https://thredds.ucar.edu/thredds/catalog/satellite/goes/east/products/'
f'CloudAndMoistureImagery/{region}/Channel{channel:02d}/'
f'{image_date:%Y%m%d}/catalog.xml')
cat = TDSCatalog(data_url)
dataset = cat.datasets[1]
print(dataset)
ds = dataset.remote_access(use_xarray=True)
print(ds)
"""
Explanation: <a name="pagetop"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;"><img src="https://pbs.twimg.com/profile_images/1187259618/unidata_logo_rgb_sm_400x400.png" alt="Unidata Logo" style="height: 98px;"></div>
<h1>Declarative Plotting with Satellite Data</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:300 px"><img src="https://unidata.github.io/MetPy/latest/_images/sphx_glr_GINI_Water_Vapor_001.png" alt="Example Satellite Image" style="height: 350px;"></div>
Overview:
Teaching: 20 minutes
Exercises: 15 minutes
Questions
How can satellite data be accessed with siphon?
How can maps of satellite data be made using the declarative plotting interface?
Table of Contents
<a href="#dataaccess">Accessing data with Siphon</a>
<a href="#plotting">Plotting the data</a>
<a name="dataaccess"></a>
Accessing data with Siphon
As we saw with the PlottingSatelliteData notebook, GOES 16/17 data is available via the Unidata THREDDS server and can be accessed with siphon. We make use of f-strings in order to provide date, region, and channel variables to the URL string.
End of explanation
"""
from metpy.plots import ImagePlot, MapPanel, PanelContainer
%matplotlib inline
"""
Explanation: <a name="plotting"></a>
Plotting the Data
To plot our data we'll be using MetPy's new declarative plotting functionality. You can write lots of matplotlib based code, but this interface greatly reduces the number of lines you need to write to get a great starting plot and then lets you customize it. The declarative plotting interface consists of three fundamental objects/concepts:
Plot - This is the actual representation of the data and can be ImagePlot, ContourPlot, or Plot2D.
Panel - This is a single panel (i.e. coordinate system). Panels contain plots. Currently the MapPanel is the only panel type available.
Panel Container - The container can hold multiple panels to make a multi-pane figure. Panel Containers can be thought of as the whole figure object in matplotlib.
So containers have panels which have plots. It takes a second to get that straight in your mind, but it makes setting up complex figures very simple.
For this plot we need a single panel and we want to plot the satellite image, so we'll use the ImagePlot.
End of explanation
"""
img = ImagePlot()
img.data = ds
img.field = 'Sectorized_CMI'
"""
Explanation: Let's start out with the smallest element, the plot, and build up to the largest, the panel container.
First, we'll make the ImagePlot:
End of explanation
"""
panel = MapPanel()
panel.plots = [img]
"""
Explanation: Next, we'll make the panel that our image will go into, the MapPanel object and add the image to the plots on the panel.
End of explanation
"""
pc = PanelContainer()
pc.panels = [panel]
"""
Explanation: Finally, we make the PanelContainer and add the panel to its container. Remember that since we can have multiple plots on a panel and multiple panels in a container, we use lists. In this case it just happens to be a list of length 1.
End of explanation
"""
pc.show()
"""
Explanation: Unlike working with matplotlib directly in the notebooks, this figure hasn't actually been rendered yet. Calling the show method of the panel container builds up everything, renders, and shows it to us.
End of explanation
"""
# Import for the bonus exercise
from metpy.plots import add_timestamp
# Make the image plot
# YOUR CODE GOES HERE
# Make the map panel and add the image to it
# YOUR CODE GOES HERE
# Make the panel container and add the panel to it
# YOUR CODE GOES HERE
# Show the plot
# YOUR CODE GOES HERE
"""
Explanation: Exercise
Look at the documentation for the ImagePlot here and figure out how to set the colormap of the image. For this image, let's go with the WVCIMSS_r colormap as this is a mid-level water vapor image. Set the range for the colormap to 195-265 K.
BONUS: Use the MetPy add_timestamp method from metpy.plots to add a timestamp to the plot. You can get the axes object to plot on from the ImagePlot. The call will look something like img.ax. This needs to happen after the panels have been added to the PanelContainer.
DAILY DOUBLE: Using the start_date_time attribute on the dataset ds, change the call to add_timestamp to use that date and time and the pretext to say GOES 16 Channel X.
End of explanation
"""
# %load solutions/sat_map.py
"""
Explanation: Solution
End of explanation
"""
|
snowicecat/umich-eecs445-f16 | handsOn_lecture06_MLE-MAP-Coding/HandsOn06_MAP-coding.ipynb | mit |
# all the packages you need
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from numpy.linalg import inv
# load data from .mat
mat = scipy.io.loadmat('mnist_49_3000.mat')
print (mat.keys())
x = mat['x'].T
y = mat['y'].T
print (x.shape, y.shape)
# show example image
plt.imshow (x[4, :].reshape(28, 28))
# add bias term
x = np.hstack([np.ones((3000, 1)), x])
# convert label -1 to 0
y[y == -1] = 0
print(y[y == 0].size, y[y == 1].size)
# split into train set and test set
x_train = x[: 2000, :]
y_train = y[: 2000, :]
x_test = x[2000 : , :]
y_test = y[2000 : , :]
"""
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\N}{\mathcal{N}}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\norm}[1]{\|#1\|_2}
\newcommand{\d}{\mathop{}!\mathrm{d}}
\newcommand{\qed}{\qquad \mathbf{Q.E.D.}}
\newcommand{\vx}{\mathbf{x}}
\newcommand{\vy}{\mathbf{y}}
\newcommand{\vt}{\mathbf{t}}
\newcommand{\vb}{\mathbf{b}}
\newcommand{\vw}{\mathbf{w}}
$$
EECS 445: Machine Learning
Hands On 05: Linear Regression II
Instructor: Zhao Fu, Valli, Jacob Abernethy and Jia Deng
Date: September 26, 2016
Problem 1a: MAP estimation for Linear Regression with unusual Prior
Assume we have $n$ vectors $\vec{x}_1, \cdots, \vec{x}_n$. We also assume that for each $\vec{x}_i$ we have observed a target value $t_i$, where
$$
\begin{gather}
t_i = \vec{w}^T \vec{x_i} + \epsilon \
\epsilon \sim \mathcal{N}(0, \beta^{-1})
\end{gather}
$$
where $\epsilon$ is the "noise term".
(a) Quick quiz: what is the likelihood given $\vec{w}$? That is, what's $p(t_i | \vec{x}_i, \vec{w})$?
Answer: $p(t_i | \vec{x}_i, \vec{w}) = \mathcal{N}(t_i|\vec{w}^\top \vec{x_i}, \beta^{-1}) = \frac{1}{(2\pi \beta^{-1})^\frac{1}{2}} \exp{(-\frac{\beta}{2}(t_i - \vec{w}^\top \vec{x_i})^2)}$
Problem 1: MAP estimation for Linear Regression with unusual Prior
Assume we have $n$ vectors $\vec{x}_1, \cdots, \vec{x}_n$. We also assume that for each $\vec{x}_i$ we have observed a target value $t_i$, sampled IID. We will also put a prior on $\vec{w}$, using PSD matrix $\Sigma$.
$$
\begin{gather}
t_i = \vec{w}^T \vec{x_i} + \epsilon \
\epsilon \sim \mathcal{N}(0, \beta^{-1}) \
\vec{w} \sim \mathcal{N}(0, \Sigma)
\end{gather}
$$
Note: the difference here is that our prior is a multivariate gaussian with non-identity covariance! Also we let $\mathcal{X} = {\vec{x}_1, \cdots, \vec{x}_n}$
(a) Compute the log posterior function, $\log p(\vec{w}|\vec{t}, \mathcal{X},\beta)$
Hint: use Bayes' Rule
(b) Compute the MAP estimate of $\vec{w}$ for this model
Hint: the solution is very similar to the MAP estimate for a gaussian prior with identity covariance
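A hedged sketch of the derivation (the notebook leaves this as an exercise, so treat the algebra below as something to verify rather than a given):

```latex
% (a) By Bayes' rule, the log posterior up to an additive constant independent of w:
\log p(\vec{w} \mid \vec{t}, \mathcal{X}, \beta)
  = -\frac{\beta}{2} \sum_{i=1}^{n} \left(t_i - \vec{w}^T \vec{x}_i\right)^2
    - \frac{1}{2} \vec{w}^T \Sigma^{-1} \vec{w} + \text{const}

% (b) Setting the gradient with respect to w to zero gives the MAP estimate:
\hat{\vec{w}}_{\mathrm{MAP}}
  = \left( \sum_{i=1}^{n} \vec{x}_i \vec{x}_i^T + \beta^{-1} \Sigma^{-1} \right)^{-1}
    \sum_{i=1}^{n} t_i \vec{x}_i
```

Note that with $\Sigma = \alpha^{-1} I$ this reduces to the familiar ridge-regression form with regularizer $\lambda = \alpha / \beta$.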
Problem 2: Handwritten digit classification with logistic regression
Download the file mnist_49_3000.mat from Canvas. This is a subset of the MNIST handwritten digit database, which is a well-known benchmark database for classification algorithms. This subset contains examples of the digits 4 and 9. The data file contains variables x and y, with the former containing patterns and the latter labels. The images are stored as column vectors.
Exercise:
* Load data and visualize data (Use scipy.io.loadmat to load matrix)
* Add bias to the features $\phi(\vx)^T = [1, \vx^T]$
* Split dataset into training set with the first 2000 data and test set with the last 1000 data
End of explanation
"""
# Initialization of parameters
w = np.zeros((785, 1))
lmd = 10
def computeE(w, x, y, lmd) :
E = np.dot(y.T, np.log(1 + np.exp(-np.dot(x, w)))) + np.dot(1 - y.T, np.log(1 + \
np.exp(np.dot(x, w)))) + lmd * np.dot(w.T, w)
return E[0][0]
print (computeE(w, x, y, lmd))
def sigmoid(a) :
return np.exp(a + 1e-6) / (1 + np.exp(a + 1e-6))
def computeGradientE(w, x, y, lmd) :
return np.dot(x.T, sigmoid(np.dot(x, w)) - y) + lmd * w
print (computeGradientE(w, x, y, lmd).shape)
"""
Explanation: Implement Newton’s method to find a minimizer of the regularized negative log likelihood. Try setting $\lambda$ = 10. Use the first 2000 examples as training data, and the last 1000 as test data.
Exercise
* Implement the loss function with the following formula:
$$
\begin{align}
E(\vw)
&= -\ln P(\vy = \vt| \mathcal{X}, \vw) \
&= \boxed{\sum \nolimits_{n=1}^N \left[ t_n \ln (1+\exp(-\vw^T\phi(\vx_n))) + (1-t_n) \ln(1+\exp(\vw^T\phi(\vx_n))) \right] + \lambda \vw^T\vw}\
\end{align}
$$
* Implement the gradient of loss $$\nabla_\vw E(\vw) = \boxed{ \Phi^T \left( \sigma(\Phi \vw) - \vt \right) + \lambda \vw}$$
where $\sigma(a) = \frac{\exp(a)}{1+\exp(a)}$
End of explanation
"""
def computeR(w, x, y) :
return sigmoid(np.dot(x, w)) * (1 - sigmoid(np.dot(x, w)))
# print (computeR(w, x, y).T)
def computeHessian(w, x, y, lmd) :
return np.dot(x.T * computeR(w, x, y).T, x) + lmd * np.eye(w.shape[0])
# print (computeHessian(w, x, y, lmd))
def update(w, x, y, lmd) :
hessian = computeHessian(w, x, y, lmd)
gradient = computeGradientE(w, x, y, lmd)
# print (np.sum(hessian))
return w - np.dot(inv(hessian), gradient)
print (update(w, x, y, lmd).shape)
"""
Explanation: Recall: Newton's Method
$$
\vx_{n+1}= \vx_n - \left(\nabla^2 f(\vx_n)\right)^{-1} \nabla_\vx f(\vx_n)
$$
of which $\nabla^2 f(\vx_n)$ is the Hessian matrix, i.e. the matrix of second-order partial derivatives
$$
\nabla^2 f = \begin{bmatrix}
\frac{\partial^2 f}{\partial x_1\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_1\partial x_n}\
\vdots & \ddots & \vdots\
\frac{\partial^2 f}{\partial x_n\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n\partial x_n}
\end{bmatrix}
$$
$$
\begin{align}
\nabla^2 E(\vw)
&= \nabla_\vw \nabla_\vw E(\vw) \
&= \sum \nolimits_{n=1}^N \phi(\vx_n) r_n(\vw) \phi(\vx_n)^T + \lambda I
\end{align}
$$
of which $r_n(\vw) = \sigma(\vw^T \phi(\vx_n)) \cdot ( 1 - \sigma(\vw^T \phi(\vx_n)) )$
Exercise
* Implement $r_n(\vw)$
* Implement $\nabla^2 E(\vw)$
* Implement update function
End of explanation
"""
def train(w, x, y, lmd) :
w_new = update(w, x, y, lmd)
diff = np.sum(np.abs(w_new - w))
while diff > 1e-6:
w = w_new
w_new = update(w, x, y, lmd)
diff = np.sum(np.abs(w_new - w))
return w
w_train = train(w, x_train, y_train, lmd)
def test(w, x, y) :
tmp = np.dot(x, w)
y_pred = np.zeros(y.shape)
y_pred[tmp > 0] = 1
error = np.mean(np.abs(y_pred - y))
return error
print (test(w, x_test, y_test))
print (test(w_train, x_test, y_test))
print (computeE(w_train, x_train, y_train, lmd))
"""
Explanation: Exercise
* Implement training process(When to stop iterating?)
* Implement test function
* Compute the test error
* Compute the value of the objective function at the optimum
End of explanation
"""
|
alepoydes/introduction-to-numerical-simulation | practice/covid/COVID-19.ipynb | mit |
# Install the libraries if this has not been done earlier.
# ! pip3 install seaborn matplotlib numpy pandas
# Import the libraries
import numpy as np
import matplotlib.pyplot as plt
import urllib.request
import pandas as pd
import seaborn as sns
# Use the default seaborn settings; set only the figure size
sns.set(rc={'figure.figsize':(11, 4)})
"""
Explanation: COVID-19: modeling the spread and analyzing the data
The goal of this lab is to build a model of COVID-19 infection of the population with the best possible predictive power. Each student may choose at their own discretion which factors the model takes into account; at the end of the semester a competition for the best model is held.
Workflow
Study the theory and the available data.
Formulate hypotheses about the relations between the quantities and the laws of their evolution.
Build a mathematical model, find analytical solutions, write simulation code.
Qualitatively compare the model results with real data.
Estimate the model parameters from real data.
Quantitatively compare the model predictions with historical data.
Extract practically useful information, for example estimates of the illness duration, the reproduction number, the rate of immunity decline, the trends of these parameters; form scenarios for the further development of the COVID-19 situation.
Propose control parameters, such as the introduction of quarantines, and develop an algorithm that makes it possible to control the spread of COVID.
Source data on COVID-19
COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University. Data visualization.
Kouprianov, A. (2021). COVID-19.SPb.monitoring. Monitoring COVID-19 epidemic in St. Petersburg, Russia: Data and scripts. Data for the whole of Russia.
Coronavirus (Covid-19) Data in the United States
Our World in Data. Monthly excess-mortality data. Data visualization.
Oxford Covid-19 Government Response Tracker
Yandex. Coronavirus dashboard. Data on a map.
Excess mortality during the COVID-19 pandemic
Publications about the data
"What Saint Petersburg residents need to know about the coronavirus epidemic. COVID-19 in Petersburg, a summary infographic report." Fontanka: "Is the wave rising? What the official COVID statistics in Petersburg keep silent about."
Data on the age pyramid
PopulationPyramid.net
Rosstat: Population of the Russian Federation by municipality
End of explanation
"""
# Load the data aggregated by A. Kouprianov.
# URL = 'https://raw.githubusercontent.com/alexei-kouprianov/COVID-19.SPb.monitoring/main/data/SPb.COVID-19.united.csv'
# urllib.request.urlretrieve(URL, 'data/SPb.COVID-19.united.csv')
# Read a local copy of the file
data = pd.read_csv('data/SPb.COVID-19.united.csv', na_values='NA', parse_dates=[0,5], index_col=0)
# Print the column names and dtypes
print(data.dtypes)
# Print the table shape
print(data.shape)
# Print the date range.
print(f"{data.index[0]} -- {data.index[-1]}")
# Visually check that the loaded data look correct
# data
"""
Explanation: Obtaining the source data
End of explanation
"""
(data['ACTIVE.sk']+data['CONFIRMED.sk']-data['RECOVERED.sk']-data['DEATHS.sk']).plot(style='-r', label='Calculated')
(data['ACTIVE.sk'].shift(-1)).plot(style='k', label='Historical')
plt.legend()
plt.show()
# Let's see how the numbers of infections and deaths changed over time.
data[['CONFIRMED.sk','CONFIRMED.spb']].plot(subplots=False)
plt.show()
data['DEATHS.sk'].plot(subplots=True)
plt.show()
# The statistics oscillate over the week because of how the data are collected.
# Sum the data over each week, which reduces the noise.
data7 = data.resample('7D').sum()
data7[["CONFIRMED.sk","CONFIRMED.spb"]].plot(subplots=False)
plt.show()
data7["DEATHS.sk"].plot(subplots=False)
plt.show()
# Load the total mortality data for Russia (measured more accurately than COVID deaths).
# URL = "https://raw.githubusercontent.com/dkobak/excess-mortality/main/russian-data/raw/russia-monthly-deaths-allregions-final.csv"
# urllib.request.urlretrieve(URL, 'data/russia-monthly-deaths.csv')
# Read a local copy of the file
deaths_data = pd.read_csv('data/russia-monthly-deaths.csv', skiprows=0, header=None, ).T
deaths_data.columns = ['year','month'] + list( deaths_data.iloc[0,2:] )
deaths_data.drop(0,inplace=True)
deaths_data['day'] = 15
months = {'январь':1, 'февраль':2, 'март':3, 'апрель':4, 'май':5, 'июнь':6, 'июль':7, 'август':8, 'сентябрь':9, 'октябрь':10, 'ноябрь':11, 'декабрь':12}
deaths_data['month'] = deaths_data['month'].apply(lambda x: months[x])
index = pd.to_datetime( deaths_data[['year','month','day']] )
deaths_data.set_index(index, inplace=True)
deaths_data.drop(columns=['year','month','day'], inplace=True)
for n in deaths_data:
deaths_data[n] = deaths_data[n].astype(np.float32)
# Print the column names and dtypes
# print(deaths_data.dtypes)
# Print the table dimensions
# print(deaths_data.shape)
# Extract mortality for Saint Petersburg and plot it
spb_deaths_data = deaths_data['г. Санкт-Петербург']
print(spb_deaths_data)
spb_deaths_data.plot()
plt.show()
# spb_deaths_data.groupby(spb_deaths_data.index.year)
"""
Explanation: Since the available data describe only changes in the sizes of large classes of people, we will analyze these data in the spirit of the SIR model.
We group everyone into classes: S - susceptible to the disease, I - infected/ill, R - immune to the disease/recovered/deceased.
The number of infected I is directly available in the historical data in the ACTIVE.sk field; all data are given with a one-day step. The numbers S and R are not directly available, but we do have data on the transitions between the classes:
- The CONFIRMED.sk field contains the number of new cases, i.e. people who moved from class S to class I.
- The RECOVERED.sk field contains the number of recoveries, and the DEATHS.sk field the number of deaths. Their sum equals the number of people who moved from class I to class R.
In theory the value of ACTIVE.sk can be computed from the CONFIRMED.sk, RECOVERED.sk and DEATHS.sk fields; in practice the stored values and the computed ones may differ slightly.
End of explanation
"""
# Let us see how the number of infected relates to mortality.
data7.plot(x="CONFIRMED.sk",y="DEATHS.sk", style='.r')
plt.show()
"""
Explanation: Preliminary analysis
Some conclusions about the dependencies in the data can be drawn simply by inspecting the plots.
Let us see what we can find.
End of explanation
"""
# Extract the data for the first wave
first_wave = data7.loc[:'2020-09-01']
# Visually verify that the death data indeed show a single wave.
first_wave['DEATHS.sk'].plot()
plt.show()
"""
Explanation: Our data contain no transitions from class R back to class S, which would correspond to a loss of immunity over time.
According to immunologists' estimates, immunity should last for at least 6 months, and an immune person falls ill with a much lower probability, so during the first wave of COVID infections the R -> S transitions can be neglected.
End of explanation
"""
# Extract from the available historical data the quantities described by the SIR model.
# Number of immune (removed) people.
R = (first_wave['RECOVERED.sk']+first_wave['DEATHS.sk']).cumsum()
# Total population
N = 5384342
# Number of susceptible people
S = N - first_wave['CONFIRMED.sk'].cumsum()
# Number of infected people
I = N - S - R
# Number of deaths per day.
dD = first_wave['DEATHS.sk']/7
# Only a small fraction of the city's residents fell ill during the first wave, so S barely changes.
plt.semilogy(S/N, 'y', label='Susceptible')
plt.semilogy(I/N, 'r', label='Infectious')
plt.semilogy(R/N, 'g', label='Recovered')
plt.semilogy(dD/N, 'b', label='Deaths/day')
plt.legend()
plt.show()
plt.semilogx(S/N, R/N)
plt.xlabel('S')
plt.ylabel('R')
plt.show()
# Replacing the derivatives in the SIR equations by finite differences, we can estimate
# the model constants at each moment in time.
# We use central finite differences to estimate the derivatives,
# so the new quantities are defined at the midpoints between the old samples.
index = first_wave.index.shift(periods=3)[:-1]
# Compute the derivatives.
dS = pd.Series( np.diff(S.to_numpy(), 1)/7, index=index)
dI = pd.Series( np.diff(I.to_numpy(), 1)/7, index=index)
dR = pd.Series( np.diff(R.to_numpy(), 1)/7, index=index)
# Compute the mean values on the intervals.
def midpoint(x): return pd.Series( (x[1:]+x[:-1])/2, index=index)
mS = midpoint(S.to_numpy())
mI = midpoint(I.to_numpy())
mR = midpoint(R.to_numpy())
# Estimate the constants at each moment in time, assuming the data contain no noise.
beta = -dS/mS/mI*N
gamma = dR/mI
rho0 = beta/gamma # Basic reproduction number
rho = rho0*mS/N # Effective reproduction number
R0 = -np.log(S/N)/(R/N)
fig, ax = plt.subplots(1, figsize=(15,5))
ax.plot(beta, 'b', label='beta')
ax.plot(gamma, 'g', label='gamma')
ax.semilogy(rho0, 'k', label='rho0')
ax.semilogy(rho, 'r', label='rho')
ax.semilogy(R0, 'y', label='R0')
ax.semilogy(1+0*rho, '--k', label='threshold=1')
ax.set_xlabel("Time")
ax.legend()
plt.show()
# There certainly was noise, especially on the initial interval, when the number of infected was very small.
# In our model even a single patient should recover at a rate of roughly 1/30 per day, so the derivative dI/dt
# is never zero; in practice, however, a recovered patient enters the statistics exactly once, when discharged from the hospital,
# or even later.
# If there were 30 patients, then on average one patient would be discharged per day, which matches the model's prediction:
# 30 patients times 1/30 of a patient per day = 1 patient per day.
# With a large number of patients the estimate is quite reasonable, but with only a handful of cases the parameter estimates are wild.
"""
Explanation: The SIR model
Having formed a general picture of how the number of infected relates to the population size, the number of recovered, and other quantities — from the historical data, the plots, and our general understanding of how epidemics develop — we are ready to formulate a mathematical model reproducing these dependencies.
The model captures the qualitative character of the dependencies, but it may contain parameters specifying, for example, how contagious the infection is, the length of the incubation period, and so on.
We will later try to recover these parameters from the historical data.
To understand what is happening and to predict the future, we need a model describing the relations between the observed quantities.
Surges in infections followed by a decline in the number of infected can be described by the simplest SIR model, in which the whole population is split into groups: S - susceptible to the infection, I - infected, R - immune to the infection.
Within each group people are assumed to respond to the infection identically; it is also assumed that anyone can infect anyone else, i.e. the population is homogeneous.
These assumptions do not quite hold in reality, but they let us formulate a simple model whose predictive accuracy we will try to check.
Initially the number of infected I is very small, and everyone belongs to the group S of people at risk of falling ill.
As the infection spreads, people gradually flow from group S into group I, and the flow rate increases both with the number of infected I and with the number of people S left to infect.
To a first approximation, the infection rate can be taken proportional to the fraction of infected $I/N$ and of those not yet infected $S/N$, with proportionality coefficient $\beta$.
Infected people recover over time and acquire immunity; in the model we assume that people with immunity do not fall ill again.
People with immunity are assigned to group R; the deceased also fall into this group.
In the model we approximate that a fixed fraction $\gamma$ of the infected recovers per unit of time.
The SIR model is described by the system of differential equations:
$$
\begin{cases}
\frac{dS}{dt} = -\frac{\beta I S}{N},\
\frac{dI}{dt} = \frac{\beta I S}{N}-\gamma I,\
\frac{dR}{dt} = \gamma I.
\end{cases}
$$
The total number of people $N=S+I+R$ is constant in this model.
According to Rosstat, the population of Saint Petersburg on January 1, 2021 was $N=5 384 342$ people.
The direction of change of the number of infected is determined by the basic reproduction number $\rho_0=\frac{\beta}{\gamma}$.
From the second equation,
$$\frac{dI}{dt} = \gamma I \left(\rho_0\frac{S}{N}-1\right).$$
The quantity $\rho=\rho_0(1-\frac{R+I}{N})=\rho_0\frac{S}{N}$ is called the effective reproduction number
and equals the number of people infected by a single infected person.
If $\rho<1$, the number of infected decreases and the epidemic subsides.
If $\rho>1$, the number of infected grows exponentially.
Since the effective reproduction number depends on the number of uninfected people,
we can compute the minimum number of people with immunity required to prevent the growth of infections,
i.e. to reach herd immunity:
$$
\rho<1\quad\Leftrightarrow\quad
1-\frac{1}{\rho_0}<\frac{R+I}{N}.
$$
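The threshold above is easy to evaluate numerically; a minimal sketch, where the value rho0 = 2.5 is an illustrative assumption rather than an estimate from the data:

```python
def herd_immunity_threshold(rho0):
    # Fraction (R + I)/N that must be immune for the epidemic to decay:
    # rho = rho0 * S/N < 1  <=>  (R + I)/N > 1 - 1/rho0.
    return 1.0 - 1.0 / rho0

print(herd_immunity_threshold(2.5))  # 0.6, i.e. 60% of the population
```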
End of explanation
"""
# Use the classical Runge-Kutta method RK4 to solve the ODE system, see
# https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods
class RungeKutta:
def __init__(self):
pass
def integrate(self, *x, rhs=None, dt=0.01, period=1.):
"""
Numerically integrate the system of differential equations:
dx_1/dt = rhs(x_1, ..., x_n)_1, ...
dx_n/dt = rhs(x_1, ..., x_n)_n.
The vector of initial values is passed in x.
The right-hand side is computed by the function `rhs`, which must accept `n` values
and return just as many.
The method returns the vector of values x_1, ..., x_n after `period` units of time.
The intermediate integration steps have length dt.
"""
while period>dt:
x = self.single_step(*x, rhs=rhs, dt=dt)
period = period - dt
return self.single_step(*x, rhs=rhs, dt=period)
def single_step(self, *x0, rhs=None, dt=0.01):
dx0 = rhs(*x0) # Compute the derivatives
x1 = (xn+0.5*dt*dxn for xn, dxn in zip(x0, dx0)) # Take an intermediate step of length dt/2
dx1 = rhs(*x1) # Take the next step...
x2 = (xn+0.5*dt*dxn for xn, dxn in zip(x0, dx1))
dx2 = rhs(*x2)
x3 = (xn+dt*dxn for xn, dxn in zip(x0, dx2))
dx3 = rhs(*x3)
# Combine the results and take the final step.
return tuple(xn+dt/6*dxn0+dt/3*dxn1+dt/3*dxn2+dt/6*dxn3 for xn, dxn0, dxn1, dxn2, dxn3 in zip(x0, dx0, dx1, dx2, dx3))
# A small test. The equation x'(t)=-x(t), x(0)=1 has the solution x(t)=exp(-t), which equals x(1)=1/e at the point 1.
integrator = RungeKutta()
print( "RK4: ", integrator.integrate(1., rhs=lambda x: (-x,), period=1) )
print( "Exact solution:", np.exp(-1) )
# Implement the model and see what prediction it gives us.
class GaussianNoise:
def __init__(self, sigma):
self._sigma = sigma
def __call__(self, *args):
return self.generate(*args)
def generate(self, *shape):
return self._sigma*np.random.randn(*shape)
class SIR:
def __init__(self, noise=0):
"""
Arguments:
noise = assumed noise magnitude for the components WS and WI.
"""
self.WS, self.WI = GaussianNoise(noise), GaussianNoise(noise)
self.integrator = RungeKutta()
def generate(self, beta, gamma, alpha=0., periods=180,
S0=5384342, I0=1, R0=0,
dt=1, t0="2020-03-02"
):
"""
Generate one realization of the epidemic's development for the given initial conditions.
Arguments:
S0, I0, R0 = initial numbers of people without immunity, infected, and with immunity.
t0 = initial moment of time,
dt = time step in days,
periods = number of simulation steps,
beta = infection rate,
gamma = recovery rate,
"""
index = pd.date_range(t0, periods=periods, freq=pd.DateOffset(days=dt))
S = np.zeros(periods)
I, R = np.zeros_like(S), np.zeros_like(S)
S[0], I[0], R[0] = S0, I0, R0
N = S0+I0+R0
for n in range(0, periods-1):
S[n+1], I[n+1], R[n+1] = self.integrator.integrate(
S[n], I[n], R[n],
rhs = lambda S, I, R: (-beta*S*I/N+alpha*R,beta*S*I/N-gamma*I,gamma*I-alpha*R),
period = dt,
dt = 0.1
)
WS, WI = self.WS(1)*np.sqrt(dt), self.WI(1)*np.sqrt(dt)
S[n+1] += WS
I[n+1] += WI
R[n+1] += -WS-WI
return pd.DataFrame(
data={ 'S': S, 'I': I, 'R': R },
index=index
)
def inspect_SIR(simdata):
plt.semilogy(simdata['S'], label='S')
plt.semilogy(simdata['I'], label='I')
plt.semilogy(simdata['R'], label='R')
plt.legend()
plt.ylim((1, None))
plt.show()
# Create the model.
model = SIR(noise=1)
# Run the simulation.
simdata = model.generate(beta=0.9, gamma=0.1)
inspect_SIR(simdata)
"""
Explanation: Non-working days were in effect from March 30 to April 30, and the jump in the reproduction number falls exactly on this period. With data covering a long stretch of time we can estimate the spread rate fairly accurately; in the first months of the epidemic, however, the number of infections was small, and, as we can see, naive estimates of the spread rate of the infection are very rough, which makes decision-making difficult. For more accurate parameter estimation one could use a probabilistic model, which, however, requires considerably more effort.
According to the rough estimate, the infection rate was falling from May to July, and somewhere in mid-June the basic reproduction number dropped below 1; these dates mark the end of the first wave of the epidemic.
In July the first signs of a second wave are already visible.
The number of people who have had the disease does not exceed a fraction of a percent of the whole population, so the effective reproduction number practically coincides with the basic one; hence the epidemic subsided not because the pool of susceptible people was exhausted, but because the conditions changed.
If we assume the virus did not mutate significantly, then the spread rate of the virus decreased because contacts between people were restricted.
End of explanation
"""
# If we allow patients to gradually lose immunity, the course of the epidemic changes.
model = SIR(noise=1)
simdata = model.generate(beta=0.9, gamma=0.1, alpha=0.001)
inspect_SIR(simdata)
# With the chosen parameters the number of infected in the population levels off to a constant over time but never drops to zero.
# How do seasonal diseases arise? Can we obtain a periodic variation in the number of infected? What determines the period?
# Let us try to pick parameters that give a number of infected close to the historical data.
model = SIR(noise=0)
simdata = model.generate(beta=0.15, gamma=0.02)
inspect_SIR(simdata)
"""
Explanation: The plot clearly shows a period of exponential growth in the number of infected (it looks like a straight line when the number of infected is plotted on a logarithmic axis).
When almost everyone has fallen ill, the growth period gives way to a period of exponential decay in the number of infected.
Interestingly, some people never manage to get infected, and within this model they never will.
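While $S \approx N$, the second SIR equation reduces to $dI/dt \approx (\beta-\gamma)I$, so the early growth is $I(t) \approx I_0 e^{(\beta-\gamma)t}$ — a straight line on a log axis. A small sketch of this, using the same illustrative parameter values as the simulation:

```python
import numpy as np

beta, gamma = 0.9, 0.1
t = np.arange(10.0)
I_early = 1.0 * np.exp((beta - gamma) * t)
# On a log scale the curve is linear with slope beta - gamma:
slopes = np.diff(np.log(I_early))
print(slopes)  # each entry is approximately 0.8
```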
End of explanation
"""
# Use automatic differentiation to make our life easier.
import autograd
# Functions that we want to differentiate must use a special version of numpy.
import autograd.numpy as jnp
def sir_log_likelihood(betagamma, mS, mI, mR, dS, dI, dR, N):
"""
Computes the log-likelihood function, as described above.
Arguments:
betagamma - array of the two parameters [beta, gamma].
mS, mI, mR - arrays with the number of people in each category at certain moments in time.
dS, dI, dR - daily change in the number of people in each category.
N - total number of people.
"""
beta,gamma = betagamma
vS=beta/N*mI*mS
vR=gamma*mI
WS = dS+vS
WI = dI-vS+vR
WR = dR-vR
return -( jnp.sum(WS**2) + jnp.sum(WI**2) + jnp.sum(WR**2) ) / mS.shape[0]
# Store the required time series in NumPy arrays.
np_dS = np.diff(S.to_numpy(), 1)/7
np_dI = np.diff(I.to_numpy(), 1)/7
np_dR = np.diff(R.to_numpy(), 1)/7
# Compute the mean values on the intervals.
def np_midpoint(x): return (x[1:]+x[:-1])/2
np_mS = np_midpoint(S.to_numpy())
np_mI = np_midpoint(I.to_numpy())
np_mR = np_midpoint(R.to_numpy())
# Fix the indices for the training and the test sets.
trainingset = slice(0,None,2) # all even samples
testset = slice(1,None,2) # all odd samples.
def loss(betagamma, indices):
"""Loss function."""
return -sir_log_likelihood(betagamma,
np_mS[indices], np_mI[indices], np_mR[indices],
np_dS[indices], np_dI[indices], np_dR[indices],
N)
# Gradient of the loss function with respect to the parameters.
d_loss = autograd.grad(loss, 0)
betagamma = np.random.rand(2)
print(f"""Parameters {betagamma}
Loss {loss(betagamma, trainingset)}
Derivative {d_loss(betagamma, trainingset)}""")
# To fit the parameters we will minimize the error functional.
# We use off-the-shelf optimizer implementations.
from scipy.optimize import minimize
def train(betagamma, indices):
"""Takes initial parameters and improves them by minimizing the loss function."""
def trace(xk):
print('.',end='\n')
res = minimize(fun=loss, x0=betagamma, args=(indices,), jac=d_loss, callback=trace, tol=1e-1)
print(res.message)
return res.x
betagamma0 = np.random.rand(2)
print(f"Initial parameters {betagamma0}")
print(f"Initial loss {loss(betagamma0, trainingset)}")
betagamma = train(betagamma0, trainingset)
print(f"Optimized parameters {betagamma}")
print(f"Optimized loss {loss(betagamma, trainingset)}")
print(f"Loss on test set {loss(betagamma, testset)}")
# Generate the dynamics from the model with the fitted parameters and plot the result.
model = SIR(noise=0)
simdata = model.generate(beta=betagamma[0], gamma=betagamma[1])
inspect_SIR(simdata)
# As we can see, the optimizer finds the maximum of the likelihood function in a few iterations.
# The resulting dynamics, however, differ significantly from the data.
# With the estimated parameters the epidemic would have developed more slowly, and the first wave would not even have reached its peak within half a year.
# This is easy to explain: when roughly estimating the parameters above, we saw that the model parameters changed over time,
# whereas in the approach used here the parameters were assumed constant over the whole interval.
# Try looking at shorter time spans within which the parameters should not have changed much:
# during the lockdown, after the lockdown, and so on.
"""
Explanation: Estimating the model parameters
The source data are very noisy, so our estimates of the model parameters from pairs of points are quite approximate.
If we assume the constants did not change over the whole interval under consideration, the parameters can be estimated from the entire body of data, which greatly reduces the noise.
For a correct estimate we need some model of the noise.
Consider the simplest model, in which the rate of change of the number of people in each group follows the SIR model, but at every moment deviations from the model are possible; these deviations are zero on average and mutually independent.
$$
\begin{cases}
\dot S=\frac{dS}{dt} = -\frac{\beta I S}{N}+W_S,\
\dot I=\frac{dI}{dt} = \frac{\beta I S}{N}-\gamma I+W_I,\
\dot R=\frac{dR}{dt} = \gamma I+W_R.
\end{cases}
$$
For each moment of time $t$ the equations involve their own random variables $W_S$, $W_I$ and $W_R$,
such that they are independent of one another and of themselves at other values of $t$.
The expectation of every noise $W_\cdot$ is zero, and for simplicity we assume that
they are all normally distributed with standard deviation $1$.
Then the log-likelihood function equals:
$$
\log L[\beta,\gamma]=-\int_{T_1}^{T_2}[(\dot S+\beta IS/N)^2 +(\dot I-\beta IS/N+\gamma I)^2+(\dot R-\gamma I)^2]dt.
$$
According to the maximum likelihood principle,
the parameters can be found as the point of maximum of the likelihood function:
$$
\beta,\gamma = \mathrm{argmax}_{\beta,\gamma} \log L[\beta,\gamma],
$$
where the functions $S$, $I$ and $R$ are taken from the historical data.
For our choice of noise distributions, the problem reduces to the method of least squares.
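Because the residuals are linear in $\beta$ and $\gamma$, the likelihood maximum also has a closed-form least-squares solution. A minimal sketch of this idea (the function name and the synthetic check below are illustrative assumptions, not part of the notebook):

```python
import numpy as np

def fit_sir_lstsq(mS, mI, dS, dI, dR, N):
    """Closed-form least-squares estimate of (beta, gamma).

    The residuals W_S, W_I, W_R are linear in (beta, gamma),
    so maximizing the Gaussian likelihood is an ordinary
    least-squares problem min ||A @ [beta, gamma] - b||^2.
    """
    v = mI * mS / N
    zero = np.zeros_like(v)
    A = np.concatenate([
        np.stack([v,    zero], axis=1),   # W_S =  beta*v + dS
        np.stack([-v,   mI],   axis=1),   # W_I = -beta*v + gamma*mI + dI
        np.stack([zero, -mI],  axis=1),   # W_R = -gamma*mI + dR
    ])
    b = np.concatenate([-dS, -dI, -dR])
    (beta, gamma), *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta, gamma
```

On noiseless synthetic SIR increments this recovers the true (beta, gamma) exactly, which is a convenient sanity check for the gradient-based fit used below.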
End of explanation
"""
|
mitdbg/modeldb | demos/webinar-2020-5-6/02-mdb_versioned/01-train/02 Positive Data NLP.ipynb | mit | from __future__ import unicode_literals, print_function
import boto3
import json
import numpy as np
import pandas as pd
import spacy
"""
Explanation: Versioning Example (Part 2/3)
In part 1, we trained and logged a tweet sentiment classifier using ModelDB's versioning system.
Now we'll see how that can come in handy when we need to revisit or even revert changes we make.
This workflow requires verta>=0.14.4 and spaCy>=2.0.0.
Setup
As before, import libraries we'll need...
End of explanation
"""
from verta import Client
client = Client('http://localhost:3000/')
proj = client.set_project('Tweet Classification')
expt = client.set_experiment('SpaCy')
"""
Explanation: ...and instantiate Verta's ModelDB Client.
End of explanation
"""
S3_BUCKET = "verta-starter"
S3_KEY = "positive-english-tweets.csv"
FILENAME = S3_KEY
boto3.client('s3').download_file(S3_BUCKET, S3_KEY, FILENAME)
import utils
data = pd.read_csv(FILENAME).sample(frac=1).reset_index(drop=True)
utils.clean_data(data)
data.head()
"""
Explanation: Prepare Data
This time, things are a little different.
Let's say someone has provided us with a new, experimental dataset that supposedly will improve our model. Unbeknownst to everyone, this dataset actually only contains one of the two classes we're interested in. This is going to hurt our performance, but we don't know it yet.
Before, we trained a model on english-tweets.csv. Now, we're going to train with positive-english-tweets.csv.
End of explanation
"""
from verta.code import Notebook
from verta.configuration import Hyperparameters
from verta.dataset import S3
from verta.environment import Python
code_ver = Notebook() # Notebook & git environment
config_ver = Hyperparameters({'n_iter': 20})
dataset_ver = S3("s3://{}/{}".format(S3_BUCKET, S3_KEY))
env_ver = Python(Python.read_pip_environment()) # pip environment and Python version
repo = client.set_repository('Tweet Classification')
commit = repo.get_commit(branch='master')
commit.update("notebooks/tweet-analysis", code_ver)
commit.update("config/hyperparams", config_ver)
commit.update("data/tweets", dataset_ver)
commit.update("env/python", env_ver)
commit.save("Update tweet dataset")
commit
"""
Explanation: Capture and Version Model Ingredients
As with before, we'll capture and log our model ingredients directly onto our repository's master branch.
End of explanation
"""
nlp = spacy.load('en_core_web_sm')
import training
training.train(nlp, data, n_iter=20)
run = client.set_experiment_run()
run.log_model(nlp)
run.log_commit(
commit,
{
'notebook': "notebooks/tweet-analysis",
'hyperparameters': "config/hyperparams",
'training_data': "data/tweets",
'python_env': "env/python",
},
)
"""
Explanation: You may verify through the Web App that this commit updates the dataset, as well as the Notebook.
Train and Log Model
Again as before, we'll train the model and log it along with the commit to an Experiment Run.
End of explanation
"""
commit
commit.revert()
commit
"""
Explanation: Revert Commit
Looking back over our workflow, we might notice that there's something suspicious about the model's precision, recall, and F-score. This model isn't performing as it should, and we don't want it to be the latest commit in master. Using the Client, we'll revert the commit.
End of explanation
"""
|
Danghor/Algorithms | Python/Chapter-04/Merge-Sort-Iterative.ipynb | gpl-2.0 | def sort(L):
A = L[:] # A is a copy of L
mergeSort(L, A)
"""
Explanation: An Iterative Implementation of Merge Sort
The function $\texttt{sort}(L)$ sorts the list $L$ in place using <em style="color:blue">merge sort</em>.
It takes advantage of the fact that, in Python, lists are stored internally as arrays.
The function sort is a wrapper for the function merge_sort. Its sole purpose is to allocate the auxiliary array A,
which has the same size as the array storing L.
End of explanation
"""
def mergeSort(L, A):
n = 1
while n < len(L):
k = 0
while n * k + n < len(L):
top = min(n * k + 2 * n, len(L))
merge(L, n * k, n * k + n, top, A)
k += 2
n *= 2
"""
Explanation: The function mergeSort is called with 2 arguments.
- The first parameter $\texttt{L}$ is the list that is to be sorted.
- The second parameter $\texttt{A}$ is used as an auxiliary array. This array is needed
as <em style="color:blue">temporary storage</em> and is required to have the same size as the list $\texttt{L}$.
The implementation uses two loops:
* The outer while loop sorts sublists of length n. Before the $\texttt{n}^{\mbox{th}}$ iteration of the outer while
loop, all sublists of the form L[n*k:n*(k+1)] are sorted. After the $\texttt{n}^{\mbox{th}}$ iteration, all
sublists of the form L[2*n*k:2*n*(k+1)] are sorted.
* The inner while loop merges the sublists L[n*k:n*(k+1)] and L[n*(k+1):n*(k+2)] for even values of k.
End of explanation
"""
def merge(L, start, middle, end, A):
A[start:end] = L[start:end]
idx1 = start
idx2 = middle
i = start
while idx1 < middle and idx2 < end:
if A[idx1] <= A[idx2]:
L[i] = A[idx1]
idx1 += 1
else:
L[i] = A[idx2]
idx2 += 1
i += 1
if idx1 < middle:
L[i:end] = A[idx1:middle]
if idx2 < end:
L[i:end] = A[idx2:end]
"""
Explanation: The function merge takes five arguments.
- L is a list,
- start is an integer such that $\texttt{start} \in {0, \cdots, \texttt{len}(L)-1 }$,
- middle is an integer such that $\texttt{middle} \in {0, \cdots, \texttt{len}(L)-1 }$,
- end is an integer such that $\texttt{end} \in {0, \cdots, \texttt{len}(L) }$,
- A is a list of the same length as L.
Furthermore, the indices start, middle and end have to satisfy the following inequations:
$$ 0 \leq \texttt{start} < \texttt{middle} < \texttt{end} \leq \texttt{len}(L) $$
The function assumes that the sublists L[start:middle] and L[middle:end] are already sorted.
The function merges these sublists so that when the call returns the sublist L[start:end]
is sorted. The last argument A is used as auxiliary memory.
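A tiny standalone check of this merge step (the function body is repeated here, unchanged, so the snippet runs on its own):

```python
def merge(L, start, middle, end, A):
    # Combine the sorted runs L[start:middle] and L[middle:end]
    # into one sorted run, using A as scratch space.
    A[start:end] = L[start:end]
    idx1, idx2, i = start, middle, start
    while idx1 < middle and idx2 < end:
        if A[idx1] <= A[idx2]:
            L[i] = A[idx1]
            idx1 += 1
        else:
            L[i] = A[idx2]
            idx2 += 1
        i += 1
    if idx1 < middle:
        L[i:end] = A[idx1:middle]
    if idx2 < end:
        L[i:end] = A[idx2:end]

L = [1, 4, 7, 2, 5, 8]   # two sorted runs: [1, 4, 7] and [2, 5, 8]
A = [0] * len(L)
merge(L, 0, 3, 6, A)
print(L)  # [1, 2, 4, 5, 7, 8]
```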
End of explanation
"""
import random as rnd
from collections import Counter
def demo():
L = [ rnd.randrange(1, 100) for n in range(1, 20) ]
print("L = ", L)
S = L[:]
sort(S)
print("S = ", S)
print(Counter(L))
print(Counter(S))
print(Counter(L) == Counter(S))
demo()
"""
Explanation: Testing
End of explanation
"""
def isOrdered(L):
for i in range(len(L) - 1):
assert L[i] <= L[i+1]
"""
Explanation: The function isOrdered(L) checks that the list L is sorted in ascending order.
End of explanation
"""
def sameElements(L, S):
assert Counter(L) == Counter(S)
"""
Explanation: The function sameElements(L, S) asserts that the lists L and S contain the same elements and, furthermore, that each
element $x$ occurring in L occurs in S the same number of times as it occurs in L.
End of explanation
"""
def testSort(n, k):
for i in range(n):
L = [ rnd.randrange(2*k) for x in range(k) ]
oldL = L[:]
sort(L)
isOrdered(L)
sameElements(oldL, L)
print('.', end='')
print()
print("All tests successful!")
%%time
testSort(100, 20000)
%%timeit
k = 1_000_000
L = [ rnd.randrange(2*k) for x in range(k) ]
sort(L)
"""
Explanation: The function $\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.
End of explanation
"""
%%timeit
k = 1_000_000
L = [ rnd.randrange(2*k) for x in range(k) ]
S = sorted(L)
help(sorted)
"""
Explanation: Python offers a predefined function sorted that can be used to sort a list of numbers.
Let us see how it compares to our implementation.
End of explanation
"""
|
bashtage/statsmodels | examples/notebooks/mediation_survival.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation
"""
Explanation: Mediation analysis with duration data
This notebook demonstrates mediation analysis when the
mediator and outcome are duration variables, modeled
using proportional hazards regression. These examples
are based on simulated data.
End of explanation
"""
np.random.seed(3424)
"""
Explanation: Make the notebook reproducible.
End of explanation
"""
n = 1000
"""
Explanation: Specify a sample size.
End of explanation
"""
exp = np.random.normal(size=n)
"""
Explanation: Generate an exposure variable.
End of explanation
"""
def gen_mediator():
mn = np.exp(exp)
mtime0 = -mn * np.log(np.random.uniform(size=n))
ctime = -2 * mn * np.log(np.random.uniform(size=n))
mstatus = (ctime >= mtime0).astype(int)
mtime = np.where(mtime0 <= ctime, mtime0, ctime)
return mtime0, mtime, mstatus
"""
Explanation: Generate a mediator variable.
End of explanation
"""
def gen_outcome(otype, mtime0):
if otype == "full":
lp = 0.5 * mtime0
elif otype == "no":
lp = exp
else:
lp = exp + mtime0
mn = np.exp(-lp)
ytime0 = -mn * np.log(np.random.uniform(size=n))
ctime = -2 * mn * np.log(np.random.uniform(size=n))
ystatus = (ctime >= ytime0).astype(int)
ytime = np.where(ytime0 <= ctime, ytime0, ctime)
return ytime, ystatus
"""
Explanation: Generate an outcome variable.
End of explanation
"""
def build_df(ytime, ystatus, mtime0, mtime, mstatus):
df = pd.DataFrame(
{
"ytime": ytime,
"ystatus": ystatus,
"mtime": mtime,
"mstatus": mstatus,
"exp": exp,
}
)
return df
"""
Explanation: Build a dataframe containing all the relevant variables.
End of explanation
"""
def run(otype):
mtime0, mtime, mstatus = gen_mediator()
ytime, ystatus = gen_outcome(otype, mtime0)
df = build_df(ytime, ystatus, mtime0, mtime, mstatus)
outcome_model = sm.PHReg.from_formula(
"ytime ~ exp + mtime", status="ystatus", data=df
)
mediator_model = sm.PHReg.from_formula("mtime ~ exp", status="mstatus", data=df)
med = Mediation(
outcome_model,
mediator_model,
"exp",
"mtime",
outcome_predict_kwargs={"pred_only": True},
)
med_result = med.fit(n_rep=20)
print(med_result.summary())
"""
Explanation: Run the full simulation and analysis, under a particular
population structure of mediation.
End of explanation
"""
run("full")
"""
Explanation: Run the example with full mediation
End of explanation
"""
run("partial")
"""
Explanation: Run the example with partial mediation
End of explanation
"""
run("no")
"""
Explanation: Run the example with no mediation
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/noaa-gfdl/cmip6/models/gfdl-cm4/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-cm4', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-CM4
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
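The Cardinality codes used throughout this document (0.1, 1.1, 0.N, 1.N) encode the minimum and maximum number of values a property accepts; for example, 1.N here means one or more values are required. A minimal illustrative checker of that convention (a hypothetical helper for clarity, not part of the ES-DOC API):

```python
def check_cardinality(values, cardinality):
    # Cardinality is "lo.hi": lo is the minimum count, hi is a maximum
    # count or "N" for unbounded. E.g. "0.1" = optional single value,
    # "1.N" = one or more values required.
    lo, hi = cardinality.split(".")
    n = len(values)
    if n < int(lo):
        return False
    if hi != "N" and n > int(hi):
        return False
    return True
```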
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
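The "Valid Choices" lists attached to the ENUM properties in this document all share an escape hatch: a value outside the list can be supplied as "Other: [Please specify]". A hypothetical validator for that convention (for illustration only, not part of the ES-DOC API):

```python
def is_valid_enum(value, choices):
    # Accept a listed choice, or any free-text value that uses the
    # "Other: " escape hatch from the "Valid Choices" comments above.
    return value in choices or value.startswith("Other: ")
```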
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (e.g. horizontal, vertical)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
gdsfactory/gdsfactory | docs/notebooks/common_mistakes.ipynb | mit | import gdsfactory as gf
@gf.cell
def wg(length: float = 3):
return gf.components.straight(length=length)
print(wg(length=5))
print(wg(length=50))
"""
Explanation: Common mistakes
1. Creating cells without cell decorator
The cell decorator names cells deterministically and uniquely, based on the name of the function and its parameters.
It also uses a caching mechanism that improves performance and guards against duplicated names.
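As an illustration of the idea, a deterministic name can be derived from the function name and its parameters. This is only a sketch, not gdsfactory's actual implementation:

```python
def deterministic_name(func_name, **params):
    # Sort the parameters so the same arguments always produce the same
    # name; a cache keyed on this name can then detect duplicates.
    if not params:
        return func_name
    suffix = "_".join(f"{k}{v}" for k, v in sorted(params.items()))
    return f"{func_name}_{suffix}"
```

Calling it twice with the same arguments yields the same name, while different arguments yield different names.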
1.a naming cells manually
Naming cells manually is susceptible to name collisions;
in GDS you can't have two cells with the same name.
For example: this code will raise a duplicated cell name ValueError
```python
import gdsfactory as gf
c1 = gf.Component('wg')
c1 << gf.components.straight(length = 5)
c2 = gf.Component('wg')
c2 << gf.components.straight(length = 50)
c3 = gf.Component('waveguides')
wg1 = c3 << c1
wg2 = c3 << c2
wg2.movey(10)
c3
```
Solution: Use the gf.cell decorator to name your components automatically.
End of explanation
"""
c1 = gf.Component()
c2 = gf.Component()
print(c1.name)
print(c2.name)
"""
Explanation: 1.b Not naming components with a unique and deterministic name
If you don't wrap the function with cell, you still get unique names thanks to a unique identifier (uuid).
However, this name will be different and non-deterministic for each invocation of the script,
and it will be hard for you to know where that cell came from.
End of explanation
"""
c1.write_gds()
"""
Explanation: Notice how gdsfactory raises a Warning when you save these unnamed Components
End of explanation
"""
@gf.cell
def die_bad():
"""c1 is an intermediate Unnamed cell"""
c1 = gf.Component()
c1 << gf.components.straight(length=10)
c2 = gf.components.die_bbox(c1, street_width=10)
return c2
c = die_bad(cache=False)
print(c.references)
c
"""
Explanation: 1.c Intermediate Unnamed cells
While creating a cell, you should not create intermediate cells, because they won't get a name.
End of explanation
"""
@gf.cell
def die_good():
c = gf.Component()
c << gf.components.straight(length=10)
c << gf.components.die_bbox_frame(c.bbox, street_width=10)
return c
c = die_good(cache=False)
print(c.references)
c
"""
Explanation: Solution1 Don't use intermediate cells
End of explanation
"""
@gf.cell
def die_flat():
"""c will be an intermediate unnamed cell"""
c = gf.Component()
c << gf.components.straight(length=10)
c2 = gf.components.die_bbox(c, street_width=10)
c2 = c2.flatten()
return c2
c = die_flat(cache=False)
print(c.references)
c
"""
Explanation: Solution2 You can flatten the cell, but you will lose the memory savings from cell references. Solution1 is more elegant.
End of explanation
"""
|
Kaggle/learntools | notebooks/game_ai/raw/tut_halite.ipynb | apache-2.0 | #$HIDE_INPUT$
from kaggle_environments import make, evaluate
env = make("halite", debug=True)
env.run(["random", "random", "random", "random"])
env.render(mode="ipython", width=800, height=600)
"""
Explanation: Halite is an online multiplayer game created by Two Sigma. In the game, four participants command ships to collect an energy source called halite. The player with the most halite at the end of the game wins.
In this tutorial, as part of the Halite competition, you'll write your own intelligent bots to play the game.
Note that the Halite competition is now closed, so we are no longer accepting submissions. That said, you can still use the competition to write your own bots - you just cannot submit bots to the official leaderboard. To see the current list of open competitions, check out the simulations homepage: https://www.kaggle.com/simulations.
Part 1: Get started
In this section, you'll learn more about how to play the game.
Game rules
In this section, we'll look more closely at the game rules and explore the different icons on the game board.
For context, we'll look at a game played by four random players. You can use the animation below to view the game in detail: every move is captured and can be replayed.
End of explanation
"""
%%writefile submission.py
# Imports helper functions
from kaggle_environments.envs.halite.helpers import *
# Returns best direction to move from one position (fromPos) to another (toPos)
# Example: If I'm at pos 0 and want to get to pos 55, which direction should I choose?
def getDirTo(fromPos, toPos, size):
fromX, fromY = divmod(fromPos[0],size), divmod(fromPos[1],size)
toX, toY = divmod(toPos[0],size), divmod(toPos[1],size)
if fromY < toY: return ShipAction.NORTH
if fromY > toY: return ShipAction.SOUTH
if fromX < toX: return ShipAction.EAST
if fromX > toX: return ShipAction.WEST
# Directions a ship can move
directions = [ShipAction.NORTH, ShipAction.EAST, ShipAction.SOUTH, ShipAction.WEST]
# Will keep track of whether a ship is collecting halite or carrying cargo to a shipyard
ship_states = {}
# Returns the commands we send to our ships and shipyards
def agent(obs, config):
size = config.size
board = Board(obs, config)
me = board.current_player
# If there are no ships, use first shipyard to spawn a ship.
if len(me.ships) == 0 and len(me.shipyards) > 0:
me.shipyards[0].next_action = ShipyardAction.SPAWN
# If there are no shipyards, convert first ship into shipyard.
if len(me.shipyards) == 0 and len(me.ships) > 0:
me.ships[0].next_action = ShipAction.CONVERT
for ship in me.ships:
if ship.next_action == None:
### Part 1: Set the ship's state
if ship.halite < 200: # If cargo is too low, collect halite
ship_states[ship.id] = "COLLECT"
if ship.halite > 500: # If cargo gets very big, deposit halite
ship_states[ship.id] = "DEPOSIT"
### Part 2: Use the ship's state to select an action
if ship_states[ship.id] == "COLLECT":
# If halite at current location running low,
# move to the adjacent square containing the most halite
if ship.cell.halite < 100:
neighbors = [ship.cell.north.halite, ship.cell.east.halite,
ship.cell.south.halite, ship.cell.west.halite]
best = max(range(len(neighbors)), key=neighbors.__getitem__)
ship.next_action = directions[best]
if ship_states[ship.id] == "DEPOSIT":
# Move towards shipyard to deposit cargo
direction = getDirTo(ship.position, me.shipyards[0].position, size)
if direction: ship.next_action = direction
return me.next_actions
"""
Explanation: The game is played in a 21 by 21 gridworld and lasts 400 timesteps. Each player starts the game with 5,000 halite and one ship.
Grid locations with halite are indicated by a light blue icon, where larger icons indicate more available halite.
<center>
<img src="https://i.imgur.com/3NENMos.png" width=65%><br/>
</center>
Players use ships to navigate the world and collect halite. A ship can only collect halite from its current position. When a ship decides to collect halite, it collects 25% of the halite available in its cell. This collected halite is added to the ship's "cargo".
<center>
<img src="https://i.imgur.com/eKN0kP3.png" width=65%><br/>
</center>
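The 25% collection rule compounds: each extra turn on the same cell mines 25% of whatever remains there. A quick sketch of that arithmetic (illustrative numbers, separate from the game engine):

```python
def collect(cell_halite, turns, rate=0.25):
    """Simulate mining one cell for several turns at a fixed collection rate."""
    cargo = 0.0
    for _ in range(turns):
        mined = cell_halite * rate  # 25% of what is currently in the cell
        cargo += mined
        cell_halite -= mined
    return cargo, cell_halite

# Mining a 400-halite cell for 3 turns collects 100 + 75 + 56.25 = 231.25
print(collect(400.0, 3))  # (231.25, 168.75)
```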
Halite in ship cargo is not counted towards final scores. In order for halite to be counted, ships need to deposit their cargo into a shipyard of the same color. A ship can deposit all of its cargo in a single timestep simply by navigating to a cell containing a shipyard.
<center>
<img src="https://i.imgur.com/LAc6fj8.png" width=65%><br/>
</center>
Players start the game with no shipyards. To get a shipyard, a player must convert a ship into a shipyard, which costs 500 halite. Also, shipyards can spawn (or create) new ships, which deducts 500 halite (per ship) from the player.
Two ships cannot successfully inhabit the same cell. This event results in a collision, where:
- the ship with more halite in its cargo is destroyed, and
- the other ship survives and instantly collects the destroyed ship's cargo.
<center>
<img src="https://i.imgur.com/BuIUPmK.png" width=65%><br/>
</center>
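Those two bullet points can be written as a tiny helper. This is only a sketch of the stated rule (the full engine also handles equal-cargo collisions and ship-shipyard collisions):

```python
def resolve_collision(cargo_a, cargo_b):
    """Apply the two-ship collision rule: the ship with MORE cargo is
    destroyed, and the survivor collects the destroyed ship's cargo."""
    if cargo_a == cargo_b:
        return None, 0.0  # tie: treated here as both destroyed (see full rules)
    survivor = "a" if cargo_a < cargo_b else "b"
    return survivor, cargo_a + cargo_b

print(resolve_collision(100.0, 400.0))  # ('a', 500.0)
```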
If you view the full game rules, you'll notice that there are more types of collisions that can occur in the game (for instance, ships can collide with enemy shipyards, which destroys the ship, the ship's cargo, and the enemy shipyard).
In general, Halite is a complex game, and we have not covered all of the details here. But even given these simplified rules, you can imagine that a successful player will have to use a relatively complicated strategy.
Game strategy
As mentioned above, a ship has two options at its disposal for collecting halite. It can:
- collect (or mine) halite from its current position.
- collide with an enemy ship containing relatively more halite in its cargo. In this case, the ship destroys the enemy ship and steals its cargo.
Both are illustrated in the figure below. The "cargo" that is tracked in the player's scoreboard contains the total cargo, summed over all of the player's ships.
<center>
<img src="https://i.imgur.com/2DJX6Vt.png" width=75%><br/>
</center>
This raises some questions that you'll have to answer when commanding ships:
- Will your ships focus primarily on locating large halite reserves and mining them efficiently, while mostly ignoring and evading the other players?
- Or, will you look for opportunities to steal halite from other players?
- Alternatively, can you use a combination of those two strategies? If so, what cues will you look for in the game to decide which option is best? For instance, if all enemy ships are far away and your ships are located on cells containing a lot of halite, it makes sense to focus on mining halite. Conversely, if there are many ships nearby with halite to steal (and not too much local halite to collect), it makes sense to attack the enemy ships.
You'll also have to decide how to control your shipyards, and how your ships interact with shipyards. There are three primary actions in the game involving shipyards. You can:
- convert a ship into a shipyard. This is the only way to create a shipyard.
- use a shipyard to create a ship.
- deposit a ship's cargo into a shipyard.
These are illustrated in the image below.
<center>
<img src="https://i.imgur.com/fL5atut.png" width=75%><br/>
</center>
With more ships and shipyards, you can collect halite at a faster rate. But each additional ship and shipyard costs you halite: how will you decide when it might be beneficial to create more?
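A rough way to reason about that trade-off is a payback estimate. The 500-halite cost comes from the rules above; the deposit rate is an assumed, illustrative number:

```python
SHIP_COST = 500  # halite deducted when a shipyard spawns a ship

def payback_steps(deposit_rate):
    """Steps before a new ship has deposited its own spawn cost."""
    return SHIP_COST / deposit_rate

# A ship netting 10 halite per step pays for itself in 50 steps,
# worthwhile early in a 400-step game, questionable near the end.
print(payback_steps(10))  # 50.0
```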
Part 2: Your first bot
In this section, you'll create your first bot to play the game.
The notebook
The first thing to do is to create a Kaggle notebook where you'll store all of your code.
Begin by navigating to https://www.kaggle.com/notebooks and clicking on "New Notebook".
Next, click on "Create". (Don't change the default settings: so, "Python" should appear under "Select language", and you should have "Notebook" selected under "Select type".)
You now have a notebook where you'll develop your first agent! If you're not sure how to use Kaggle Notebooks, we strongly recommend that you walk through this notebook before proceeding. It teaches you how to run code in the notebook.
Your first agent
It's time to create your first agent! Copy and paste the code in the cell below into your notebook. Then, run the code.
End of explanation
"""
from kaggle_environments import make
env = make("halite", debug=True)
env.run(["submission.py", "random", "random", "random"])
env.render(mode="ipython", width=800, height=600)
"""
Explanation: The line %%writefile submission.py saves the agent to a Python file. Note that all of the code above has to be copied and run in a single cell (please do not split the code into multiple cells).
If the code cell runs successfully, then you'll see a message Writing submission.py (or Overwriting submission.py, if you run it more than once).
Then, copy and run the next code cell in your notebook to play your agent against three random agents. Your agent is in the top left corner of the screen.
End of explanation
"""
swirlingsand/self-driving-car-nanodegree-nd013 | CarND-LaneLines-P1-1/P1.ipynb | mit

#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
import math
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image
"""
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
Credit and code attribution
This is a school project from Udacity. A significant portion of the code is provided as a starting template. I also sought help from the forums and chat channels. The idea (and occasionally the whole function) of get_point_horizontal(), fitline(), using unit tests, and the bulk image writing all stem from there. A big thank you to everyone at Udacity for putting this together and everyone on the forum for answering questions. I believe I gained a reasonable understanding of the pipeline flow and that the overall implementation is my own.
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
End of explanation
"""
# test image for all unit tests
test_image = (mpimg.imread('test_images/solidYellowLeft.jpg'))
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    you should call plt.imshow(gray, cmap='gray')"""
    # Images in this notebook are read with mpimg.imread(), which returns RGB;
    # use cv2.COLOR_BGR2GRAY instead if you read images with cv2.imread()
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
############## UNIT TEST ##############
gray = grayscale(test_image)
#this should be a single channel image shaped like(image.shape[0], image.shape[1])
print("gray image shape: {}".format(gray.shape))
plt.imshow(gray, cmap='gray');
############################
def gaussian_blur(img, kernel_size=5):
"""Applies a Gaussian Noise kernel"""
gray_image = grayscale(img)
return cv2.GaussianBlur(gray_image, (kernel_size, kernel_size), 0)
############## UNIT TEST ##############
gaussian_blur_test = gaussian_blur(test_image)
# this should still be a single channel image
print("gaussian_blur_test shape: {}".format(gaussian_blur_test.shape))
plt.imshow(gaussian_blur_test, cmap='gray');
######################
def canny(img, low_threshold=70, high_threshold=210):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
############## UNIT TEST ##############
test_edges = canny(test_image)
print("canny image shape: {}".format(test_edges.shape))
# this should still be a single channel image.
plt.imshow(test_edges, cmap='gray')
######################
def region_of_interest(edges):
    """
    Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
    # defining a blank mask to start with
    mask = np.zeros_like(edges)
ignore_mask_color = 255
# Define a four sided polygon to mask.
    # edges.shape gives the (rows, columns) of this single-channel image
imshape = edges.shape
vertices = np.array([[(50,imshape[0]),(380, 350), (580, 350), (900,imshape[0])]], dtype=np.int32)
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_edges = cv2.bitwise_and(edges, mask)
return masked_edges
############## UNIT TEST ##############
test_edges = canny(test_image)
masked_edges = region_of_interest(test_edges)
print("masked_edges shape {}".format(masked_edges.shape))
# again a single channel image
plt.imshow(masked_edges, cmap='gray')
######################
"""
After you separate them, calculate the average slope of the segments per lane.
With that slope, decide on two Y coordinates where you want the lane lines to start and end
(for example, use the bottom of the image as a Y_bottom point, and Y_top = 300 or something
like that – where the horizon is). Now that you have your Y coordinates, calculate the
X coordinates per lane line.
"""
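A quick numeric check of that recipe, with illustrative coordinates: a segment from (100, 540) to (300, 350) gives slope m = -0.95 and intercept b = 635, so the fitted line crosses Y_top = 350 at x = 300 and the image bottom y = 540 at x = 100.

```python
x1, y1, x2, y2 = 100, 540, 300, 350  # endpoints of a detected segment
m = (y2 - y1) / (x2 - x1)            # slope: -0.95
b = y1 - m * x1                      # intercept: 635.0
x_top = (350 - b) / m                # x at the chosen Y_top
x_bottom = (540 - b) / m             # x at Y_bottom (image bottom)
print(x_top, x_bottom)  # 300.0 100.0
```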
# Helper variables for comparing and averaging values
prev_left_top_x = prev_right_top_x = prev_right_bottom_x = prev_left_bottom_x = 0
all_left_top_x = all_right_top_x = all_right_bottom_x = all_left_bottom_x = [0]
all_left_top_x = np.array(all_left_top_x)
all_left_bottom_x = np.array(all_left_bottom_x)
all_right_top_x = np.array(all_right_top_x)
all_right_bottom_x = np.array(all_right_bottom_x)
def cruise_control(previous, current, factor):
    """
    Helper function for comparing current and previous values.
    Rejects the current value when it jumps more than `factor` pixels
    away from the previous one.
    Uncomment the print line to watch value differences, it's kind of neat!
    """
    if previous == 0:
        return current  # no history yet, accept the first value
    # print(previous, current, previous - current)
    difference = abs(previous - current)
    if difference <= factor:
        return current
    else:
        return previous
def get_point_horizontal( vx, vy, x1, y1, y_ref ):
"""
Helper function for draw_lines
Calculates 'x' matching: 2 points on a line, its slope, and a given 'y' coordinate.
"""
m = vy / vx
b = y1 - ( m * x1 )
x = ( y_ref - b ) / m
return x
def draw_lines(line_img, lines, color=[255, 0, 0], thickness=6):
"""
average/extrapolate the line segments you detect to map out the full extent of the lane
"""
right_segment_points = []
left_segment_points = []
top_y = 350
bot_y = line_img.shape[0]
    smoothie = 6  # lower number = more discarded frames.
for line in lines:
for x1,y1,x2,y2 in line:
# 1, find slope
slope = float((y2-y1)/(x2-x1))
# print (slope)
max_slope_thresh = .85
min_slope_thresh = .2
            # 2, use slope to split segments into left and right lanes
            # in image coordinates a positive slope corresponds to the right lane
if max_slope_thresh >= slope >= min_slope_thresh:
# print (slope)
# append all points to points array
right_segment_points.append([x1,y1])
right_segment_points.append([x2,y2])
# declare numpy array
# fit a line with those points
# TODO explore other options besides DIST_12
# TODO compare to polyfit implementation
right_segment = np.array(right_segment_points)
[r_vx, r_vy, r_cx, r_cy] = cv2.fitLine(right_segment, cv2.DIST_L12, 0, 0.01, 0.01)
# define 2 x points for right lane line
right_top_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, top_y )
right_bottom_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, bot_y )
elif -max_slope_thresh <= slope <= -min_slope_thresh:
# print (slope)
# append all points to points array
left_segment_points.append([x1,y1])
left_segment_points.append([x2,y2])
# declare numpy array
# fit a line with those points
# TODO add something to test if segment points not blank
left_segment = np.array(left_segment_points)
[r_vx, r_vy, r_cx, r_cy] = cv2.fitLine(left_segment, cv2.DIST_L12, 0, 0.01, 0.01)
# define 2 x points for left lane line
left_top_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, top_y )
left_bottom_x = get_point_horizontal( r_vx, r_vy, r_cx, r_cy, bot_y )
#TODO split into lists to avoid so much repeat
#TODO consider using Bayes ie Given frame thinks it's X, sensor .9 accurate, and past frame is Y
# what is chance in lane? and layer that logic on top of pixel difference
# (ie z chance or greater keep frame else reject frame)
"""
    These global blocks accomplish two things:
a) Averaging and weighting point values
b) Discarding frames that are too far out of "normal" as defined by smoothie variable
Smoothie compares absolute value difference between current and previous frame, in pixels,
and if the current frame has a greater difference than smoothie variable, it uses the previous frame.
"""
    global prev_left_top_x, all_left_top_x
    all_left_top_x = np.append(all_left_top_x, left_top_x)  # np.append returns a new array, so keep the result
    history = all_left_top_x[all_left_top_x != 0]  # drop the [0] seed value
    left_top_x_corrected = (np.mean(history) + cruise_control(prev_left_top_x, left_top_x, smoothie)) / 2
    prev_left_top_x = left_top_x
    global prev_left_bottom_x, all_left_bottom_x
    all_left_bottom_x = np.append(all_left_bottom_x, left_bottom_x)
    history = all_left_bottom_x[all_left_bottom_x != 0]
    left_bottom_x_corrected = (np.mean(history) + cruise_control(prev_left_bottom_x, left_bottom_x, smoothie)) / 2
    prev_left_bottom_x = left_bottom_x
    global prev_right_top_x, all_right_top_x
    all_right_top_x = np.append(all_right_top_x, right_top_x)
    history = all_right_top_x[all_right_top_x != 0]
    right_top_x_corrected = (np.mean(history) + cruise_control(prev_right_top_x, right_top_x, smoothie)) / 2
    prev_right_top_x = right_top_x
    global prev_right_bottom_x, all_right_bottom_x
    all_right_bottom_x = np.append(all_right_bottom_x, right_bottom_x)
    history = all_right_bottom_x[all_right_bottom_x != 0]
    right_bottom_x_corrected = (np.mean(history) + cruise_control(prev_right_bottom_x, right_bottom_x, smoothie)) / 2
    prev_right_bottom_x = right_bottom_x
# Print two lines based on above
cv2.line(line_img, (int(left_bottom_x_corrected), bot_y), (int(left_top_x_corrected), top_y), color, thickness)
cv2.line(line_img, (int(right_bottom_x_corrected), bot_y), (int(right_top_x_corrected), top_y), color, thickness)
def hough_lines(img, rho=1, theta=np.pi/180, threshold=20, min_line_len=40, max_line_gap=45):
"""
Run Hough on edge detected image
Output "lines" is an array containing endpoints of detected line segments
"""
edges = canny(img)
masked_edges = region_of_interest(edges)
lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
############## UNIT TEST ##############
test_hough = hough_lines(test_image)
print("masked_edges shape {}".format(test_hough.shape))
plt.imshow(test_hough)
######################
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=1, β=1, λ=0.):
"""
    `img` is the output of hough_lines(): a blank image (all black)
    with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
############## UNIT TEST ##############
test_hough = hough_lines(test_image)
test_weighted = weighted_img(test_hough, test_image)
print("masked_edges shape {}".format(test_weighted.shape))
plt.imshow(test_weighted)
######################
"""
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
import os
#os.listdir("test_images/")
"""
Explanation: Test on Images
Now you should build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
"""
"""
Collects test images, creates folder to put them in, runs pipeline, and saves images.
"""
import os
import shutil
test_images = os.listdir("test_images/")
# Remove any previous results, then create a fresh folder for processing
shutil.rmtree("test_images/processed_images/", ignore_errors=True)
os.mkdir("test_images/processed_images/")
for img in test_images:
if '.jpg' in img:
image = mpimg.imread("test_images/%(filename)s" % {"filename": img})
hough = hough_lines(image)
processed_image = weighted_img(hough, image)
        # cv2.imwrite expects BGR channel order, so swap from matplotlib's RGB
        color_fix = cv2.cvtColor(processed_image, cv2.COLOR_RGB2BGR)
cv2.imwrite("test_images/processed_images/%(filename)s_processed.jpg" %
{"filename": img.replace(".jpg","")}, color_fix)
"""
Explanation: Run your solution on all test_images and make copies into the test_images directory.
End of explanation
"""
import imageio
imageio.plugins.ffmpeg.download()
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
hough = hough_lines(image)
result = weighted_img(hough, image)
return result
"""
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
End of explanation
"""
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
def process_image2(image):
    # Same pipeline as process_image; hough_lines() keeps its default parameters here
    hough = hough_lines(image)
result = weighted_img(hough, image)
return result
challenge_output = 'extra1.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image2)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Reflections
How could you imagine making your algorithm better / more robust?
- Better understanding of the parameters of existing functions
- Approaching more from a probabilistic standpoint (i.e. Bayes, or some kind of machine learning)
- More pre-processing
- Less reliance on "after effects" such as the frame dropping and averaging
- Hardware approaches (multiple cameras, sensor fusion, etc.)
Where will your current algorithm be likely to fail?
Some thoughts:
- At intersections (no lane markings)
- At sharp turns (constrained masking area)
- In poor weather (general sensitivity)
- During lane changes (change in angle of the car)
- In emergencies (sudden changes in the environment)
- Poorly marked lanes
- Lanes marked only by reflective markers at night
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
Nikea/scikit-xray-examples | demos/time_correlation/two-time-with-partial-data.ipynb | bsd-3-clause

import skbeam.core.correlation as corr
from skbeam.core.correlation import two_time_corr, two_time_state_to_results
import skbeam.core.roi as roi
import skbeam.core.utils as utils
from xray_vision.mpl_plotting.roi import show_label_array_on_image
import numpy as np
import time as ttime
import matplotlib.pyplot as plt
%matplotlib notebook
# multi-tau scheme info
real_data_levels = 7
real_data_bufs = 8
real_data = np.load("100_500_NIPA_GEL.npy")
avg_img = np.average(real_data, axis=0)
# generate some circular ROIs
# define the ROIs
roi_start = 65 # in pixels
roi_width = 9 # in pixels
roi_spacing = (5.0, 4.0)
x_center = 7. # in pixels
y_center = (129.) # in pixels
num_rings = 3
# get the edges of the rings
edges = roi.ring_edges(roi_start, width=roi_width,
spacing=roi_spacing, num_rings=num_rings)
# get the label array from the ring shaped 3 region of interests(ROI's)
labeled_roi_array = roi.rings(
edges, (y_center, x_center), real_data.shape[1:])
fig, ax = plt.subplots()
ax.imshow(np.sum(real_data, axis=0) / len(real_data))
show_label_array_on_image(ax, avg_img, labeled_roi_array)
plt.title("ROI's on the real data")
plt.show()
"""
Explanation: Two time correlation example notebook
End of explanation
"""
from mpl_toolkits.axes_grid1 import ImageGrid
def make_image_grid(im_shape):
"""Create the image grid with colorbars"""
def _add_inner_title(ax, title, loc, size=None, **kwargs):
"""Add a title on top of the image"""
from matplotlib.offsetbox import AnchoredText
from matplotlib.patheffects import withStroke
if size is None:
size = dict(size=plt.rcParams['legend.fontsize'])
at = AnchoredText(title, loc=loc, prop=size,
pad=0., borderpad=0.5,
frameon=False, **kwargs)
ax.add_artist(at)
at.txt._text.set_path_effects([withStroke(foreground="w", linewidth=3)])
return at
fig = plt.figure(None, (10, 8))
grid = ImageGrid(fig, 111, # similar to subplot(111)
nrows_ncols=(1, 2),
direction="row",
axes_pad=0.05,
add_all=True,
label_mode="1",
share_all=True,
cbar_location="top",
cbar_mode="each",
cbar_size="7%",
cbar_pad="1%",
)
ims = []
for ax, im_title in zip(grid, ["Ring 1", "Ring 2"]):
t = _add_inner_title(ax, im_title, loc=2)
t.patch.set_alpha(0.5)
ax.set_xlabel('t1')
ax.set_ylabel('t2')
im = ax.imshow(np.zeros(im_shape),
cmap='viridis', origin="lower")
ims.append(im)
ax.cax.colorbar(im)
return grid, ims
def update_plots(grid, ims, g2s):
"""Update the plot as the correlation is running"""
for ax, im, g2 in zip(grid, ims, g2s):
im.set_data(g2)
ax.cax.colorbar(im)
lo, hi = np.min(g2), np.max(g2)
# low bound should be at least one
lo = lo if lo > 1 else 1
# high bound should be at least the low bound
hi = lo if hi < lo else hi
im.set_clim(lo, hi)
ax.figure.canvas.draw()
ttime.sleep(0.01)
"""
Explanation: Brute force correlation
Set num_levs to 1 and num_bufs to the number of images you want to correlate.
End of explanation
"""
num_levs = 1
num_bufs = real_data.shape[0]
full_gen = corr.lazy_two_time(labeled_roi_array, real_data,
real_data.shape[0], num_bufs, num_levs)
grid, ims = make_image_grid(real_data.shape[1:])
for idx, intermediate_state1 in enumerate(full_gen):
if idx % 25 == 0:
print('processing %s' % idx)
result1 = corr.two_time_state_to_results(intermediate_state1)
update_plots(grid, ims, result1.g2)
# provide a final update
result1 = corr.two_time_state_to_results(intermediate_state1)
update_plots(grid, ims, result1.g2)
"""
Explanation: Using the NIPA gel data
End of explanation
"""
num_bufs = 8
num_levs = 6
multi_gen = corr.lazy_two_time(labeled_roi_array, real_data, real_data.shape[0],
num_bufs, num_levs)
grid, ims = make_image_grid(real_data.shape[1:])
for idx, intermediate_state in enumerate(multi_gen):
if idx % 25 == 0:
print('processing %s' % idx)
m_result = corr.two_time_state_to_results(intermediate_state)
update_plots(grid, ims, m_result.g2)
ttime.sleep(0.01)
#provide a final update
result = corr.two_time_state_to_results(intermediate_state)
update_plots(grid, ims, result.g2)
import skbeam
print(skbeam.__version__)
"""
Explanation: Multi tau two time correlation
For multi-tau two-time correlation you can give the number of levels any value
you want, but the number of buffers has to be an even number.
(More information: skbeam/core/correlation.py, https://github.com/scikit-beam/scikitbeam/blob/master/skbeam/core/correlation.py)
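The resulting correlation times sit on a quasi-logarithmic grid: one full set of unit-spaced lags for the first level, then num_bufs/2 lags per additional level with the spacing doubled each time. A sketch of that lag structure (this follows one common multi-tau convention; skbeam's exact lag values may differ slightly):

```python
def multi_tau_lag_times(num_levs, num_bufs):
    """Lag times (in frames) for a multi-tau correlator."""
    lags = list(range(num_bufs))  # level 1: unit spacing, 0 .. num_bufs-1
    for lev in range(2, num_levs + 1):
        step = 2 ** (lev - 1)     # spacing doubles at every level
        lags += [k * step for k in range(num_bufs // 2 + 1, num_bufs + 1)]
    return lags

lags = multi_tau_lag_times(6, 8)
print(len(lags), lags[:12])  # 28 [0, 1, 2, 3, 4, 5, 6, 7, 10, 12, 14, 16]
```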
End of explanation
"""
jonnydyer/pypropep | ipython_doc/BasicRocketPerformance.ipynb | gpl-3.0

p = ppp.ShiftingPerformance()
o2 = ppp.PROPELLANTS['OXYGEN (GAS)']
ch4 = ppp.PROPELLANTS['METHANE']
p.add_propellants([(ch4, 1.0), (o2, 1.0)])
p.set_state(P=10, Pe=0.01)
print p
for k,v in p.composition.items():
print "{} : ".format(k)
pprint.pprint(v[0:8], indent=4)
OF = np.linspace(1, 5)
m_CH4 = 1.0
cstar_fr = []
cstar_sh = []
Isp_fr = []
Isp_sh = []
for i in xrange(len(OF)):
p = ppp.FrozenPerformance()
psh = ppp.ShiftingPerformance()
m_O2 = OF[i]
p.add_propellants_by_mass([(ch4, m_CH4), (o2, m_O2)])
psh.add_propellants_by_mass([(ch4, m_CH4), (o2, m_O2)])
p.set_state(P=1000./14.7, Pe=1)
psh.set_state(P=1000./14.7, Pe=1)
cstar_fr.append(p.performance.cstar)
Isp_fr.append(p.performance.Isp/9.8)
cstar_sh.append(psh.performance.cstar)
Isp_sh.append(psh.performance.Isp/9.8)
ax = plt.subplot(211)
ax.plot(OF, cstar_fr, label='Frozen')
ax.plot(OF, cstar_sh, label='Shifting')
ax.set_ylabel('C*')
ax1 = plt.subplot(212, sharex=ax)
ax1.plot(OF, Isp_fr, label='Frozen')
ax1.plot(OF, Isp_sh, label='Shifting')
ax1.set_ylabel('Isp (s)')
plt.xlabel('O/F')
plt.legend(loc='best')
"""
Explanation: Rocket Performance
Finally we are at the fun stuff. The next section shows a basic rocket performance example.
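Two of the quantities reported below are linked by a standard relation: specific impulse is the characteristic velocity times the thrust coefficient divided by g0, Isp = c*·CF/g0. A sanity check with round, made-up numbers (not values computed by pypropep):

```python
g0 = 9.8         # m/s^2, the same constant used elsewhere in this notebook
cstar = 1800.0   # characteristic velocity in m/s (illustrative)
Cf = 1.6         # thrust coefficient (illustrative)
Isp = cstar * Cf / g0
print(round(Isp, 1))  # 293.9 seconds
```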
End of explanation
"""
kno3 = ppp.PROPELLANTS['POTASSIUM NITRATE']
sugar = ppp.PROPELLANTS['SUCROSE (TABLE SUGAR)']
p = ppp.ShiftingPerformance()
p.add_propellants_by_mass([(kno3, 0.65), (sugar, 0.35)])
p.set_state(P=30, Pe=1.)
for station in ['chamber', 'throat', 'exit']:
print "{} : ".format(station)
pprint.pprint(p.composition[station][0:8], indent=4)
print "Condensed: "
pprint.pprint(p.composition_condensed[station], indent=4)
print '\n'
"""
Explanation: Equilibrium with condensed species
End of explanation
"""
ap = ppp.PROPELLANTS['AMMONIUM PERCHLORATE (AP)']
pban = ppp.PROPELLANTS['POLYBUTADIENE/ACRYLONITRILE CO POLYMER']
al = ppp.PROPELLANTS['ALUMINUM (PURE CRYSTALINE)']
p = ppp.ShiftingPerformance()
p.add_propellants_by_mass([(ap, 0.70), (pban, 0.12), (al, 0.16)])
p.set_state(P=45, Ae_At=7.7)
for station in ['chamber', 'throat', 'exit']:
print "{} : ".format(station)
pprint.pprint(p.composition[station][0:8], indent=4)
print "Condensed: "
pprint.pprint(p.composition_condensed[station], indent=4)
print '\n'
print p.performance.Ivac/9.8
"""
Explanation: Smoky white space shuttle SRB exhaust
End of explanation
"""
p = ppp.ShiftingPerformance()
lh2 = ppp.PROPELLANTS['HYDROGEN (CRYOGENIC)']
lox = ppp.PROPELLANTS['OXYGEN (LIQUID)']
OF = 3
p.add_propellants_by_mass([(lh2, 1.0), (lox, OF)])
p.set_state(P=200, Pe=0.01)
print "Chamber Temperature: %.3f K, Exit temperature: %.3f K" % (p.properties[0].T, p.properties[2].T)
print "Gaseous exit products:"
pprint.pprint(p.composition['exit'][0:8])
print "Condensed exit products:"
pprint.pprint(p.composition_condensed['exit'])
"""
Explanation: And one final condensed species case
Remember the awesome video (https://www.youtube.com/watch?v=aJnrFKUz1Uc) of the RL-10 with liquid ice forming in the exhaust? Let's see if we can show that this is possible thermodynamically.
Caveat - the ice formation in RL-10 is likely due to highly non-ideal effects like non-1D flow in the nozzle, non-equilibrium conditions etc. But we can still show that it's possible even in idealized equilibrium conditions.
End of explanation
"""
pysg/pyther | practica_de_flash_isotermico.ipynb | mit

def Ki_wilson(self):
    """Wilson equation to estimate the distribution coefficients Ki(T, P)."""
    exponent = 5.373 * (1 + self.w) * (1 - self.Tc / self.T)
    lnKi = np.log(self.Pc / self.P) + exponent
    self.Ki = np.exp(lnKi)
    return self.Ki
"""
Explanation: Isothermal flash practice
Jose Euliser Mosquera
Andrés Salazar
1. Isothermal flash calculation (T, P)
This is an implementation of the two-phase isothermal flash calculation using the Peng-Robinson (PR) equation of state [2] together with the van der Waals mixing rules [2].
The two-phase isothermal flash is a basic calculation in any introduction to separation processes because it is the simplest separation scheme: a fluid stream enters a "tank" heated by a heat flow, and one outlet stream leaves for each phase present in the system. In the two-phase case that means one liquid stream and one vapor stream, as shown in figure 1.
Figure 1. Scheme of the isothermal flash calculation
1.1 Liquid-vapor flash model
The two-phase isothermal flash model consists of the overall and per-component material balances over the separator tank shown in figure (1), together with the liquid-vapor phase-equilibrium condition.
Distribution coefficient $K_i$
$$ K_i = \frac {y_i} {x_i} $$
Wilson approximation for the distribution coefficient $K_i$
$$ \ln K_i = \ln \left(\frac {Pc_i} {P}\right ) + 5.373(1 + w_i)\left(1 - \frac {Tc_i} {T}\right) $$
Rachford-Rice function $g(\beta)$
$$ g(\beta) = \sum \limits_{i=1}^{C} (y_i - x_i) $$
$$ g(\beta) = \sum \limits_{i=1}^{C} \frac {z_i (K_i - 1)} {1 - \beta + \beta K_i} $$
Derivative of the Rachford-Rice function $g(\beta)$
$$ \frac {dg} {d \beta} = -\sum \limits_{i=1}^{C} z_i \frac {(K_i - 1)^2} {(1 - \beta + \beta K_i)^2} < 0 $$
Limiting values of the Rachford-Rice function $g(\beta)$
$$ g(0) = \sum \limits_{i=1}^{C} z_i K_i - 1 > 0 $$
$$ g(1) = 1 - \sum \limits_{i=1}^{C} \frac {z_i} {K_i} < 0 $$
Equations for the molar fractions of each phase
$$ y_i = \frac{K_i z_i} {1 - \beta + \beta K_i} $$
$$ x_i = \frac{z_i} {1 - \beta + \beta K_i} $$
Relations that bound the minimum and maximum values of $\beta$
$$ 1 - \beta + \beta K_i \geq K_i z_i $$
$$ \beta \geq \frac {K_i z_i - 1} {K_i - 1} $$
$$ 1 - \beta + \beta K_i \geq z_i $$
$$ \beta \leq \frac {1 - z_i} {1 - K_i} $$
Extreme values of the vapor fraction of the system $\beta$
$$ \beta_{min} = 0 $$
$$ \beta_{max} = 1 $$
2. Algorithm
Specify the pressure $P$, temperature $T$, and number of moles $N$ of each component in the system
Compute the distribution coefficients $K_i^{wilson}$ from the Wilson correlation
Compute $\beta_{min}$
Compute $\beta_{max}$
Compute the initial $\beta$ as the average of $\beta_{min}$ and $\beta_{max}$
Solve the Rachford-Rice equation $g(\beta)$ for $\beta$ with a tolerance of $1 \times 10^{-6}$
Compute the liquid molar fractions $x_i$ and the vapor molar fractions $y_i$
Compute the fugacity coefficients $\hat{\phi_i}$ for the liquid compositions $x_i$ and the vapor compositions $y_i$
Update the distribution coefficients $K_i$ from the fugacity coefficients $\hat{\phi_i}$
Solve the Rachford-Rice equation $g(\beta)$ again for $\beta$ with a tolerance of $1 \times 10^{-6}$
Check convergence with a tolerance of $1 \times 10^{-6}$ on $\Delta K_i = \left | K_{i}^{j+1} - K_{i}^{j} \right| $; once it is met, the procedure has converged.
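A minimal, self-contained sketch of the core of the algorithm above — solving the Rachford-Rice equation by Newton's method and recovering the phase compositions. The binary feed composition `zi` and the distribution coefficients `Ki` below are made-up illustration values, not data from this practical:

```python
def rachford_rice(beta, zi, Ki):
    # residual g(beta) and its derivative dg/dbeta
    g = sum(z * (K - 1) / (1 - beta + beta * K) for z, K in zip(zi, Ki))
    dg = -sum(z * (K - 1) ** 2 / (1 - beta + beta * K) ** 2 for z, K in zip(zi, Ki))
    return g, dg

# hypothetical binary feed and Wilson-style distribution coefficients
zi = [0.4, 0.6]
Ki = [3.0, 0.5]

beta = 0.4                        # initial guess inside (0, 1)
for _ in range(50):               # Newton iterations
    g, dg = rachford_rice(beta, zi, Ki)
    if abs(g) <= 1e-6:
        break
    beta -= g / dg

xi = [z / (1 - beta + beta * K) for z, K in zip(zi, Ki)]  # liquid fractions
yi = [K * x for K, x in zip(Ki, xi)]                      # vapor fractions
```

For this feed the root is $\beta = 0.5$, and both $x_i$ and $y_i$ sum to one, as they must.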
2.1 Implementation
The implementation of the isothermal flash calculation has three main parts:
Computing the distribution coefficients with the Wilson equation
Computing the minimum and maximum values of the fraction $\beta$
Computing the step used to solve for the fraction $\beta$
Wilson equation
End of explanation
"""
def beta_initial(self):
self.Ki = self.Ki_wilson()
self.Bmin = (self.Ki * self.zi - 1) / (self.Ki - 1)
self.Bmax = (1 - self.zi) / (1 - self.Ki)
self.Binit = (np.max(self.Bmin) + np.min(self.Bmax)) / 2
return self.Binit
"""
Explanation: Minimum and maximum values for the fraction $\beta$
End of explanation
"""
def beta_newton(self):
    iteration, step, tolerance = 0, 1.0, 1e-5
    while True:
        delta = step * self.rachford_rice()[0] / self.rachford_rice()[1]
        # halve the Newton step until the update stays inside the physical bounds
        while self.Binit - delta < np.max(self.Bmin) or self.Binit - delta > np.min(self.Bmax):
            step = step / 2
            delta = step * self.rachford_rice()[0] / self.rachford_rice()[1]
        self.Binit = self.Binit - delta
        iteration += 1
        if abs(self.rachford_rice()[0]) <= tolerance or (iteration >= 50):
            break
    return self.Binit
"""
Explanation: Computing the step used to solve for the fraction $\beta$
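Because $g(\beta)$ is strictly decreasing, a bracketed bisection between the bounds is a robust alternative to the damped Newton step above: it can never leave the physical interval. A self-contained sketch (the feed values are illustrative assumptions, not data from this practical):

```python
def g(beta, zi, Ki):
    # Rachford-Rice residual
    return sum(z * (K - 1) / (1 - beta + beta * K) for z, K in zip(zi, Ki))

zi = [0.4, 0.6]           # hypothetical overall composition
Ki = [3.0, 0.5]           # hypothetical distribution coefficients

lo, hi = 1e-10, 1 - 1e-10
for _ in range(60):         # 60 halvings shrink the interval below 1e-18
    mid = 0.5 * (lo + hi)
    if g(mid, zi, Ki) > 0:  # g is decreasing, so the root lies to the right
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
```

A library routine such as `scipy.optimize.brentq` would do the same with faster convergence.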
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/converting_a_dictionary_into_a_matrix.ipynb | mit | # Load library
from sklearn.feature_extraction import DictVectorizer
"""
Explanation: Title: Converting A Dictionary Into A Matrix
Slug: converting_a_dictionary_into_a_matrix
Summary: How to convert a dictionary into a feature matrix for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Our dictionary of data
data_dict = [{'Red': 2, 'Blue': 4},
{'Red': 4, 'Blue': 3},
{'Red': 1, 'Yellow': 2},
{'Red': 2, 'Yellow': 2}]
"""
Explanation: Create Dictionary
End of explanation
"""
# Create DictVectorizer object
dictvectorizer = DictVectorizer(sparse=False)
# Convert dictionary into feature matrix
features = dictvectorizer.fit_transform(data_dict)
# View feature matrix
features
"""
Explanation: Feature Matrix From Dictionary
End of explanation
"""
# View feature matrix column names
# (renamed to get_feature_names_out in scikit-learn >= 1.0)
dictvectorizer.get_feature_names()
"""
Explanation: View column names
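A related detail worth knowing: a fitted DictVectorizer keeps this column layout, so new dictionaries can be converted with transform(). Keys never seen during fitting are silently dropped, and missing keys become zeros. A small sketch reusing the toy data from above:

```python
from sklearn.feature_extraction import DictVectorizer

data_dict = [{'Red': 2, 'Blue': 4},
             {'Red': 4, 'Blue': 3},
             {'Red': 1, 'Yellow': 2},
             {'Red': 2, 'Yellow': 2}]

dictvectorizer = DictVectorizer(sparse=False)
dictvectorizer.fit(data_dict)

# 'Green' was never seen during fit, so it is dropped;
# 'Blue' and 'Yellow' are absent from this row, so they become 0
dictvectorizer.transform([{'Red': 1, 'Green': 5}])  # -> [[0., 1., 0.]]
```

Columns are sorted alphabetically (Blue, Red, Yellow), which is why Red lands in the middle.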
End of explanation
"""
|
mrcslws/nupic.research | projects/archive/dynamic_sparse/notebooks/mcaporale/2019-10-11-ExperimentAnalysis-SmallDense.ipynb | agpl-3.0 | from IPython.display import Markdown, display
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
import matplotlib.pyplot as plt
from matplotlib import rcParams
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(style="whitegrid")
sns.set_palette("colorblind")
"""
Explanation: Experiment:
Evaluate pruning by magnitude weighted by coactivations (more thorough evaluation), compare it to baseline (SET).
Motivation.
Check if results are consistently above baseline.
Conclusion
No significant difference between both models
No support for early stopping
End of explanation
"""
base = os.path.join('gsc-smalldense-2019-10-11-exp1')
exps = [
os.path.join(base, exp) for exp in [
'gsc-smalldense'
]
]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df
df.columns
df.shape
df.iloc[1]
df.groupby('equivalent_on_perc')['equivalent_on_perc'].count()
"""
Explanation: Load and check data
End of explanation
"""
# Did any trials fail?
df[df["epochs"]<100]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
"""
Explanation: Analysis
Experiment Details
End of explanation
"""
df_agg = agg(['equivalent_on_perc'])
df_agg
equivalent_on_percs = df_agg.index.values
val_means = df_agg['val_acc_max']['mean']
val_means = np.array(val_means)
val_stds = df_agg['val_acc_max']['std']
val_stds = np.array(val_stds)
print(equivalent_on_percs)
print(val_means)
print(val_stds)
# translate model names
rcParams['figure.figsize'] = 16, 8
# d = {
# 'DSNNWeightedMag': 'DSNN',
# 'DSNNMixedHeb': 'SET',
# 'SparseModel': 'Static',
# }
# df_plot = df.copy()
# df_plot['model'] = df_plot['model'].apply(lambda x, i: model_name(x, i))
plt.errorbar(
x=equivalent_on_percs,
y=val_means,
yerr=val_stds,
color='k',
marker='*',
lw=0,
elinewidth=2,
capsize=2,
markersize=10,
label="Small-Dense Equivalents"
)
# sns.scatterplot(data=df_plot, x='on_perc', y='val_acc_max', hue='model')
# sns.lineplot(data=df, x='equivalent_on_perc', y='val_acc_max', hue='equivalent_on_perc')
"""
Explanation: Does improved weight pruning outperform regular SET?
End of explanation
"""
|
tu-rbo/concarne | example/concarne_multiview_demo.ipynb | mit | from __future__ import print_function
import concarne
import concarne.patterns
import concarne.training
import lasagne
import theano.tensor as T
%pylab inline
try:
import sklearn.linear_model as sklm
except:
print (
"""You don't have scikit-learn installed; install it to compare
learning with side information to simple supervised learning""")
sklm = None
import numpy as np
"""
Explanation: This example illustrates how simple it is to train a classifier using
side information.
It illustrates the exemplary use of the <b>multi-view</b> pattern; for more info
on how to use other patterns, check out the other examples.
End of explanation
"""
num_samples = 300
input_dim = 50
side_dim = 50
# generate some random data with 300 samples
# and 50 dimensions
X = np.random.randn(num_samples, input_dim)
# select the third dimension as the relevant
# for our classification task
S = X[:, 2:3]
# The labels are simply the sign of S
# (note the downcast to int32 - this is required
# by theano)
y = np.asarray(S > 0, dtype='int32').reshape( (-1,) )
# This means we have 2 classes - we will use
# that later for building the pattern
num_classes = 2
plt.plot(S)
plt.plot(y)
"""
Explanation: Data generation
End of explanation
"""
Z = np.random.randn(num_samples, side_dim)
# set second dimension of Z to correspond to S
Z[:, 1] = S[:,0]
"""
Explanation: Now let's define some side information: we simulate an additional sensor which contains S, but embedded into a different space.
End of explanation
"""
# random rotation 1
R = np.linalg.qr(np.random.randn(input_dim, input_dim))[0]
X = X.dot(R)
# random rotation 2
Q = np.linalg.qr(np.random.randn(side_dim, side_dim))[0]
Z = Z.dot(Q)
"""
Explanation: Let's make it harder to find S in X and Z by applying a random rotation to each data set
End of explanation
"""
split = num_samples // 3  # integer index for the 3-way split
X_train = X[:split]
X_val = X[split:2*split]
X_test = X[2*split:]
y_train = y[:split]
y_val = y[split:2*split]
y_test = y[2*split:]
Z_train = Z[:split]
Z_val = Z[split:2*split]
Z_test = Z[2*split:]
"""
Explanation: Finally, split our data into training, test, and validation data
End of explanation
"""
if sklm is not None:
# let's try different regularizations
for c in [1e-5, 1e-1, 1, 10, 100, 1e5]:
lr = sklm.LogisticRegression(C=c)
lr.fit(X_train, y_train)
print ("Logistic Regression (C=%f)\n accuracy = %.3f %%" % (c, 100*lr.score(X_test, y_test)))
"""
Explanation: Purely supervised learning
Let's check how hard the problem is for supervised learning alone.
End of explanation
"""
# Let's first define the theano variables which will represent our data
input_var = T.matrix('inputs') # for X
target_var = T.ivector('targets') # for Y
side_var = T.matrix('sideinfo') # for Z
# Size of the intermediate representation phi(X);
# since S is 1-dim, phi(X) can also map to a
# 1-dim vector
representation_dim = 1
"""
Explanation: Learning with side information: building the pattern
End of explanation
"""
phi = [ (lasagne.layers.DenseLayer,
{ 'num_units': concarne.patterns.Pattern.PHI_OUTPUT_SHAPE,
'nonlinearity':None, 'b':None })]
psi = [(lasagne.layers.DenseLayer,
{ 'num_units': concarne.patterns.Pattern.PSI_OUTPUT_SHAPE,
'nonlinearity':lasagne.nonlinearities.softmax, 'b':None })]
beta = [(lasagne.layers.DenseLayer,
{ 'num_units': concarne.patterns.Pattern.BETA_OUTPUT_SHAPE,
'nonlinearity':None, 'b':None })]
"""
Explanation: Now define the functions - we choose linear functions.
concarne internally relies on lasagne which encodes functions as (sets of) layers. Additionally, concarne supports nolearn style initialization of lasagne layers as follows:
End of explanation
"""
pattern = concarne.patterns.MultiViewPattern(
phi=phi, psi=psi, beta=beta,
# the following parameters are required to
# build the functions and the losses
input_var=input_var,
target_var=target_var,
side_var=side_var,
input_shape=input_dim,
target_shape=num_classes,
side_shape=side_dim,
representation_shape=representation_dim,
# we have to define two loss functions:
# 1) the target loss deals with
# optimizing psi and phi wrt. X & Y
target_loss=lasagne.objectives.categorical_crossentropy,
# 2) the side loss deals with
# optimizing beta and phi wrt. X & Z,
# for multi-view it is beta(Z)~phi(X)
side_loss=lasagne.objectives.squared_error)
"""
Explanation: For the layer variable that denotes the output of the network, use the markers PHI_OUTPUT_SHAPE,
PSI_OUTPUT_SHAPE and BETA_OUTPUT_SHAPE, so that the pattern can automatically infer the correct shape.
End of explanation
"""
trainer = concarne.training.PatternTrainer(
pattern,
procedure='simultaneous',
num_epochs=500,
batch_size=10,
update=lasagne.updates.nesterov_momentum,
update_learning_rate=0.01,
update_momentum=0.9,
)
"""
Explanation: Training
To train a pattern, you can use the PatternTrainer which trains the pattern via stochastic gradient descent.
It also supports different procedures to train the pattern.
End of explanation
"""
trainer.fit_XYZ(X_train, y_train, [Z_train],
X_val=X_val, y_val=y_val,
side_val=[X_val, Z_val],
verbose=True)
pass
"""
Explanation: <b>Let's train!</b>
End of explanation
"""
trainer.score(X_test, y_test, verbose=True)
pass
"""
Explanation: Some statistics: Test score.
End of explanation
"""
trainer.score_side([X_test, Z_test], verbose=True)
pass
"""
Explanation: We can also compute a test score for the side loss:
End of explanation
"""
trainer.predict(X_test)
"""
Explanation: You can then also query the prediction output, similar to the scikit-learn API:
End of explanation
"""
|
andreyf/machine-learning-examples | decision_trees_knn/practice_trees_titanic.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
"""
Explanation: <center>
<img src="../img/ods_stickers.jpg">
Open Machine Learning Course. Session #2
</center>
Author: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, HSE. This material is distributed under the Creative Commons CC BY-NC-SA 4.0 license: it may be used for any purpose (edited, corrected, and built upon) except commercial ones, with mandatory attribution of the author.
<center>Topic 3. Supervised learning. Classification methods
<center>Practice. A decision tree for predicting the survival of Titanic passengers. Solution
Fill in the code in the cells and choose your answers in the web form.
Kaggle <a href="https://www.kaggle.com/c/titanic">competition</a> "Titanic: Machine Learning from Disaster".
End of explanation
"""
def write_to_submission_file(predicted_labels, out_file, train_num=891,
target='Survived', index_label="PassengerId"):
# turn predictions into data frame and save as csv file
predicted_df = pd.DataFrame(predicted_labels,
index = np.arange(train_num + 1,
train_num + 1 +
predicted_labels.shape[0]),
columns=[target])
predicted_df.to_csv(out_file, index_label=index_label)
"""
Explanation: A helper function to build the csv submission file for Kaggle:
End of explanation
"""
train_df = pd.read_csv("../data/titanic_train.csv")
test_df = pd.read_csv("../data/titanic_test.csv")
y = train_df['Survived']
train_df.head()
train_df.describe(include='all')
test_df.describe(include='all')
"""
Explanation: Read the training and test sets.
End of explanation
"""
train_df['Age'].fillna(train_df['Age'].median(), inplace=True)
test_df['Age'].fillna(train_df['Age'].median(), inplace=True)
train_df['Embarked'].fillna('S', inplace=True)
test_df['Fare'].fillna(train_df['Fare'].median(), inplace=True)
"""
Explanation: Fill the missing values with medians ('Embarked' is filled with its most frequent value, 'S').
End of explanation
"""
train_df = pd.concat([train_df, pd.get_dummies(train_df['Pclass'],
prefix="PClass"),
pd.get_dummies(train_df['Sex'], prefix="Sex"),
pd.get_dummies(train_df['SibSp'], prefix="SibSp"),
pd.get_dummies(train_df['Parch'], prefix="Parch"),
pd.get_dummies(train_df['Embarked'], prefix="Embarked")],
axis=1)
test_df = pd.concat([test_df, pd.get_dummies(test_df['Pclass'],
prefix="PClass"),
pd.get_dummies(test_df['Sex'], prefix="Sex"),
pd.get_dummies(test_df['SibSp'], prefix="SibSp"),
pd.get_dummies(test_df['Parch'], prefix="Parch"),
pd.get_dummies(test_df['Embarked'], prefix="Embarked")],
axis=1)
train_df.drop(['Survived', 'Pclass', 'Name', 'Sex', 'SibSp',
'Parch', 'Ticket', 'Cabin', 'Embarked', 'PassengerId'],
axis=1, inplace=True)
test_df.drop(['Pclass', 'Name', 'Sex', 'SibSp', 'Parch', 'Ticket', 'Cabin', 'Embarked', 'PassengerId'],
axis=1, inplace=True)
"""
Explanation: Encode the categorical features Pclass, Sex, SibSp, Parch, and Embarked using one-hot encoding.
End of explanation
"""
train_df.shape, test_df.shape
set(test_df.columns) - set(train_df.columns)
test_df.drop(['Parch_9'], axis=1, inplace=True)
train_df.head()
test_df.head()
"""
Explanation: The test set contains a new value, Parch = 9, which does not occur in the training set. We will ignore it.
End of explanation
"""
tree = DecisionTreeClassifier(max_depth=2, random_state=17)
tree.fit(train_df, y)
"""
Explanation: 1. Decision tree without parameter tuning
Train a decision tree (DecisionTreeClassifier) of maximum depth 2 on the available data. Use random_state=17 for reproducibility.
End of explanation
"""
predictions = tree.predict(test_df)
"""
Explanation: Use the trained model to make predictions for the test set
End of explanation
"""
write_to_submission_file(predictions,
'titanic_tree_depth2.csv')
"""
Explanation: Build the submission file and submit it to Kaggle
End of explanation
"""
export_graphviz(tree, out_file="../img/titanic_tree_depth2.dot",
feature_names=train_df.columns)
!dot -Tpng ../img/titanic_tree_depth2.dot -o ../img/titanic_tree_depth2.png
"""
Explanation: <font color='red'>Question 1. </font> What is the score of the first submission (a decision tree without parameter tuning) on the public leaderboard of the Titanic competition?
- <font color='green'>0.746</font>
- 0.756
- 0.766
- 0.776
This submission scores 0.74641 on the public test set.
End of explanation
"""
# tree params for grid search
tree_params = {'max_depth': list(range(1, 5)),
'min_samples_leaf': list(range(1, 5))}
locally_best_tree = GridSearchCV(DecisionTreeClassifier(random_state=17),
tree_params,
verbose=True, n_jobs=-1, cv=5)
locally_best_tree.fit(train_df, y)
export_graphviz(locally_best_tree.best_estimator_,
out_file="../img/titanic_tree_tuned.dot",
feature_names=train_df.columns)
!dot -Tpng ../img/titanic_tree_tuned.dot -o ../img/titanic_tree_tuned.png
"""
Explanation: <img src='../img/titanic_tree_depth2.png'>
<font color='red'>Question 2. </font> How many features are used when predicting with the depth-2 decision tree?
- 2
- <font color='green'>3</font>
- 4
- 5
2. Decision tree with parameter tuning
Train a decision tree (DecisionTreeClassifier) on the available data, again with random_state=17. Tune the maximum depth and the minimum number of samples per leaf on 5-fold cross-validation with GridSearchCV.
End of explanation
"""
print("Best params:", locally_best_tree.best_params_)
print("Best cross validaton score", locally_best_tree.best_score_)
"""
Explanation: <img src='../img/titanic_tree_tuned.png'>
End of explanation
"""
predictions = locally_best_tree.predict(test_df)
"""
Explanation: <font color='red'>Question 3. </font> What are the best tree parameters found on cross-validation with GridSearchCV?
- max_depth=2, min_samples_leaf=1
- max_depth=2, min_samples_leaf=4
- max_depth=3, min_samples_leaf=2
- <font color='green'>max_depth=3, min_samples_leaf=3</font>
<font color='red'>Question 4. </font> What is the mean accuracy on cross-validation for the decision tree with the best combination of the max_depth and min_samples_leaf hyperparameters?
- 0.77
- 0.79
- <font color='green'>0.81</font>
- 0.83
Use the trained model to make predictions for the test set.
End of explanation
"""
write_to_submission_file(predictions, 'titanic_tree_tuned.csv')
"""
Explanation: Build the submission file and submit it to Kaggle.
End of explanation
"""
|
juditacs/labor | notebooks/bi_ea_demo/cryptocurrency_prediction_failed.ipynb | lgpl-3.0 | import os
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from keras.layers import Input, Dense, Bidirectional, Dropout
from keras.layers.recurrent import LSTM
from keras.models import Model
from keras.callbacks import EarlyStopping
import numpy as np
os.listdir("data/cryptocurrency/")
"""
Explanation: Cryptocurrency prediction - DEMO
This is a simple demo for cryptocurrency prediction based on daily data. It does not work, so don't blame me if you lose your money.
Created by Judit Acs
Data source on kaggle.com
End of explanation
"""
coin_dataframes = {}
def convert_comma_int(field):
try:
return int(field.replace(',', ''))
except ValueError:
return None
for fn in os.listdir("data/cryptocurrency/"):
if "bitcoin_cache" in fn:
continue
if fn.endswith("_price.csv"):
coin_name = fn.split("_")[0]
df = pd.read_csv(os.path.join("data", "cryptocurrency", fn), parse_dates=["Date"])
df['Market Cap'] = df['Market Cap'].map(convert_comma_int)
coin_dataframes[coin_name] = df.sort_values('Date')
coin_dataframes.keys()
"""
Explanation: Load data
We load each currency into a separate dataframe and store the dataframes in a dictionary.
End of explanation
"""
coin_dataframes['nem'].head()
"""
Explanation: Each dataframe looks like this:
End of explanation
"""
coin_dataframes['bitcoin'].plot(x='Date', y='Close')
"""
Explanation: Bitcoin value growth
Just for fun.
End of explanation
"""
def add_relative_columns(df):
day_diff = df['Close'] - df['Open']
df['rel_close'] = day_diff / df['Open']
df['high_low_ratio'] = df['High'] / df['Low']
df['rel_high'] = df['High'] / df['Close']
df['rel_low'] = df['Low'] / df['Close']
for df in coin_dataframes.values():
add_relative_columns(df)
coin_dataframes["nem"].head()
"""
Explanation: Compute relative growth and other relative values
We add these values as new columns to the dataframes:
End of explanation
"""
def create_history_frames(coin_dataframes):
history_frames = {}
for coin_name, df in coin_dataframes.items():
history_frames[coin_name], x_cols = create_history_frame(df)
return history_frames, x_cols
def create_history_frame(df):
feature_cols = ['rel_close', 'rel_high', 'rel_low', 'high_low_ratio']
y_col = ['rel_close']
x_cols = []
days = 10
history = df[['Date'] + y_col].copy()
for n in range(1, days+1):
for feat_col in feature_cols:
colname = '{}_{}'.format(feat_col, n)
history[colname] = df[feat_col].shift(n)
x_cols.append(colname)
history = history[days:]
return history, x_cols
y_col = 'rel_close'
coin_history, x_cols = create_history_frames(coin_dataframes)
"""
Explanation: Create historical training data
The history tables will have values for the last 10 days for each day.
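The lag columns are built with pandas shift(); a toy frame (made-up numbers, not the notebook's data) shows what one lag looks like:

```python
import pandas as pd

toy = pd.DataFrame({'rel_close': [0.01, -0.02, 0.03, 0.00]})
toy['rel_close_1'] = toy['rel_close'].shift(1)  # yesterday's value
toy['rel_close_2'] = toy['rel_close'].shift(2)  # the value two days back
toy = toy[2:]  # drop the leading rows whose lags are NaN
```

This mirrors the `history = history[days:]` line in the function above.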
End of explanation
"""
def create_model():
input_layer = Input(batch_shape=(None, len(x_cols), 1))
layer = Bidirectional(LSTM(128, return_sequences=True))(input_layer)
layer = Bidirectional(LSTM(128))(layer)
    # NB: sigmoid keeps the output in (0, 1), so the model can never predict
    # a negative relative change -- one likely reason this demo fails
    out = Dense(1, activation="sigmoid")(layer)
m = Model(inputs=input_layer, outputs=out)
m.compile("rmsprop", loss='mean_squared_error')
return m
def create_train_test_mtx(history):
X = history[x_cols].as_matrix()
y = history[y_col].as_matrix()
X = X.reshape(X.shape[0], X.shape[1], 1)
rand_mtx = np.random.permutation(X.shape[0])
train_split = int(X.shape[0] * 0.9)
train_indices = rand_mtx[:train_split]
test_indices = rand_mtx[train_split:]
X_train = X[train_indices]
X_test = X[test_indices]
y_train = y[train_indices]
y_test = y[test_indices]
return X_train, X_test, y_train, y_test
def train_model(model, X, y):
    ea = EarlyStopping(monitor='val_loss', patience=2)
    # Keras fit() returns a History object, not a loss value
    history = model.fit(X, y, epochs=500, batch_size=64, callbacks=[ea], verbose=0, validation_split=.1)
    return history
"""
Explanation: Define model
We will train a separate model for each currency; the models' architectures are identical.
End of explanation
"""
rmse = {}
pred = {}
test = {}
for coin_name, history in coin_history.items():
model = create_model()
X_train, X_test, y_train, y_test = create_train_test_mtx(history)
train_model(model, X_train, y_train)
test[coin_name] = y_test
    # run prediction on the test set; flatten the (n, 1) output so it
    # aligns with the (n,) targets instead of broadcasting to an (n, n) matrix
    pred[coin_name] = model.predict(X_test).ravel()
    # compute test loss
    rmse[coin_name] = np.sqrt(np.mean((pred[coin_name] - y_test)**2))
print(coin_name, rmse[coin_name])
"""
Explanation: Train a model for each currency
We save RMSE as well as the predictions on each test set.
End of explanation
"""
pred_sign = {coin_name: np.sign(pred[coin_name]).ravel() * np.sign(test[coin_name]) for coin_name in pred.keys()}
for coin, val in sorted(pred_sign.items()):
    # np.unique returns the sorted unique values (-1 before 1) with their counts,
    # so map each sign to its count explicitly instead of relying on positions
    signs, cnt = np.unique(val, return_counts=True)
    counts = dict(zip(signs, cnt))
    correct, incorrect = counts.get(1, 0), counts.get(-1, 0)
    print("[{}] pos/neg change guessed correctly: {}, incorrectly: {}, correct%: {:.1f}".format(
        coin, correct, incorrect, 100 * correct / (correct + incorrect)))
"""
Explanation: Do our models predict the signum of the value change correctly?
End of explanation
"""
pred_sign = {coin_name: np.sign(pred[coin_name]) for coin_name in pred.keys()}
for coin, val in sorted(pred_sign.items()):
e, cnt = np.unique(val, return_counts=True)
print("[{}] guesses: {}".format(coin, dict(zip(e, cnt))))
"""
Explanation: Did we guess anything useful at all?
End of explanation
"""
|
idekerlab/deep-cell | data-builder/yeastnet_raw_interactions.ipynb | mit | import pandas as pd
from os import listdir
from os.path import isfile, join
import numpy as np
from goatools import obo_parser
# Annotation file for the CLIXO terms
clixo_mapping = './data/alignments_FDR_0.1_t_0.1'
oboUrl = './data/go.obo'
clixo_align = pd.read_csv(clixo_mapping, sep='\t', names=['term', 'go', 'score', 'fdr', 'genes'])
print(clixo_align['score'].max())
clixo_align.tail(10)
"""
Explanation: From YeastNet to CLIXO term documents
What's this?
This notebook builds an Elasticsearch index for the CLIXO ontology.
End of explanation
"""
clixo2go = {}
for row in clixo_align.itertuples():
c = str(row[1])
go = row[2]
val = {
'go': go,
'score': row[3].item(),
'fdr': row[4].item(),
'genes': row[5].item()
}
clixo2go[c] = val
print(clixo2go['10552'])
# Save to file (for updating CyJS file)
import json
with open('./data/clixo-mapping.json', 'w') as outfile:
json.dump(clixo2go, outfile)
obo = obo_parser.GODag(oboUrl, optional_attrs=['def'])
# test data
obo['GO:0006563'].defn
# This directory should contains all of the YeastNet interaction files
# data_path = './data/raw-interactions'
# files = [f for f in listdir(data_path) if isfile(join(data_path, f))]
# files
"""
Explanation: Load raw interaction table originally created for AtgO
This table was created for AtgO, but it can be used here, too.
End of explanation
"""
# columns = ['source', 'target', 'score']
# all_interactions = pd.DataFrame(columns=columns)
# for f in files:
# if not f.startswith('INT'):
# continue
# int_type = f.split('.')[1]
# df = pd.read_csv(data_path+'/'+ f, delimiter='\t', names=columns)
# df['interaction'] = int_type
# all_interactions = pd.concat([all_interactions, df])
# print(all_interactions.shape)
# all_interactions.head(10)
# all_interactions.to_csv('./data/raw-interactions/all.txt', sep='\t')
"""
Explanation: Create single interaction DataFrame
From all the interaction files, build a single table with all interactions
End of explanation
"""
# CLIXO term to gene mapping
mapping = pd.read_csv('./data/raw-interactions/preds_yeastnet_no_gi_0.04_0.5.txt.propagate.mapping', delimiter='\t', names=['gene', 'term'])
mapping.head()
mapping['term'].unique().shape # Number of terms
# All ORF names in CLIXO
mixed_ids = mapping['gene'].unique()
print(mixed_ids.shape)
geneset = set()
for row in mapping.itertuples():
geneset.add(row[1])
print(len(geneset))
"""
Explanation: Create list of all genes associated with CLIXO terms
End of explanation
"""
# Import gene association file
yeastAnnotationUrl = './data/gene_association.sgd.gz'
cols = pd.read_csv('./annotation_columns.txt', names=['col_names'])
col_names = cols['col_names'].tolist()
yeastAnnotation = pd.read_csv(yeastAnnotationUrl, delimiter='\t', comment='!', compression='gzip', names=col_names)
yeastAnnotation.tail()
# Mapping object: from any type of ID to SGD ID
to_sgd = {}
# Annotation for genes
sgd2fullname = {}
sgd2symbol = {}
for row in yeastAnnotation.itertuples():
sgd = row[2]
orf = row[3]
full_name = str(row[10]).replace('\r\n', '')
syn = str(row[11])
syns = syn.split('|')
to_sgd[orf] = sgd
for synonym in syns:
to_sgd[synonym] = sgd
sgd2fullname[sgd] = full_name
sgd2symbol[sgd] = orf
# Special case
to_sgd['AAD16'] = 'S000001837'
"""
Explanation: Standardize the gene IDs to SGD
End of explanation
"""
normalized_map = []
for row in mapping.itertuples():
gene = row[1]
term = str(row[2])
# Convert to SGD
sgd = gene
if gene in to_sgd.keys():
sgd = to_sgd[gene]
entry = (sgd, term)
normalized_map.append(entry)
# All ORF to SGD
all_sgd = list(map(lambda x: to_sgd[x] if x in to_sgd.keys() else x, mixed_ids))
if len(all_sgd) == len(mixed_ids):
print('All mapped!')
# This contains all gene IDs (SGD ID)
uniq_genes = set(all_sgd)
len(uniq_genes)
geneset_sgd = set()
for gene in geneset:
if gene not in to_sgd.keys():
geneset_sgd.add(gene)
else:
geneset_sgd.add(to_sgd[gene])
len(geneset_sgd)
df_genes = pd.DataFrame(list(uniq_genes))
# Save as text file and use it in UNIPROT ID Mapper
df_genes.to_csv('./data/all_sgd.txt', sep='\t', index=False, header=False)
uniprot = pd.read_csv('./data/uniprot-idmapping.txt', delimiter='\t')
print(uniprot.shape)
uniprot.head()
sgd2orf = {}
for row in uniprot.itertuples():
sgd = row[2]
orf = row[11]
sgd2orf[sgd] = orf
# Test
missing = set()
for sgd in uniq_genes:
if sgd not in sgd2orf.keys():
missing.add(sgd)
print(len(missing))
print(missing)
idmap = pd.read_csv('./yeast_clean4.txt', delimiter='\t')
idmap.head()
sgd2orf2 = {}
for row in idmap.itertuples():
sgd = row[5]
orf = row[2]
sgd2orf2[sgd] = orf
for sgd in missing:
sgd2orf[sgd] = sgd2orf2[sgd]
# Test
missing = set()
for sgd in uniq_genes:
if sgd not in sgd2orf.keys():
missing.add(sgd)
print(len(missing))
print(len(sgd2orf))
"""
Explanation: Make sure all genes in the interaction table exist in the mapping table
End of explanation
"""
gene_map = {}
missing_count = 0
all_orf = set(sgd2orf.values())
len(all_orf)
all_interactions = pd.read_csv('./data/interaction-table-atgo.txt', sep="\t")
all_interactions.head()
## Filter edges
print('All interaction count: ' + str(all_interactions.shape))
filtered = []
# Filter
for row in all_interactions.itertuples():
if row[1] not in to_sgd.keys():
to_sgd[row[1]] = row[1]
if row[2] not in to_sgd.keys():
to_sgd[row[2]] = row[2]
term2gene = {}
for row in mapping.itertuples():
term = str(row[2])
gene = ''
if row[1] not in to_sgd.keys():
gene = row[1]
to_sgd[gene] = gene
else:
gene = to_sgd[row[1]]
assigned = set()
if term in term2gene.keys():
assigned = term2gene[term]
assigned.add(gene)
term2gene[term] = assigned
print(len(term2gene))
print(term2gene['10000'])
original_cols = all_interactions.columns
original_names = original_cols[3:10].tolist()
col_names = [ 'source', 'target', 'interaction', 'score', 'reference',]
print(original_names)
network = []
for row in all_interactions.itertuples():
for idx, col in enumerate(row):
if idx < 3 or idx > 9:
continue
else:
score = col.item()
if score == 0:
continue
interaction = original_names[idx-3]
pub = str(row[idx+8])
new_row = (to_sgd[row[1]], to_sgd[row[2]], interaction, score, pub)
network.append(new_row)
net_df = pd.DataFrame(network, columns=col_names)
net_df.head()
print(net_df['score'].max())
print(net_df['score'].min())
net_df.to_csv('./data/clixo-raw-interactions.txt', sep='\t', encoding='utf-8', index=False)
# Create graph
import networkx as nx
g = nx.from_pandas_dataframe(net_df, source='source', target='target', edge_attr=['interaction', 'score', 'reference'])
g.nodes()[10]
print(list(term2gene['10275']))
sub = g.subgraph(list(term2gene['10275']))
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
nx.draw_circular(sub)
sub.edges()[1]
term2interaction = {}
for term in term2gene.keys():
gset = term2gene[term]
assigned_itrs = []
glist = list(gset)
sub = g.subgraph(glist)
edges = sub.edges(data=True)
itrs = []
if len(edges) > 8000:
term2interaction[term] = itrs
continue
for e in edges:
data = {
'source': e[0],
'target': e[1],
'interaction': e[2]['interaction'],
'score': e[2]['score']
}
itrs.append(data)
term2interaction[term] = itrs
counts = []
for key in term2interaction.keys():
counts.append(len(term2interaction[key]))
max(counts)
clixo_genes = {}
missing_name = set()
print(len(normalized_map))
for row in normalized_map:
sgd = row[0]
term = str(row[1])
orf = sgd2orf[sgd]
name = orf
symbol = ''
if sgd not in sgd2fullname.keys():
missing_name.add(sgd2orf[sgd])
name = orf
symbol = orf
else:
orf = sgd2orf[sgd]
name = sgd2fullname[sgd]
symbol = sgd2symbol[sgd]
entry = {
'sgdid': sgd,
'orf': orf,
'name': name,
'symbol': symbol
}
assigned_genes = []
if term in clixo_genes.keys():
assigned_genes = clixo_genes[term]['genes']
assigned_genes.append(entry)
clixo_genes[term] = {
'genes': assigned_genes
}
print(missing_name)
print(len(clixo_genes))
for key in clixo_genes.keys():
raw_interactions = []
gene_list = clixo_genes[key]['genes']
for gene in gene_list:
sgd = gene['sgdid']
if sgd in sgd2orf.keys():
orf = sgd2orf[sgd]
if orf in gene_map.keys():
raw_interactions.append(gene_map[orf])
clixo_genes[key]['interactions'] = term2interaction[key]
import pprint
pp = pprint.PrettyPrinter(indent=4)
# pp.pprint(clixo_genes['10000'])
"""
Explanation: Create mapping from gene to interactions
End of explanation
"""
from datetime import datetime
from elasticsearch import Elasticsearch, helpers
from elasticsearch_dsl import (DocType, Date, Integer, Keyword, Text, Object,
                               Nested, Index, Double, Search)
from elasticsearch_dsl.connections import connections
from elasticsearch_dsl.query import MultiMatch, Match, Q
# Define a default Elasticsearch client
connections.create_connection(hosts=['localhost:9200'])
# Class which represents a CLIXO term
class ClixoTerm(DocType):
termid = Text(index='not_analyzed')
name = Text(analyzer='standard')
go = Object()
gene_count = Integer(index='not_analyzed')
genes = Object(multi=True)
interactions=Object(multi=True)
class Meta:
index = 'terms'
def get_clixo_term(key, term):
term_id = 'CLIXO:' + key
name = term_id
go = {
'goid': 'N/A',
'name': 'N/A',
'definition': 'N/A',
}
gene_count = 0
if key in clixo2go.keys():
go_alignment = clixo2go[key]
goid = go_alignment['go']
if goid == 'GO:00SUPER':
return ClixoTerm(
meta={'id': term_id},
termid=term_id,
name=name,
go=go,
gene_count = gene_count,
genes=term['genes'],
interactions=term2interaction[key])
def_raw = obo[goid].defn
def_str = def_raw.split('"')[1]
go['goid'] = goid
go['score'] = go_alignment['score']
go['fdr'] = go_alignment['fdr']
gene_count = go_alignment['genes']
go['name'] = obo[goid].name
go['definition'] = def_str
name = obo[goid].name
return ClixoTerm(
meta={'id': term_id},
termid=term_id,
name=name,
go=go,
gene_count = gene_count,
genes=term['genes'],
interactions=term2interaction[key]
)
print('init start==================')
ClixoTerm.init()
print('Init done ==================')
es = Elasticsearch(host='localhost', port=9200)
pool = []
print('Add start==================')
for id in clixo_genes.keys():
d = get_clixo_term(id, clixo_genes[id])
term = {'_index': getattr(d.meta, 'index', d._doc_type.index), '_type': d._doc_type.name, '_id': d.termid, '_source': d.to_dict()}
pool.append(term)
if len(pool) > 500:
print('Bulk add start:')
helpers.bulk(es, pool)
print('Bulk add success!')
pool = []
if len(pool) > 0:
print('Last: ' + str(len(pool)))
helpers.bulk(es, pool)
print('---------------success!')
"""
Explanation: Insert new documents into Elasticsearch
End of explanation
"""
|
PedramNavid/MachineLearningNanodegree | 3_Titanic/.ipynb_checkpoints/titanic_survival_exploration-checkpoint.ipynb | gpl-3.0 | import sys
print(sys.version)
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
"""
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
"""
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
"""
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
"""
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print(accuracy_score(outcomes[:5], predictions))
"""
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
"""
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
"""
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
"""
print(accuracy_score(outcomes, predictions))
"""
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Sex')
"""
Explanation: Answer: 61.62%
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
"""
int(data[:1]['Sex'] == "female")
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# Simple way of returning 1 if female, 0 if male
if(passenger['Sex']=="female"):
z = 1
else:
z = 0
predictions.append(z)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
"""
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
"""
print(accuracy_score(outcomes, predictions))
"""
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
"""
Explanation: Answer: Accuracy of 78.68%
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
"""
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
if(passenger['Sex'] == "female"):
z = 1
elif(passenger['Sex'] == "male" and passenger["Age"] < 10):
z = 1
else:
z = 0
predictions.append(z)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
"""
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
"""
print(accuracy_score(outcomes, predictions))
"""
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
# Note: exploration was done in R, as it lends itself better to
# data analysis. After building a decision tree, we will implement the logic
# from it in Python
"""
Explanation: Answer: 79.35%
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
"""
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
    predictions = []
    for _, passenger in data.iterrows():
        if passenger['Sex'] == "female":
            # Third-class females: fare and port of embarkation decide
            if passenger['Pclass'] >= 2.5:
                if passenger['Fare'] >= 23:
                    z = 0
                elif passenger['Embarked'] == 'S' and passenger['Fare'] > 11:
                    z = 0
                else:
                    z = 1
            else:
                z = 1
        # Males: young boys without many siblings/spouses aboard survived
        elif passenger['Age'] >= 6.5:
            z = 0
        elif passenger['SibSp'] >= 2.5:
            z = 0
        else:
            z = 1
        predictions.append(z)
    # Return our predictions
    return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
"""
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
"""
print(accuracy_score(outcomes, predictions))
"""
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/migration/UJ9 Custom Training Prebuilt Container XGBoost.ipynb | apache-2.0 | ! pip3 install -U google-cloud-aiplatform --user
"""
Explanation: Vertex SDK: Train and deploy an XGBoost model with pre-built containers (formerly hosted runtimes)
Installation
Install the latest (preview) version of Vertex SDK.
End of explanation
"""
! pip3 install google-cloud-storage
"""
Explanation: Install the Google cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the Kernel
Once you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
Google Cloud SDK is already installed in Google Cloud Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex AI services
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
"""
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
"""
Explanation: Authenticate your GCP account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import os
import sys
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex SDK
Import the Vertex SDK into our Python environment.
End of explanation
"""
# API Endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex AI location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex AI constants
Setup up the following constants for Vertex AI:
API_ENDPOINT: The Vertex AI API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex AI location root path for dataset, model and endpoint resources.
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
"""
Explanation: Clients
The Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).
You will use several clients in this tutorial, so set them all up upfront.
Model Service for managed models.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving. Note: Prediction has a different service endpoint.
End of explanation
"""
# Make folder for python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\
tag_build =\n\
tag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\
setuptools.setup(\n\
install_requires=[\n\
],\n\
packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\
Name: Custom XGBoost Iris\n\
Version: 0.0.0\n\
Summary: Demonstration training script\n\
Home-page: www.google.com\n\
Author: Google\n\
Author-email: aferlitsch@google.com\n\
License: Public\n\
Description: Demo\n\
Platform: Vertex AI"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
"""
Explanation: Prepare a trainer script
Package assembly
End of explanation
"""
%%writefile custom/trainer/task.py
# Single Instance Training for Iris
import datetime
import os
import subprocess
import sys
import pandas as pd
import xgboost as xgb
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
args = parser.parse_args()
# Download data
iris_data_filename = 'iris_data.csv'
iris_target_filename = 'iris_target.csv'
data_dir = 'gs://cloud-samples-data/ai-platform/iris'
# gsutil outputs everything to stderr so we need to divert it to stdout.
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_data_filename),
iris_data_filename], stderr=sys.stdout)
subprocess.check_call(['gsutil', 'cp', os.path.join(data_dir,
iris_target_filename),
iris_target_filename], stderr=sys.stdout)
# Load data into pandas, then use `.values` to get NumPy arrays
iris_data = pd.read_csv(iris_data_filename).values
iris_target = pd.read_csv(iris_target_filename).values
# Convert one-column 2D array into 1D array for use with XGBoost
iris_target = iris_target.reshape((iris_target.size,))
# Load data into DMatrix object
dtrain = xgb.DMatrix(iris_data, label=iris_target)
# Train XGBoost model
bst = xgb.train({}, dtrain, 20)
# Export the classifier to a file
model_filename = 'model.bst'
bst.save_model(model_filename)
# Upload the saved model file to Cloud Storage
gcs_model_path = os.path.join(args.model_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path],
stderr=sys.stdout)
"""
Explanation: Task.py contents
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz gs://$BUCKET_NAME/iris.tar.gz
"""
Explanation: Store training script on your Cloud Storage bucket
End of explanation
"""
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest"
JOB_NAME = "custom_job_XGB" + TIMESTAMP
WORKER_POOL_SPEC = [
{
"replica_count": 1,
"machine_spec": {"machine_type": "n1-standard-4"},
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": ["gs://" + BUCKET_NAME + "/iris.tar.gz"],
"python_module": "trainer.task",
"args": ["--model-dir=" + "gs://{}/{}".format(BUCKET_NAME, JOB_NAME)],
},
}
]
training_job = aip.CustomJob(
display_name=JOB_NAME, job_spec={"worker_pool_specs": WORKER_POOL_SPEC}
)
print(
MessageToJson(
aip.CreateCustomJobRequest(parent=PARENT, custom_job=training_job).__dict__[
"_pb"
]
)
)
"""
Explanation: Train a model
projects.locations.customJobs.create
Request
End of explanation
"""
request = clients["job"].create_custom_job(parent=PARENT, custom_job=training_job)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"customJob": {
"displayName": "custom_job_XGB20210323142337",
"jobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4"
},
"replicaCount": "1",
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210323142337/iris.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210323142337/custom_job_XGB20210323142337"
]
}
}
]
}
}
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the custom training job
custom_training_id = request.name
# The short numeric ID for the custom training job
custom_training_short_id = custom_training_id.split("/")[-1]
print(custom_training_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/customJobs/7371064379959148544",
"displayName": "custom_job_XGB20210323142337",
"jobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4"
},
"replicaCount": "1",
"diskSpec": {
"bootDiskType": "pd-ssd",
"bootDiskSizeGb": 100
},
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210323142337/iris.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210323142337/custom_job_XGB20210323142337"
]
}
}
]
},
"state": "JOB_STATE_PENDING",
"createTime": "2021-03-23T14:23:45.067026Z",
"updateTime": "2021-03-23T14:23:45.067026Z"
}
End of explanation
"""
request = clients["job"].get_custom_job(name=custom_training_id)
"""
Explanation: projects.locations.customJobs.get
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
while True:
    response = clients["job"].get_custom_job(name=custom_training_id)
    # Custom jobs report a JobState (not a PipelineState)
    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        if response.state == aip.JobState.JOB_STATE_FAILED:
            break
    else:
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)
# model artifact output directory on Google Cloud Storage
model_artifact_dir = (
response.job_spec.worker_pool_specs[0].python_package_spec.args[0].split("=")[-1]
)
print("artifact location " + model_artifact_dir)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/customJobs/7371064379959148544",
"displayName": "custom_job_XGB20210323142337",
"jobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4"
},
"replicaCount": "1",
"diskSpec": {
"bootDiskType": "pd-ssd",
"bootDiskSizeGb": 100
},
"pythonPackageSpec": {
"executorImageUri": "gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1:latest",
"packageUris": [
"gs://migration-ucaip-trainingaip-20210323142337/iris.tar.gz"
],
"pythonModule": "trainer.task",
"args": [
"--model-dir=gs://migration-ucaip-trainingaip-20210323142337/custom_job_XGB20210323142337"
]
}
}
]
},
"state": "JOB_STATE_PENDING",
"createTime": "2021-03-23T14:23:45.067026Z",
"updateTime": "2021-03-23T14:23:45.067026Z"
}
End of explanation
"""
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest"
model = {
"display_name": "custom_job_XGB" + TIMESTAMP,
"artifact_uri": model_artifact_dir,
"container_spec": {"image_uri": DEPLOY_IMAGE, "ports": [{"container_port": 8080}]},
}
print(MessageToJson(aip.UploadModelRequest(parent=PARENT, model=model).__dict__["_pb"]))
"""
Explanation: Deploy the model
projects.locations.models.upload
Request
End of explanation
"""
request = clients["model"].upload_model(parent=PARENT, model=model)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "custom_job_XGB20210323142337",
"containerSpec": {
"imageUri": "gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest",
"ports": [
{
"containerPort": 8080
}
]
},
"artifactUri": "gs://migration-ucaip-trainingaip-20210323142337/custom_job_XGB20210323142337"
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the model version
model_id = result.model
"""
Explanation: Example output:
{
"model": "projects/116273516712/locations/us-central1/models/2093698837704081408"
}
End of explanation
"""
import json
import tensorflow as tf
INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]]
gcs_input_uri = "gs://" + BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
for i in INSTANCES:
f.write(str(i) + "\n")
! gsutil cat $gcs_input_uri
"""
Explanation: Make batch predictions
Make a batch prediction file
End of explanation
"""
model_parameters = Value(
struct_value=Struct(
fields={
"confidence_threshold": Value(number_value=0.5),
"max_predictions": Value(number_value=10000.0),
}
)
)
batch_prediction_job = {
"display_name": "custom_job_XGB" + TIMESTAMP,
"model": model_id,
"input_config": {
"instances_format": "jsonl",
"gcs_source": {"uris": [gcs_input_uri]},
},
"model_parameters": model_parameters,
"output_config": {
"predictions_format": "jsonl",
"gcs_destination": {
"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"
},
},
"dedicated_resources": {
"machine_spec": {"machine_type": "n1-standard-2"},
"starting_replica_count": 1,
"max_replica_count": 1,
},
}
print(
MessageToJson(
aip.CreateBatchPredictionJobRequest(
parent=PARENT, batch_prediction_job=batch_prediction_job
).__dict__["_pb"]
)
)
"""
Explanation: Example output:
[1.4, 1.3, 5.1, 2.8]
[1.5, 1.2, 4.7, 2.4]
projects.locations.batchPredictionJobs.create
Request
End of explanation
"""
request = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"batchPredictionJob": {
"displayName": "custom_job_XGB20210323142337",
"model": "projects/116273516712/locations/us-central1/models/2093698837704081408",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210323142337/test.jsonl"
]
}
},
"modelParameters": {
"max_predictions": 10000.0,
"confidence_threshold": 0.5
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323142337/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
}
}
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The fully qualified ID for the batch job
batch_job_id = request.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/1415053872761667584",
"displayName": "custom_job_XGB20210323142337",
"model": "projects/116273516712/locations/us-central1/models/2093698837704081408",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210323142337/test.jsonl"
]
}
},
"modelParameters": {
"confidence_threshold": 0.5,
"max_predictions": 10000.0
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323142337/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"manualBatchTuningParameters": {},
"state": "JOB_STATE_PENDING",
"createTime": "2021-03-23T14:25:10.582704Z",
"updateTime": "2021-03-23T14:25:10.582704Z"
}
End of explanation
"""
request = clients["job"].get_batch_prediction_job(name=batch_job_id)
"""
Explanation: projects.locations.batchPredictionJobs.get
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
response = clients["job"].get_batch_prediction_job(name=batch_job_id)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", response.state)
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
folder = get_latest_predictions(
response.output_config.gcs_destination.output_uri_prefix
)
! gsutil ls $folder/prediction*
! gsutil cat -h $folder/prediction*
break
time.sleep(60)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/batchPredictionJobs/1415053872761667584",
"displayName": "custom_job_XGB20210323142337",
"model": "projects/116273516712/locations/us-central1/models/2093698837704081408",
"inputConfig": {
"instancesFormat": "jsonl",
"gcsSource": {
"uris": [
"gs://migration-ucaip-trainingaip-20210323142337/test.jsonl"
]
}
},
"modelParameters": {
"max_predictions": 10000.0,
"confidence_threshold": 0.5
},
"outputConfig": {
"predictionsFormat": "jsonl",
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210323142337/batch_output/"
}
},
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-2"
},
"startingReplicaCount": 1,
"maxReplicaCount": 1
},
"manualBatchTuningParameters": {},
"state": "JOB_STATE_PENDING",
"createTime": "2021-03-23T14:25:10.582704Z",
"updateTime": "2021-03-23T14:25:10.582704Z"
}
End of explanation
"""
endpoint = {"display_name": "custom_job_XGB" + TIMESTAMP}
print(
MessageToJson(
aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__["_pb"]
)
)
"""
Explanation: Example output:
```
==> gs://migration-ucaip-trainingaip-20210323142337/batch_output/prediction-custom_job_XGB20210323142337-2021_03_23T07_25_10_544Z/prediction.errors_stats-00000-of-00001 <==
==> gs://migration-ucaip-trainingaip-20210323142337/batch_output/prediction-custom_job_XGB20210323142337-2021_03_23T07_25_10_544Z/prediction.results-00000-of-00001 <==
{"instance": [1.4, 1.3, 5.1, 2.8], "prediction": 2.0451931953430176}
{"instance": [1.5, 1.2, 4.7, 2.4], "prediction": 1.9618644714355469}
```
Make online predictions
projects.locations.endpoints.create
Request
End of explanation
"""
request = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
"""
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"endpoint": {
"displayName": "custom_job_XGB20210323142337"
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
"""
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376"
}
End of explanation
"""
deployed_model = {
"model": model_id,
"display_name": "custom_job_XGB" + TIMESTAMP,
"dedicated_resources": {
"min_replica_count": 1,
"max_replica_count": 1,
"machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0},
},
}
print(
MessageToJson(
aip.DeployModelRequest(
endpoint=endpoint_id,
deployed_model=deployed_model,
traffic_split={"0": 100},
).__dict__["_pb"]
)
)
"""
Explanation: projects.locations.endpoints.deployModel
Request
End of explanation
"""
request = clients["endpoint"].deploy_model(
endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={"0": 100}
)
"""
Explanation: Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376",
"deployedModel": {
"model": "projects/116273516712/locations/us-central1/models/2093698837704081408",
"displayName": "custom_job_XGB20210323142337",
"dedicatedResources": {
"machineSpec": {
"machineType": "n1-standard-4"
},
"minReplicaCount": 1,
"maxReplicaCount": 1
}
},
"trafficSplit": {
"0": 100
}
}
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
# The unique ID for the deployed model
deployed_model_id = result.deployed_model.id
print(deployed_model_id)
"""
Explanation: Example output:
{
"deployedModel": {
"id": "7407594554280378368"
}
}
End of explanation
"""
INSTANCES = [[1.4, 1.3, 5.1, 2.8], [1.5, 1.2, 4.7, 2.4]]
"""
Explanation: projects.locations.endpoints.predict
Prepare file for online prediction
End of explanation
"""
prediction_request = {"endpoint": endpoint_id, "instances": INSTANCES}
print(json.dumps(prediction_request, indent=2))
"""
Explanation: Request
End of explanation
"""
request = clients["prediction"].predict(endpoint=endpoint_id, instances=INSTANCES)
"""
Explanation: Example output:
{
"endpoint": "projects/116273516712/locations/us-central1/endpoints/1733903448723685376",
"instances": [
[
1.4,
1.3,
5.1,
2.8
],
[
1.5,
1.2,
4.7,
2.4
]
]
}
Call
End of explanation
"""
print(MessageToJson(request.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
request = clients["endpoint"].undeploy_model(
endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}
)
"""
Explanation: Example output:
{
"predictions": [
2.045193195343018,
1.961864471435547
],
"deployedModelId": "7407594554280378368"
}
projects.locations.endpoints.undeployModel
Call
End of explanation
"""
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
"""
Explanation: Response
End of explanation
"""
delete_model = True
delete_endpoint = True
delete_pipeline = True
delete_batchjob = True
delete_bucket = True
# Delete the model using the Vertex AI fully qualified identifier for the model
try:
if delete_model:
clients["model"].delete_model(name=model_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint
try:
if delete_endpoint:
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the custom training job using the Vertex AI fully qualified identifier for the custom training job
try:
if custom_training_id:
clients["job"].delete_custom_job(name=custom_training_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex AI fully qualified identifier for the batch job
try:
if delete_batchjob:
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
"""
Explanation: Example output:
{}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation
"""
|
softEcon/course | lectures/economic_models/generalized_roy/module/lecture.ipynb | mit | from IPython.core.display import HTML, display
display(HTML('material/images/grm.html'))
"""
Explanation: Module
As your code grows more and more complex, it is useful to collect all code in a an external file. Here we store all the functions form our notebook in a single file grm.py. Nothing new happens, we just copy all code cells from our notebook into a new file. Below, we can have a look at an html version of th file, which we created using PyCharm's exporting capabilites.
End of explanation
"""
display(HTML('material/images/namespace1.html'))
"""
Explanation: As our module is already quite complex, it is better to study its structure using PyCharm, which provides tools to quickly grasp the structure of a module. So, check it out.
Namespace and Scope
Now that our Python code grows more and more complex, we need to discuss the concept of a Namespace. Roughly speaking, a name in Python is a mapping to an object. In Python this describes pretty much everything: lists, dictionaries, functions, classes, etc. Think of a namespace as a dictionary, where the dictionary keys represent the names and the dictionary values the object itself.
End of explanation
"""
display(HTML('material/images/namespace2.html'))
"""
Explanation: Now, the tricky part is that we have multiple independent namespaces in Python, and names can be reused for different namespaces (only the objects are unique), for example:
End of explanation
"""
i = 1
def foo():
i = 5
print(i, 'in foo()')
print(i, 'global')
foo()
"""
Explanation: The Scope in Python defines the “hierarchy level” in which we search namespaces for certain “name-to-object” mappings.
End of explanation
"""
# Unix Pattern Extensions
import glob
# Operating System Interfaces
import os
# System-specific Parameters and Functions
import sys
"""
Explanation: What rules are applied to resolve conflicting scopes?
Local can be inside a function or class method, for example.
Enclosed can be its enclosing function, e.g., if a function is wrapped inside another function.
Global refers to the uppermost level of the executing script itself, and
Built-in are special names that Python reserves for itself.
This introduction draws heavily on a couple of very useful online tutorials: Python Course, Beginners Guide to Namespaces, and Guide to Python Namespaces.
Interacting with the Module
Let us import a couple of the standard libraries to get started.
End of explanation
"""
print('\n Search Path:')
for dir_ in sys.path:
    print(' ' + dir_)
"""
Explanation: Now we turn to our very own module; we just have to import it like any other library first. How does Python know where to look for our module?
Whenever the interpreter encounters an import statement, it searches for a built-in module (e.g. os, sys) of the same name. If unsuccessful, the interpreter searches in a list of directories given by the variable sys.path ...
End of explanation
"""
sys.path.insert(0, 'material/module')
"""
Explanation: and the current working directory. However, our module is in the material/module subdirectory, so we need to add it manually to the search path.
End of explanation
"""
%ls -l
"""
Explanation: Please see here for additional information.
End of explanation
"""
# Import grm.py file
import grm
# Process initializtion file
init_dict = grm.process('material/msc/init.ini')
# Simulate dataset
grm.simulate(init_dict)
# Estimate model
rslt = grm.estimate(init_dict)
# Inspect results
grm.inspect(rslt, init_dict)
# Output results to terminal
%cat results.grm.txt
"""
Explanation: Returning to grm.py:
End of explanation
"""
# The built-in function dir() returns the names
# defined in a module.
print('\n Names: \n')
for function in dir(grm):
    print(' ' + function)
"""
Explanation: Given our work on the notebook version, we had a very clear idea about the names defined in the module. In cases where you don't:
End of explanation
"""
# Import the grm module under an alias;
# its namespace stays separate from ours.
#
import grm as gr
init_dict = gr.process('material/msc/init.ini')
# Imports only the process() function
# directly into our namespace.
#
from grm import process
init_dict = process('material/msc/init.ini')
try:
data = simulate(init_dict)
except NameError:
pass
# Imports all public objects directly
# into our namespace.
#
from grm import *
init_dict = process('material/msc/init.ini')
data = simulate(init_dict)
"""
Explanation: What is the deal with all the leading and trailing underscores? Let us check out the Style Guide for Python Code.
There are many ways to import a module and then work with it.
End of explanation
"""
# Create list of all files generated by the module
files = glob.glob('*.grm.*')
# Remove files
for file_ in files:
os.remove(file_)
"""
Explanation: Cleanup
End of explanation
"""
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1OKmNHN').read())
"""
Explanation: Additional Resources
Tutorial on Python Modules and Packages
Formatting
End of explanation
"""
|
dougalsutherland/pummeler | notebooks/election data by region.ipynb | mit | from __future__ import division, print_function
%matplotlib inline
import numpy as np
import pandas as pd
import re
import six
from IPython.display import display
import sys
sys.path.append('..')
from pummeler.data import geocode_data
county_to_region = geocode_data('county_region_10').region.to_dict()
"""
Explanation: Map election results to regions.
Assumes you have huffpostdata/election-2012-results cloned at ../../election-2012-results. Does 2010 regions by default; just change county_region_10 to county_region_00 below to do 2000.
End of explanation
"""
from glob import glob
"""
Explanation: Map electoral results to regions
End of explanation
"""
assert len({v for k, v in county_to_region.items() if k.startswith('02')}) == 1
ak_precincts = pd.read_csv('../../election-2012-results/data/ak_precincts.csv')
ak = ak_precincts.groupby(ak_precincts.candidate).sum().reset_index()
ak['state'] = 'ak'
ak['fips'] = next(k for k in county_to_region if k.startswith('02'))
ak['county'] = 'All of Alaska'
ak
bits = [ak]
for f in glob('../../election-2012-results/data/??.csv'):
piece = pd.read_csv(f, dtype={'fips': str})
piece['state'] = f[-6:-4]
bits.append(piece)
election = pd.concat(bits)
"""
Explanation: First, handle Alaska specially:
End of explanation
"""
reps = {
'goode': 'virgil goode',
'obama': 'barack obama',
'johnson': 'gary johnson',
'romney': 'mitt romney',
'stein': 'jill stein',
'virgil h. goode': 'virgil goode',
'virgil h. goode jr.': 'virgil goode',
'gary e. johnson': 'gary johnson',
'write in': 'write-in',
'write-ins': 'write-in',
'hoefling': 'tom hoefling',
'obama barack': 'barack obama',
'stein jill': 'jill stein',
'romney mitt': 'mitt romney',
'johnson gary': 'gary johnson',
'jill stein write-in': 'jill stein',
'hoefling (write-in)': 'tom hoefling',
'tom hoeffling': 'tom hoefling',
'alexander': 'stewart alexander',
'ross c. "rocky"': 'ross c. "rocky"',
'ross c. rocky': 'ross c. "rocky"',
'ross c.': 'ross c. "rocky"',
'rocky': 'ross c. "rocky"',
'paul': 'ron paul',
'ron paul write-in': 'ron paul',
'write-in**': 'write-in',
'clymer': 'james clymer',
'roth': 'cecil james roth',
'prokopich': 'barbara prokopich',
'barbara a. prokopich': 'barbara prokopich',
'kevin m. thorne': 'kevin thorne',
'thorne': 'kevin thorne',
}
def rewrite(s):
s = s.lower()
for x in ['/', ',', '(', ' and', ' for president']:
p = s.find(x)
if p != -1:
s = s[:p]
s = s.strip().replace(' ', ' ')
s = reps.get(s, s)
return s
election['cand'] = election.candidate.apply(rewrite)
cand_votes = election.groupby(election.cand).votes.sum().sort_values(ascending=False)
cand_votes.head(50)
election['party'] = 'oth'
election.loc[election.cand == 'barack obama', 'party'] = 'D'
election.loc[election.cand == 'mitt romney', 'party'] = 'R'
election.loc[election.cand == 'gary johnson', 'party'] = 'L'
election.loc[election.cand == 'jill stein', 'party'] = 'G'
election.groupby(election.party).votes.sum()
"""
Explanation: Normalize candidate names
End of explanation
"""
set(election.fips) - set(county_to_region)
election[pd.isnull(election.fips)]
"""
Explanation: Slightly disagrees with https://en.wikipedia.org/wiki/United_States_presidential_election,_2012: they say Obama 65,915,795, Romney 60,933,504. Not sure how we got too many votes for Romney there; maybe Wikipedia miscounted?
Make sure that the FIPS codes are lining up reasonably
End of explanation
"""
{fips for fips in set(county_to_region) - set(election.fips)
if not fips.startswith('02')}
"""
Explanation: UOCAVA = The Uniformed and Overseas Citizens Absentee Voting Act. Ignore these.
End of explanation
"""
county_to_region['15005'] == county_to_region['15009']
"""
Explanation: 15005 is Kalawao County, Hawaii, which has a population of 89 and is accessible only by mule trail. Its votes are counted under Maui (15009), and they're in the same PUMA anyway:
End of explanation
"""
election_region = election.groupby(election.fips.map(county_to_region)) \
.apply(lambda x: x.votes.groupby(x.party).sum()).unstack()
election_region.index.name = 'region'
election_region.columns = ['votes_{}'.format(p) for p in election_region.columns]
election_region.fillna(0, inplace=True)
election_region = election_region.astype('int')
election_region.head()
election_region.to_csv('2012-by-region.csv.gz', compression='gzip')
"""
Explanation: Do the actual grouping
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-aerchem', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-AERCHEM
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm | tutorial_part2/RealTime-DSP.ipynb | bsd-2-clause | Image('PyAudio_RT_flow@300dpi.png',width='90%')
pah.available_devices()
"""
Explanation: Introduction
A simplified block diagram of PyAudio streaming-based (nonblocking) signal processing.
End of explanation
"""
# define a pass through, y = x, callback
def callback(in_data, frame_count, time_info, status):
DSP_IO.DSP_callback_tic()
# convert byte data to ndarray
in_data_nda = np.fromstring(in_data, dtype=np.int16)
#***********************************************
# DSP operations here
# Here we apply a linear filter to the input
x = in_data_nda.astype(float32)
y = x
# Typically more DSP code here
#***********************************************
# Save data for later analysis
# accumulate a new frame of samples
DSP_IO.DSP_capture_add_samples(y)
#***********************************************
# Convert from float back to int16
y = y.astype(int16)
DSP_IO.DSP_callback_toc()
# Convert ndarray back to bytes
#return (in_data_nda.tobytes(), pyaudio.paContinue)
return y.tobytes(), pah.pyaudio.paContinue
DSP_IO = pah.DSP_io_stream(callback,2,1,Tcapture=0)
DSP_IO.stream(5)
"""
Explanation: Real-Time Loop Through
Here we set up a simple callback function that passes the input samples directly to the output. The module pyaudio_support provides a class for managing a pyaudio stream object, capturing the samples processed by the callback function, and collection of performance metrics. Once the callback function is written/declared a DSP_io_stream object can be created and then the stream(Tsec) method can be executed to start the input/output processing, e.g.,
```python
import pyaudio_helper as pah
DSP_IO = pah.DSP_io_stream(callback,in_idx, out_idx)
DSP_IO.stream(2)
```
where `in_idx` is the index of the chosen input device found using `available_devices()`, and similarly `out_idx` is the index of the chosen output device.
The callback function must be written first, as the function name is used by the object to call the callback.
End of explanation
"""
import sk_dsp_comm.fir_design_helper as fir_d
b = fir_d.fir_remez_bpf(2500,3000,4500,5000,.5,60,44100,18)
fir_d.freqz_resp_list([b],[1],'dB',44100)
ylim([-80,5])
grid();
# Design an IIR Notch
b, a = ss.fir_iir_notch(3000,44100,r= 0.9)
fir_d.freqz_resp_list([b],[a],'dB',44100,4096)
ylim([-60,5])
grid();
"""
Explanation: Real-Time Filtering
Here we set up a callback function that filters the input samples and then sends them to the output.
```python
import pyaudio_helper as pah
DSP_IO = pah.DSP_io_stream(callback,in_idx, out_idx)
DSP_IO.stream(2)
```
where `in_idx` is the index of the chosen input device found using `available_devices()`, and similarly `out_idx` is the index of the chosen output device.
The callback function must be written first, as the function name is used by the object to call the callback.
To demonstrate this we first design some filters that can be used in testing
End of explanation
"""
# For the FIR filter 'b' is defined above, but 'a' also needs to be declared
# For the IIR notch filter both 'b' and 'a' are declared above
a = [1]
zi = signal.lfiltic(b,a,[0])
#zi = signal.sosfilt_zi(sos)
# define callback (#2)
def callback2(in_data, frame_count, time_info, status):
global b, a, zi
DSP_IO.DSP_callback_tic()
# convert byte data to ndarray
in_data_nda = np.fromstring(in_data, dtype=np.int16)
#***********************************************
# DSP operations here
# Here we apply a linear filter to the input
x = in_data_nda.astype(float32)
#y = x
# The filter state/(memory), zi, must be maintained from frame-to-frame
y, zi = signal.lfilter(b,a,x,zi=zi) # for FIR or simple IIR
#y, zi = signal.sosfilt(sos,x,zi=zi) # for IIR use second-order sections
#***********************************************
# Save data for later analysis
# accumulate a new frame of samples
DSP_IO.DSP_capture_add_samples(y)
#***********************************************
# Convert from float back to int16
y = y.astype(int16)
DSP_IO.DSP_callback_toc()
return y.tobytes(), pah.pyaudio.paContinue
DSP_IO = pah.DSP_io_stream(callback2,2,1,Tcapture=0)
DSP_IO.stream(5)
"""
Explanation: Create some global variables for the filter coefficients and the filter state array (recall that a filter has memory).
End of explanation
"""
# define callback (2)
# Here we configure the callback to play back a wav file
def callback3(in_data, frame_count, time_info, status):
DSP_IO.DSP_callback_tic()
# Ignore in_data when generating output only
#***********************************************
global x
# Note wav is scaled to [-1,1] so need to rescale to int16
y = 32767*x.get_samples(frame_count)
# Perform real-time DSP here if desired
#
#***********************************************
# Save data for later analysis
# accumulate a new frame of samples
DSP_IO.DSP_capture_add_samples(y)
#***********************************************
# Convert from float back to int16
y = y.astype(int16)
DSP_IO.DSP_callback_toc()
return y.tobytes(), pah.pyaudio.paContinue
#fs, x_wav = ss.from_wav('OSR_us_000_0018_8k.wav')
fs, x_wav2 = ss.from_wav('Music_Test.wav')
x_wav = (x_wav2[:,0] + x_wav2[:,1])/2
#x_wav = x_wav[15000:90000]
x = pah.loop_audio(x_wav)
#DSP_IO = pah.DSP_io_stream(callback3,2,1,fs=8000,Tcapture=2)
DSP_IO = pah.DSP_io_stream(callback3,2,1,fs=44100,Tcapture=2)
DSP_IO.stream(20)
"""
Explanation: Real-Time Playback
The case of real-time playback sends an ndarray through the chosen audio output path with the array data either being truncated or looped depending upon the length of the array relative to Tsec supplied to stream(Tsec). To manage the potential looping aspect of the input array, we first make a loop_audio object from the input array. An example of this is shown below:
End of explanation
"""
# define callback (2)
# Here we configure the callback to capture a one channel input
def callback4(in_data, frame_count, time_info, status):
DSP_IO.DSP_callback_tic()
# convert byte data to ndarray
in_data_nda = np.fromstring(in_data, dtype=np.int16)
#***********************************************
# DSP operations here
# Here we apply a linear filter to the input
x = in_data_nda.astype(float32)
y = x
#***********************************************
# Save data for later analysis
# accumulate a new frame of samples
DSP_IO.DSP_capture_add_samples(y)
#***********************************************
# Convert from float back to int16
y = 0*y.astype(int16)
DSP_IO.DSP_callback_toc()
# Convert ndarray back to bytes
#return (in_data_nda.tobytes(), pyaudio.paContinue)
return y.tobytes(), pah.pyaudio.paContinue
DSP_IO = pah.DSP_io_stream(callback4,0,1,fs=22050)
DSP_IO.stream(5)
"""
Explanation: Real-Time Audio Capture/Record
Here we use PyAudio to acquire or capture the signal present on the chosen input device, e.g., microphone or a line-in signal from some sensor or music source. The example captures from the built-in microphone found on most PCs.
End of explanation
"""
DSP_IO.stream_stats()
"""
Explanation: Capture Buffer Analysis
As each of the above real-time processing scenarios is run, move down here in the notebook to do some analysis of what happened.
* The stream_stats() provides statistics related to the real-time performance
* What is the period between callbacks, both the ideal (no contention) value and the measured value
* The average time spent in the callback
* The frame-based processing approach taken by PyAudio allows for efficient processing at the expense of latency in getting the first input sample to the output
* With a large frame_length and the corresponding latency, a lot of processing time is available to get DSP work done
* The object DSP_IO also contains a capture buffer (an ndarray), data_capture
* Post processing this buffer allows further study of what was passed to the output of the DSP IP itself
* In the case of a capture-only application, the array data_capture is of fundamental interest, as this is what you were seeking
End of explanation
"""
T_cb = 1024/44100 * 1000 # times 1000 to get units of ms
print('Callback/Frame period = %1.4f (ms)' % T_cb)
"""
Explanation: Note that for all of the examples above the frame_length is always 1024 samples and the sampling rate is $f_s = 44.1$ ksps. The ideal callback period is thus
$$
T_{cb} = \frac{1024}{44100} = 23.22\ \text{(ms)}
$$
End of explanation
"""
subplot(211)
DSP_IO.cb_active_plot(0,270) # enter start time (ms) and stop time (ms)
subplot(212)
DSP_IO.cb_active_plot(150,160)
tight_layout()
Npts = 1000
Nstart = 1000
plot(arange(len(DSP_IO.data_capture[Nstart:Nstart+Npts]))*1000/44100,
DSP_IO.data_capture[Nstart:Nstart+Npts])
title(r'A Portion of the capture buffer')
ylabel(r'Amplitude (int16)')
xlabel(r'Time (ms)')
grid();
"""
Explanation: Next consider what the captured tic and toc data reveals about the processing. Calling the method cb_active_plot() produces a plot similar to what an electrical engineer would see when using a logic analyzer to show the time spent in an interrupt service routine of an embedded system. The latency is also evident. You expect to see a minimum latency of two frame lengths (input buffer fill and output buffer fill), e.g.,
$$
T_\text{latency} \geq 2\times \frac{1024}{44100} \times 1000 = 46.44\ \text{(ms)}
$$
The host processor is multitasking, so the latency can be even greater. A true real-time DSP system would give the signal processing high priority and hence much lower latency is expected.
End of explanation
"""
Pxx, F = ss.my_psd(DSP_IO.data_capture,2**13,44100);
fir_d.freqz_resp_list([b],[a],'dB',44100)
plot(F,10*log10(Pxx/max(Pxx))+3,'g') # Normalize by the max PSD
ylim([-80,5])
xlim([100,20e3])
grid();
specgram(DSP_IO.data_capture,1024,44100);
ylim([0, 5000])
"""
Explanation: Finally, the spectrum of the output signal. To apply custom scaling we use a variation of psd() found in the sigsys module. If we are plotting the spectrum of white noise sent through a filter, the output PSD will be of the form $\sigma_w^2|H(e^{j2\pi f/f_s})|^2$, where $\sigma_w^2$ is the variance of the noise driving the filter. You may choose to overlay a plot of the filter frequency response itself for comparison, as is done in the code above.
End of explanation
"""
|
NeuPhysics/aNN | ipynb/.ipynb_checkpoints/vacuumClean-checkpoint.ipynb | mit | # This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
%load_ext snakeviz
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
import timeit
import pandas as pd
import plotly.plotly as py
from plotly.graph_objs import *
import plotly.tools as tls
# hbar=1.054571726*10**(-34)
hbar=1.0
delm2E=1.0
# lamb=1.0 ## lambda for neutrinos
# lambb=1.0 ## lambda for anti neutrinos
# gF=1.0
# nd=1.0 ## number density
# ndb=1.0 ## number density
omega=1.0
omegab=-1.0
## Here are some matrices to be used
elM = np.array([[1.0,0.0],[0.0,0.0]])
bM = 1.0/2*np.array( [ [ -np.cos(0.4),np.sin(0.4)] , [np.sin(0.4),np.cos(0.4)] ] )
#bM = 1.0/2*np.array( [ [ 1,0] , [0,1] ] )
print bM
## square root of 2
sqrt2=np.sqrt(2.0)
"""
Explanation: Vacuum Neutrino Oscillations
Here is a notebook for homogeneous gas model.
Here we are talking about a homogeneous gas bulk of neutrinos with single energy. The EoM is
$$
i \partial_t \rho_E = \left[ \frac{\delta m^2}{2E}B ,\rho_E \right]
$$
while the EoM for antineutrinos is
$$
i \partial_t \bar\rho_E = \left[- \frac{\delta m^2}{2E}B ,\bar\rho_E \right]
$$
Initial:
Homogeneous, Isotropic, Monoenergetic $\nu_e$ and $\bar\nu_e$
The equations become
$$
i \partial_t \rho_E = \left[ \frac{\delta m^2}{2E} B ,\rho_E \right]
$$
$$
i \partial_t \bar\rho_E = \left[- \frac{\delta m^2}{2E}B,\bar\rho_E \right]
$$
Define $\omega=\frac{\delta m^2}{2E}$, $\bar\omega = -\frac{\delta m^2}{2E}$, $\mu=\sqrt{2}G_F n_\nu$
$$
i \partial_t \rho_E = \left[ \omega B ,\rho_E \right]
$$
$$
i \partial_t \bar\rho_E = \left[\bar\omega B,\bar\rho_E \right]
$$
where
$$
B = \frac{1}{2} \begin{pmatrix}
-\cos 2\theta_v & \sin 2\theta_v \\
\sin 2\theta_v & \cos 2\theta_v
\end{pmatrix}
$$
or just use $\theta_v = 0.2$ rad
$$
L = \begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}
$$
Initial condition
$$
\rho(t=0) = \begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}
$$
$$
\bar\rho(t=0) =\begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}
$$
define the following quantities
1. hbar $=\hbar$
2. delm2E $= \delta m^2/2E$
3. lamb $= \lambda$, lambb $= \bar\lambda$
4. gF $= G_F$
5. mu $=\mu$
6. omega $=\omega$, omegab $=-\bar\omega$
Numerical
End of explanation
"""
def trigf(x):
    #return 1/(1+np.exp(-x))  # the wrapper makes it easy to swap in an activation other than expit(x)
return expit(x)
## The time derivative part
### Here are the initial conditions
init = np.array( [[1,0],[0,0]] )
#init = np.array( [[1,2],[3,4]] )
### For neutrinos
def rho(x,ti,initialCondition): # x is the input structure arrays, ti is a time point
elem=np.ones(4)
    for i in range(4):
elem[i] = np.sum(ti * x[i*3] * trigf( ti*x[i*3+1] + x[i*3+2] ) )
return initialCondition + np.array([[ elem[0] , elem[1] ],[elem[2], elem[3] ]])
## Test
xtemp=np.ones(120)
rho(xtemp,0,init)
## Define Hamiltonians for both
def hamilv():
return delm2E*bM
## The commutator
def commv(x,ti,initialCondition):
return np.dot(hamilv(), rho(x,ti,initialCondition) ) - np.dot(rho(x,ti,initialCondition), hamilv() )
#return rho(x,ti,initialCondition)
## Test
print bM
print hamilv()
print "neutrino\n",commv(xtemp,0,init)
np.linspace(0,9,4)
## The COST of the eqn set
regularization = 0.0001
npsum = np.sum
npravel = np.ravel
nplinspace = np.linspace
nparray = np.array
def costvTi(x,ti,initialCondition): # l is total length of x
    list = range(4)
fvec = []
costi = np.zeros(4) + 1.0j* np.zeros(4)
commvi = npravel(commv(x,ti,initialCondition))
fvecappend = fvec.append
for i in list:
fvecappend(np.asarray(trigf(ti*x[i*3+1] + x[i*3+2]) ) )
fvec = nparray(fvec)
for i in list:
costi[i] = ( npsum (x[i*3]*fvec[i] + ti * x[i*3]* fvec[i] * ( 1 - fvec[i] ) * x[i*3+1] ) + ( commvi[i] ) )
costiTemp = 0.0
for i in list:
costiTemp = costiTemp + (np.real(costi[i]))**2 + (np.imag(costi[i]))**2
return costiTemp
print costvTi(xtemp,2,init)
## Calculate the total cost
def costv(x,t,initialCondition):
t = np.array(t)
costvTotal = np.sum( costvTList(x,t,initialCondition) )
return costvTotal
def costvTList(x,t,initialCondition): ## This is the function WITHOUT the square!!!
t = np.array(t)
costvList = np.asarray([])
for temp in t:
tempElement = costvTi(x,temp,initialCondition)
costvList = np.append(costvList, tempElement)
return np.array(costvList)
ttemp = np.linspace(0,10)
print ttemp
ttemp = np.linspace(0,10)
print costvTList(xtemp,ttemp,init)
print costv(xtemp,ttemp,init)
"""
Explanation: I am going to substitute all density matrix elements using their corresponding network expressions.
So first of all, I need the network expression for the unknown functions.
A function is written as
$$ y_i= 1+t_i v_k f(t_i w_k+u_k) ,$$
while it's derivative is
$$v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k)) w_k .$$
Now I can write down the equations using these two forms.
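The derivative formula can be checked against a central finite difference — a quick sketch with hypothetical random weights $v_k, w_k, u_k$ and $f=$ expit:

```python
import numpy as np
from scipy.special import expit

def y(t, v, w, u):
    # trial function: y(t) = 1 + t * sum_k v_k f(t*w_k + u_k)
    return 1.0 + t * np.sum(v * expit(t * w + u))

def dy_dt(t, v, w, u):
    # analytic derivative: sum_k [ v_k f + t v_k f (1 - f) w_k ]
    f = expit(t * w + u)
    return np.sum(v * f + t * v * f * (1.0 - f) * w)

rng = np.random.default_rng(0)
v, w, u = rng.normal(size=(3, 10))
t, h = 0.7, 1e-6
finite_diff = (y(t + h, v, w, u) - y(t - h, v, w, u)) / (2 * h)
```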
End of explanation
"""
tlin = np.linspace(0,2,10)
# tlinTest = np.linspace(0,14,10) + 0.5
# initGuess = np.ones(120)
# initGuess = np.asarray(np.split(np.random.rand(1,720)[0],12))
initGuess = np.asarray(np.split(np.random.rand(1,60)[0],12))
costvF = lambda x: costv(x,tlin,init)
#costvFTest = lambda x: costv(x,tlinTest,init)
print costv(initGuess,tlin,init)#, costv(initGuess,tlinTest,init)
## %%snakeviz
# startCG = timeit.default_timer()
#costvFResultCG = minimize(costvF,initGuess,method="CG")
#stopCG = timeit.default_timer()
#print stopCG - startCG
#print costvFResultCG
"""
Explanation: Minimization
Here is the minimization
End of explanation
"""
#initGuessIter = costvFResultNM.get("x")
#for TOLERANCE in np.asarray([1e-8,1e-9,1e-12,1e-14,1e-16,1e-19,1e-20,1e-21,1e-22]):
# costvFResultNMIter = minimize(costvF,initGuessIter,method="Nelder-Mead",tol=TOLERANCE,options={"maxfun":90000,"maxiter":900000})
# initGuessIter = costvFResultNMIter.get("x")
# print TOLERANCE,costvFResultNMIter
#%%snakeviz
#startBFGS = timeit.default_timer()
#costvFResultBFGS = minimize(costvF,initGuess,method="BFGS")
#stopBFGS = timeit.default_timer()
#print stopBFGS - startBFGS
#print costvFResultBFGS
"""
Explanation: %%snakeviz
startNM = timeit.default_timer()
costvFResultNM = minimize(costvF,initGuess,method="Nelder-Mead",options={"maxfun":20000})
stopNM = timeit.default_timer()
print stopNM - startNM
print costvFResultNM
End of explanation
"""
%%snakeviz
startSLSQP = timeit.default_timer()
costvFResultSLSQP = minimize(costvF,initGuess,method="SLSQP")
stopSLSQP = timeit.default_timer()
print stopSLSQP - startSLSQP
print costvFResultSLSQP
#%%snakeviz
#startSLSQPTest = timeit.default_timer()
#costvFResultSLSQPTest = minimize(costvFTest,initGuess,method="SLSQP")
#stopSLSQPTest = timeit.default_timer()
#print stopSLSQPTest - startSLSQPTest
#print costvFResultSLSQPTest
print costvFResultSLSQP.get('x')
print np.asarray(np.split(costvFResultSLSQP.get('x'),12))
#print CostOfStructure(np.asarray(np.split(costvFResultSLSQP.get('x'),12)))  # CostOfStructure is not defined in this notebook
#np.savetxt('./assets/homogen/optimize_ResultSLSQPT2120_Vac.txt', costvFResultSLSQP.get('x'), delimiter = ',')
"""
Explanation: initGuessIter = initGuess
for TOLERANCE in np.asarray([1e-19,1e-20,1e-21,1e-22]):
costvFResultSLSQP = minimize(costvF,initGuessIter,method="SLSQP",tol=TOLERANCE)
initGuessIter = costvFResultSLSQP.get("x")
print TOLERANCE,costvFResultSLSQP
End of explanation
"""
# costvFResultSLSQPx = np.genfromtxt('./assets/homogen/optimize_ResultSLSQP.txt', delimiter = ',')
## The first element of neutrino density matrix
xresult = np.asarray(costvFResultSLSQP.get('x'))
#xresult = np.asarray(costvFResultNM.get('x'))
#xresult = np.asarray(costvFResultBFGS.get('x'))
print xresult
plttlin=np.linspace(0,2,100)
pltdata11 = np.array([])
pltdata11Test = np.array([])
pltdata22 = np.array([])
pltdata12 = np.array([])
pltdata21 = np.array([])
for i in plttlin:
pltdata11 = np.append(pltdata11 ,rho(xresult,i,init)[0,0] )
print pltdata11
for i in plttlin:
pltdata12 = np.append(pltdata12 ,rho(xresult,i,init)[0,1] )
print pltdata12
for i in plttlin:
pltdata21 = np.append(pltdata21 ,rho(xresult,i,init)[1,0] )
print pltdata21
#for i in plttlin:
# pltdata11Test = np.append(pltdata11Test ,rho(xresultTest,i,init)[0,0] )
#
#print pltdata11Test
for i in plttlin:
pltdata22 = np.append(pltdata22 ,rho(xresult,i,init)[1,1] )
print pltdata22
print rho(xresult,0,init)
rho(xresult,1,init)
#np.savetxt('./assets/homogen/optimize_pltdatar11.txt', pltdata11, delimiter = ',')
#np.savetxt('./assets/homogen/optimize_pltdatar22.txt', pltdata22, delimiter = ',')
plt.figure(figsize=(16,9.36))
plt.ylabel('rho11')
plt.xlabel('Time')
plt11=plt.plot(plttlin,pltdata11,"b4-",label="vac_rho11")
plt.plot(plttlin,np.exp(-plttlin)*1,"r+")
#plt.plot(plttlin,pltdata11Test,"m4-",label="vac_rho11Test")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="vac_HG-rho11")
# tls.embed("https://plot.ly/~emptymalei/73/")
plt.figure(figsize=(16,9.36))
plt.ylabel('rho12')
plt.xlabel('Time')
plt12=plt.plot(plttlin,pltdata12,"b4-",label="vac_rho12")
plt.plot(plttlin,np.exp(-plttlin)*2.0,"r+")
#plt.plot(plttlin,pltdata11Test,"m4-",label="vac_rho11Test")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="vac_HG-rho11")
# tls.embed("https://plot.ly/~emptymalei/73/")
plt.figure(figsize=(16,9.36))
plt.ylabel('rho21')
plt.xlabel('Time')
plt11=plt.plot(plttlin,pltdata21,"b4-",label="vac_rho21")
plt.plot(plttlin,np.exp(-plttlin)*3,"r+")
#plt.plot(plttlin,pltdata11Test,"m4-",label="vac_rho11Test")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="vac_HG-rho11")
# tls.embed("https://plot.ly/~emptymalei/73/")
plt.figure(figsize=(16,9.36))
plt.ylabel('rho22')
plt.xlabel('Time')
plt22=plt.plot(plttlin,pltdata22,"r4-",label="vac_rho22")
plt.plot(plttlin,np.exp(-plttlin)*4,"r+")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="vac_HG-rho22")
MMA_optmize_Vac_pltdata = np.genfromtxt('./assets/homogen/MMA_optmize_Vac_pltdata.txt', delimiter = ',')
plt.figure(figsize=(16,9.36))
plt.ylabel('MMArho11')
plt.xlabel('Time')
plt.plot(np.linspace(0,15,4501),MMA_optmize_Vac_pltdata,"r-",label="MMAVacrho11")
plt.plot(plttlin,pltdata11,"b4-",label="vac_rho11")
plt.show()
#py.iplot_mpl(plt.gcf(),filename="MMA-rho11-Vac-80-60")
"""
Explanation: Functions
Find the solutions for each element.
End of explanation
"""
xtemp1 = np.arange(4)
xtemp1.shape = (2,2)
print xtemp1
xtemp1[0,1]
np.dot(xtemp1,xtemp1)
xtemp1[0,1]
"""
Explanation: Practice
End of explanation
"""
|
bmeaut/python_nlp_2017_fall | course_material/11_Machine_Translation/11_Machine_Translation_lab.ipynb | mit | import os
import shutil
import urllib
import nltk
def download_file(url, directory=''):
real_dir = os.path.realpath(directory)
if not os.path.isdir(real_dir):
os.makedirs(real_dir)
file_name = url.rsplit('/', 1)[-1]
real_file = os.path.join(real_dir, file_name)
if not os.path.isfile(real_file):
with urllib.request.urlopen(url) as inf:
with open(real_file, 'wb') as outf:
shutil.copyfileobj(inf, outf)
"""
Explanation: 11. Machine Translation — Lab exercises
Preparations
Introduction
In this lab, we will be using Python Natural Language Toolkit (nltk) again to get to know the IBM models better. There are proper, open-source MT systems out there (such as Apertium and MOSES); however, getting to know them would require more than 90 minutes.
Infrastructure
For today's exercises, you will need the docker image again. Provided you have already downloaded it last time, you can start it by:
docker ps -a: lists all the containers you have created. Pick the one you used last time (with any luck, there is only one)
docker start <container id>
docker exec -it <container id> bash
When that's done, update your git repository:
bash
cd /nlp/python_nlp_2017_fall/
git pull
If git pull returns with errors, it is most likely because some of your files have changes in it (most likely the morphology or syntax notebooks, which you worked on the previous labs). You can check this with git status. If the culprit is the file A.ipynb, you can resolve this problem like so:
cp A.ipynb A_mine.ipynb
git checkout A.ipynb
After that, git pull should work.
And start the notebook:
jupyter notebook --port=8888 --ip=0.0.0.0 --no-browser --allow-root
If you started the notebook, but cannot access it in your browser, make sure jupyter is not running on the host system as well. If so, stop it.
Boilerplate
The following code imports the packages and defines the functions we are going to use.
End of explanation
"""
def read_files(directory=''):
pass
"""
Explanation: Exercises
1. Corpus acquisition
We download and preprocess a subset of the Hunglish corpus. It consists of English-Hungarian translation pairs extracted from open-source software documentation. The sentences are already aligned, but the corpus lacks word alignment.
1.1 Download
Download the corpus. The url is ftp://ftp.mokk.bme.hu/Hunglish2/softwaredocs/bi/opensource_X.bi, where X is a number that ranges from 1 to 9. Use the download_file function defined above.
1.2 Conversion
Read the whole corpus (all files). Try not to read it all into memory. Write a function that
reads all files you have just downloaded
is a generator that yields tuples (Hungarian snippet, English snippet)
Note:
- the files are encoded with the iso-8859-2 (a.k.a. Latin-2) encoding
- the Hungarian and English snippets are separated by a tab
- don't forget to strip whitespace from the returned snippets
- throw away pairs with empty snippets
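One possible sketch of the reader (written for Python 3; it assumes the files were saved by download_file under their original opensource_X.bi names):

```python
import glob
import os

def read_files(directory=''):
    real_dir = os.path.realpath(directory)
    for path in sorted(glob.glob(os.path.join(real_dir, 'opensource_*.bi'))):
        with open(path, encoding='iso-8859-2') as f:
            for line in f:
                hu, _, en = line.partition('\t')
                hu, en = hu.strip(), en.strip()
                if hu and en:        # throw away pairs with an empty side
                    yield hu, en
```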
End of explanation
"""
from nltk.tokenize import sent_tokenize, word_tokenize
"""
Explanation: 1.3 Tokenization
The text is not tokenized. Use nltk's word_tokenize() function to tokenize the snippets. Also, lowercase them. You can do this in read_files() above if you wish, or in the code you write for 1.4 below.
Note:
- The model for the sentence tokenizer (punkt) is not installed by default. You have to download() it.
- NLTK doesn't have Hungarian tokenizer models, so there might be errors in the Hungarian result
- instead of just lowercasing everything, we might have chosen a more sophisticated solution, e.g. by first calling sent_tokenize() and then just lowercase the word at the beginning of the sentence, or even better, tag the snippets for NER. However, we have neither the time nor the resources (models) to do that now.
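To illustrate the lowercase-and-tokenize step without the punkt model, here is a crude regex stand-in (an illustration only — the exercise itself should use nltk's word_tokenize):

```python
import re

def simple_tokenize(snippet):
    # words and standalone punctuation marks, lowercased
    return [t.lower() for t in re.findall(r"\w+|[^\w\s]", snippet, re.UNICODE)]
```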
End of explanation
"""
from nltk.translate.api import AlignedSent
bitext = [] # Your code here
assert len(bitext) == 135439
"""
Explanation: 1.4 Create the training corpus
The models we are going to try expect a list of nltk.translate.api.AlignedSent objects. Create a bitext variable that is a list of AlignedSent objects created from the preprocessed, tokenized corpus.
Note that AlignedSent also allows you to specify an alignment between the words in the two texts. Unfortunately (but not unexpectedly), the corpus doesn't have this information.
End of explanation
"""
from nltk.translate import IBMModel1
ibm1 = IBMModel1(bitext, 5)
"""
Explanation: 2. IBM Models
NLTK implements IBM models 1-5. Unfortunately, the implementations don't provide the end-to-end machine translation systems, only their alignment models.
2.1 IBM Model 1
Train an IBM Model 1 alignment. We do it in a separate code block, so that we don't rerun it by accident – training even a simple model takes some time.
End of explanation
"""
from nltk.translate.ibm_model import AlignmentInfo
def alignment_to_info(alignment):
"""Converts from an Alignment object to the alignment format required by AlignmentInfo."""
pass
assert alignment_to_info([(0, 2), (1, 1), (2, 3)]) == (0, 3, 2, 4)
"""
Explanation: 2.2 Alignment conversion
While the model doesn't have a translate() function, it does provide a way to compute the translation probability $P(F|E)$ with some additional codework. That additional work is what you have to put in.
Remember that the formula for the translation probability is $P(F|E) = \sum_AP(F,A|E)$. Computing $P(F,A|E)$ is a bit hairy; luckily IBMModel1 has a method to calculate at least part of it: prob_t_a_given_s(), which is in fact only $P(F|A,E)$. This function accepts an AlignmentInfo object that contains the source and target sentences as well as the alignment between them.
Unfortunately, AlignmentInfo's representation of an alignment is completely different from the Alignment object's. Your first task is to do the conversion from the latter to the former. Given the example pair John loves Mary / De szereti János Marcsit,
- Alignment is basically a list of source-target, 0-based index pairs, [(0, 2), (1, 1), (2, 3)]
- The alignment in the AlignmentInfo objects is a tuple (!), where the ith position is the index of the target word that is aligned to the ith source word, or 0, if the ith source word is unaligned. Indices are 1-based, because the 0th word is NULL on both sides (see lecture page 35, slide 82). The tuple you return must also contain the alignment for this NULL word, which is not aligned with the NULL on the other side - in other words, the returned tuple starts with a 0. Example: (0, 3, 2, 4). If multiple target words are aligned with the same source word, you are free to use the index of any of them.
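A possible implementation of the conversion (a sketch; when several target words align to the same source word it simply keeps the last one, which the exercise allows):

```python
def alignment_to_info(alignment, n_src=None):
    # n_src = number of source words; inferred from the largest index if omitted
    if n_src is None:
        n_src = max(src for src, _ in alignment) + 1
    info = [0] * (n_src + 1)      # position 0 is the NULL word, left unaligned
    for src, tgt in alignment:
        info[src + 1] = tgt + 1   # shift both sides to 1-based indices
    return tuple(info)
```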
End of explanation
"""
def prob_f_a_e(model, src_sentence, tgt_sentence, alig_in_tuple_format):
pass
"""
Explanation: 2.3. Compute $P(F,A|E)$
Your task is to write a function that, given a source and a target sentence and an alignment, creates an AlignmentInfo object and calls prob_t_a_given_s() of the model with it. The code here (test_prob_t_a_given_s()) might give you some clue as to how to construct the object.
Since prob_t_a_given_s() only computes $P(F|A,E)$, you have to add the $P(A|E)$ component. See page 38, slide 95 and page 39, slide 100 in the lecture. What are $J$ and $K$ in the inverse setup?
Important: "interestingly", prob_t_a_given_s() translates from target to source. However, you still want to translate from source to target, so take care when filling the fields of the AlignmentInfo object.
Also note:
1. the alignment you pass to the function should already be in the right (AlignmentInfo) format. Don't bother converting it for now!
2. Test cases for Exercises 2.3 – 2.5 are available below Exercise 2.5.
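As a reminder of the missing factor: IBM Model 1 assumes a uniform alignment prior, so

$$P(A\mid E) = \frac{1}{(K+1)^{J}},$$

where $J$ is the length of the sentence being generated and $K$ the length of the sentence being conditioned on. In the inverse setup used by prob_t_a_given_s(), $F$ and $E$ swap roles, so $J$ and $K$ swap with them.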
End of explanation
"""
def prob_best_a(model, aligned_sent):
pass
"""
Explanation: 2.4. Compute $P(F, A_{best}|E)$
Write a function that, given an AlignedSent object, computes $P(F,A|E)$. Since IBMModel1 aligns the sentences of the training set with the most probable alignment, this function will effectively compute $P(F,A_{best}|E)$.
Don't forget to convert the alignment with the function you wrote in Exercise 2.2 before passing it to prob_f_a_e().
End of explanation
"""
def prob_f_e(model, aligned_sent):
pass
"""
Explanation: 2.5. Compute $P(F|E)$
Write a function that, given an AlignedSent object, computes $P(F|E)$. It should enumerate all possible alignments (in the tuple format) and call the function you wrote in Exercise 2.3 with them.
Note: the itertools.product function can be very useful in enumerating the alignments.
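For instance, all alignment tuples for a source sentence of n_src words and a target sentence of n_tgt words can be generated like this (a sketch; position 0 of each tuple is the NULL word and stays 0):

```python
import itertools

def all_alignments(n_src, n_tgt):
    # each source word may map to a target position 1..n_tgt, or to 0 (unaligned)
    for tail in itertools.product(range(n_tgt + 1), repeat=n_src):
        yield (0,) + tail
```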
End of explanation
"""
import numpy
testext = [
AlignedSent(['klein', 'ist', 'das', 'haus'], ['the', 'house', 'is', 'small']),
AlignedSent(['das', 'haus', 'ist', 'ja', 'groß'], ['the', 'house', 'is', 'big']),
AlignedSent(['das', 'buch', 'ist', 'ja', 'klein'], ['the', 'book', 'is', 'small']),
AlignedSent(['das', 'haus'], ['the', 'house']),
AlignedSent(['das', 'buch'], ['the', 'book']),
AlignedSent(['ein', 'buch'], ['a', 'book'])
]
ibm2 = IBMModel1(testext, 5)
# Tests for Exercise 2.3
assert numpy.allclose(prob_f_a_e(ibm2, ['ein', 'buch'], ['a', 'book'], (0, 1, 2)), 0.08283000979778607)
assert numpy.allclose(prob_f_a_e(ibm2, ['ein', 'buch'], ['a', 'book'], (0, 2, 1)), 0.0002015158225914316)
# Tests for Exercise 2.4
assert numpy.allclose(prob_best_a(ibm2, testext[4]), 0.059443309368677)
assert numpy.allclose(prob_best_a(ibm2, testext[2]), 1.3593610057711997e-05)
# Tests for Exercise 2.5
assert numpy.allclose(prob_f_e(ibm2, testext[4]), 0.13718805082588842)
assert numpy.allclose(prob_f_e(ibm2, testext[2]), 0.0001809283308942621)
"""
Explanation: Test cases for Exercises 2.3 – 2.5.
End of explanation
"""
from collections import defaultdict
from math import log
from nltk.translate import PhraseTable
from nltk.translate.stack_decoder import StackDecoder
# The (probabilistic) phrase table
phrase_table = PhraseTable()
phrase_table.add(('niemand',), ('nobody',), log(0.8))
phrase_table.add(('niemand',), ('no', 'one'), log(0.2))
phrase_table.add(('erwartet',), ('expects',), log(0.8))
phrase_table.add(('erwartet',), ('expecting',), log(0.2))
phrase_table.add(('niemand', 'erwartet'), ('one', 'does', 'not', 'expect'), log(0.1))
phrase_table.add(('die', 'spanische', 'inquisition'), ('the', 'spanish', 'inquisition'), log(0.8))
phrase_table.add(('!',), ('!',), log(0.8))
# The "language model"
language_prob = defaultdict(lambda: -999.0)
language_prob[('nobody',)] = log(0.5)
language_prob[('expects',)] = log(0.4)
language_prob[('the', 'spanish', 'inquisition')] = log(0.2)
language_prob[('!',)] = log(0.1)
# Note: type() with three parameters creates a new type object
language_model = type('',(object,), {'probability_change': lambda self, context, phrase: language_prob[phrase],
'probability': lambda self, phrase: language_prob[phrase]})()
stack_decoder = StackDecoder(phrase_table, language_model)
stack_decoder.translate(['niemand', 'erwartet', 'die', 'spanische', 'inquisition', '!'])
"""
Explanation: 3. Phrase-based translation
NLTK also has some functions related to phrase-based translation, but these are all but finished. The components are scattered into two packages:
- phrase_based defines the function phrase_extraction() that can extract phrases from parallel text, based on an alignment
- stack_decoder defines the StackDecoder object, which can be used to translate sentences based on a phrase table and a language model
3.1. Decoding example
If you are wondering where the rest of the training functionality is, you spotted the problem: unfortunately, the part that assembles the phrase table based on the extracted phrases is missing. Also missing are the classes that represent and compute a language model. So in the code block below, we only run the decoder on an example sentence with a "hand-crafted" model.
Note: This is the same code as in the documentation of the decoder (above).
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_read_bem_surfaces.ipynb | bsd-3-clause | # Author: Jaakko Leppakangas <jaeilepp@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
from mayavi import mlab
import mne
from mne.datasets.sample import data_path
print(__doc__)
data_path = data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname)
subjects_dir = op.join(data_path, 'subjects')
"""
Explanation: Reading BEM surfaces from a forward solution
Plot BEM surfaces used for forward solution generation.
End of explanation
"""
mne.viz.plot_trans(raw.info, trans=None, subject='sample',
subjects_dir=subjects_dir, meg_sensors=[], eeg_sensors=[],
head='outer_skin', skull=['inner_skull', 'outer_skull'])
mlab.view(40, 60)
"""
Explanation: Here we use :func:mne.viz.plot_trans with trans=None to plot only the
surfaces without any transformations. For plotting transformation, see
tut_forward.
End of explanation
"""
|
maelick/GitHub-Analysis | SANER2016/notebooks/CRAN Distribution of GitHub Packages.ipynb | gpl-2.0 | import pandas
from matplotlib import pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
#set_matplotlib_formats('pdf')
cran_release = pandas.DataFrame.from_csv('../data/cran-packages-150601.csv', index_col=None)
data = pandas.DataFrame.from_csv('../data/github-cran-bioc-alldata-150420.csv', index_col=None)
github = data.query('Source == "github"').sort('Date').drop_duplicates('Package').set_index('Package')
github.rename(columns={'Date': 'github'}, inplace=True)
cran = cran_release.sort('mtime').drop_duplicates('package').rename(columns={'package': 'Package', 'mtime': 'cran'})[['Package', 'cran']].set_index('Package')
# Keep GitHub packages
packages = github.join(cran, how='left')
# Fix datetimes
packages['github'] = pandas.to_datetime(packages['github']).astype('datetime64[ns]')
packages['cran'] = pandas.to_datetime(packages['cran']).astype('datetime64[ns]')
# Compute delta & sort (cosmetic)
packages['elapsed'] = packages['cran'] - packages['github']
packages.sort('elapsed', inplace=True)
# In days
packages['elapsed'] = packages['elapsed'] / pandas.np.timedelta64(1, 'D')
# Do GitHub packages have a CRAN dependency?
packages.fillna({'Depends': '', 'Imports': ''}, inplace=True)
cran_deps = lambda r: any((p in cran.index for p in
[x.strip() for x in r['Depends'].split(' ') if len(x.strip())>0] + [x.strip() for x in r['Imports'].split(' ') if len(x.strip())>0]
))
packages['cran_deps'] = packages.apply(cran_deps, axis=1)
ax = packages['elapsed'].plot(kind='hist', bins=50)
ax.set_title('Time needed for a GitHub package to appear on CRAN')
ax.figure.set_size_inches(12, 4)
ax.set_xlabel('Duration in days')
ax.set_ylabel('Number of GitHub R packages')
"""
Explanation: CRAN Distribution of GitHub Packages
We study the CRAN distribution of GitHub R Packages. In particular, we focus on a survival analysis based on the time needed for a R package on GitHub to appear on CRAN.
End of explanation
"""
packages.sort('elapsed', ascending=False)[:5][['github', 'cran', 'elapsed']]
"""
Explanation: Which are those outliers?
End of explanation
"""
import lifelines
survival = packages.copy()
# Remove packages that were first on CRAN
survival = survival.fillna(pandas.datetime(2050,1,1)).query('github < cran').replace(pandas.datetime(2050,1,1), pandas.np.nan)
# Observed packages
survival['observed'] = survival['elapsed'].apply(lambda x: not pandas.np.isnan(x))
# Censored packages (NaN) are set to now
survival['elapsed'] = survival.apply(lambda r: r['elapsed'] if not pandas.np.isnan(r['elapsed'])
else (pandas.datetime.now() - r['github']) / pandas.np.timedelta64(1, 'D'), axis=1)
print len(packages), len(survival), len(survival[survival['observed']])
kmf = lifelines.KaplanMeierFitter()
plot_groups = [
[
{'label': 'All', 'df': survival.query('observed or not observed')},
{'label': 'Without outliers', 'df': survival.query('observed == False or elapsed < 1200')},
],[
{'label': 'Version >= 1', 'df': survival[survival['Version'].str[0] >= '1']},
{'label': 'Version < 1', 'df': survival[survival['Version'].str[0] < '1']},
],[
{'label': 'With CRAN dependencies', 'df': survival.query('cran_deps == True')},
{'label': 'Without CRAN dependencies', 'df': survival.query('cran_deps == False')},
],
]
for i, group in enumerate(plot_groups):
ax = plt.subplot(len(plot_groups), 1, i+1)
for cond in group:
print '{}: {} items, {} observed'.format(cond['label'], len(cond['df']), len(cond['df'].query('observed == True')))
kmf.fit(cond['df']['elapsed'], event_observed=cond['df']['observed'], label=cond['label'])
ax = kmf.plot(ax=ax)
ax.set_title('Kaplan Meier Estimation, duration needed for a GitHub package to appear on CRAN')
ax.figure.set_size_inches(12, 8)
ax.set_xlabel('Duration (in days)')
"""
Explanation: Let's prepare the data for a survival analysis.
End of explanation
"""
|
mxbu/logbook | blog-notebooks/optimization.ipynb | mit | import pandas as pd
import numpy as np
from math import sqrt
import sys
from bokeh.plotting import figure, show, ColumnDataSource, save
from bokeh.models import Range1d, HoverTool
from bokeh.io import output_notebook, output_file
import quandl
from gurobipy import *
# output_notebook() #To enable Bokeh output in notebook, uncomment this line
"""
Explanation: Portfolio Optimization using Quandl, Bokeh and Gurobi
Borrowed and updated from Michael C. Grant, Continuum Analytics
End of explanation
"""
APIToken = "xxx-xxxxxx"
quandlcodes = ["GOOG/NASDAQ_AAPL.4","WIKI/GOOGL.4", "GOOG/NASDAQ_CSCO.4","GOOG/NASDAQ_FB.4",
"GOOG/NASDAQ_MSFT.4","GOOG/NASDAQ_TSLA.4","GOOG/NASDAQ_YHOO.4","GOOG/PINK_CSGKF.4",
"YAHOO/F_EOAN.4","YAHOO/F_BMW.4","YAHOO/F_ADS.4","GOOG/NYSE_ABB.4","GOOG/VTX_ADEN.4",
"GOOG/VTX_NOVN.4","GOOG/VTX_HOLN.4","GOOG/NYSE_UBS.4", "GOOG/NYSE_SAP.4", "YAHOO/SW_SNBN.4",
"YAHOO/IBM.4", "YAHOO/RIG.4" , "YAHOO/CTXS.4", "YAHOO/INTC.4","YAHOO/KO.4",
"YAHOO/NKE.4","YAHOO/MCD.4","YAHOO/EBAY.4","GOOG/VTX_NESN.4","YAHOO/MI_ALV.4","YAHOO/AXAHF.4",
"GOOG/VTX_SREN.4"]
"""
Explanation: First of all, we need some data to proceed. For that purpose we use Quandl. First, you're going to need the quandl package. This isn't strictly necessary, as pulling from the API is quite simple with or without the package, but it does make things a bit easier and knocks out a few steps. The Quandl package can be downloaded here. Once quandl is set up, the next thing to do is to choose some stocks to import. The following is an arbitrary selection of stocks.
End of explanation
"""
data = quandl.get(quandlcodes,authtoken=APIToken, trim_start='2009-01-01', trim_end='2016-11-09', paginate=True, per_end_date={'gte': '2009-01-01'},
qopts={'columns':['ticker', 'per_end_date']})
"""
Explanation: The command to import those stocks is quandl.get(). With trim_start and trim_end we can choose a desired time horizon.
End of explanation
"""
GrowthRates = data.pct_change()*100
syms = GrowthRates.columns
Sigma = GrowthRates.cov()
stats = pd.concat((GrowthRates.mean(),GrowthRates.std()),axis=1)
stats.columns = ['Mean_return', 'Volatility']
extremes = pd.concat((stats.idxmin(),stats.min(),stats.idxmax(),stats.max()),axis=1)
extremes.columns = ['Minimizer','Minimum','Maximizer','Maximum']
stats
"""
Explanation: Let's now calculate the growth rates and some stats:
End of explanation
"""
fig = figure(tools="pan,box_zoom,reset,resize")
source = ColumnDataSource(stats)
hover = HoverTool(tooltips=[('Symbol','@index'),('Volatility','@Volatility'),('Mean return','@Mean_return')])
fig.add_tools(hover)
fig.circle('Volatility', 'Mean_return', size=5, color='maroon', source=source)
fig.text('Volatility', 'Mean_return', syms, text_font_size='10px', x_offset=3, y_offset=-2, source=source)
fig.xaxis.axis_label='Volatility (standard deviation)'
fig.yaxis.axis_label='Mean return'
output_file("portfolio.html")
show(fig)
"""
Explanation: As we move towards our Markowitz portfolio designs it makes sense to view the stocks on a mean/variance scatter plot.
End of explanation
"""
# Instantiate our model
m = Model("portfolio")
# Create one variable for each stock
portvars = [m.addVar(name=symb,lb=0.0) for symb in syms]
portvars[7]=m.addVar(name='GOOG/PINK_CSGKF - Close',lb=0.0,ub=0.5)
portvars = pd.Series(portvars, index=syms)
portfolio = pd.DataFrame({'Variables':portvars})
# Commit the changes to the model
m.update()
# The total budget
p_total = portvars.sum()
# The mean return for the portfolio
p_return = stats['Mean_return'].dot(portvars)
# The (squared) volatility of the portfolio
p_risk = Sigma.dot(portvars).dot(portvars)
# Set the objective: minimize risk
m.setObjective(p_risk, GRB.MINIMIZE)
# Fix the budget
m.addConstr(p_total, GRB.EQUAL, 1)
# Select a simplex algorithm (to ensure a vertex solution)
m.setParam('Method', 1)
m.optimize()
"""
Explanation: Gurobi
Time to bring in the big guns. Expressed in mathematical terms, we will be solving models in this form:
$$\begin{array}{lll}
\text{minimize} & x^T \Sigma x \
\text{subject to} & \sum_i x_i = 1 & \text{fixed budget} \
& r^T x = \gamma & \text{fixed return} \
& x \geq 0
\end{array}$$
In this model, the optimization variable $x\in\mathbb{R}^N$ is a vector representing the fraction of the budget allocated to each stock; that is, $x_i$ is the amount allocated to stock $i$. The paramters of the model are the mean returns $r$, a covariance matrix $\Sigma$, and the target return $\gamma$. What we will do is sweep $\gamma$ between the worst and best returns we have seen above, and compute the portfolio that achieves the target return but with as little risk as possible.
The covariance matrix $\Sigma$ merits some explanation. Along the diagonal, it contains the squares of the volatilities (standard deviations) computed above. But off the diagonal, it contains measures of the correlation between two stocks: that is, whether they tend to move in the same direction (positive correlation), in opposite directions (negative correlation), or a mixture of both (small correlation). This entire matrix is computed with a single call to Pandas.
Building the base model
We are not solving just one model here, but literally hundreds of them, with different return targets and with the short constraints added or removed. One very nice feature of the Gurobi Python interface is that we can build a single "base" model, and reuse it for each of these scenarios by adding and removing constraints.
First, let's initialize the model and declare the variables: one nonnegative allocation variable per stock (with an extra upper bound of 0.5 on one of them), collected in a Pandas DataFrame for easy organization. Another nice feature of Gurobi's Python interface is that the variable objects can be used in simple linear and quadratic expressions using familiar Python syntax.
End of explanation
"""
portfolio['Minimum risk'] = portvars.apply(lambda x:x.getAttr('x'))
portfolio
# Add the return target
ret50 = 0.5 * extremes.loc['Mean_return','Maximum']
fixreturn = m.addConstr(p_return, GRB.EQUAL, ret50)
m.optimize()
portfolio['50% Max'] = portvars.apply(lambda x:x.getAttr('x'))
"""
Explanation: Minimum Risk Model
We have set our objective to minimize risk and fixed our budget at 1, so the model we solved above gives us the minimum-risk portfolio.
End of explanation
"""
m.setParam('OutputFlag',False)
# Determine the range of returns. Make sure to include the lowest-risk
# portfolio in the list of options
minret = extremes.loc['Mean_return','Minimum']
maxret = extremes.loc['Mean_return','Maximum']
# Mean return of the minimum-risk portfolio
riskret = sum(portfolio['Minimum risk']*stats['Mean_return'])
returns = np.unique(np.hstack((np.linspace(minret,maxret,10000),riskret)))
# Iterate through all returns
risks = returns.copy()
for k in range(len(returns)):
fixreturn.rhs = returns[k]
m.optimize()
risks[k] = sqrt(p_risk.getValue())
fig = figure(tools="pan,box_zoom,reset,resize")
# Individual stocks
fig.circle(stats['Volatility'], stats['Mean_return'], size=5, color='maroon')
fig.text(stats['Volatility'], stats['Mean_return'], syms, text_font_size='10px', x_offset=3, y_offset=-2)
fig.circle('Volatility', 'Mean_return', size=5, color='maroon', source=source)
# Divide the efficient frontier into two sections: those with
# a return less than the minimum risk portfolio, those that are greater.
tpos_n = returns >= riskret
tneg_n = returns <= riskret
fig.line(risks[tneg_n], returns[tneg_n], color='red')
fig.line(risks[tpos_n], returns[tpos_n], color='blue')
fig.xaxis.axis_label='Volatility (standard deviation)'
fig.yaxis.axis_label='Mean return'
fig.legend.orientation='bottom_left'
output_file("efffront.html")
show(fig)
"""
Explanation: The efficient frontier
Now what we're going to do is sweep our return target over a range of values, starting at the smallest possible value to the largest. For each, we construct the minimum-risk portfolio. This will give us a tradeoff curve that is known in the business as the efficient frontier or the Pareto-optimal curve.
Note that we're using the same model object we've already constructed! All we have to do is set the return target and re-optimize for each value of interest.
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/lite/tutorials/model_maker_question_answer.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install -q tflite-model-maker
"""
Explanation: BERT Question Answering with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lite/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying the model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answering model for a question answering task.
Introduction to the BERT question answering task
The supported task in this library is the extractive question answering task: given a passage and a question, the answer is a span in the passage. The image below shows an example of question answering.
<p align="center"><img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png"></p>
<p align="center">
<em>Answers are spans in the passage (image credit: <a href="https://rajpurkar.github.io/mlx/qa-and-squad/">SQuAD blog</a>)</em>
</p>
For the question answering task, the inputs are a preprocessed passage/question pair, and the outputs are the start and end logits for each token of the passage. The size of the input is configurable and can be adjusted according to the lengths of the passage and question.
End-to-end overview
The following code snippet shows how to obtain the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate it, and (5) export it to the TensorFlow Lite format.
```python
Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')
Gets the training data and validation data.
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)
Gets the evaluation result.
metric = model.evaluate(validation_data)
Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```
The following sections explain the code above in more detail.
Prerequisites
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
"""
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.question_answer import DataLoader
"""
Explanation: Import the required packages.
End of explanation
"""
spec = model_spec.get('mobilebert_qa_squad')
"""
Explanation: The "End-to-end overview" above demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
Choose a model_spec that represents a model for question answering
Each model_spec object represents a specific model for question answering. Model Maker currently supports MobileBERT and BERT-Base models.
Supported models | Name of model_spec | Model description
--- | --- | ---
MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenarios.
MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as the MobileBERT model, with the initial model already retrained on SQuAD1.1.
BERT-Base | 'bert_qa' | Standard BERT model that is widely used in NLP tasks.
This tutorial uses MobileBERT-SQuAD as an example. Since the model is already retrained on SQuAD1.1, it may converge faster for question answering tasks.
End of explanation
"""
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
"""
Explanation: Load input data specific to an on-device ML app and preprocess the data
TriviaQA is a reading-comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.
To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code slightly as follows:
Skip the samples that could not find any answer in the context document.
Get the original answer in the context, ignoring case.
Download the archived version of the already-converted dataset.
End of explanation
"""
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
"""
Explanation: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data using the left sidebar.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File">
If you prefer not to upload your data to the cloud, you can also run the library offline by following the guide.
Use the DataLoader.from_squad method to load and preprocess SQuAD-format data according to a specific model_spec. You can use either the SQuAD2.0 or SQuAD1.1 format. Setting the parameter version_2_with_negative to True means the format is SQuAD2.0; otherwise it is SQuAD1.1. By default, version_2_with_negative is False.
End of explanation
"""
model = question_answer.create(train_data, model_spec=spec)
"""
Explanation: Customize the TensorFlow model
Create a custom question answering model based on the loaded data. The create function comprises the following steps:
Creates the model for question answering according to model_spec.
Trains the question answering model. The default epochs and default batch size are set by the two variables default_training_epochs and default_batch_size in the model_spec object.
End of explanation
"""
model.summary()
"""
Explanation: Take a look at the detailed model structure.
End of explanation
"""
model.evaluate(validation_data)
"""
Explanation: Evaluate the customized model
Evaluate the model on the validation data and get a dict of the metrics, including the f1 score and exact match. Note that the metrics for SQuAD1.1 and SQuAD2.0 are different.
End of explanation
"""
model.export(export_dir='.')
"""
Explanation: Export to a TensorFlow Lite model
Convert the trained model to the TensorFlow Lite model format with metadata so that it can later be used in an on-device ML application. The vocab file is embedded in the metadata. The default TFLite filename is model.tflite.
In many on-device ML applications, the model size is an important factor, so it is recommended to apply quantization to make the model smaller and potentially run faster. The default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.
End of explanation
"""
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
"""
Explanation: You can use the TensorFlow Lite model file in the bert_qa reference app with the BertQuestionAnswerer API (TensorFlow Lite Task Library) by downloading it from the left sidebar of Colab.
The allowed export formats can be one or a list of the following:
ExportFormat.TFLITE
ExportFormat.VOCAB
ExportFormat.SAVED_MODEL
By default, it just exports the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, to export only the vocab file:
End of explanation
"""
model.evaluate_tflite('model.tflite', validation_data)
"""
Explanation: You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
End of explanation
"""
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
"""
Explanation: Advanced usage
The create function is the critical part of this library, in which the model_spec parameter defines the model specification. The BertQASpec class is currently supported. There are 2 models: a MobileBERT model and a BERT-Base model. The create function comprises the following steps:
Creates the model for question answering according to model_spec.
Trains the question answering model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters, and so on.
Adjust the model
You can adjust the model infrastructure, such as the parameters seq_len and query_len in the BertQASpec class.
Adjustable parameters for the model:
seq_len: Length of the passage to feed into the model.
query_len: Length of the question to feed into the model.
doc_stride: The stride when doing a sliding-window approach to take chunks of the documents.
initializer_range: The stdev of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean, whether the pre-trained layers are trainable.
Adjustable parameters for the training pipeline:
model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.
dropout_rate: The rate for dropout.
learning_rate: The initial learning rate for Adam.
predict_batch_size: Batch size for prediction.
tpu: TPU address to connect to. Only used if using TPU.
For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new model_spec.
End of explanation
"""
|
darkrun95/Applied-Data-Science | Applied Machine Learning in Python/Week 2/Assignment2.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
np.random.seed(0)
n = 15
x = np.linspace(0,10,n) + np.random.randn(n)/5
y = np.sin(x)+x/6 + np.random.randn(n)/10
X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=0)
# You can use this function to help you visualize the dataset by
# plotting a scatterplot of the data points
# in the training and test sets.
def part1_scatter():
%matplotlib notebook
plt.figure()
plt.scatter(X_train, y_train, label='training data')
plt.scatter(X_test, y_test, label='test data')
plt.legend(loc=4);
# NOTE: Uncomment the function below to visualize the data, but be sure
# to **re-comment it before submitting this assignment to the autograder**.
part1_scatter()
"""
Explanation: You are currently looking at version 1.2 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Assignment 2
In this assignment you'll explore the relationship between model complexity and generalization performance, by adjusting key parameters of various supervised learning models. Part 1 of this assignment will look at regression and Part 2 will look at classification.
Part 1 - Regression
First, run the following block to set up the variables needed for later sections.
End of explanation
"""
def answer_one():
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
degrees = [1, 3, 6, 9]
poly = [PolynomialFeatures(degree=i) for i in degrees]
degree_predict = []
predict = np.linspace(0, 10, 100)
for i in range(0,4):
x_poly = poly[i].fit_transform(X_train.reshape(-1,1))
x_predict = poly[i].fit_transform(predict.reshape(-1,1))
linear_model = LinearRegression()
linear_model.fit(x_poly, y_train)
degree_predict.append(linear_model.predict(x_predict))
degree_values = np.array(degree_predict)
return degree_values
def plot_one(degree_predictions):
plt.figure(figsize=(10,5))
plt.plot(X_train, y_train, 'o', label='training data', markersize=10)
plt.plot(X_test, y_test, 'o', label='test data', markersize=10)
for i,degree in enumerate([1,3,6,9]):
plt.plot(np.linspace(0,10,100), degree_predictions[i], alpha=0.8, lw=2, label='degree={}'.format(degree))
plt.ylim(-1,2.5)
plt.legend(loc=4)
plot_one(answer_one())
"""
Explanation: Question 1
Write a function that fits a polynomial LinearRegression model on the training data X_train for degrees 1, 3, 6, and 9. (Use PolynomialFeatures in sklearn.preprocessing to create the polynomial features and then fit a linear regression model) For each model, find 100 predicted values over the interval x = 0 to 10 (e.g. np.linspace(0,10,100)) and store this in a numpy array. The first row of this array should correspond to the output from the model trained on degree 1, the second row degree 3, the third row degree 6, and the fourth row degree 9.
<img src="readonly/polynomialreg1.png" style="width: 1000px;"/>
The figure above shows the fitted models plotted on top of the original data (using plot_one()).
<br>
This function should return a numpy array with shape (4, 100)
End of explanation
"""
def answer_two():
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
    from sklearn.metrics import r2_score
degrees = np.arange(0, 10)
poly = [PolynomialFeatures(degree=i) for i in degrees]
r2_train = []
r2_test = []
for i in range(0,10):
x_poly = poly[i].fit_transform(x.reshape(-1,1))
x_train_poly, x_test_poly, y_train_poly, y_test_poly = train_test_split(x_poly, y, random_state=0)
linear_model = LinearRegression()
linear_model.fit(x_train_poly, y_train_poly)
r2_train_val = linear_model.score(x_train_poly, y_train_poly)
r2_test_val = linear_model.score(x_test_poly, y_test_poly)
r2_train.append(r2_train_val)
r2_test.append(r2_test_val)
r2_train = np.array(r2_train)
r2_test = np.array(r2_test)
return r2_train, r2_test
"""
Explanation: Question 2
Write a function that fits a polynomial LinearRegression model on the training data X_train for degrees 0 through 9. For each model compute the $R^2$ (coefficient of determination) regression score on the training data as well as on the test data, and return both of these arrays in a tuple.
This function should return one tuple of numpy arrays (r2_train, r2_test). Both arrays should have shape (10,)
End of explanation
"""
def answer_plot(r2_results, degrees):
_ = plt.plot(degrees, r2_results[0], label="train data", lw=0.8)
_ = plt.plot(degrees, r2_results[1], label="test data", lw=0.8)
_ = plt.scatter(degrees, r2_results[0])
_ = plt.scatter(degrees, r2_results[1])
_ = plt.ylabel("R square value")
_ = plt.xlabel("Degree")
_ = plt.legend()
def answer_three():
r2_results = answer_two()
degrees = np.arange(0, 10)
# answer_plot(r2_results, degrees)
values = r2_results[1] - r2_results[0]
    underfitting = r2_results[0].argmin()
overfitting = values.argmin()
bestfitting = values.argmax()
return underfitting, overfitting, bestfitting
"""
Explanation: Question 3
Based on the $R^2$ scores from question 2 (degree levels 0 through 9), what degree level corresponds to a model that is underfitting? What degree level corresponds to a model that is overfitting? What choice of degree level would provide a model with good generalization performance on this dataset? Note: there may be multiple correct solutions to this question.
(Hint: Try plotting the $R^2$ scores from question 2 to visualize the relationship between degree level and $R^2$)
This function should return one tuple with the degree values in this order: (Underfitting, Overfitting, Good_Generalization)
End of explanation
"""
def answer_four():
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso, LinearRegression
    from sklearn.metrics import r2_score
degree = 12
poly = PolynomialFeatures(degree=degree)
x_poly = poly.fit_transform(x.reshape(-1,1))
X_train, X_test, y_train, y_test = train_test_split(x_poly, y, random_state=0)
linear_model = LinearRegression()
linear_model.fit(X_train, y_train)
y_pred_linear = linear_model.predict(X_test)
r2_linear = r2_score(y_test, y_pred_linear)
lasso_model = Lasso(alpha=0.01, max_iter=10000)
lasso_model.fit(X_train, y_train)
y_pred_lasso = lasso_model.predict(X_test)
r2_lasso = r2_score(y_test, y_pred_lasso)
return r2_linear, r2_lasso
answer_four()
"""
Explanation: Question 4
Training models on high degree polynomial features can result in overly complex models that overfit, so we often use regularized versions of the model to constrain model complexity, as we saw with Ridge and Lasso linear regression.
For this question, train two models: a non-regularized LinearRegression model (default parameters) and a regularized Lasso Regression model (with parameters alpha=0.01, max_iter=10000) on polynomial features of degree 12. Return the $R^2$ score for both the LinearRegression and Lasso model's test sets.
This function should return one tuple (LinearRegression_R2_test_score, Lasso_R2_test_score)
End of explanation
"""
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
mush_df = pd.read_csv('mushrooms.csv')
mush_df2 = pd.get_dummies(mush_df)
X_mush = mush_df2.iloc[:,2:]
y_mush = mush_df2.iloc[:,1]
# use the variables X_train2, y_train2 for Question 5
X_train2, X_test2, y_train2, y_test2 = train_test_split(X_mush, y_mush, random_state=0)
# For performance reasons in Questions 6 and 7, we will create a smaller version of the
# entire mushroom dataset for use in those questions. For simplicity we'll just re-use
# the 25% test split created above as the representative subset.
#
# Use the variables X_subset, y_subset for Questions 6 and 7.
X_subset = X_test2
y_subset = y_test2
"""
Explanation: Part 2 - Classification
Here's an application of machine learning that could save your life! For this section of the assignment we will be working with the UCI Mushroom Data Set stored in mushrooms.csv. The data will be used to train a model to predict whether or not a mushroom is poisonous. The following attributes are provided:
Attribute Information:
cap-shape: bell=b, conical=c, convex=x, flat=f, knobbed=k, sunken=s
cap-surface: fibrous=f, grooves=g, scaly=y, smooth=s
cap-color: brown=n, buff=b, cinnamon=c, gray=g, green=r, pink=p, purple=u, red=e, white=w, yellow=y
bruises?: bruises=t, no=f
odor: almond=a, anise=l, creosote=c, fishy=y, foul=f, musty=m, none=n, pungent=p, spicy=s
gill-attachment: attached=a, descending=d, free=f, notched=n
gill-spacing: close=c, crowded=w, distant=d
gill-size: broad=b, narrow=n
gill-color: black=k, brown=n, buff=b, chocolate=h, gray=g, green=r, orange=o, pink=p, purple=u, red=e, white=w, yellow=y
stalk-shape: enlarging=e, tapering=t
stalk-root: bulbous=b, club=c, cup=u, equal=e, rhizomorphs=z, rooted=r, missing=?
stalk-surface-above-ring: fibrous=f, scaly=y, silky=k, smooth=s
stalk-surface-below-ring: fibrous=f, scaly=y, silky=k, smooth=s
stalk-color-above-ring: brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w, yellow=y
stalk-color-below-ring: brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w, yellow=y
veil-type: partial=p, universal=u
veil-color: brown=n, orange=o, white=w, yellow=y
ring-number: none=n, one=o, two=t
ring-type: cobwebby=c, evanescent=e, flaring=f, large=l, none=n, pendant=p, sheathing=s, zone=z
spore-print-color: black=k, brown=n, buff=b, chocolate=h, green=r, orange=o, purple=u, white=w, yellow=y
population: abundant=a, clustered=c, numerous=n, scattered=s, several=v, solitary=y
habitat: grasses=g, leaves=l, meadows=m, paths=p, urban=u, waste=w, woods=d
<br>
The data in the mushrooms dataset is currently encoded with strings. These values will need to be encoded to numeric to work with sklearn. We'll use pd.get_dummies to convert the categorical variables into indicator variables.
End of explanation
"""
def answer_five():
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train2, y_train2)
column_values = X_train2.columns.tolist()
feature_importance = tree.feature_importances_
feature_names = []
for i in range(0, 5):
max_index = feature_importance.argmax()
feature_importance = np.delete(feature_importance, max_index)
feature_names.append(column_values[max_index])
column_values.pop(max_index)
return feature_names
answer_five()
"""
Explanation: Question 5
Using X_train2 and y_train2 from the preceding cell, train a DecisionTreeClassifier with default parameters and random_state=0. What are the 5 most important features found by the decision tree?
As a reminder, the feature names are available in the X_train2.columns property, and the order of the features in X_train2.columns matches the order of the feature importance values in the classifier's feature_importances_ property.
This function should return a list of length 5 containing the feature names in descending order of importance.
End of explanation
"""
def answer_six():
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
svc = SVC(kernel="rbf", C=1, random_state=0)
gamma_range = np.logspace(-4,1,6)
train_scores, test_scores = validation_curve(svc, X_subset, y_subset, param_name="gamma", param_range=gamma_range)
train_scores = np.mean(train_scores, axis=1)
test_scores = np.mean(test_scores, axis=1)
return train_scores, test_scores
"""
Explanation: Question 6
For this question, we're going to use the validation_curve function in sklearn.model_selection to determine training and test scores for a Support Vector Classifier (SVC) with varying parameter values. Recall that the validation_curve function, in addition to taking an initialized unfitted classifier object, takes a dataset as input and does its own internal train-test splits to compute results.
Because creating a validation curve requires fitting multiple models, for performance reasons this question will use just a subset of the original mushroom dataset: please use the variables X_subset and y_subset as input to the validation curve function (instead of X_mush and y_mush) to reduce computation time.
The initialized unfitted classifier object we'll be using is a Support Vector Classifier with radial basis kernel. So your first step is to create an SVC object with default parameters (i.e. kernel='rbf', C=1) and random_state=0. Recall that the kernel width of the RBF kernel is controlled using the gamma parameter.
With this classifier, and the dataset in X_subset, y_subset, explore the effect of gamma on classifier accuracy by using the validation_curve function to find the training and test scores for 6 values of gamma from 0.0001 to 10 (i.e. np.logspace(-4,1,6)). Recall that you can specify what scoring metric you want validation_curve to use by setting the "scoring" parameter. In this case, we want to use "accuracy" as the scoring metric.
For each level of gamma, validation_curve will fit 3 models on different subsets of the data, returning two 6x3 (6 levels of gamma x 3 fits per level) arrays of the scores for the training and test sets.
Find the mean score across the three models for each level of gamma for both arrays, creating two arrays of length 6, and return a tuple with the two arrays.
e.g.
if one of your array of scores is
array([[ 0.5, 0.4, 0.6],
[ 0.7, 0.8, 0.7],
[ 0.9, 0.8, 0.8],
[ 0.8, 0.7, 0.8],
[ 0.7, 0.6, 0.6],
[ 0.4, 0.6, 0.5]])
it should then become
array([ 0.5, 0.73333333, 0.83333333, 0.76666667, 0.63333333, 0.5])
This function should return one tuple of numpy arrays (training_scores, test_scores) where each array in the tuple has shape (6,).
End of explanation
"""
def answer_plot(scores, gamma):
_ = plt.plot(gamma, scores[0], label="train data", lw=0.8)
_ = plt.plot(gamma, scores[1], label="test data", lw=0.8)
_ = plt.scatter(gamma, scores[0])
_ = plt.scatter(gamma, scores[1])
_ = plt.ylabel("Scores")
_ = plt.xlabel("Gamma")
_ = plt.legend()
def answer_seven():
scores = answer_six()
gamma_values = np.logspace(-4, 1, 6)
answer_plot(scores, gamma_values)
values = scores[1] - scores[0]
underfit_index = scores[0].argmin()
underfitting = gamma_values[underfit_index]
overfit_index = values.argmin()
overfitting = gamma_values[overfit_index]
bestfit_index = scores[1].argmax()
bestfitting = gamma_values[bestfit_index]
return underfitting, overfitting, bestfitting
answer_seven()
"""
Explanation: Question 7
Based on the scores from question 6, what gamma value corresponds to a model that is underfitting (and has the worst test set accuracy)? What gamma value corresponds to a model that is overfitting (and has the worst test set accuracy)? What choice of gamma would be the best choice for a model with good generalization performance on this dataset (high accuracy on both training and test set)? Note: there may be multiple correct solutions to this question.
(Hint: Try plotting the scores from question 6 to visualize the relationship between gamma and accuracy.)
This function should return one tuple with the degree values in this order: (Underfitting, Overfitting, Good_Generalization)
End of explanation
"""
|
ODZ-UJF-AV-CR/osciloskop | cerf.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
import sys
import os
import time
import h5py
import numpy as np
import glob
import vxi11
# Step 0:
# Connect oscilloscope via direct Ethernet link
# Step 1:
# Run "Record" on the oscilloscope
# and wait for 508 frames to be acquired.
# Step 2:
# Run this cell to initialize grabbing.
# This will need a rewrite
class TmcDriver:
def __init__(self, device):
print("Initializing connection to: " + device)
self.device = device
self.instr = vxi11.Instrument(device)
def write(self, command):
self.instr.write(command);
def read(self, length = 500):
return self.instr.read(length)
def read_raw(self, length = 500):
return self.instr.read_raw(length)
def getName(self):
self.write("*IDN?")
return self.read(300)
def ask(self, command):
return self.instr.ask(command)
def sendReset(self):
self.write("*RST") # Be carefull, this real resets an oscilloscope
# Default oscilloscope record timeout [s]
loop_sleep_time = 60
# For Ethernet
#osc = TmcDriver("TCPIP::147.231.24.72::INSTR")
osc = TmcDriver("TCPIP::10.1.1.35::INSTR")
print(osc.ask("*IDN?"))
"""
Explanation: Oscilloscope utility – using Ethernet
End of explanation
"""
filename = 1
run_start_time = time.time()
if (filename == 1):
for f in glob.iglob("./data2/*.h5"): # delete all .h5 files
print 'Deleting', f
os.remove(f)
else:
print 'Not removing old files, as filename {0} is not 1.'.format(filename)
while True:
osc.write(':WAV:SOUR CHAN1')
osc.write(':WAV:MODE NORM')
osc.write(':WAV:FORM BYTE')
osc.write(':WAV:POIN 1400')
osc.write(':WAV:XINC?')
xinc = float(osc.read(100))
print 'XINC:', xinc,
osc.write(':WAV:YINC?')
yinc = float(osc.read(100))
print 'YINC:', yinc,
osc.write(':TRIGger:EDGe:LEVel?')
trig = float(osc.read(100))
print 'TRIG:', trig,
osc.write(':WAVeform:YORigin?')
yorig = float(osc.read(100))
print 'YORIGIN:', yorig,
osc.write(':WAVeform:XORigin?')
xorig = float(osc.read(100))
print 'XORIGIN:', xorig,
osc.write(':FUNC:WREP:FEND?') # get number of last frame
frames = int(osc.read(100))
print 'FRAMES:', frames, 'SUBRUN', filename
# This is not good if the scaling is different and frames are for example just 254
# if (frames < 508):
# loop_sleep_time += 10
with h5py.File('./data2/t'+'{:02.0f}'.format(filename)+'.h5', 'w') as hf:
hf.create_dataset('FRAMES', data=(frames)) # write number of frames
hf.create_dataset('XINC', data=(xinc)) # write axis parameters
hf.create_dataset('YINC', data=(yinc))
hf.create_dataset('TRIG', data=(trig))
hf.create_dataset('YORIGIN', data=(yorig))
hf.create_dataset('XORIGIN', data=(xorig))
osc.write(':FUNC:WREP:FCUR 1') # skip to n-th frame
time.sleep(0.5)
for n in range(1,frames+1):
osc.write(':FUNC:WREP:FCUR ' + str(n)) # skip to n-th frame
time.sleep(0.001)
osc.write(':WAV:DATA?') # read data
#time.sleep(0.4)
wave1 = bytearray(osc.read_raw(500))
wave2 = bytearray(osc.read_raw(500))
wave3 = bytearray(osc.read_raw(500))
#wave4 = bytearray(osc.read(500))
#wave = np.concatenate((wave1[11:],wave2[:(500-489)],wave3[:(700-489)]))
wave = np.concatenate((wave1[11:],wave2,wave3[:-1]))
hf.create_dataset(str(n), data=wave)
filename = filename + 1
osc.write(':FUNC:WREC:OPER REC') # start recording
print(' Subrun finished, Enter to continue.')
raw_input()
#time.sleep(5*60) # delay for capturing
"""
Explanation: Repeatedly read records from the oscilloscope
This should be run after the initialization step. Increase the timeout at the end if not all 508 frames are transferred.
End of explanation
"""
first_run_start_time = time.time()
raw_input()
loop_sleep_time = time.time() - first_run_start_time + 15
print loop_sleep_time
loop_sleep_time = 60
"""
Explanation: Stopwatch for timing the first loop
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/snu/cmip6/models/sandbox-3/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-3', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. N512L180, T512L70, ORCA025, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component, in seconds?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, with fluxes computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation (rheology) formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/structured/solutions/3c_bqml_dnn_babyweight.ipynb | apache-2.0 | !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
pip freeze | grep google-cloud-bigquery==1.6.1 || \
pip install google-cloud-bigquery==1.6.1
"""
Explanation: LAB 3c: BigQuery ML Model Deep Neural Network.
Learning Objectives
Create and evaluate DNN model with BigQuery ML.
Create and evaluate DNN model with feature engineering with ML.TRANSFORM.
Calculate predictions with BigQuery's ML.PREDICT.
Introduction
In this notebook, we will create multiple deep neural network models to predict the weight of a baby before it is born, first with no feature engineering and then with the feature engineering from the previous lab, using BigQuery ML.
We will create and evaluate a DNN model using BigQuery ML, with and without feature engineering using BigQuery's ML.TRANSFORM and calculate predictions with BigQuery's ML.PREDICT. If you need a refresher, you can go back and look how we made a baseline model in the notebook BQML Baseline Model or how we combined linear models with feature engineering in the notebook BQML Linear Models with Feature Engineering.
Load necessary libraries
Check that the Google BigQuery library is installed and if not, install it.
End of explanation
"""
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
"""
Explanation: Verify tables exist
Run the following cells to verify that we have previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_4
OPTIONS (
MODEL_TYPE="DNN_REGRESSOR",
HIDDEN_UNITS=[64, 32],
BATCH_SIZE=32,
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_train
"""
Explanation: Model 4: Increase complexity of model using DNN_REGRESSOR
DNN_REGRESSOR is a new regression model_type vs. the LINEAR_REG that we have been using in previous labs.
MODEL_TYPE="DNN_REGRESSOR"
hidden_units: List of hidden units per layer; all layers are fully connected. The number of elements in the array is the number of hidden layers. The default value for hidden_units is [min(128, N / (α(Ni + No)))] (one hidden layer), where N is the training data size and Ni and No are the numbers of input and output units, respectively; α is a constant with value 10. The upper bound of this rule helps keep the model from overfitting. Note that model size is currently limited to 256 MB.
dropout: Probability of dropping a given coordinate during training; dropout is a very common technique to avoid overfitting in DNNs. The default value is zero, which means no coordinates are dropped during training.
batch_size: Number of samples used to train the network in each sub-iteration. The default value is Min(1024, num_examples), to balance training speed and convergence. Serving all training data in each sub-iteration may lead to convergence issues and is not advised.
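To make the quoted defaults concrete, here is a small illustrative computation (my own sketch of the rules as stated above, not BigQuery's internal code):

```python
# Illustrative only: a reading of the documented default rules,
# not BigQuery ML's actual implementation.

def default_hidden_units(n_examples, n_inputs, n_outputs, alpha=10):
    # One hidden layer sized by min(128, N / (alpha * (Ni + No))).
    units = min(128, n_examples // (alpha * (n_inputs + n_outputs)))
    return [max(1, units)]

def default_batch_size(n_examples):
    # Min(1024, num_examples).
    return min(1024, n_examples)

# Babyweight-style data: 4 input features, 1 output.
print(default_hidden_units(n_examples=10000, n_inputs=4, n_outputs=1))  # [128]
print(default_batch_size(n_examples=500))  # 500
```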
Create DNN_REGRESSOR model
Let's train a DNN regressor model in BQ using MODEL_TYPE=DNN_REGRESSOR with 2 hidden layers with 64 and 32 neurons each (HIDDEN_UNITS=[64, 32]) and a batch size of 32 (BATCH_SIZE=32):
End of explanation
"""
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_4)
"""
Explanation: Get training information and evaluate
Let's first look at our training statistics.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Now let's evaluate our trained model on our eval dataset.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_4,
(
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
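The same quantity in plain NumPy, for intuition (toy numbers, not the BigQuery evaluation output):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error: the square root of the mean squared error,
    # the same quantity SQRT(mean_squared_error) computes in SQL.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy baby weights (pounds): actual vs. predicted.
print(rmse([7.5, 6.8, 8.1], [7.0, 7.0, 8.0]))
```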
End of explanation
"""
%%bigquery
CREATE OR REPLACE MODEL
babyweight.final_model
TRANSFORM(
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
mother_age,
GENERATE_ARRAY(15, 45, 1)
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
gestation_weeks,
GENERATE_ARRAY(17, 47, 1)
) AS bucketed_gestation_weeks)
) AS crossed)
OPTIONS (
MODEL_TYPE="DNN_REGRESSOR",
HIDDEN_UNITS=[64, 32],
BATCH_SIZE=32,
INPUT_LABEL_COLS=["weight_pounds"],
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
"""
Explanation: Final Model: Apply the TRANSFORM clause
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause as we did in the last notebook. This way we can have the same transformations applied for training and prediction without modifying the queries.
Let's apply the TRANSFORM clause to the final model and run the query.
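As intuition for the GENERATE_ARRAY / ML.BUCKETIZE calls used in the query, here is a rough pure-Python sketch (the bucket-labeling convention is an approximation of the behavior, not BigQuery's exact implementation):

```python
def generate_array(start, stop, step=1):
    # Inclusive sequence of bucket boundaries, like GENERATE_ARRAY.
    values = []
    v = start
    while v <= stop:
        values.append(v)
        v += step
    return values

def bucketize(value, boundaries):
    # Label a value by how many boundaries it meets or exceeds:
    # bin_1 below the first boundary, bin_2 from the first boundary, etc.
    idx = sum(1 for b in boundaries if value >= b)
    return "bin_{}".format(idx + 1)

bounds = generate_array(15, 45)
print(bucketize(14, bounds))  # bin_1
print(bucketize(15, bounds))  # bin_2
```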
End of explanation
"""
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.final_model)
"""
Explanation: Let's first look at our training statistics.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Now let's evaluate our trained model on our eval dataset.
End of explanation
"""
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.final_model,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
"""
Explanation: Let's use our evaluation's mean_squared_error to calculate our model's RMSE.
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
"true" AS is_male,
32 AS mother_age,
"Twins(2)" AS plurality,
30 AS gestation_weeks
))
"""
Explanation: Predict with final model
Now that you have evaluated your model, the next step is to use it to predict the weight of a baby before it is born, using the BigQuery ML.PREDICT function.
Predict from final model using an example from original dataset
End of explanation
"""
%%bigquery
SELECT
*
FROM
ML.PREDICT(MODEL babyweight.final_model,
(
SELECT
"Unknown" AS is_male,
32 AS mother_age,
"Multiple(2+)" AS plurality,
30 AS gestation_weeks
))
"""
Explanation: Modify above prediction query using example from simulated dataset
Use the feature values you made up above, however set is_male to "Unknown" and plurality to "Multiple(2+)". This is simulating us not knowing the gender or the exact plurality.
End of explanation
"""
|
TheOregonian/long-term-care-db | notebooks/analysis/.ipynb_checkpoints/facilities-analysis-checkpoint.ipynb | mit | import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
df = pd.read_csv('../../data/processed/facilities-3-29-scrape.csv')
"""
Explanation: This is a dataset of Assisted Living, Nursing and Residential Care facilities in Oregon, open as of September, 2016. For each, we have:
Data were munged here.
<i>facility_id:</i> Unique ID used to join to complaints
<i>fac_ccmunumber:</i> Unique ID used to join to ownership history
<i>facility_type:</i> NF - Nursing Facility; RCF - Residential Care Facility; ALF - Assisted Living Facility
<i>fac_capacity:</i> Number of beds facility is licensed to have. Not necessarily the number of beds facility does have.
<i>facility_name:</i> Facility name at time of September extract.
<i>offline:</i> created in munging notebook, a count of complaints that DO NOT appear when facility is searched on state's complaint search website.
<i>online:</i> created in munging notebook, a count of complaints that DO appear when facility is searched on state's complaint search website.
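As a toy illustration of the accuracy checks performed below (made-up rows, not the real extract):

```python
import numpy as np
import pandas as pd

# Four hypothetical facilities with complaint counts.
toy = pd.DataFrame({
    'facility_id': [1, 2, 3, 4],
    'online':  [2.0, np.nan, 1.0, np.nan],
    'offline': [np.nan, 3.0, 2.0, np.nan],
})

# A facility's online record counts as accurate when it has no offline complaints.
n_accurate = int(toy['offline'].isnull().sum())
print(n_accurate, 'of', len(toy), 'facilities accurate')  # 2 of 4 facilities accurate
```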
End of explanation
"""
df.count()[0]
"""
Explanation: <h3>How many facilities are there?</h3>
End of explanation
"""
df[(df['offline'].isnull())].count()[0]
"""
Explanation: <h3>How many facilities have accurate records online?</h3>
Those that have no offline records.
End of explanation
"""
df[(df['offline'].notnull())].count()[0]
"""
Explanation: <h3>How many facilities have inaccurate records online?</h3>
Those that have offline records.
End of explanation
"""
df[(df['offline']>df['online']) & (df['online'].notnull())].count()[0]
"""
Explanation: <h3>How many facilities had more than double the number of complaints shown online?</h3>
End of explanation
"""
df[(df['online'].isnull()) & (df['offline'].notnull())].count()[0]
"""
Explanation: <h3>How many facilities show zero complaints online but have complaints offline?</h3>
End of explanation
"""
df[(df['online'].notnull()) & (df['offline'].isnull())].count()[0]
"""
Explanation: <h3>How many facilities have complaints and are accurate online?</h3>
End of explanation
"""
df[(df['online'].notnull()) | df['offline'].notnull()].count()[0]
"""
Explanation: <h3>How many facilities have complaints?</h3>
End of explanation
"""
df[(df['offline'].isnull())].count()[0]/df.count()[0]*100
"""
Explanation: <h3>What percent of facilities have accurate records online?</h3>
End of explanation
"""
df[df['offline'].notnull()].sum()['fac_capacity']
"""
Explanation: <h3>What is the total capacity of all facilities with inaccurate records?</h3>
End of explanation
"""
df[df['online'].isnull()].count()[0]
"""
Explanation: <h3>How many facilities appear to have no complaints, whether or not they do?</h3>
End of explanation
"""
|
bayesimpact/bob-emploi | data_analysis/notebooks/datasets/rome/update_from_v329_to_v330.ipynb | gpl-3.0 | import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '329'
NEW_VERSION = '330'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
"""
Explanation: Author: Pascal, pascal@bayesimpact.org
Date: 2016-02-15
ROME update from v329 to v330
In February 2017 I realized that they had released a new version of the ROME. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires having the two versions of the ROME in your data/rome/csv folder, which is only possible just before we switch to v330. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
"""
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
"""
Explanation: First let's check if there are new or deleted files (only matching by file names).
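The version-substitution trick below relies on filenames differing only by the version string; a minimal sketch with hypothetical names:

```python
OLD, NEW = '329', '330'
old_files = {'rome/unix_referentiel_code_rome_v329_utf8.csv'}
new_files = {'rome/unix_referentiel_code_rome_v330_utf8.csv',
             'rome/unix_brand_new_table_v330_utf8.csv'}

# A new file is one with no old counterpart once versions are normalized.
added = new_files - {f.replace(OLD, NEW) for f in old_files}
deleted = old_files - {f.replace(NEW, OLD) for f in new_files}
print(len(added), 'new,', len(deleted), 'deleted')  # 1 new, 0 deleted
```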
End of explanation
"""
new_to_old = dict((f, f.replace(NEW_VERSION, OLD_VERSION)) for f in new_version_files)
# Load all datasets.
Dataset = collections.namedtuple('Dataset', ['basename', 'old', 'new'])
data = [Dataset(
basename=path.basename(f),
old=pandas.read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=pandas.read_csv(f))
for f in sorted(new_version_files)]
def find_dataset_by_name(data, partial_name):
for dataset in data:
if partial_name in dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
"""
Explanation: So we have the same set of files: good start.
Now let's set up a dataset that, for each table, links the old file and the new file.
End of explanation
"""
for dataset in data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
"""
Explanation: Let's make sure the structure hasn't changed:
End of explanation
"""
untouched = 0
for dataset in data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d} values added in {}'.format(diff, dataset.basename))
elif diff < 0:
print('{:d} values removed in {}'.format(-diff, dataset.basename))
else:
untouched += 1
print('{:d}/{:d} files with the same number of rows'.format(untouched, len(data)))
"""
Explanation: All files have the same columns as before: still good.
End of explanation
"""
jobs = find_dataset_by_name(data, 'referentiel_appellation')
new_ogrs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
new_jobs = jobs.new[jobs.new.code_ogr.isin(new_ogrs)]
job_groups = find_dataset_by_name(data, 'referentiel_code_rome_v')
pandas.merge(new_jobs, job_groups.new[['code_rome', 'libelle_rome']], on='code_rome', how='left')
"""
Explanation: So we have minor additions in half of the files. At one point we cared about referentiel_activite and referentiel_activite_riasec, but we currently have no concrete application for them.
The only interesting ones are referentiel_appellation and referentiel_competence, so let's see more precisely.
End of explanation
"""
competences = find_dataset_by_name(data, 'referentiel_competence')
new_ogrs = set(competences.new.code_ogr) - set(competences.old.code_ogr)
obsolete_ogrs = set(competences.old.code_ogr) - set(competences.new.code_ogr)
stable_ogrs = set(competences.new.code_ogr) & set(competences.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_ogrs), len(new_ogrs), len(stable_ogrs)), (OLD_VERSION, NEW_VERSION));
"""
Explanation: The new entries look legitimate.
Let's now check the referentiel_competence:
End of explanation
"""
new_names = set(competences.new.libelle_competence) - set(competences.old.libelle_competence)
obsolete_names = set(competences.old.libelle_competence) - set(competences.new.libelle_competence)
stable_names = set(competences.new.libelle_competence) & set(competences.old.libelle_competence)
matplotlib_venn.venn2((len(obsolete_names), len(new_names), len(stable_names)), (OLD_VERSION, NEW_VERSION));
print('Some skills that got removed: {}…'.format(', '.join(sorted(obsolete_names)[:5])))
print('Some skills that got added: {}…'.format(', '.join(sorted(new_names)[:5])))
print('Some skills that stayed: {}…'.format(', '.join(sorted(stable_names)[:5])))
"""
Explanation: Wow! All OGR codes have changed. Let's see if it's only the codes or the values as well:
End of explanation
"""
|
numeristical/introspective | examples/SplineCalib_Multiclass_MNIST.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score, log_loss
from sklearn.metrics import accuracy_score, confusion_matrix
import ml_insights as mli
mli.__version__
# try `!pip install keras` if keras not found
# We use keras just to get the mnist data
# You can also download MNIST at yann.lecun.com/exdb/mnist
from keras.datasets import mnist
"""
Explanation: MultiClass Calibration using SplineCalib
Example: MNIST Random Forest model data
This notebook demonstrates how to perform multiclass calibration using the SplineCalib functionality in the ml_insights package. We will use PCA to extract components from the images, run a random forest model, and then demonstrate how to properly calibrate the predictions from the random forest model.
End of explanation
"""
## Load in the MNIST data
(trainX, y_train), (testX, y_test) = mnist.load_data()
## Here is an example image
plt.imshow(trainX[0])
# let's reshape the data so that the pixel values are 784 distinct columns
X_train = trainX.reshape(60000,28*28)
X_test = testX.reshape(10000,28*28)
X_train.shape, X_test.shape
## Scale the values before doing PCA
s = StandardScaler()
X_train_std = s.fit_transform(X_train)
X_test_std = s.transform(X_test)
## Run a PCA on the standardized values
pca_proc = PCA(n_components=30)
pca_proc.fit(X_train_std)
# Transform the data - after this we have 30 PCA features to use in our Random Forest model
X_train_pca = pca_proc.transform(X_train_std)
X_test_pca = pca_proc.transform(X_test_std)
X_train_pca = pd.DataFrame(X_train_pca)
X_train_pca.columns = ['pca_vec_'+str(j) for j in range(X_train_pca.shape[1])]
X_test_pca = pd.DataFrame(X_test_pca)
X_test_pca.columns = ['pca_vec_'+str(j) for j in range(X_test_pca.shape[1])]
# Train our Random Forest model
rf1 = RandomForestClassifier(n_estimators=1000, n_jobs=-1)
rf1.fit(X_train_pca, y_train)
# Get the predicted "probabilities" (vote percentages from the RF)
# Also, use the argmax to get the "hard" prediction (most likely digit)
test_probs = rf1.predict_proba(X_test_pca)
hard_preds = np.argmax(test_probs, axis=1)
# Evaluate the model in terms of log_loss and accuracy
log_loss(y_test, test_probs), accuracy_score(y_test, hard_preds)
# Generate cross_validated predictions on your training data
# This will be the data your calibration object "learns" from
cv_train_preds = mli.cv_predictions(rf1, X_train_pca, y_train)
# Define the calibration object and fit it to the cross-validated predictions
calib_mc = mli.SplineCalib()
calib_mc.fit(cv_train_preds, y_train, verbose=True)
# Calibrate the previous predictions from the model
test_probs_calibrated = calib_mc.calibrate(test_probs)
# Compare the previous (uncalibrated) log_loss with the calibrated one
log_loss(y_test, test_probs), log_loss(y_test, test_probs_calibrated)
# Make hard predictions based on the calibrated probability values
# In multiclass problems, calibration can often improve accuracy
hard_preds_calib = np.argmax(test_probs_calibrated, axis=1)
# We see an increase in accuracy...
accuracy_score(y_test, hard_preds), accuracy_score(y_test, hard_preds_calib)
"""
Explanation: Classifying Digits Using PCA and the MNIST data
We will use PCA to perform dimensionality reduction on the MNIST handwritten digit data. We explore the eigenvectors and further show how the decomposition can be used to capture the salient dimensions of the data necessary to classify the digits.
Load the Data, Examine and Explore
End of explanation
"""
plt.figure(figsize=(10,28))
for i in range(10):
plt.subplot(5,2,i+1)
mli.plot_reliability_diagram((y_train==i).astype(int), cv_train_preds[:,i]);
calib_mc.show_calibration_curve(class_num=i)
"""
Explanation: Assessing the Calibration
SplineCalib (and the ml_insights package) also contains some plots and tools to help you assess the quality of your calibration. While the default settings generally work fairly well, there are adjustments that can be made.
Below we see the calibration curves plotted on top of the reliability diagrams. We see that the calibration curves do a good job of fitting the empirical probabilities. Note also that the calibration curves are noticeably different for different classes.
End of explanation
"""
calib_mc.show_spline_reg_plot(class_num=0)
# Plot these curves for all classes
plt.figure(figsize=(10,20))
for i in range(10):
plt.subplot(5,2,i+1)
calib_mc.show_spline_reg_plot(class_num=i)
"""
Explanation: Next, we see how the log_loss changes with the regularization parameter when we fit the spline. The vertical line indicates the chosen regularization value. For these plots, smaller values indicate more regularization. (SplineCalib errs on the side of more regularization). If the curve were still sloping downward at the chosen value, this might indicate we need to widen the range of regularization parameter with the reg_param_vec argument in SplineCalib.
End of explanation
"""
|
DamienIrving/ocean-analysis | development/hfbasin.ipynb | mit | import matplotlib.pyplot as plt
import iris
import iris.plot as iplt
import iris.coord_categorisation
import cf_units
import numpy
%matplotlib inline
infile = '/g/data/ua6/DRSv2/CMIP5/NorESM1-M/rcp85/mon/ocean/r1i1p1/hfbasin/latest/hfbasin_Omon_NorESM1-M_rcp85_r1i1p1_200601-210012.nc'
cube = iris.load_cube(infile)
print(cube)
dim_coord_names = [coord.name() for coord in cube.dim_coords]
print(dim_coord_names)
cube.coord('latitude').points
aux_coord_names = [coord.name() for coord in cube.aux_coords]
print(aux_coord_names)
cube.coord('region')
global_cube = cube.extract(iris.Constraint(region='global_ocean'))
def convert_to_annual(cube):
"""Convert data to annual timescale.
Args:
cube (iris.cube.Cube)
"""
iris.coord_categorisation.add_year(cube, 'time')
iris.coord_categorisation.add_month(cube, 'time')
cube = cube.aggregated_by(['year'], iris.analysis.MEAN)
cube.remove_coord('year')
cube.remove_coord('month')
return cube
global_cube_annual = convert_to_annual(global_cube)
print(global_cube_annual)
iplt.plot(global_cube_annual[5, ::])
iplt.plot(global_cube_annual[20, ::])
plt.show()
"""
Explanation: Ocean heat transport in CMIP5 models
Read data
End of explanation
"""
def convert_to_seconds(time_axis):
"""Convert time axis units to seconds.
Args:
time_axis(iris.DimCoord)
"""
old_units = str(time_axis.units)
old_timestep = old_units.split(' ')[0]
new_units = old_units.replace(old_timestep, 'seconds')
new_unit = cf_units.Unit(new_units, calendar=time_axis.units.calendar)
time_axis.convert_units(new_unit)
return time_axis
def linear_trend(data, time_axis):
"""Calculate the linear trend.
polyfit returns [a, b] corresponding to y = a + bx
"""
masked_flag = False
if type(data) == numpy.ma.core.MaskedArray:
if type(data.mask) == numpy.bool_:
if data.mask:
masked_flag = True
elif data.mask[0]:
masked_flag = True
if masked_flag:
return data.fill_value
else:
return numpy.polynomial.polynomial.polyfit(time_axis, data, 1)[-1]
def calc_trend(cube):
"""Calculate linear trend.
Args:
cube (iris.cube.Cube)
"""
time_axis = cube.coord('time')
time_axis = convert_to_seconds(time_axis)
trend = numpy.ma.apply_along_axis(linear_trend, 0, cube.data, time_axis.points)
trend = numpy.ma.masked_values(trend, cube.data.fill_value)
return trend
trend_data = calc_trend(global_cube_annual)
trend_cube = global_cube_annual[0, ::].copy()
trend_cube.data = trend_data
trend_cube.remove_coord('time')
#trend_unit = ' yr-1'
#trend_cube.units = str(global_cube_annual.units) + trend_unit
iplt.plot(trend_cube)
plt.show()
"""
Explanation: So for any given year, the annual mean shows ocean heat transport away from the tropics.
Trends
End of explanation
"""
print(global_cube_annual)
diffs_data = numpy.diff(global_cube_annual.data, axis=1)
lats = global_cube_annual.coord('latitude').points
diffs_lats = (lats[1:] + lats[:-1]) / 2.
print(diffs_data.shape)
print(len(diffs_lats))
plt.plot(diffs_lats, diffs_data[0, :])
plt.plot(lats, global_cube_annual[0, ::].data / 10.0)
plt.show()
"""
Explanation: So the trends in ocean heat transport suggest reduced transport in the RCP 8.5 simulation (i.e. the trend plot is almost the inverse of the climatology plot).
Convergence
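The convergence computed below is just a (negative) finite difference of transport along latitude; a minimal NumPy sketch with made-up transport values:

```python
import numpy as np

# Hypothetical poleward heat transport (PW) at five latitudes.
lats = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])
transport = np.array([-0.8, -1.5, 0.0, 1.5, 0.8])

# Positive convergence: more heat enters a latitude band than leaves it.
convergence = -np.diff(transport)
mid_lats = (lats[1:] + lats[:-1]) / 2.0
print(mid_lats)
print(convergence)
```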
End of explanation
"""
time_axis = global_cube_annual.coord('time')
time_axis = convert_to_seconds(time_axis)
diffs_trend = numpy.ma.apply_along_axis(linear_trend, 0, diffs_data, time_axis.points)
diffs_trend = numpy.ma.masked_values(diffs_trend, global_cube_annual.data.fill_value)
print(diffs_trend.shape)
plt.plot(diffs_lats, diffs_trend * -1)
plt.axhline(y=0)
plt.show()
plt.plot(diffs_lats, diffs_trend * -1, color='black')
plt.axhline(y=0)
plt.axvline(x=30)
plt.axvline(x=50)
plt.axvline(x=77)
plt.xlim(20, 90)
plt.show()
"""
Explanation: Convergence trend
End of explanation
"""
|
hydrosquall/tiingo-python | examples/basic-usage-with-pandas.ipynb | mit | TIINGO_API_KEY = 'REPLACE-THIS-TEXT-WITH-A-REAL-API-KEY'
# This is here to remind you to change your API key.
if not TIINGO_API_KEY or (TIINGO_API_KEY == 'REPLACE-THIS-TEXT-WITH-A-REAL-API-KEY'):
raise Exception("Please provide a valid Tiingo API key!")
from tiingo import TiingoClient
config = {
'api_key': TIINGO_API_KEY,
'session': True # Reuse HTTP sessions across API calls for better performance
}
# Throughout the rest of this notebook, you'll use the "client" to interact with the Tiingo backend services.
client = TiingoClient(config)
"""
Explanation: Tiingo-Python
This notebook shows basic usage of the tiingo-python library. If you're running this on mybinder.org, you can run this code without installing anything on your computer. You can find more information about what available at the Tiingo website, but this notebook will let you play around with real code samples in your browser.
If you've never used jupyter before, I recommend this tutorial from Datacamp.
Basic Setup
First, you'll need to provide your API key as a string in the cell below. If you forget to do this, the notebook cannot run. You can find your API key by visiting this link and logging in to your Tiingo account.
End of explanation
"""
# Get Ticker Metadata for the stock "GOOGL"
ticker_metadata = client.get_ticker_metadata("GOOGL")
print(ticker_metadata)
# Get latest prices, based on 3+ sources as JSON, sampled weekly
ticker_price = client.get_ticker_price("GOOGL", frequency="weekly")
print(ticker_price)
"""
Explanation: Minimal Data Fetching Examples
Below are the code samples from the README.rst along with sample outputs, but this is just the tip of the iceberg of this library's capabilities.
End of explanation
"""
# Get historical GOOGL prices from August 2017 as JSON, sampled daily
historical_prices = client.get_ticker_price("GOOGL",
fmt='json',
startDate='2017-08-01',
endDate='2017-08-31',
frequency='daily')
# Print the first 2 days of data, but you will find more days of data in the overall historical_prices variable.
print(historical_prices[:2])
# See what tickers are available
# Check what tickers are available, as well as metadata about each ticker
# including supported currency, exchange, and available start/end dates.
tickers = client.list_stock_tickers()
print(tickers[:2])
"""
Explanation: For values of frequency:
You can specify any of the end-of-day frequencies (daily, weekly, monthly, and annually) or any intraday frequency for both the get_ticker_price and get_dataframe methods. Weekly frequencies resample to the end of day on Friday, monthly frequencies resample to the last day of the month, and the annually frequency resamples to the end of day on 12-31 of each year. Intraday frequencies are specified using an integer followed by Min or Hour, for example 30Min or 1Hour.
End of explanation
"""
# Search news articles about particular tickers
# This method will not work error if you do not have a paid Tiingo account associated with your API key.
articles = client.get_news(tickers=['GOOGL', 'AAPL'],
tags=['Laptops'],
sources=['washingtonpost.com'],
startDate='2017-01-01',
endDate='2017-08-31')
# Display a sample article
articles[0]
"""
Explanation: For each ticker, you may access
ticker: The ticker's abbreviation
exchange: Which exchange it's traded on
priceCurrency: Currency for the prices listed for this ticker
startDate / endDate: start and end dates for Tiingo's data about this ticker
Note that Tiingo is constantly adding new data sources, so the values returned from this call will probably change every day.
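Once fetched, the metadata is just a list of dicts, so it can be filtered in plain Python; a sketch over two hypothetical entries (the real call returns many thousands):

```python
# Two made-up entries shaped like list_stock_tickers() results.
tickers = [
    {'ticker': 'GOOGL', 'exchange': 'NASDAQ', 'priceCurrency': 'USD',
     'startDate': '2004-08-19', 'endDate': '2018-07-31'},
    {'ticker': 'VOD', 'exchange': 'LSE', 'priceCurrency': 'GBP',
     'startDate': '1988-10-31', 'endDate': '2018-07-31'},
]

# Keep only tickers priced in US dollars.
usd_tickers = [t['ticker'] for t in tickers if t['priceCurrency'] == 'USD']
print(usd_tickers)  # ['GOOGL']
```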
End of explanation
"""
# Boilerplate to make pandas charts render inline in jupyter
import matplotlib.pyplot as plt
%matplotlib inline
# Scan some historical Google data
ticker_history_df = client.get_dataframe("GOOGL",
startDate='2018-05-15',
endDate='2018-05-31',
frequency='daily')
# Check which columns you'd like to work with
ticker_history_df.columns
# Browse the first few entries of the raw data
ticker_history_df.head(5)
# View your columns of data on separate plots
columns_to_plot = ['adjClose', 'adjOpen']
ticker_history_df[columns_to_plot].plot.line(subplots=True)
# Plot multiple columns of data in the same chart
ticker_history_df[columns_to_plot].plot.line(subplots=False)
# Make a histogram to see what typical trading volumes are
ticker_history_df.volume.hist()
# You may also fetch data for multiple tickers at once, as long as you are only interested in 1 metric
# at a time. If you need to compare multiple metrics, you must fetch the data 1 ticker at a time.
# Here we compare Google with Apple's trading volume.
multiple_ticker_history_df = client.get_dataframe(['GOOGL', 'AAPL'],
frequency='weekly',
metric_name='volume',
startDate='2018-01-01',
endDate='2018-07-31')
# Compare the companies: AAPL's volume seems to be much more volatile in the first half of 2018.
multiple_ticker_history_df.plot.line()
"""
Explanation: Basic Pandas Dataframe Examples
Pandas is a popular python library for data analysis an manipulation. We provide out-of-the-box support for returning responses from Tiingo as Python Dataframes. If you are unfamiliar with pandas, I recommend the Mode Notebooks python data analysis tutorial.
End of explanation
"""
|
GEMScienceTools/rmtk | notebooks/vulnerability/derivation_fragility/NLTHA_on_SDOF/2MSA_on_SDOF.ipynb | agpl-3.0 | import numpy
from rmtk.vulnerability.common import utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import MSA_utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import double_MSA_on_SDOF
%matplotlib inline
"""
Explanation: Double Multiple Stripe Analysis (2MSA) for Single Degree of Freedom (SDOF) Oscillators
<img src="../../../../figures/intact-damaged.jpg" width="500" align="middle">
End of explanation
"""
capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/2MSA/capacity_curves.csv'
sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
"""
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the user wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file containing the hysteretic parameters, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If default parameters should be assumed instead, please set the sdof_hysteresis variable to "Default".
End of explanation
"""
gmrs_folder = '../../../../../rmtk_data/MSA_records'
number_models_in_DS = 1
no_bins = 2
no_rec_bin = 10
damping_ratio = 0.05
minT = 0.1
maxT = 2
filter_aftershocks = 'FALSE'
Mw_multiplier = 0.92
waveform_path = '../../../../../rmtk_data/2MSA/waveform.csv'
gmrs = utils.read_gmrs(gmrs_folder)
gmr_characteristics = MSA_utils.assign_Mw_Tg(waveform_path, gmrs, Mw_multiplier,
damping_ratio, filter_aftershocks)
#utils.plot_response_spectra(gmrs,minT,maxT)
"""
Explanation: Load ground motion records
For what concerns the ground motions to be used in the Double Multiple Stripe Analysis the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder: in this folder there should be a CSV file for each intensity measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of intensity measure bins.
4. no_rec_bin: number of records per bin.
5. number_models_in_DS: the number of models used to populate each initial damage state.
If a certain relationship should be maintained between the ground motion characteristics of the mainshock and the aftershock, the variable filter_aftershocks should be set to TRUE and the following parameters should be defined:
1. Mw_multiplier: the ratio between the aftershock magnitude and the mainshock magnitude.
2. waveform_path: the path to the file containing the magnitude and predominant period of each ground motion record.
Otherwise the variable filter_aftershocks should be set to FALSE and the aforementioned parameters can be left empty.
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be un-commented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
"""
damage_model_file = "/Users/chiaracasotto/GitHub/rmtk_data/2MSA/damage_model_ISD.csv"
damage_model = utils.read_damage_model(damage_model_file)
"""
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide a damage model of type spectral displacement, capacity curve dependent, or interstorey drift.
If the damage model type is interstorey drift, the user has to input the interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of base shear versus floor displacement (Vb-dfloor) so that the interstorey drift limit states can be converted to roof displacements and spectral displacements of the SDOF system; otherwise a linear relationship is assumed.
End of explanation
"""
degradation = False
record_scaled_folder = "../../../../../rmtk_data/2MSA/Scaling_factors"
msa = MSA_utils.define_2MSA_parameters(no_bins,no_rec_bin,record_scaled_folder,filter_aftershocks)
PDM, Sds, gmr_info = double_MSA_on_SDOF.calculate_fragility(
capacity_curves, hysteresis, msa, gmrs, gmr_characteristics,
damage_model, damping_ratio,degradation, number_models_in_DS)
"""
Explanation: Calculate fragility function
In order to obtain the fragility model, it is necessary to input the location of the damage model (damage_model), using the format described in the RMTK manual. It is as well necessary to input the damping value of the structure(s) under analysis and the value of the period (T) to be considered in the regression analysis. The method allows to consider or not degradation. Finally, if desired, it is possible to save the resulting fragility model in a .csv file.
End of explanation
"""
IMT = 'Sa'
T = 0.47
#T = numpy.arange(0.4,1.91,0.01)
regression_method = 'max likelihood'
fragility_model = MSA_utils.calculate_fragility_model_damaged( PDM,gmrs,gmr_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
"""
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. T: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as the intensity measure, a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
"""
minIML, maxIML = 0.01, 4
MSA_utils.plot_fragility_model(fragility_model,damage_model,minIML, maxIML)
"""
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
"""
output_type = "csv"
output_path = "../../../../../rmtk_data/2MSA/"
minIML, maxIML = 0.01, 4
tax = 'RC'
MSA_utils.save_mean_fragility(fragility_model,damage_model,tax,output_type,output_path,minIML, maxIML)
"""
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
"""
|
tensorflow/fairness-indicators | g3doc/tutorials/_Deprecated_Fairness_Indicators_Lineage_Case_Study.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!python -m pip install -q -U \
tfx \
tensorflow-model-analysis \
tensorflow-data-validation \
tensorflow-metadata \
tensorflow-transform \
ml-metadata \
tfx-bsl
import os
import tempfile
import six.moves.urllib as urllib
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
import pandas as pd
from google.protobuf import text_format
from sklearn.utils import shuffle
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import trainer_pb2
"""
Explanation: Fairness Indicators Lineage Case Study
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Lineage_Case_Study"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/fairness-indicators/tree/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/random-nnlm-en-dim128/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Warning: Estimators are deprecated (not recommended for new code). Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.
<!--
TODO(b/192933099): update this to use keras instead of estimators.
-->
COMPAS Dataset
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset, which contains approximately 18,000 criminal cases from Broward County, Florida between January, 2013 and December, 2014. The data contains information about 11,000 unique defendants, including criminal history, demographics, and a risk score intended to represent the defendant’s likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to determine whether or not to set bail and whether or not to grant parole.
In 2016, an article published in ProPublica found that the COMPAS model was incorrectly predicting that African-American defendants would recidivate at much higher rates than they actually did, while for Caucasian defendants the model made mistakes in the opposite direction, incorrectly predicting that they wouldn’t commit another crime. The authors went on to show that these biases were likely due to an uneven distribution in the data between African-American and Caucasian defendants. Specifically, the ground truth label of a negative example (a defendant who would not commit another crime) and a positive example (a defendant who would commit another crime) were disproportionate between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature <sup>1, 2, 3</sup>, with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This tutorial from the FAT* 2018 conference illustrates how COMPAS can dramatically impact a defendant’s prospects in the real world.
It is important to note that developing a machine learning model to predict pre-trial detention has a number of important ethical considerations. You can learn more about these issues in the Partnership on AI “Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System.” The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that creates guidelines around AI.
We’re using the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature.
About the Tools in this Case Study
TensorFlow Extended (TFX) is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system.
TensorFlow Model Analysis is a library for evaluating machine learning models. Users can evaluate their models on a large amount of data in a distributed manner and view metrics over different slices within a notebook.
Fairness Indicators is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.
ML Metadata is a library for recording and retrieving the lineage and metadata of ML artifacts such as models, datasets, and metrics. Within TFX ML Metadata will help us understand the artifacts created in a pipeline, which is a unit of data that is passed between TFX components.
TensorFlow Data Validation is a library to analyze your data and check for errors that can affect model training or serving.
Case Study Overview
For the duration of this case study we will define “fairness concerns” as a bias within a model that negatively impacts a slice within our data. Specifically, we’re trying to limit any recidivism prediction that could be biased towards race.
The walk through of the case study will proceed as follows:
Download the data, preprocess, and explore the initial dataset.
Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.
Run our results through TensorFlow Model Analysis, TensorFlow Data Validation, and load Fairness Indicators to explore any potential fairness concerns within our model.
Use ML Metadata to track all the artifacts for a model that we trained with TFX.
Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.
Review the performance changes within the new dataset.
Check the underlying changes within our TFX pipeline with ML Metadata to understand what changes were made between the two models.
Helpful Resources
This case study is an extension of the below case studies. It is recommended working through the below case studies first.
* TFX Pipeline Overview
* Fairness Indicator Case Study
* TFX Data Validation
Setup
To start, we will install the necessary packages, download the data, and import the required modules for the case study.
To install the required packages for this case study in your notebook run the below PIP command.
Note: See here for a reference on compatibility between different versions of the libraries used in this case study.
Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.
Chouldechova, A., G’Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.
Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.
End of explanation
"""
# Download the COMPAS dataset and setup the required filepaths.
_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')
_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'
_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')
data = urllib.request.urlopen(_DATA_PATH)
_COMPAS_DF = pd.read_csv(data)
# To simplify the case study, we will only keep the columns that will be used
# for our model.
_COLUMN_NAMES = [
'age',
'c_charge_desc',
'c_charge_degree',
'c_days_from_compas',
'is_recid',
'juv_fel_count',
'juv_misd_count',
'juv_other_count',
'priors_count',
'r_days_from_arrest',
'race',
'sex',
'vr_charge_desc',
]
_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]
# We will use 'is_recid' as our ground truth label, a boolean value indicating
# whether a defendant committed another crime. Some rows have -1, indicating
# that there is no data; we will drop these rows from training.
_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]
# Given the distribution between races in this dataset, we will only focus on
# recidivism for African-American and Caucasian defendants.
_COMPAS_DF = _COMPAS_DF[
_COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]
# Add a sample-weight feature that will be used during the second part of this
# case study to help address fairness concerns.
_COMPAS_DF['sample_weight'] = 0.8
# Load the DataFrame back to a CSV file for our TFX model.
_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')
"""
Explanation: Download and preprocess the dataset
End of explanation
"""
context = InteractiveContext()
"""
Explanation: Building a TFX Pipeline
There are several TFX Pipeline Components that can be used for a production model, but for the purpose the this case study will focus on using only the below components:
* ExampleGen to read our dataset.
* StatisticsGen to calculate the statistics of our dataset.
* SchemaGen to create a data schema.
* Transform for feature engineering.
* Trainer to run our machine learning model.
Create the InteractiveContext
To run TFX within a notebook, we first will need to create an InteractiveContext to run the components interactively.
InteractiveContext will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional properties pipeline_root and metadata_connection_config may be passed to InteractiveContext.
End of explanation
"""
# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.
# It consumes external files/services to generate Examples which will be read by
# other TFX components. It also provides consistent and configurable partition,
# and shuffles the dataset for ML best practice.
example_gen = CsvExampleGen(input_base=_DATA_ROOT)
context.run(example_gen)
"""
Explanation: TFX ExampleGen Component
End of explanation
"""
# The StatisticsGen TFX pipeline component generates features statistics over
# both training and serving data, which can be used by other pipeline
# components. StatisticsGen uses Beam to scale to large datasets.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])
context.run(statistics_gen)
"""
Explanation: TFX StatisticsGen Component
End of explanation
"""
# Some TFX components use a description of your input data called a schema. The
# schema is an instance of schema.proto. It can specify data types for feature
# values, whether a feature has to be present in all examples, allowed value
# ranges, and other properties. A SchemaGen pipeline component will
# automatically generate a schema by inferring types, categories, and ranges
# from the training data.
infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])
context.run(infer_schema)
"""
Explanation: TFX SchemaGen Component
End of explanation
"""
# Setup paths for the Transform Component.
_transform_module_file = 'compas_transform.py'
%%writefile {_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
CATEGORICAL_FEATURE_KEYS = [
'sex',
'race',
'c_charge_desc',
'c_charge_degree',
]
INT_FEATURE_KEYS = [
'age',
'c_days_from_compas',
'juv_fel_count',
'juv_misd_count',
'juv_other_count',
'priors_count',
'sample_weight',
]
LABEL_KEY = 'is_recid'
# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.
MAX_CATEGORICAL_FEATURE_VALUES = [
2,
6,
513,
14,
]
def transformed_name(key):
return '{}_xf'.format(key)
def preprocessing_fn(inputs):
"""tf.transform's callback function for preprocessing inputs.
Args:
inputs: Map from feature keys to raw features.
Returns:
Map from string feature key to transformed feature operations.
"""
outputs = {}
for key in CATEGORICAL_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
vocab_filename=key)
for key in INT_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
# Target label will be to see if the defendant is charged for another crime.
outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])
return outputs
def _fill_in_missing(tensor_value):
"""Replaces a missing values in a SparseTensor.
Fills in missing values of `tensor_value` with '' or 0, and converts to a
dense tensor.
Args:
tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size
at most 1 in the second dimension.
Returns:
A rank 1 tensor where missing values of `tensor_value` are filled in.
"""
if not isinstance(tensor_value, tf.sparse.SparseTensor):
return tensor_value
default_value = '' if tensor_value.dtype == tf.string else 0
sparse_tensor = tf.SparseTensor(
tensor_value.indices,
tensor_value.values,
[tensor_value.dense_shape[0], 1])
dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)
return tf.squeeze(dense_tensor, axis=1)
# Build and run the Transform Component.
transform = Transform(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
module_file=_transform_module_file
)
context.run(transform)
"""
Explanation: TFX Transform Component
The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with.
Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook.
The transformation that we will be performing in this case study are as follows:
* For string values we will generate a vocabulary that maps to an integer via tft.compute_and_apply_vocabulary.
* For integer values we will standardize the column mean 0 and variance 1 via tft.scale_to_z_score.
* Remove empty row values and replace them with an empty string or 0 depending on the feature type.
* Append ‘_xf’ to column names to denote the features that were processed in the Transform Component.
Now let's define a module containing the preprocessing_fn() function that we will pass to the Transform component:
End of explanation
"""
# Setup paths for the Trainer Component.
_trainer_module_file = 'compas_trainer.py'
%%writefile {_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from compas_transform import *
_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999
def transformed_names(keys):
return [transformed_name(key) for key in keys]
def transformed_name(key):
return '{}_xf'.format(key)
def _gzip_reader_fn(filenames):
"""Returns a record reader that can read gzip'ed files.
Args:
filenames: A tf.string tensor or tf.data.Dataset containing one or more
filenames.
Returns: A TFRecordDataset that reads the given gzip-compressed TFRecord files.
"""
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
"""Generates a feature spec from a Schema proto.
Args:
schema: A Schema proto.
Returns:
A feature spec defined as a dict whose keys are feature names and values are
instances of FixedLenFeature, VarLenFeature or SparseFeature.
"""
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _example_serving_receiver_fn(tf_transform_output, schema):
"""Builds the serving in inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
TensorFlow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
transformed_features.pop(transformed_name(LABEL_KEY))
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
"""Builds everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- TensorFlow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(
serialized=serialized_tf_example, features=raw_feature_spec)
transformed_features = tf_transform_output.transform_raw_features(features)
labels = transformed_features.pop(transformed_name(LABEL_KEY))
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
filenames: List of CSV files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: First dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
filenames,
batch_size,
transformed_feature_spec,
shuffle=False,
reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
def _keras_model_builder():
"""Build a keras model for COMPAS dataset classification.
Returns:
A compiled Keras model.
"""
feature_columns = []
feature_layer_inputs = {}
for key in transformed_names(INT_FEATURE_KEYS):
feature_columns.append(tf.feature_column.numeric_column(key))
feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES):
feature_columns.append(
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets)))
feature_layer_inputs[key] = tf.keras.Input(
shape=(1,), name=key, dtype=tf.dtypes.int32)
feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_columns_input(feature_layer_inputs)
dense_layers = tf.keras.layers.Dense(
20, activation='relu', name='dense_1')(feature_layer_outputs)
dense_layers = tf.keras.layers.Dense(
10, activation='relu', name='dense_2')(dense_layers)
output = tf.keras.layers.Dense(
1, name='predictions')(dense_layers)
model = tf.keras.Model(
inputs=[v for v in feature_layer_inputs.values()], outputs=output)
model.compile(
loss=tf.keras.losses.MeanAbsoluteError(),
optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
return model
# TFX will call this function.
def trainer_fn(hparams, schema):
"""Build the estimator using the high level API.
Args:
hparams: Hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='compas-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
keep_checkpoint_max=_MAX_CHECKPOINTS)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)
# Create an input receiver for TFMA processing.
receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
# Uses user-provided Python function that implements a model using TensorFlow's
# Estimators API.
trainer = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer)
"""
Explanation: TFX Trainer Component
The Trainer component trains a specified TensorFlow model. In order to run it, we need to create a Python module containing a trainer_fn function that TFX will call; this function returns an estimator for our model. If you prefer creating a Keras model, you can do so and then convert it to an estimator using tf.keras.estimator.model_to_estimator().
For our case study we will build a Keras model and convert it with model_to_estimator().
End of explanation
"""
# Uses TensorFlow Model Analysis to compute evaluation statistics over
# features of a model.
model_analyzer = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
eval_config = text_format.Parse("""
model_specs {
label_key: 'is_recid'
}
metrics_specs {
metrics {class_name: "BinaryAccuracy"}
metrics {class_name: "AUC"}
metrics {
class_name: "FairnessIndicators"
config: '{"thresholds": [0.25, 0.5, 0.75]}'
}
}
slicing_specs {
feature_keys: 'race'
}
""", tfma.EvalConfig())
)
context.run(model_analyzer)
"""
Explanation: TensorFlow Model Analysis
Now that our model is trained developed and trained within TFX, we can use several additional components within the TFX exosystem to understand our models performance in a little more detail. By looking at different metrics we’re able to get a better picture of how the overall model performs for different slices within our model to make sure our model is not underperforming for any subgroup.
First we'll examine TensorFlow Model Analysis, which is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.
For a list of possible metrics that can be added into TensorFlow Model Analysis see here.
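As a toy illustration of the slicing idea (plain Python with hypothetical records — not the TFMA API itself), computing one metric both overall and per slice looks like this:

```python
from collections import defaultdict

# Hypothetical (slice value, label, prediction) records for illustration only.
records = [
    ('a', 1, 1), ('a', 0, 1), ('a', 0, 0),
    ('b', 1, 0), ('b', 1, 1),
]

def sliced_accuracy(records):
    """Accuracy computed overall and separately for each slice value."""
    hits, counts = defaultdict(int), defaultdict(int)
    for slice_value, label, pred in records:
        for key in ('Overall', slice_value):
            counts[key] += 1
            hits[key] += int(label == pred)
    return {key: hits[key] / float(counts[key]) for key in counts}

print(sliced_accuracy(records))
```

A model can look fine on the 'Overall' slice while one of the per-slice values lags behind — exactly the situation the slicing_specs above are meant to surface.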
End of explanation
"""
evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri
eval_result = tfma.load_eval_result(evaluation_uri)
tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)
"""
Explanation: Fairness Indicators
Load Fairness Indicators to examine the underlying data.
End of explanation
"""
# Connect to the TFX database.
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = os.path.join(
context.pipeline_root, 'metadata.sqlite')
store = metadata_store.MetadataStore(connection_config)
def _mlmd_type_to_dataframe(mlmd_type):
"""Helper function to turn MLMD into a Pandas DataFrame.
Args:
mlmd_type: Metadata store type.
Returns:
DataFrame containing type ID, Name, and Properties.
"""
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
column_names = ['ID', 'Name', 'Properties']
df = pd.DataFrame(columns=column_names)
for a_type in mlmd_type:
mlmd_row = pd.DataFrame([[a_type.id, a_type.name, a_type.properties]],
columns=column_names)
df = df.append(mlmd_row)
return df
# ML Metadata stores strong-typed Artifacts, Executions, and Contexts.
# First, we can use type APIs to understand what is defined in ML Metadata
# by the current version of TFX. We'll be able to view all the previous runs
# that created our initial model.
print('Artifact Types:')
display(_mlmd_type_to_dataframe(store.get_artifact_types()))
print('\nExecution Types:')
display(_mlmd_type_to_dataframe(store.get_execution_types()))
print('\nContext Types:')
display(_mlmd_type_to_dataframe(store.get_context_types()))
"""
Explanation: Fairness Indicators will allow us to drill down to see the performance of different slices and is designed to support teams in evaluating and improving models for fairness concerns. It enables easy computation of commonly-used fairness metrics for binary and multiclass classifiers and will allow you to evaluate across any size of use case.
We will load Fairness Indicators into this notebook and analyse the results. After you have had a moment to explore with Fairness Indicators, examine the False Positive Rate and False Negative Rate tabs in the tool. In this case study, we're concerned with trying to reduce the number of false predictions of recidivism, corresponding to the False Positive Rate.
Within the Fairness Indicators tool you'll see two dropdown options:
1. A "Baseline" option that is set by column_for_slicing.
2. A "Thresholds" option that is set by fairness_indicator_thresholds.
“Baseline” is the slice you want to compare all other slices to. Most commonly, it is represented by the overall slice, but can also be one of the specific slices as well.
"Threshold" is a value set within a given binary classification model to indicate where a prediction should be placed. When setting a threshold there are two things you should keep in mind.
Precision: What is the downside if your prediction results in a Type 1 error? In this case study a higher threshold would mean we're predicting more defendants will commit another crime when they actually don't.
Recall: What is the downside of a Type II error? In this case study a higher threshold would mean we're predicting more defendants will not commit another crime when they actually do.
We will set arbitrary thresholds at 0.75 and we will only focus on the fairness metrics for African-American and Caucasian defendants given the small sample sizes for the other races, which aren’t large enough to draw statistically significant conclusions.
The rates below might differ slightly based on how the data was shuffled at the beginning of this case study, but take a look at the difference between the data for African-American and Caucasian defendants. At a lower threshold our model is more likely to predict that a Caucasian defendant will commit a second crime compared to an African-American defendant. However, this prediction inverts as we increase our threshold.
False Positive Rate @ 0.75
African-American: ~30%
AUC: 0.71
Binary Accuracy: 0.67
Caucasian: ~8%
AUC: 0.71
Binary Accuracy: 0.67
More information on Type I/II errors and threshold setting can be found here.
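The threshold trade-off described above can be made concrete with a small plain-Python sketch (toy scores, not the model's output): raising the threshold lowers the false positive rate, at the cost of missing more true positives.

```python
def false_positive_rate(scores, labels, threshold):
    """FPR = FP / (FP + TN): share of true negatives predicted positive."""
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    return float(fp) / (fp + tn)

# Hypothetical model scores; label 1 means the defendant re-offended.
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2]
labels = [1,   0,   1,   0,   0,   0]

for t in (0.25, 0.5, 0.75):
    print('threshold %.2f -> FPR %.2f'
          % (t, false_positive_rate(scores, labels, t)))
```

Comparing this quantity across slices (rather than in aggregate) is what Fairness Indicators automates.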
ML Metadata
To understand where disparity could be coming from and to take a snapshot of our current model, we can use ML Metadata for recording and retrieving metadata associated with our model. ML Metadata is an integral part of TFX, but is designed so that it can be used independently.
For this case study, we will list all artifacts that we developed previously within this case study. By cycling through the artifacts, executions, and context we will have a high level view of our TFX model to dig into where any potential issues are coming from. This will provide us a baseline overview of how our model was developed and what TFX components helped to develop our initial model.
We will start by first laying out the high level artifacts, execution, and context types in our model.
End of explanation
"""
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
stats_options=tfdv.StatsOptions(label_feature='is_recid'))
exec_result = context.run(statistics_gen)
for event in store.get_events_by_execution_ids([exec_result.execution_id]):
if event.path.steps[0].key == 'statistics':
statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri
model_stats = tfdv.load_statistics(
os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))
tfdv.visualize_statistics(model_stats)
"""
Explanation: Identify where the fairness issue could be coming from
For each of the above artifacts, execution, and context types we can use ML Metadata to dig into the attributes and how each part of our ML pipeline was developed.
We'll start by diving into the StatisticsGen to examine the underlying data that we initially fed into the model. By knowing the artifacts within our model we can use ML Metadata and TensorFlow Data Validation to look backward and forward within the model to identify where a potential problem is coming from.
After running the below cell, select Lift (Y=1) in the second chart on the Chart to show tab to see the lift between the different data slices. Within race, the lift for African-American is approximately 1.08 whereas Caucasian is approximately 0.86.
End of explanation
"""
_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'
first_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the two notes above to the ML metadata.
first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD
store.put_artifacts([first_trained_model])
def _mlmd_model_to_dataframe(model, model_number):
"""Helper function to turn a MLMD modle into a Pandas DataFrame.
Args:
model: Metadata store model.
model_number: Number of model run within ML Metadata.
Returns:
DataFrame containing the ML Metadata model.
"""
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
df = pd.DataFrame()
custom_properties = ['name', 'note', 'state', 'producer_component',
'pipeline_name']
df['id'] = [model[model_number].id]
df['uri'] = [model[model_number].uri]
for prop in custom_properties:
df[prop] = model[model_number].custom_properties.get(prop)
df[prop] = df[prop].astype(str).map(
lambda x: x.lstrip('string_value: "').rstrip('"\n'))
return df
# Print the current model to see the results of the ML Metadata for the model.
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
"""
Explanation: Tracking a Model Change
Now that we have an idea on how we could improve the fairness of our model, we will first document our initial run within the ML Metadata for our own record and for anyone else that might review our changes at a future time.
ML Metadata can keep a log of our past models along with any notes that we would like to add between runs. We'll add a simple note on our first run denoting that this run was done on the full COMPAS dataset.
End of explanation
"""
%%writefile {_trainer_module_file}
import numpy as np
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from compas_transform import *
_BATCH_SIZE = 1000
_LEARNING_RATE = 0.00001
_MAX_CHECKPOINTS = 1
_SAVE_CHECKPOINT_STEPS = 999
def transformed_names(keys):
return [transformed_name(key) for key in keys]
def transformed_name(key):
return '{}_xf'.format(key)
def _gzip_reader_fn(filenames):
"""Returns a record reader that can read gzip'ed files.
Args:
filenames: A tf.string tensor or tf.data.Dataset containing one or more
filenames.
Returns:
    A TFRecordDataset that reads the gzip-compressed TFRecord files.
"""
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
# Tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
"""Generates a feature spec from a Schema proto.
Args:
schema: A Schema proto.
Returns:
A feature spec defined as a dict whose keys are feature names and values are
instances of FixedLenFeature, VarLenFeature or SparseFeature.
"""
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _example_serving_receiver_fn(tf_transform_output, schema):
"""Builds the serving in inputs.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
TensorFlow graph which parses examples, applying tf-transform to them.
"""
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_output.transform_raw_features(
serving_input_receiver.features)
transformed_features.pop(transformed_name(LABEL_KEY))
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_output, schema):
"""Builds everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_output: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- TensorFlow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
"""
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(
serialized=serialized_tf_example, features=raw_feature_spec)
transformed_features = tf_transform_output.transform_raw_features(features)
labels = transformed_features.pop(transformed_name(LABEL_KEY))
receiver_tensors = {'examples': serialized_tf_example}
return tfma.export.EvalInputReceiver(
features=transformed_features,
receiver_tensors=receiver_tensors,
labels=labels)
def _input_fn(filenames, tf_transform_output, batch_size=200):
"""Generates features and labels for training or evaluation.
Args:
filenames: List of gzip'ed TFRecord files to read data from.
tf_transform_output: A TFTransformOutput.
batch_size: First dimension size of the Tensors returned by input_fn.
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(
filenames,
batch_size,
transformed_feature_spec,
shuffle=False,
reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
# TFX will call this function.
def trainer_fn(hparams, schema):
"""Build the estimator using the high level API.
Args:
hparams: Hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
"""
tf_transform_output = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_output,
batch_size=_BATCH_SIZE)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_output, schema)
exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='compas-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,
keep_checkpoint_max=_MAX_CHECKPOINTS)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = tf.keras.estimator.model_to_estimator(
keras_model=_keras_model_builder(), config=run_config)
# Create an input receiver for TFMA processing.
receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
def _keras_model_builder():
"""Build a keras model for COMPAS dataset classification.
Returns:
A compiled Keras model.
"""
feature_columns = []
feature_layer_inputs = {}
for key in transformed_names(INT_FEATURE_KEYS):
feature_columns.append(tf.feature_column.numeric_column(key))
feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)
for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES):
feature_columns.append(
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key, num_buckets=num_buckets)))
feature_layer_inputs[key] = tf.keras.Input(
shape=(1,), name=key, dtype=tf.dtypes.int32)
feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)
feature_layer_outputs = feature_columns_input(feature_layer_inputs)
dense_layers = tf.keras.layers.Dense(
20, activation='relu', name='dense_1')(feature_layer_outputs)
dense_layers = tf.keras.layers.Dense(
10, activation='relu', name='dense_2')(dense_layers)
output = tf.keras.layers.Dense(
1, name='predictions')(dense_layers)
model = tf.keras.Model(
inputs=[v for v in feature_layer_inputs.values()], outputs=output)
# To weight our model we will develop a custom loss class within Keras.
# The old loss is commented out below and the new one is added in below.
model.compile(
# loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
loss=LogisticEndpoint(),
optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))
return model
class LogisticEndpoint(tf.keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def __call__(self, y_true, y_pred, sample_weight=None):
inputs = [y_true, y_pred]
inputs += sample_weight or ['sample_weight_xf']
return super(LogisticEndpoint, self).__call__(inputs)
def call(self, inputs):
y_true, y_pred = inputs[0], inputs[1]
if len(inputs) == 3:
sample_weight = inputs[2]
else:
sample_weight = None
loss = self.loss_fn(y_true, y_pred, sample_weight)
self.add_loss(loss)
reduce_loss = tf.math.divide_no_nan(
tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)
return reduce_loss
"""
Explanation: Improving fairness concerns by weighting the model
There are several ways we can approach fixing fairness concerns within a model. Manipulating observed data/labels, implementing fairness constraints, or prejudice removal by regularization are some techniques<sup>1</sup> that have been used to fix fairness concerns. In this case study we will reweight the model by implementing a custom loss function into Keras.
The code below is the same as the trainer module above, with the exception of a new class called LogisticEndpoint that we will use for our loss within Keras and a few parameter changes.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, N. (2019). A Survey on Bias and Fairness in Machine Learning. https://arxiv.org/pdf/1908.09635.pdf
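The core reweighting idea — scaling each example's contribution to the loss — can be sketched without TensorFlow. This is a toy illustration of the concept with made-up numbers, not the LogisticEndpoint implementation itself:

```python
import math

def weighted_bce(y_true, logits, weights):
    """Weighted mean binary cross-entropy computed from raw logits."""
    total = 0.0
    for y, z, w in zip(y_true, logits, weights):
        p = 1.0 / (1.0 + math.exp(-z))        # sigmoid
        total += -w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / sum(weights)

y_true = [1, 0, 1, 0]
logits = [2.0, -1.5, 0.3, -0.2]
uniform = [1.0, 1.0, 1.0, 1.0]
upweighted = [1.0, 1.0, 1.0, 3.0]             # emphasize the last example

print(weighted_bce(y_true, logits, uniform))
print(weighted_bce(y_true, logits, upweighted))
```

Upweighting examples from a poorly-served group forces the optimizer to pay more attention to errors on that group, which is the mechanism behind the custom loss below.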
End of explanation
"""
trainer_weighted = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000)
)
context.run(trainer_weighted)
# Again, we will run TensorFlow Model Analysis and load Fairness Indicators
# to examine the performance change in our weighted model.
model_analyzer_weighted = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer_weighted.outputs['model'],
eval_config = text_format.Parse("""
model_specs {
label_key: 'is_recid'
}
metrics_specs {
metrics {class_name: 'BinaryAccuracy'}
metrics {class_name: 'AUC'}
metrics {
class_name: 'FairnessIndicators'
config: '{"thresholds": [0.25, 0.5, 0.75]}'
}
}
slicing_specs {
feature_keys: 'race'
}
""", tfma.EvalConfig())
)
context.run(model_analyzer_weighted)
evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri
eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)
multi_eval_results = {
'Unweighted Model': eval_result,
'Weighted Model': eval_result_weighted
}
tfma.addons.fairness.view.widget_view.render_fairness_indicator(
multi_eval_results=multi_eval_results)
"""
Explanation: Retrain the TFX model with the weighted model
In this next part we will use the weighted Transform Component to rerun the same Trainer model as before to see the improvement in fairness after the weighting is applied.
End of explanation
"""
# Pull the URI for the two models that we ran in this case study.
first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri
second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri
# Load the stats for both models.
first_model_stats = tfdv.load_statistics(os.path.join(
    first_model_uri, 'eval/stats_tfrecord/'))
second_model_stats = tfdv.load_statistics(os.path.join(
    second_model_uri, 'eval/stats_tfrecord/'))
# Visualize the statistics between the two models.
tfdv.visualize_statistics(
    lhs_statistics=second_model_stats,
    lhs_name='Sampled Model',
    rhs_statistics=first_model_stats,
    rhs_name='COMPAS Original')
# Add a new note within ML Metadata describing the weighted model.
_NOTE_TO_ADD = 'Weighted model between race and is_recid.'
# Pulling the URI for the weighted trained model.
second_trained_model = store.get_artifacts_by_type('Model')[-1]
# Add the note to ML Metadata.
second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD
store.put_artifacts([second_trained_model])
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))
display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))
"""
Explanation: After retraining our results with the weighted model, we can once again look at the fairness metrics to gauge any improvements in the model. This time, however, we will use the model comparison feature within Fairness Indicators to see the difference between the weighted and unweighted model. Although we’re still seeing some fairness concerns with the weighted model, the discrepancy is far less pronounced.
The drawback, however, is that our AUC and binary accuracy have also dropped after weighting the model.
False Positive Rate @ 0.75
African-American: ~1%
AUC: 0.47
Binary Accuracy: 0.59
Caucasian: ~0%
AUC: 0.47
Binary Accuracy: 0.58
Examine the data of the second run
Finally, we can visualize the data with TensorFlow Data Validation and overlay the data changes between the two models and add an additional note to the ML Metadata indicating that this model has improved the fairness concerns.
End of explanation
"""
import re
# List of patterns to search for
patterns = [ 'term1', 'term2' ]
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
for pattern in patterns:
print 'Searching for "%s" in: \n"%s"' % (pattern, text),
#Check for match
if re.search(pattern, text):
print '\n'
print 'Match was found. \n'
else:
print '\n'
print 'No Match was found.\n'
"""
Explanation: Regular Expressions
Regular expressions are text matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).
If you're familiar with Perl, you'll notice that the syntax for regular expressions is very similar in Python. We will be using the re module with Python for this lecture.
Let's get started!
Searching for Patterns in Text
One of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
End of explanation
"""
# List of patterns to search for
pattern = 'term1'
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
match = re.search(pattern, text)
type(match)
"""
Explanation: Now we've seen that re.search() will take the pattern, scan the text, and then return a Match object. If no match is found, None is returned. To give a clearer picture of this match object, check out the cell below:
End of explanation
"""
# Show start of match
match.start()
# Show end
match.end()
"""
Explanation: This Match object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
End of explanation
"""
# Term to split on
split_term = '@'
phrase = 'What is the domain name of someone with the email: hello@gmail.com'
# Split the phrase
re.split(split_term,phrase)
"""
Explanation: Split with regular expressions
Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
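One thing re.split() can do that the plain string split() method cannot is split on several delimiters at once, using a character set. For instance:

```python
import re

# Split on either '@' or '.' in a single pass
parts = re.split('[@.]', 'hello@gmail.com')
print(parts)
```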
End of explanation
"""
# Returns a list of all matches
re.findall('match','test phrase match is in middle')
"""
Explanation: Note how re.split() returns a list with the term to split on removed, and the terms in the list are a split-up version of the string. Create a couple more examples for yourself to make sure you understand!
Finding all instances of a pattern
You can use re.findall() to find all the instances of a pattern in a string. For example:
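A couple more quick findall() examples:

```python
import re

singles = re.findall('o', 'hello world')   # every occurrence of a single character
runs = re.findall('l+', 'hello world')     # greedy runs of 'l'
print(singles)
print(runs)
```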
End of explanation
"""
def multi_re_find(patterns,phrase):
'''
Takes in a list of regex patterns
Prints a list of all matches
'''
for pattern in patterns:
print 'Searching the phrase using the re check: %r' %pattern
print re.findall(pattern,phrase)
print '\n'
"""
Explanation: Pattern re Syntax
This will be the bulk of this lecture on using re with Python. Regular expressions supports a huge variety of patterns the just simply finding where a single string occurred.
We can use metacharacters along with re to find specific types of patterns.
Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
End of explanation
"""
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ 'sd*', # s followed by zero or more d's
'sd+', # s followed by one or more d's
'sd?', # s followed by zero or one d's
'sd{3}', # s followed by three d's
'sd{2,3}', # s followed by two to three d's
]
multi_re_find(test_patterns,test_phrase)
"""
Explanation: Repetition Syntax
There are five ways to express repetition in a pattern:
1.) A pattern followed by the metacharacter * is repeated zero or more times.
2.) Replace the * with + and the pattern must appear at least once.
3.) Using ? means the pattern appears zero or one time.
4.) For a specific number of occurrences, use {m} after the pattern, where m is replaced with the number of times the pattern should repeat.
5.) Use {m,n} where m is the minimum number of repetitions and n is the maximum. Leaving out n ({m,}) means the value appears at least m times, with no maximum.
Now we will see an example of each of these using our multi_re_find function:
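One repetition form the test patterns don't exercise is the open-ended {m,}; as a quick sketch, reusing the same test phrase:

```python
import re

test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
# s followed by at least two d's, with no upper bound
matches = re.findall('sd{2,}', test_phrase)
print(matches)
```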
End of explanation
"""
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ '[sd]', # either s or d
's[sd]+'] # s followed by one or more s or d
multi_re_find(test_patterns,test_phrase)
"""
Explanation: Character Sets
Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input [ab] searches for occurrences of either a or b.
Let's see some examples:
End of explanation
"""
test_phrase = 'This is a string! But it has punctutation. How can we remove it?'
"""
Explanation: It makes sense that the first [sd] returns every instance. Also, the second input will just return anything starting with an s in this particular case of the test phrase input.
Exclusion
We can use ^ to exclude terms by incorporating it into the bracket syntax notation. For example: [^...] will match any single character not in the brackets. Let's see some examples:
End of explanation
"""
re.findall('[^!.? ]+',test_phrase)
"""
Explanation: Use [^!.? ] to check for matches that are not a !, ., ?, or space. Add the + to check that the match appears at least once; this basically translates into finding the words.
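The same exclusion idea works with any character class; for instance, stripping digits and spaces:

```python
import re

# Keep only runs of characters that are neither digits nor spaces
words = re.findall('[^0-9 ]+', 'room 101 is closed')
print(words)
```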
End of explanation
"""
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'
test_patterns=[ '[a-z]+', # sequences of lower case letters
'[A-Z]+', # sequences of upper case letters
'[a-zA-Z]+', # sequences of lower or upper case letters
'[A-Z][a-z]+'] # one upper case letter followed by lower case letters
multi_re_find(test_patterns,test_phrase)
"""
Explanation: Character Ranges
As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is [start-end].
Common use cases are to search for a specific range of letters in the alphabet; for example, [a-f] would return matches with any instance of letters between a and f.
Let's walk through some examples:
End of explanation
"""
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'
test_patterns=[ r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric characters
r'\W+', # non-alphanumeric
]
multi_re_find(test_patterns,test_phrase)
"""
Explanation: Escape Codes
You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits,whitespace, and more. For example:
<table border="1" class="docutils">
<colgroup>
<col width="14%" />
<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Code</th>
<th class="head">Meaning</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\d</span></tt></td>
<td>a digit</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\D</span></tt></td>
<td>a non-digit</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\s</span></tt></td>
<td>whitespace (tab, space, newline, etc.)</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\S</span></tt></td>
<td>non-whitespace</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\w</span></tt></td>
<td>alphanumeric</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\W</span></tt></td>
<td>non-alphanumeric</td>
</tr>
</tbody>
</table>
Escapes are indicated by prefixing the character with a backslash (\). Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with r, for creating regular expressions eliminates this problem and maintains readability.
Personally, I think this use of r to escape a backslash is probably one of the things that block someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully after seeing these examples this syntax will become clear.
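A quick sanity check of the raw-string point — '\\d' and r'\d' denote the same two-character pattern; the raw string is just easier to read:

```python
import re

assert '\\d' == r'\d'  # identical patterns, different source spellings
digits = re.findall(r'\d+', 'Order 66 shipped on day 3')
print(digits)
```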
End of explanation
"""
train_X = train.values[:,:-1]
train_t = train.values[:,-1]
print train_X.shape
print train_t.shape
train.describe()
train.head()
train.tail()
"""
Explanation: The number of unique values is huge. This makes me think in a direction where we could center basis functions at the centers of discovered clusters. Discover cluster centers via K-Means?
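As a toy illustration of the K-Means idea (plain Python with made-up 2-D points, not the competition data), one Lloyd iteration assigns each point to its nearest center and then recomputes each center as the mean of its members:

```python
points = [(0.1, 0.2), (0.2, 0.1), (9.8, 9.9), (9.9, 9.7)]
centers = [(0.0, 0.0), (10.0, 10.0)]

def assign(points, centers):
    """Index of the nearest center for each point (squared Euclidean)."""
    def d2(p, c):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
    return [min(range(len(centers)), key=lambda k: d2(p, centers[k]))
            for p in points]

def update(points, labels, k):
    """Recompute each center as the mean of its assigned points."""
    new_centers = []
    for j in range(k):
        members = [p for p, l in zip(points, labels) if l == j]
        new_centers.append(tuple(sum(dim) / len(members)
                                 for dim in zip(*members)))
    return new_centers

labels = assign(points, centers)
centers = update(points, labels, 2)
print(labels)
print(centers)
```

On the real data we would use a mini-batch variant for speed, which is what the batch_size and n_init parameters below are set up for.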
End of explanation
"""
# train['place_id'].value_counts().plot(kind='bar')
# train['place_id'].value_counts().plot(kind='barh')
sb.distplot(train['accuracy'], bins=50, kde=False, rug=True);
sb.distplot(train['accuracy'], hist=False, rug=True);
with sb.axes_style("white"):
sb.jointplot(x=train['x'], y=train['y'], kind="hex", color="k");
"""
Explanation: Null Hypothesis: the plotted joints are identical
End of explanation
"""
with sb.axes_style("white"):
sb.jointplot(x=train['accuracy'], y=train['time'], kind="hex", color="k");
"""
Explanation: We have p = 0.068, so at the conventional 5% level we cannot reject the null hypothesis
End of explanation
"""
col_headers = list(train.columns.values)
print(col_headers)
train[col_headers[1:-1]] = train[col_headers[1:-1]].apply(lambda x: (x - x.min()) / (x.max() - x.min()))
train['accuracy'] = 1 - train['accuracy']
train.describe()
train.head()
train.tail()
train_X_norm = train.values[:,:-1]
print(train_X_norm.shape)
K = uniq  # number of unique place_ids, computed earlier in the notebook
clusters = range(0,K)
batch_size = 500
n_init = 10
"""
Explanation: We have p ≈ 0, hence we reject the null hypothesis
We can also observe that, as time passes, accuracy mostly falls into 3 distinct ranges
1. Analysis
Notes
Essential questions
Did you specify the type of data analytic question (e.g. exploration, association, causality) before touching the data?
We are trying to order the places (i.e. by their likelihood) based on the following measurements from the dataset: coordinates, accuracy (?), time (?) and place_id.
Did you define the metric for success before beginning?
The metric is Mean Average Precision (for this competition, MAP@3: the mean over check-ins of the average precision of the top-3 predicted places).
Did you understand the context for the question and the scientific or business application?
We are building a system that ranks a list of places given 'coords', 'accuracy' and 'time'. The purpose might be to enable specific ads (e.g. interesting places around the hotel) to be shown to the person (on FB?) depending on this list.
Did you record the experimental design?
Given.
Did you consider whether the question could be answered with the available data?
We need to further explore 'accuracy' and to check if we can identify different clusters of users - we don't know if the data was generated by 1 person or many, so we need to check its structure.
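Since the target metric is MAP, here is a small sketch of average precision at k for the case where each check-in has exactly one true place (a hypothetical helper, not part of the competition kit):

```python
def apk(actual, predicted, k=3):
    # Average precision at k when each check-in has a single true place:
    # it reduces to 1/rank of the true place, or 0 if it is not in the top k
    for i, p in enumerate(predicted[:k]):
        if p == actual:
            return 1.0 / (i + 1)
    return 0.0

print(apk(5, [5, 2, 3]))  # 1.0
print(apk(5, [2, 5, 3]))  # 0.5
print(apk(5, [1, 2, 3]))  # 0.0
```

MAP@3 is then the mean of this score over all check-ins in the evaluation set.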
Checking the data
Null values?
No!
What do we know of the measurements?
First column is ID and is useless.
The second and third are the x/y coordinates; they are floating-point values in kilometers, with min (0,0) and max (10,10);
The fourth column is accuracy. Its range is (1, 1033) and it seems to follow a power-law distribution. We assume this is the accuracy of the location given by the GPS; this is supported by the fact that the data comes from mobile devices, which can report an inaccurate location (e.g. inside buildings), so we would like to know the accuracy of each reading. To convert this into a real accuracy, we need to normalize the column and assign it values of (1 - current_val).
The fifth column is time given as a timestamp. What patterns are there?
The last column is the place_id (the class label), given as an integer
2. Pre-processing
End of explanation
"""
random_state = np.random.RandomState(0)
mbk = MiniBatchKMeans(init='k-means++', n_clusters=K, batch_size=batch_size,
                      n_init=n_init, max_no_improvement=10, verbose=True,
                      random_state=random_state)  # use the seeded RandomState defined above
X_kmeans = mbk.fit(train_X_norm)
print "Done!"
"""
Explanation: 2.1 K-Means clustering
End of explanation
"""
import numpy as np
import cv2
from matplotlib import pyplot as plt
X = np.random.randint(25,50,(25,2))
Y = np.random.randint(60,85,(25,2))
Z = np.vstack((X,Y))
# convert to np.float32
Z = np.float32(Z)
print(Z.shape)
# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
# OpenCV 2.4 signature:
# ret, label, center = cv2.kmeans(Z, 2, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# OpenCV 3+ adds a bestLabels argument (pass None to let it be allocated):
ret, label, center = cv2.kmeans(Z, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Now separate the data; note the ravel() to flatten the label array
A = Z[label.ravel()==0]
B = Z[label.ravel()==1]
# Plot the data
plt.scatter(A[:,0],A[:,1])
plt.scatter(B[:,0],B[:,1],c = 'r')
plt.scatter(center[:,0],center[:,1],s = 80,c = 'y', marker = 's')
plt.xlabel('Height')
plt.ylabel('Weight')
plt.show()
"""
Explanation: Note: dataset of 1.3 GB is ginormous! Need to use GPU-powered algorithms ;(
2.2 K-Means clustering (OpenCV)
2.2.1 Test
End of explanation
"""
train_X_norm = train_X_norm.astype(np.float32)
print(train_X_norm.dtype)
print(train_X_norm.shape)
# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center = cv2.kmeans(train_X_norm, K, None, criteria, n_init, cv2.KMEANS_RANDOM_CENTERS)  # OpenCV 3+ signature
print(center.shape)
"""
Explanation: 2.2.2 Run Experiment
End of explanation
"""