repo_name | path | license | content
|---|---|---|---|
tensorflow/docs-l10n | site/en-snapshot/quantum/tutorials/gradients.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install tensorflow==2.7.0
"""
Explanation: Calculate gradients
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/gradients"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/gradients.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial explores gradient calculation algorithms for the expectation values of quantum circuits.
Calculating the gradient of the expectation value of an observable in a quantum circuit is an involved process. Unlike traditional machine learning transformations such as matrix multiplication or vector addition, expectation values of observables do not always have analytic gradient formulas that are easy to write down. As a result, different quantum gradient calculation methods come in handy for different scenarios. This tutorial compares and contrasts two different differentiation schemes.
Setup
End of explanation
"""
!pip install tensorflow-quantum
# Update package resources to account for version changes.
import importlib, pkg_resources
importlib.reload(pkg_resources)
"""
Explanation: Install TensorFlow Quantum:
End of explanation
"""
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
"""
Explanation: Now import TensorFlow and the module dependencies:
End of explanation
"""
qubit = cirq.GridQubit(0, 0)
my_circuit = cirq.Circuit(cirq.Y(qubit)**sympy.Symbol('alpha'))
SVGCircuit(my_circuit)
"""
Explanation: 1. Preliminary
Let's make the notion of gradient calculation for quantum circuits a little more concrete. Suppose you have a parameterized circuit like this one:
End of explanation
"""
pauli_x = cirq.X(qubit)
pauli_x
"""
Explanation: Along with an observable:
End of explanation
"""
def my_expectation(op, alpha):
"""Compute ⟨Y(alpha)| `op` | Y(alpha)⟩"""
params = {'alpha': alpha}
sim = cirq.Simulator()
final_state_vector = sim.simulate(my_circuit, params).final_state_vector
return op.expectation_from_state_vector(final_state_vector, {qubit: 0}).real
my_alpha = 0.3
print("Expectation=", my_expectation(pauli_x, my_alpha))
print("Sin Formula=", np.sin(np.pi * my_alpha))
"""
Explanation: Looking at this operator, you know that $⟨Y(\alpha)| X | Y(\alpha)⟩ = \sin(\pi \alpha)$.
End of explanation
"""
def my_grad(obs, alpha, eps=0.01):
    # Forward-difference approximation of the gradient.
    f_x = my_expectation(obs, alpha)
    f_x_prime = my_expectation(obs, alpha + eps)
    return (f_x_prime - f_x) / eps
print('Finite difference:', my_grad(pauli_x, my_alpha))
print('Cosine formula: ', np.pi * np.cos(np.pi * my_alpha))
"""
Explanation: and if you define $f_{1}(\alpha) = ⟨Y(\alpha)| X | Y(\alpha)⟩$ then $f_{1}^{'}(\alpha) = \pi \cos(\pi \alpha)$. Let's check this:
End of explanation
"""
expectation_calculation = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
"""
Explanation: 2. The need for a differentiator
With larger circuits, you won't always be so lucky to have a formula that precisely calculates the gradients of a given quantum circuit. In the event that a simple formula isn't enough to calculate the gradient, the tfq.differentiators.Differentiator class allows you to define algorithms for computing the gradients of your circuits. For instance you can recreate the above example in TensorFlow Quantum (TFQ) with:
End of explanation
"""
sampled_expectation_calculation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=[[my_alpha]])
"""
Explanation: However, if you switch to estimating the expectation based on sampling (as would happen on a real device), the values can change a little. This means you now have an imperfect estimate:
End of explanation
"""
# Make input_points = [batch_size, 1] array.
input_points = np.linspace(0, 5, 200)[:, np.newaxis].astype(np.float32)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=input_points)
imperfect_outputs = sampled_expectation_calculation(my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=input_points)
plt.title('Forward Pass Values')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.plot(input_points, exact_outputs, label='Analytic')
plt.plot(input_points, imperfect_outputs, label='Sampled')
plt.legend()
# Gradients are a much different story.
values_tensor = tf.convert_to_tensor(input_points)
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=pauli_x,
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = sampled_expectation_calculation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_finite_diff_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_finite_diff_gradients, label='Sampled')
plt.legend()
"""
Explanation: This can quickly compound into a serious accuracy problem when it comes to gradients:
End of explanation
"""
# A smarter differentiation scheme.
gradient_safe_sampled_expectation = tfq.layers.SampledExpectation(
differentiator=tfq.differentiators.ParameterShift())
with tf.GradientTape() as g:
g.watch(values_tensor)
imperfect_outputs = gradient_safe_sampled_expectation(
my_circuit,
operators=pauli_x,
repetitions=500,
symbol_names=['alpha'],
symbol_values=values_tensor)
sampled_param_shift_gradients = g.gradient(imperfect_outputs, values_tensor)
plt.title('Gradient Values')
plt.xlabel('$x$')
plt.ylabel('$f^{\'}(x)$')
plt.plot(input_points, analytic_finite_diff_gradients, label='Analytic')
plt.plot(input_points, sampled_param_shift_gradients, label='Sampled')
plt.legend()
"""
Explanation: Here you can see that although the finite-difference formula is fast at computing gradients in the analytical case, it was far too noisy for the sampling-based methods. More careful techniques must be used to ensure a good gradient can be calculated. Next you will look at a much slower technique that wouldn't be as well suited for analytical expectation gradient calculations, but performs much better in the real-world, sample-based case:
End of explanation
"""
pauli_z = cirq.Z(qubit)
pauli_z
"""
Explanation: From the above you can see that certain differentiators are best suited to particular research scenarios. In general, the slower sample-based methods that are robust to device noise make great differentiators when testing or implementing algorithms in a more "real world" setting. Faster methods like finite difference are great for analytical calculations when you want higher throughput and aren't yet concerned with the device viability of your algorithm.
3. Multiple observables
Let's introduce a second observable and see how TensorFlow Quantum supports multiple observables for a single circuit.
End of explanation
"""
test_value = 0.
print('Finite difference:', my_grad(pauli_z, test_value))
print('Sin formula: ', -np.pi * np.sin(np.pi * test_value))
"""
Explanation: If this observable is used with the same circuit as before, then you have $f_{2}(\alpha) = ⟨Y(\alpha)| Z | Y(\alpha)⟩ = \cos(\pi \alpha)$ and $f_{2}^{'}(\alpha) = -\pi \sin(\pi \alpha)$. Perform a quick check:
End of explanation
"""
sum_of_outputs = tfq.layers.Expectation(
differentiator=tfq.differentiators.ForwardDifference(grid_spacing=0.01))
sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=[[test_value]])
"""
Explanation: It's a match (close enough).
Now if you define $g(\alpha) = f_{1}(\alpha) + f_{2}(\alpha)$ then $g'(\alpha) = f_{1}^{'}(\alpha) + f^{'}_{2}(\alpha)$. Defining more than one observable in TensorFlow Quantum to use along with a circuit is equivalent to adding on more terms to $g$.
This means that the gradient of a particular symbol in a circuit is equal to the sum of the gradients with regards to each observable for that symbol applied to that circuit. This is compatible with TensorFlow gradient taking and backpropagation (where you give the sum of the gradients over all observables as the gradient for a particular symbol).
End of explanation
"""
test_value_tensor = tf.convert_to_tensor([[test_value]])
with tf.GradientTape() as g:
g.watch(test_value_tensor)
outputs = sum_of_outputs(my_circuit,
operators=[pauli_x, pauli_z],
symbol_names=['alpha'],
symbol_values=test_value_tensor)
sum_of_gradients = g.gradient(outputs, test_value_tensor)
print(my_grad(pauli_x, test_value) + my_grad(pauli_z, test_value))
print(sum_of_gradients.numpy())
"""
Explanation: Here you see the first entry is the expectation w.r.t. Pauli-X, and the second is the expectation w.r.t. Pauli-Z. Now when you take the gradient:
End of explanation
"""
class MyDifferentiator(tfq.differentiators.Differentiator):
"""A Toy differentiator for <Y^alpha | X |Y^alpha>."""
def __init__(self):
pass
def get_gradient_circuits(self, programs, symbol_names, symbol_values):
"""Return circuits to compute gradients for given forward pass circuits.
Every gradient on a quantum computer can be computed via measurements
of transformed quantum circuits. Here, you implement a custom gradient
for a specific circuit. For a real differentiator, you will need to
implement this function in a more general way. See the differentiator
implementations in the TFQ library for examples.
"""
# The two terms in the derivative are the same circuit...
batch_programs = tf.stack([programs, programs], axis=1)
# ... with shifted parameter values.
shift = tf.constant(1/2)
forward = symbol_values + shift
backward = symbol_values - shift
batch_symbol_values = tf.stack([forward, backward], axis=1)
# Weights are the coefficients of the terms in the derivative.
num_program_copies = tf.shape(batch_programs)[0]
batch_weights = tf.tile(tf.constant([[[np.pi/2, -np.pi/2]]]),
[num_program_copies, 1, 1])
# The index map simply says which weights go with which circuits.
batch_mapper = tf.tile(
tf.constant([[[0, 1]]]), [num_program_copies, 1, 1])
return (batch_programs, symbol_names, batch_symbol_values,
batch_weights, batch_mapper)
"""
Explanation: Here you have verified that the sum of the gradients for each observable is indeed the gradient of $\alpha$. This behavior is supported by all TensorFlow Quantum differentiators and plays a crucial role in the compatibility with the rest of TensorFlow.
4. Advanced usage
All differentiators that exist inside of TensorFlow Quantum subclass tfq.differentiators.Differentiator. To implement a differentiator, a user must implement one of two interfaces. The standard is to implement get_gradient_circuits, which tells the base class which circuits to measure to obtain an estimate of the gradient. Alternatively, you can overload differentiate_analytic and differentiate_sampled; the class tfq.differentiators.Adjoint takes this route.
The following uses TensorFlow Quantum to implement the gradient of a circuit. You will use a small example of parameter shifting.
Recall the circuit you defined above, $|\alpha⟩ = Y^{\alpha}|0⟩$. As before, you can define a function as the expectation value of this circuit against the $X$ observable, $f(\alpha) = ⟨\alpha|X|\alpha⟩$. Using parameter shift rules, for this circuit, you can find that the derivative is
$$\frac{\partial}{\partial \alpha} f(\alpha) = \frac{\pi}{2} f\left(\alpha + \frac{1}{2}\right) - \frac{ \pi}{2} f\left(\alpha - \frac{1}{2}\right)$$
The get_gradient_circuits function returns the components of this derivative.
End of explanation
"""
custom_dif = MyDifferentiator()
custom_grad_expectation = tfq.layers.Expectation(differentiator=custom_dif)
# Now let's get the gradients with finite diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
exact_outputs = expectation_calculation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
analytic_finite_diff_gradients = g.gradient(exact_outputs, values_tensor)
# Now let's get the gradients with custom diff.
with tf.GradientTape() as g:
g.watch(values_tensor)
my_outputs = custom_grad_expectation(my_circuit,
operators=[pauli_x],
symbol_names=['alpha'],
symbol_values=values_tensor)
my_gradients = g.gradient(my_outputs, values_tensor)
plt.subplot(1, 2, 1)
plt.title('Exact Gradient')
plt.plot(input_points, analytic_finite_diff_gradients.numpy())
plt.xlabel('x')
plt.ylabel('f(x)')
plt.subplot(1, 2, 2)
plt.title('My Gradient')
plt.plot(input_points, my_gradients.numpy())
plt.xlabel('x')
"""
Explanation: The Differentiator base class uses the components returned from get_gradient_circuits to calculate the derivative, as in the parameter shift formula you saw above. This new differentiator can now be used with existing tfq.layer objects:
End of explanation
"""
# Create a noisy sample based expectation op.
expectation_sampled = tfq.get_sampled_expectation_op(
cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01)))
# Make it differentiable with your differentiator:
# Remember to refresh the differentiator before attaching the new op
custom_dif.refresh()
differentiable_op = custom_dif.generate_differentiable_op(
sampled_op=expectation_sampled)
# Prep op inputs.
circuit_tensor = tfq.convert_to_tensor([my_circuit])
op_tensor = tfq.convert_to_tensor([[pauli_x]])
single_value = tf.convert_to_tensor([[my_alpha]])
num_samples_tensor = tf.convert_to_tensor([[5000]])
with tf.GradientTape() as g:
g.watch(single_value)
forward_output = differentiable_op(circuit_tensor, ['alpha'], single_value,
op_tensor, num_samples_tensor)
my_gradients = g.gradient(forward_output, single_value)
print('---TFQ---')
print('Forward: ', forward_output.numpy())
print('Gradient:', my_gradients.numpy())
print('---Original---')
print('Forward: ', my_expectation(pauli_x, my_alpha))
print('Gradient:', my_grad(pauli_x, my_alpha))
"""
Explanation: This new differentiator can now be used to generate differentiable ops.
Key Point: A differentiator that has been previously attached to an op must be refreshed before attaching to a new op, because a differentiator may only be attached to one op at a time.
End of explanation
"""
|
gaufung/PythonStandardLibrary | cryptography/Hashlib.ipynb | mit | import hashlib
print('Guaranteed:\n{}\n'.format(
', '.join(sorted(hashlib.algorithms_guaranteed))))
print('Available:\n{}'.format(
', '.join(sorted(hashlib.algorithms_available))))
import hashlib
lorem = '''Lorem ipsum dolor sit amet, consectetur adipisicing
elit, sed do eiusmod tempor incididunt ut labore et dolore magna
aliqua. Ut enim ad minim veniam, quis nostrud exercitation
ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis
aute irure dolor in reprehenderit in voluptate velit esse cillum
dolore eu fugiat nulla pariatur. Excepteur sint occaecat
cupidatat non proident, sunt in culpa qui officia deserunt
mollit anim id est laborum.'''
"""
Explanation: The hashlib module defines an API for accessing different cryptographic hashing algorithms. To work with a specific hash algorithm, use the appropriate constructor function or new() to create a hash object. From there, the objects use the same API, no matter what algorithm is being used.
Hash Algorithms
End of explanation
"""
import hashlib
h = hashlib.md5()
h.update(lorem.encode('utf8'))
print(h.hexdigest())
"""
Explanation: MD5 Example
End of explanation
"""
import hashlib
h = hashlib.sha1()
h.update(lorem.encode('utf-8'))
print(h.hexdigest())
"""
Explanation: SHA-1 Example
End of explanation
"""
import hashlib
h = hashlib.new('sha256')
h.update(lorem.encode('utf-8'))
print(h.hexdigest())
"""
Explanation: Creating a Hash by Name
End of explanation
"""
import hashlib
h = hashlib.md5()
h.update(lorem.encode('utf-8'))
all_at_once = h.hexdigest()
def chunkize(size, text):
"Return parts of the text in size-based increments."
start = 0
while start < len(text):
chunk = text[start:start + size]
yield chunk
start += size
return
h = hashlib.md5()
for chunk in chunkize(64, lorem.encode('utf-8')):
h.update(chunk)
line_by_line = h.hexdigest()
print('All at once :', all_at_once)
print('Line by line:', line_by_line)
print('Same :', (all_at_once == line_by_line))
"""
Explanation: Incremental Updates
End of explanation
"""
|
Unidata/unidata-python-workshop | notebooks/MetPy_Advanced/Isentropic Analysis.ipynb | mit | from siphon.catalog import TDSCatalog
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/'
'NCEP/GFS/Global_0p5deg/catalog.xml')
best = cat.datasets['Best GFS Half Degree Forecast Time Series']
"""
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Advanced MetPy: Isentropic Analysis</h1>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
Overview:
Teaching: 30 minutes
Exercises: 30 minutes
Objectives
<a href="#download">Download GFS output from TDS</a>
<a href="#interpolation">Interpolate GFS output to an isentropic level</a>
<a href="#ascent">Calculate regions of isentropic ascent and descent</a>
<a name="download"></a>
Downloading GFS Output
First we need some grids of values to work with. We can do this by downloading information from the latest run of the GFS available on Unidata's THREDDS data server. First we access the catalog for the half-degree GFS output, and look for the dataset called the "Best GFS Half Degree Forecast Time Series". This dataset combines multiple sets of model runs to yield a time series of output with the shortest forecast offset.
End of explanation
"""
subset_access = best.subset()
query = subset_access.query()
"""
Explanation: Next, we set up access to request subsets of data from the model. This uses the NetCDF Subset Service (NCSS) to make requests from the GRIB collection and get results in netCDF format.
End of explanation
"""
sorted(v for v in subset_access.variables if v.endswith('isobaric'))
"""
Explanation: Let's see what variables are available. Instead of just printing subset_access.variables, we can ask Python to only display variables that end with "isobaric", which is how the TDS denotes GRIB fields that are specified on isobaric levels.
End of explanation
"""
from datetime import datetime
query.time(datetime.utcnow())
query.variables('Temperature_isobaric', 'Geopotential_height_isobaric',
'u-component_of_wind_isobaric', 'v-component_of_wind_isobaric',
'Relative_humidity_isobaric')
query.lonlat_box(west=-130, east=-50, south=10, north=60)
query.accept('netcdf4')
"""
Explanation: Now we put together the "query"--the way we ask for data we want. We give ask for a wide box of data over the U.S. for the time step that's closest to now. We also request temperature, height, winds, and relative humidity. By asking for netCDF4 data, the result is compressed, so the download is smaller.
End of explanation
"""
nc = subset_access.get_data(query)
"""
Explanation: Now all that's left is to actually make the request for data:
End of explanation
"""
from xarray.backends import NetCDF4DataStore
import xarray as xr
ds = xr.open_dataset(NetCDF4DataStore(nc))
"""
Explanation: Open the returned netCDF data using XArray:
End of explanation
"""
import metpy.calc as mpcalc
from metpy.units import units
import numpy as np
"""
Explanation: <a name="interpolation"></a>
Isentropic Interpolation
Now let's take what we've downloaded, and use it to make an isentropic map. In this case, we're interpolating from one vertical coordinate, pressure, to another: potential temperature. MetPy has a function isentropic_interpolation that can do this for us. First, let's start with a few useful imports.
End of explanation
"""
ds = ds.metpy.parse_cf()
temperature = ds['Temperature_isobaric'][0]
data_proj = temperature.metpy.cartopy_crs
"""
Explanation: Let's parse out the metadata for the isobaric temperature and get the projection information. We also index with 0 to get the first, and only, time:
End of explanation
"""
lat = temperature.metpy.y
lon = temperature.metpy.x
# Need to adjust units on humidity because '%' causes problems
ds['Relative_humidity_isobaric'].attrs['units'] = 'percent'
rh = ds['Relative_humidity_isobaric'][0]
height = ds['Geopotential_height_isobaric'][0]
u = ds['u-component_of_wind_isobaric'][0]
v = ds['v-component_of_wind_isobaric'][0]
# Can have different vertical levels for wind and thermodynamic variables
# Find and select the common levels
press = temperature.metpy.vertical
common_levels = np.intersect1d(press, u.metpy.vertical)
temperature = temperature.metpy.sel(vertical=common_levels)
u = u.metpy.sel(vertical=common_levels)
v = v.metpy.sel(vertical=common_levels)
# Get common pressure levels as a data array
press = press.metpy.sel(vertical=common_levels)
"""
Explanation: Let's pull out the grids out into some shorter variable names.
End of explanation
"""
isen_level = np.array([320]) * units.kelvin
isen_press, isen_u, isen_v = mpcalc.isentropic_interpolation(isen_level, press,
temperature, u, v)
"""
Explanation: Next, we perform the isentropic interpolation. At a minimum, this must be given one or more isentropic levels, the 3-D temperature field, and the pressure levels of the original field; it then returns the 3D array of pressure values (2D slices for each isentropic level). You can also pass additional fields, which will be interpolated to these levels as well. Below, we interpolate the winds (and pressure) to the 320K isentropic level:
End of explanation
"""
# Need to squeeze() out the size-1 dimension for the isentropic level
isen_press = isen_press.squeeze()
isen_u = isen_u.squeeze()
isen_v = isen_v.squeeze()
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
# Create a plot and basic map projection
fig = plt.figure(figsize=(14, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal(central_longitude=-100))
# Contour the pressure values for the isentropic level. We keep the handle
# for the contour so that we can have matplotlib label the contours
levels = np.arange(300, 1000, 25)
cntr = ax.contour(lon, lat, isen_press, transform=data_proj,
colors='black', levels=levels)
cntr.clabel(fmt='%d')
# Set up slices to subset the wind barbs--the slices below are the same as `::5`
# We put these here so that it's easy to change and keep all of the ones below matched
# up.
lon_slice = slice(None, None, 5)
lat_slice = slice(None, None, 3)
ax.barbs(lon[lon_slice], lat[lat_slice],
isen_u[lat_slice, lon_slice].to('knots').magnitude,
isen_v[lat_slice, lon_slice].to('knots').magnitude,
transform=data_proj, zorder=2)
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linewidth=2)
ax.add_feature(cfeature.STATES, linestyle=':')
ax.set_extent((-120, -70, 25, 55), crs=data_proj)
"""
Explanation: Let's plot the results and see what it looks like:
End of explanation
"""
# Needed to make numpy broadcasting work between 1D pressure and other 3D arrays
# Use .metpy.unit_array to get numpy array with units rather than xarray DataArray
pressure_for_calc = press.metpy.unit_array[:, None, None]
#
# YOUR CODE: Calculate mixing ratio using something from mpcalc
#
# Take the return and convert manually to units of 'dimenionless'
#mixing.ito('dimensionless')
#
# YOUR CODE: Interpolate all the data
#
# Squeeze the returned arrays
#isen_press = isen_press.squeeze()
#isen_mixing = isen_mixing.squeeze()
#isen_u = isen_u.squeeze()
#isen_v = isen_v.squeeze()
# Create Plot -- same as before
fig = plt.figure(figsize=(14, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal(central_longitude=-100))
levels = np.arange(300, 1000, 25)
cntr = ax.contour(lon, lat, isen_press, transform=data_proj,
colors='black', levels=levels)
cntr.clabel(fmt='%d')
lon_slice = slice(None, None, 8)
lat_slice = slice(None, None, 8)
ax.barbs(lon[lon_slice], lat[lat_slice],
isen_u[lat_slice, lon_slice].to('knots').magnitude,
isen_v[lat_slice, lon_slice].to('knots').magnitude,
transform=data_proj, zorder=2)
#
# YOUR CODE: Contour/Contourf the mixing ratio values
#
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linewidth=2)
ax.add_feature(cfeature.STATES, linestyle=':')
ax.set_extent((-120, -70, 25, 55), crs=data_proj)
"""
Explanation: Exercise
Let's add some moisture information to this plot. Feel free to choose a different isentropic level.
Calculate the mixing ratio (using the appropriate function from mpcalc)
Call isentropic_interpolation with mixing ratio--you should copy the one from above and add mixing ratio to the call so that it interpolates everything.
contour (in green) or contourf your moisture information on the map alongside pressure
You'll want to refer to the MetPy API documentation to see what calculation functions would help you.
End of explanation
"""
# %load solutions/mixing.py
"""
Explanation: Solution
End of explanation
"""
isen_press = mpcalc.smooth_gaussian(isen_press.squeeze(), 9)
isen_u = mpcalc.smooth_gaussian(isen_u.squeeze(), 9)
isen_v = mpcalc.smooth_gaussian(isen_v.squeeze(), 9)
"""
Explanation: <a name="ascent"></a>
Calculating Isentropic Ascent
Air flow across isobars on an isentropic surface represents vertical motion. We can use MetPy to calculate this ascent for us.
Since calculating this involves taking derivatives, let's first smooth the input fields with a Gaussian filter (smooth_gaussian).
End of explanation
"""
# Use .values because we don't care about using DataArray
dx, dy = mpcalc.lat_lon_grid_deltas(lon.values, lat.values)
"""
Explanation: Next, we need to take our grid point locations which are in degrees, and convert them to grid spacing in meters--this is what we need to pass to functions taking derivatives.
End of explanation
"""
lift = -mpcalc.advection(isen_press, [isen_u, isen_v], [dx, dy], dim_order='yx')
"""
Explanation: Now we can calculate the isentropic ascent. $\omega$ is given by:
$$\omega = \left(\frac{\partial P}{\partial t}\right)_\theta + \vec{V} \cdot \nabla P + \frac{\partial P}{\partial \theta}\frac{d\theta}{dt}$$
Note, the second term of the above equation is just pressure advection (negated). Therefore, we can use MetPy to calculate this as:
End of explanation
"""
# YOUR CODE GOES HERE
"""
Explanation: Exercise
Use contourf to plot the isentropic lift alongside the isobars and wind barbs. You probably want to convert the values of lift to microbars/s.
End of explanation
"""
# %load solutions/lift.py
"""
Explanation: Solution
End of explanation
"""
|
zizouvb/deeplearning | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
# TODO: Implement Function
    # source_id_text and target_id_text are lists of lists, where each inner
    # list represents one line of text; that's why we first split on '\n'.
    source_list = source_text.split('\n')
    target_list = target_text.split('\n')
# Filling the lists
source_id_text = list()
target_id_text = list()
for i in range(len(source_list)):
source_id_text_temp = list()
target_id_text_temp = list()
for word in source_list[i].split():
source_id_text_temp.append(source_vocab_to_int[word])
for word in target_list[i].split():
target_id_text_temp.append(target_vocab_to_int[word])
# We need to add EOS for target
target_id_text_temp.append(target_vocab_to_int['<EOS>'])
source_id_text.append(source_id_text_temp)
target_id_text.append(target_id_text_temp)
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
# TODO: Implement Function
# Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
inputs = tf.placeholder(tf.int32,[None,None], name = "input")
# Targets placeholder with rank 2.
targets = tf.placeholder(tf.int32,[None,None], name = "target")
# Learning rate placeholder with rank 0.
learning_rate = tf.placeholder(tf.float32, name = "learning_rate")
# Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
keep_probability = tf.placeholder(tf.float32, name = "keep_prob")
# Target sequence length placeholder named "target_sequence_length" with rank 1
target_sequence_length = tf.placeholder(tf.int32,[None], name = "target_sequence_length")
# Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
max_target_sequence_length = tf.reduce_max(target_sequence_length, name = "max_target_len")
# Source sequence length placeholder named "source_sequence_length" with rank 1
source_sequence_length = tf.placeholder(tf.int32, [None], name = "source_sequence_length")
    # Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
return inputs, targets, learning_rate, keep_probability, target_sequence_length, max_target_sequence_length, source_sequence_length
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
    :param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
    # Remove the last word id from each batch in target_data
    target_data = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    # Concatenate the GO ID to the beginning of each batch
decoder_input = tf.concat([tf.fill([batch_size,1],target_vocab_to_int['<GO>']),target_data],1)
return decoder_input
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
"""
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# TODO: Implement Function
# Embed the encoder input using tf.contrib.layers.embed_sequence
    inputs_embedded = tf.contrib.layers.embed_sequence(
        ids=rnn_inputs,
        vocab_size=source_vocab_size,
        embed_dim=encoding_embedding_size)
    # Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
    cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers)])
    cell_dropout = tf.contrib.rnn.DropoutWrapper(cell, keep_prob)
    # Pass cell and embedded input to tf.nn.dynamic_rnn()
    rnn_output, rnn_state = tf.nn.dynamic_rnn(
        cell=cell_dropout,
        inputs=inputs_embedded,
        sequence_length=source_sequence_length,
        dtype=tf.float32)
    return rnn_output, rnn_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# TODO: Implement Function
# Create a tf.contrib.seq2seq.TrainingHelper
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = dec_embed_input,
sequence_length = target_sequence_length)
# Create a tf.contrib.seq2seq.BasicDecoder
basic_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = dec_cell,
helper = training_helper,
initial_state = encoder_state,
output_layer = output_layer)
# Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
BasicDecoderOutput = tf.contrib.seq2seq.dynamic_decode(
decoder = basic_decoder,
impute_finished = True,
maximum_iterations = max_summary_length
)
return BasicDecoderOutput[0]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# TODO: Implement Function
    # Create a batch of start tokens by replicating start_of_sequence_id batch_size times.
start_tokens = tf.tile(tf.constant([start_of_sequence_id],dtype = tf.int32),[batch_size], name = 'start_tokens' )
# Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
embedding_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
embedding = dec_embeddings,
start_tokens = start_tokens,
end_token = end_of_sequence_id)
# Create a tf.contrib.seq2seq.BasicDecoder
basic_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = dec_cell,
helper = embedding_helper,
initial_state = encoder_state,
output_layer = output_layer)
# Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
BasicDecoderOutput = tf.contrib.seq2seq.dynamic_decode(
decoder = basic_decoder,
impute_finished = True,
maximum_iterations = max_target_sequence_length)
return BasicDecoderOutput[0]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
# Embed the target sequences
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# Construct the decoder LSTM cell (just like you constructed the encoder cell above)
cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.LSTMCell(rnn_size) for _ in range(num_layers) ])
cell_dropout = tf.contrib.rnn.DropoutWrapper(cell, keep_prob)
# Create an output layer to map the outputs of the decoder to the elements of our vocabulary
output_layer = Dense(target_vocab_size)
    # Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length,
# max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
with tf.variable_scope("decode"):
Training_BasicDecoderOutput = decoding_layer_train(encoder_state,
cell_dropout,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
# Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
# end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)
# function to get the inference logits.
with tf.variable_scope("decode", reuse=True):
Inference_BasicDecoderOutput = decoding_layer_infer(encoder_state,
cell_dropout,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return Training_BasicDecoderOutput, Inference_BasicDecoderOutput
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
    :param max_target_sentence_length: Maximum length of target sentences
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
# Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
rnn_output , rnn_state = encoding_layer(input_data,
rnn_size,
num_layers,
keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
# Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
decoder_input = process_decoder_input(target_data,
target_vocab_to_int,
batch_size)
# Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length,
# rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
Training_BasicDecoderOutput, Inference_BasicDecoderOutput = decoding_layer(
decoder_input,
rnn_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
return Training_BasicDecoderOutput, Inference_BasicDecoderOutput
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size (vocabulary of 227 English words)
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.75
display_step = 10
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
Explanation: Batch and pad the source and target sequences
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
    # Convert the sentence to lowercase and split it into words
    list_words = sentence.lower().split()
# Convert words into ids using vocab_to_int
list_words_int = list()
for word in list_words:
# Convert words not in the vocabulary, to the <UNK> word id.
if word not in vocab_to_int:
list_words_int.append(vocab_to_int['<UNK>'])
else:
list_words_int.append(vocab_to_int[word])
return list_words_int
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
"""
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
# Source notebook: open-forcefield-group/openforcefield, examples/deprecated/host_guest_simulation/smirnoff_host_guest.ipynb (MIT license)
# NBVAL_SKIP
from openeye import oechem # OpenEye Python toolkits
import oenotebook as oenb
# Check license
print("Is your OEChem licensed? ", oechem.OEChemIsLicensed())
from openeye import oeomega # Omega toolkit
from openeye import oequacpac #Charge toolkit
from openeye import oedocking # Docking toolkit
from oeommtools import utils as oeommutils # Tools for OE/OpenMM
from simtk import unit #Unit handling for OpenMM
from simtk.openmm import app
from simtk.openmm.app import PDBFile
from openff.toolkit.typing.engines.smirnoff import *
import os
from pdbfixer import PDBFixer # for solvating
"""
Explanation: Host-guest complex setup and simulation using SMIRNOFF
This notebook takes a SMILES string for a guest and a 3D structure for a host, and generates an initial structure of the complex using docking. It then proceeds to solvate, parameterize the system, and then minimize and do a short simulation with OpenMM.
Please note this is intended for educational purposes and comprises a worked example, not a polished tool. The usual disclaimers apply -- don't take anything here as advice on how you should set up these types of systems; this is just an example of setting up a nontrivial system with SMIRNOFF.
Author - David Mobley (UC Irvine)
Prerequisites
Before beginning, you're going to need a license to the OpenEye toolkits (free for academics), and have these installed and working, ideally in your anaconda Python distribution. Then you'll also need the openff-toolkit package installed, which you can do via conda install -c conda-forge openff-toolkit if you are using anaconda Python.
You'll also need the oenotebook OpenEye Jupyter notebook library installed, such as via pip install -i https://pypi.anaconda.org/openeye/simple openeye-oenotebook (on some platforms this fails and you may need to install instead from https://pypi.anaconda.org/openeye/label/beta/simple/; see the troubleshooting tips further down), and oeommtools, a library for working with OpenEye/OpenMM in conjunction, which is installable via conda install -c OpenEye/label/Orion -c omnia oeommtools. You also need pdbfixer.
A possibly complete set of installation instructions is to download Anaconda 3 and then do something like this:
./Anaconda3-4.4.0-Linux-x86_64.sh
conda update --all
conda install -c OpenEye/label/Orion -c omnia oeommtools
conda install -c conda-forge openmm openff-toolkit nglview pdbfixer
pip install -i https://pypi.anaconda.org/openeye/simple openeye-toolkits
pip install -i https://pypi.anaconda.org/openeye/simple/openeye-oenotebook
For nglview viewing of 3D structures within the notebook, you will likely also need to run jupyter nbextension install --py nglview-js-widgets --user and jupyter-nbextension enable nglview --py --sys-prefix.
Some platforms may have issues with the openeye-oenotebook installation, so a workaround may be something like pip install https://pypi.anaconda.org/openeye/label/beta/simple/openeye-oenotebook/0.8.1/OpenEye_oenotebook-0.8.1-py2.py3-none-any.whl.
Import some tools we need initially
(Let's do this early so you can fail quickly if you don't have the tools you need)
End of explanation
"""
# Where will we write outputs? Directory will be created if it does not exist
datadir = 'datafiles'
# Where will we download the host file from? The below is an uncharged host
#host_source = 'https://raw.githubusercontent.com/MobleyLab/SAMPL6/master/host_guest/OctaAcidsAndGuests/OA.mol2' #octa acid
# Use file provided in this directory - already charged
host_source = 'OA.mol2'
# What SMILES string for the guest? Should be isomeric SMILES
guest_smiles = 'OC(CC1CCCC1)=O' # Use cyclopentyl acetic acid, the first SAMPL6 octa acid guest
# Another useful source of host-guest files is the benchmarksets repo, e.g. github.com/mobleylab/benchmarksets
# This notebook has also been tested on CB7 Set 1 host-cb7.mol2 with SMILES CC12CC3CC(C1)(CC(C3)(C2)[NH3+])C.
"""
Explanation: Configuration for your run
We'll use this to configure where to get input files, where to write output files, etc.
End of explanation
"""
# NBVAL_SKIP
# Create empty OEMol
mol = oechem.OEMol()
# Convert SMILES
oechem.OESmilesToMol(mol, guest_smiles)
# Draw
oenb.draw_mol(mol)
"""
Explanation: Quickly draw your guest and make sure it's what you intended
OENotebook is super useful and powerful; see https://www.eyesopen.com/notebooks-directory. Here we only use a very small amount of what's available, drawing on http://notebooks.eyesopen.com/introduction-to-oenb.html
End of explanation
"""
# NBVAL_SKIP
# Output host and guest files
hostfile = os.path.join(datadir, 'host.mol2')
guestfile = os.path.join(datadir, 'guest.mol2')
# Create data dir if not present
if not os.path.isdir(datadir):
os.mkdir(datadir)
# Set host file name and retrieve file
if 'http' in host_source:
import urllib
urllib.request.urlretrieve(host_source, hostfile)
else:
import shutil
shutil.copy(host_source, hostfile)
"""
Explanation: Get host file and prep it for docking
(Note that we are going to skip charge assignment for the purposes of this example, because it's slow. So you want to use an input file which has provided charges, OR add charge assignment.)
Retrieve host file, do file bookkeeping
End of explanation
"""
# NBVAL_SKIP
# Read in host file
ifile = oechem.oemolistream(hostfile)
host = oechem.OEMol()
oechem.OEReadMolecule( ifile, host)
ifile.close()
# Prepare a receptor - Start by getting center of mass to use as a hint for where to dock
com = oechem.OEFloatArray(3)
oechem.OEGetCenterOfMass(host, com)
# Create receptor, as per https://docs.eyesopen.com/toolkits/python/dockingtk/receptor.html#creating-a-receptor
receptor = oechem.OEGraphMol()
oedocking.OEMakeReceptor(receptor, host, com[0], com[1], com[2])
"""
Explanation: Prep host file for docking
Here we'll load the host and prepare it for docking; this takes a bit of time, as the host has to be prepared as a "receptor" to dock into
End of explanation
"""
# NBVAL_SKIP
#initialize omega for conformer generation
omega = oeomega.OEOmega()
omega.SetMaxConfs(100) #Generate up to 100 conformers since we'll use for docking
omega.SetIncludeInput(False)
omega.SetStrictStereo(True) #Refuse to generate conformers if stereochemistry not provided
#Initialize charge generation
chargeEngine = oequacpac.OEAM1BCCCharges()
# Initialize docking
dock = oedocking.OEDock()
dock.Initialize(receptor)
# Build OEMol from SMILES
# Generate new OEMol and parse SMILES
mol = oechem.OEMol()
oechem.OEParseSmiles( mol, guest_smiles)
# Set to use a simple neutral pH model
oequacpac.OESetNeutralpHModel(mol)
# Generate conformers with Omega; keep only best conformer
status = omega(mol)
if not status:
print("Error generating conformers for %s." % (guest_smiles))
#print(smi, name, mol.NumAtoms()) #Print debug info -- make sure we're getting protons added as we should
# Assign AM1-BCC charges
oequacpac.OEAssignCharges(mol, chargeEngine)
# Dock to host
dockedMol = oechem.OEGraphMol()
status = dock.DockMultiConformerMolecule(dockedMol, mol) #By default returns only top scoring pose
sdtag = oedocking.OEDockMethodGetName(oedocking.OEDockMethod_Chemgauss4)
oedocking.OESetSDScore(dockedMol, dock, sdtag)
dock.AnnotatePose(dockedMol)
# Write out docked pose if docking successful
if status == oedocking.OEDockingReturnCode_Success:
outmol = dockedMol
# Write out
tripos_mol2_filename = os.path.join(os.path.join(datadir, 'docked_guest.mol2'))
ofile = oechem.oemolostream( tripos_mol2_filename )
oechem.OEWriteMolecule( ofile, outmol)
ofile.close()
# Clean up residue names in mol2 files that are tleap-incompatible: replace substructure names with valid text.
infile = open( tripos_mol2_filename, 'r')
lines = infile.readlines()
infile.close()
newlines = [line.replace('<0>', 'GUEST') for line in lines]
outfile = open(tripos_mol2_filename, 'w')
outfile.writelines(newlines)
outfile.close()
else:
raise Exception("Error: Docking failed.")
"""
Explanation: Generate 3D structure of our guest and dock it
End of explanation
"""
# NBVAL_SKIP
# Import modules
import nglview
import mdtraj
# Load host structure ("trajectory")
traj = mdtraj.load(os.path.join(datadir, 'host.mol2'))
# Load guest structure
lig = mdtraj.load(os.path.join(tripos_mol2_filename))
# Figure out which atom indices correspond to the guest, for use in visualization
atoms_guest = [ traj.n_atoms+i for i in range(lig.n_atoms)]
# "Stack" host and guest Trajectory objects into a single object
complex = traj.stack(lig)
# Visualize
view = nglview.show_mdtraj(complex)
view.add_representation('spacefill', selection="all")
view.add_representation('spacefill', selection=atoms_guest, color='blue') #Adjust guest to show as blue for contrast
# The view command needs to be the last command issued to nglview
view
"""
Explanation: Visualize in 3D to make sure we placed the guest into the binding site
This is optional, but very helpful to make sure you're starting off with your guest in the binding site. To execute this you'll need nglview for visualization and mdtraj for working with trajectory files
End of explanation
"""
# NBVAL_SKIP
# Join OEMols into complex
complex = host.CreateCopy()
oechem.OEAddMols( complex, outmol)
print("Host+guest number of atoms %s" % complex.NumAtoms())
# Write out complex PDB file (won't really use it except as a template)
ostream = oechem.oemolostream( os.path.join(datadir, 'complex.pdb'))
oechem.OEWriteMolecule( ostream, complex)
ostream.close()
# Solvate the system using PDBFixer
# Loosely follows https://github.com/oess/openmm_orion/blob/master/ComplexPrepCubes/utils.py
fixer = PDBFixer( os.path.join(datadir, 'complex.pdb'))
# Convert between OpenEye and OpenMM Topology
omm_top, omm_pos = oeommutils.oemol_to_openmmTop(complex)
# Do it a second time to create a topology we can destroy
fixer_top, fixer_pos = oeommutils.oemol_to_openmmTop(complex)
chain_names = []
for chain in omm_top.chains():
chain_names.append(chain.id)
# Use correct topology, positions
#fixer.topology = copy.deepcopy(omm_top)
fixer.topology = fixer_top
fixer.positions = fixer_pos
# Solvate in 20 mM NaCl and water
fixer.addSolvent(padding=unit.Quantity( 1.0, unit.nanometers), ionicStrength=unit.Quantity( 20, unit.millimolar))
print("Number of atoms after applying PDBFixer: %s" % fixer.topology.getNumAtoms())
# The OpenMM topology produced by the solvation fixer has missing bond
# orders and aromaticity. So our next job is to update our existing OpenMM Topology by copying
# in just the water molecules and ions
# Atom dictionary between the PDBFixer topology and the water_ion topology
fixer_atom_to_wat_ion_atom = {}
# Loop over new topology and copy water molecules and ions into pre-existing topology
for chain in fixer.topology.chains():
if chain.id not in chain_names:
n_chain = omm_top.addChain(chain.id)
for res in chain.residues():
n_res = omm_top.addResidue(res.name, n_chain)
for at in res.atoms():
n_at = omm_top.addAtom(at.name, at.element, n_res)
fixer_atom_to_wat_ion_atom[at] = n_at
# Copy over any bonds needed
for bond in fixer.topology.bonds():
at0 = bond[0]
at1 = bond[1]
try:
omm_top.addBond(fixer_atom_to_wat_ion_atom[at0],
fixer_atom_to_wat_ion_atom[at1], type=None, order=1)
except KeyError:
# bond involves an atom from the original host/guest, already present in omm_top
pass
# Build new position array
omm_pos = omm_pos + fixer.positions[len(omm_pos):]
# Write file of solvated system for visualization purposes
PDBFile.writeFile(omm_top, omm_pos, open(os.path.join(datadir, 'complex_solvated.pdb'), 'w'))
"""
Explanation: Solvate complex
Next we're going to solvate the complex using PDBFixer -- a fairly basic tool, but one which should work. Before doing so, we need to combine the host and the guest into a single OEMol, and then in this case we'll write out a file containing this as a PDB for PDBFixer to read. However, we won't actually use the Topology from that PDB going forward, as the PDB will lose chemistry information we currently have in our OEMols (e.g. it can't retain charges, etc.). Instead, we'll obtain an OpenMM Topology by converting directly from OEChem using utility functionality in oeommtools, and we'll solvate THIS using PDBFixer. PDBFixer will still lose the relevant chemistry information, so we'll just copy any water/ions it added back into our original system.
End of explanation
"""
# NBVAL_SKIP
# Keep a list of OEMols of our components
oemols = []
# Build ions from SMILES strings
smiles = ['[Na+]', '[Cl-]']
for smi in smiles:
mol = oechem.OEMol()
oechem.OESmilesToMol(mol, smi)
# Make sure we have partial charges assigned for these (monatomic, so equal to formal charge)
for atom in mol.GetAtoms():
atom.SetPartialCharge(atom.GetFormalCharge())
oemols.append(mol)
# Build water reference molecule
mol = oechem.OEMol()
oechem.OESmilesToMol(mol, 'O')
oechem.OEAddExplicitHydrogens(mol)
oechem.OETriposAtomNames(mol)
oemols.append(mol)
# Add oemols of host and guest
oemols.append(host)
oemols.append(outmol)
"""
Explanation: Apply SMIRNOFF to set up the system for simulation with OpenMM
Next, we apply a SMIRNOFF force field (SMIRNOFF99Frosst) to the system to set it up for simulation with OpenMM (or writing out, via ParmEd, to formats for use in a variety of other simulation packages).
Prepping a system with SMIRNOFF takes basically three components:
- An OpenMM Topology for the system, which we have from above (coming out of PDBFixer)
- OEMol objects for the components of the system (here host, guest, water and ions)
- The force field XML files
Here, we do not yet have OEMol objects for the ions so our first step is to generate those, and combine it with the host and guest OEMols
Build a list of OEMols of all our components
End of explanation
"""
# NBVAL_SKIP
# Load force fields for small molecules (plus default ions), water, and (temporarily) hydrogen bonds.
# TODO add HBonds constraint through createSystem when openforcefield#32 is implemented, alleviating need for constraints here
ff = ForceField('test_forcefields/smirnoff99Frosst.offxml',
'test_forcefields/hbonds.offxml',
'test_forcefields/tip3p.offxml')
# Set up system
# This draws to some extent on Andrea Rizzi's code at https://github.com/MobleyLab/SMIRNOFF_paper_code/blob/master/scripts/create_input_files.py
system = ff.createSystem(fixer.topology, oemols, nonbondedMethod = PME, nonbondedCutoff=1.1*unit.nanometer, ewaldErrorTolerance=1e-4) #, constraints=smirnoff.HBonds)
# TODO add HBonds constraints here when openforcefield#32 is implemented.
# Fix switching function.
# TODO remove this when openforcefield#31 is fixed
for force in system.getForces():
if isinstance(force, openmm.NonbondedForce):
force.setUseSwitchingFunction(True)
force.setSwitchingDistance(1.0*unit.nanometer)
"""
Explanation: Load our force field and parameterize the system
This uses the SMIRNOFF ForceField class and SMIRNOFF XML files to parameterize the system.
End of explanation
"""
# NBVAL_SKIP
# Even though we're just going to minimize, we still have to set up an integrator, since a Simulation needs one
integrator = openmm.VerletIntegrator(2.0*unit.femtoseconds)
# Prep the Simulation using the parameterized system, the integrator, and the topology
simulation = app.Simulation(fixer.topology, system, integrator)
# Copy in the positions
simulation.context.setPositions( fixer.positions)
# Get initial state and energy; print
state = simulation.context.getState(getEnergy = True)
energy = state.getPotentialEnergy() / unit.kilocalories_per_mole
print("Energy before minimization (kcal/mol): %.2g" % energy)
# Minimize, get final state and energy and print
simulation.minimizeEnergy()
state = simulation.context.getState(getEnergy=True, getPositions=True)
energy = state.getPotentialEnergy() / unit.kilocalories_per_mole
print("Energy after minimization (kcal/mol): %.2g" % energy)
newpositions = state.getPositions()
"""
Explanation: Minimize and (very briefly) simulate our system
Here we will do an energy minimization, followed by a very very brief simulation. These are done in separate cells since OpenMM is quite slow on CPUs so you may not want to run the simulation on your computer if you are using a CPU.
Finalize prep and energy minimize
End of explanation
"""
# NBVAL_SKIP
# Set up NetCDF reporter for storing trajectory; prep for Langevin dynamics
from mdtraj.reporters import NetCDFReporter
integrator = openmm.LangevinIntegrator(300*unit.kelvin, 1./unit.picosecond, 2.*unit.femtoseconds)
# Prep Simulation
simulation = app.Simulation(fixer.topology, system, integrator)
# Copy in minimized positions
simulation.context.setPositions(newpositions)
# Initialize velocities to correct temperature
simulation.context.setVelocitiesToTemperature(300*unit.kelvin)
# Set up to write trajectory file to NetCDF file in data directory every 100 frames
netcdf_reporter = NetCDFReporter(os.path.join(datadir, 'trajectory.nc'), 100) #Store every 100 frames
# Initialize reporters, including a CSV file to store certain stats every 100 frames
simulation.reporters.append(netcdf_reporter)
simulation.reporters.append(app.StateDataReporter(os.path.join(datadir, 'data.csv'), 100, step=True, potentialEnergy=True, temperature=True, density=True))
# Run the simulation and print start info; store timing
print("Starting simulation")
start = time.perf_counter() # time.clock() was removed in Python 3.8
simulation.step(1000) #1000 steps of dynamics
end = time.perf_counter()
# Print elapsed time info, finalize trajectory file
print("Elapsed time %.2f seconds" % (end-start))
netcdf_reporter.close()
print("Done!")
# NBVAL_SKIP
# Load stored trajectory using MDTraj; the trajectory doesn't contain chemistry info so we also load a PDB
traj= mdtraj.load(os.path.join(datadir, 'trajectory.nc'), top=os.path.join(datadir, 'complex_solvated.pdb'))
#Recenter/impose periodicity to the system
anchor = traj.top.guess_anchor_molecules()[0]
imgd = traj.image_molecules(anchor_molecules=[anchor])
traj.center_coordinates()
# View the trajectory
view = nglview.show_mdtraj(traj)
# I haven't totally figured out nglview's selection language for our purposes here, so I'm just showing two residues
# which seems (in this case) to include the host and guest plus an ion (?).
view.add_licorice('1-2')
view
# NBVAL_SKIP
# Save centered trajectory for viewing elsewhere
traj.save_netcdf(os.path.join(datadir, 'trajectory_centered.nc'))
"""
Explanation: Run an MD simulation of a few steps, storing a trajectory for visualization
End of explanation
"""
|
CGATOxford/CGATPipelines | CGATPipelines/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_peakstats.ipynb | mit | import sqlite3
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
#import CGATPipelines.Pipeline as P
import os
import statistics
#import collections
#load R and the R packages required
#%load_ext rpy2.ipython
#%R require(ggplot2)
# use these functions to display tables nicely as html
from IPython.display import display, HTML
plt.style.use('bmh')
#plt.style.available
"""
Explanation: Peakcalling Peak Stats
This notebook is for the analysis of outputs from the peakcalling pipeline relating to the quality of the peakcalling steps.
There are several stats that you want collected and graphed (topics covered in this notebook in bold).
These are:
Number of peaks called in each sample
Size distribution of the peaks
Number of reads in peaks
Location of peaks
correlation of peaks between samples
other things?
IDR stats
What peak lists are the best
This notebook takes the sqlite3 database created by CGAT peakcalling_pipeline.py and uses it for plotting the above statistics
It assumes a file directory of:
location of database = project_folder/csvdb
location of this notebook = project_folder/notebooks.dir/
Firstly lets load all the things that might be needed
End of explanation
"""
!pwd
!date
"""
Explanation: This is where we are and when the notebook was run
End of explanation
"""
database_path = '../csvdb'
output_path = '.'
#database_path= "/ifs/projects/charlotteg/pipeline_peakcalling/csvdb"
"""
Explanation: First, let's set the output path where we want our plots to be saved, set the database path, and see what tables the database contains
End of explanation
"""
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
"""
Explanation: This code allows you to show/hide the code in the HTML version
End of explanation
"""
def getTableNamesFromDB(database_path):
# Create a SQL connection to our SQLite database
con = sqlite3.connect(database_path)
cur = con.cursor()
# the result of a "cursor.execute" can be iterated over by row
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;")
available_tables = (cur.fetchall())
#Be sure to close the connection.
con.close()
return available_tables
db_tables = getTableNamesFromDB(database_path)
print('Tables contained by the database:')
for x in db_tables:
print('\t\t%s' % x[0])
#This function retrieves a table from sql database and indexes it with track name
def getTableFromDB(statement,database_path):
'''gets table from sql database depending on statement
and set track as index if contains track in column names'''
conn = sqlite3.connect(database_path)
df = pd.read_sql_query(statement,conn)
if 'track' in df.columns:
df.index = df['track']
return df
"""
Explanation: The code below provides functions for accessing the project database and extracting table names, so you can see what tables have been loaded into the database and are available for plotting. It also has a function for getting a table from the database and indexing it by track name.
End of explanation
"""
design_df= getTableFromDB('select * from design;',database_path)
design_df
"""
Explanation: Design of Experiment
First, let's check out the experimental design - this is specified in the design_file.tsv that is used to run the pipeline.
1) Let's get the table from the database
End of explanation
"""
peakcalling_design_df= getTableFromDB('select * from peakcalling_bams_and_inputs;',database_path)
print ('''peakcalling_bams_and_inputs table used to generate the peakcalling statement:
ChIPBams = the file you want to call peaks in e.g. ChIP or ATAC-Seq sample.
InputBam = the sample used as the control in peakcalling. In ChIP-Seq this would be your input control\n''')
peakcalling_design_df
"""
Explanation: Now let's double-check which files peakcalling was performed on and whether they were paired with an input file. The input file is used in peakcalling to control for background noise. If the bamControl column contains 'None', then an input control was not used for peakcalling.
Let's also double-check this in the 'peakcalling_bams_and_inputs' table that is used to generate the peakcalling statement:
End of explanation
"""
insert_df = getTableFromDB('select * from insert_sizes;',database_path)
"""
Explanation: Check that the files are matched up correctly - if they are not, there is a bug in the peakcalling section of the pipeline.
Now let's look at the insert sizes that are calculated by macs2 (for PE samples) or bamtools (SE reads).
End of explanation
"""
peakcalling_frags_df = getTableFromDB('select * from post_filtering_check;',database_path)
peakcalling_frags_df = peakcalling_frags_df[['Input_Filename','total_reads']].copy()
peakcalling_frags_df['total_fragments'] = peakcalling_frags_df['total_reads'].divide(2)
peakcalling_frags_df
"""
Explanation: Let's also have a quick look at the number of reads and the number of fragments in our samples
End of explanation
"""
peakcalling_summary_df= getTableFromDB('select * from peakcalling_summary;',database_path)
peakcalling_summary_df.rename(columns={'sample':'track'},inplace=True)
peakcalling_summary_df.index = peakcalling_summary_df['track']
peakcalling_summary_df.T
"""
Explanation: Now let's look at the peakcalling_summary table, which summarizes the number of fragments and the number of peaks called for each file
End of explanation
"""
ax =peakcalling_summary_df[['number_of_peaks','fragment_treatment_total']].divide(1000).plot.scatter(x='fragment_treatment_total',
y='number_of_peaks')
ax.set_xlabel('number of PE fragments')
ax.set_title('correlation of number of fragments \n& number of peaks')
#ax.set_ylim((50000,160000))
#ax.set_xlim((20000000,70000000))
"""
Explanation: Is there any correlation between the number of peaks and the number of fragments? Let's plot this. Can you see any saturation, where an increase in fragment number does not result in any further gain in peak number?
End of explanation
"""
#greenleaf_data = pd.read_csv('/Users/charlotteg/Documents/7_BassonProj/Mar17/allelic-atac-seq.csv')
#greenleaf_data.drop([0],inplace=True)
#greenleaf_data['total usable reads'] = greenleaf_data['total usable reads'] / 2
#ax = greenleaf_data.plot.scatter(x='total usable reads', y='# of allelic informative(AI) peaks (>=10 reads)')
#ax.set_ylim((50000,160000))
#ax.set_xlim((20000000,70000000))
#greenleaf_data
#factor between number of reads and number of peaks
"""
Explanation: The commented-out code below loads a published dataset you can compare against if you want to
End of explanation
"""
df = peakcalling_summary_df[['number_of_peaks','fragment_treatment_total']].copy()
df['frag_to_peaks'] = peakcalling_summary_df.fragment_treatment_total / peakcalling_summary_df.number_of_peaks
df
"""
Explanation: Now let's just look at the number of peaks called
End of explanation
"""
peakcalling_summary_df['number_of_peaks'].plot(kind='bar')
"""
Explanation: Plot a bar graph of the number of peaks
End of explanation
"""
|
hetaodie/hetaodie.github.io | assets/media/uda-ml/deep/shensd/IMDB数据/.ipynb_checkpoints/IMDB_In_Keras-zh-checkpoint.ipynb | mit | # Imports
import numpy as np
import keras
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
"""
Explanation: Analyzing IMDB movie data with Keras
End of explanation
"""
# Loading the data (it's preloaded in Keras)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)
print(x_train.shape)
print(x_test.shape)
"""
Explanation: 1. Loading the data
This dataset comes preloaded with Keras, so a single simple command gives us the training and test data. There is a parameter controlling how many words to consider. We have set it to 1000, but feel free to try other values.
End of explanation
"""
print(x_train[0])
print(y_train[0])
"""
Explanation: 2. Examining the data
Notice that the data has been preprocessed: every word is encoded as a number, and each review appears as a vector of the words it contains. For example, if the word 'the' is the first word in our dictionary and a review contains the word 'the', then there is a 1 in the corresponding vector.
The output is a vector of 1s and 0s, where 1 indicates a positive review and 0 a negative one.
End of explanation
"""
# One-hot encoding the input into vector mode, each of length 1000
tokenizer = Tokenizer(num_words=1000)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
print(x_train[0])
"""
Explanation: 3. One-hot encoding the input
Here we turn each input vector into a (0,1)-vector. For example, if the preprocessed vector contains the number 14, then the 14th entry of the processed vector will be 1.
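The encoding can be illustrated in plain Python (a toy sketch, not the Keras tokenizer itself): a review is a list of word indices, and the processed form is a binary vector marking which indices occur.

```python
# Toy illustration: turn a list of word indices into a fixed-length
# binary "bag of words" vector, as sequences_to_matrix(mode='binary') does.
num_words = 10  # in the notebook this is 1000

def to_binary_vector(review, num_words):
    vec = [0] * num_words
    for idx in review:
        if idx < num_words:   # indices beyond the vocabulary are dropped
            vec[idx] = 1
    return vec

review = [1, 4, 4, 7]  # word 4 appears twice, but the vector only records presence
print(to_binary_vector(review, num_words))  # [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
```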
End of explanation
"""
# One-hot encoding the output
num_classes = 2
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
print(y_test.shape)
"""
Explanation: We will also one-hot encode the output.
End of explanation
"""
# TODO: Build the model architecture
# TODO: Compile the model using a loss function and an optimizer.
"""
Explanation: 4. Building the model
Build the model here using Sequential. Feel free to experiment with different layers and sizes! You can also try adding dropout layers to reduce overfitting.
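One possible sketch of such a model — the layer sizes, dropout rate, and optimizer here are illustrative assumptions, not the canonical solution:

```python
# Illustrative architecture only -- sizes, dropout rate and optimizer are assumptions
from keras.models import Sequential
from keras.layers import Dense, Dropout

num_classes = 2
model = Sequential()
model.add(Dense(512, activation='relu', input_dim=1000))  # 1000-dim one-hot inputs
model.add(Dropout(0.5))                                   # dropout against overfitting
model.add(Dense(num_classes, activation='softmax'))       # positive / negative

# Compile the model using a loss function and an optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.summary()
```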
End of explanation
"""
# TODO: Run the model. Feel free to experiment with different batch sizes and number of epochs.
"""
Explanation: 5. Training the model
Run the model. Feel free to experiment with different batch sizes and numbers of epochs!
End of explanation
"""
score = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: ", score[1])
"""
Explanation: 6. Evaluating the model
You can evaluate the model on the test set, which gives you its accuracy. Can you get a result above 85%?
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td2a_ml/ml_ccc_machine_learning_interpretabilite.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
# Fix an incompatibility between scipy 1.0 and statsmodels 0.8.
from pymyinstall.fix import fix_scipy10_for_statsmodels08
fix_scipy10_for_statsmodels08()
"""
Explanation: 2A.ml - Interpretability and variable correlations
The more coefficients a machine learning model contains, the harder its decisions are to interpret. How can we get around this obstacle and understand what the model has learned? The notion of feature importance.
End of explanation
"""
import numpy
import statsmodels.api as smapi
nsample = 100
x = numpy.linspace(0, 10, 100)
X = numpy.column_stack((x, x**2 - x))
beta = numpy.array([1, 0.1, 10])
e = numpy.random.normal(size=nsample)
X = smapi.add_constant(X)
y = X @ beta + e
model = smapi.OLS(y, X)
results = model.fit()
results.summary()
"""
Explanation: Linear models
Linear models are the easiest models to interpret. At equal performance, always choose the simplest model. The scikit-learn module does not provide the standard analysis tools for linear models (nullity tests, eigenvalues); use statsmodels to obtain this information.
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
Y = iris.target
from sklearn.tree import DecisionTreeClassifier
clf2 = DecisionTreeClassifier(max_depth=3)
clf2.fit(X, Y)
Yp2 = clf2.predict(X)
from sklearn.tree import export_graphviz
export_graphviz(clf2, out_file="arbre.dot")
import os
cwd = os.getcwd()
from pyquickhelper.helpgen import find_graphviz_dot
dot = find_graphviz_dot()
os.system ("\"{1}\" -Tpng {0}\\arbre.dot -o {0}\\arbre.png".format(cwd, dot))
from IPython.display import Image
Image("arbre.png")
from treeinterpreter import treeinterpreter
pred, bias, contrib = treeinterpreter.predict(clf2, X[106:107,:])
X[106:107,:]
pred
bias
contrib
"""
Explanation: Trees
Reading
treeinterpreter
Making Tree Ensembles Interpretable: this article proposes simplifying a random forest by approximating its output with a weighted sum of simpler trees.
Understanding variable importances in forests of randomized trees: this article explains more formally how the feature_importances_ terms computed by scikit-learn are obtained for each tree and for forests of trees (see also Random Forests, Leo Breiman and Adele Cutler).
Module treeinterpreter
End of explanation
"""
clf2.feature_importances_
"""
Explanation: pred is identical to what scikit-learn's predict method returns. bias is the proportion of each class. contrib gives the contribution of each variable to each class. Let $X=(x_1, ..., x_n)$ be an observation.
$$P(X \in class(i)) = bias(i) + \sum_k contrib(x_k, i)$$
The code is fairly easy to read and makes it possible to understand what the function $contrib$ computes.
Exercise 1: describe the contrib function
Reading Understanding variable importances in forests of randomized trees should help you.
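A minimal sketch of how such a decomposition can be computed — a hard-coded toy tree, not the actual treeinterpreter code: walk the observation's path through the tree and credit each change in the node's class-probability vector to the feature used by the split.

```python
# Tiny hard-coded tree over 2 features; each node stores the class
# distribution of the training samples that reach it (an assumption for illustration).
tree = {
    "value": [0.5, 0.5],            # root: 50/50 between the two classes
    "feature": 0, "threshold": 2.0,
    "left":  {"value": [0.9, 0.1],  # x0 <= 2.0
              "feature": 1, "threshold": 1.0,
              "left":  {"value": [1.0, 0.0]},
              "right": {"value": [0.6, 0.4]}},
    "right": {"value": [0.2, 0.8]},  # x0 > 2.0, leaf
}

def predict_with_contrib(node, x, n_features=2):
    """Return (prediction, bias, contrib) for one observation x."""
    bias = list(node["value"])                    # class proportions at the root
    contrib = [[0.0, 0.0] for _ in range(n_features)]
    while "feature" in node:                      # descend until a leaf
        child = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        for c in range(2):                        # credit the probability change
            contrib[node["feature"]][c] += child["value"][c] - node["value"][c]
        node = child
    return node["value"], bias, contrib

pred, bias, contrib = predict_with_contrib(tree, [1.0, 3.0])
# The decomposition holds: pred[c] == bias[c] + sum over features k of contrib[k][c]
for c in range(2):
    assert abs(bias[c] + sum(contrib[k][c] for k in range(2)) - pred[c]) < 1e-12
print(pred, bias, contrib)
```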
End of explanation
"""
import numpy
import statsmodels.api as smapi
nsample = 100
x = numpy.linspace(0, 10, 100)
X = numpy.column_stack((x, (x-5)**2, (x-5)**2)) # add the same variable twice
beta = numpy.array([1, 0.1, 2, 8])
e = numpy.random.normal(size=nsample)
X = smapi.add_constant(X)
y = X @ beta + e
import pandas
pandas.DataFrame(numpy.corrcoef(X.T))
model = smapi.OLS(y, X)
results = model.fit()
results.summary()
"""
Explanation: Exercise 2: implement the algorithm
Described in Making Tree Ensembles Interpretable
Interpretation and correlation
Linear models
Linear models do not like correlated variables. In the following example, the variables $X_2, X_3$ are identical. The regression cannot recover the coefficients of the initial model (2 and 8).
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:,:2]
Y = iris.target
from sklearn.tree import DecisionTreeClassifier
clf1 = DecisionTreeClassifier(max_depth=3)
clf1.fit(X, Y)
clf1.feature_importances_
"""
Explanation: Trees
Decision trees do not like correlated variables any better.
End of explanation
"""
import numpy
X2 = numpy.hstack([X, numpy.ones((X.shape[0], 1))])
X2[:,2] = X2[:,0]
clf2 = DecisionTreeClassifier(max_depth=3)
clf2.fit(X2, Y)
clf2.feature_importances_
"""
Explanation: We copy the variable $X_1$ again.
End of explanation
"""
|
pagutierrez/tutorial-sklearn | notebooks-spanish/14-complejidad_modelos_busqueda_grid.ipynb | cc0-1.0 | from sklearn.model_selection import cross_val_score, KFold
from sklearn.neighbors import KNeighborsRegressor
# Generate a synthetic dataset:
x = np.linspace(-3, 3, 100)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.normal(size=len(x))
X = x[:, np.newaxis]
cv = KFold(shuffle=True)
# For each parameter value, repeat a cross-validation
for n_neighbors in [1, 3, 5, 10, 20]:
scores = cross_val_score(KNeighborsRegressor(n_neighbors=n_neighbors), X, y, cv=cv)
print("n_neighbors: %d, mean score: %f" % (n_neighbors, np.mean(scores)))
"""
Explanation: Parameter selection, validation, and testing
Many algorithms have associated parameters that influence the complexity of the model they can learn. Remember when we used KNeighborsRegressor: if we change the number of neighbors to consider, we get progressively smoother predictions:
<img src="figures/plot_kneigbors_regularization.png" width="100%">
In the figure above we can see fits for three different values of n_neighbors. With n_neighbors=2, the data is overfit: the model is very flexible and fits the noise present in the dataset too well. With n_neighbors=20, the model is not flexible enough and cannot capture the variation in the data.
In the middle panel we have found a good compromise, n_neighbors=5. It fits the data quite well and suffers from neither overfitting nor underfitting. We would like a quantitative method to identify both overfitting and underfitting, and to optimize the hyperparameters (in this case, the number of neighbors) to obtain the best results.
We are trying to strike a balance between memorizing the particularities (and noise) of the training data and modeling enough of their variability. This balance must be found for every machine learning algorithm and is a central concept, known as the bias-variance trade-off or "overfitting vs. underfitting".
<img src="figures/overfitting_underfitting_cartoon.svg" width="100%">
Hyperparameters, overfitting, and underfitting
Unfortunately, there is no general rule for reaching this sweet spot, so the user must find the best possible balance between model complexity and generalization by trying different options for the hyperparameters. Hyperparameters are the parameters we can tune on a machine learning algorithm (the algorithm, in turn, fits the model parameters from the training data, hence the "hyper"). The number of neighbors $k$ of the kNN algorithm is a hyperparameter.
Often this hyperparameter tuning is done by brute-force search, for example trying several values of n_neighbors:
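The brute-force idea can be sketched without any ML library: enumerate every combination of hyperparameter values and keep the best-scoring one. Here a toy scoring function stands in for the cross-validated score, and the parameter names are hypothetical:

```python
from itertools import product

# Hypothetical stand-in for a cross-validated score of one parameter combination
def mean_cv_score(n_neighbors, weight):
    # toy function with its best value at n_neighbors=5, weight='distance'
    return -abs(n_neighbors - 5) + (1 if weight == 'distance' else 0)

param_grid = {'n_neighbors': [1, 3, 5, 10, 20], 'weight': ['uniform', 'distance']}

best_score, best_params = float('-inf'), None
for n, w in product(param_grid['n_neighbors'], param_grid['weight']):
    score = mean_cv_score(n, w)
    if score > best_score:
        best_score, best_params = score, {'n_neighbors': n, 'weight': w}

print(best_params, best_score)  # {'n_neighbors': 5, 'weight': 'distance'} 1
```

This is exactly the loop that GridSearchCV automates later in the notebook, with real cross-validation as the scoring function.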
End of explanation
"""
from sklearn.model_selection import validation_curve
n_neighbors = [1, 3, 5, 10, 20, 50]
train_scores, test_scores = validation_curve(KNeighborsRegressor(), X, y, param_name="n_neighbors",
param_range=n_neighbors, cv=cv)
plt.plot(n_neighbors, train_scores.mean(axis=1), 'b', label="training score")
plt.plot(n_neighbors, test_scores.mean(axis=1), 'g', label="test score")
plt.ylabel('Score')
plt.xlabel('Number of neighbors')
plt.xlim([50, 0])
plt.legend(loc="best");
"""
Explanation: There is a function in scikit-learn, called validation_curve, that produces a figure similar to the one we saw earlier. It plots one parameter, such as the number of neighbors, against the training and validation errors (using cross-validation):
End of explanation
"""
from sklearn.model_selection import cross_val_score, KFold
from sklearn.svm import SVR
# Run cross-validation for each combination of parameters:
for C in [0.001, 0.01, 0.1, 1, 10]:
for gamma in [0.001, 0.01, 0.1, 1]:
scores = cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=cv)
print("C: %f, gamma: %f, mean R^2: %f" % (C, gamma, np.mean(scores)))
"""
Explanation: <div class="alert alert-warning">
Note that more neighbors result in a smoother or simpler model, so the X axis is drawn inverted.
</div>
If more than one parameter is important, such as the parameters C and gamma of a support vector machine (SVM) (which we will discuss later), all possible combinations of parameters are tried:
End of explanation
"""
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv, verbose=3)
"""
Explanation: Since this is something done frequently in machine learning, there is a class already implemented in scikit-learn, GridSearchCV. GridSearchCV takes a dictionary describing the parameters that should be tried and a model to train.
The parameter grid is defined as a dictionary, where the keys are the parameters and the values are the settings to try.
End of explanation
"""
grid.fit(X, y)
"""
Explanation: One of the interesting things about GridSearchCV is that it is a meta-estimator. It takes an estimator like SVR and creates a new estimator that behaves exactly like SVR, so we can call fit to train it:
End of explanation
"""
grid.predict(X)
"""
Explanation: GridSearchCV runs a somewhat more involved process than the one seen above. First, it runs the same cross-validation loop to find the best parameter combination. Once it has the best combination, it runs fit again on all the data passed to it (without cross-validation), to build a new model with the optimal parameters found earlier.
Then, using the predict or score methods, we can make new predictions:
End of explanation
"""
print(grid.best_score_)
print(grid.best_params_)
"""
Explanation: You can inspect the best parameters found by GridSearchCV in its best_params_ attribute and the corresponding score in its best_score_ attribute:
End of explanation
"""
type(grid.cv_results_)
print(grid.cv_results_.keys())
import pandas as pd
cv_results = pd.DataFrame(grid.cv_results_)
cv_results.head()
cv_results_tiny = cv_results[['param_C', 'param_gamma', 'mean_test_score']]
cv_results_tiny.sort_values(by='mean_test_score', ascending=False).head()
"""
Explanation: But you can investigate the performance of each parameter combination (and more) in greater depth by accessing the cv_results_ attribute. cv_results_ is a dictionary where every key is a string and every value is an array. It can therefore be used to build a pandas DataFrame.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
cv = KFold(n_splits=10, shuffle=True)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=cv)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
"""
Explanation: However, there is a problem with using this score for evaluation. You may be running into what is called a multiple-hypothesis-testing error. If you have many parameter combinations, some of them may perform better just by chance, and the score you obtain may not hold up on new data. Therefore, it is generally a good idea to do a train/test split before the grid search. This pattern is often called a training, validation and test split, and it is quite common in machine learning:
<img src="figures/grid_search_cross_validation.svg" width="100%">
We can easily emulate this process by first splitting the data with train_test_split, applying GridSearchCV to the training set, and computing the score only on the test set:
End of explanation
"""
grid.best_params_
"""
Explanation: We can check the parameters found once more with:
End of explanation
"""
from sklearn.model_selection import train_test_split, ShuffleSplit
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10], 'gamma': [0.001, 0.01, 0.1, 1]}
single_split_cv = ShuffleSplit(n_splits=1)
grid = GridSearchCV(SVR(), param_grid=param_grid, cv=single_split_cv, verbose=3)
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
"""
Explanation: Sometimes a simpler scheme is used, which splits the data into three subsets: training, validation and test. This is an alternative if your dataset is very large, or if it is impossible to train many models with cross-validation because training each model is computationally very expensive. To do this kind of split, we would split the data with train_test_split and then apply GridSearchCV with a ShuffleSplit and a single iteration:
<img src="figures/train_validation_test2.svg" width="100%">
End of explanation
"""
clf = GridSearchCV(SVR(), param_grid=param_grid)
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
"""
Explanation: This is much faster, but it can result in worse hyperparameter values and, therefore, worse results.
End of explanation
"""
|
Zhenxingzhang/AnalyticsVidhya | Articles/12_Useful_Pandas_Techniques_in_Python_for_Data_Manipulation/PythonTipsNTricks.ipynb | apache-2.0 | import pandas as pd
import numpy as np
data = pd.read_csv("train.csv", index_col="Loan_ID")
# test = pd.read_csv("test.csv", index_col="PassengerID")
print data.shape
data.columns
"""
Explanation: 12 Useful Pandas Techniques to add to your Arsenal!!
Introduction
If you are reading this article, I'm sure you love statistical modeling (or will start loving it if you're a beginner)! Not sure about you, but during my initial days with machine learning, I was so anxious to build a model that I didn't even take the pains of opening my data file before running my first model. As expected, this almost never worked out and my compiler started throwing errors. On backtracking, I found the most common reasons: missing data, incorrect coding of values, nominal variables being treated as numeric because their finite values are integers, etc.
Data exploration and feature engineering are crucial for successfully deriving actionable insights from data. In this article, I will share some tips and tricks for doing this easily using Pandas in Python. Python is gaining popularity and Pandas is one of the most popular tools for data handling in Python.
Note: We'll be using the dataset of the "Loan Prediction" problem from the Analytics Vidhya datahacks. Click here to access the dataset and problem statement. The data can be loaded into Python using the command:
End of explanation
"""
data.loc[(data["Gender"]=="Female") & (data["Education"]=="Not Graduate") & (data["Loan_Status"]=="Y"), ["Gender","Education","Loan_Status"]]
"""
Explanation: 1. Boolean Indexing
Useful for indexing one set of columns based on conditions over another set of columns. For instance, we can get a list of all women who are not graduates and got a loan using the following code.
End of explanation
"""
#Check current type:
data.dtypes
"""
Explanation: More: http://pandas.pydata.org/pandas-docs/stable/indexing.html
2. Iterating over rows of a dataframe
Although rarely used, at times you may need to iterate through all rows. One common problem we face is that nominal variables with numeric categories are treated as numerical by default by Python. Also, there might be a numeric variable where some characters were entered in one of the rows, and it will be considered categorical by default. So it's generally a good idea to manually define the column types.
End of explanation
"""
#Load the file:
colTypes = pd.read_csv('datatypes.csv')
print colTypes
"""
Explanation: Here we see that Credit_History is a nominal variable but appearing as float. A good way to tackle this issue is to create a csv file with column names and types. This way we can make a generic function to read the file and assign column data types. For instance, in this case I've defined a csv file datatypes.csv (download).
End of explanation
"""
#Iterate through each row and assign variable type.
# Note: astype is used to assign types
for i, row in colTypes.iterrows(): #i: dataframe index; row: each row in series format
    if row['type']=="categorical":
        data[row['feature']]=data[row['feature']].astype(np.object)
    elif row['type']=="continuous":
        data[row['feature']]=data[row['feature']].astype(np.float)
print data.dtypes
"""
Explanation: On loading this file, we can iterate through each row and assign the datatype from column 'type' to the variable name defined in 'feature' column.
End of explanation
"""
#Create a new function:
def num_missing(x):
return sum(x.isnull())
#Applying per column:
print "Missing values per column:"
print data.apply(num_missing, axis=0) #axis=0 defines that function is to be applied on each column
#Applying per row:
print "\nMissing values per row:"
print data.apply(num_missing, axis=1).head() #axis=1 defines that function is to be applied on each row
"""
Explanation: Now the credit history column is modified to 'object' type which is used for representing nominal variables in Pandas.
More: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iterrows.html#pandas.DataFrame.iterrows
3. Apply Function
It is one of the handiest functions when it comes to playing with data, especially for creating new variables. As the name suggests, apply returns some value after passing each row/column of a dataframe through some function. The function can be either built-in or user-defined.
For instance, here it can be used to find the #missing values in each row and column.
End of explanation
"""
#First we import a function to determine the mode
from scipy.stats import mode
mode(data['Gender'])
"""
Explanation: Thus we get the desired result. Note: head() function is used in second output because it contains many rows.
More: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html#pandas.DataFrame.apply
4. Imputing missing files
'fillna()' is a very useful function, handy for implementing basic forms of imputation such as updating missing values with the overall mean/mode/median of the column. Let's impute the 'Gender', 'Married' and 'Self_Employed' columns with their respective modes.
End of explanation
"""
mode(data['Gender']).mode[0]
"""
Explanation: This returns both the mode and the count. Remember that the mode can be an array, as there can be multiple values with high frequency. We will always take the first one by default, using:
End of explanation
"""
#Impute the values:
data['Gender'].fillna(mode(data['Gender']).mode[0], inplace=True)
data['Married'].fillna(mode(data['Married']).mode[0], inplace=True)
data['Self_Employed'].fillna(mode(data['Self_Employed']).mode[0], inplace=True)
#Now check the #missing values again to confirm:
print data.apply(num_missing, axis=0)
"""
Explanation: Now we can fill the missing values and check using technique #3.
End of explanation
"""
#Determine pivot table
impute_grps = data.pivot_table(values=["LoanAmount"], index=["Gender","Married","Self_Employed"], aggfunc=np.mean)
print impute_grps
"""
Explanation: Hence confirmed the missing values are imputed. Note: This is the most primitive forms of imputation. Other sophisticated techniques include modeling the missing values using some variables or using grouped averages (mean/mode/median). We'll take up the later next.
More: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html#pandas.DataFrame.fillna
5. Pivot Table
Pandas can be used to create MS Excel style pivot tables. For instance, in this case a key column is "LoanAmount" which has missing values. We can impute it using the mean amount of each 'Gender', 'Married' and 'Self_Employed' group. The mean 'LoanAmount' of each group can be determined as:
End of explanation
"""
#iterate only through rows with missing LoanAmount
for i,row in data.loc[data['LoanAmount'].isnull(),:].iterrows():
ind = tuple([row['Gender'],row['Married'],row['Self_Employed']])
data.loc[i,'LoanAmount'] = impute_grps.loc[ind].values[0]
#Now check the #missing values again to confirm:
print data.apply(num_missing, axis=0)
"""
Explanation: More: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot_table.html#pandas.DataFrame.pivot_table
6. MultiIndexing
If you notice the output of technique #5, it has a peculiar property: each index is made up of a combination of 3 values! This is called MultiIndexing, and it can be used to perform some operations really fast if used wisely. Continuing the example from #5, we have the values for each group, but now they have to be imputed.
This can be done using the various techniques learned so far.
End of explanation
"""
pd.crosstab(data["Credit_History"],data["Loan_Status"],margins=True)
"""
Explanation: Note:
1. Multi-indexing requires a tuple for defining groups of indices in the loc statement. Here a tuple is used inside the function.
2. The .values[0] suffix is required because, by default, a Series element is returned whose index does not match that of the dataframe. So direct assignment gives an error.
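The .values[0] point above can be seen with a tiny self-contained MultiIndex example (the group labels and numbers here are made up for illustration, independent of the loan data):

```python
import pandas as pd

grps = pd.DataFrame(
    {'LoanAmount': [110.0, 130.0]},
    index=pd.MultiIndex.from_tuples([('Male', 'Yes'), ('Female', 'No')],
                                    names=['Gender', 'Married']),
)
row = grps.loc[('Male', 'Yes')]   # selecting with a full tuple returns a Series
print(type(row).__name__)         # Series
print(row.values[0])              # 110.0 -- the plain scalar we can assign
```

Selecting with a full index tuple returns the row as a Series, so .values[0] is what extracts the scalar for assignment.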
7. Crosstab
This function can be used to get an initial "feel" of the data. We can validate some basic hypotheses. For instance, in this case "Credit_History" is expected to affect the loan status significantly. This can be tested using a cross-tabulation as:
End of explanation
"""
def percConvert(ser):
return ser/float(ser[-1])
pd.crosstab(data["Credit_History"],data["Loan_Status"],margins=True).apply(percConvert, axis=1)
"""
Explanation: These are absolute numbers but percentages can be more intuitive in making some quick insights. We can do this using the apply function:
End of explanation
"""
prop_rates = pd.DataFrame([1000, 5000, 12000], index=['Rural','Semiurban','Urban'],columns=['rates'])
prop_rates
"""
Explanation: Now it is clearly evident that people with a credit history have much higher chances of getting a loan, as 80% of the people with a credit history got a loan compared to only 9% of those without one.
But that's not it. It tells an interesting story. Since I know that having a credit history is super important, what if I predict the loan status to be Y for applicants with a credit history and N otherwise? Surprisingly, we'll be right 82+378=460 times out of 614, which is a whopping 75%! I won't blame you if you're wondering why the hell we need statistical models. But trust me, increasing the accuracy by even 0.001% beyond this is a challenging task.
Note: 75% is on the train set. The test set will be slightly different but close.
Also, I hope this gives some intuition into why even a 0.05% increase in accuracy can result in a jump of 500 ranks on a Kaggle leaderboard.
More: http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.crosstab.html
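The 75% baseline quoted above can be checked with a quick calculation (a sketch; the counts 82 and 378 come from the crosstab shown earlier):

```python
# Baseline rule: predict Y when a credit history is present, N otherwise.
# Correct predictions from the crosstab: 378 (Y with history) + 82 (N without).
correct = 82 + 378
total = 614
baseline_accuracy = correct / float(total)
print("Baseline accuracy: %.1f%%" % (baseline_accuracy * 100))  # ~74.9%
```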
8. Merge DataFrames
Merging dataframes becomes essential when we have information coming from different sources which is to be collated. Consider a hypothetical case where the average property rates (INR per sq meters) is available for different property types. Let's define a dataframe as:
End of explanation
"""
data_merged = data.merge(right=prop_rates, how='inner',left_on='Property_Area',right_index=True, sort=False)
data_merged.pivot_table(values='Credit_History',index=['Property_Area','rates'], aggfunc=len)
"""
Explanation: Now we can merge this information with the original dataframe as:
End of explanation
"""
data_sorted = data.sort_values(['ApplicantIncome','CoapplicantIncome'], ascending=False)
data_sorted[['ApplicantIncome','CoapplicantIncome']].head(10)
"""
Explanation: The pivot table validates a successful merge operation. Note that the 'values' argument is irrelevant here because we are simply counting the values.
More: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html#pandas.DataFrame.merge
9. Sorting DataFrames
Pandas allows easy sorting based on multiple columns. This can be done as:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
data.boxplot(column="ApplicantIncome",by="Loan_Status")
data.hist(column="ApplicantIncome",by="Loan_Status",bins=30)
"""
Explanation: Note: The Pandas "sort" function is now depricated and we should use "sort_values".
More: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html#pandas.DataFrame.sort_values
10. Plotting (Boxplot & Histogram)
Many of you might be unaware that boxplots and histograms can be plotted directly in Pandas, and that calling matplotlib separately is not necessary. It's just a one-line command. For instance, if we want to compare the distribution of ApplicantIncome by Loan_Status:
End of explanation
"""
#Binning:
def binning(col, cut_points, labels=None):
#Define min and max values:
minval = col.min()
maxval = col.max()
#create list by adding min and max to cut_points
break_points = [minval] + cut_points + [maxval]
#if no labels provided, use default labels 0 ... (n-1)
if not labels:
labels = range(len(cut_points)+1)
#Binning using cut function of pandas
colBin = pd.cut(col,bins=break_points,labels=labels,include_lowest=True)
return colBin
#Binning LoanAmount:
cut_points = [90,140,190]
labels = ["low","medium","high","very high"]
data["LoanAmount_Bin"] = binning(data["LoanAmount"], cut_points, labels)
print pd.value_counts(data["LoanAmount_Bin"], sort=False)
"""
Explanation: This shows that income is not a big deciding factor on its own as there is no appreciable difference between the people who received and were denied the loan.
More: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html#pandas.DataFrame.hist |
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.boxplot.html#pandas.DataFrame.boxplot
11. Cut function for binning
Sometimes numerical values make more sense if clustered together. For example, if we're trying to model traffic (#cars on road) with time of the day (minutes). The exact minute of an hour might not be that relevant for predicting traffic as compared to actual period of the day like "Morning", "Afternoon", "Evening", "Night", "Late Night". Modeling traffic this way will be more intuitive and will avoid overfitting minute details which practically make little sense.
Here I've defined a simple function which can be re-used for binning any variable fairly easily.
End of explanation
"""
#Define a generic function using Pandas replace function
def coding(col, codeDict):
colCoded = pd.Series(col, copy=True)
for key, value in codeDict.items():
colCoded.replace(key, value, inplace=True)
return colCoded
#Coding LoanStatus as Y=1, N=0:
print 'Before Coding:'
print pd.value_counts(data["Loan_Status"])
data["Loan_Status_Coded"] = coding(data["Loan_Status"], {'N':0,'Y':1})
print '\nAfter Coding:'
print pd.value_counts(data["Loan_Status_Coded"])
"""
Explanation: More: http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.cut.html
12. Coding nominal data
Very often we come across a case where we have to modify the categories of a nominal variable. This can be due to various reasons:
1. Some algorithms (like Logistic Regression) require all inputs to be numeric. So nominal variables are mostly coded as 0 to (n-1).
2. Sometimes the same category might be represented in 2 ways. For example, temperature might be recorded as "High", "Medium", "Low", "H", "low". Here, both "High" and "H" refer to the same category. Similarly, "Low" and "low" differ only in case.
3. Some categories might have very low frequencies, and it's generally a good idea to combine them.
Here I've defined a generic function which takes in input as a dictionary and codes the values using 'replace' function in Pandas
End of explanation
"""
|
Kaggle/learntools | notebooks/deep_learning_intro/raw/ex3.ipynb | apache-2.0 | # Setup plotting
import matplotlib.pyplot as plt
from learntools.deep_learning_intro.dltools import animate_sgd
plt.style.use('seaborn-whitegrid')
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('animation', html='html5')
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.deep_learning_intro.ex3 import *
"""
Explanation: Introduction
In this exercise you'll train a neural network on the Fuel Economy dataset and then explore the effect of the learning rate and batch size on SGD.
When you're ready, run this next cell to set everything up!
End of explanation
"""
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import make_column_transformer, make_column_selector
from sklearn.model_selection import train_test_split
fuel = pd.read_csv('../input/dl-course-data/fuel.csv')
X = fuel.copy()
# Remove target
y = X.pop('FE')
preprocessor = make_column_transformer(
(StandardScaler(),
make_column_selector(dtype_include=np.number)),
(OneHotEncoder(sparse=False),
make_column_selector(dtype_include=object)),
)
X = preprocessor.fit_transform(X)
y = np.log(y) # log transform target instead of standardizing
input_shape = [X.shape[1]]
print("Input shape: {}".format(input_shape))
"""
Explanation: In the Fuel Economy dataset your task is to predict the fuel economy of an automobile given features like its type of engine or the year it was made.
First load the dataset by running the cell below.
End of explanation
"""
# Uncomment to see original data
fuel.head()
# Uncomment to see processed features
pd.DataFrame(X[:10,:]).head()
"""
Explanation: Take a look at the data if you like. Our target in this case is the 'FE' column and the remaining columns are the features.
End of explanation
"""
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.Dense(128, activation='relu', input_shape=input_shape),
layers.Dense(128, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1),
])
"""
Explanation: Run the next cell to define the network we'll use for this task.
End of explanation
"""
# YOUR CODE HERE
#_UNCOMMENT_IF(PROD)_
#____
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
# missing loss
model.compile(
optimizer='adam'
)
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
# missing optimizer
model.compile(
loss='mae'
)
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
# wrong loss
model.compile(
optimizer='adam',
loss='mse'
)
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
# wrong optimizer
model.compile(
optimizer='sgd',
loss='mae'
)
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
model.compile(
optimizer='adam',
loss='mae'
)
q_1.assert_check_passed()
#%%RM_IF(PROD)%%
model.compile(
optimizer='Adam',
loss='MAE'
)
q_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
"""
Explanation: 1) Add Loss and Optimizer
Before training the network we need to define the loss and optimizer we'll use. Using the model's compile method, add the Adam optimizer and MAE loss.
End of explanation
"""
# YOUR CODE HERE
history = ____
# Check your answer
q_2.check()
#%%RM_IF(PROD)%%
# Wrong arguments
history = model.fit(
X, y,
batch_size=8,
epochs=4,
)
q_2.assert_check_failed()
#%%RM_IF(PROD)%%
history = model.fit(
X, y,
batch_size=128,
epochs=200,
)
q_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_2.hint()
#_COMMENT_IF(PROD)_
q_2.solution()
"""
Explanation: 2) Train Model
Once you've defined the model and compiled it with a loss and optimizer you're ready for training. Train the network for 200 epochs with a batch size of 128. The input data is X with target y.
End of explanation
"""
import pandas as pd
history_df = pd.DataFrame(history.history)
# Start the plot at epoch 5. You can change this to get a different view.
history_df.loc[5:, ['loss']].plot();
"""
Explanation: The last step is to look at the loss curves and evaluate the training. Run the cell below to get a plot of the training loss.
End of explanation
"""
# View the solution (Run this cell to receive credit!)
q_3.check()
"""
Explanation: 3) Evaluate Training
If you trained the model longer, would you expect the loss to decrease further?
End of explanation
"""
# YOUR CODE HERE: Experiment with different values for the learning rate, batch size, and number of examples
learning_rate = 0.05
batch_size = 32
num_examples = 256
animate_sgd(
learning_rate=learning_rate,
batch_size=batch_size,
num_examples=num_examples,
# You can also change these, if you like
steps=50, # total training steps (batches seen)
true_w=3.0, # the slope of the data
true_b=2.0, # the bias of the data
)
"""
Explanation: With the learning rate and the batch size, you have some control over:
- How long it takes to train a model
- How noisy the learning curves are
- How small the loss becomes
To get a better understanding of these two parameters, we'll look at the linear model, our simplest neural network. Having only a single weight and a bias, it's easier to see what effect a change of parameter has.
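To make the role of these two knobs concrete, here is a minimal NumPy sketch of SGD fitting a single-weight linear model (this is an illustrative stand-in, not the animate_sgd helper itself; learning_rate and batch_size are the same knobs discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w, true_b = 3.0, 2.0
X = rng.normal(size=256)
y = true_w * X + true_b + rng.normal(scale=0.1, size=256)

w, b = 0.0, 0.0
learning_rate, batch_size = 0.05, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch_size)  # sample a mini-batch
    xb, yb = X[idx], y[idx]
    err = (w * xb + b) - yb
    # Gradients of the mean squared error with respect to w and b
    w -= learning_rate * 2 * np.mean(err * xb)
    b -= learning_rate * 2 * np.mean(err)
print(w, b)  # should approach 3.0 and 2.0
```

A smaller batch makes each gradient estimate noisier; a larger learning rate takes bigger (and potentially unstable) steps per batch.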
The next cell will generate an animation like the one in the tutorial. Change the values for learning_rate, batch_size, and num_examples (how many data points) and then run the cell. (It may take a moment or two.) Try the following combinations, or try some of your own:
| learning_rate | batch_size | num_examples |
|-----------------|--------------|----------------|
| 0.05 | 32 | 256 |
| 0.05 | 2 | 256 |
| 0.05 | 128 | 256 |
| 0.02 | 32 | 256 |
| 0.2 | 32 | 256 |
| 1.0 | 32 | 256 |
| 0.9 | 4096 | 8192 |
| 0.99 | 4096 | 8192 |
End of explanation
"""
# View the solution (Run this cell to receive credit!)
q_4.check()
"""
Explanation: 4) Learning Rate and Batch Size
What effect did changing these parameters have? After you've thought about it, run the cell below for some discussion.
End of explanation
"""
|
quoniammm/happy-machine-learning | Udacity-ML/titanic_survival_exploration-master_0/titanic_survival_exploration.ipynb | mit | import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
"""
Explanation: Machine Learning Engineer Nanodegree
Introduction
Project 0: Predicting Titanic Passenger Survival
In 1912, the Titanic struck an iceberg and sank on her maiden voyage, killing most of the passengers and crew. In this introductory project, we will explore part of the Titanic passenger manifest to determine which features best predict whether a person survived. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your submission will be evaluated on the completeness of the code and your answers to the questions.
Tip: Text like this will guide you through using the iPython Notebook to complete the project.
Click here to view the English version of this file.
Getting Started
When we start working with the Titanic passenger data, we first import the modules we need and load the data into a pandas DataFrame. Run the code in the cell below to load the data and display the first few passenger entries using the .head() function.
Tip: You can run a cell by clicking inside it and using the keyboard shortcut Shift+Enter or Shift+Return, or by selecting the cell and pressing the play (run cell) button. MarkDown text like this can be edited by double-clicking and saved with the same shortcuts. Markdown lets you write readable plain text that can be converted to HTML.
End of explanation
"""
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
"""
Explanation: From this sample of the Titanic data, we can see the features recorded for each passenger on board:
Survived: whether the passenger survived (0 = no, 1 = yes)
Pclass: socio-economic class (1 = upper class, 2 = middle class, 3 = lower class)
Name: the passenger's name
Sex: the passenger's sex
Age: the passenger's age (may contain NaN)
SibSp: number of siblings and spouses the passenger had aboard
Parch: number of parents and children the passenger had aboard
Ticket: the passenger's ticket number
Fare: the fare the passenger paid for the ticket
Cabin: the passenger's cabin number (may contain NaN)
Embarked: the passenger's port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
Since we are interested in whether each passenger or crew member survived the disaster, we can remove the Survived feature from this dataset and store it in a separate variable, outcomes. It also serves as the target we want to predict.
Run the code to remove the Survived feature from the dataset and store it in the variable outcomes.
End of explanation
"""
def accuracy_score(truth, pred):
""" Returns accuracy score for input truth and predictions. """
    # Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
        # Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
"""
Explanation: This example shows how the Titanic Survived data is removed from the DataFrame. Note that data (the passenger data) and outcomes (whether they survived) are now aligned, which means that for any passenger data.loc[i] there is a corresponding survival outcome outcomes[i].
To check our predictions, we need a metric to score them. Since we care most about the accuracy of our predictions, i.e. the proportion of passengers whose survival we predict correctly, run the code below to create our accuracy_score function and test it on predictions for the first five passengers.
Thought question: Starting from the sixth passenger, if we predict that all of them survived, what do you think our prediction accuracy would be?
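As a self-contained illustration of how such an accuracy metric behaves (using made-up truth values here, not the actual passenger data), predicting the same class for everyone scores exactly the fraction of the truth that matches:

```python
import pandas as pd
import numpy as np

truth = pd.Series([1, 0, 0, 1, 0])              # hypothetical outcomes
predictions = pd.Series(np.ones(5, dtype=int))  # predict "survived" for all
accuracy = (truth == predictions).mean() * 100
print("Predictions have an accuracy of {:.2f}%.".format(accuracy))
```

Here two of the five hypothetical truth values are 1, so the all-ones prediction scores 40%.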
End of explanation
"""
def predictions_0(data):
""" Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
        # Predict the survival of 'passenger'
predictions.append(0)
    # Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
"""
Explanation: Tip: If you save the iPython Notebook, the output of the code you have run is saved too. However, once you reopen the project, your workspace will be reset. Make sure you rerun the code from where you left off each time to regenerate the variables and functions.
Predictions
If we wanted to predict whether the passengers on the Titanic survived, but knew nothing about them, the best prediction would be that no one on board survived. This is because we can assume that most passengers died when the ship sank. The predictions_0 function below predicts that every passenger on board died.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 1
Comparing against the real Titanic data, if we predict that none of the passengers survived, how accurate do you think this prediction will be?
Tip: Run the code below to see the accuracy of the prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Sex')
"""
Explanation: Answer: Predictions have an accuracy of 61.62%.
We can use the survival_stats function to see how much the Sex feature affects a passenger's chance of survival. This function is defined in the Python script file titanic_visualizations.py, which is provided with the project. The first two parameters passed to the function are the Titanic passenger data and the passengers' survival outcomes. The third parameter indicates which feature to plot the chart by.
Run the code below to plot a bar chart of survival rates by passenger sex.
End of explanation
"""
def predictions_1(data):
""" Model with one feature:
- Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
        # Remove the 'pass' statement below
        # and write your prediction conditions here
if passenger['Sex'] == 'male':
predictions.append(0)
else:
predictions.append(1)
    # Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
"""
Explanation: Looking at the survival statistics of the Titanic passengers, we can see that most male passengers died when the ship sank, while most female passengers survived the disaster. Let's build on our previous reasoning: if a passenger was male, we predict that he died; if a passenger was female, we predict that she survived the disaster.
Fill in the missing code below so that the function makes the correct predictions.
Tip: You can access the value of each passenger feature with dictionary-style access. For example, passenger['Sex'] returns the passenger's sex.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 2
If we predict that all female passengers survived and everyone else died, what would the accuracy of our prediction be?
Tip: Run the code below to see the accuracy of our prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Parch', ["Sex == 'female'"])
"""
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using just the passenger's sex (Sex), the accuracy of our predictions improved noticeably. Now let's see whether an additional feature can push the accuracy even further. For example, consider all the male passengers on the Titanic: can we find a subset of these passengers with a higher probability of survival? Let's use the survival_stats function again to look at the age (Age) of each male passenger. This time we will use the fourth parameter to restrict the bar chart to male passengers only.
Run the code below to plot the survival outcomes of males by age.
End of explanation
"""
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
        # Remove the 'pass' statement below
        # and write your prediction conditions here
if passenger['Sex'] == 'male':
if passenger['Age'] > 10:
predictions.append(0)
else:
predictions.append(1)
else:
predictions.append(1)
    # Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
"""
Explanation: Looking closely at the Titanic survival statistics, most boys younger than 10 survived when the ship sank, while most males over 10 died as the ship went down. Let's keep building on our previous prediction: if a passenger was female, we predict that she survived; if a passenger was male and younger than 10, we also predict that he survived; for everyone else, we predict that they did not survive.
Fill in the missing code below so that our function can make these predictions.
Tip: You can start from your earlier predictions_1 code and modify it to implement the new prediction function.
End of explanation
"""
print accuracy_score(outcomes, predictions)
"""
Explanation: Question 3
If we predict that all females and all males younger than 10 survived, what would the accuracy of the prediction be?
Tip: Run the code below to see the accuracy of the prediction.
End of explanation
"""
survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Age < 30"])
"""
Explanation: Answer: Predictions have an accuracy of 68.91%.
Combining the age (Age) feature with sex (Sex) improved the accuracy compared with using sex (Sex) alone. Now it's your turn to make predictions: find a set of features and conditions to partition the data so that the prediction accuracy rises above 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times under different conditions. Pclass, Sex, Age, SibSp and Parch are suggested features to try.
Use the survival_stats function to examine the survival statistics of the Titanic passengers.
Tip: To use multiple filter conditions, put each condition in a list and pass it as the last parameter. For example: ["Sex == 'male'", "Age < 18"]
End of explanation
"""
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
if passenger['Pclass'] > 2:
predictions.append(0)
elif passenger['Age'] > 10:
predictions.append(0)
elif passenger['Parch'] < 1:
predictions.append(0)
else:
predictions.append(1)
elif passenger['Parch'] > 3:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
"""
Explanation: After examining the visualized survival statistics of the Titanic's passengers, fill in the missing parts of the code below so that the function returns your predictions.
Make sure to keep a record of the features and conditions you tried before arriving at your final prediction model.
Hint: You can start from your predictions_2 code and modify it to implement the new prediction function.
End of explanation
"""
print(accuracy_score(outcomes, predictions))
"""
Explanation: Conclusion
Describe the steps you took to build a prediction model that reaches 80% accuracy. Which features did you examine? Were some features more helpful than others? What conditions did you use to predict survival? What accuracy do your final predictions achieve?
Hint: Run the code below to see your prediction accuracy.
End of explanation
"""
|
danielballan/docker-demo-images | notebooks/communities/pyladies/Python 101.ipynb | bsd-3-clause | 1 + 1
3 / 4 # caution: in Python 2 the result will be an integer
7 ** 3
"""
Explanation: Welcome to Python 101
<a href="http://pyladies.org"><img align="right" src="http://www.pyladies.com/assets/images/pylady_geek.png" alt="Pyladies" style="position:relative;top:-80px;right:30px;height:50px;" /></a>
Welcome! This notebook is appropriate for people who have never programmed before. A few tips:
To execute a cell, click in it and then type [shift] + [enter]
This notebook's kernel will restart if the page becomes idle for 10 minutes, meaning you'll have to rerun steps again
Try.jupyter.org is awesome, and <a href="http://rackspace.com">Rackspace</a> is awesome for hosting this, but you will want your own Python on your computer too. Hopefully you are in a class and someone helped you install. If not:
Anaconda is great if you use Windows
or will only use Python for data analysis.
If you want to contribute to open source code, you want the standard
Python release. (Follow
the Hitchhiker's Guide to Python.)
Outline
Operators and functions
Data and container types
Control structures
I/O, including basic web APIs
How to write and run a Python script
First, try Python as a calculator.
Python can be used as a shell interpreter. After you install Python, you can open a command line terminal (e.g. powershell or bash), type python3 or python, and a Python shell will open.
For now, we are using the notebook.
Here is simple math. Go to town!
End of explanation
"""
import math
print("The square root of 3 is:", math.sqrt(3))
print("pi is:", math.pi)
print("The sin of 90 degrees is:", math.sin(math.radians(90)))
"""
Explanation: Challenge for you
The arithmetic operators in Python are:
python
+ - * / ** % //
Use the Python interpreter to calculate:
16 times 26515
1835 modulo 163
<p style="font-size:smaller">(psst...)
If you're stuck, try</p>
python
help()
<p style="font-size:smaller">and then in the interactive box, type <tt>symbols</tt>
</p>
More math requires the math module
End of explanation
"""
from math import acos, degrees # use 'from' sparingly
int(degrees(acos(0.743144))) # 'int' to make an integer
"""
Explanation: The import statement imports the module into the namespace
Then access functions (or constants) by using:
python
<module>.<function>
And get help on what is in the module by using:
python
help(<module>)
Challenge for you
Hint: help(math) will show all the functions...
What is the arc cosine of 0.743144 in degrees?
End of explanation
"""
# Here's to start.
s = "foobar"
"bar" in s
# You try out the other ones!
s.find("bar")
"""
Explanation: Math takeaways
Operators are what you think
Be careful of unintended integer math
the math module has the remaining functions
Strings
(Easier in Python than in any other language ever. Even Perl.)
Strings
Use help(str) to see available functions for string objects. For help on a particular function from the class, type the class name and the function name: help(str.join)
String operations are easy:
```
s = "foobar"
"bar" in s
s.find("bar")
index = s.find("bar")
s[:index]
s[index:] + " this is intuitive! Hooray!"
s[-1] # The last element in the list or string
```
Strings are immutable, meaning they cannot be modified, only copied or replaced. (This is related to memory use, and interesting for experienced programmers ... don't worry if you don't get what this means.)
End of explanation
"""
help(str.join)
declaration = "We are now the knights who say:\n"
sayings = ['"icky"'] * 3 + ['"p\'tang"']
# You do the rest -- fix the below :-)
print(sayings)
"""
Explanation: Challenge for you
Using only string addition (concatenation) and the function str.join, combine declaration and sayings :
```python
declaration = "We are the knights who say:\n"
sayings = ['"icky"'] * 3 + ['"p\'tang"']
the (\') escapes the quote
```
to a variable, sentence, that when printed does this:
```python
print(sentence)
We are the knights who say:
"icky", "icky", "icky", "p'tang"!
```
End of explanation
"""
sentence = declaration + ", ".join(sayings) + "!"
print(sentence)
print() # empty 'print' makes a newline
# By the way, you use 'split' to split a string...
# (note what happens to the commas):
print(" - ".join(['ni'] * 12))
print("\n".join("icky, icky, icky, p'tang!".split(", ")))
"""
Explanation: Don't peek until you're done with your own code!
End of explanation
"""
print("%s is: %.3f (well, %d in Indiana)" % ("Pi", math.pi, math.pi))
"""
Explanation: String formatting
There are a bunch of ways to do string formatting:
- C-style:
```python
"%s is: %.3f (or %d in Indiana)" % \
("Pi", math.pi, math.pi)
%s = string
%0.3f = floating point number, 3 places after the decimal point
%d = decimal number
Style notes:
Line continuation with '\' works but
is frowned upon. Indent twice
(8 spaces) so it doesn't look
like a control statement
```
End of explanation
"""
# Use a colon and then decimals to control the
# number of decimals that print out.
#
# Also note the number {1} appears twice, so that
# the argument `math.pi` is reused.
print("{0} is: {1} ({1:.3} truncated)".format("Pi", math.pi))
"""
Explanation: New in Python 2.6, str.format doesn't require types:
```python
"{0} is: {1} ({1:3.2} truncated)".format(
"Pi", math.pi)
More style notes:
Line continuation in square or curly
braces or parenthesis is better.
```
End of explanation
"""
# Go to town -- change the decimal places!
print("{pi} is: {pie:05.2}".format(pi="Pi", pie=math.pi))
"""
Explanation: And Python 2.7+ allows named specifications:
```python
"{pi} is {pie:05.3}".format(
pi="Pi", pie=math.pi)
05.3 = zero-padded number, with
5 total characters, and
3 significant digits.
```
End of explanation
"""
# Boolean
x = True
type(x)
"""
Explanation: String takeaways
str.split and str.join, plus the regex module (pattern matching tools for strings), make Python my language of choice for data manipulation
There are many ways to format a string
help(str) for more
Quick look at other types
End of explanation
"""
# Lists can contain multiple types
x = [True, 1, 1.2, 'hi', [1], (1,2,3), {}, None]
type(x)
# (the underscores are for special internal variables)
# List access. Try other numbers!
x[1]
print("x[0] is:", x[0], "... and x[1] is:", x[1]) # Python is zero-indexed
x.append(set(["a", "b", "c"]))
for item in x:
print(item, "... type =", type(item))
"""
Explanation: Python has containers built in...
Lists, dictionaries, sets. We will talk about them later.
There is also a library collections with additional specialized container types.
End of explanation
"""
# You do it!
isinstance(x, tuple)
"""
Explanation: If you need to check an object's type, do this:
python
isinstance(x, list)
isinstance(x[1], bool)
End of explanation
"""
# You do it!
fifth_element = x[4]
print(fifth_element)
fifth_element.append("Both!")
print(fifth_element)
# and see, the original list is changed too!
print(x)
"""
Explanation: Caveat
Lists, when copied, are copied by pointer. What that means is every symbol that points to a list, points to that same list.
Same with dictionaries and sets.
Example:
python
fifth_element = x[4]
fifth_element.append("Both!")
print(fifth_element)
print(x)
Why? The assignment (=) operator copies the pointer to the place on the computer where the list (or dictionary or set) is: it does not copy the actual contents of the whole object, just the address where the data is in the computer. This is efficent because the object could be megabytes big.
End of explanation
"""
import copy
# -------------------- A shallow copy
x[4] = ["list"]
shallow_copy_of_x = copy.copy(x)
shallow_copy_of_x[0] = "Shallow copy"
fifth_element = x[4]
fifth_element.append("Both?")
# look at them
def print_list(l):
print("-" * 8, "the list, element-by-element", "-" * 8)
for elem in l:
print(elem)
print()
print_list(shallow_copy_of_x)
print_list(x)
"""
Explanation: To make a duplicate copy you must do it explicitly
The copy module
Example:
```python
import copy
-------------------- A shallow copy
x[4] = ["list"]
shallow_copy_of_x = copy.copy(x)
shallow_copy_of_x[0] = "Shallow copy"
fifth_element = x[4]
fifth_element.append("Both?")
def print_list(l):
print("-" * 10)
for elem in l:
print(elem)
print()
look at them
print_list(shallow_copy_of_x)
print_list(x)
fifth_element
```
End of explanation
"""
# -------------------- A deep copy
x[4] = ["list"]
deep_copy_of_x = copy.deepcopy(x)
deep_copy_of_x[0] = "Deep copy"
fifth_element = deep_copy_of_x[4]
fifth_element.append("Both? -- no, just this one got it!")
# look at them
print(fifth_element)
print("\nand...the fifth element in the original list:")
print(x[4])
"""
Explanation: And here is a deep copy
```python
-------------------- A deep copy
x[4] = ["list"]
deep_copy_of_x = copy.deepcopy(x)
deep_copy_of_x[0] = "Deep copy"
fifth_element = deep_copy_of_x[4]
fifth_element.append("Both?")
look at them
print_list(deep_copy_of_x)
print_list(x)
fifth_element
```
End of explanation
"""
l = ["a", 0, [1, 2] ]
l[1] = "second element"
type(l)
print(l)
"""
Explanation: Common atomic types
<table style="border:3px solid white;"><tr>
<td> boolean</td>
<td> integer </td>
<td> float </td>
<td>string</td>
<td>None</td>
</tr><tr>
<td><tt>True</tt></td>
<td><tt>42</tt></td>
<td><tt>42.0</tt></td>
<td><tt>"hello"</tt></td>
<td><tt>None</tt></td>
</tr></table>
Common container types
<table style="border:3px solid white;"><tr>
<td> list </td>
<td> tuple </td>
<td> set </td>
<td>dictionary</td>
</tr><tr style="font-size:smaller;">
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Mutable</li>
<li>No restriction on elements</li>
<li>Elements are ordered</li></ul></td>
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Immutable</li>
<li>Elements must be hashable</li>
<li>Elements are ordered</li></ul></td>
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Mutable</li>
<li>Elements are<br/>
unique and must<br/>
be hashable</li>
<li>Elements are not ordered</li></ul></td>
<td><ul style="margin:5px 2px 0px 15px;"><li>Iterable</li><li>Mutable</li>
<li>Key, value pairs.<br/>
Keys are unique and<br/>
must be hashable</li>
<li>Keys are not ordered</li></ul></td>
</tr></table>
Iterable
You can loop over it
Mutable
You can change it
Hashable
A hash function converts an object to a number that will always be the same for the object. They help with identifying the object. A better explanation kind of has to go into the guts of the code...
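You can check hashability yourself with the built-in hash function; a quick sketch:

```python
# hashable: strings, numbers, and tuples of hashable elements
print(hash("hello"), hash(42), hash((1, 2)))

# not hashable: mutable containers such as lists, dicts, and sets
try:
    hash([1, 2, 3])
except TypeError as err:
    print("lists are unhashable:", err)
```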
Container examples
List
To make a list, use square braces.
End of explanation
"""
indices = range(len(l))
print(list(indices))  # wrap in list() -- in Python 3, range() otherwise prints as 'range(0, 3)'
# Iterate over the indices using i=0, i=1, i=2
for i in indices:
print(l[i])
# Or iterate over the items in `x` directly
for x in l:
print(x)
"""
Explanation: Items in a list can be anything: <br/>
sets, other lists, dictionaries, atoms
End of explanation
"""
t = ("a", 0, "tuple")
type(t)
for x in t:
    print(x)
"""
Explanation: Tuple
To make a tuple, use parenthesis.
End of explanation
"""
s = set(['a', 0])
if 'b' in s:
print("has b")
s.add("b")
s.remove("a")
if 'b' in s:
print("has b")
l = [1,2,3]
try:
s.add(l)
except TypeError:
print("Could not add the list")
#raise # uncomment this to raise an error
"""
Explanation: Set
To make a set, wrap a list with the function set().
- Items in a set are unique
- Lists, dictionaries, and sets cannot be in a set
End of explanation
"""
# two ways to do the same thing
d = {"mother":"hamster",
"father":"elderberries"}
d = dict(mother="hamster",
father="elderberries")
d['mother']
print("the dictionary keys:", d.keys())
print()
print("the dictionary values:", d.values())
# When iterating over a dictionary, use items() and two variables:
for k, v in d.items():
print("key: ", k, end=" ... ")
print("val: ", v)
# If you don't you will just get the keys:
for k in d:
print(k)
"""
Explanation: Dictionary
To make a dictionary, use curly braces.
- A dictionary is a set of key,value pairs where the keys
are unique.
- Lists, dictionaries, and sets cannot be dictionary keys
- To iterate over a dictionary use items
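A related pattern worth knowing (a small sketch): dict.get looks up a key with an optional fallback default instead of raising a KeyError for missing keys.

```python
d = dict(mother="hamster", father="elderberries")

print(d.get("mother"))           # existing key: returns its value
print(d.get("sister"))           # missing key: returns None, no KeyError
print(d.get("sister", "n/a"))    # missing key with an explicit default
```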
End of explanation
"""
def function_name(arg1, arg2, kwarg1="my_default_value"):
"""Docstring goes here -- triple quoted."""
pass # the 'pass' keyword means 'do nothing'
# See the docstring appear when using `help`
help(function_name)
"""
Explanation: Type takeaways
Lists, tuples, dictionaries, sets all are base Python objects
Be careful of duck typing
Remember about copy / deepcopy
```python
For more information, use help(object)
help(tuple)
help(set)
help()
```
Function definition and punctuation
The syntax for creating a function is:
```python
def function_name(arg1, arg2, kwarg1=default1):
"""Docstring goes here -- triple quoted."""
pass # the 'pass' keyword means 'do nothing'
The next thing unindented statement is outside
of the function. Leave a blank line between the
end of the function and the next statement.
```
The def keyword begins a function declaration.
The colon (:) finishes the signature.
The body must be indented. The indentation must be exactly the same.
There are no curly braces for function bodies in Python — white space at the beginning of a line tells Python that this line is "inside" the body of whatever came before it.
Also, at the end of a function, leave at least one blank line to separate the thought from the next thing in the script.
End of explanation
"""
# Paste the function definition below:
# Here's the help statement
help(greet_person)
# And here's the function in action!
greet_person("world")
"""
Explanation: Whitespace matters
The 'tab' character '\t' counts as one single character even if it looks like multiple characters in your editor.
But indentation is how you denote nesting!
So, this can seriously mess up your coding. The Python style guide recommends configuring your editor to make the tab keypress type four spaces automatically.
To set the spacing for Python code in Sublime, go to Sublime Text → Preferences → Settings - More → Syntax Specific - User
It will open up the file Python.sublime-settings. Please put this inside, then save and close.
{
"tab_size": 4,
"translate_tabs_to_spaces": true
}
Your first function
Copy this and paste it in the cell below
```python
def greet_person(person):
"""Greet the named person.
usage:
>>> greet_person("world")
hello world
"""
print('hello', person)
```
End of explanation
"""
# your function
def greet_people(list_of_people):
"""Documentation string goes here."""
# You do it here!
pass
"""
Explanation: Duck typing
Python's philosophy for handling data types is called duck typing (If it walks like a duck, and quacks like a duck, it's a duck). Functions do no type checking — they happily process an argument until something breaks. This is great for fast coding but can sometimes make for odd errors. (If you care to specify types, there is a standard way to do it, but don't worry about this if you're a beginner.)
Challenge for you
Create another function named greet_people that takes a list of people and greets them all one by one. Hint: you can call the function greet_person.
End of explanation
"""
def greet_people(list_of_people):
for person in list_of_people:
greet_person(person)
greet_people(["world", "awesome python user!", "rockstar!!!"])
"""
Explanation: don't peek...
End of explanation
"""
# Try it!
"""
Explanation: Quack quack
Make a list of all of the people in your group and use your function to greet them:
```python
people = ["King Arthur",
"Sir Galahad",
"Sir Robin"]
greet_people(people)
What do you think will happen if I do:
greet_people("pyladies")
```
End of explanation
"""
# Standard if / then / else statement.
#
# Go ahead and change `i`
i = 1
if i is None:
print("None!")
elif i % 2 == 0:
print("`i` is an even number!")
else:
print("`i` is neither None nor even")
# This format is for very short one-line if / then / else.
# It is called a `ternary` statement.
#
"Y" if i==1 else "N"
"""
Explanation: WTW?
Remember strings are iterable...
<div style="text-align:center;">quack!</div>
<div style="text-align:right;">quack!</div>
Whitespace / duck typing takeways
Indentation is how to denote nesting in Python
Do not use tabs; expand them to spaces
If it walks like a duck and quacks like a duck, it's a duck
Control structures
Common comparison operators
<table style="border:3px solid white;"><tr>
<td><tt>==</tt></td>
<td><tt>!=</tt></td>
<td><tt><=</tt> or <tt><</tt><br/>
<tt>>=</tt> or <tt>></tt></td>
<td><tt>x in (1, 2)</tt></td>
<td><tt>x is None<br/>
x is not None</tt></td>
</tr><tr style="font-size:smaller;">
<td>equals</td>
<td>not equals</td>
<td>less or<br/>equal, etc.</td>
<td>works for sets,<br/>
lists, tuples,<br/>
dictionary keys,<br/>
strings</td>
<td>just for <tt>None</tt></td>
</tr></table>
If statement
The if statement checks whether the condition after if is true.
Note the placement of colons (:) and the indentation. These are not optional.
If it is, it does the thing below it.
Otherwise it goes to the next comparison.
You do not need any elif or else statements if you only
want to do something if your test condition is true.
Advanced users, there is no switch statement in Python.
End of explanation
"""
i = 0
while i < 3:
print("i is:", i)
i += 1
print("We exited the loop, and now i is:", i)
"""
Explanation: While loop
The while loop requires you to set up something first. Then it
tests whether the statement after the while is true.
Again note the colon (:) and the indentation.
If the condition is true, then the body of
the while loop will execute
Otherwise it will break out of the loop and go on
to the next code underneath the while block
End of explanation
"""
for i in range(3):
print("in the for loop. `i` is:", i)
print()
print("outside the for loop. `i` is:", i)
# or loop directly over a list or tuple
for element in ("one", 2, "three"):
print("in the for loop. `element` is:", element)
print()
print("outside the for loop. `element` is:", element)
"""
Explanation: For loop
The for loop iterates over the items after the for,
executing the body of the loop once per item.
End of explanation
"""
# Paste it here, and run!
"""
Explanation: Challenge for you
Please look at this code and think of what will happen, then copy it and run it. We introduce break and continue...can you tell what they do?
When will it stop?
What will it print out?
What will i be at the end?
python
for i in range(20):
if i == 15:
break
elif i % 2 == 0:
continue
for j in range(5):
print(i + j, end="...")
print() # newline
End of explanation
"""
|
whitead/numerical_stats | unit_6/hw_2019/Homework_6_key.ipynb | gpl-3.0 | import matplotlib.pyplot as plt
import numpy as np
from scipy.special import comb, factorial
n = np.arange(10)
mu = 3
p = mu**n * np.exp(-mu) / factorial(n)
plt.xlabel('Number of Deer [$n$]')
plt.ylabel('P(n)')
plt.plot(n, p, '-o')
plt.show()
"""
Explanation: Homework 6
CHE 116: Numerical Methods and Statistics
2/21/2019
1. Plotting Distributions (20 Points)
For the following problems, choose the distribution that best fits the process and plot the distribution. State whether the sample space is discrete, the range of the sample space, and then use that to justify your choice of distribution. Make sure your axes labels are descriptive and not just the variable letters.
1.1
About 3 deer cross through my yard each day in winter. What is the distribution that could explain the random variable of number of deer? Remember to read the instructions above!
1.1 Answer
Discrete, 0 to the total deer population in North America, Poisson.
End of explanation
"""
from math import pi
mu = 8
s = 2
x = np.linspace(2, 14, 1000)
Px = 1 / np.sqrt(2 * pi) / s * np.exp(-(mu - x)**2 / (2 * s**2))
plt.plot(x, Px)
plt.xlabel('Hours Slept [$t$]')
plt.ylabel('P(t)')
plt.show()
"""
Explanation: 1.2
You sleep about 8 hours per night with standard deviation of 2. What distribution could explain the random variable of number of hours slept?
1.2 Answer
Continuous, 0 to infinity, normal since < 0 and > 12 won't have high enough probability to matter. You could justify a binomial as well, if you restrict yourself to counting discrete hours.
End of explanation
"""
Ns = [11, 5, 7]
ps = [0.4, 0.5, 0.7]
names = ['Club Poet', 'The Dead Hedgehog', 'The Dancing Cloud']
for N,p, l in zip(Ns, ps, names):
n = np.arange(N + 1)
plt.plot(n, comb(N, n) * p**n * (1 - p)**(N - n), '-o', label=l)
plt.legend()
plt.xlabel('Poems Read [$n$]')
plt.ylabel('$P(n)$')
plt.show()
"""
Explanation: 1.3
You are going out to read your poetry. You can go to only one of three different poetry readings. At each poetry reading, they randomly select a poet to read for a given number of timeslots. You can read multiple times, since you have many poems to read. At CLUB POET, there will be 11 poem reading timeslots and your probability of reading at particular timeslot is 0.4. At THE DEAD HEDGEHOG, there will be 5 timeslots and your probability of reading will be 0.5. At THE DANCING CLOUD, there are 7 timeslots and your probability of reading will be 0.7. Plot each of the distributions representing number of poems read for the different poetry readings. You should plot three distributions.
Each have a maximum number of poems read, which is the discrete upper limit for sample space. 0 is the lower bound. Binomial best fits.
End of explanation
"""
t = np.linspace(0,60, 1000)
lamb = 1 / 20.
plt.plot(t, lamb * np.exp(-lamb * t))
plt.xlabel('time since last message [$t$]')
plt.ylabel('$P(t)$')
plt.show()
"""
Explanation: 1.4
You recently got a boyfriend. Your boyfriend texts you about every 20 minutes. What distribution could represent the time between text messages?
$t > 0$, continuous, exponential best fits.
End of explanation
"""
N = 12
p = 0.7
psum = 0
for ni in range(N + 1):
psum += comb(N, ni) * p**ni * (1 - p)**(N - ni)
if psum > 0.9:
break
print(ni)
"""
Explanation: 2. Loops (20 Points)
For the following problems, use loops to solve them. You cannot use numpy arrays. Justify your choice of distribution, state what quantity is being asked (probability of sample, interval, or prediction interval), and answer the prompt in words. The only formula you may use for expected value is $E[x] = \sum P(x) x$.
2.1 (Warmup)
Show that the binomial distribution sums to one over the whole sample space.
2.2
You ask your students questions during lecture and offer candy bars if they get problems correct. You ask 12 questions per lecture and the students get the questions right with probability 0.7. How many candy bars should you bring to have enough for 90% of the lectures?
2.2 Answer
This is a binomial, since you ask a fixed number of questions but the probability of success is only 0.7. The question is asking about a prediction interval. If you take 10 candy bars, you will have enough just more than 90% of the time.
End of explanation
"""
p = 25 / 1e5
psum = 0
i = 1
while psum < 0.1:
psum += p*(1 - p)**i
i += 1
print(i)
"""
Explanation: 2.3
About 25 per 100,000 people die per year in a car accident. How many years can you drive before there is a 10% chance of dying?
2.3 Answer
This is a geometric distribution, since it's unbounded and you stop on success. We are being asked a prediction interval. If you drive your car for 423 years, you will have a 10% chance of dying.
End of explanation
"""
psum = 0
mu = 25
for i in range(12):
psum += mu**i * np.exp(-mu) / factorial(i)
print(psum)
"""
Explanation: 2.4
In a computer program, each line of code can cause a security vulnerability. A particular company finds that after 1 year, there are 25 security vulnerabilities per 10,000 lines of code. After inspecting a new program which contains 23,000 lines of code, they find 11 secutity vulnerabilities. What is the probability that they found all vulnerabilities?
2.4 Answer
This is a Poison distribution. We are asking about the probability of interval. There is a 0.1% chance that all vulnerabilities were caught.
End of explanation
"""
import scipy.stats as ss  # scipy.stats provides the normal CDF used below

ss.norm.cdf(0, scale=4, loc=3) - ss.norm.cdf(-4, scale=4, loc=3)
"""
Explanation: 3. Normal Distribution
3.1 (2 Points)
Take $\mu = 3$ and $\sigma = 4$. What is $P(-4 < x < 0)$?
3.1 Answer
End of explanation
"""
1 - ss.norm.cdf(120, loc=111, scale=30)
"""
Explanation: 3.2 (2 Points)
You spend 111 minutes per day thinking about what to eat, with a standard deviation of 30. What is the probability you spend 43 minutes thinking about food?
3.2 Answer
0
3.3 (2 Points)
Using the numbers from 3.2, what is the probability that you spend more than 2 hours thinking about food in a day?
3.3 Answer
End of explanation
"""
ss.norm.cdf(0, loc=111, scale=30)
"""
Explanation: 3.4 (4 Points)
What is unphysical about the sample space implicitly defined in problem 3.2? Compute a quantity to prove that the unphysical part of the sample space is negligible.
3.4 Answer
It is not possible to think for a negative number of minutes. Using the calculation below, we see that the probability of having negative values is 0.001, so there are rarely any consequences of our unphysical assumption.
End of explanation
"""
(-4 - (-2)) / 4, (4 - (-2)) / 4  # z = (x - mu) / sigma with mu = -2, sigma = 4
"""
Explanation: 3.5 (2 Points)
Take $\mu = -2$, $\sigma = 4$. Convert the following interval to z-scores: $(-4, 4)$.
3.5 Answer
End of explanation
"""
ss.norm.cdf(3) - ss.norm.cdf(-3)
"""
Explanation: 3.6 (4 Points)
Can $\sigma$ be negative? Why or why not?
3.6 Answer
No, because the exponentiated term in $p(x)$ would be positive leading to a divergent integral (unnomralized)
3.7 (2 Points)
How much probability is covered between Z-scores $-3$ to $3$?
3.7 Answer
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/c4c1adf6983ad491e45e3941a0c10d6e/time_frequency_mixed_norm_inverse.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>
#
# License: BSD-3-Clause
import numpy as np
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fwd_fname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = meg_path / 'sample_audvis-no-filter-ave.fif'
cov_fname = meg_path / 'sample_audvis-shrunk-cov.fif'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left visual'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked = mne.pick_channels_evoked(evoked)
# We make the window slightly larger than what you'll eventually be interested
# in ([-0.05, 0.3]) to avoid edge effects.
evoked.crop(tmin=-0.1, tmax=0.4)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
"""
Explanation: Compute MxNE with time-frequency sparse prior
The TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)
that promotes focal (sparse) sources (such as dipole fitting techniques)
:footcite:GramfortEtAl2013b,GramfortEtAl2011.
The benefit of this approach is that:
it is spatio-temporal without assuming stationarity (sources properties
can vary over time)
activations are localized in space, time and frequency in one step.
with a built-in filtering process based on a short time Fourier
transform (STFT), data does not need to be low passed (just high pass
to make the signals zero mean).
the solver solves a convex optimization problem, hence cannot be
trapped in local minima.
End of explanation
"""
# alpha parameter is between 0 and 100 (100 gives 0 active source)
alpha = 40. # general regularization parameter
# l1_ratio parameter between 0 and 1 promotes temporal smoothness
# (0 means no temporal regularization)
l1_ratio = 0.03 # temporal regularization parameter
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
loose=loose, depth=depth)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute TF-MxNE inverse solution with dipole output
dipoles, residual = tf_mixed_norm(
evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, loose=loose,
depth=depth, maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8.,
debias=True, wsize=16, tstep=4, window=0.05, return_as_dipoles=True,
return_residual=True)
# Crop to remove edges
for dip in dipoles:
dip.crop(tmin=-0.05, tmax=0.3)
evoked.crop(tmin=-0.05, tmax=0.3)
residual.crop(tmin=-0.05, tmax=0.3)
"""
Explanation: Run solver
End of explanation
"""
plot_dipole_amplitudes(dipoles)
"""
Explanation: Plot dipole activations
End of explanation
"""
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# # Plot dipole locations of all dipoles with MRI slices:
# for dip in dipoles:
# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
# subjects_dir=subjects_dir, mode='orthoview',
# idx='amplitude')
"""
Explanation: Plot location of the strongest dipole with MRI slices
End of explanation
"""
ylim = dict(grad=[-120, 120])
evoked.pick_types(meg='grad', exclude='bads')
evoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
residual.pick_types(meg='grad', exclude='bads')
residual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,
proj=True, time_unit='s')
"""
Explanation: Show the evoked response and the residual for gradiometers
End of explanation
"""
stc = make_stc_from_dipoles(dipoles, forward['src'])
"""
Explanation: Generate stc from dipoles
End of explanation
"""
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
opacity=0.1, fig_name="TF-MxNE (cond %s)"
% condition, modes=['sphere'], scale_factors=[1.])
time_label = 'TF-MxNE time=%0.2f ms'
clim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])
brain = stc.plot('sample', 'inflated', 'rh', views='medial',
clim=clim, time_label=time_label, smoothing_steps=5,
subjects_dir=subjects_dir, initial_time=150, time_unit='ms')
brain.add_label("V1", color="yellow", scalar_thresh=.5, borders=True)
brain.add_label("V2", color="red", scalar_thresh=.5, borders=True)
"""
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation
"""
|
COHRINT/cops_and_robots | resources/notebooks/spacy/.ipynb_checkpoints/00_spacy-checkpoint.ipynb | apache-2.0 | from __future__ import unicode_literals # If Python 2
import spacy.en
from spacy.tokens import Token
from spacy.parts_of_speech import ADV
nlp = spacy.en.English()
# Find log probability of Nth most frequent word
probs = [lex.prob for lex in nlp.vocab]
probs.sort()
words = [w for w in nlp.vocab if w.has_repvec]
"""
Explanation: Playing around with spaCy
spaCy
Using the basic introduction to spaCy, then playing with it. Let's load spaCy's English dictionary.
End of explanation
"""
tokens = nlp(u'"I ran to the wall quickly," Frank explained to the robot.')
ran = tokens[2]
quickly = tokens[6]
run = nlp(ran.lemma_)[0]
# the integer and string representations of "ran" and its head
print (ran.orth, ran.orth_, ran.head.lemma, ran.head.lemma_)
print (quickly.orth, quickly.orth_, quickly.lemma, quickly.lemma_,)
print (quickly.head.orth_, quickly.head.lemma_)
print (ran.prob, run.prob, quickly.prob)
print (ran.cluster, run.cluster, quickly.cluster)
"""
Explanation: spaCy tokenizes words, then treats each token as a Token object. Each token has an integer and string representation. Each token also has things like:
orth
The form of the word with no string normalization or processing, as it appears in the string, without trailing whitespace, e.g. " Frank " -> "Frank"
head
The Token that is the immediate syntactic head of the word. If the word is the root of the dependency tree, the same word is returned.
lemma
The “base” of the word, with no inflectional suffixes, e.g. the lemma of “developing” is “develop”, the lemma of “geese” is “goose”, etc. Note that derivational suffixes are not stripped, e.g. the lemma of “institutions” is “institution”, not “institute”. Lemmatization is performed using the WordNet data, but extended to also cover closed-class words such as pronouns. By default, the WN lemmatizer returns “hi” as the lemma of “his”. We assign pronouns the lemma -PRON-.
prob
The unigram log-probability of the word, estimated from counts from a large corpus, smoothed using Simple Good Turing estimation.
cluster
The Brown cluster ID of the word. These are often useful features for linear models. If you’re using a non-linear model, particularly a neural net or random forest, consider using the real-valued word representation vector, in Token.repvec, instead.
repvec
A “word embedding” representation: a dense real-valued vector that supports similarity queries between words. By default, spaCy currently loads vectors produced by the Levy and Goldberg (2014) dependency-based word2vec model.
End of explanation
"""
is_adverb = lambda tok: tok.pos == ADV and tok.prob < probs[-1000]
str_ = u'"I ran to the wall quickly," Frank explained to the robot.'
tokens = nlp(str_)
print(u''.join(tok.string.upper() if is_adverb(tok) else tok.string for tok in tokens))
quickly = tokens[6]
"""
Explanation: Given a test sentence (in this case: "I ran to the wall quickly," Frank explained to the robot.), we can highlight parts of speech (i.e. adverbs):
End of explanation
"""
from numpy import dot
from numpy.linalg import norm
cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2))
words.sort(key=lambda w: cosine(w.repvec, quickly.repvec))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n100-110:')
print('\n'.join(w.orth_ for w in words[100:110]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
print('\n50000-50010:')
print('\n'.join(w.orth_ for w in words[50000:50010]))
"""
Explanation: Find similar words to 'quickly' via cosine similarity:
End of explanation
"""
say_adverbs = ['quickly', 'swiftly', 'speedily', 'rapidly']
say_vector = sum(nlp.vocab[adverb].repvec for adverb in say_adverbs) / len(say_adverbs)
words.sort(key=lambda w: cosine(w.repvec, say_vector))
words.reverse()
print('1-20:')
print('\n'.join(w.orth_ for w in words[0:20]))
print('\n50-60:')
print('\n'.join(w.orth_ for w in words[50:60]))
print('\n1000-1010:')
print('\n'.join(w.orth_ for w in words[1000:1010]))
"""
Explanation: We can focus on one meaning of quickly and find similar words if we average over related words:
End of explanation
"""
from spacy.parts_of_speech import NOUN
is_noun = lambda tok: tok.pos == NOUN and tok.prob < probs[-1000]
print(u''.join(tok.string.upper() if is_noun(tok) else tok.string for tok in tokens))
nouns = [tok for tok in tokens if is_noun(tok)]
"""
Explanation: Let's look at other parts of speech from our original sentence:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
barrier = nlp('barrier')[0]
car = nlp('car')[0]
agent = nlp('android')[0]
test_nouns = nouns + [barrier] + [car] + [agent]
n = len(test_nouns)
barrier_relations = np.zeros(n)
car_relations = np.zeros(n)
agent_relations = np.zeros(n)
for i, noun in enumerate(test_nouns):
barrier_relations[i] = cosine(barrier.repvec, noun.repvec)
car_relations[i] = cosine(car.repvec, noun.repvec)
agent_relations[i] = cosine(agent.repvec, noun.repvec)
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n)
bar_width = 0.2
opacity = 0.4
rects1 = plt.bar(index, barrier_relations, bar_width,
alpha=opacity,
color='b',
label=barrier.orth_)
rects2 = plt.bar(index + bar_width, car_relations, bar_width,
alpha=opacity,
color='r',
label=car.orth_)
rects3 = plt.bar(index + 2 * bar_width, agent_relations, bar_width,
alpha=opacity,
color='g',
label=agent.orth_)
labels = [tok.orth_ for tok in test_nouns]
plt.xlabel('Test Word')
plt.ylabel('Similarity')
plt.title('Similarity of words')
plt.xticks(index + bar_width, labels)
plt.legend()
from IPython.core.display import HTML
# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: How closely does one test noun match each noun found in our sentence? That is, if we say, "barrier", is it closer to "wall," "Frank", or "robot"? How about "car" or "agent"?
End of explanation
"""
|
deepmind/deepmind-research | causal_reasoning/Causal_Reasoning_in_Probability_Trees.ipynb | apache-2.0 | !apt-get install graphviz
!pip install graphviz
"""
Explanation: Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Tutorial: Causal Reasoning in Probability Trees
By the AGI Safety Analysis Team @ DeepMind.
Summary: This is the companion tutorial for the paper "Algorithms
for Causal Reasoning in Probability Trees" by Genewein et al. (2020).
Probability trees are one of the simplest models of causal
generative processes. They possess clean semantics and are strictly more general
than causal Bayesian networks, being able to e.g. represent causal relations
that causal Bayesian networks can’t. Even so, they have received little
attention from the AI and ML community.
For instance, we can describe a scenario where the weather is the cause and the
barometer reading the effect, or viceversa, depending on whether we're on Earth
or on some alien planet with different physics:
<img src="http://www.adaptiveagents.org/_media/wiki/probtrees.png" alt="Probability Trees" width="700"/>
Causal dependencies like these are naturally represented as a tree.
In this tutorial we present new algorithms for causal reasoning in discrete
probability trees that cover the entire causal hierarchy (association,
intervention, and counterfactuals), operating on arbitrary logical and causal
events.
Part I: Basics
Setup
First we install the graphviz package:
End of explanation
"""
#@title Imports
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
#@title Data structures
import graphviz
import copy
from random import random
class MinCut:
"""A representation of an event in a probability tree."""
def __init__(self, root, t=frozenset(), f=frozenset()):
self._root = root
self.t = t
self.f = f
def __str__(self):
true_elements = ', '.join([str(id) for id in sorted(self.t)])
false_elements = ', '.join([str(id) for id in sorted(self.f)])
return '{true: {' + true_elements + '}, false: {' + false_elements + '}}'
def __repr__(self):
return self.__str__()
# Proposition
def prop(root, statement):
cond_lst = Node._parse_statements(statement)
# Complain if more than one statement.
if len(cond_lst) != 1:
raise Exception('\'prop\' takes one and only one statement.')
# Remove list envelope.
cond = cond_lst[0]
# Recurse.
return MinCut._prop(root, root, cond)
def _prop(root, node, cond):
# Take var and val of condition.
condvar, condval = cond
# Search for variable.
for var, val in node.assign:
if condvar == var:
if condval == val:
return MinCut(root, frozenset([node.id]), frozenset())
else:
return MinCut(root, frozenset(), frozenset([node.id]))
# If we reach a leaf node and the variable isn't resolved,
# raise an exception.
if not node.children:
msg = 'Node ' + str(node.id) + ': ' \
+ 'min-cut for condition "' + condvar + ' = ' \
+ condval + '" is undefined.'
raise Exception(msg)
# Variable not found, recurse.
t_set = frozenset()
f_set = frozenset()
for child in node.children:
_, subnode = child
subcut = MinCut._prop(root, subnode, cond)
t_set = t_set.union(subcut.t)
f_set = f_set.union(subcut.f)
# Consolidate into node if children are either only true or false nodes.
cut = MinCut(root, t_set, f_set)
if not cut.f:
cut.t = frozenset([node.id])
elif not cut.t:
cut.f = frozenset([node.id])
return cut
# Negation
def neg(self):
return MinCut(self._root, t=self.f, f=self.t)
def __invert__(self):
return self.neg()
# Conjunction
def conj(root, cut1, cut2):
return MinCut._conj(root, root, cut1, cut2, False, False)
def _conj(root, node, cut1, cut2, end1=False, end2=False):
# Base case.
if (node.id in cut1.f) or (node.id in cut2.f):
return MinCut(root, frozenset(), frozenset([node.id]))
if node.id in cut1.t:
end1 = True
if node.id in cut2.t:
end2 = True
if end1 and end2:
return MinCut(root, frozenset([node.id]), frozenset())
# Recurse.
t_set = frozenset()
f_set = frozenset()
for _, subnode in node.children:
subcut = MinCut._conj(root, subnode, cut1, cut2, end1, end2)
t_set = t_set.union(subcut.t)
f_set = f_set.union(subcut.f)
# Consolidate into node if children are either only true or false nodes.
cut = MinCut(root, t_set, f_set)
if not cut.f:
cut.t = frozenset([node.id])
elif not cut.t:
cut.f = frozenset([node.id])
return cut
def __and__(self, operand):
return MinCut.conj(self._root, self, operand)
# Disjunction
def disj(root, cut1, cut2):
return MinCut.neg(MinCut.conj(root, MinCut.neg(cut1), MinCut.neg(cut2)))
def __or__(self, operand):
return MinCut.disj(self._root, self, operand)
# Causal dependence
def precedes(root, cut_c, cut_e):
return MinCut._precedes(root, root, cut_c, cut_e, False)
def _precedes(root, node, cut_c, cut_e, found_c):
# Base case.
if not found_c:
if (node.id in cut_e.t or node.id in cut_e.f or node.id in cut_c.f):
return MinCut(root, frozenset(), frozenset([node.id]))
if (node.id in cut_c.t):
found_c = True
if found_c:
if (node.id in cut_e.t):
return MinCut(root, frozenset([node.id]), frozenset())
if (node.id in cut_e.f):
return MinCut(root, frozenset(), frozenset([node.id]))
# Recursion.
t_set = frozenset()
f_set = frozenset()
for _, subnode in node.children:
subcut = MinCut._precedes(root, subnode, cut_c, cut_e, found_c)
t_set = t_set.union(subcut.t)
f_set = f_set.union(subcut.f)
# Consolidate into node if children are either only true or false nodes.
cut = MinCut(root, t_set, f_set)
if not cut.f:
cut.t = frozenset([node.id])
elif not cut.t:
cut.f = frozenset([node.id])
return cut
def __lt__(self, operand):
return MinCut.precedes(self._root, self, operand)
class Critical:
"""A representation of the critical set associated to an event."""
# Constructor
def __init__(self, s=frozenset()):
self.s = s
def __str__(self):
elements = ', '.join([str(id) for id in sorted(self.s)])
return '{' + elements + '}'
def __repr__(self):
return self.__str__()
def critical(root, cut):
_, crit = Critical._critical(root, cut)
return crit
def _critical(node, cut):
# Base case.
if node.id in cut.t:
return (False, Critical(frozenset()))
if node.id in cut.f:
return (True, Critical(frozenset()))
# Recurse.
s = frozenset()
for _, subnode in node.children:
incut, subcrit = Critical._critical(subnode, cut)
if incut:
s = s.union(frozenset([node.id]))
else:
s = s.union(subcrit.s)
return (False, Critical(s))
class Node:
"""A node in probability tree."""
# Constructor.
def __init__(self, uid, statements, children=None):
# Automatically assigned ID.
self.id = uid
# Assignments.
if isinstance(statements, str):
self.assign = Node._parse_statements(statements)
else:
self.assign = statements
# Children.
if children is None:
self.children = []
else:
self.children = children
# Parse statements.
def _parse_statements(statements):
statement_list = statements.split(',')
pair_list = [x.split('=') for x in statement_list]
assign = [(var.strip(), val.strip()) for var, val in pair_list]
return assign
# Sample.
def sample(self):
return self._sample(dict())
def _sample(self, smp):
# Add new assignments.
newsmp = {var: val for var, val in self.assign}
smp = dict(smp, **newsmp)
# Base case.
if not self.children:
return smp
# Recurse.
rnum = random()
for child in self.children:
subprob, subnode = child
rnum -= subprob
if rnum <= 0:
return subnode._sample(smp)
# Something went wrong: probabilities aren't normalized.
msg = 'Node ' + str(self.id) + ': ' \
+ 'probabilities of transitions do not add up to one.'
raise Exception(msg)
# Insert.
def insert(self, prob, node):
self.children.append((prob, node))
# Compute probability of cut.
def prob(self, cut):
return self._prob(cut, 1.0)
def _prob(self, cut, prob):
# Base case.
if self.id in cut.t:
return prob
if self.id in cut.f:
return 0.0
# Recurse.
probsum = 0.0
for child in self.children:
subprob, subnode = child
resprob = subnode._prob(cut, prob * subprob)
probsum += resprob
return probsum
# Return a dictionary with all the random variables and their values.
def rvs(self):
sts = dict()
return self._rvs(sts)
def _rvs(self, sts):
for var, val in self.assign:
if not (var in sts):
sts[var] = list()
if not (val in sts[var]):
sts[var].append(val)
for _, subnode in self.children:
sts = subnode._rvs(sts)
return sts
# Auxiliary function for computing the list of children.
def _normalize_children(children, probsum, logsum):
newchildren = None
if probsum > 0.0:
newchildren = [
(subprob / probsum, subnode) for _, subprob, subnode in children
]
else:
newchildren = [
(sublog / logsum, subnode) for sublog, _, subnode in children
]
return newchildren
# Conditioning
def see(self, cut):
root = copy.deepcopy(self)
root._see(cut, 1.0)
return root
def _see(self, cut, prob):
# Base case.
if self.id in cut.t:
return (1.0, prob)
if self.id in cut.f:
return (0.0, 0.0)
# Recurse.
newchildren = []
probsum = 0.0
logsum = 0.0
for subprob, subnode in self.children:
reslog, resprob = subnode._see(cut, prob * subprob)
newchildren.append((reslog, resprob, subnode))
logsum += reslog
probsum += resprob
# Normalize.
self.children = Node._normalize_children(newchildren, probsum, logsum)
return (1.0, probsum)
# Causal intervention
def do(self, cut):
root = copy.deepcopy(self)
root._do(cut)
return root
def _do(self, cut):
# Base case.
if self.id in cut.t:
return True
if self.id in cut.f:
return False
# Recurse.
newchildren = []
probsum = 0.0
logsum = 0.0
for subprob, subnode in self.children:
resdo = subnode._do(cut)
if resdo:
newchildren.append((1.0, subprob, subnode))
probsum += subprob
logsum += 1.0
else:
newchildren.append((0.0, 0.0, subnode))
# Normalize.
self.children = Node._normalize_children(newchildren, probsum, logsum)
return (1.0, probsum)
# Counterfactual/subjunctive conditional
def cf(self, root_prem, cut_subj):
root_prem_do = root_prem.do(cut_subj)
root_subj = copy.deepcopy(self)
root_subj._cf(root_prem_do, cut_subj)
return root_subj
def _cf(self, prem, cut):
# Base case.
if self.id in cut.t or self.id in cut.f:
return
# Recurse.
children = []
for child, child_prem in zip(self.children, prem.children):
(_, subnode) = child
(subprob, subnode_prem) = child_prem
subnode._cf(subnode_prem, cut)
children.append((subprob, subnode))
# Update children.
self.children = children
return
# Show probability tree.
def show(self, show_id=False, show_prob=False, cut=None, crit=None):
# Initialize Digraph.
graph_attr = {
'bgcolor': 'White',
'rankdir': 'LR',
'nodesep': '0.1',
'ranksep': '0.3',
'sep': '0'
}
node_attr = {
'style': 'rounded',
'shape': 'box',
'height': '0.1',
'width': '0.5',
'fontsize': '10',
'margin': '0.1, 0.02'
}
edge_attr = {'fontsize': '10'}
g = graphviz.Digraph(
'g',
format='svg',
graph_attr=graph_attr,
node_attr=node_attr,
edge_attr=edge_attr)
# Recursion.
return self._show(
g, 1.0, show_id=show_id, show_prob=show_prob, cut=cut, crit=crit)
def _show(self, g, prob, show_id=False, show_prob=False, cut=None, crit=None):
# Create label.
labels = [name + ' = ' + value for name, value in self.assign]
node_label = '\n'.join(labels)
if show_id:
node_label = str(self.id) + '\n' + node_label
if show_prob:
node_label = node_label + '\np = ' + '{0:.3g}'.format(prob)
# Decorate node.
attr = {'style': 'filled, rounded', 'fillcolor': 'WhiteSmoke'}
if not (cut is None):
if self.id in cut.t:
attr = {'style': 'filled, rounded', 'fillcolor': 'AquaMarine'}
elif self.id in cut.f:
attr = {'style': 'filled, rounded', 'fillcolor': 'LightCoral'}
if not (crit is None):
if self.id in crit.s:
attr = {'style': 'filled, rounded', 'fillcolor': 'Plum'}
g.node(str(self.id), label=node_label, **attr)
# Recurse.
for child in self.children:
subprob, subnode = child
subnode._show(
g,
prob * subprob,
show_id=show_id,
show_prob=show_prob,
cut=cut,
crit=crit)
g.edge(str(self.id), str(subnode.id), label='{0:.3g}'.format(subprob))
return g
def find(self, uid):
if self.id == uid:
return self
for child in self.children:
subprob, subnode = child
found_node = subnode.find(uid)
if found_node is not None:
return found_node
return None
class PTree:
"""A probability tree."""
def __init__(self):
"""Create a probability tree."""
self._root = None
self._count = 0
def root(self, statements, children=None):
"""Sets the root node.
Parameters
----------
statements : str
A string containing a comma-separated list of statements of
the form "var = val", such as "X=1, Y=0". These are the
values resolved by the root node.
children : list((float, Node)), (default: None)
A list of (probability, child node) pairs. These are the root
node's children and their transition probabilities.
Returns
-------
Node
the root node of the probability tree.
"""
self._count += 1
self._root = Node(self._count, statements, children)
return self._root
def child(self, prob, statements, children=None):
"""Create a child node and its transition probability.
Parameters
----------
prob : float
The probability of the transition
statements : str
A string containing a comma-separated list of statements of
the form "var = val", such as "X=1, Y=0". These are the
values resolved by the child node.
children : list((float, Node)), (default: None)
A list of (probability, child node) pairs to be set as the
children of the node.
Returns
-------
Node
the created node.
"""
self._count += 1
return (prob, Node(self._count, statements, children))
def get_root(self):
"""Return the root node.
Returns
-------
Node
the root node of the probability tree.
"""
return self._root
def show(self, show_id=False, show_prob=False, cut=None, crit=None):
"""Returns a graph of the probability tree.
Parameters
----------
show_id: Bool (default: False)
If true, display the unique id's.
show_prob : Bool (default: False)
If true, display the node probabilities.
cut : MinCut (default: None)
If a MinCut is given, then display it.
crit : Critical (default: None)
If a Critical set is given, then show it.
Returns
-------
Node
the created node.
"""
return self._root.show(
show_id=show_id, show_prob=show_prob, cut=cut, crit=crit)
def rvs(self):
"""Return a dictionary with all the random variables and their values.
Returns
-------
dict(str: list)
A dictionary with all the random variables pointing at lists
containing their possible values.
"""
return self._root.rvs()
def rv(self, var):
"""Return a probability distribution for a given random variable.
Parameters
----------
var: str
A string containing the name of the random variable.
Returns
-------
list((float, str))
A list with pairs (prob, val), where prob and val are the
probability
and the value of the random variable.
"""
return [(self.prob(self.prop(var + ' = ' + val)), val)
for val in self.rvs()[var]]
def expect(self, var):
"""Return the expected value of a random variable.
Parameters
----------
var: str
A string containing the name of the random variable.
Returns
-------
float
The expected value of the random variable.
"""
e = 0.0
for prob, val in self.rv(var):
e += prob * float(val)
return e
def find(self, uid):
"""Return a node with given unique identifier.
Parameters
----------
uid: int
Identifier of the node to be returned.
Returns
-------
Node or None
Returns the node if found, otherwise None.
"""
return self._root.find(uid)
def prop(self, statement):
"""Returns min-cut of a statement.
Parameters
----------
statement: str
A single statement of the form "var = val", such as "X = 1".
Returns
-------
MinCut
the min-cut of the event corresponding to the statement.
"""
return MinCut.prop(self._root, statement)
def critical(self, cut):
"""Returns critical set of a min-cut.
Parameters
----------
cut: MinCut
A min-cut.
Returns
-------
Critical
the critical set for the min-cut.
"""
return Critical.critical(self._root, cut)
def sample(self):
"""Sample a realization.
Returns
-------
dict((str:str))
A dictionary of bound random variables such as
{ 'X': '1', 'Y': '0' }.
"""
return self._root.sample()
def prob(self, cut):
"""Compute probability of a min-cut.
Parameters
----------
cut: MinCut
A min-cut for an event.
Returns
-------
float
The probability of the event of the min-cut.
"""
return self._root.prob(cut)
def see(self, cut):
"""Return a probability tree conditioned on a cut.
Parameters
----------
cut: MinCut
A min-cut for an event.
Returns
-------
PTree
A new probability tree.
"""
newptree = PTree()
newptree._root = self._root.see(cut)
return newptree
def do(self, cut):
"""Intervene on a cut.
Parameters
----------
cut: MinCut
A min-cut for an event.
Returns
-------
PTree
A new probability tree.
"""
newptree = PTree()
newptree._root = self._root.do(cut)
return newptree
def cf(self, tree_prem, cut_subj):
"""Return a subjunctive conditional tree.
Parameters
----------
tree_prem: PTree
A probability tree representing the premises for the subjunctive
evaluation. This probability tree must have been obtained through
operations on the base probability tree.
cut_subj: MinCut
A min-cut for an event. This min-cut is the subjunctive condition
of the counterfactual.
Returns
-------
PTree
A new probability tree.
"""
newptree = PTree()
newptree._root = self._root.cf(tree_prem._root, cut_subj)
return newptree
def fromFunc(func, root_statement=None):
"""Build a probability tree from a factory function.
Building probability trees can be difficult, especially when we have
to manually specify all of their nodes. To simplify this, `fromFunc` allows
building a probability tree using a factory function. A factory
function is a function that:
- receives a dictionary of bound random variables, such as
{ 'X': '1', 'Y': '0' }
- and returns either `None` if a leaf has been reached, or a list
of transitions and their statements, such as
[(0.3, 'Z = 0'), (0.2, 'Z = 1'), (0.5, 'Z = 2')].
Such a factory function contains all the necessary information for
building a probability tree.
The advantage of using a factory function is that we can exploit
symmetries (such as conditional independencies) to code a much
more compact description of the probability tree.
Parameters
----------
func: Function: dict((str: str)) -> list((float, str))
A probability tree factory function.
root_statement: str (default: None)
A string containing the statement (e.g. 'root = 0')
for the root node. If `None`, 'O = 1' is used.
Returns
-------
PTree
A new probability tree.
"""
if not root_statement:
root_statement = 'O = 1'
tree = PTree()
bvars = dict(Node._parse_statements(root_statement))
tree.root(root_statement, tree._fromFunc(func, bvars))
return tree
def _fromFunc(self, func, bvars):
"""Auxiliary method for PTree.fromFunc()."""
transition_list = func(bvars)
if not transition_list:
return None
children = []
for prob, statement in transition_list:
add_vars = dict(Node._parse_statements(statement))
new_bvars = {**bvars, **add_vars}
res = self._fromFunc(func, new_bvars)
children.append(self.child(prob, statement, res))
return children
"""
Explanation: Imports and data structures
We import Numpy and Pyplot, and then we define the basic data structures for
this tutorial.
End of explanation
"""
# Create a blank probability tree.
pt = PTree()
# Add a root node and the children.
pt.root(
'O = 1',
[pt.child(0.5, 'X = 1'),
pt.child(0.3, 'X = 2'),
pt.child(0.2, 'X = 3')])
# Display it.
display(pt.show())
"""
Explanation: 1. Probability trees
A probability tree is a representation of a random experiment or process.
Starting from the root node, the process iteratively takes random
transitions to child nodes, terminating at a leaf node. A path from
the root node to a node is a (partial) realization, and a path from the root
node to a leaf node is a total realization. Every node in the tree has one
or more statements associated with it. When a realization reaches a node,
the statements indicate which values are bound to random variables.
Let's create our first probability tree. It shows a random variable $X$ where:
- $X = 1$ with probability $0.5$;
- $X = 2$ with probability $0.3$;
- and $X = 3$ with probability $0.2$.
End of explanation
"""
rvs = pt.rvs()
print('Random variables:', rvs)
pdist = pt.rv('X')
print('P(X) =', pdist)
expect = pt.expect('X')
print('E(X) =', expect)
smp = pt.sample()
print('Sample =', smp)
"""
Explanation: We'll typically call the root node $O$, standing for "Omega" ($\Omega$),
which is a common name for the sample space in the literature.
Note: This way of explicitly nesting nodes is the standard method for creating probability trees. We will learn a more concise, programmatic method (that mimics a probabilistic programming language) later in the tutorial.
After creating a probability tree, we can ask it to return:
- the list of random variables and their values using the method rvs();
- the probability distribution for a given random variable using
rv(varname);
- the expected value of a numerical random variable with
expect(varname);
- and obtain a random sample from the tree with sample().
End of explanation
"""
# Create a blank probability tree.
pt = PTree()
# Add a root node and the children.
pt.root('O = 1', [
pt.child(0.3, 'X = 0', [
pt.child(0.2, 'Y = 0'),
pt.child(0.8, 'Y = 1'),
]),
pt.child(0.7, 'X = 1', [
pt.child(0.8, 'Y = 0'),
pt.child(0.2, 'Y = 1'),
]),
])
# Display it.
display(pt.show())
"""
Explanation: Causal dependencies
In a probability tree, a causal dependency $X \rightarrow Y$ is expressed
through a node $X$ having a descendant node $Y$. For instance, consider the next
probability tree:
End of explanation
"""
# Create a blank probability tree.
pt = PTree()
# Add a root node and the children.
pt.root('O = 1', [
pt.child(0.3 * 0.2, 'X = 0, Y = 0'),
pt.child(0.3 * 0.8, 'X = 0, Y = 1'),
pt.child(0.7 * 0.8, 'X = 1, Y = 0'),
pt.child(0.7 * 0.2, 'X = 1, Y = 1')
])
# Display it.
display(pt.show())
"""
Explanation: Here $Y$ is a descendant of $X$ and therefore $X \rightarrow Y$. This means that
we can affect the value of $Y$ by choosing $X$ but not vice versa. The exact
semantics of this requires interventions, which we'll review later. Notice
how the value of $X$ changes the distribution over $Y$:
- $P(Y=1|X=0) > P(Y=0|X=0)$;
- $P(Y=1|X=1) < P(Y=0|X=1)$.
If we want to express that neither $X \rightarrow Y$ nor $Y \rightarrow X$ are
the case, then we need to combine both random variables into the same nodes as
follows:
End of explanation
"""
med = PTree()
med.root('O = 1', [
med.child(0.4, 'D = 0', [
med.child(0.5, 'T = 0',
[med.child(0.2, 'R = 0'),
med.child(0.8, 'R = 1')]),
med.child(0.5, 'T = 1',
[med.child(0.8, 'R = 0'),
med.child(0.2, 'R = 1')])
]),
med.child(0.6, 'D = 1', [
med.child(0.5, 'T = 0',
[med.child(0.8, 'R = 0'),
med.child(0.2, 'R = 1')]),
med.child(0.5, 'T = 1',
[med.child(0.2, 'R = 0'),
med.child(0.8, 'R = 1')])
])
])
print('Random variables:', med.rvs())
display(med.show())
"""
Explanation: Another tree: drug testing
Let's build another example. Here we have a drug testing situation:
A patient has a probability of being ill ($D = 1$).
If the patient takes the drug ($T = 1$) when she is ill, she will likely
feel better ($R = 1$), otherwise she will likely feel worse ($R = 0$).
However, if she takes the drug when she is not ill, the situation is
inverted: the drug might make her feel worse ($R = 0$).
This tree can also be represented as the above causal Bayesian graph. This is
always the case when the causal ordering of the random variables is the same, no
matter which realization path is taken in the tree.
End of explanation
"""
# Create blank tree.
wb = PTree()
# Set the root node and its sub-nodes.
wb.root('O = 1', [
wb.child(0.5, 'A = 0', [
wb.child(0.5, 'W = 0',
[wb.child(0.75, 'B = 0'),
wb.child(0.25, 'B = 1')]),
wb.child(0.5, 'W = 1',
[wb.child(0.25, 'B = 0'),
wb.child(0.75, 'B = 1')])
]),
wb.child(0.5, 'A = 1', [
wb.child(0.5, 'B = 0',
[wb.child(0.75, 'W = 0'),
wb.child(0.25, 'W = 1')]),
wb.child(0.5, 'B = 1',
[wb.child(0.25, 'W = 0'),
wb.child(0.75, 'W = 1')])
])
])
# Display it.
display(wb.show())
"""
Explanation: A tree that cannot be represented as a Bayesian graph: Weather-Barometer Worlds
We can also build a tree where the different realization paths have different
causal dependencies. For instance, imagine we have two possible worlds: - Our
world ($A = 0$) where the weather ($W$) influences the barometer reading
($B$); - An alien world ($A = 1$) where the barometer influences the weather.
Such a situation with multiple causal dependencies cannot be captured in a
single graphical model:
However, we can represent it using a probability tree:
Exercise 1
Now it's your turn to create a probability tree. Create the "weather-barometer
worlds" probability tree and name it wb.
Solution
End of explanation
"""
pt = PTree()
pt.root('O = 1', [
pt.child(0.2, 'X = 0, Y = 0'),
pt.child(0.8, 'X = 1', [pt.child(0.3, 'Y = 1'),
pt.child(0.7, 'Y = 2')])
])
display(pt.show())
"""
Explanation: Remember:
A node can contain more than one statement.
The tree doesn't have to be balanced.
See the next example.
End of explanation
"""
display(med.show(show_prob=True, show_id=True))
"""
Explanation: Displaying additional information
We can display additional information about probability trees:
- Unique identifiers: Each node has an automatically assigned
unique identifier. Use show_id = True to display it.
- Probability: Each node has a probability of being realized.
Use show_prob = True to display this information.
End of explanation
"""
print(wb.rvs())
print(wb.rv('B'))
display(wb.show(show_id=True, show_prob=True))
"""
Explanation: Exercise 2
For the probability tree wb:
- list all the random variables;
- compute the probability distribution of the barometer ($B$);
- display the probability tree with the unique ids and probabilities
of every node.
Solution
End of explanation
"""
# Build a cut for the proposition 'R = 1'.
cut = med.prop('R=1')
# The result is of type MinCut:
print('Type of a cut:', type(cut))
# Print the min-cut. Note that the elements in the
# true and false sets refer to the ids of the prob tree.
print('Min-cut for "R = 1":', cut)
# Render the probability tree with a cut.
display(med.show(cut=cut, show_id=True))
"""
Explanation: 2. Propositions and min-cuts
We've seen that a probability tree is a simple way of representing all the
possible realizations and their causal dependencies. We now investigate the
possible events in a probability tree.
An event is a collection of full realizations. We can describe events
using propositions about random variables (e.g. $W = 0$, $B = 1$) and the
logical connectives of negation, conjunction (AND), and disjunction (OR). The
connectives allow us to state composite events, such as $\neg(W = 1 \wedge B =
0)$. For instance, the event $B = 0$ is the set of all realizations, i.e. paths
from the root to a leaf, that pass through a node with the statement $B=0$.
We can represent events using cuts, and in particular, min-cuts. A
min-cut is a minimal representation of an event in terms of the nodes of a
probability tree. The min-cut of an event collects the smallest number of nodes
in the probability tree that resolves whether an event has occurred or not. In
other words, if a realization hits a node in the min-cut, then we know for sure
whether the event has occurred or not. (In measure theory, a similar notion to
min-cut would be the algebra that renders the event measurable.)
Our implementation of min-cuts furthermore distinguishes between the nodes that
render the event true from the nodes that render the event false.
Let's start by constructing a min-cut for a setting of a random variable in our
drug testing example. Verify that the min-cut is correct for the setting of the
random variable.
End of explanation
"""
# Build a cut for the proposition 'T = 0'.
cut = med.prop('T=0')
# Print the min-cut. Note that the elements in the
# true and false sets refer to the ids of the prob tree.
print('Min-cut for "T = 0":', cut)
# Render the probability tree with a cut.
display(med.show(cut=cut, show_id=True))
"""
Explanation: Let's do a min-cut for not taking the treatment ($T = 0$)
End of explanation
"""
cut = ~med.prop('T = 0')
print('Min-cut for "T = 0":', med.prop('T = 0'))
print('Min-cut for "not T = 0":', ~med.prop('T = 0'))
display(med.show(cut=cut, show_id=True))
"""
Explanation: We can build negative events too using the ~ unary operator. As an example,
let's negate the previous event. Compare the two cuts. Notice that a negation
simply inverts the nodes that are true and false.
End of explanation
"""
# Recovery
cut1 = med.prop('R=1')
print('Cut for "R = 1":')
display(med.show(cut=cut1))
# Taking the treatment
cut2 = med.prop('T=1')
print('Cut for "T=1":')
display(med.show(cut=cut2))
# Conjunction: taking the treatment and recovery
cut_and = cut1 & cut2
print('Cut for "T=1 and R=1":')
display(med.show(cut=cut_and))
# Disjunction: taking the treatment or recovery
cut_or = cut1 | cut2
print('Cut for "T=1 or R=1":')
display(med.show(cut=cut_or))
"""
Explanation: Now let's build more complex events using conjunctions (&) and disjunctions
(|). Make sure these min-cuts make sense to you. Notice that the conjunction
of two events picks out the earliest occurrence of false nodes and the last
occurrence of true nodes, whereas the disjunction does the opposite.
End of explanation
"""
# Disease and recovery min-cuts.
cut1 = med.prop('D=1') < med.prop('R=1')
cut2 = med.prop('R=1') < med.prop('D=1')
# Display.
print('Cut for D=1 < R=1:')
display(med.show(cut=cut1))
print('Cut for R=1 < D=1:')
display(med.show(cut=cut2))
"""
Explanation: The precedence relation
In addition to the Boolean operators, we can also use a causal connective that
cannot be stated in logical terms: the precedence relation $\prec$. This
relation allows building min-cuts for events where one event $A$ precedes
another event $B$, written $A \prec B$, and thus requires the additional
information provided by the probability tree's structure.
Let's try one example. We want to build the min-cut where having the disease
($D=1$) precedes feeling better ($R=1$), and vice-versa.
End of explanation
"""
pt = PTree()
pt.root('O = 1', [
pt.child(0.1, 'X = 0, Y = 0'),
pt.child(0.2, 'X = 1, Y = 1'),
pt.child(0.7, 'Y = 2')
])
display(pt.show())
"""
Explanation: Requirement: random variables must be measurable
If we try to build a min-cut using a variable that is not measurable, then an
exception is raised. For instance, the random variable $X$ below is not
measurable within the probability tree, because the realization starting at the
root and reaching the leaf $Y = 2$ never sets the value for $X$.
Attempting to build a min-cut for an event involving $X$ will throw an error.
End of explanation
"""
# First we add all the nodes.
pt = PTree()
pt.root('O = 1',
[pt.child(1, 'X = 1'),
pt.child(0, 'X = 2'),
pt.child(0, 'X = 3')])
# Show the cut for 'X = 1'
cut = pt.prop('X = 1')
print('While the root node "O=1" does resolve the event "X=1"\n' +
'probabilistically, it does not resolve the event logically.')
display(pt.show(cut=cut))
"""
Explanation: Special case: probabilistic truth versus logical truth
Let's have a look at one special case. Our definitions make a distinction
between logical and probabilistic truth. This is best seen in the
example below.
In this example, we have a probability tree with three outcomes: $X = 1, 2$, and
$3$. - $X = 1$ occurs with probability one.
- Hence, probabilistically, the event $X=1$ is resolved at the level of the
root node.
- However, it isn't resolved at the logical level, since $X = 2$ or $X = 3$
can happen logically, although with probability zero.
Distinguishing between logical truth and probabilistic truth is important for
stating counterfactuals. This will become clearer later.
End of explanation
"""
# Exercise.
# A = 1.
cut = wb.prop('A=1')
print('Cut for "A=1":')
display(wb.show(cut=cut))
# W = 1.
cut = wb.prop('W=1')
print('Cut for "W=1":')
display(wb.show(cut=cut))
# B = 0 and W = 1.
cut = wb.prop('B=0') & wb.prop('W=1')
print('Cut for "B=0 and W=1":')
display(wb.show(cut=cut))
# not( not(B = 0) or not(W = 1) ).
cut = ~(~wb.prop('B=0') | ~wb.prop('W=1'))
print('Cut for "not( not(B=0) or not(W=1) )":')
display(wb.show(cut=cut))
"""
Explanation: Exercise 3
For the wb probability tree, build the min-cuts for the following events:
- the world is alien ($A = 1$);
- the weather is sunny ($W = 1$);
- the barometer goes down and the weather is sunny ($B = 0 \wedge W = 1$);
- the negation of "barometer does not go down or weather is not sunny",
$\neg(\neg(B = 0) \vee \neg(W = 1))$.
Display every min-cut. In particular, compare the last two. What do you observe?
Solution
End of explanation
"""
# Build the min-cut.
cut = (wb.prop('W=0') < wb.prop('B=0')) \
| (wb.prop('W=0') < wb.prop('B=1')) \
| (wb.prop('W=1') < wb.prop('B=0')) \
| (wb.prop('W=1') < wb.prop('B=1'))
# Display.
display(wb.show(cut=cut))
"""
Explanation: Exercise 4
For the wb probability tree, determine the min-cut for whenever the weather
($W$) affects the value of the barometer ($B$). This min-cut should coincide
with the min-cut for the event ($A=0$).
Hint: enumerate all the 4 cases (values for $W$ and $B$) and combine them using
disjunctions.
Solution
End of explanation
"""
# First we add all the nodes.
pt = PTree()
pt.root('O = 1',
[pt.child(1, 'X = 1'),
pt.child(0, 'X = 2'),
pt.child(0, 'X = 3')])
# Get the critical set for a min-cut.
cut = pt.prop('X = 1')
crit = pt.critical(cut)
# Show the critical set.
print('Min-cut for "X=1":', cut)
print('Critical set for "X=1":', crit)
display(pt.show(show_id=True, cut=cut, crit=crit))
"""
Explanation: 3. Critical sets
Min-cuts correspond to the smallest set of nodes where it becomes clear whether
an event has occurred or not. Every min-cut has an associated critical set:
the set of nodes that determines whether an event won't occur. Given an
event, the associated critical set is defined as the set of parents of the
event's false set in the min-cut.
Together, a critical set and a min-cut form the set of mechanisms that
determine the occurrence of the event.
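As a sketch of this definition, the critical set can be computed on the same kind of toy tree used above (plain Python tuples and dicts, not the PTree API): a node is critical when the event is still undecided at the node itself, but at least one of its children resolves the event as false.

```python
def realizations(node, bound=None):
    """Yield the bound variables of every root-to-leaf path below `node`."""
    stmt, children = node
    bound = {**(bound or {}), **stmt}
    if not children:
        yield bound
    else:
        for _, child in children:
            yield from realizations(child, bound)

def resolved(node, event, bound):
    """True/False if every realization below `node` agrees on the event,
    None if the event is still undecided at this node."""
    truths = {event(b) for b in realizations(node, bound)}
    return truths.pop() if len(truths) == 1 else None

def critical_set(node, event, bound=None):
    """Parents of the min-cut's false set: nodes with at least one child
    at which the event becomes definitely false."""
    bound = bound or {}
    stmt, children = node
    merged = {**bound, **stmt}
    crit = []
    if any(resolved(child, event, merged) is False for _, child in children):
        crit.append(stmt)
    for _, child in children:
        if resolved(child, event, merged) is None:  # undecided: look deeper
            crit += critical_set(child, event, merged)
    return crit

tree = ({'O': 1}, [
    (0.2, ({'X': 0, 'Y': 0}, [])),
    (0.8, ({'X': 1}, [(0.3, ({'Y': 1}, [])),
                      (0.7, ({'Y': 2}, []))])),
])

crit = critical_set(tree, lambda b: b.get('Y') == 1)
print(crit)  # the root and the X=1 node: parents of the false set
```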
Let's have a look at a simple example. Here, the critical set is the singleton
containing the root node. Critical sets are computed using the function
PTree.critical(cut), where cut is an event's min-cut. We can display the
critical set by providing the optional argument crit to the PTree.show()
function.
End of explanation
"""
pt = PTree()
pt.root('O = 1', [
pt.child(0.2, 'X = 0, Y = 0'),
pt.child(0.8, 'X = 1', [pt.child(0.3, 'Y = 1'),
pt.child(0.7, 'Y = 0')])
])
# Original tree.
print('Original tree:')
display(pt.show(show_id=True))
# 'X=1'
cut = pt.prop('X=1')
crit = pt.critical(cut)
print('Min-cut and critical set for "X=1":')
display(pt.show(show_id=True, cut=cut, crit=crit))
# 'Y=1'
cut = pt.prop('Y=1')
crit = pt.critical(cut)
print('Min-cut and critical set for "Y=1":')
display(pt.show(show_id=True, cut=cut, crit=crit))
# 'Y=0'
cut = pt.prop('Y=0')
crit = pt.critical(cut)
print('Min-cut and critical set for "Y=0":')
display(pt.show(show_id=True, cut=cut, crit=crit))
"""
Explanation: Let's work out another example. Consider the following probability tree.
Try to predict the min-cut and the critical set of the events $X=1$, $Y=1$, and
$Y=0$.
End of explanation
"""
# Exercise.
# A = 1.
cut = wb.prop('A=1')
crit = wb.critical(cut)
print('Mechanism for "A=1":')
display(wb.show(cut=cut, crit=crit))
# B = 0.
cut = wb.prop('B=0')
crit = wb.critical(cut)
print('Mechanisms for "B=0":')
display(wb.show(cut=cut, crit=crit))
# W = 1.
cut = wb.prop('W=1')
crit = wb.critical(cut)
print('Mechanisms for "W=1":')
display(wb.show(cut=cut, crit=crit))
# B = 0 and W = 1.
cut = wb.prop('B=0') & wb.prop('W=1')
crit = wb.critical(cut)
print('Mechanisms for "B=0 and W=1":')
display(wb.show(cut=cut, crit=crit))
"""
Explanation: Exercise 5
For the wb tree, compute and display the mechanisms (i.e. the min-cut and the
critical set) for the following events:
- the world is alien ($A = 1$);
- the barometer goes down ($B = 0$);
- the weather is sunny ($W = 1$);
- the barometer goes down and weather is sunny ($B = 0 \wedge W = 1$).
Solution
End of explanation
"""
# Min-cuts for some events
cut1 = med.prop('R=1')
cut2 = med.prop('D=1')
cut1_neg = ~cut1
cut_and = cut2 & cut1_neg
cut_or = cut2 | cut1_neg
cut_prec = cut2 < cut1
print('P(R=1) =', med.prob(cut1))
print('P(R=0) =', med.prob(cut1_neg))
print('P(D=1) =', med.prob(cut2))
print('P(D=1 and R=0) =', med.prob(cut_and))
print('P(D=1 or R=0) =', med.prob(cut_or))
print('P(D=1 precedes R=1) =', med.prob(cut_prec))
display(med.show(show_prob=True))
"""
Explanation: We'll return later to critical sets, as they are important for determining the
operations of conditioning and intervening on probability trees.
4. Evaluating probabilities
We can also evaluate probabilities of events. For instance, you may ask:
"$P(R=1)$: What is the probability of recovery?"
"$P(R=0)$: What is the probability of not recovering?"
"$P(D=1)$: What is the probability of having the disease?"
"$P(D=1 \wedge R=0)$: What is the probability of taking the drug and not
recovering?"
"$P(D=1 \vee R=0)$: What is the probability of taking the drug or not
recovering?"
"$P(D=1 \prec R=1)$: What is the probability of taking the drug preceding
the recovery?"
To do so, we use the min-cut of the event.
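Operationally, the probability of an event is the total probability mass of the realizations that satisfy it. Here is a minimal, self-contained sketch on a toy tree (plain Python tuples and dicts, not the PTree API):

```python
def paths(node, prob=1.0, bound=None):
    """Yield (probability, bound variables) for each total realization."""
    stmt, children = node
    bound = {**(bound or {}), **stmt}
    if not children:
        yield prob, bound
    else:
        for p, child in children:
            yield from paths(child, prob * p, bound)

def prob_of(tree, event):
    """P(event): sum the probabilities of the realizations satisfying it."""
    return sum(p for p, b in paths(tree) if event(b))

tree = ({'O': 1}, [
    (0.2, ({'X': 0, 'Y': 0}, [])),
    (0.8, ({'X': 1}, [(0.3, ({'Y': 1}, [])),
                      (0.7, ({'Y': 2}, []))])),
])

print(prob_of(tree, lambda b: b['Y'] == 1))                 # 0.8 * 0.3
print(prob_of(tree, lambda b: b['X'] == 1 or b['Y'] == 0))  # all realizations
```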
Let's have a look at some of them. Compare to the graph of the probability tree.
End of explanation
"""
# Exercise.
# A = 0 and B = 0
cut = wb.prop('A=0') & wb.prop('B=0')
print('P(A=0 and B=0) =', wb.prob(cut))
display(wb.show(cut=cut))
# not(B = 0 or W = 1)
cut = ~(wb.prop('B=0') | wb.prop('W=1'))
print('P(not(B=0 or W=1)) =', wb.prob(cut))
display(wb.show(cut=cut))
"""
Explanation: Exercise 6
For the wb tree, evaluate the probability of the following events:
- the world is ours ($A = 0$) and the barometer goes down ($B = 0$);
- it is not the case that the barometer goes down or the weather
is sunny ($\neg(B = 0 \vee W = 1)$).
Print the probabilities and display the probability trees.
Solution
End of explanation
"""
# Now we condition.
cut = med.prop('R=1')
med_see = med.see(cut)
# Critical set.
crit = med.critical(cut)
# Compare probabilities of events.
print('Before conditioning: P(R=1) =', med.prob(cut))
print('After conditioning: P(R=1 | R=1) =', med_see.prob(cut))
# Display both trees for comparison.
print('\nOriginal tree:')
display(med.show(show_prob=True))
print('Tree after conditioning on "R=1":')
display(med_see.show(cut=cut, crit=crit, show_prob=True))
"""
Explanation: 5. Conditioning
We have learned how to represent events using min-cuts. Now we can use min-cuts
to condition probability trees on events. Conditioning allows asking
questions after making observations, such as:
"$P(R=1|T=1)$: What is the probability of recovery given that a patient has
taken the treatment?"
"$P(D=1|R=1)$: What is the probability of having had the disease given that
a patient has recovered/felt better?"
How to compute conditions
Conditioning takes a probability tree and produces a new probability tree with
modified transition probabilities. These are obtained by removing all the total
realizations that are incompatible with the condition, and then
renormalizing, as illustrated below.
<img src="http://www.adaptiveagents.org/_media/wiki/see.png" alt="Seeing" width="700"/>
In the example, we compute the result of seeing $Y= 1$.
Conditioning on an event proceeds in two steps:
- first, we remove the probability mass of the realizations
passing through the false set of the event’s min-cut
(highlighted in dark, bottom row);
- then we renormalize the probabilities.
We can do this recursively by aggregating the original probabilities
of the true set. The top row shows the result of conditioning a
probability tree on the event $Y= 1$, which also highlights the modified
transition probabilities in red. The bottom row shows the same
operation in a probability mass diagram, which is a representation of a
probability tree that emphasizes the probabilities.
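The two steps above can be sketched in a few lines of plain Python on a toy tree (not the PTree API — the representation below is illustrative only): drop the incompatible realizations, then renormalize by the mass of the survivors.

```python
def paths(node, prob=1.0, bound=None):
    """Yield (probability, bound variables) for each total realization."""
    stmt, children = node
    bound = {**(bound or {}), **stmt}
    if not children:
        yield prob, bound
    else:
        for p, child in children:
            yield from paths(child, prob * p, bound)

def see(tree, event):
    """Condition: drop realizations incompatible with the event, then
    renormalize by the total mass of the survivors (which is P(event))."""
    kept = [(p, b) for p, b in paths(tree) if event(b)]
    z = sum(p for p, _ in kept)
    return [(p / z, b) for p, b in kept]

tree = ({'O': 1}, [
    (0.2, ({'X': 0, 'Y': 0}, [])),
    (0.8, ({'X': 1}, [(0.3, ({'Y': 1}, [])),
                      (0.7, ({'Y': 2}, []))])),
])

posterior = see(tree, lambda b: b['Y'] != 2)        # condition on "Y != 2"
p_x1 = sum(p for p, b in posterior if b['X'] == 1)
print(p_x1)  # 0.24 / (0.2 + 0.24) ≈ 0.545
```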
Let's have a look at the drug testing example. We will condition on $R=1$.
Observe how the probabilities change.
End of explanation
"""
# Min-cuts.
cut_r = med.prop('R=1')
cut_tr = med.prop('T=1') & med.prop('R=1')
cut_disease = med.prop('D=1')
# Critical set.
crit = med.critical(cut_tr)
# Condition.
med_see_r = med.see(cut_r)
med_see_tr = med.see(cut_tr)
# Now we evaluate the posterior probability of having a disease.
print('P(D = 1) =', med.prob(cut_disease))
print('P(D = 1 | R = 1) =', med_see_r.prob(cut_disease))
print('P(D = 1 | T = 1, R = 1) =', med_see_tr.prob(cut_disease))
# Display prob tree.
print('\nProbability tree after conditioning on "T=1 and R=1":')
display(med_see_tr.show(cut=cut_tr, show_id=True, crit=crit))
"""
Explanation: We can condition on composite events too and evaluate the probability of events.
Assume you observe that the drug was taken and that a recovery occurred. Then, it
is very likely that the patient had the disease.
End of explanation
"""
# Create a simple tree.
pt = PTree()
pt.root('O = 1', [
    pt.child(0.4, 'X = 0, Y = 0'),
pt.child(0.6, 'X = 1', [pt.child(0.3, 'Y = 0'),
pt.child(0.7, 'Y = 0')]),
])
# Show tree.
print('Original tree:')
display(pt.show())
# Condition on Y = 0.
cut = pt.prop('Y=0')
pt_see_sure = pt.see(cut)
print('Conditioning on "Y = 0":')
display(pt_see_sure.show(cut=cut))
# Condition on not Y = 0.
neg_cut = ~cut
pt_see_impossible = pt.see(neg_cut)
print('Conditioning on "not Y = 0":')
display(pt_see_impossible.show(cut=neg_cut))
"""
Explanation: Special case: conditioning on trivial events
Let's have a look at a special case: conditioning on trivial events, namely
the sure event and the impossible event.
Observe that conditioning on trivial events does not change the probability
tree.
End of explanation
"""
# Create a simple tree.
pt = PTree()
pt.root(
'O = 1',
[pt.child(1.0, 'X = 1'),
pt.child(0.0, 'X = 2'),
pt.child(0.0, 'X = 3')])
# Let's pick the negative event for our minimal prob tree.
cut = ~pt.prop('X = 1')
display(pt.show(cut=cut))
pt_see = pt.see(cut)
display(pt_see.show(cut=cut))
"""
Explanation: Special case: conditioning on an event with probability zero
Let's return to our simple example with three outcomes. Assume we're conditioning
on an event with probability zero, which can happen logically but not
probabilistically. Using the measure-theoretic definition of conditional
probabilities, we are required to pick a so-called version of the
conditional distribution. There are infinitely many choices.
Here, we have settled on the following. If we condition on an event with
probability zero, then we assign uniform probability over all the possible
transitions. This is just one arbitrary way of solving this problem.
See the example below.
End of explanation
"""
# Exercise
# No condition.
print('P(W) =', wb.rv('W'))
print('P(B) =', wb.rv('B'))
# Condition on "A = 1"
cut = wb.prop('A=1')
print('P(W | A=1) =', wb.see(cut).rv('W'))
print('P(B | A=1) =', wb.see(cut).rv('B'))
# Condition on "W = 1"
cut = wb.prop('W=1')
print('P(W | W=1) =', wb.see(cut).rv('W'))
print('P(B | W=1) =', wb.see(cut).rv('B'))
"""
Explanation: Exercise 7
For the wb tree, print the probability distribution of
- the weather $W$
- and the barometer $B$.
Do this for the following probability trees:
- the original tree
- the probability tree conditioned on it being an alien world ($A = 1$)
- the probability tree conditioned on the weather being sunny ($W = 1$).
What do you observe? Does observing (conditioning) give you any additional
information? If no, why? If yes, why is that?
Solution
End of explanation
"""
pt = PTree()
pt.root('O = 1', [
pt.child(0.2, 'X = 0, Y = 0'),
pt.child(0.8, 'X = 1', [pt.child(0.3, 'Y = 1'),
pt.child(0.7, 'Y = 0')])
])
print('Original:')
display(pt.show(show_prob=True))
# 'Y=1'
cut = pt.prop('Y = 1')
crit = pt.critical(cut)
pt_see = pt.see(cut)
pt_do = pt.do(cut)
print('Condition on "Y=1":')
display(pt_see.show(cut=cut, crit=crit))
print('Intervention on "Y<-1":')
display(pt_do.show(cut=cut, crit=crit))
# 'Y=0'
cut = pt.prop('Y = 0')
crit = pt.critical(cut)
pt_see = pt.see(cut)
pt_do = pt.do(cut)
print('Condition on "Y = 0":')
display(pt_see.show(cut=cut, crit=crit))
print('Intervention on "Y <- 0":')
display(pt_do.show(cut=cut, crit=crit))
"""
Explanation: 6. Interventions
Interventions are at the heart of causal reasoning.
We have seen how to filter probability trees using observational data through
the use of conditioning. Now we investigate how a probability tree transforms
when it is intervened. An intervention is a change to the random process
itself to make something happen, as opposed to a filtration. We can ask
questions like:
"$P(R=1|T \leftarrow 1)$: What is the probability of recovery given that I
take the drug?"
"$P(D=1|T \leftarrow 1 \wedge R=1)$: What is the probability of having the
disease given that I take the drug and that I observe a recovery?"
Here, the notation $T \leftarrow 1$ is a shorthand for the more common notation
$\mathrm{do}(T = 1)$.
How to compute interventions
Interventions differ from conditioning in the following:
- they change the transition probabilities minimally,
so as to make a desired event happen;
- they do not filter the total realizations of the probability tree;
- they are easier to execute than conditions, because they only
change the transition probabilities that leave the critical set,
and they do not require the backward induction of probabilities.
See the illustration below.
<img src="http://www.adaptiveagents.org/_media/wiki/do.png" alt="Doing" width="700"/>
Example intervention on $Y \leftarrow 1$. An intervention proceeds in two steps:
- first, it selects the partial realizations starting in a critical node
and ending in a leaf that traverse the false set of the event’s min-cut;
- then it removes their probability mass, renormalizing the probabilities
from the transitions leaving the critical set.
The top row shows the result of intervening on $Y \leftarrow 1$ in a
probability tree. The bottom row shows the same procedure on
the corresponding probability mass diagram.
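The contrast with conditioning can be sketched numerically. In the toy tree below (plain Python tuples and dicts, not the PTree API), conditioning on $Y = 1$ shifts the belief about the upstream variable $X$, while intervening on $Y \leftarrow 1$ only rescales the transitions leaving the critical nodes and leaves $P(X)$ at its prior:

```python
def paths(node, prob=1.0, bound=None):
    """Yield (probability, bound variables) for each total realization."""
    stmt, children = node
    bound = {**(bound or {}), **stmt}
    if not children:
        yield prob, bound
    else:
        for p, child in children:
            yield from paths(child, prob * p, bound)

def intervene(node, var, val):
    """At every node whose children all set `var`, keep only the children
    consistent with var = val and renormalize locally (the transitions
    leaving the critical node); everything upstream is untouched."""
    stmt, children = node
    if children and all(var in child[0] for _, child in children):
        kept = [(p, child) for p, child in children if child[0][var] == val]
        z = sum(p for p, _ in kept)
        children = [(p / z, child) for p, child in kept]
    return (stmt, [(p, intervene(child, var, val)) for p, child in children])

tree = ({'O': 1}, [
    (0.4, ({'X': 0}, [(0.9, ({'Y': 1}, [])), (0.1, ({'Y': 0}, []))])),
    (0.6, ({'X': 1}, [(0.2, ({'Y': 1}, [])), (0.8, ({'Y': 0}, []))])),
])

# Conditioning on Y = 1 filters realizations and renormalizes globally,
# which shifts belief about the upstream variable X.
kept = [(p, b) for p, b in paths(tree) if b['Y'] == 1]
z = sum(p for p, _ in kept)
p_x0_see = sum(p / z for p, b in kept if b['X'] == 0)

# Intervening on Y <- 1 rescales only the transitions leaving the critical
# nodes, so the distribution of X stays at its prior.
p_x0_do = sum(p for p, b in paths(intervene(tree, 'Y', 1)) if b['X'] == 0)

print(p_x0_see)  # ≈ 0.75
print(p_x0_do)   # ≈ 0.4
```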
Let's start with a simple comparison to illustrate the difference.
End of explanation
"""
# Min-Cuts.
cut_dis = med.prop('D = 1')
cut_arg = med.prop('R = 1')
cut_do = med.prop('T = 1')
# Critical set.
crit_do = med.critical(cut_do)
# Perform intervention.
med_do = med.do(cut_do)
# Display original tree.
print('Original tree:')
print('P(D = 1) =', med.prob(cut_dis))
print('P(T = 1) =', med.prob(cut_do))
print('P(R = 1) =', med.prob(cut_arg))
display(med.show(cut=cut_do, show_prob=True, crit=crit_do))
# Display tree after intervention.
print('Tree after intervening on "T <- 1":')
print('P(D = 1 | T <- 1) =', med_do.prob(cut_dis))
print('P(T = 1 | T <- 1) =', med_do.prob(cut_do))
print('P(R = 1 | T <- 1) =', med_do.prob(cut_arg))
display(med_do.show(cut=cut_do, show_prob=True, crit=crit_do))
"""
Explanation: Notice that the mechanisms for $Y=0$ and $Y=1$ are different. In general, a
single random variable can have multiple mechanisms for setting its
individual values.
Let's return to our drug testing example. We investigate the effect of taking
the treatment, that is, by intervening on $T \leftarrow 1$. How do the
probabilities of:
- having the disease ($D = 1$);
- taking the treatment ($T = 1$);
- and recovering ($R = 1$)
change after taking the treatment ($T \leftarrow 1$)?
End of explanation
"""
# Create a simple tree.
pt = PTree()
pt.root(
'O = 1',
[pt.child(1.0, 'X = 1'),
pt.child(0.0, 'X = 2'),
pt.child(0.0, 'X = 3')])
# Let's pick the negative event for our minimal prob tree.
cut = ~pt.prop('X = 1')
crit = pt.critical(cut)
# Intervene.
pt_do = pt.do(cut)
# Show results.
print('Before the intervention:')
display(pt.show(cut=cut, crit=crit))
print('After the intervention on "not X <- 1":')
display(pt_do.show(cut=cut, crit=crit))
"""
Explanation: In other words, for the example above, taking the treatment increases the
chances of recovery. This is due to the base rates (i.e. the probability of
having a disease). The base rates are not affected by the decision of taking the
treatment.
Special case: intervening on an event with probability zero
Assume we're intervening on an event with probability zero. Recall that this
is possible logically, but not probabilistically. How do we set the
transition probabilities leaving the critical set? Here again we settle on
assigning uniform probabilities over all the transitions affected by the
intervention.
See the example below.
End of explanation
"""
# Exercise
# No intervention.
print('P(W) =', wb.rv('W'))
print('P(B) =', wb.rv('B'))
# Intervention on "A <- 1"
cut = wb.prop('A=1')
print('P(W|A <- 1) =', wb.do(cut).rv('W'))
print('P(B|A <- 1) =', wb.do(cut).rv('B'))
# Condition on "W <- 1"
cut = wb.prop('W=1')
print('P(W|W <- 1) =', wb.do(cut).rv('W'))
print('P(B|W <- 1) =', wb.do(cut).rv('B'))
"""
Explanation: Exercise 8
For the wb tree, print the probability distribution of
- the weather $W$
- and the barometer $B$.
Do this for the following probability trees:
- the original tree
- the probability tree resulting from enforcing it to being
an alien world ($A \leftarrow 1$)
- the probability tree resulting from setting the weather to
being sunny ($W \leftarrow 1$).
What do you observe? Compare these results with your previous exercise, where
you conditioned on the same events. Why are the probabilities different when you
condition and when you intervene? How is this related to the different causal
dependencies in both worlds?
Solution
End of explanation
"""
# Exercise
cutw = wb.prop('W=1')
cutb = wb.prop('B=1')
cuttheta = wb.prop('A=0')
# Question 1
print('P(A = 0 | W = 1 and B = 1) =', wb.see(cutw).see(cutb).prob(cuttheta))
# Question 2
print('P(A = 0 | W = 1 then B <- 1) =', wb.see(cutw).do(cutb).prob(cuttheta))
# Question 3
print('P(A = 0 | B <- 1 then W = 1) =', wb.do(cutb).see(cutw).prob(cuttheta))
display(wb.show())
"""
Explanation: Exercise 9
Next, evaluate the following probabilities:
What is the probability of being in our world ($A=0$), given that you
observe a sunny weather ($W=1$) and the barometer going up ($B=1$)?
What is the probability of being in our world ($A=0$), given that you first
observe a sunny weather ($W=1$) and then you force the barometer to go
up ($B\leftarrow 1$)?
What is the probability of being in our world ($A=0$), given that you first
force the barometer to go up ($B\leftarrow 1$) and then observe a sunny
weather ($W=1$)?
Answer the following questions:
Does conditioning give different results from intervening? If so, why?
When you mix conditions and interventions, does the order matter? If so,
why?
Solution
End of explanation
"""
pt = PTree()
pt.root('O = 1', [
pt.child(0.25, 'X = 0', [
pt.child(0.25, 'Y = 0',
[pt.child(0.1, 'Z = 0'),
pt.child(0.9, 'Z = 1')]),
pt.child(0.75, 'Y = 1',
[pt.child(0.2, 'Z = 0'),
pt.child(0.8, 'Z = 1')]),
]),
pt.child(0.75, 'X = 1',
[pt.child(0.75, 'Y = 0, Z = 0'),
pt.child(0.25, 'Y = 1, Z = 0')])
])
print('Original:')
display(pt.show())
# Condition on 'Y=0', do 'Y=1'
cut_see = pt.prop('Y=0')
cut_do = pt.prop('Y=1')
# Critical set.
crit = pt.critical(cut_do)
# Evaluate conditional, intervention, and counterfactual.
pt_see = pt.see(cut_see)
pt_do = pt.do(cut_do)
pt_cf = pt.cf(pt_see, cut_do)
# Display results.
print('Condition on "Y = 0":')
display(pt_see.show(cut=cut_see, crit=crit))
print('Intervention on "Y <- 1":')
display(pt_do.show(cut=cut_do, crit=crit))
print('Counterfactual with premise "Y = 0" and subjunctive "Y = 1":')
display(pt_cf.show(cut=cut_do, crit=crit))
"""
Explanation: 7. Counterfactuals
Finally, we have counterfactuals. Counterfactuals are questions about how the
experiment could have gone if something about it were different. For instance:
"What is the probability of having the disease had I not recovered,
given that I have recovered?"
"Given that I have taken the treatment and recovered, what is the
probability of recovery had I not taken the treatment?"
These are tricky questions because they mix two moods:
indicative statements - things that have actually happened;
subjunctive statements - things that could have happened
in an alternate reality/possible world.
Because of this, counterfactuals spawn a new scope of random variables:
<img src="http://www.adaptiveagents.org/_media/wiki/counterfactual.png" alt="Counterfactual" width="400"/>
These two questions above are spelled as follows:
$P(D^\ast=1|R=1)$, where $D^\ast=D_{R \leftarrow 0}$
$P(R^\ast=1|T\leftarrow 1; R=1)$, where $R^\ast=R_{T\leftarrow 0}$
Here the random variables with an asterisk $D^\ast, R^\ast$ are copies of the
original random variables $D, R$ that occur in an alternate reality. The
notation $D_{T \leftarrow 0}$ means that the random variable $D$ is in the new
scope spawned by the intervention on $T\leftarrow 0$.
Computing a counterfactual
The next figure shows how to obtain a counterfactual:
<img src="http://www.adaptiveagents.org/_media/wiki/cf.png" alt="Computing a counterfactual" width="700"/>
The example shows a counterfactual probability tree generated by imposing $Y
\leftarrow 1$, given the factual premise $Z = 1$. Starting from a reference
probability tree describing the original process, we first use the factual premise to derive a tree describing the current state of affairs and then modify it using the counterfactual premise.
Then, we replace the nodes downstream of the counterfactual premise's min-cut with the nodes of the reference tree. The events downstream then span a new scope containing copies of the original random variables (marked with "∗"), ready to adopt new values. In other words, we restore the random variables downstream of the counterfactual intervention to their original state of uncertainty.
In particular note that $Z^\ast = 0$ can happen in our alternate
reality, even though we know that $Z = 1$.
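For a shallow tree like the drug testing example, this computation can also be viewed numerically as abduction followed by prediction: infer a posterior over the variables upstream of the intervention from the factual premise, then push that posterior through the reference mechanism under the subjunctive intervention. The sketch below mirrors the second question above ($P(R^\ast=1|T\leftarrow 1; R=1)$ with $R^\ast=R_{T\leftarrow 0}$) using purely hypothetical numbers — the probabilities are invented for illustration and are not those of the med tree; in general, the probability-tree counterfactual operates on min-cuts as described above.

```python
# Hypothetical conditional probabilities, invented for illustration only.
p_d = 0.1                                  # P(D = 1), base rate of disease
p_r = {(1, 1): 0.8, (1, 0): 0.2,           # p_r[(d, t)] = P(R = 1 | D = d, T = t)
       (0, 1): 0.5, (0, 0): 0.9}

# Abduction: posterior over the latent disease D given the factual premise
# "T <- 1 and R = 1".
post_d1 = p_d * p_r[(1, 1)]
post_d0 = (1 - p_d) * p_r[(0, 1)]
z = post_d1 + post_d0                      # P(R = 1 | T <- 1)
post_d1, post_d0 = post_d1 / z, post_d0 / z

# Prediction: had T <- 0 instead, push the posterior through the reference
# mechanism P(R | D, T = 0). D keeps its posterior; R* is re-sampled.
p_r_cf = post_d1 * p_r[(1, 0)] + post_d0 * p_r[(0, 0)]
print(round(post_d1, 3), round(p_r_cf, 3))  # ≈ 0.151  ≈ 0.794
```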
Let's have a look at a minimal example.
End of explanation
"""
# Cuts.
cut_disease = med.prop('D = 1')
cut_recovery = med.prop('R = 1')
cut_not_recovery = ~cut_recovery
# Critical.
crit = med.critical(cut_not_recovery)
# Compute counterfactual:
# - compute factual premise,
# - use factual premise and subjunctive premise to compute counterfactual.
med_factual_prem = med.see(cut_recovery)
med_cf = med.cf(med_factual_prem, cut_not_recovery)
print('Baseline:')
print('P(D = 1) =', med.prob(cut_disease))
display(med.show())
print('Premise:')
print('P(D = 1 | R = 1) =', med_factual_prem.prob(cut_disease))
display(med_factual_prem.show())
print('Counterfactual:')
print('P(D* = 1 | R = 1) =', med_cf.prob(cut_disease), ', D* = D[R <- 0]')
display(med_cf.show(crit=crit, cut=cut_not_recovery))
"""
Explanation: Now we return to our drug testing example. Let's ask the two questions we asked
before. We start with the question: "What is the probability of having the
disease had I not recovered, given that I have recovered?", that is
$$P(D^\ast=1|R=1), \qquad D^\ast=D_{R \leftarrow 0}.$$
End of explanation
"""
# Cuts.
cut_treatment = med.prop('T = 1')
cut_not_treatment = ~cut_treatment
cut_recovery = med.prop('R = 1')
# Critical.
crit = med.critical(cut_not_treatment)
# Compute counterfactual:
# - compute factual premise,
# - use factual premise and counterfactual premise to compute counterfactual.
med_factual_prem = med.do(cut_treatment).see(cut_recovery)
med_cf = med.cf(med_factual_prem, cut_not_treatment)
# Display results.
print('Baseline:')
print('P(R = 1) =', med.prob(cut_recovery))
display(med.show())
print('Premise:')
print('P(R = 1 | T <- 1 and R = 1) =', med_factual_prem.prob(cut_recovery))
display(med_factual_prem.show())
print('Counterfactual:')
print('P(R* = 1 | T <- 1 and R = 1) =', med_cf.prob(cut_recovery),
', R* = R[T <- 0]')
display(med_cf.show(cut=cut_not_treatment, crit=crit))
"""
Explanation: As we can see, the probability of the disease is the same in the indicative
and in the counterfactual. This is because the recovery $R$ is independent
of the disease $D$, and because the disease is upstream of the critical set.
Let's have a look at the second question: $$P(R^\ast=1|T\leftarrow 1;
R=1), \qquad R^\ast=R_{T\leftarrow 0}$$
End of explanation
"""
# Exercise
med_prem = med.do(med.prop('T=1')).see(med.prop('R=0'))
med_cf = med.cf(med_prem, med.prop('T=0'))
print('P(R* = 1 | T <- 1, R = 0) =', med_cf.prob(med.prop('R=1')))
regret = med_cf.expect('R') - med_prem.expect('R')
print('Regret = ', regret)
display(med_prem.show(cut=med.prop('R=0')))
"""
Explanation: Hence, if I had not taken the treatment, then the probability of recovery would
have been lower. Why is that?
- In our premise, I have taken the treatment and
then observed a recovery.
- This implies that, most likely, I had the disease,
since taking the treatment when I don't have the disease is risky and can lead
to illness.
- Thus, knowing that I probably have the disease, I know that, had I
not taken the treatment, I would most likely not have recovered.
Exercise 10
Consider the drug testing probability tree med.
Assume you take the drug ($T \leftarrow 1$) and you feel bad afterwards
($R = 0$).
Given this information, what is the probability of recovery ($R = 1$) had
you not taken the drug ($T = 0$)?
Compute the regret, i.e. the difference: $$ \mathbb{E}[ R^\ast | T
\leftarrow 1; R = 0 ] - \mathbb{E}[ R | T \leftarrow 1; R = 0 ], $$ where
$R^\ast = R_{T \leftarrow 0}$.
Solution
End of explanation
"""
# Question 1.
wb_prem = wb.do(wb.prop('A=0')).do(wb.prop('W=1'))
wb_cf = wb.cf(wb_prem, wb.prop('W=0'))
print('P(B*| A <- 0, W <- 1) =', wb_cf.rv('B'), ' where B* = B[W <- 0]')
display(wb_cf.show(show_prob=True, cut=wb.prop('B=1')))
# Question 2.
wb_prem = wb.do(wb.prop('B=1')).see(wb.prop('W=1'))
wb_cf = wb.cf(wb_prem, wb.prop('B=0'))
print('P(W* | B <- 1 then W = 1) =', wb_cf.rv('W'), ' where W* = W[B <- 0]')
display(wb_cf.show(show_prob=True, cut=wb.prop('W=1')))
"""
Explanation: Exercise 11
Take the probability tree wb. Evaluate the following counterfactuals:
Assume that you set the world to ours ($A \leftarrow 0$) and the weather to
sunny ($W \leftarrow 1$). What is the probability distribution of observing
a high barometer value ($B = 1$) had you set the weather to rainy ($W
\leftarrow 0$)? Does the fact that you set the world and the weather affect
the value of the counterfactual?
Assume that you set the barometer to a high value ($B \leftarrow 1$), and
you observe that the weather is sunny ($W=1$). What is the probability of
observing a sunny weather ($W=1$) had you set the barometer to a low value
($B=0$)?
These are highly non-trivial questions. What do you observe? Do the results make
sense to you?
Solution
End of explanation
"""
def alarm(bvar):
# Define the burglar and earthquake events.
if 'Burglar' not in bvar:
pb = 0.1 # Probability of burglar
pe = 0.001 # Probability of earthquake
return [((1 - pb) * (1 - pe), 'Burglar=0, Earthquake=0'),
((1 - pb) * pe, 'Burglar=0, Earthquake=1'),
(pb * (1 - pe), 'Burglar=1, Earthquake=0'),
(pb * pe, 'Burglar=1, Earthquake=1')]
# Define the alarm event.
if 'Alarm' not in bvar:
if bvar['Burglar'] == '0' and bvar['Earthquake'] == '0':
return [(0.999, 'Alarm=0'), (0.001, 'Alarm=1')]
if bvar['Burglar'] == '0' and bvar['Earthquake'] == '1':
return [(0.01, 'Alarm=0'), (0.99, 'Alarm=1')]
if bvar['Burglar'] == '1' and bvar['Earthquake'] == '0':
return [(0.1, 'Alarm=0'), (0.9, 'Alarm=1')]
else:
return [(0.001, 'Alarm=0'), (0.999, 'Alarm=1')]
# All the events defined.
return None
"""
Explanation: Part II: Examples
Construction of probability trees using factory functions
Building probability trees can be difficult, especially when we have to manually
specify all its nodes.
To simplify this, we could design a function factory(bvar) which:
- receives a dictionary bvar of bound random variables, such as
{ 'X': '1', 'Y': '0' }
- and returns a list of transitions and their statements, such as
[(0.3, 'Z = 0'), (0.2, 'Z = 1'), (0.5, 'Z = 2')]. If all relevant
events have been defined already, return None.
Such a function contains all the necessary information for building a
probability tree. We call this a probability tree factory. We can pass a
factory function to the method PTree.fromFunc() to build a probability
tree.
The advantage of using this method is that we can exploit symmetries (e.g.
conditional independencies) to code a much more compact description of the
probability tree. Essentially, it is like specifying a probabilistic program.
Let's experiment with this.
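As a warm-up, here is a minimal factory sketch for a single biased coin. The name coin and the 0.3/0.7 bias are purely illustrative, not part of the library:

```python
def coin(bvar):
  # Root: nothing is bound yet, so define the only random variable.
  if 'Coin' not in bvar:
    return [(0.3, 'Coin=heads'), (0.7, 'Coin=tails')]
  # All events defined.
  return None

# tree = PTree.fromFunc(coin)  # would build a two-leaf probability tree
```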
Burglar, Earthquake, Alarm
Let's start with a classical example: a burglar alarm. The alarm gets
triggered by a burglar breaking into our home. However, the alarm can
also be set off by an earthquake.
Let's define the factory function.
End of explanation
"""
# Create the probability tree.
al = PTree.fromFunc(alarm, 'Root = 1')
# Print all the random variables.
print('Random variables:', al.rvs())
print('\nP(Alarm) =', al.rv('Alarm'))
print('\nOriginal probability tree:')
display(al.show())
print('\nSome samples from the probability tree:')
for k in range(5):
print(al.sample())
"""
Explanation: Now, let's create the probability tree.
End of explanation
"""
# Condition on the alarm going off.
cut = al.prop('Alarm=1')
crit = al.critical(cut)
al_see = al.see(cut)
# Compute probability distributions for earthquake and burglar.
print('P(Earthquake = 1 | Alarm = 1) =', al_see.prob(al.prop('Earthquake=1')))
print('P(Burglar = 1 | Alarm = 1) =', al_see.prob(al.prop('Burglar=1')))
# Display the conditional probability tree.
print('\nConditional probability tree:')
display(al_see.show(show_prob=True, cut=cut, crit=crit))
print('\nSome samples from the conditional probability tree:')
for k in range(5):
print(al_see.sample())
"""
Explanation: Assume now you hear the alarm. Which explanation is more likely:
did the earthquake or the burglar trigger the alarm?
End of explanation
"""
# Intervene on the alarm going off.
cut = al.prop('Alarm=1')
crit = al.critical(cut)
al_do = al.do(cut)
# Compute probability distributions for earthquake and burglar.
print('P(Earthquake = 1 | Alarm <- 1) =', al_do.prob(al.prop('Earthquake=1')))
print('P(Burglar = 1 | Alarm <- 1) =', al_do.prob(al.prop('Burglar=1')))
# Display the intervened probability tree.
print('\nIntervened probability tree:')
display(al_do.show(show_prob=True, cut=cut, crit=crit))
print('\nSome samples from the intervened probability tree:')
for k in range(5):
print(al_do.sample())
"""
Explanation: As we can see, it is far more likely that the burglar set off the alarm.
If we now tamper with the alarm, setting it off, then what is the probability
that there was a burglar or an earthquake?
End of explanation
"""
#@title Water Balloon Squad factory function.
def waterballoon(bvar, eps=0.0001):
# Root: defined.
# Define the captain's signal.
  if 'Captain' not in bvar:
    return [(0.5, 'Captain = hold'), (0.5, 'Captain = throw')]
# Define the children's decisions.
if 'A' not in bvar:
if bvar['Captain'] == 'hold':
return [((1-eps)*(1-eps), 'A = hold, B = hold'), (eps*(1-eps), 'A = hold, B = throw'),
(eps*(1-eps), 'A = throw, B = hold'), (eps*eps, 'A = throw, B = throw')]
else:
return [(eps*eps, 'A = hold, B = hold'), (eps*(1-eps), 'A = hold, B = throw'),
(eps*(1-eps), 'A = throw, B = hold'), ((1-eps)*(1-eps), 'A = throw, B = throw')]
# Define target state.
if 'Target' not in bvar:
if bvar['A'] == 'throw' or bvar['B'] == 'throw':
return [(eps, 'Target = dry'), (1-eps, 'Target = wet')]
else:
return [(1-eps, 'Target = dry'), (eps, 'Target = wet')]
return None
# Create and show the probability tree.
wbs = PTree.fromFunc(waterballoon)
wbs.show()
"""
Explanation: Now we observe that the probabilities of the burglar and earthquake
events exactly match their base rates: by intervening, we have severed the
causal dependencies connecting those events with the alarm.
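The conditional numbers above can be double-checked with a plain enumeration over the joint distribution, reusing the probabilities from the factory function. This sketch is independent of the PTree library:

```python
pb, pe = 0.1, 0.001
# P(Alarm=1 | Burglar, Earthquake), taken from the alarm factory above.
p_alarm = {(0, 0): 0.001, (0, 1): 0.99, (1, 0): 0.9, (1, 1): 0.999}

joint = {}
for b in (0, 1):
  for e in (0, 1):
    p_be = (pb if b else 1 - pb) * (pe if e else 1 - pe)
    joint[(b, e)] = p_be * p_alarm[(b, e)]

z = sum(joint.values())  # P(Alarm = 1)
p_burglar = (joint[(1, 0)] + joint[(1, 1)]) / z  # roughly 0.98
p_quake = (joint[(0, 1)] + joint[(1, 1)]) / z    # roughly 0.01
# Under do(Alarm=1), both revert to their base rates, 0.1 and 0.001.
```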
Water Balloon Squad (a.k.a. Firing Squad)
The following is a classical example used for illustrating counterfactual
reasoning. There are two children (A and B) holding a water balloon, waiting
for a signal from their friend (the "captain"). Upon the captain's signal,
they both throw their water balloon at a single target simultaneously and
accurately. The causal dependencies are as shown next:
<img src="http://www.adaptiveagents.org/_media/wiki/waterballoon.png" alt="Water Balloon Squad" width="200"/>
This situation can also be described using a probability tree constructed
below. Note that in order to avoid problems due to conditioning on
zero-probability transitions, we assign a tiny value (eps) to the
nearly-impossible transitions.
End of explanation
"""
# First we define some simple events.
cut_C = wbs.prop('Captain=throw')
cut_A = wbs.prop('A=throw')
cut_B = wbs.prop('B=throw')
cut_T = wbs.prop('Target=wet')
"""
Explanation: Let's start by defining some simple events we'll use next.
End of explanation
"""
# Condition on A throwing the water balloon.
wbs_see = wbs.see(cut_A)
# Print the probability of B throwing the water balloon.
print('P(B = throw | A = throw) =', wbs_see.prob(cut_B))
# Display the conditional probability tree.
print('\nConditional probability tree:')
display(wbs_see.show(cut=cut_B))
"""
Explanation: Let's ask our first question. Assume A throws the water balloon.
What is the probability of B having thrown the balloon too?
End of explanation
"""
# Intervene on having A throwing the water balloon.
wbs_do = wbs.do(cut_A)
# Print the probability of B throwing the water balloon.
print('P(B = throw | do[A = throw]) =', wbs_do.prob(cut_B))
# Display the intervened probability tree.
print('\nIntervened probability tree:')
display(wbs_do.show(cut=cut_B))
"""
Explanation: Notice how conditioning changed the transition probabilities, which then
allow us to identify the pathways leading up to the event in which B throws
the balloon.
Assume now that A decides to act on their own and throws the balloon.
What is the probability of B having thrown the balloon too?
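In the eps → 0 limit, both the conditioning answer and the intervention answer can be checked by hand with a small enumeration. This is a sketch that mirrors the tree's probabilities, not the library itself:

```python
eps = 1e-4
p_captain_throw = 0.5

def p_child_throws(signal):
  # A child throws with probability 1 - eps on 'throw', and eps on 'hold'.
  return 1 - eps if signal == 'throw' else eps

num = den = 0.0
for signal in ('hold', 'throw'):
  p_sig = 0.5  # captain's signal is uniform
  pa = p_child_throws(signal)
  pb = p_child_throws(signal)
  den += p_sig * pa        # accumulates P(A = throw)
  num += p_sig * pa * pb   # accumulates P(A = throw, B = throw)

p_b_given_a = num / den    # conditioning: close to 1
# Intervening severs A from the captain, so B only depends on the signal:
p_b_do_a = p_captain_throw * (1 - eps) + p_captain_throw * eps  # = 0.5
```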
End of explanation
"""
# Intervene on having A throwing the water balloon.
wbs_do = wbs.do(cut_A | cut_B)
# Print the probability of B throwing the water balloon.
print('P(B = throw | do[A = throw or B = throw]) =', wbs_do.prob(cut_B))
# Display the intervened probability tree.
print('\nIntervened probability tree:')
display(wbs_do.show(cut=cut_B))
"""
Explanation: Notice the difference between the two results above. If A acts on their own
will, then this alters the conclusions we can draw.
Let's ask a more complex question. Assume A or B throw the balloon on their
own will. That is, we do not know precisely whether A acted by themself,
B acted by themself, or both acted together.
Intuitively, B (or A for that matter) is more likely to have thrown
the water balloon in this situation than when they were waiting for
the captain's signal. What is the probability?
End of explanation
"""
# Condition on the target being wet.
wbs_see = wbs.see(cut_T)
# Compute counterfactual: given that the target is wet,
# what is the probability of B throwing the water balloon
# had the captain not given the signal?
wbs_cf = wbs.cf(wbs_see, ~cut_C)
print('P(B[C != throw] = throw | Target = wet) =', wbs_cf.prob(cut_B))
display(wbs_cf.show(cut=cut_B))
"""
Explanation: The above result shows that it is indeed more likely (75%) that B has
thrown their balloon than when they were waiting for the signal (50%).
Now we will ask counterfactual questions. Let's start with an easy one first.
Assume you observe that the target is wet. Then, had the captain not given
the signal, what is the probability that B would have thrown their water
balloon at the target?
End of explanation
"""
# Condition on the target being wet.
wbs_see = wbs.see(cut_T)
# Compute counterfactual: given that the target is wet,
# what is the probability of B throwing the water balloon
# had A not done it?
wbs_cf = wbs.cf(wbs_see, ~cut_A)
print('P(B[A != throw] = throw | Target = wet) =', wbs_cf.prob(cut_B))
display(wbs_cf.show(cut=cut_B))
"""
Explanation: The result shows that B would not have thrown the water balloon.
This makes sense, as we assumed that B did not get the captain's signal.
Now, assume again you observe that the target is wet.
Had A not thrown the water balloon, would B have thrown it?
End of explanation
"""
#@title Beta-Bernoulli factory function.
def betaBernoulli(bvar, divtheta=41, T=5):
# Root: defined.
  # Define biases Bias = 0, 1/(divtheta-1), 2/(divtheta-1), ..., 1
if 'Bias' not in bvar:
ptheta = 1.0 / divtheta
biases = [(ptheta, 'Bias=' + str(theta))
for theta in np.linspace(0.0, 1.0, divtheta, endpoint=True)]
return biases
# Biases: defined.
# Now create Bernoulli observations X_1, X_2, ... , X_T,
# where X_t=0 or X_t=1.
t = 1
for var in bvar:
if '_' not in var:
continue
t += 1
if t <= T:
theta = float(bvar['Bias'])
varstr = 'X_' + str(t)
return [(1 - theta, varstr + '=0'), (theta, varstr + '=1')]
# All the events defined.
return None
"""
Explanation: As is shown above, the answer is yes. This is because from observing the wet
target we can conclude that the captain has given the signal.
Hence, had A not thrown the water balloon, B must have thrown it
instead. You can verify the causal pathways by inspecting the tree.
Coin toss prediction
Let's build another probability tree. This is a discrete approximation to a
process having a continuous random variable: a Beta-Bernoulli process.
This problem was first studied by Rev. Thomas Bayes ("An Essay towards
solving a Problem in the Doctrine of Chances", 1763).
The story goes as follows. Someone picks a coin with an unknown bias and then throws it repeatedly. Our goal is to infer the next outcome based only on the observed outcomes (and not on the latent bias). The unknown bias is drawn
uniformly from the interval [0, 1].
Let's start by coding the factory function for the discretized Beta-Bernoulli
process. Here we assume that the prior distribution over the bias is uniform,
and discretized into divtheta = 41 equally spaced values. Then T = 5 coin tosses follow.
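The discretized Bayesian update itself fits in a few lines of plain Python. For a uniform prior, the predictive probability should land near Laplace's rule of succession, (h+1)/(n+2), which gives us a useful sanity check (the grid size and the tosses below are illustrative):

```python
n_grid = 41
thetas = [i / (n_grid - 1.0) for i in range(n_grid)]  # grid over [0, 1]
post = [1.0 / n_grid] * n_grid                        # uniform prior

for x in [1, 1, 0, 1]:                                # observed tosses
  post = [p * (th if x == 1 else 1.0 - th)
          for p, th in zip(post, thetas)]

z = sum(post)
post = [p / z for p in post]                          # normalize
pred = sum(p * th for p, th in zip(post, thetas))     # P(next toss = 1)
# With 3 heads out of 4 tosses, pred is close to (3 + 1) / (4 + 2) = 2/3.
```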
End of explanation
"""
# Create tree.
bb = PTree.fromFunc(betaBernoulli)
# Show random variables.
print('Random variables:')
print(bb.rvs())
# Get sample.
print('\nSamples from the process:')
for n in range(10):
print(bb.sample())
"""
Explanation: We now build the probability tree. Let's also print the
random variables and get a few samples.
End of explanation
"""
bb.show()
"""
Explanation: The tree itself is quite large (over 1000 nodes).
Normally such trees are too large to
display, for instance when T is large.
Let's display it, just for fun.
End of explanation
"""
# Prepare the cut for the data.
observations = ['X_1=1', 'X_2=1', 'X_3=0', 'X_4=1']
cut_data = None
for s in observations:
if cut_data is None:
cut_data = bb.prop(s)
else:
cut_data &= bb.prop(s)
# Prepare the cut for the query.
cut_query = bb.prop('X_5=1')
# Question 1
bias = bb.rv('Bias')
print('P(Bias) :\n' + str(bias))
# Question 2
bb_cond = bb.see(cut_data)
print('\nP(X_5 = 1 | Data) = ' + str(bb_cond.prob(cut_query)))
# Question 3
bias_cond = bb_cond.rv('Bias')
print('\nP(Bias | Data) :\n' + str(bias_cond))
# Question 4
bb_int = bb.do(cut_data)
print('\nP(X_5 = 1 | do(Data)) = ' + str(bb_int.prob(cut_query)))
# Question 5
bias_int = bb_int.rv('Bias')
print('\nP(Bias | do(Data)) :' + str(bias_int))
# Display distribution over bias.
print('\nDistribution over biases for the three settings:')
fig = plt.figure(figsize=(15, 5))
# Show prior.
plt.subplot(131)
res = bb.rv('Bias')
theta = np.array([theta for _, theta in res], dtype=float)
prob = np.array([prob for prob, _ in res])
plt.fill_between(theta, prob, 0)
plt.title('P(Bias)')
plt.ylim([-0.005, 0.1])
plt.xlabel('Bias')
# Show posterior after conditioning.
plt.subplot(132)
res = bb_cond.rv('Bias')
theta = np.array([theta for _, theta in res], dtype=float)
prob = np.array([prob for prob, _ in res])
plt.fill_between(theta, prob, 0)
plt.title('P(Bias|D)')
plt.ylim([-0.005, 0.1])
plt.xlabel('Bias')
# Show posterior after intervening.
plt.subplot(133)
res = bb_int.rv('Bias')
theta = np.array([theta for _, theta in res], dtype=float)
prob = np.array([prob for prob, _ in res])
plt.fill_between(theta, prob, 0)
plt.title('P(Bias|do(D))')
plt.ylim([-0.005, 0.1])
plt.xlabel('Bias')
plt.show()
"""
Explanation: Exercise
Let's do some inference now.
Assume you observe the first four coin tosses. They are
observations = ['X_1=1', 'X_2=1', 'X_3=0', 'X_4=1']
Answer the following questions:
1. What is the prior distribution over the unknown bias?
2. What is the probability of the next outcome being Heads (X_5=1)?
3. Given the observations, what is the distribution over the
latent bias?
4. Rather than observing the four outcomes, assume instead
that you enforce the outcomes. What is the probability of
the next outcome being Heads?
5. What is the distribution over the latent bias if you enforce
the data?
Solution
End of explanation
"""
#@title Leader factory function.
def leader(bvar, T=2):
p = 0.75 # Probability of match.
# Define leader.
if 'Leader' not in bvar:
return [(0.5, 'Leader=Alice'), (0.5, 'Leader=Bob')]
# Now create the shouts.
# Figure out the leader.
if bvar['Leader'] == 'Alice':
leader = 'Alice'
follower = 'Bob'
else:
leader = 'Bob'
follower = 'Alice'
# Define random variables of shouts.
for t in range(1, T+1):
leader_str = leader + '_' + str(t)
if leader_str not in bvar:
return [(0.5, leader_str + '=chicken'), (0.5, leader_str + '=egg')]
follower_str = follower + '_' + str(t)
if follower_str not in bvar:
if bvar[leader_str] == 'chicken':
return [(p, follower_str + '=chicken'), (1-p, follower_str + '=egg')]
else:
return [(1-p, follower_str + '=chicken'), (p, follower_str + '=egg')]
# We're done.
return None
# Create true environment.
class ChickenEggGame:
def __init__(self, T=2):
self.T = T
self.pt = PTree.fromFunc(lambda bvar:leader(bvar, T=T))
smp = self.pt.sample()
    self.pt = self.pt.do(self.pt.prop('Leader=' + smp['Leader']))
self.time = 0
def step(self, name, word):
# Check whether parameters are okay.
if name != 'Alice' and name != 'Bob':
raise Exception('"name" has to be either "Alice" or "Bob".')
if word != 'chicken' and word != 'egg':
raise Exception('"word" has to be either "chicken" or "egg".')
if self.time > self.T -1:
raise Exception('The game has only ' + str(self.T) + ' rounds.')
# Enforce instruction.
self.time = self.time + 1
cut_do = self.pt.prop(name + '_' + str(self.time) + '=' + word)
self.pt = self.pt.do(cut_do)
# Produce next sample.
smp = self.pt.sample()
if name == 'Alice':
varname = 'Bob_' + str(self.time)
else:
varname = 'Alice_' + str(self.time)
response = smp[varname]
cut_see = self.pt.prop(varname + '=' + response)
self.pt = self.pt.see(cut_see)
return varname + '=' + response
def reveal(self):
smp = self.pt.sample()
return smp['Leader']
"""
Explanation: Who's in charge?
In this problem we will look at causal induction. Alice and Bob play a game
where both of them shout either 'chicken' or 'egg'.
At the beginning of the game, one of them is chosen to be the leader, and
the other, the follower. The follower will always attempt to match the
leader: so if Alice is the leader and Bob the follower, and Alice
shouts 'chicken', then Bob will attempt to shout 'chicken' too (with
75% success rate).
A typical game would look like this:
Round 1: Alice shouts 'egg', Bob shouts 'chicken'.
Round 2: Alice shouts 'chicken', Bob shouts 'chicken'.
Round 3: Alice shouts 'chicken', Bob shouts 'chicken'.
Round 4: Alice shouts 'egg', Bob shouts 'egg'.
Note that you hear both of them shouting simultaneously.
Our goal is to discover who's the leader. This is a causal induction
problem, because we want to figure out whether:
- hypothesis Leader = Alice: Alice $\rightarrow$ Bob;
- or hypothesis Leader = Bob: Bob $\rightarrow$ Alice.
Let's start by defining the factory function.
End of explanation
"""
ld = PTree.fromFunc(lambda bvar: leader(bvar, T=2), root_statement='Root = 1')
display(ld.show())
"""
Explanation: The factory function is called leader().
Let's first have a look at what the probability tree looks like
for T = 2 rounds.
End of explanation
"""
T = 5
ld = PTree.fromFunc(lambda bvar: leader(bvar, T=T), root_statement='Root = 1')
print('Samples from the probability tree:')
for n in range(T):
print(ld.sample())
"""
Explanation: Notice how the transition probabilities of Alice_n, Bob_n,
n = 1, 2, are identical within the subtree rooted at
Leader = Alice. The same is true for the transition probabilities
within the subtree rooted at Leader = Bob.
Now, let's create a new probability tree for a slightly longer game,
namely T = 5. This tree is too big to display (over 2K nodes)
but we can still sample from it.
End of explanation
"""
import itertools
# Define cuts for both leaders.
cut_leader_a = ld.prop('Leader = Alice')
cut_leader_b = ld.prop('Leader = Bob')
# The words they can say.
words = ['chicken', 'egg']
# Print the joint distribution over
# shouts when Alice is the leader.
print('Leader = Alice')
for word_a, word_b in itertools.product(words, words):
cut = ld.prop('Alice_1 = ' + word_a) & ld.prop('Bob_1 = ' + word_b)
prob = ld.do(cut_leader_a).prob(cut)
fmt = 'P( Alice_1 = {}, Bob_1 = {} | Leader <- Alice) = {:.2f}'
print(fmt.format(word_a, word_b, prob))
# Print the joint distribution over
# shouts when Bob is the leader.
print('\nLeader = Bob')
for word_a, word_b in itertools.product(words, words):
cut = ld.prop('Alice_1 = ' + word_a) & ld.prop('Bob_1 = ' + word_b)
prob = ld.do(cut_leader_b).prob(cut)
fmt = 'P( Alice_1 = {}, Bob_1 = {} | Leader <- Bob) = {:.2f}'
print(fmt.format(word_a, word_b, prob))
"""
Explanation: Let's first figure out the joint distribution over Alice's and Bob's shouts
in the first round (remember, rounds are i.i.d.) when Alice is the leader,
and compare this to the situation when Bob is the leader.
We can do this by setting Leader to whoever we want to be the leader,
and then enumerate the joint probabilities over the combinations of
shouts.
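We can also run this comparison with a quick library-free enumeration of the same joint distribution (using the match probability p = 0.75 from the factory); it confirms what the tree-based computation shows:

```python
p = 0.75

def joint(leader):
  # Joint distribution over (Alice's shout, Bob's shout) in one round.
  probs = {}
  for lw in ('chicken', 'egg'):      # leader's shout, uniform
    for fw in ('chicken', 'egg'):    # follower tries to match
      p_f = p if fw == lw else 1 - p
      a, b = (lw, fw) if leader == 'Alice' else (fw, lw)
      probs[(a, b)] = probs.get((a, b), 0.0) + 0.5 * p_f
  return probs

# The two joints coincide: observations alone cannot identify the leader.
```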
End of explanation
"""
import functools
obs = [
'Alice_1=chicken', 'Bob_1=egg',
'Alice_2=egg', 'Bob_2=egg',
'Alice_3=egg', 'Bob_3=egg'
]
cuts_data = [ld.prop(data) for data in obs]
cut_data = functools.reduce(lambda x, y: x & y, cuts_data)
cut_query = ld.prop('Leader=Bob')
prob_prior = ld.prob(cut_query)
prob_post = ld.see(cut_data).prob(cut_query)
print('Prior and posterior probabilities:')
print('P( Leader = Bob ) = {:.2f}'.format(prob_prior))
print('P( Leader = Bob | Data ) = {:.2f}'.format(prob_post))
"""
Explanation: Looking at the joint probabilities, we realize that they are identical.
This means that we cannot identify who's the leader by conditioning on
our observations. Let's try this with the following observations:
obs = [
'Alice_1=chicken', 'Bob_1=egg',
'Alice_2=egg', 'Bob_2=egg',
'Alice_3=egg', 'Bob_3=egg'
]
We now compare the prior and posterior probabilities of Bob being the leader.
End of explanation
"""
T = 5
game = ChickenEggGame(T=T)
# Do T rounds.
for n in range(T):
reply = game.step('Alice', 'chicken')
print(reply)
# Reveal.
print('The true leader is: ' + game.reveal())
"""
Explanation: As you can see, this doesn't work: we can't disentangle the two hypotheses
just by looking at the data.
Intuitively, we could figure out whether Alice or Bob is the leader by
intervening in the game, for instance by instructing Bob to say what
we want and observing Alice's reaction:
- if Alice matches Bob many times, then she's probably the follower;
- instead if Alice does not attempt to match Bob, then we can conclude
that Alice is the leader.
Crucially, we need to interact in order to collect the data.
It's not enough to passively observe. For this, we'll use
an implementation of the game (ChickenEggGame) that allows
us to instruct either Alice or Bob to shout the word we want.
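Before running the game, here is a back-of-the-envelope sketch of why intervening works. If we force Alice's shouts, Bob matches them with probability p = 0.75 when Alice is the leader, but only 0.5 when Bob is the leader (his shout is then independent of the forced one). A Bayesian update over the two hypotheses, assuming a uniform prior, looks like:

```python
def posterior_alice_leader(n_matches, n_rounds, p=0.75):
  # Likelihood of the observed match count under each hypothesis.
  like_alice = p ** n_matches * (1 - p) ** (n_rounds - n_matches)
  like_bob = 0.5 ** n_rounds
  return like_alice / (like_alice + like_bob)
```

Five matches in five rounds already push the posterior close to 0.9 in favour of Alice being the leader, while zero matches make it very unlikely.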
End of explanation
"""
import copy
T = 5
game = ChickenEggGame(T=T)
# Do T rounds.
print('Game:')
ldg = copy.deepcopy(ld)
for t in range(1, T+1):
reply = game.step('Alice', 'chicken')
instruction = 'Alice_' + str(t) + '=chicken'
ldg = ldg.do(ldg.prop(instruction))
ldg = ldg.see(ldg.prop(reply))
print(instruction + ', ' + reply)
# Prediction.
print('\nPrediction:')
cut_query = ldg.prop('Leader=Alice')
prob_post = ldg.prob(cut_query)
print('P(Leader = Alice | Data) = {:.5f}'.format(prob_post))
# Reveal ground truth.
print('\nGround truth:')
print('Leader = ' + game.reveal())
"""
Explanation: Exercise
Using ChickenEggGame, play T=5 rounds, giving an instruction in each round.
Use a copy of the probability tree ld to record the results,
appropriately distinguishing between conditions and interventions.
Finally, compute the posterior probability of Alice being the
leader and compare with ground truth (using the reveal method).
Solution
End of explanation
"""
|
w0nk0/LSTMtest1 | keras save-load tests-cleaned.ipynb | gpl-2.0 | lstmcopy = model_from_yaml(lstmyaml)
print("Untrained:", lstmcopy.evaluate(trainX,trainy,show_accuracy=True))
lstmweights=lstmmodel.get_weights()
lstmcopy.set_weights(lstmweights)
print("Original:", lstmmodel.evaluate(trainX,trainy,show_accuracy=True))
print("Copy:",lstmcopy.evaluate(trainX,trainy,show_accuracy=True))
"""
Explanation: Let's copy the LSTM model into a new model and transfer the weights
End of explanation
"""
jzs1model = Sequential()
jzs1model.add(JZS1(33,60, return_sequences=True))
jzs1model.add(JZS1(60,60))
jzs1model.add(Dense(60,33))
jzs1model.compile(optimizer=SGD(lr=0.1,momentum=0.8),loss='mse',class_mode="categorical")
print("Initial loss")
print(jzs1model.evaluate(trainX,trainy,show_accuracy=True))
print("Training")
import time
start=time.time()
jzs1model.fit(trainX,trainy,nb_epoch = 5, show_accuracy=True,verbose=1)
print("Turning verbose off and training 100 epochs, this will take approx.", 20*(time.time()-start),"seconds")
from sys import stdout
stdout.flush()
jzs1model.fit(trainX,trainy,nb_epoch = 100, show_accuracy=True,verbose=0)
print("New loss")
print(jzs1model.evaluate(trainX,trainy,show_accuracy=True))
jzs1yaml=jzs1model.to_yaml()
print(jzs1yaml)
jzs1copy = model_from_yaml(jzs1yaml)
print("Untrained:",jzs1copy.evaluate(trainX,trainy,show_accuracy=True))
jzs1weights = jzs1model.get_weights()
jzs1copy.set_weights(jzs1weights)
print("Original:", jzs1model.evaluate(trainX,trainy,show_accuracy=True))
print("Copy:",jzs1copy.evaluate(trainX,trainy,show_accuracy=True))
"""
Explanation: So it works with LSTM!
Trained accuracy 51%, copied accuracy 51%
Let's try it with JZS1 then!
End of explanation
"""
|
nreimers/deeplearning4nlp-tutorial | 2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb | apache-2.0 | import theano
import theano.tensor as T
#Put your code here
"""
Explanation: Introduction to Theano
For a Theano tutorial please see: http://deeplearning.net/software/theano/tutorial/index.html
Basic Operations
For more details see: http://deeplearning.net/software/theano/tutorial/adding.html
Task: Use Theano to compute a simple polynomial function $$f(x,y) = 3x+xy+3y$$
Hints:
- First define two input variables with the correct type (http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors)
- Define the computation of the function and store it in a variable
- Use theano.function() to compile your computation graph
End of explanation
"""
print f(1,1)
print f(10,-3)
"""
Explanation: Now you can invoke f and pass the input values, i.e. f(1,1), f(10,-3) and the result for this operation is returned.
End of explanation
"""
#Graph for z
theano.printing.pydotprint(z, outfile="pics/z_graph.png", var_with_name_simple=True)
#Graph for function f (after optimization)
theano.printing.pydotprint(f, outfile="pics/f_graph.png", var_with_name_simple=True)
"""
Explanation: Printing of the graph
You can print the graph for the above value of z. For details see:
http://deeplearning.net/software/theano/library/printing.html
http://deeplearning.net/software/theano/tutorial/printing_drawing.html
To print the graph, futher libraries must be installed. In 99% of your development time you don't need the graph printing function. Feel free to skip this section
End of explanation
"""
import theano
import theano.tensor as T
import numpy as np
# Put your code here
"""
Explanation: The graph fo z:
<img src="files/pics/z_graph.png">
The graph for f:
<img src="files/pics/f_graph.png">
Simple matrix multiplications
The following types for input variables are typically used:
byte: bscalar, bvector, bmatrix, btensor3, btensor4
16-bit integers: wscalar, wvector, wmatrix, wtensor3, wtensor4
32-bit integers: iscalar, ivector, imatrix, itensor3, itensor4
64-bit integers: lscalar, lvector, lmatrix, ltensor3, ltensor4
float: fscalar, fvector, fmatrix, ftensor3, ftensor4
double: dscalar, dvector, dmatrix, dtensor3, dtensor4
complex: cscalar, cvector, cmatrix, ctensor3, ctensor4
scalar: One element (one number)
vector: 1-dimension
matrix: 2-dimensions
tensor3: 3-dimensions
tensor4: 4-dimensions
As we do not need perfect precision we use mainly float instead of double. Most GPUs are also not able to handle doubles.
So in practice you need: iscalar, ivector, imatrix and fscalar, fvector, vmatrix.
Task: Implement the function $$f(x,W,b) = \tanh(xW+b)$$ with $x \in \mathbb{R}^n, b \in \mathbb{R}^k, W \in \mathbb{R}^{n \times k}$.
$n$ input dimension and $k$ output dimension
End of explanation
"""
inputX = np.asarray([0.1, 0.2, 0.3], dtype='float32')
inputW = np.asarray([[0.1,-0.2],[-0.4,0.5],[0.6,-0.7]], dtype='float32')
inputB = np.asarray([0.1,0.2], dtype='float32')
print "inputX.shape",inputX.shape
print "inputW.shape",inputW.shape
f(inputX, inputW, inputB)
"""
Explanation: Next we define some NumPy-Array with data and let Theano compute the result for $f(x,W,b)$
End of explanation
"""
import theano
import theano.tensor as T
import numpy as np
#Define my internal state
init_value = 1
state = theano.shared(value=init_value, name='state')
#Define my operation f(x) = 2*x
x = T.lscalar('x')
z = 2*x
accumulator = theano.function(inputs=[], outputs=z, givens={x: state})
print accumulator()
print accumulator()
"""
Explanation: Don't confuse x,W, b with inputX, inputW, inputB. x,W,b contain pointer to your symbols in the compute graph. inputX,inputW,inputB contains your data.
Shared Variables and Updates
See: http://deeplearning.net/software/theano/tutorial/examples.html#using-shared-variables
Using shared variables, we can create an internal state.
Creation of a accumulator:
At the beginning initialize the state to 0
With each function call update the state by certain value
Later, in your neural networks, the weight matrices $W$ and the bias values $b$ will be stored as internal state / as shared variable.
Shared variables improve performance, as you need less transfer between your Python code and the execution of the compute graph (which is written & compiled from C code)
Shared variables can also be stored on your graphics card
End of explanation
"""
#New accumulator function, now with an update
# Put your code here to update the internal counter
print accumulator(1)
print accumulator(1)
print accumulator(1)
"""
Explanation: Shared Variables
We use theano.shared() to share a variable (i.e. make it internally available for Theano)
Internal state variables are passed by compile time via the parameter givens. So to compute the ouput z, use the shared variable state for the input variable x
For information on the borrow=True parameter see: http://deeplearning.net/software/theano/tutorial/aliasing.html
In most cases we can set it to true and increase by this the performance.
Updating Shared Variables
Using the updates-parameter, we can specify how our shared variables should be updated
This is useful to create a train function for a neural network.
We create a function train(data) which computes the error and gradient
The computed gradient is then used in the same call to update the shared weights
Training just becomes: for mini_batch in mini_batches: train(mini_batch)
End of explanation
"""
|
cmorgan/pysystemtrade | examples/introduction/asimpletradingrule.ipynb | gpl-3.0 | from sysdata.csvdata import csvFuturesData
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Simple Trading Rule
End of explanation
"""
data = csvFuturesData()
data
"""
Explanation: Work up a minimum example of a trend following system
Let's get some data
We can get data from various places; however for now we're going to use
prepackaged 'legacy' data stored in csv files
End of explanation
"""
print(data.get_instrument_list())
print(data.get_raw_price("EDOLLAR").tail(5))
"""
Explanation: We get stuff out of data with methods
End of explanation
"""
data['SP500']
data.keys()
"""
Explanation: data can also behave in a dict-like manner (though it's not a dict)
End of explanation
"""
data.get_instrument_raw_carry_data("EDOLLAR").tail(6)
"""
Explanation: ... however this will only access prices
(note these prices have already been backadjusted for rolls)
We have extra futures data here
End of explanation
"""
import pandas as pd
from syscore.algos import robust_vol_calc
def calc_ewmac_forecast(price, Lfast, Lslow=None):
"""
    Calculate the ewmac trading rule forecast, given a price series and the
    EWMA speeds Lfast and Lslow
"""
# price: This is the stitched price series
# We can't use the price of the contract we're trading, or the volatility
# will be jumpy
# And we'll miss out on the rolldown. See
# http://qoppac.blogspot.co.uk/2015/05/systems-building-futures-rolling.html
price = price.resample("1B").last()
if Lslow is None:
Lslow = 4 * Lfast
# We don't need to calculate the decay parameter, just use the span
# directly
fast_ewma = price.ewm(span=Lfast).mean()
slow_ewma = price.ewm(span=Lslow).mean()
raw_ewmac = fast_ewma - slow_ewma
vol = robust_vol_calc(price.diff())
return raw_ewmac / vol
"""
Explanation: Technical note: csvFuturesData inherits from FuturesData, which itself inherits
from Data.
The chain is 'data specific' <- 'asset class specific' <- 'generic'.
So there are also other data classes following the same pattern; in principle
there could be an equities data class.
Let's create a simple trading rule
No capping or scaling
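The core of the rule is just "fast EWMA minus slow EWMA". A self-contained sketch on a synthetic rising price series shows the idea in plain Python; note it uses a simple recursive EWMA and illustrative spans, not pandas' adjusted weighting or the 32/128 speeds used below:

```python
def ewma(xs, span):
    # Simple recursive exponential moving average.
    alpha = 2.0 / (span + 1.0)
    out, m = [], None
    for x in xs:
        m = x if m is None else alpha * x + (1 - alpha) * m
        out.append(m)
    return out

price = [float(t) for t in range(1, 51)]  # steadily rising series
signal = [f - s for f, s in zip(ewma(price, 8), ewma(price, 32))]
# In a sustained uptrend the fast average sits above the slow one,
# so the (unscaled) forecast is positive.
```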
End of explanation
"""
instrument_code = 'EDOLLAR'
price = data.daily_prices(instrument_code)
ewmac = calc_ewmac_forecast(price, 32, 128)
ewmac.name = 'forecast'  # label the Series (a Series has no .columns attribute)
ewmac.tail(5)
import matplotlib.pyplot as plt  # needed for the labelling calls below
ewmac.plot();
plt.title('Forecast')
plt.ylabel('Position')
plt.xlabel('Time')
"""
Explanation: Try it out
(this isn't properly scaled at this stage of course)
End of explanation
"""
from syscore.accounting import accountCurve
accountCurve?
account = accountCurve(price, forecast=ewmac)
account.curve().plot();
plt.title('Profit and Loss')
plt.ylabel('PnL')
plt.xlabel('Time');
account.percent().stats()
"""
Explanation: Did we make money?
End of explanation
"""
|
mathLab/RBniCS | tutorials/02_elastic_block/tutorial_elastic_block.ipynb | lgpl-3.0 | from dolfin import *
from rbnics import *
"""
Explanation: TUTORIAL 02 - Elastic block problem
Keywords: POD-Galerkin method, vector problem
1. Introduction
In this Tutorial we consider a linear elasticity problem in a two-dimensional square domain $\Omega$.
The domain is partitioned into nine square subdomains, as in the following figure
<img src="data/elastic_block.png" width="35%" />
Parameters of this problem include the Young moduli of the subdomains, as well as the lateral traction on the right side of the square. In particular:
* the ratio between the Young modulus of each subdomain $\Omega_{p+1}$, $p=0,\dots,7$, and that of the top-right subdomain $\Omega_9$ is denoted by $\mu_p$, with
$$
\mu_p \in \left[1, 100\right] \qquad \text{for } p=0,\dots,7;
$$
* the horizontal traction on each boundary $\Gamma_{p-6}$, $p=8,\dots,10$, is denoted by $\mu_p$, with
$$
\mu_p \in \left[-1,1\right] \qquad \text{for } p=8,\dots, 10.
$$
As for the remaining boundaries, the left boundary $\Gamma_6$ is clamped, while the top and bottom boundaries $\Gamma_1 \cup \Gamma_5$ are traction free.
The parameter vector $\boldsymbol{\mu}$ is thus given by
$$
\boldsymbol{\mu} = (\mu_0, \cdots,\mu_{10})
$$
on the parameter domain
$$
\mathbb{P}=[1,100]^8\times[-1,1]^3.
$$
In order to obtain a faster approximation of the problem we pursue a model reduction by means of a POD-Galerkin reduced order method.
2. Parametrized formulation
Let $\boldsymbol{u}(\boldsymbol{\mu})$ be the displacement in the domain $\Omega$.
In each subdomain $\Omega_{p+1}$, $p=0,\dots,7$, we assume an isotropic linear elastic material, characterized by the following Lamé constants for plane strain
$$\lambda_1(\mu_p) = \frac{\mu_p \nu}{(1+\nu)(1-2\nu)},$$
$$\lambda_2(\mu_p) = \frac{\mu_p}{2(1+\nu)},$$
for $\nu=0.30$, with the following Piola-Kirchhoff tensor
$$
\boldsymbol{\pi}(\boldsymbol{u}; \mu_p) =
\lambda_1(\mu_p)\;\text{tr}\left[\nabla_{S}\boldsymbol{u}\right]\; \boldsymbol{I} +
2\;\lambda_2(\mu_p)\;\nabla_{S}\boldsymbol{u}
$$
where $\nabla_{S}$ denotes the symmetric part of the gradient.
Similarly, the Piola-Kirchhoff tensor in the top right subdomain $\Omega_9$ is given by $\boldsymbol{\pi}(\boldsymbol{u}; 1)$.
Thus, the Piola-Kirchhoff tensor on the domain $\Omega$ can be obtained as
$$
\boldsymbol{P}(\boldsymbol{u}; \boldsymbol{\mu}) =
\Lambda_1(\boldsymbol{\mu})\;\text{tr}\left[\nabla_{S}\boldsymbol{u}\right]\; \boldsymbol{I} +
2\;\Lambda_2(\boldsymbol{\mu})\;\nabla_{S}\boldsymbol{u}
$$
where
$$
\Lambda_1(\boldsymbol{\mu}) = \sum_{p=0}^{7} \lambda_1(\mu_p) \, \mathbb{1}_{\Omega_{p+1}} + \lambda_1(1) \, \mathbb{1}_{\Omega_{9}}
$$
$$
\Lambda_2(\boldsymbol{\mu}) = \sum_{p=0}^{7} \lambda_2(\mu_p) \, \mathbb{1}_{\Omega_{p+1}} + \lambda_2(1) \, \mathbb{1}_{\Omega_{9}}
$$
The strong formulation of the parametrized problem is given by:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $\boldsymbol{u}(\boldsymbol{\mu})$ such that</center>
$$
\begin{cases}
- \text{div} \, \boldsymbol{P}(\boldsymbol{u}(\boldsymbol{\mu}); \boldsymbol{\mu}) = \boldsymbol{0} & \text{in } \Omega,\\
\boldsymbol{P}(\boldsymbol{u}(\boldsymbol{\mu}); \boldsymbol{\mu}) \, \mathbf{n} = \mathbf{0} & \text{on } \Gamma_{1},\\
\boldsymbol{P}(\boldsymbol{u}(\boldsymbol{\mu}); \boldsymbol{\mu}) \, \mathbf{n} = \mu_p \mathbf{n} & \text{on } \Gamma_{p-6}, \; p=8,\dots, 10,\\
\boldsymbol{P}(\boldsymbol{u}(\boldsymbol{\mu}); \boldsymbol{\mu}) \, \mathbf{n} = \mathbf{0} & \text{on } \Gamma_{5},\\
\boldsymbol{u}(\boldsymbol{\mu}) = \boldsymbol{0} & \text{on } \Gamma_{6},
\end{cases}
$$
<br>
where $\mathbf{n}$ denotes the outer normal to the boundary $\partial\Omega$.
The corresponding weak formulation reads:
<center>for a given parameter $\boldsymbol{\mu}\in\mathbb{P}$, find $u(\boldsymbol{\mu})\in\mathbb{V}$ such that</center>
$$a\left(u(\boldsymbol{\mu}),v;\boldsymbol{\mu}\right)=f(v;\boldsymbol{\mu})\quad \forall v\in\mathbb{V}$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = \{\boldsymbol{v}\in H^1(\Omega; \mathbb{R}^2) : \boldsymbol{v}|_{\Gamma_{6}}=\boldsymbol{0}\}
$$
the parametrized bilinear form $a(\cdot, \cdot; \boldsymbol{\mu}): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(\boldsymbol{u}, \boldsymbol{v}; \boldsymbol{\mu})=\int_{\Omega}
\left\{
\Lambda_1(\boldsymbol{\mu})\;\text{tr}\left[\nabla_{S}\boldsymbol{u}\right]\;\text{tr}\left[\nabla_{S}\boldsymbol{v}\right] + 2\;\Lambda_2(\boldsymbol{\mu})\;\nabla_{S}\boldsymbol{u} : \nabla_{S}\boldsymbol{v}
\right\} d\boldsymbol{x},
$$
the parametrized linear form $f(\cdot; \boldsymbol{\mu}): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(\boldsymbol{v}; \boldsymbol{\mu})= \sum_{p=8}^{10} \mu_p \int_{\Gamma_{p-6}} \boldsymbol{v} \cdot \mathbf{n} \, ds.$$
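As a quick numeric sketch of the Lamé formulas above (plain Python; the values in the comment are rounded):

```python
def lame_constants(mu_p, nu=0.30):
    # Plane-strain Lamé constants for a Young modulus ratio mu_p
    lambda_1 = mu_p * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    lambda_2 = mu_p / (2.0 * (1.0 + nu))
    return lambda_1, lambda_2

# Reference subdomain (mu_p = 1) with nu = 0.30
l1, l2 = lame_constants(1.0)
print(l1, l2)  # roughly 0.5769 and 0.3846
```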
End of explanation
"""
class ElasticBlock(EllipticCoerciveProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
EllipticCoerciveProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.u = TrialFunction(V)
self.v = TestFunction(V)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# ...
self.f = Constant((1.0, 0.0))
self.E = 1.0
self.nu = 0.3
self.lambda_1 = self.E * self.nu / ((1.0 + self.nu) * (1.0 - 2.0 * self.nu))
self.lambda_2 = self.E / (2.0 * (1.0 + self.nu))
# Return custom problem name
def name(self):
return "ElasticBlock"
# Return theta multiplicative terms of the affine expansion of the problem.
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = mu[0]
theta_a1 = mu[1]
theta_a2 = mu[2]
theta_a3 = mu[3]
theta_a4 = mu[4]
theta_a5 = mu[5]
theta_a6 = mu[6]
theta_a7 = mu[7]
theta_a8 = 1.
return (theta_a0, theta_a1, theta_a2, theta_a3, theta_a4, theta_a5, theta_a6, theta_a7, theta_a8)
elif term == "f":
theta_f0 = mu[8]
theta_f1 = mu[9]
theta_f2 = mu[10]
return (theta_f0, theta_f1, theta_f2)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
def assemble_operator(self, term):
v = self.v
dx = self.dx
if term == "a":
u = self.u
a0 = self.elasticity(u, v) * dx(1)
a1 = self.elasticity(u, v) * dx(2)
a2 = self.elasticity(u, v) * dx(3)
a3 = self.elasticity(u, v) * dx(4)
a4 = self.elasticity(u, v) * dx(5)
a5 = self.elasticity(u, v) * dx(6)
a6 = self.elasticity(u, v) * dx(7)
a7 = self.elasticity(u, v) * dx(8)
a8 = self.elasticity(u, v) * dx(9)
return (a0, a1, a2, a3, a4, a5, a6, a7, a8)
elif term == "f":
ds = self.ds
f = self.f
f0 = inner(f, v) * ds(2)
f1 = inner(f, v) * ds(3)
f2 = inner(f, v) * ds(4)
return (f0, f1, f2)
elif term == "dirichlet_bc":
bc0 = [DirichletBC(self.V, Constant((0.0, 0.0)), self.boundaries, 6)]
return (bc0,)
elif term == "inner_product":
u = self.u
x0 = inner(u, v) * dx + inner(grad(u), grad(v)) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Auxiliary function to compute the elasticity bilinear form
def elasticity(self, u, v):
lambda_1 = self.lambda_1
lambda_2 = self.lambda_2
return 2.0 * lambda_2 * inner(sym(grad(u)), sym(grad(v))) + lambda_1 * tr(sym(grad(u))) * tr(sym(grad(v)))
"""
Explanation: 3. Affine decomposition
For this problem the affine decomposition is straightforward. Indeed, owing to the definitions of $\Lambda_1(\boldsymbol{\mu})$ and $\Lambda_2(\boldsymbol{\mu})$, we have:
$$
a(\boldsymbol{u}, \boldsymbol{v}; \boldsymbol{\mu}) = \sum_{p=0}^7 \underbrace{\mu_{\color{red} p}}_{\Theta^{a}_{\color{red} p}(\boldsymbol{\mu})} \underbrace{\int_{\Omega_{\color{red}{p + 1}}}
\left\{\lambda_1(1)\;\text{tr}\left[\nabla_{S}\boldsymbol{u}\right]\;\text{tr}\left[\nabla_{S}\boldsymbol{v}\right] + 2\;\lambda_2(1)\;\nabla_{S}\boldsymbol{u} : \nabla_{S}\boldsymbol{v} \right\} d\boldsymbol{x}}_{a_{\color{red} p}(\boldsymbol{u}, \boldsymbol{v})} + \\
\underbrace{1}_{\Theta^{a}_{\color{red} 8}(\boldsymbol{\mu})} \underbrace{\int_{\Omega_{\color{red} 9}}
\left\{\lambda_1(1)\;\text{tr}\left[\nabla_{S}\boldsymbol{u}\right]\;\text{tr}\left[\nabla_{S}\boldsymbol{v}\right] + 2\;\lambda_2(1)\;\nabla_{S}\boldsymbol{u} : \nabla_{S}\boldsymbol{v} \right\} d\boldsymbol{x}}_{a_{\color{red} 8}(\boldsymbol{u}, \boldsymbol{v})}
$$
$$
f(\boldsymbol{v}; \boldsymbol{\mu}) =
\sum_{p=8}^{10} \underbrace{\mu_{\color{red} p}}_{\Theta^{f}_{\color{red}{p-8}}(\boldsymbol{\mu})} \underbrace{\int_{\Gamma_{\color{red}{p-6}}} \boldsymbol{v} \cdot \mathbf{n} \, ds}_{f_{\color{red}{p-8}}(\boldsymbol{v})}.
$$
We will implement the numerical discretization of the problem in the class
class ElasticBlock(EllipticCoerciveProblem):
by specifying the coefficients $\Theta^{a}_*(\boldsymbol{\mu})$ and $\Theta^{f}_*(\boldsymbol{\mu})$ in the method
def compute_theta(self, term):
and the bilinear forms $a_*(\boldsymbol{u}, \boldsymbol{v})$ and linear forms $f_*(\boldsymbol{v})$ in
def assemble_operator(self, term):
End of explanation
"""
mesh = Mesh("data/elastic_block.xml")
subdomains = MeshFunction("size_t", mesh, "data/elastic_block_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/elastic_block_facet_region.xml")
"""
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
"""
V = VectorFunctionSpace(mesh, "Lagrange", 1)
"""
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
"""
problem = ElasticBlock(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [
(1.0, 100.0),
(1.0, 100.0),
(1.0, 100.0),
(1.0, 100.0),
(1.0, 100.0),
(1.0, 100.0),
(1.0, 100.0),
(1.0, 100.0),
(-1.0, 1.0),
(-1.0, 1.0),
(-1.0, 1.0)
]
problem.set_mu_range(mu_range)
"""
Explanation: 4.3. Allocate an object of the ElasticBlock class
End of explanation
"""
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20)
reduction_method.set_tolerance(2e-4)
"""
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
"""
reduction_method.initialize_training_set(100)
reduced_problem = reduction_method.offline()
"""
Explanation: 4.5. Perform the offline phase
End of explanation
"""
online_mu = (1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, -1.0, -1.0)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem)
"""
Explanation: 4.6. Perform an online solve
End of explanation
"""
reduction_method.initialize_testing_set(100)
reduction_method.error_analysis()
"""
Explanation: 4.7. Perform an error analysis
End of explanation
"""
reduction_method.initialize_testing_set(100)
reduction_method.speedup_analysis()
"""
Explanation: 4.8. Perform a speedup analysis
End of explanation
"""
|
pierresendorek/tensorflow_crescendo | simplest working tensorflow notebook ever.ipynb | lgpl-3.0 | import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import math
"""
Explanation: Simplest Working Tensorflow Notebook Ever
End of explanation
"""
# The true value of the coefficients, which we will estimate hereafter
a_true, b_true = 1.3, -0.7
n_sample = 1000
"""
Explanation: Generating data with numpy
End of explanation
"""
# Generate some abscissa
x = np.random.rand(n_sample)
# Generate the corresponding ordinate
y = a_true * x + b_true + np.random.randn(n_sample) * 0.1
plt.scatter(x,y)
"""
Explanation: Let's generate the samples
$ y_i = a x_i + b + \epsilon_i $
Where $\epsilon_i \sim \mathcal{N}(0,\sigma^2)$ with $\sigma = 0.1$
End of explanation
"""
a_estimated = tf.Variable(tf.truncated_normal([1]))
b_estimated = tf.Variable(tf.truncated_normal([1]))
"""
Explanation: Finding the coefficients a and b that minimize the sum of the square errors
Random initialization of the variables to estimate
End of explanation
"""
loss = 0.0
for i_sample in range(n_sample):
loss += tf.square(a_estimated * x[i_sample] + b_estimated - y[i_sample])
train_op = tf.train.AdamOptimizer(learning_rate=0.1, epsilon=0.1).minimize(loss)
"""
Explanation: Designing a loss function to minimize the sum of the square error
End of explanation
"""
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
"""
Explanation: Initialization of TensorFlow
End of explanation
"""
n_iter = 100
loss_list = []
for i in range(n_iter):
sess.run(train_op)
loss_list.append(sess.run(loss))
"""
Explanation: Gradient descent steps to optimize the value of a_estimated and b_estimated
End of explanation
"""
plt.plot(loss_list)
"""
Explanation: Showing the losses for each iteration
End of explanation
"""
a_b_estimated = sess.run([a_estimated, b_estimated])
a_b_estimated
"""
Explanation: And finally, let's see what are the estimated values
End of explanation
"""
a_true, b_true
"""
Explanation: Whereas the true values are
End of explanation
"""
a_est = a_b_estimated[0][0]
b_est = a_b_estimated[1][0]
y_est = a_est * x + b_est
plt.scatter(x,y,color = "b")
plt.scatter(x,y_est, color="r")
"""
Explanation: The values are close! Great job.
Let's visualize the fitted affine function
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/introduction_to_tensorflow/labs/4_keras_functional_api.ipynb | apache-2.0 | import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import feature_column as fc
from tensorflow import keras
from tensorflow.keras import Model
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures, Input, concatenate
print(tf.__version__)
%matplotlib inline
"""
Explanation: Introducing the Keras Functional API
Learning Objectives
1. Understand embeddings and how to create them with the feature column API
1. Understand Deep and Wide models and when to use them
1. Understand the Keras functional API and how to build a deep and wide model with it
Introduction
In the last notebook, we learned about the Keras Sequential API. The Keras Functional API provides an alternate way of building models which is more flexible. With the Functional API, we can build models with more complex topologies, multiple input or output layers, shared layers or non-sequential data flows (e.g. residual layers).
In this notebook we'll use what we learned about feature columns to build a Wide & Deep model. Recall, that the idea behind Wide & Deep models is to join the two methods of learning through memorization and generalization by making a wide linear model and a deep learning model to accommodate both. You can have a look at the original research paper here: Wide & Deep Learning for Recommender Systems.
<img src='assets/wide_deep.png' width='80%'>
<sup>(image: https://ai.googleblog.com/2016/06/wide-deep-learning-better-together-with.html)</sup>
The Wide part of the model is associated with the memory element. In this case, we train a linear model with a wide set of crossed features and learn the correlation of this related data with the assigned label. The Deep part of the model is associated with the generalization element where we use embedding vectors for features. The best embeddings are then learned through the training process. While both of these methods can work well alone, Wide & Deep models excel by combining these techniques together.
Start by importing the necessary libraries for this lab.
End of explanation
"""
!ls -l ../data/*.csv
"""
Explanation: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
End of explanation
"""
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch one batch ahead so the input pipeline overlaps training
    # (tf.data.AUTOTUNE lets TF pick the buffer size automatically)
dataset = dataset.prefetch(1)
return dataset
"""
Explanation: Use tf.data to read the CSV files
We wrote these functions for reading data from the CSV files in the previous notebook, and we reuse them here to build our training and evaluation datasets. The features_and_labels function below separates the label from the features dict and drops the columns we don't want to feed to the model.
End of explanation
"""
# 1. Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start=38.0, stop=42.0, num=NBUCKETS).tolist()
lonbuckets = np.linspace(start=-76.0, stop=-72.0, num=NBUCKETS).tolist()
fc_bucketized_plat = # TODO: Your code goes here.
fc_bucketized_plon = # TODO: Your code goes here.
fc_bucketized_dlat = # TODO: Your code goes here.
fc_bucketized_dlon = # TODO: Your code goes here.
# 2. Cross features for locations
fc_crossed_dloc = # TODO: Your code goes here.
fc_crossed_ploc = # TODO: Your code goes here.
fc_crossed_pd_pair = # TODO: Your code goes here.
# 3. Create embedding columns for the crossed columns
fc_pd_pair = # TODO: Your code goes here.
fc_dloc = # TODO: Your code goes here.
fc_ploc = # TODO: Your code goes here.
"""
Explanation: Feature columns for Wide and Deep model
For the Wide columns, we will create feature columns of crossed features. To do this, we'll create a collection of Tensorflow feature columns to pass to the tf.feature_column.crossed_column constructor. The Deep columns will consist of numeric columns and the embedding columns we want to create.
Exercise. In the cell below, create feature columns for our wide-and-deep model. You'll need to build
1. bucketized columns using tf.feature_column.bucketized_column for the pickup and dropoff latitude and longitude,
2. crossed columns using tf.feature_column.crossed_column for those bucketized columns, and
3. embedding columns using tf.feature_column.embedding_column for the crossed columns.
End of explanation
"""
# TODO 2
wide_columns = [
# One-hot encoded feature crosses
# TODO: Your code goes here.
]
deep_columns = [
# Embedding_columns
# TODO: Your code goes here.
# Numeric columns
# TODO: Your code goes here.
]
"""
Explanation: Gather list of feature columns
Next we gather the lists of wide and deep feature columns to pass to our Wide & Deep model in TensorFlow. Recall that wide columns are sparse and have a linear relationship with the output, while deep columns are dense and can capture more complex relationships with the output. We will use our previously bucketized columns to collect crossed feature columns and sparse feature columns for our wide columns, and embedding feature columns and numeric feature columns for the deep columns.
Exercise. Collect the wide and deep columns into two separate lists. You'll have two lists: One called wide_columns containing the one-hot encoded features from the crossed features and one called deep_columns which contains numeric and embedding feature columns.
End of explanation
"""
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
inputs = {
colname: Input(name=colname, shape=(), dtype="float32")
for colname in INPUT_COLS
}
"""
Explanation: Build a Wide and Deep model in Keras
To build a wide-and-deep network, we connect the sparse (i.e. wide) features directly to the output node, but pass the dense (i.e. deep) features through a set of fully connected layers. Here’s that model architecture looks using the Functional API.
First, we'll create our input columns using tf.keras.layers.Input.
End of explanation
"""
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_model(dnn_hidden_units):
# Create the deep part of model
deep = # TODO: Your code goes here.
# Create the wide part of model
wide = # TODO: Your code goes here.
# Combine deep and wide parts of the model
combined = # TODO: Your code goes here.
# Map the combined outputs into a single prediction value
output = # TODO: Your code goes here.
# Finalize the model
model = # TODO: Your code goes here.
# Compile the keras model
model.compile(
# TODO: Your code goes here.
)
return model
"""
Explanation: Then, we'll define our custom RMSE evaluation metric and build our wide and deep model.
Exercise. Complete the code in the function build_model below so that it returns a compiled Keras model. The argument dnn_hidden_units should represent the number of units in each layer of your network. Use the Functional API to build a wide-and-deep model. Use the deep_columns you created above to build the deep layers and the wide_columns to create the wide layers. Once you have the wide and deep components, you will combine them to feed to a final fully connected layer.
End of explanation
"""
HIDDEN_UNITS = [10, 10]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
tf.keras.utils.plot_model(model, show_shapes=False, rankdir="LR")
"""
Explanation: Next, we can call the build_model to create the model. Here we'll have two hidden layers, each with 10 neurons, for the deep part of our model. We can also use plot_model to see a diagram of the model we've created.
End of explanation
"""
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=BATCH_SIZE, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
OUTDIR = "./taxi_trained"
shutil.rmtree(path=OUTDIR, ignore_errors=True) # start fresh each time
history = model.fit(
x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(OUTDIR)],
)
"""
Explanation: Next, we'll set up our training variables, create our datasets for training and validation, and train our model.
(We refer you to the blog post ML Design Pattern #3: Virtual Epochs for further details on why we express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.)
End of explanation
"""
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
"""
Explanation: Just as before, we can examine the history to see how the RMSE changes through training on the train set and validation set.
End of explanation
"""
|
martinggww/lucasenlights | MachineLearning/DataScience-Python3/TrainTest.ipynb | cc0-1.0 | %matplotlib inline
import numpy as np
from pylab import *
np.random.seed(2)
pageSpeeds = np.random.normal(3.0, 1.0, 100)
purchaseAmount = np.random.normal(50.0, 30.0, 100) / pageSpeeds
scatter(pageSpeeds, purchaseAmount)
"""
Explanation: Train / Test
We'll start by creating some data set that we want to build a model for (in this case a polynomial regression):
End of explanation
"""
trainX = pageSpeeds[:80]
testX = pageSpeeds[80:]
trainY = purchaseAmount[:80]
testY = purchaseAmount[80:]
"""
Explanation: Now we'll split the data in two - 80% of it will be used for "training" our model, and the other 20% for testing it. This way we can avoid overfitting.
End of explanation
"""
scatter(trainX, trainY)
"""
Explanation: Here's our training dataset:
End of explanation
"""
scatter(testX, testY)
"""
Explanation: And our test dataset:
End of explanation
"""
x = np.array(trainX)
y = np.array(trainY)
p4 = np.poly1d(np.polyfit(x, y, 8))
"""
Explanation: Now we'll try to fit an 8th-degree polynomial to this data (which is almost certainly overfitting, given what we know about how it was generated!)
End of explanation
"""
import matplotlib.pyplot as plt
xp = np.linspace(0, 7, 100)
axes = plt.axes()
axes.set_xlim([0,7])
axes.set_ylim([0, 200])
plt.scatter(x, y)
plt.plot(xp, p4(xp), c='r')
plt.show()
"""
Explanation: Let's plot our polynomial against the training data:
End of explanation
"""
testx = np.array(testX)
testy = np.array(testY)
axes = plt.axes()
axes.set_xlim([0,7])
axes.set_ylim([0, 200])
plt.scatter(testx, testy)
plt.plot(xp, p4(xp), c='r')
plt.show()
"""
Explanation: And against our test data:
End of explanation
"""
from sklearn.metrics import r2_score
r2 = r2_score(testy, p4(testx))
print(r2)
"""
Explanation: Doesn't look that bad when you just eyeball it, but the r-squared score on the test data is kind of horrible! This tells us that our model isn't all that great...
End of explanation
"""
from sklearn.metrics import r2_score
r2 = r2_score(np.array(trainY), p4(np.array(trainX)))
print(r2)
"""
Explanation: ...even though it fits the training data better:
End of explanation
"""
|
eflautt/ga-data-science | AdmissionsProject/Part2/starter-code/Flautt-project2-submission.ipynb | mit | #imports
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import pylab as pl
import numpy as np
%matplotlib inline
"""
Explanation: Project 2
In this project, you will implement the exploratory analysis plan developed in Project 1. This will lay the groundwork for our our first modeling exercise in Project 3.
Step 1: Load the python libraries you will need for this project
End of explanation
"""
#Read in data from source
df_raw = pd.read_csv("../assets/admissions.csv")
print df_raw.head()
"""
Explanation: Step 2: Read in your data set
End of explanation
"""
df_raw['admit'].count()
df_raw['gpa'].count()
df_raw.shape
rows,columns = df_raw.shape
print(rows)
print(columns)
"""
Explanation: Questions
Question 1. How many observations are in our dataset?
End of explanation
"""
#function
def summary_table(df):
    #creates and returns a summary table for a dataframe using .describe()
    return df.describe()

print summary_table(df_raw)
"""
Explanation: Answer: 400 observations. These 400 observations are displayed within 4 rows of 400 observations each.
Question 2. Create a summary table
End of explanation
"""
df_raw = df_raw.dropna()
#drops any rows with missing data from the admissions.csv dataset (reassigning so the change sticks)
#returns 397 observations (complete observation rows) across 4 columns
#3 rows had missing, incomplete, NaN data present
"""
Explanation: Question 3. Why would GRE have a larger STD than GPA?
Answer: The GRE variable has a larger 'std' value since the range of GRE scores varies from 220 to 800 while the range for GPA varies from 2.26 to 4.00.
Question 4. Drop data points with missing data
End of explanation
"""
#boxplot for GRE column data
df_raw.boxplot(column = 'gre', return_type = 'axes')
#boxplot for GPA column data
df_raw.boxplot(column = 'gpa', return_type = 'axes')
"""
Explanation: Question 5. Confirm that you dropped the correct data. How can you tell?
Answer: Code in question one returned 400 observations across 4 columns. Culled data using the '.dropna()' method returns 397 observational rows, implying that three rows have been removed due to NaN data being present.
Question 6. Create box plots for GRE and GPA
End of explanation
"""
# distribution plot of 'admit' variable with mean
df_raw.admit.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.admit.mean(), # Plot black line at mean
ymin=0,
ymax=2.0,
linewidth=4.0)
# distribution plot of 'gre' variable with mean
df_raw.gre.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.gre.mean(), # Plot black line at mean
ymin=0,
ymax=0.0035,
linewidth=4.0)
# distribution plot of 'gpa' variable with mean
df_raw.gpa.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.gpa.mean(), # Plot black line at mean
ymin=0,
ymax=1.0,
linewidth=4.0)
# distribution plot of 'prestige' variable with mean
df_raw.prestige.plot(kind = 'density', sharex = False, sharey = False, figsize = (10,4));plt.legend(loc='best')
#
plt.vlines(df_raw.prestige.mean(), # Plot black line at mean
ymin=0,
ymax=0.6,
linewidth=4.0)
"""
Explanation: Question 7. What do this plots show?
Answer:
GRE Boxplot:
The mean for this variable lies just south of 600 (around 580) and the interquartile range lies between 650 and 510 as indicated by the blue square. The box plot displays a significant outlier at 300 which has not been included into the range as it falls well outside the acceptable standard deviation from the mean. Further, this value is below the lower extreme of variable GPA.
GPA Boxplot:
The mean GPA value falls right at ~3.40 with the interquartile range falling between ~3.64 at the upper quartile and ~3.18 at the lower quartile. The lower extreme of this data is right at 2.4 while the upper extreme extends beyond 4.00 despite the maximum of this data being 4.00.
Question 8. Describe each distribution
End of explanation
"""
# correlation matrix for variables in df_raw
df_raw.corr()
"""
Explanation: Question 9. If our model had an assumption of a normal distribution would we meet that requirement?
Answer: We would not meet that requirement as only the variable 'gre' displays itself in a quasi normal distribution. The variables for admit, gpa, and prestige are abnormally distributed.
Question 10. Does this distribution need correction? If so, why? How?
Answer: Yes, this data does need to be corrected. If we are to compare these variables through linear regression or other statistics inferential tools, the data must be normalized in order to conform to a more normal distribution.
We can accomplish this by creating a new, normalized dataframe like so:
df_norm = (df_raw - df_raw.mean()) / (df_raw.max() - df_raw.min())
Sourced solution for normalization:
http://stackoverflow.com/questions/12525722/normalize-data-in-pandas
Question 11. Which of our variables are potentially collinear?
End of explanation
"""
#utilized this stackoverflow.com resource to attempt to impute missing data
#(http://stackoverflow.com/questions/21050426/pandas-impute-nans)
#data imputation for variable 'admit'
#first commented out line of code will not run. Had errors with "keys" in ...df_raw.groupby('keys')...
#df_raw['admit'].fillna(df_raw.groupby('keys')['admit'].transform('mean'), inplace = True)
df_raw['admit'].fillna(df_raw['admit'].mean(), inplace = True)
"""
Explanation: Question 12. What did you find?
Answer:
The strongest, most interesting correlation between two variables exists between 'admit' and 'prestige'. The two variables are negatively correlated (-0.241). This would imply that as the prestige rank of your school increases by one unit, your likelihood of admission to UCLA tends to decrease (a correlation of about -0.24), holding all other variables constant.
GPA and GRE variables are positively correlated, in that higher GPAs tend to go with higher GRE scores (a correlation of about 0.38).
Question 13. Write an analysis plan for exploring the association between grad school admissions rates and prestige of undergraduate schools.
Answer:
I will examine the relationship between the variables 'prestige' and 'admit' in the admissions.csv dataset in order to determine whether the two variables are correlated and whether the association is consistent with a causal link. Further, I will determine whether this relationship is statistically significant.
Question 14. What is your hypothesis?
Answer:
H1 = There exists a statistically significant relationship between undergraduate school prestige ('prestige') and admission ('admit').
H0 = There is no statistically significant relationship between undergraduate school prestige ('prestige') and admission ('admit').
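Before any formal test of these hypotheses, the association can be eyeballed by tabulating admission rates per prestige tier; a sketch with invented observations (only the column names come from the assignment):

```python
import pandas as pd

# Invented observations; in the assignment these would come from admissions.csv
df = pd.DataFrame({"admit":    [1, 1, 1, 0, 0, 0],
                   "prestige": [1, 1, 2, 2, 3, 3]})

# Counts of rejections/admissions per tier, and the admission rate per tier
table = pd.crosstab(df["prestige"], df["admit"])
rates = df.groupby("prestige")["admit"].mean()
```

A chi-square test on `table` (e.g. `scipy.stats.chi2_contingency`) would then give a p-value with which to judge the hypotheses above.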
Bonus/Advanced
1. Bonus: Explore alternatives to dropping observations with missing data
2. Bonus: Log transform the skewed data
3. Advanced: Impute missing data
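For the log-transform bonus, the idea in miniature (invented right-skewed values; only the technique is from the assignment):

```python
import math

import pandas as pd

# A small right-skewed sample: log compresses the long right tail
skewed = pd.Series([1.0, 2.0, 4.0, 8.0, 64.0])
logged = skewed.apply(math.log)
```

The raw values span a range of 63 while the logged values span only about 4.16, which is what pulls a long right tail back toward symmetry; a shift such as `x + 1` is needed first if zeros are present.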
End of explanation
"""
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-3', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
mikekestemont/ghent1516 | Chapter 7 - More on Loops.ipynb | mit | for i in range(10):
print(i)
"""
Explanation: Chapter 7: More on Loops
In the previous chapters we have often discussed the powerful concept of looping in Python. Using loops, we can easily repeat certain actions when coding. With for-loops, for instance, it is really easy to visit the items in a list and, for example, print them. In this chapter, we will discuss some more advanced forms of looping, as well as new, quick ways to create and deal with lists and other data sequences.
Range
The first new function that we will discuss here is range(). Using this function, we can quickly generate a list of numbers in a specific range:
End of explanation
"""
for i in range(300, 306):
print(i)
"""
Explanation: Here, range() will return a series of integers, starting from zero, up to (but not including) the number which we pass as an argument to the function. Using range() is of course much more convenient for generating such lists of numbers than writing e.g. a while-loop to achieve the same result. Note that we can pass more than one argument to range() if we want to start counting from a number higher than zero (zero being the default when you pass only a single argument to the function):
End of explanation
"""
for i in range(15, 26, 3):
print(i)
"""
Explanation: We can even specify a 'step size' as a third argument, which controls how much a variable will increase with each step:
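As a side note, the step may also be negative, in which case range() counts downwards:

```python
# A negative step makes range() count downwards, stopping before the end value:
for i in range(5, 0, -1):
    print(i)
```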
End of explanation
"""
numbers = list(range(10))
print(numbers[3:])
"""
Explanation: If you don't specify the step size explicitly, it will default to 1. If you want to store or print the result of calling range(), you have to cast it explicitly, for instance, to a list:
End of explanation
"""
words = "Be yourself; everyone else is already taken".split()
for i in range(len(words)):
print(words[i])
"""
Explanation: Enumerate
Of course, range() can also be used to iterate over the items in a list or tuple, typically in combination with calling len() to avoid IndexErrors:
End of explanation
"""
for word in words:
print(word)
"""
Explanation: Naturally, the same result can just as easily be obtained using a for-loop:
End of explanation
"""
counter = 0
for word in words:
print(word, ": index", counter)
counter+=1
"""
Explanation: One drawback of such an easy-to-write loop, however, is that it doesn't keep track of the index of the word that we are printing in one of the iterations. Suppose that we would like to print the index of each word in our example above, we would then have to work with a counter...
End of explanation
"""
for i in range(len(words)):
print(words[i], ": index", i)
"""
Explanation: ... or indeed use a call to range() and len():
End of explanation
"""
print(list(enumerate(words)))
"""
Explanation: A function that makes life in Python much easier in this respect is enumerate(). If we pass a list to enumerate(), it will return a list of mini-tuples: each mini-tuple contains the index of an item as its first element, and the actual item as its second element:
End of explanation
"""
for mini_tuple in enumerate(words):
print(mini_tuple)
"""
Explanation: Here -- as with range() -- we have to cast the result of enumerate() to e.g. a list before we can actually print it. Iterating over the result of enumerate(), on the other hand, is not a problem. Here, we print out each mini-tuple, consisting of an index and an item, in a for-loop:
End of explanation
"""
item = (5, 'already')
index, word = item # this is the same as: index, word = (5, "already")
print(index)
print(word)
"""
Explanation: When using such for-loops and enumerate(), we can do something really cool. Remember that we can 'unpack' tuples: if a tuple consists of two elements, we can unpack it on one line of code to two different variables via the assignment operator:
End of explanation
"""
for item in enumerate(words):
index, word = item
print(index)
print(word)
print("=======")
"""
Explanation: In our for-loop example, we can apply the same kind of unpacking in each iteration:
End of explanation
"""
for index, word in enumerate(words):
print(index)
print(word)
print("====")
"""
Explanation: However, there is also a super-convenient shortcut for this in Python, where we unpack each item in the for-statement already:
End of explanation
"""
for i, word in enumerate(words):
print(word, ": index", i)
"""
Explanation: How cool is that? Note how easy it now becomes to solve our problem with the index above:
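Incidentally, enumerate() also accepts an optional second argument, start, in case you would rather begin counting from a number other than zero:

```python
words = "Be yourself ; everyone else is already taken".split()
# Start counting from 1 instead of the default 0:
for i, word in enumerate(words, start=1):
    print(word, ": position", i)
```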
End of explanation
"""
titles = ["Emma", "Stoner", "Inferno", "1984", "Aeneid"]
authors = ["J. Austen", "J. Williams", "D. Alighieri", "G. Orwell", "P. Vergilius"]
dates = ["1815", "2006", "Ca. 1321", "1949", "before 19 BC"]
"""
Explanation: Zip
Obviously, enumerate() can be really useful when you're working lists or other kinds of data sequences. Another helpful function in this respect is zip(). Supposed that we have a small database of 5 books in the forms of three lists: the first list contains the titles of the books, the second the author, while the third list contains the dates of publication:
End of explanation
"""
list(zip(titles, authors))
list(zip(titles, dates))
list(zip(authors, dates))
"""
Explanation: In each of these lists, the third item always corresponds to Dante's masterpiece and the last item to the Aeneid by Vergil, which inspired him. The use of zip() can now easily be illustrated:
End of explanation
"""
list(zip(authors, titles, dates))
"""
Explanation: Do you see what happened here? In fact, zip() really functions like a 'zipper' in the real world: it zips together multiple lists, and returns a list of mini-tuples in which the correct authors, titles and dates are combined with each other. Moreover, you can pass multiple sequences to zip() at once:
End of explanation
"""
for author, title in zip(authors, titles):
print(author)
print(title)
print("===")
"""
Explanation: How awesome is that? Here too: don't forget to cast the result of zip() to a list or tuple, e.g. if you want to print it. As with enumerate(), we can now also unpack each mini-tuple when declaring a for-loop:
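One caveat worth knowing: zip() silently stops as soon as its shortest input sequence is exhausted, so make sure your sequences have the same length if you want to keep every item:

```python
short = [1, 2]
longer = [10, 20, 30, 40]
# Only the first two pairs survive, because `short` has only two items:
pairs = list(zip(short, longer))
print(pairs)  # [(1, 10), (2, 20)]
```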
End of explanation
"""
import string
words = "I have not failed . I’ve just found 10,000 ways that won’t work .".split()
word_lengths = []
for word in words:
if word not in string.punctuation:
word_lengths.append(len(word))
print(word_lengths)
"""
Explanation: As you can understand, this is really useful functionality for dealing with long, complex lists and especially combinations of them.
Comprehensions
Now it's time to have a look at comprehensions in Python: comprehensions, such as list comprehensions or tuple comprehensions, provide an easy way to create and fill new lists. They are also often used to change one list into another. Typically, comprehensions can be written in a single line of Python code, which is why people often feel they are more readable than normal Python code. Let's start with an example. Say that we would like to fill a list with numbers that represent the length of each word in a sentence, but only if that word isn't a punctuation mark. By now, we can of course easily create such a list using a for-loop:
End of explanation
"""
word_lengths = [len(word) for word in words if word not in string.punctuation]
print(word_lengths)
"""
Explanation: We can create the exact same list of numbers using a list comprehension which only takes up one line of Python code:
End of explanation
"""
print(type(word_lengths))
"""
Explanation: OK, impressive, but there are a lot of new things going on here. Let's go through this step by step. The first step is easy: we initialize a variable word_lengths to which we assign a value using the assignment operator. The type of that value will eventually be a list: this is indicated by the square brackets which enclose the list comprehension:
End of explanation
"""
words_without_punc = [word for word in words if word not in string.punctuation]
print(words_without_punc)
"""
Explanation: Inside the square brackets, we can find the actual comprehension which will determine what goes inside our new list. Note that it is not always possible to read these comprehensions from left to right, so you will have to get used to the way they are built up from a syntactic point of view. First of all, we add an expression that determines which elements will make it into our list, in this case: len(word). The variable word, in this case, is generated by the following for-statement: for word in words:. Finally, we add a condition to our statement that will determine whether or not len(word) should be added to our list. In this case, len(word) will only be included in our list if the word is not a punctuation mark: if word not in string.punctuation. This is a full list comprehension, but simpler ones exist. We could, for instance, not have called len() on word before appending it to our list. Like this, we can easily remove all punctuation from our word list:
End of explanation
"""
all_word_lengths = [len(word) for word in words]
print(all_word_lengths)
"""
Explanation: Moreover, we don't have to include the if-statement at the end (it is always optional):
End of explanation
"""
square_numbers = [x*x for x in range(10)]
print(square_numbers)
"""
Explanation: In the comprehensions above, words is the only pre-existing input to our comprehension; all the other variables are created and manipulated inside the comprehension itself. The new range() function which we saw at the beginning of this chapter is also often used as the input for a comprehension:
End of explanation
"""
tuple_word_lengths = tuple(len(word) for word in words if word not in string.punctuation)
print(tuple_word_lengths)
print(type(tuple_word_lengths))
"""
Explanation: Importantly, we can just as easily create a tuple using the same comprehension syntax, but this time calling tuple() on the comprehension, instead of using square brackets to create a normal list:
End of explanation
"""
tuple_word_lengths = tuple()
for word in words:
if word not in string.punctuation:
tuple_word_lengths.append(len(word))
print(tuple_word_lengths)
"""
Explanation: This is very useful, especially if you can figure out why the following code block will generate an error...
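If you are stuck: tuples are immutable, so they have no append() method. When you do need a tuple at the end, a common workaround is to build a list first and cast it afterwards:

```python
# Build a mutable list, then convert it to a tuple once it is complete:
lengths = []
for word in "spam and eggs".split():
    lengths.append(len(word))
tuple_lengths = tuple(lengths)
print(tuple_lengths)  # (4, 3, 4)
```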
End of explanation
"""
nested_list = [[x,x+2] for x in range(10, 22, 3)]
print(nested_list)
print(type(nested_list))
print(type(nested_list[3]))
"""
Explanation: Good programmers can do amazing things with comprehensions. With list comprehensions, it becomes really easy, for example, to create nested lists (lists that themselves consist of lists or tuples). Can you figure out what is happening in the following code block:
End of explanation
"""
nested_tuple = [(x,x+2) for x in range(10, 22, 3)]
print(nested_tuple)
print(type(nested_tuple))
print(type(nested_tuple[3]))
nested_tuple = tuple((x,x+2) for x in range(10, 22, 3))
print(nested_tuple)
print(type(nested_tuple))
print(type(nested_tuple[3]))
"""
Explanation: In the first line above, we create a new list (nested_list) but we don't fill it with single numbers, but instead with mini-lists that contain two values. We could just as easily have done this with mini-tuples, by using round brackets. Can you spot the differences below?
End of explanation
"""
a = [2, 3, 5, 7, 0, 2, 8]
b = [3, 2, 1, 7, 0, 0, 9]
diffs = [a-b for a,b in zip(a, b)]
print(diffs)
"""
Explanation: Note that zip() can also be very useful in this respect, because you can unpack items inside the comprehension. Do you understand what is going on in the following code block:
End of explanation
"""
diffs = [abs(a-b) for a,b in zip(a, b) if (a & b)]
print(diffs)
"""
Explanation: Again, more complex comprehensions are possible:
End of explanation
"""
A = tuple([x-1,x+3] for x in range(10, 100, 3))
B = [(n*n, n+50) for n in range(10, 1000, 3) if n <= 100]
sums = sum(tuple(item_a[1]+item_b[0] for item_a, item_b in zip(A[:10], B[:10])))
print(sums)
"""
Explanation: Great: you are starting to become a real pro at comprehensions! The following, very dense code block, however, might be more challenging: can you figure out what is going on?
End of explanation
"""
text = "This text contains a lot of different characters, but probably not all of them."
chars = {char.lower() for char in text if char not in string.punctuation}
print(chars)
"""
Explanation: Finally, we should also mention that dictionaries and sets can also be filled in a one-liner using such comprehensions. For sets, the syntax runs entirely parallel to that of list and tuple comprehensions, but here, we use curly brackets to surround the expression:
End of explanation
"""
counts = {word:len(word) for word in words}
print(counts)
"""
Explanation: For dictionaries, which consist of key-value pairs, the syntax is only slightly more complicated. Here, you have to make sure that you link the correct key to the correct value using a colon, in the very first part of the comprehension. The following example will make this clearer:
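Note, by the way, that the optional if-condition works in dictionary comprehensions just as it does in list comprehensions:

```python
words = "Be yourself everyone else is already taken".split()
# Only keep words that are longer than four characters:
long_words = {word: len(word) for word in words if len(word) > 4}
print(long_words)
```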
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: You've reached the end of Chapter 7! Ignore the code below, it's only here to make the page pretty:
End of explanation
"""
DominikDitoIvosevic/Uni | STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB05-0036477171-checkpoint.ipynb | mit | # Učitaj osnovne biblioteke...
import sklearn
import codecs
import mlutils
import matplotlib.pyplot as plt
import pgmpy as pgm
%pylab inline
"""
Explanation: Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Machine Learning (Strojno učenje) 2018/2019
http://www.fer.unizg.hr/predmet/su
Laboratory Exercise 5: Probabilistic Graphical Models, Naive Bayes, Clustering, and Classifier Evaluation
Version: 1.4
Last updated: 11 January 2019
(c) 2015-2019 Jan Šnajder, Domagoj Alagić
Published: 11 January 2019
Submission deadline: 21 January 2019 at 07:00
Instructions
The fifth laboratory exercise consists of three tasks. In what follows, follow the instructions given in the text cells. Completing the exercise comes down to extending this notebook: inserting one or more cells below each task's text, writing the appropriate code, and evaluating the cells.
Make sure you fully understand the code you have written. When submitting the exercise, you must be able, at the assistant's (or demonstrator's) request, to modify and re-evaluate your code. Furthermore, you must understand the theoretical foundations of what you are doing, within the scope of what was covered in the lectures. Below some of the tasks you will also find questions that serve as guidelines for a better understanding of the material (do not write answers to these questions in the notebook). Therefore, do not limit yourself to merely solving the task; feel free to experiment. That is precisely the purpose of these exercises.
You must complete the exercise on your own. You may consult others about the general approach to a solution, but in the end you must do the exercise yourself. Otherwise the exercise is pointless.
End of explanation
"""
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete.CPD import TabularCPD
from pgmpy.inference import VariableElimination
model = BayesianModel([('C', 'S'), ('C', 'R'), ('S', 'W'), ('R', 'W')])
cpd_c = TabularCPD(variable='C', variable_card=2, values=[[0.5, 0.5]])
cpd_s = TabularCPD(variable='S', evidence=['C'], evidence_card=[2],
variable_card=2,
values=[[0.9, 0.5],
[0.1, 0.5]])
cpd_r = TabularCPD(variable='R', evidence=['C'], evidence_card=[2],
variable_card=2,
values=[[0.2, 0.8],
[0.8, 0.2]])
cpd_w = TabularCPD(variable='W', evidence=['S', 'R'], evidence_card=[2,2],
variable_card=2,
values=[[1, 0.1, 0.1, 0.01],
[0, 0.9, 0.9, 0.99]])
model.add_cpds(cpd_c, cpd_r, cpd_s, cpd_w)
model.check_model()
infer = VariableElimination(model)
print(infer.query(['W'])['W'])
print(infer.query(['S'], evidence={'W': 1})['S'])
print(infer.query(['R'], evidence={'W': 1})['R'])
print(infer.query(['C'], evidence={'S': 1, 'R': 1})['C'])
print(infer.query(['C'])['C'])
"""
Explanation: 1. Probabilistic Graphical Models -- Bayesian Networks
This task deals with Bayesian networks, one of the better-known probabilistic graphical models (PGM). To make experimenting easier, we will use the pgmpy package. Please check that you have this package installed, and install it if you do not.
(a)
First we will look at the textbook sprinkler example. In this example we consider a Bayesian network that models the dependencies between cloudiness (random variable $C$), rain ($R$), the sprinkler ($S$) and wet grass ($W$). In this example we also assume that we already have the parameters of the probability distributions of all the nodes. The network is shown in the following figure:
Using the pgmpy package, construct the Bayesian network from the example above. Then, using exact inference, pose the following posterior queries: $P(w=1)$, $P(s=1|w=1)$, $P(r=1|w=1)$, $P(c=1|s=1, r=1)$ and $P(c=1)$. Carry out the inference on paper as well and convince yourself that you have constructed the network correctly. The official documentation and usage examples (e.g. this one) will help you.
End of explanation
"""
print(infer.query(['S'], evidence={'W': 1, 'R': 1})['S'])
print(infer.query(['S'], evidence={'W': 1, 'R': 0})['S'])
print(infer.query(['R'], evidence={'W': 1, 'S': 1})['R'])
print(infer.query(['R'], evidence={'W': 1, 'S': 0})['R'])
"""
Explanation: Q: Which joint probability distribution does this network model? How can that information be read off the network?
Q: In this task we use exact inference. How does it work?
Q: What is the difference between a posterior query and a MAP query?
Q: Why is the probability $P(c=1)$ different from $P(c=1|s=1,r=1)$, given that nodes $S$ and $R$ are not parents of node $C$?
(b)
The explaining-away effect is an interesting phenomenon in which two variables "compete" to explain a third. This phenomenon can be observed in the network above: the sprinkler ($S$) and rain ($R$) variables "compete" to explain the wet grass ($W$). Your task is to show that this phenomenon really does occur.
End of explanation
"""
model.is_active_trail('C','W')
"""
Explanation: Q: How would you describe this phenomenon in your own words, using this example?
(c)
Using BayesianModel.is_active_trail, check whether the cloudiness ($C$) and wet grass ($W$) variables are conditionally independent. What must hold for these two variables to be conditionally independent? Check it using the same function.
End of explanation
"""
from sklearn.model_selection import train_test_split
spam_X, spam_y = mlutils.load_SMS_dataset('./spam.csv')
spam_X_train, spam_X_test, spam_y_train, spam_y_test = \
train_test_split(spam_X, spam_y, train_size=0.7, test_size=0.3, random_state=69)
"""
Explanation: Q: How can we determine from the graph which two variables are conditionally independent given some observations?
Q: Why would we want to know which variables in the network are conditionally independent in the first place?
2. Model (Classifier) Evaluation
To convince ourselves of how well our trained model actually performs, it is necessary to evaluate it. This step is of crucial importance in all applications of machine learning, so it is essential to know how to carry out an evaluation correctly.
We will evaluate models on the real-world SMS Spam Collection dataset [1], which consists of 5,574 SMS messages classified into two classes: spam (label: spam) and non-spam (label: ham). If you have not done so already, download the dataset from the link or from the course page and place it in your working directory (unpack the archive and rename the file to spam.csv if necessary). The following piece of code loads the dataset and splits it into training and test subsets.
[1] Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.
End of explanation
"""
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
"""
Explanation: (a)
Before we move on to evaluating the spam-classification model, you will get acquainted with a simpler abstraction of the entire model-training process in scikit-learn. This is useful because training a model often consists of many steps before the magic fit function is ever called: data extraction, feature extraction, standardization, scaling, imputation of missing values, and so on.
In the "standard approach", this boils down to a sizeable number of lines of code in which we constantly pass data from one step to the next, forming an execution pipeline. Besides being hard to read, this approach is also error-prone, since it is quite easy to pass in the wrong dataset without getting a runtime error. For this reason, scikit-learn introduced the pipeline.Pipeline class. With it, all the required training steps can be abstracted behind a single pipeline, which is itself just a model with fit and predict functions.
In this task you will build a simple text-classification pipeline consisting of converting text into a bag-of-words vector representation with TF-IDF weights, dimensionality reduction via truncated singular value decomposition, normalization, and finally logistic regression.
NB: It is not strictly necessary to know how the classes that produce the final features work, but we recommend studying them if you are interested (especially if natural language processing interests you).
End of explanation
"""
# TF-IDF
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
spam_X_feat_train = vectorizer.fit_transform(spam_X_train)
# Smanjenje dimenzionalnosti
reducer = TruncatedSVD(n_components=300, random_state=69)
spam_X_feat_train = reducer.fit_transform(spam_X_feat_train)
# Normaliziranje
normalizer = Normalizer()
spam_X_feat_train = normalizer.fit_transform(spam_X_feat_train)
# Logistic regression
clf = LogisticRegression(solver='lbfgs')
clf.fit(spam_X_feat_train, spam_y_train)
# I sada ponovno sve ovo za testne podatke.
spam_X_feat_test = vectorizer.transform(spam_X_test)
spam_X_feat_test = reducer.transform(spam_X_feat_test)
spam_X_feat_test = normalizer.transform(spam_X_feat_test)
print(accuracy_score(spam_y_test, clf.predict(spam_X_feat_test)))
x_test123 = ["You were selected for a green card, apply here for only 50 USD!!!",
"Hey, what are you doing later? Want to grab a cup of coffee?"]
x_test = vectorizer.transform(x_test123)
x_test = reducer.transform(x_test)
x_test = normalizer.transform(x_test)
print(clf.predict(x_test))
"""
Explanation: First, here is code that does this with the "standard approach":
End of explanation
"""
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
reducer = TruncatedSVD(n_components=300, random_state=69)
normalizer = Normalizer()
clf = LogisticRegression(solver='lbfgs')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', clf)])
pipeline.fit(spam_X_train, spam_y_train)
print(accuracy_score(spam_y_test, pipeline.predict(spam_X_test)))
print(pipeline.predict(x_test123))
"""
Explanation: Your task is to implement the given code using a pipeline. Study the pipeline.Pipeline class.
NB: You should not need more than a few statements.
End of explanation
"""
from sklearn.metrics import classification_report, accuracy_score
print(classification_report(y_pred=pipeline.predict(spam_X_test), y_true=spam_y_test))
"""
Explanation: (b)
In the previous subtask we printed the accuracy of our model. If we want to see how good our model is according to the other metrics, we can use any function from the metrics package. Use the metrics.classification_report function, which prints the values of the most common metrics. (Be sure to use print so you do not lose the function's output formatting.) Print the accuracy again for comparison.
End of explanation
"""
from sklearn.dummy import DummyClassifier
rando = DummyClassifier(strategy='uniform')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', rando)])
pipeline.fit(spam_X_train, spam_y_train)
print(classification_report(spam_y_test, pipeline.predict(spam_X_test)))
mfc = DummyClassifier(strategy='most_frequent')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', mfc)])
pipeline.fit(spam_X_train, spam_y_train)
print(classification_report(spam_y_test, pipeline.predict(spam_X_test)))
"""
Explanation: The need for metrics other than accuracy becomes apparent when using some baseline models. Perhaps the simplest such model is one that assigns every example to the majority class (most frequent class; MFC) or labels the test examples at random. Study the dummy.DummyClassifier class and use it to create these baseline classifiers. You will again need to use the pipeline to obtain the vector form of the input examples, even though these baseline classifiers use only the labels when predicting.
End of explanation
"""
from sklearn.model_selection import cross_val_score, KFold
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
reducer = TruncatedSVD(n_components=300, random_state=69)
normalizer = Normalizer()
clf = LogisticRegression(solver='lbfgs')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', clf)])
# cross_val_score splits the data itself (5 folds with cv=5), fits the
# pipeline on each training split and scores it on the held-out split.
scores = cross_val_score(estimator=pipeline, X=spam_X, y=spam_y, cv=5)
print(scores)
print(scores.mean())
"""
Explanation: Q: Based on this example, explain why accuracy is not always an appropriate metric.
Q: Why do we use the F1 score?
(c)
However, the check we resorted to in the previous subtask is not robust. Therefore, k-fold cross-validation is normally used in machine learning. Study the model_selection.KFold class and the model_selection.cross_val_score function and compute an error estimate on the whole dataset using 5-fold cross-validation.
NB: Your model is now a pipeline that contains all the preprocessing. Also, in what follows we will restrict ourselves to accuracy, but these procedures apply to all metrics.
End of explanation
"""
from sklearn.model_selection import GridSearchCV
# Vaš kôd ovdje...
"""
Explanation: Q: Why is "plain" cross-validation not robust enough?
Q: What is stratified k-fold cross-validation? Why do we often use it?
(d)
The error estimate above is fine if we already have a model (without hyperparameters, or with them fixed). However, we want to use a model with optimal hyperparameter values, so they need to be optimized using grid search. As expected, scikit-learn already provides this functionality in the model_selection.GridSearchCV class. The only difference between your implementation from earlier exercises (e.g., for the SVM) and this one is that this one uses k-fold cross-validation.
Before optimizing the hyperparameter values, we obviously have to define the grid of hyperparameter values itself. Study how it is defined as a dictionary in the example.
Study the aforementioned class and use it to find and print the best hyperparameter values of the pipeline from subtask (a): max_features $\in \{500, 1000\}$ and n_components $\in \{100, 200, 300\}$, using grid search on the training set ($k=3$, to make it run a bit faster).
End of explanation
"""
from sklearn.model_selection import GridSearchCV, KFold
def nested_kfold_cv(clf, param_grid, X, y, k1=10, k2=3):
# Vaš kôd ovdje...
pass
"""
Explanation: Q: Which metric is optimized during this optimization?
Q: How would you choose the number of folds $k$?
(e)
If we want to estimate the error while also performing model selection, we turn to nested k-fold cross-validation. In this task you will implement it yourself.
Implement a function nested_kfold_cv(clf, param_grid, X, y, k1, k2) that performs nested k-fold cross-validation. The argument clf is your classifier, param_grid a dictionary of hyperparameter values (the same as in subtask (d)), X and y the labelled dataset, and k1 and k2 the number of folds in the outer and inner loop, respectively. Use the model_selection.GridSearchCV and model_selection.KFold classes.
The function returns the list of errors across the folds of the outer loop.
End of explanation
"""
import numpy as np
np.random.seed(1337)
C1_scores_5folds = np.random.normal(78, 4, 5)
C2_scores_5folds = np.random.normal(81, 2, 5)
C1_scores_10folds = np.random.normal(78, 4, 10)
C2_scores_10folds = np.random.normal(81, 2, 10)
C1_scores_50folds = np.random.normal(78, 4, 50)
C2_scores_50folds = np.random.normal(81, 2, 50)
"""
Explanation: Q: How would you choose which hyperparameters are best overall, rather than just within each individual inner loop?
Q: What does the generalization-error estimate ultimately correspond to?
(f)
The scenario we are most interested in is comparing two classifiers, i.e., determining whether one of them is really better than the other. The only way to truly confirm this is with a statistical test, in our case the paired t-test. That is what this task is about.
For faster execution, we will artificially generate data corresponding to the errors across the outer folds of two classifiers (what the nested_kfold_cv function would return):
End of explanation
"""
from scipy.stats import ttest_rel
# Vaš kôd ovdje...
"""
Explanation: Use the built-in scipy.stats.ttest_rel function to perform the paired t-test and check which of these models is better when using 5, 10 and 50 folds.
End of explanation
"""
from sklearn.datasets import make_blobs
Xp, yp = make_blobs(n_samples=300, n_features=2, centers=[[0, 0], [3, 2.5], [0, 4]],
cluster_std=[0.45, 0.3, 0.45], random_state=96)
plt.scatter(Xp[:,0], Xp[:,1], c=yp, cmap=plt.get_cmap("cool"), s=20)
"""
Explanation: Q: Which null hypothesis $H_0$ and alternative hypothesis $H_1$ are we testing with this test?
Q: What assumption about the probability distribution of the examples is made in the test above? Is it justified?
Q: Which model is ultimately better, and is that advantage significant at $\alpha = 0.05$?
3. Clustering
In this task you will get acquainted with the k-means algorithm, its main shortcomings and its assumptions. You will also try out another clustering algorithm: the Gaussian mixture model.
(a)
One shortcoming of the k-means algorithm is that it requires the number of groups ($K$) into which it will cluster the data to be given in advance. This information is often unavailable to us (just as the example labels are unavailable), so the best value of the hyperparameter $K$ has to be chosen somehow. One of the more naive approaches is the elbow method, which you will try out in this task.
In your solutions, use the built-in implementation of the k-means algorithm, available in the cluster.KMeans class.
NB: The objective function of the k-means algorithm is also called the inertia. For a trained model, the value of the objective function $J$ is available through the class attribute inertia_.
End of explanation
"""
Ks = range(1,16)
from sklearn.cluster import KMeans
Js = []
for K in Ks:
J = KMeans(n_clusters=K).fit(Xp).inertia_
Js.append(J)
plt.plot(Ks, Js)
"""
Explanation: Use the dataset Xp given above. Try values of the hyperparameter $K$ from $[1,2,\ldots,15]$. You need not touch any model hyperparameters other than $K$. Plot the curve of $J$ as a function of the number of groups $K$. Determine the value of the hyperparameter $K$ using the elbow method.
End of explanation
"""
# Vaš kôd ovdje...
"""
Explanation: Q: Which value of the hyperparameter $K$ would you choose based on this plot? Why? Is that choice optimal? How do you know?
Q: Is this method robust?
Q: Can we choose the $K$ that minimizes the error $J$? Explain.
(b)
Choosing the value of the hyperparameter $K$ can be done in many ways. Besides the elbow method, the same can also be achieved with silhouette analysis. For this we have prepared the function mlutils.plot_silhouette, which, for a given number of groups and a dataset, plots the average silhouette coefficient as well as the coefficient value of each example (across the groups).
Your task is to try different values of the hyperparameter $K$, $K \in \{2, 3, 5\}$, and decide on the optimal $K$ based on the resulting plots.
End of explanation
"""
from sklearn.datasets import make_blobs
X1, y1 = make_blobs(n_samples=1000, n_features=2, centers=[[0, 0], [1.3, 1.3]], cluster_std=[0.15, 0.5], random_state=96)
plt.scatter(X1[:,0], X1[:,1], c=y1, cmap=plt.get_cmap("cool"), s=20)
"""
Explanation: Q: Looking at these figures, how would you decide on $K$?
Q: What are the problems with this approach?
(c)
In this and the following subtasks we will focus on the basic assumptions of the k-means algorithm and on what happens when those assumptions are not satisfied. Additionally, we will also try out clustering with the Gaussian mixture model (GMM), which does not make some of these assumptions.
First, start from the data X1, generated using the datasets.make_blobs function, which creates groups of data from isotropic Gaussian distributions.
End of explanation
"""
# Vaš kôd ovdje...
"""
Explanation: Fit a k-means model (ideally assuming $K=2$) on the data above and display the resulting clustering (study the scatter function, in particular its argument c).
End of explanation
"""
from sklearn.datasets import make_circles
X2, y2 = make_circles(n_samples=1000, noise=0.15, factor=0.05, random_state=96)
plt.scatter(X2[:,0], X2[:,1], c=y2, cmap=plt.get_cmap("cool"), s=20)
"""
Explanation: Q: What happened? Which assumption of the k-means algorithm is violated here?
Q: What would you have to ensure for the algorithm to find the correct groups?
(d)
Try the k-means algorithm on data generated using the datasets.make_circles function, which creates two groups of data such that one lies inside the other.
End of explanation
"""
# Vaš kôd ovdje...
"""
Explanation: Again, fit a k-means model (ideally assuming $K=2$) on the data above and display the resulting clustering (study the scatter function, in particular its argument c).
End of explanation
"""
X31, y31 = make_blobs(n_samples=1000, n_features=2, centers=[[0, 0]], cluster_std=[0.2], random_state=69)
X32, y32 = make_blobs(n_samples=50, n_features=2, centers=[[0.7, 0.5]], cluster_std=[0.15], random_state=69)
X33, y33 = make_blobs(n_samples=600, n_features=2, centers=[[0.8, -0.4]], cluster_std=[0.2], random_state=69)
plt.scatter(X31[:,0], X31[:,1], c="#00FFFF", s=20)
plt.scatter(X32[:,0], X32[:,1], c="#F400F4", s=20)
plt.scatter(X33[:,0], X33[:,1], c="#8975FF", s=20)
# Just join all the groups in a single X.
X3 = np.vstack([X31, X32, X33])
y3 = np.hstack([y31, y32, y33])
"""
Explanation: Q: What happened? Which assumption of the k-means algorithm is violated here?
Q: What would you have to ensure for the algorithm to find the correct groups?
(e)
Finally, we will try the algorithm on the following artificially created dataset:
End of explanation
"""
# Vaš kôd ovdje...
"""
Explanation: Again, fit a k-means model (this time ideally assuming $K=3$) on the data above and display the resulting clustering (study the scatter function, in particular its argument c).
End of explanation
"""
from sklearn.mixture import GaussianMixture
# Vaš kôd ovdje...
"""
Explanation: Q: What happened? Which assumption of the k-means algorithm is violated here?
Q: What would you have to ensure for the algorithm to find the correct groups?
(f)
Now that you are familiar with the limitations of the k-means algorithm, you will try clustering with the Gaussian mixture model (GMM), which is a generalization of the k-means algorithm (that is, k-means is a specialization of the GMM). An implementation of this model is available in mixture.GaussianMixture. Try this model (with the same assumptions about the number of groups) on the data from subtasks (c)-(e). You need not change any hyperparameters or settings other than the number of components.
End of explanation
"""
import itertools as it
from scipy.special import comb
def rand_index_score(y_gold, y_predict):
N = len(y_gold)
grupa1 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 0])
grupa2 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 1])
grupa3 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 2])
n = [[len([y for y in g if y == i])
for i in [0,1,2]]
for g in [grupa1, grupa2, grupa3]]
a = sum([(comb(nnn, 2)) for nn in n for nnn in nn])
b = n[0][0] * (n[1][1] + n[1][2] + n[2][1] + n[2][2]) + \
n[0][1] * (n[1][0] + n[1][2] + n[2][0] + n[2][2]) + \
n[0][2] * (n[1][0] + n[1][1] + n[2][0] + n[2][1]) + \
n[1][0] * (n[2][1] + n[2][2]) + \
n[1][1] * (n[2][0] + n[2][2]) + \
n[1][2] * (n[2][0] + n[2][1])
return (a+b) / comb(N,2)
"""
Explanation: (g)
How do we evaluate the accuracy of a clustering model if we have the true labels of all examples (and in our case we do, since we are the ones who generated the data)? A commonly used measure is the Rand index, which is effectively the counterpart of accuracy in classification tasks. Implement a function rand_index_score(y_gold, y_predict) that computes it. The function takes two arguments: the list of true groups the examples belong to (y_gold) and the list of predicted groups (y_predict). The itertools.combinations function will come in handy.
End of explanation
"""
y_pred = KMeans(n_clusters=3).fit(Xp).predict(Xp)
rand_index_score(yp, y_pred)
"""
Explanation: Q: Why is the Rand index the counterpart of accuracy in classification problems?
Q: What are the main problems with this metric?
Q: How do we evaluate clustering quality if we do not know the true labels of the examples? Is that even possible?
End of explanation
"""
|
dxwils3/machine_learning | neural_networks/MNIST_keras.ipynb | mit | from __future__ import print_function
from __future__ import division
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
"""
Explanation: In this notebook, we duplicate the neural network created in the TensorFlow example in keras. I found the model in keras to be conceptually cleaner.
End of explanation
"""
batch_size = 500
nb_classes = 10
nb_epoch = 1
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
nb2_filters = 64
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 5
# the data, shuffled and split between tran and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255 # normalize the data
X_test /= 255 # normalize the data
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
"""
Explanation: Set up the parameters for the model. Nothing too exciting here.
End of explanation
"""
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb2_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
"""
Explanation: Below we build the neural network. This is the same network used in the TensorFlow example.
End of explanation
"""
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
show_accuracy=True, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, show_accuracy=True, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Now we just run the neural network. Note that one epoch here is a run through the entire set of training data, so it takes a while.
End of explanation
"""
|
solvebio/solvebio-python | examples/generating_icgc_survival_curves.ipynb | mit | from solvebio import login, Dataset, Filter
import plotly.plotly as py
import plotly.tools as tls
from plotly.graph_objs import *
# Load local SolveBio credentials
login()
"""
Explanation: Advanced SolveBio Tutorial
2016-11-01 Generating survival curves by cancer type
NOTE: This page may not load optimally in GitHub. Please view in NBViewer for the best experience.
One powerful part of SolveBio is the ability to filter through datasets quickly in the SolveBio cloud. This means you don't have to download the source data to your computer and run complicated and computationally heavy filtering to bring out the data that you need. This example script shows how you can generate Kaplan-Meier survival curves based on filtering ICGC data.
First we load up the solvebio and plotly modules, to access, filter, and display data. Make sure you already have the solvebio python client installed (see https://docs.solvebio.com/docs/installation for instructions).
End of explanation
"""
icgc_donor = Dataset.get_by_full_path('solvebio:public:/ICGC/3.0.0-23/Donor')
icgc_donor.query()
"""
Explanation: We'll use the ICGC Donor dataset. You can explore this dataset in your browser with https://my.solvebio.com/data/ICGC/2.0.0-21/Donor.
End of explanation
"""
# interval sizes are in days
interval_size = 90
total_interval_to_follow = 1825
f1 = Filter()
f2 = Filter(project_code__prefix='PACA')
"""
Explanation: We'll set the initial Kaplan-Meier interval sizes and total interval sizes as well as our initial filters. This particular filter will compare survival curves between the total ICGC dataset (every patient with survival information) and a subset of the ICGC that begins with the project code PACA (pancreatic cancer projects).
End of explanation
"""
f1_total = icgc_donor.query(filters=f1).filter(donor_survival_time__gt=0).facets('donor_survival_time').get('donor_survival_time')['count']
f2_total = icgc_donor.query(filters=f2).filter(donor_survival_time__gt=0).facets('donor_survival_time').get('donor_survival_time')['count']
f1_data = [[0, 100]]
f2_data = [[0, 100]]
for day in range(interval_size, total_interval_to_follow, interval_size):
f1_percent_alive = 100 * float(icgc_donor.query(filters=f1).filter(donor_survival_time__gte=day).facets('donor_survival_time').get('donor_survival_time')['count'])/float(f1_total)
f1_data += [[day, f1_percent_alive]]
f2_percent_alive = 100 * float(icgc_donor.query(filters=f2).filter(donor_survival_time__gte=day).facets('donor_survival_time').get('donor_survival_time')['count'])/float(f2_total)
f2_data += [[day, f2_percent_alive]]
"""
Explanation: Now we construct the filters that will bring out the survival information data and start querying the SolveBio ICGC dataset for each set interval.
End of explanation
"""
trace1 = Scatter(
name=str(f1),
x=[_x[0] for _x in f1_data],
y=[_y[1] for _y in f1_data],
mode='lines',
line=Line(
shape='hv'
),
)
trace2 = Scatter(
name=str(f2),
x=[_x[0] for _x in f2_data],
y=[_y[1] for _y in f2_data],
mode='lines',
line=Line(
shape='hv'
),
)
data = Data([trace1, trace2])
# Add title to layout object
layout = Layout(
title='Kaplan-Meier',
showlegend=True,
legend=Legend(
x=0,
y=100
),
xaxis=XAxis(
title='Days',
zeroline=True,
showline=True,
tick0=0,
range=[0, total_interval_to_follow + 1],
),
yaxis=YAxis(
title='Percent Survival',
range=[0, 101],
tick0=0,
dtick=10,
)
)
# Make a figure object
fig = Figure(data=data, layout=layout)
# (@) Send fig to Plotly, initialize streaming plot, open new tab
plot_url = py.plot(fig, filename='static-kaplan-meier', auto_open=False)
tls.embed(plot_url)
"""
Explanation: Finally, this entire module below plots the survival curves.
End of explanation
"""
|
AllenDowney/ModSimPy | soln/chap15soln.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
"""
Explanation: Modeling and Simulation in Python
Chapter 15
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
init = State(T=90)
"""
Explanation: The coffee cooling problem
I'll use a State object to store the initial temperature.
End of explanation
"""
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
"""
Explanation: And a System object to contain the system parameters.
End of explanation
"""
def update_func(state, t, system):
"""Update the thermal transfer model.
state: State (temp)
t: time
system: System object
returns: State (temp)
"""
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
"""
Explanation: The update function implements Newton's law of cooling.
End of explanation
"""
update_func(init, 0, coffee)
"""
Explanation: Here's how it works.
End of explanation
"""
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
"""
Explanation: Here's a version of run_simulation that uses linrange to make an array of time steps.
End of explanation
"""
results = run_simulation(coffee, update_func)
"""
Explanation: And here's how it works.
End of explanation
"""
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
"""
Explanation: Here's what the results look like.
End of explanation
"""
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
"""
Explanation: And here's the final temperature:
End of explanation
"""
def make_system(T_init, r, volume, t_end):
"""Makes a System object with the given parameters.
T_init: initial temperature in degC
r: heat transfer rate, in 1/min
volume: volume of liquid in mL
t_end: end time of simulation
returns: System object
"""
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
"""
Explanation: Encapsulation
Before we go on, let's define a function to initialize System objects with relevant parameters:
End of explanation
"""
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
"""
Explanation: Here's how we use it:
End of explanation
"""
# Solution
milk = make_system(T_init=5, t_end=15, r=0.133, volume=50)
results = run_simulation(milk, update_func)
T_final = get_last_value(results.T)
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
"""
Explanation: Exercises
Exercise: Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.
By trial and error, find a value for r that makes the final temperature close to 20 C.
End of explanation
"""
|
drivendata/benchmarks | dengue-benchmark-statsmodels.ipynb | mit | %matplotlib inline
from __future__ import print_function
from __future__ import division
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
import statsmodels.api as sm
# just for the sake of this blog post!
from warnings import filterwarnings
filterwarnings('ignore')
"""
Explanation: Dengue fever is bad. It's real bad. Dengue is a mosquito-borne disease that occurs in tropical and sub-tropical parts of the world. In mild cases, symptoms are similar to the flu: fever, rash and muscle and joint pain. But severe cases are dangerous, and dengue fever can cause severe bleeding, low blood pressure and even death.
Because it is carried by mosquitoes, the transmission dynamics of dengue are related to climate variables such as temperature and precipitation. Although the relationship to climate is complex, a growing number of scientists argue that climate change is likely to produce distributional shifts that will have significant public health implications worldwide.
We've launched a competition to use open data to predict the occurrence of Dengue based on climatological data. Here's a first look at the data and how to get started!
As always, we begin with the sacred imports of data science:
End of explanation
"""
# load the provided data
train_features = pd.read_csv('../data-processed/dengue_features_train.csv',
index_col=[0,1,2])
train_labels = pd.read_csv('../data-processed/dengue_labels_train.csv',
index_col=[0,1,2])
# Seperate data for San Juan
sj_train_features = train_features.loc['sj']
sj_train_labels = train_labels.loc['sj']
# Separate data for Iquitos
iq_train_features = train_features.loc['iq']
iq_train_labels = train_labels.loc['iq']
print('San Juan')
print('features: ', sj_train_features.shape)
print('labels : ', sj_train_labels.shape)
print('\nIquitos')
print('features: ', iq_train_features.shape)
print('labels : ', iq_train_labels.shape)
"""
Explanation: A Tale of Two Cities
This dataset has two cities in it: San Juan, Puerto Rico (right) and Iquitos, Peru (left). Since we hypothesize that the spread of dengue may follow different patterns between the two, we will divide the dataset, train seperate models for each city, and then join our predictions before making our final submission.
End of explanation
"""
sj_train_features.head()
"""
Explanation: The problem description gives a good overview of the available variables, but we'll look at the head of the data here as well:
End of explanation
"""
# Remove `week_start_date` string.
sj_train_features.drop('week_start_date', axis=1, inplace=True)
iq_train_features.drop('week_start_date', axis=1, inplace=True)
"""
Explanation: There are a lot of climate variables here, but the first thing that we'll note is that the week_start_date is included in the feature set. This makes it easier for competitors to create time based features, but for this first-pass model, we'll drop that column since we shouldn't use it as a feature in our model.
End of explanation
"""
# Null check
pd.isnull(sj_train_features).any()
(sj_train_features
.ndvi_ne
.plot
.line(lw=0.8))
plt.title('Vegetation Index over Time')
plt.xlabel('Time')
"""
Explanation: Next, let's check to see if we are missing any values in this dataset:
End of explanation
"""
sj_train_features.fillna(method='ffill', inplace=True)
iq_train_features.fillna(method='ffill', inplace=True)
"""
Explanation: Since these are time-series, we can see the gaps where there are NaNs by plotting the data. Since we can't build a model without those values, we'll take a simple approach and just fill those values with the most recent value that we saw up to that point. This is probably a good part of the problem to improve your score by getting smarter.
End of explanation
"""
print('San Juan')
print('mean: ', sj_train_labels.mean()[0])
print('var :', sj_train_labels.var()[0])
print('\nIquitos')
print('mean: ', iq_train_labels.mean()[0])
print('var :', iq_train_labels.var()[0])
"""
Explanation: Distribution of labels
Our target variable, total_cases is a non-negative integer, which means we're looking to make some count predictions. Standard regression techniques for this type of prediction include
Poisson regression
Negative binomial regression
Which technique will perform better depends on many things, but the choice between Poisson regression and negative binomial regression is pretty straightforward. Poisson regression fits according to the assumption that the mean and variance of the population distribution are equal. When they aren't, specifically when the variance is much larger than the mean, the negative binomial approach is better. Why? It isn't magic. The negative binomial regression simply lifts the assumption that the population mean and variance are equal, allowing for a larger class of possible models. In fact, from this perspective, the Poisson distribution is but a special case of the negative binomial distribution.
Let's see how our labels are distributed!
End of explanation
"""
sj_train_labels.hist()
iq_train_labels.hist()
"""
Explanation: It's looking like a negative-binomial sort of day in these parts.
End of explanation
"""
sj_train_features['total_cases'] = sj_train_labels.total_cases
iq_train_features['total_cases'] = iq_train_labels.total_cases
"""
Explanation: variance >> mean suggests total_cases can be described by a negative binomial distribution, so we'll use a negative binomial regression below.
Which inputs strongly correlate with total_cases?
Our next step in this process will be to select a subset of features to include in our regression. Our primary purpose here is to get a better understanding of the problem domain rather than eke out the last possible bit of predictive accuracy. The first thing we will do is to add the total_cases to our dataframe, and then look at the correlation of that variable with the climate variables.
End of explanation
"""
# compute the correlations
sj_correlations = sj_train_features.corr()
iq_correlations = iq_train_features.corr()
# plot san juan
sj_corr_heat = sns.heatmap(sj_correlations)
plt.title('San Juan Variable Correlations')
# plot iquitos
iq_corr_heat = sns.heatmap(iq_correlations)
plt.title('Iquitos Variable Correlations')
"""
Explanation: Compute the data correlation matrix.
End of explanation
"""
# San Juan
(sj_correlations
.total_cases
.drop('total_cases') # don't compare with myself
.sort_values(ascending=False)
.plot
.barh())
# Iquitos
(iq_correlations
.total_cases
.drop('total_cases') # don't compare with myself
.sort_values(ascending=False)
.plot
.barh())
"""
Explanation: Many of the temperature variables are strongly correlated with each other, which is expected. But the total_cases variable doesn't have many obvious strong correlations.
Interestingly, total_cases has only weak correlations with the other variables, while many of the climate variables are much more strongly correlated with one another. The vegetation index likewise has only weak correlations with the other variables. These correlations may give us some hints as to how to improve our model that we'll talk about later in this post. For now, let's take a sorted look at the total_cases correlations.
End of explanation
"""
def preprocess_data(data_path, labels_path=None):
# load data and set index to city, year, weekofyear
df = pd.read_csv(data_path, index_col=[0, 1, 2])
# select features we want
features = ['reanalysis_specific_humidity_g_per_kg',
'reanalysis_dew_point_temp_k',
'station_avg_temp_c',
'station_min_temp_c']
df = df[features]
# fill missing values
df.fillna(method='ffill', inplace=True)
# add labels to dataframe
if labels_path:
labels = pd.read_csv(labels_path, index_col=[0, 1, 2])
df = df.join(labels)
# separate san juan and iquitos
sj = df.loc['sj']
iq = df.loc['iq']
return sj, iq
sj_train, iq_train = preprocess_data('../data-processed/dengue_features_train.csv',
labels_path="../data-processed/dengue_labels_train.csv")
"""
Explanation: A few observations
The wetter the better
The correlation strengths differ for each city, but it looks like reanalysis_specific_humidity_g_per_kg and reanalysis_dew_point_temp_k are the most strongly correlated with total_cases. This makes sense: we know mosquitos thrive in wet climates, the wetter the better!
Hot and heavy
As we all know, "cold and humid" is not a thing. So it's not surprising that as minimum temperatures, maximum temperatures, and average temperatures rise, the total_cases of dengue fever tend to rise as well.
Sometimes it rains, so what
Interestingly, the precipitation measurements bear little to no correlation to total_cases, despite strong correlations to the humidity measurements, as evidenced by the heatmaps above.
This is just a first pass
Precisely none of these correlations are very strong. Of course, that doesn't mean that some feature engineering wizardry can't put us in a better place (standing_water estimate, anyone?). Also, it's always useful to keep in mind that life isn't linear, but out-of-the-box correlation measurement is – or at least, it measures linear dependence.
Nevertheless, for this benchmark we'll focus on the linear wetness trend we see above, and reduce our inputs to
A few good variables
reanalysis_specific_humidity_g_per_kg
reanalysis_dew_point_temp_k
station_avg_temp_c
station_min_temp_c
A mosquito model
Now that we've explored this data, it's time to start modeling. Our first step will be to build a function that does all of the preprocessing we've done above from start to finish. This will make our lives easier, since it needs to be applied to the test set and the training set before we make our predictions.
End of explanation
"""
sj_train.describe()
iq_train.describe()
"""
Explanation: Now we can take a look at the smaller dataset and see that it's ready to start modelling:
End of explanation
"""
sj_train_subtrain = sj_train.head(800)
sj_train_subtest = sj_train.tail(sj_train.shape[0] - 800)
iq_train_subtrain = iq_train.head(400)
iq_train_subtest = iq_train.tail(iq_train.shape[0] - 400)
"""
Explanation: Split it up!
Since this is a timeseries model, we'll use a strict-future holdout set when we are splitting our train set and our test set. We'll keep around three quarters of the original data for training and use the rest to test. We'll do this separately for our San Juan model and for our Iquitos model.
End of explanation
"""
from statsmodels.tools import eval_measures
import statsmodels.api as sm
import statsmodels.formula.api as smf
def get_best_model(train, test):
# Step 1: specify the form of the model
model_formula = "total_cases ~ 1 + " \
"reanalysis_specific_humidity_g_per_kg + " \
"reanalysis_dew_point_temp_k + " \
"station_min_temp_c + " \
"station_avg_temp_c"
grid = 10 ** np.arange(-8, -3, dtype=np.float64)
best_alpha = []
best_score = 1000
# Step 2: Find the best hyper parameter, alpha
for alpha in grid:
model = smf.glm(formula=model_formula,
data=train,
family=sm.families.NegativeBinomial(alpha=alpha))
results = model.fit()
predictions = results.predict(test).astype(int)
score = eval_measures.meanabs(predictions, test.total_cases)
if score < best_score:
best_alpha = alpha
best_score = score
print('best alpha = ', best_alpha)
print('best score = ', best_score)
# Step 3: refit on entire dataset
full_dataset = pd.concat([train, test])
model = smf.glm(formula=model_formula,
data=full_dataset,
family=sm.families.NegativeBinomial(alpha=best_alpha))
fitted_model = model.fit()
return fitted_model
sj_best_model = get_best_model(sj_train_subtrain, sj_train_subtest)
iq_best_model = get_best_model(iq_train_subtrain, iq_train_subtest)
figs, axes = plt.subplots(nrows=2, ncols=1)
# plot sj
sj_train['fitted'] = sj_best_model.fittedvalues
sj_train.fitted.plot(ax=axes[0], label="Predictions")
sj_train.total_cases.plot(ax=axes[0], label="Actual")
# plot iq
iq_train['fitted'] = iq_best_model.fittedvalues
iq_train.fitted.plot(ax=axes[1], label="Predictions")
iq_train.total_cases.plot(ax=axes[1], label="Actual")
plt.suptitle("Dengue Predicted Cases vs. Actual Cases")
plt.legend()
"""
Explanation: Training time
This is where we start getting down to business. As we noted above, we'll train a NegativeBinomial model, which is often used for count data where the mean and the variance are very different. In this function we have three steps: first, specify the functional form of the model; second, grid-search for the best value of the alpha hyperparameter using the held-out data; and third, refit the best model on the entire dataset.
End of explanation
"""
sj_test, iq_test = preprocess_data('../data-processed/dengue_features_test.csv')
sj_predictions = sj_best_model.predict(sj_test).astype(int)
iq_predictions = iq_best_model.predict(iq_test).astype(int)
submission = pd.read_csv("../data-processed/submission_format.csv",
index_col=[0, 1, 2])
submission.total_cases = np.concatenate([sj_predictions, iq_predictions])
submission.to_csv("../data-processed/benchmark.csv")
"""
Explanation: Reflecting on our performance
These graphs can actually tell us a lot about where our model is going wrong and give us some good hints about where investments will improve the model performance. For example, we see that our model in blue does track the seasonality of dengue cases. However, the timing of the seasonality of our predictions has a mismatch with the actual results. One potential reason for this is that our features don't look far enough into the past--that is to say, we are asking to predict cases at the same time as we are measuring precipitation. Because dengue is mosquito-borne, and the mosquito lifecycle depends on water, we need to take both the life of a mosquito and the time between infection and symptoms into account when modeling dengue. This is a critical avenue to explore when improving this model.
The other important error is that our predictions are relatively consistent--we miss the spikes that are large outbreaks. One reason is that we don't take into account the contagiousness of dengue. A possible way to account for this is to build a model that progressively predicts a new value while taking into account the previous prediction. By training on the dengue outbreaks and then using the predicted number of patients in the week before, we can start to model this time dependence that the current model misses.
So, we know we're not going to win this thing, but let's submit the model anyway!
End of explanation
"""
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/3', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
IS-ENES-Data/submission_forms | test/workflow-demo.ipynb | apache-2.0 | # do this in case you want to change imported module code while working with this notebook
# -- (for development and testing purposes only)
%load_ext autoreload
%autoreload 2
# to generate an empty project form including all options for variables
# e.g.:
ACTIVITY_STATUS = "0:open, 1:in-progress ,2:action-required, 3:paused,4:closed"
ERROR_STATUS = "0:open,1:ok,2:error"
ENTITY_STATUS = "0:open,1:stored,2:submitted,3:re-opened,4:closed"
CHECK_STATUS = "0:open,1:warning, 2:error,3:ok"
import dkrz_forms
#from dkrz_forms import form_handler, utils
#sf_t = utils.generate_project_form('ESGF_replication')
#print(checks.get_options(sf_t.sub.activity.status))
"""
Explanation: DKRZ data ingest information handling
This demo notebook is for data managers only !
The submission_forms package provides a collection of components to support the management of information related to data ingest related activities (data transport, data checking, data publication and data archival):
data submission related information management
who, when, what, for which project, data characteristics
data management related information management
ingest, quality assurance, publication, archiving
Background: approaches at other sites
example workflows in other data centers:
* http://eidc.ceh.ac.uk/images/ingestion-workflow/view
* http://www.mdpi.com/2220-9964/5/3/30/pdf
* https://www.rd-alliance.org/sites/default/files/03%20Nurnberger%20-%20DataPublishingWorkflows-CollabMtg20151208_V03.pdf
* http://ropercenter.cornell.edu/polls/deposit-data/
* https://www.arm.gov/engineering/ingest
* https://eosweb.larc.nasa.gov/GEWEX-RFA/documents/data_ingest.txt
* https://eosweb.larc.nasa.gov/GEWEX-RFA/documents/how_to_participate.html
* http://www.nodc.noaa.gov/submit/ online tool
* https://www2.cisl.ucar.edu/resources/cmip-analysis-platform
* https://xras-submit-ncar.xsede.org/
* http://cmip5.whoi.edu/
* https://pypi.python.org/pypi/cmipdata/0.6
DKRZ ingest workflow system
Data ingest request via
* jupyter notebook on server (http://data-forms.dkrz.de:8080), or
* jupyter notebook filled at home and sent to DKRZ (email, rt system)
* download form_template from: (tbd. github or redmine or recommend pypi installation only - which includes all templates .. )
Data ingest request workflow:
* ingest request related information is stored in json format in git repo
* all workflow steps are reflected in json subfields
* workflow json format has W3C prov aligned schema and can be transformed to W3C prov format (and visualized as a prov graph)
* for search git repo can be indexed into a (in-memory) key-value DB
End of explanation
"""
## To do: include different examples how to query for data ingest activities based on different properties
#info_file = "/home/stephan/tmp/Repos/form_repo/test/test_testsuite_123.json"
#info_file = "/home/stephan/Forms/local_repo/test/test_testsuite_1.json"
#info_file = "/home/stephan/Forms/local_repo/CORDEX/CORDEX_kindermann_2.json"
info_file = "/opt/jupyter/notebooks/form_directory/CORDEX/CORDEX_mm_mm.json"
from dkrz_forms import form_handler, utils,checks,wflow_handler
from datetime import datetime
my_form = utils.load_workflow_form(info_file)
wflow_dict = wflow_handler.get_wflow_description(my_form)
list(wflow_dict.values())
wflow_handler.rename_action('data_submission_review',my_form)
my_form = wflow_handler.start_action('data_submission_review',my_form,"stephan kindermann")
myform = wflow_handler.update_action('data_submission_review',my_form,"stephan kindermann")
review_report = {}
review_report['comment'] = 'needed to change and correct submission form'
review_report['additional_info'] = "mail exchange with a@b with respect to question ..."
myform = wflow_handler.finish_action('data_submission_review',my_form,"stephan kindermann",review_report)
"""
Explanation: demo examples - step by step
Data managers have two separate application scenarios for data ingest information management:
* interactive information adaptation for specific individual data ingest activities
* automatic information adaptation by integrating into their own data management scripts (e.g. data quality assurance or data publication)
* example: a successful ESGF publication can automatically update the publication workflow step information
Step 1: find and load a specific data ingest activity related form
Alternative A)
check out out git repo https://gitlab.dkrz.de/DKRZ-CMIP-Pool/data_forms_repo
this repo contains all completed submission forms
all data manager related changes are also committed there
subdirectories in this repo relate to the individual projects (e.g. CMIP6, CORDEX, ESGF_replication, ..)
each entry there contains the last name of the data submission originator
Alternative B) (not yet documented, only prototype)
use the search interface and API of a search index on all submission forms
End of explanation
"""
?my_form
?my_form.sub
my_form.sub.entity_out.report.
"""
Explanation: interactive "help": use ?form.part and tab completion:
End of explanation
"""
sf = form_handler.save_form(my_form, "init")
report = checks.check_report(my_form,"sub")
checks.display_report(report)
my_form.rev.entity_in.check_status
"""
Explanation: Display status of report
End of explanation
"""
my_form.sub.activity.ticket_url
part = dkrz_forms.checks.check_step_form(my_form,"sub")
dkrz_forms.checks.display_check(part,"sub")
## global check
res = checks.check_generic_form(my_form)
checks.display_checks(my_form,res)
print(my_form.sub.entity_out.status)
print(my_form.rev.entity_in.form_json)
print(my_form.sub.activity.ticket_id)
"""
Explanation: Display status of form
End of explanation
"""
print(my_form.workflow)
"""
Explanation: Step 2: complete information in specific workflow step
workflow steps of specific project are given in my_form.workflow
normally my_form.workflow is given as
[[u'sub', u'data_submission'], [u'rev', u'data_submission_review'], [u'ing', u'data_ingest'], [u'qua', u'data_quality_assurance']]
thus my_form.sub would contain the data_submission related information, my_form.rev the review related one etc.
the workflow step related information dictionaries are configured based on config/project_config.py
End of explanation
"""
review = my_form.rev
?review.activity
"""
Explanation: each workflow_step dictionary is structured consistently according to
* activity: activity related information
* agent: responsible person related info
* entity_out: specific output information of this workflow step
End of explanation
"""
my_form.rev.
workflow_form = utils.load_workflow_form(info_file)
review = workflow_form.rev
# any additional information keys can be added,
# yet they are invisible to generic information management tools ..
workflow_form.status = "review"
review.activity.status = "1:in-review"
review.activity.start_time = str(datetime.now())
review.activity.review_comment = "data volume check to be done"
review.agent.responsible_person = "sk"
sf = form_handler.save_form(workflow_form, "sk: review started")
review.activity.status = "3:accepted"
review.activity.ticket_id = "25389"
review.activity.end_time = str(datetime.now())
review.entity_out.comment = "This submission is related to submission abc_cde"
review.entity_out.tag = "sub:abc_cde" # tags are used to relate different forms to each other
review.entity_out.report = {'x':'y'} # result of validation in a dict (self defined properties)
# ToDo: test and document save_form for data managers (config setting for repo)
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
"""
Explanation: workflow step: submission review
ToDo: Split in start/end information update actions
End of explanation
"""
workflow_form = utils.load_workflow_form(info_file)
ingest = workflow_form.ing
?ingest.entity_out
# agent related info
workflow_form.status = "ingest"
ingest.activity.status = "started"
ingest.agent.responsible_person = "hdh"
ingest.activity.start_time=str(datetime.now())
# activity related info
ingest.activity.comment = "data pull: credentials needed for remote site"
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
ingest.activity.status = "completed"
ingest.activity.end_time = str(datetime.now())
# report of the ingest process (entity_out of ingest workflow step)
ingest_report = ingest.entity_out
ingest_report.tag = "a:b:c" # tag structure to be defined
ingest_report.status = "completed"
# free entries for detailed report information
ingest_report.report.remote_server = "gridftp.awi.de://export/data/CMIP6/test"
ingest_report.report.server_credentials = "in server_cred.krb keypass"
ingest_report.report.target_path = ".."
sf = form_handler.save_form(workflow_form, "kindermann: form_review()")
ingest_report.report.
"""
Explanation: add data ingest step related information
Comment: alternatively, workflow_step related information could also be assigned directly via dictionaries in tools, yet this is only recommended for data managers who make sure the structure stays consistent with the preconfigured one given in config/project_config.py
* example validation.activity._dict_ = data_manager_generated_dict
End of explanation
"""
from datetime import datetime
workflow_form = utils.load_workflow_form(info_file)
qua = workflow_form.qua
workflow_form.status = "quality assurance"
qua.agent.responsible_person = "hdh"
qua.activity.status = "starting"
qua.activity.start_time = str(datetime.now())
sf = form_handler.save_form(workflow_form, "hdh: qa start")
qua.entity_out.status = "completed"
qua.entity_out.report = {
"QA_conclusion": "PASS",
"project": "CORDEX",
"institute": "CLMcom",
"model": "CLMcom-CCLM4-8-17-CLM3-5",
"domain": "AUS-44",
"driving_experiment": [ "ICHEC-EC-EARTH"],
"experiment": [ "history", "rcp45", "rcp85"],
"ensemble_member": [ "r12i1p1" ],
"frequency": [ "day", "mon", "sem" ],
"annotation":
[
{
"scope": ["mon", "sem"],
"variable": [ "tasmax", "tasmin", "sfcWindmax" ],
"caption": "attribute <variable>:cell_methods for climatologies requires <time>:climatology instead of time_bnds",
"comment": "due to the format of the data, climatology is equivalent to time_bnds",
"severity": "note"
}
]
}
sf = form_handler.save_form(workflow_form, "hdh: qua complete")
"""
Explanation: workflow step: data quality assurance
End of explanation
"""
workflow_form = utils.load_workflow_form(info_file)
workflow_form.status = "publishing"
pub = workflow_form.pub
pub.agent.responsible_person = "katharina"
pub.activity.status = "starting"
pub.activity.start_time = str(datetime.now())
sf = form_handler.save_form(workflow_form, "kb: publishing")
pub.activity.status = "completed"
pub.activity.comment = "..."
pub.activity.end_time = ".."
pub.activity.report = {'model':"MPI-M"} # activity related report information
pub.entity_out.report = {'model':"MPI-M"} # the report of the publication action - all info characterizing the publication
sf = form_handler.save_form(workflow_form, "kb: published")
sf = form_handler.save_form(workflow_form, "kindermann: form demo run 1")
sf.sub.activity.commit_hash
"""
Explanation: workflow step: data publication
End of explanation
"""
|
chris-jd/udacity | intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb | mit |
# Imports
# Numeric Packages
from __future__ import division
import numpy as np
import pandas as pd
import scipy.stats as sps
# Plotting packages
import matplotlib.pyplot as plt
from matplotlib import ticker
import seaborn as sns
%matplotlib inline
sns.set_style('whitegrid')
sns.set_context('talk')
# Other
from datetime import datetime, timedelta
import statsmodels.api as sm
# Import turnstile data and convert datetime column to datetime python objects
df = pd.read_csv('turnstile_weather_v2.csv')
df['datetime'] = pd.to_datetime(df['datetime'])
"""
Explanation: Submission Notebook
Chris Madeley
ToC
References
Statistical Test
Linear Regression (Questions)
Visualisation
Conclusion
Reflection
Change Log
<b>Revision 1:</b> Corrections to questions 1.1, 1.4 based on the comments of the first review.
<b>Revision 2:</b> Corrections to questions 1.1, 1.4, 4.1 based on the comments of the second review.
Overview
These answers to the assignment questions have been prepared in a Jupyter (formerly IPython) notebook. This was chosen to allow clarity of working, to enable reproducibility, and because it should be suitable and useful for the target audience and can be converted to html easily. In general, the code necessary for each question is included below each question, although some blocks of necessary code fall in between questions.
End of explanation
"""
W, p = sps.shapiro(df.ENTRIESn_hourly.tolist())
print 'Shapiro-Wilk p-value for the null hypothesis that the data are Gaussian: {:.3f}'.format(p)
plt.figure(figsize=[8,5])
sns.distplot(df.ENTRIESn_hourly.tolist(), bins=np.arange(0,10001,500), kde=False)
plt.xlim(0,10000)
plt.yticks(np.arange(0,16001,4000))
plt.title('Histogram of Entry Count')
plt.show()
"""
Explanation: 0. References
In general, only standard package documentation has been used throughout. A couple of one-liners adapted from stackoverflow answers are noted in the code where used.
1. Statistical Test
1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value?
The objective of this project, as described in the project details, is to figure out if more people ride the subway when it is raining versus when it is not raining.
To evaluate this question through statistical testing, a hypothesis test is used. To perform such a test two opposing hypotheses are constructed: the null hypothesis and the alternative hypothesis. A hypothesis test considers one sample of data to determine if there is sufficient evidence to reject the null hypothesis for the entire population from which it came; that is, that the two underlying populations are different with statistical significance. The test is performed to a 'significance level' which determines the probability of Type 1 error occurring, where Type 1 error is the incorrect rejection of the null hypothesis; a false positive.
The null hypothesis is constructed to represent the status quo, where the treatment on a population has no effect on the population, chosen this way because the test controls only for Type 1 error. In the context of this assignment, the null hypothesis for this test is that, on average, no more people ride the subway when it is raining than when it is not; i.e. 'ridership' is the population and 'rain' is the treatment.
i.e. $H_0: \alpha_{\text{raining}} \leq \alpha_{\text{not raining}}$
where $\alpha$ represents the average ridership of the subway.
Consequently, the alternative hypothesis is given by:
$H_1: \alpha_{\text{raining}} > \alpha_{\text{not raining}}$.
Due to the way the hypothesis is framed, that we are only questioning whether ridership increases during rain, a single-tailed test is required. This is because we are only looking for a test statistic that shows an increase in ridership in order to reject the null hypothesis.
A significance value of 0.05 has been chosen to reject the null hypothesis for this test, due to it being the most commonly used value for testing.
1.2 Why is this statistical test applicable to the dataset? In particular, consider the assumptions that the test is making about the distribution of ridership in the two samples.
The Mann-Whitney U test was chosen for the hypothesis testing as it is agnostic to the underlying distribution. The entry values are definitely not normally distributed, as illustrated below both graphically and using the Shapiro-Wilk test.
End of explanation
"""
raindata = np.array(df[df.rain==1].ENTRIESn_hourly.tolist())
noraindata = np.array(df[df.rain==0].ENTRIESn_hourly.tolist())
U, p = sps.mannwhitneyu(raindata, noraindata)
print 'Results'
print '-------'
print 'p-value: {:.2f}'.format(p) # Note that p value calculated by scipy is single-tailed
print 'Mean with rain: {:.0f}'.format(raindata.mean())
print 'Mean without rain: {:.0f}'.format(noraindata.mean())
"""
Explanation: 1.3 What results did you get from this statistical test? These should include the following numerical values: p-values, as well as the means for each of the two samples under test.
End of explanation
"""
# Because the hour '0' is actually the entries from 20:00 to 24:00, it makes more sense to label it 24 when plotting data
df.datetime -= timedelta(seconds=1)
df['day']= df.datetime.apply(lambda x: x.day)
df['hour'] = df.datetime.apply(lambda x: x.hour+1)
df['weekday'] = df.datetime.apply(lambda x: not bool(x.weekday()//5))
df['day_week'] = df.datetime.apply(lambda x: x.weekday())
# The dataset includes the Memorial Day Public Holiday, which makes more sense to be classify as a weekend.
df.loc[df['day']==30,'weekday'] = False
"""
Explanation: 1.4 What is the significance and interpretation of these results?
Given the p-value < 0.05, we can reject the null hypothesis that the average ridership is not greater when it is raining, hence we can accept the alternative hypothesis that the average ridership is greater when it rains.
2. Linear Regression
End of explanation
"""
# Create a new column, stall_num2, representing the proportion of entries through a stall across the entire period.
total_patrons = df.ENTRIESn_hourly.sum()
# Dataframe with the units, and total passing through each unit across the time period
total_by_stall = pd.DataFrame(df.groupby('UNIT').ENTRIESn_hourly.sum())
# Create new variable = proportion of total entries
total_by_stall['stall_num2'] = total_by_stall.ENTRIESn_hourly/total_patrons
# Normalise by mean and standard deviation... fixes orders of magnitude errors in the output
total_stall_mean = total_by_stall.stall_num2.mean()
total_stall_stddev = total_by_stall.stall_num2.std()
total_by_stall.stall_num2 = (
(total_by_stall.stall_num2 - total_stall_mean)
/ total_stall_stddev
)
# Map the new variable back on the original dataframe
df['stall_num2'] = df.UNIT.apply(lambda x: total_by_stall.stall_num2[x])
"""
Explanation: 2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model:
Ordinary Least Squares (OLS) was used for the linear regression for this model.
2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features?
The final fit used in the model includes multiple components, two of which include the custom input stall_num2, described later:
ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday
- stall_num2 - includes the effect off the stall (unit) number;
- C(hour) - (dummy variable) included using dummy variables, since the entries across hours vary in a highly nonlinear way;
- weekday - true/false value for whether it is a weekday;
- rain:C(hour) - rain is included as the focus of the study, however it has been combined with the time of day;
- stall_num2 * C(hour) - (dummy variable) interaction between the stall number and time of day; and
- stall_num2 * weekday - interaction between the stall number and whether it is a weekday.
Additionally, an intercept was included in the model, statsmodels appears to automatically create N-1 dummies when this is included.
The variable stall_num2 was created as a substitute for using the UNIT column as a dummy variable. It was clear early on that using UNIT has a large impact on the model accuracy, which is intuitive given that the relative popularity of stalls will be important for predicting their entry count. However, with 240 stalls, a lot of dummy variables are created, and it makes interactions between UNIT and other variables impractical. Additionally, so many dummy variables throw away information relating to the similar response between units of similar popularity.
stall_num2 was constructed by calculating the number of entries that passed through each stall as a proportion of total entries for the entire period of the data. These results were then normalised to have μ=0 and σ=1 (although they're not normally distributed) to make the solution matrix well behaved; keep the condition number within normal bounds.
End of explanation
"""
for i in df.columns.tolist(): print i,
"""
Explanation: 2.3 Why did you select these features in your model?
The first step was to qualitatively assess which parameters may be useful for the model. This begins with looking at a list of the data, and the type of data, which has been captured, illustrated as follows.
End of explanation
"""
plt.figure(figsize=[8,6])
corr = df[['ENTRIESn_hourly',
'EXITSn_hourly',
'day_week', # Day of the week (0-6)
'weekday', # Whether it is a weekday or not
'day', # Day of the month
'hour', # In set [4, 8, 12, 16, 20, 24]
'fog',
'precipi',
'rain',
'tempi',
'wspdi']].corr()
sns.heatmap(corr)
plt.title('Correlation matrix between potential features')
plt.show()
"""
Explanation: Some parameters are going to be clearly important:
- UNIT/station - ridership will vary between entry points;
- hour - ridership will definitely be different between peak hour and 4am; and
- weekday - it is intutive that there will be more entries on weekdays; this is clearly illustrated in the visualisations in section 3.
Additionally, rain needed to be included as a feature due to it being the focus of the overall investigation.
Beyond these parameters, I selected a set of numeric features which may have an impact on the result, and initially computed and plotted the correlations between features in an effort to screen out some multicollinearity prior to linear regression. The results of this correlation matrix indicated moderately strong correlations between:
- Entries and exits - hence exits is not really suitable for predicting entries, which is somewhat intuitive
- Day of the week and weekday - obviously correlated, hence only one should be chosen.
- Day of the month and temperature are well correlated, and when plotted show a clear warming trend throughout May.
There are also a handful of weaker environmental correlations, such as precipitation and fog, rain and precipitation and rain and temperature.
End of explanation
"""
# Construct and fit the model
mod = sm.OLS.from_formula('ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday', data=df)
res = mod.fit_regularized()
s = res.summary2()
"""
Explanation: The final selection of variables was determined through trial and error of rational combinations of variables. The station popularity was captured using the stall_num2 variable, since it appears to create a superior model compared with just using UNIT dummies, and because it allowed the creation of combinations. Combining the station with hour was useful, and is intuitive since stations in the CBD will have the greatest patronage and have greater entries in the evening peak hour. A similar logic applies to combining the station and whether it is a weekday.
Various combinations of environmental variables were trialled in the model, but none appeared to improve the model accuracy and they were subsequently discarded. Since rain is the focus of this study it was retained, however it was combined with the time of day. The predictive strength of the model was not really improved by the inclusion of a rain parameter, however combining it with hour appears to improve its usefulness for providing insight, as will be discussed in section 4.
2.4 What are the parameters (also known as "coefficients" or "weights") of the non-dummy features in your linear regression model?
End of explanation
"""
s.tables[1].ix[['Intercept', 'stall_num2']]
"""
Explanation: Due to the use of several combinations, there are very few non-dummy features, with the coefficients illustrated below. Since stall_num2 is also used in several combinations, its individual coefficient doesn't prove very useful.
End of explanation
"""
s.tables[1].ix[[i for i in s.tables[1].index if i[:5]=='stall']]
"""
Explanation: However, looking at all the combinations for stall_num2 provides greater insight. Here we can see that activity is greater on weekdays, and greatest in the 16:00-20:00hrs block. It is lowest in the 00:00-04:00hrs block, which is not shown as it was removed by the model due to the generic stall_num2 parameter being present; the other combinations are effectively referenced to the 00:00-04:00hrs block.
End of explanation
"""
s.tables[1].ix[[i for i in s.tables[1].index if i[:4]=='rain']]
"""
Explanation: Even more interesting are the coefficients for the rain combinations. These appear to indicate that patronage increases in the 08:00-12:00 and 16:00-20:00 blocks, corresponding to peak hour. Conversely, subway entries are lower at all other times. Could it be that subway usage increases if it is raining when people are travelling to and from work, but decreases otherwise because people prefer not to travel in the rain at all?
End of explanation
"""
print 'Model Coefficient of Determination (R-squared): {:.3f}'.format(res.rsquared)
"""
Explanation: 2.5 What is your model’s R2 (coefficients of determination) value?
End of explanation
"""
residuals = res.resid
sns.set_style('whitegrid')
sns.distplot(residuals,bins=np.arange(-10000,10001,200),
kde = False, # kde_kws={'kernel':'gau', 'gridsize':4000, 'bw':100},
fit=sps.cauchy, fit_kws={'gridsize':4000})
plt.xlim(-5000,5000)
plt.title('Distribution of Residuals\nwith fitted cauchy Distribution overlaid')
plt.show()
"""
Explanation: The final R-squared value of 0.74 is much greater than that of earlier models that used UNIT as a dummy variable, which had R-squared values around 0.55.
2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value?
To evaluate the goodness of fit the residuals of the model have been evaluated in two ways. First, a histogram of the residuals has been plotted below. The distribution of residuals is encouragingly symmetric. However efforts to fit a normal distribution found distributions which underestimated the frequency at the mode and tails. Fitting a fat-tailed distribution, such as the Cauchy distribution below, was far more successful. I'm not sure if there's a good reason why it's worked out this way (but would love to hear ideas as to why).
End of explanation
"""
sns.set_style('whitegrid')
fig = plt.figure(figsize=[6,6])
plt.xlabel('ENTRIESn_hourly')
plt.ylabel('Residuals')
plt.scatter(df.ENTRIESn_hourly, residuals,
c=(df.stall_num2*total_stall_stddev+total_stall_mean)*100, # denormalise values
cmap='YlGnBu')
plt.colorbar(label='UNIT Relative Traffic (%)')
plt.plot([0,20000],[0,-20000], ls=':', c='0.7', lw=2) # Line to show negative prediction values (i.e. negative entries)
plt.xlim(xmin=0)
plt.ylim(-20000,25000)
plt.xticks(rotation='45')
plt.title('Model Residuals vs. Expected Value')
plt.show()
"""
Explanation: Secondly, a scatterplot of the residuals against the expected values is plotted. As expected, the largest residuals are associated with cases where the traffic is largest. In general the model appears to underpredict the traffic at the busiest of units. Also clear on this plot is how individual stations form a 'streak' of points on the diagonal. This is because the model essentially makes a prediction for each station per hour per for weekdays and weekends. The natural variation of the actual result in this timeframe creates the run of points.
End of explanation
"""
print 'Condition Number: {:.2f}'.format(res.condition_number)
"""
Explanation: Additionally, note that the condition number for the final model is relatively low, hence there don't appear to be any collinearity issues with this model. By comparison, when UNIT was included as a dummy variable instead, the correlation was weaker and the condition number was up around 220.
End of explanation
"""
sns.set_style('white')
sns.set_context('talk')
mydf = df.copy()
mydf['rain'] = mydf.rain.apply(lambda x: 'Raining' if x else 'Not Raining')
raindata = df[df.rain==1].ENTRIESn_hourly.tolist()
noraindata = df[df.rain==0].ENTRIESn_hourly.tolist()
fig = plt.figure(figsize=[9,6])
ax = fig.add_subplot(111)
plt.hist([raindata,noraindata],
normed=True,
bins=np.arange(0,11500,1000),
color=['dodgerblue', 'indianred'],
label=['Raining', 'Not Raining'],
align='right')
plt.legend()
sns.despine(left=True, bottom=True)
# http://stackoverflow.com/questions/9767241/setting-a-relative-frequency-in-a-matplotlib-histogram
def adjust_y_axis(x, pos):
return '{:.0%}'.format(x * 1000)
ax.yaxis.set_major_formatter(ticker.FuncFormatter(adjust_y_axis))
plt.title('Histogram of Subway Entries per 4 hour Block per Gate')
plt.ylabel('Proportion of Total Entries')
plt.xlim(500,10500)
plt.xticks(np.arange(1000,10001,1000))
plt.show()
"""
Explanation: In summary, it appears that this linear model has done a reasonable job of predicting ridership in this instance. Clearly some improvements are possible (like fixing the predictions of negative entries!), but given there will always be a degree of random variation, an R-squared value of 0.74 for a linear model seems quite reasonable. To be sure of the model suitability the data should be split into training/test sets. Additionally, more data from extra months could prove beneficial.
3. Visualisation
3.1 One visualization should contain two histograms: one of ENTRIESn_hourly for rainy days and one of ENTRIESn_hourly for non-rainy days.
End of explanation
"""
# Plot to illustrate the average riders per time block for each weekday.
# First we need to sum up the entries per hour (category) per weekday across all units.
# This is done for every day, whilst retaining the 'day_week' field for convenience. reset_index puts it back into a standard dataframe
# For the sake of illustration, memorial day has been excluded since it would incorrectly characterise the Monday ridership
mydf = df.copy()
mydf = mydf[mydf.day!=30].pivot_table(values='ENTRIESn_hourly', index=['day','day_week','hour'], aggfunc=np.sum).reset_index()
# The second pivot takes the daily summed data, and finds the mean for each weekday/hour block.
mydf = mydf.pivot_table(values='ENTRIESn_hourly', index='hour', columns='day_week', aggfunc=np.mean)
# Generate plout using the seaborn heatplot function.
fig = plt.figure(figsize=[9,6])
timelabels = ['Midnight - 4am','4am - 8am','8am - 12pm','12pm - 4pm','4pm - 8pm','8pm - Midnight']
weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
plot = sns.heatmap(mydf, yticklabels=timelabels, xticklabels=weekdays)
plt.xlabel('') # The axis ticks are descriptive enough to negate the need for axis labels
plt.ylabel('')
plot.tick_params(labelsize=14) # Make stuff bigger!
# Make heatmap ticks bigger http://stackoverflow.com/questions/27832054/change-tick-size-on-colorbar-of-seaborn-heatmap
cax = plt.gcf().axes[-1]
cax.tick_params(labelsize=14)
plt.title('Daily NYC Subway Ridership\n(Data from May 2011)', fontsize=20)
plt.show()
"""
Explanation: Once both plots are normalised, the distributions of subway entries when raining and when not raining are almost identical. No useful differentiation can be made between the two datasets here.
3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like.
End of explanation
"""
|
makeyourowntextminingtoolkit/makeyourowntextminingtoolkit | A01_singular_value_decomposition.ipynb | gpl-2.0 |
# import numpy for SVD function
import numpy
# import matplotlib.pyplot for visualising arrays
import matplotlib.pyplot as plt
"""
Explanation: Learning About the Singular Value Decomposition
This notebook explores the Singular Value Decomposition (SVD)
End of explanation
"""
# create a really simple matrix
A = numpy.array([[-1,1], [1,1]])
# and show it
print("A = \n", A)
# plot the array
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("A")
p.plot(A[0,],A[1,],'ro')
plt.show()
"""
Explanation: Start With A Simple Matrix
End of explanation
"""
# break it down into an SVD
U, s, VT = numpy.linalg.svd(A, full_matrices=False)
S = numpy.diag(s)
# what are U, S and V
print("U =\n", U, "\n")
print("S =\n", S, "\n")
print("V^T =\n", VT, "\n")
for px in [(131,U, "U"), (132,S, "S"), (133,VT, "VT")]:
subplot = px[0]
matrix = px[1]
matrix_name = px[2]
p = plt.subplot(subplot)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title(matrix_name)
p.plot(matrix[0,],matrix[1,],'ro')
pass
plt.show()
"""
Explanation: Now Take the SVD
End of explanation
"""
# rebuild A2 from U.S.V
A2 = numpy.dot(U,numpy.dot(S,VT))
print("A2 = \n", A2)
# plot the reconstructed A2
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("A2")
p.plot(A2[0,],A2[1,],'ro')
plt.show()
"""
Explanation: Check U, S and V^T Do Actually Reconstruct A
End of explanation
"""
|
mzszym/oedes | examples/scl/scl-trapping.ipynb | agpl-3.0 |
%matplotlib inline
import matplotlib.pylab as plt
import oedes
import numpy as np
oedes.init_notebook() # for displaying progress bars
"""
Explanation: Steady-state space-charge-limited current with traps
This example shows how to simulate effects of a single trap level on current-voltage characteristics of a single carrier device.
End of explanation
"""
L = 200e-9 # device thickness, m
model = oedes.models.std.electrononly(L, traps=['trap'])
params = {
'T': 300, # K
'electrode0.workfunction': 0, # eV
'electrode1.workfunction': 0, # eV
'electron.energy': 0, # eV
'electron.mu': 1e-9, # m2/(Vs)
'electron.N0': 2.4e26, # 1/m^3
'electron.trap.energy': 0, # eV
'electron.trap.trate': 1e-22, # 1/(m^3 s)
'electron.trap.N0': 6.2e22, # 1/m^3
'electrode0.voltage': 0, # V
'electrode1.voltage': 0, # V
'epsilon_r': 3. # 1
}
"""
Explanation: Model and parameters
An electron-only device is simulated, without a contact barrier. Note that more trap levels can be included by modifying the traps= argument below. Each trap level should have a unique name.
End of explanation
"""
trapenergy_sweep = oedes.sweep('electron.trap.energy',np.asarray([-0.45, -0.33, -0.21, 1.]))
voltage_sweep = oedes.sweep('electrode0.voltage', np.logspace(-3, np.log10(20.), 100))
"""
Explanation: Sweep parameters
For simplicity, the case of absent traps is modeled by putting trap level 1 eV above transport level. This makes trap states effectively unoccupied.
End of explanation
"""
c=oedes.context(model)
for tdepth,ct in c.sweep(params, trapenergy_sweep):
for _ in ct.sweep(ct.params, voltage_sweep):
pass
v,j = ct.teval(voltage_sweep.parameter_name,'J')
oedes.testing.store(j, rtol=1e-3) # for automatic testing
if tdepth < 0:
label = 'no traps'
else:
label = 'trap depth %s eV' % tdepth
plt.plot(v,j,label=label)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('V')
plt.ylabel(r'$\mathrm{A/m^2}$')
plt.legend(loc=0,frameon=False);
"""
Explanation: Result
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td2a_eco/TD2A_eco_les_API.ipynb | mit |
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.eco - API, API REST
A short review of REST APIs.
End of explanation
"""
import requests
data_json = requests.get("http://api.worldbank.org/v2/countries?incomeLevel=LMC&format=json").json()
data_json
data_json[0]
# We can see that some information is missing:
# there is a total of 52 elements
data_json_page_2 = requests.get("http://api.worldbank.org/v2/countries?incomeLevel=LMC&format=json&page=2").json()
data_json_page_2
# to get a single observation
# we can see in the object that element 0 corresponds to paging information
data_json[1][0]
"""
Explanation: Definition:
API, apart from being a word worth 5 points at Scrabble, what is it exactly?
API stands for Application Programming Interface. The most important word is "interface", and it is the simplest one, because we all use interfaces.
OK, and what is an interface?
Larousse definition: "An interface is a device that allows exchanges and interactions between different actors"
Put simply, an API is an efficient way to make two applications communicate with each other: concretely, a service provider makes a codified interface available to developers, which allows them to obtain information through requests.
Without going into technical detail, the dialogue looks like: "send me your address in the form X = street, Y = city, Z = country" and I, in return, will send you the code to display on your site to get the interactive map.
Existing APIs
More and more websites make APIs available to developers and other curious people.
To name a few:
Twitter : https://dev.twitter.com/rest/public
Facebook : https://developers.facebook.com/
Instagram : https://www.instagram.com/developer/
Spotify : https://developer.spotify.com/web-api/
Or also:
Pole Emploi : https://www.emploi-store-dev.fr/portail-developpeur-cms/home.html
SNCF : https://data.sncf.com/api
AirFrance KLM : https://developer.airfranceklm.com/Our_Apis
World Bank : https://datahelpdesk.worldbank.org/knowledgebase/topics/125589
How do you talk to an API?
Most APIs give examples of how to communicate with the data available on the site.
Simply put, you have to find the URL that returns the data you want.
For example, with the World Bank API, here is how a request for World Bank data is written:
http://api.worldbank.org/countries?incomeLevel=LMC
With this URL, we ask for the list of countries whose income level is LMC, that is, "Lower middle income".
When you click on the link, the site returns XML data, which looks quite a lot like what we saw earlier with scraping: a structure with tags that open and close.
When you look more closely, you see that the following information appears:
Country code | Country name | Region | Income classification | Lending types for these countries | Capital city | Longitude | Latitude
<wb:country id="ARM">
<wb:iso2Code>AM</wb:iso2Code>
<wb:name>Armenia</wb:name>
<wb:region id="ECS">Europe & Central Asia</wb:region>
<wb:adminregion id="ECA">Europe & Central Asia (excluding high income)</wb:adminregion>
<wb:incomeLevel id="LMC">Lower middle income</wb:incomeLevel>
<wb:lendingType id="IBD">IBRD</wb:lendingType>
<wb:capitalCity>Yerevan</wb:capitalCity>
<wb:longitude>44.509</wb:longitude>
<wb:latitude>40.1596</wb:latitude>
</wb:country>
<wb:country id="BGD">
<wb:iso2Code>BD</wb:iso2Code>
<wb:name>Bangladesh</wb:name>
<wb:region id="SAS">South Asia</wb:region>
<wb:adminregion id="SAS">South Asia</wb:adminregion>
<wb:incomeLevel id="LMC">Lower middle income</wb:incomeLevel>
<wb:lendingType id="IDX">IDA</wb:lendingType>
<wb:capitalCity>Dhaka</wb:capitalCity>
<wb:longitude>90.4113</wb:longitude>
<wb:latitude>23.7055</wb:latitude>
</wb:country>
Using this URL instead: http://api.worldbank.org/countries?incomeLevel=LMC&format=json, we directly get JSON, which in the end is almost like a dictionary in Python.
Nothing simpler, then, when asking something from an API: you just need the right URL.
And Python: how does it talk to APIs?
This is where we come back to the basics: we will need Python's requests module and, depending on the API, a parser such as BeautifulSoup, or nothing at all if we manage to get JSON.
We will use the requests module and its get method: give it the URL of the API we are interested in, ask it to turn the response into JSON, and that's it!
Calling the World Bank API
End of explanation
"""
import os
from pyquickhelper.loghelper import get_password
key = get_password("tastekid", "ensae_teaching_cs,key")
if key is None:
raise ValueError("password cannot be None.")
"""
Explanation: Calling the Tastekid API
The World Bank was fairly gentle: let's move on to something a bit tougher. We will use the API of Tastekid, a recommendation site for movies, books, etc.
To do so, we first need to create an account:
End of explanation
"""
url = "https://tastedive.com/api/similar?q=pulp+fiction&info=1&k={}".format(key)
recommandations_res = requests.get(url)
if "401 Unauthorized" in recommandations_res.text:
print("Le site tastekid n'accepte pas les requêtes non authentifiée.")
print(recommandations_res.text)
recommandations_res = None
if recommandations_res is not None:
try:
recommandations = recommandations_res.json()
except Exception as e:
print(e)
        # Sometimes the returned JSON is malformed; let's look at why.
print()
raise Exception(recommandations_res.text) from e
if recommandations_res is not None:
print(str(recommandations)[:2000])
# on nous rappelle les informations sur l'élement que l'on recherche : Pulp Fiction
recommandations['Similar']['Info']
# on nous donnes des livres / filmes proches selon le gout des gens
for element in recommandations['Similar']['Results'] :
print(element['Name'],element['Type'])
"""
Explanation: To ask the API which works are similar to Pulp Fiction, we use the following request
End of explanation
"""
recommandations_films = requests.get("https://tastedive.com/api/similar?q=pulp+fiction&type=movie&info=1&k={}"
.format(key)).json()
print(str(recommandations_films)[:2000])
# we are given books / movies that are close according to people's tastes
for element in recommandations_films['Similar']['Results'] :
print(element['Name'],element['Type'])
film_suivant = "Reservoir Dogs"
recommandations_suivantes_films = requests.get(
"https://tastedive.com/api/similar?q={}&type=movie&info=1&k={}"
.format(film_suivant, key)).json()
# it returns books / movies that are close matches according to people's tastes
for element in recommandations_suivantes_films['Similar']['Results']:
    print(element['Name'], element['Type'])
## We can then compare the movies common to both searches
liste1 = [element['Name'] for element in recommandations_films['Similar']['Results']]
liste2 = [element['Name'] for element in recommandations_suivantes_films['Similar']['Results']]
films_commun = set(liste1).intersection(liste2)
films_commun, len(films_commun)
films_non_partages = [f for f in liste1 if f not in liste2] + [f for f in liste2 if f not in liste1]
films_non_partages
"""
Explanation: We can also request only movies: we just add the option type=movie to the URL
End of explanation
"""
|
brian-rose/ClimateModeling_courseware | Lectures/Lecture04 -- Intro to CLIMLAB.ipynb | mit | # Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
"""
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 4: Building simple climate models using climlab
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) is available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
"""
Explanation: Contents
Introducing climlab
Using climlab to implement the zero-dimensional energy balance model
Run the zero-dimensional EBM out to equilibrium
A climate change scenario in the EBM
Further climlab resources
<a id='section1'></a>
1. Introducing climlab
climlab is a python package for process-oriented climate modeling.
It is based on a very general concept of a model as a collection of individual,
interacting processes. climlab defines a base class called Process, which
can contain an arbitrarily complex tree of sub-processes (each also some
sub-class of Process). Every climate process (radiative, dynamical,
physical, turbulent, convective, chemical, etc.) can be simulated as a stand-alone
process model given appropriate input, or as a sub-process of a more complex model.
New classes of model can easily be defined and run interactively by putting together an
appropriate collection of sub-processes.
climlab is an open-source community project. The latest code can always be found on github:
https://github.com/brian-rose/climlab
You can install climlab by doing
conda install -c conda-forge climlab
End of explanation
"""
# create a zero-dimensional domain with a single surface temperature
state = climlab.surface_state(num_lat=1, # a single point
water_depth = 100., # 100 meters slab of water (sets the heat capacity)
)
state
"""
Explanation: <a id='section2'></a>
2. Using climlab to implement the zero-dimensional energy balance model
Recall that we have worked with a zero-dimensional Energy Balance Model
$$ C \frac{dT_s}{dt} = (1-\alpha) Q - \tau \sigma T_s^4 $$
Here we are going to implement this exact model using climlab.
Yes, we have already written code to implement this model, but we are going to repeat this effort here as a way of learning how to use climlab.
There are tools within climlab to implement much more complicated models, but the basic interface will be the same.
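As a quick sanity check on this equation, the equilibrium temperature (where the two right-hand terms balance) can be computed by hand. The parameter values below are the ones used for the climlab processes later in this notebook:

```python
# At equilibrium, (1 - alpha) * Q = tau * sigma * Ts**4, so
# Ts = [ (1 - alpha) * Q / (tau * sigma) ]**(1/4)
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 341.3         # insolation, W m^-2
alpha = 0.299     # planetary albedo
tau = 0.612       # atmospheric transmissivity

Teq = ((1 - alpha) * Q / (tau * sigma)) ** 0.25
print(Teq - 273.15)  # about 15 degrees C
```

This is close to the observed global-mean surface temperature, and it is the value the model below should relax towards.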
End of explanation
"""
state['Ts']
"""
Explanation: Here we have created a dictionary called state with a single item called Ts:
End of explanation
"""
state.Ts
"""
Explanation: This dictionary holds the state variables for our model -- which in this case is a single number! It is a temperature in degrees Celsius.
For convenience, we can access the same data as an attribute (which lets us use tab-autocomplete when doing interactive work):
End of explanation
"""
climlab.to_xarray(state)
# create the longwave radiation process
olr = climlab.radiation.Boltzmann(name='OutgoingLongwave',
state=state,
tau = 0.612,
eps = 1.,
timestep = 60*60*24*30.)
# Look at what we just created
print(olr)
# create the shortwave radiation process
asr = climlab.radiation.SimpleAbsorbedShortwave(name='AbsorbedShortwave',
state=state,
insolation=341.3,
albedo=0.299,
timestep = 60*60*24*30.)
# Look at what we just created
print(asr)
# couple them together into a single model
ebm = olr + asr
# Give the parent process name
ebm.name = 'EnergyBalanceModel'
# Examine the model object
print(ebm)
"""
Explanation: It is also possible to see this state dictionary as an xarray.Dataset object:
End of explanation
"""
ebm.state
ebm.Ts
"""
Explanation: The object called ebm here is the entire model -- including its current state (the temperature Ts) as well as all the methods needed to integrate it forward in time!
The current model state, accessed two ways:
End of explanation
"""
print(ebm.time['timestep'])
print(ebm.time['steps'])
"""
Explanation: Here is some internal information about the timestep of the model:
End of explanation
"""
ebm.step_forward()
ebm.Ts
"""
Explanation: This says the timestep is 2592000 seconds (30 days!), and the model has taken 0 steps forward so far.
To take a single step forward:
End of explanation
"""
ebm.diagnostics
"""
Explanation: The model got colder!
To see why, let's look at some useful diagnostics computed by this model:
End of explanation
"""
ebm.OLR
ebm.ASR
"""
Explanation: This is another dictionary, now with two items. They should make sense to you.
Just like the state variables, we can access these diagnostics variables as attributes:
End of explanation
"""
for name, process in ebm.subprocess.items():
print(name)
print(process)
"""
Explanation: So why did the model get colder in the first timestep?
What do you think will happen next?
<a id='section3'></a>
3. Run the zero-dimensional EBM out to equilibrium
Let's look at how the model adjusts toward its equilibrium temperature.
Exercise:
Using a for loop, take 500 steps forward with this model
Store the current temperature at each step in an array
Make a graph of the temperature as a function of time
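A stand-alone sketch of these steps, using a plain explicit-Euler step for the same equation rather than the climlab object (the heat capacity is an assumed round number approximating a 100 m water slab):

```python
# Explicit-Euler integration of C dT/dt = (1 - alpha)*Q - tau*sigma*T^4
sigma = 5.67e-8              # Stefan-Boltzmann constant
Q, alpha, tau = 341.3, 0.299, 0.612
C = 4.0e3 * 1000.0 * 100.0   # assumed heat capacity of a 100 m water slab, J m^-2 K^-1
dt = 60 * 60 * 24 * 30.0     # 30-day timestep, matching the model above

T = 273.15 + 12.0            # starting temperature, K
history = []
for step in range(500):
    T += dt * ((1 - alpha) * Q - tau * sigma * T ** 4) / C
    history.append(T - 273.15)   # store degrees C for plotting

print(history[-1])  # approaches the equilibrium near 15 degrees C
```

The temperature history can then be plotted with plt.plot(history), using the matplotlib import from the top of the notebook.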
<a id='section4'></a>
4. A climate change scenario
Suppose we want to investigate the effects of a small decrease in the transmissivity of the atmosphere tau.
Previously we used the zero-dimensional model to investigate a hypothetical climate change scenario in which:
- the transmissivity of the atmosphere tau decreases to 0.57
- the planetary albedo increases to 0.32
How would we do that using climlab?
Recall that the model is composed of two sub-components:
End of explanation
"""
ebm.subprocess['OutgoingLongwave'].tau
"""
Explanation: The parameter tau is a property of the OutgoingLongwave subprocess:
End of explanation
"""
ebm.subprocess['AbsorbedShortwave'].albedo
"""
Explanation: and the parameter albedo is a property of the AbsorbedShortwave subprocess:
End of explanation
"""
ebm2 = climlab.process_like(ebm)
print(ebm2)
ebm2.subprocess['OutgoingLongwave'].tau = 0.57
ebm2.subprocess['AbsorbedShortwave'].albedo = 0.32
"""
Explanation: Let's make an exact clone of our model and then change these two parameters:
End of explanation
"""
# Computes diagnostics based on current state but does not change the state
ebm2.compute_diagnostics()
ebm2.ASR - ebm2.OLR
"""
Explanation: Now our model is out of equilibrium and the climate will change!
To see this without actually taking a step forward:
End of explanation
"""
ebm2.Ts
ebm2.step_forward()
ebm2.Ts
"""
Explanation: Should the model warm up or cool down?
Well, we can find out:
End of explanation
"""
ebm3 = climlab.process_like(ebm2)
ebm3.integrate_years(50)
# What is the current temperature?
ebm3.Ts
# How close are we to energy balance?
ebm3.ASR - ebm3.OLR
# We should be able to accomplish the exact same thing with explicit timestepping
for n in range(608):
ebm2.step_forward()
ebm2.Ts
ebm2.ASR - ebm2.OLR
"""
Explanation: Automatic timestepping
Often we want to integrate a model forward in time to equilibrium without needing to store information about the transient state.
climlab offers convenience methods to do this easily:
End of explanation
"""
%load_ext version_information
%version_information numpy, matplotlib, climlab
"""
Explanation: <a id='section5'></a>
5. Further climlab resources
We will be using climlab extensively throughout this course. Lots of examples of more advanced usage are found here in the course notes. Here are some links to other resources:
The documentation is hosted at https://climlab.readthedocs.io/en/latest/
Source code (for both software and docs) are at https://github.com/brian-rose/climlab
A video of a talk I gave about climlab at the 2018 AMS Python symposium (January 2018)
Slides from a talk and demonstration that I gave in February 2018 (The Apple Keynote version contains some animations that will not show up in the pdf version)
Version information
End of explanation
"""
|
idekerlab/graph-services | notebooks/DEMO.ipynb | mit | # Tested on:
!python --version
"""
Explanation: cxMate Service DEMO
By Ayato Shimada, Mitsuhiro Eto
This DEMO shows
1. detect communities using one of igraph's community detection algorithms
2. paint communities (nodes and edges) in different colors
3. perform layout using graph-tool's sfdp algorithm
End of explanation
"""
import requests
import json
url_community = 'http://localhost:80' # igraph's community detection service URL
url_layout = 'http://localhost:3000' # graph-tool's layout service URL
headers = {'Content-type': 'application/json'}
"""
Explanation: Send CX to service using requests module
Services run on a server
You don't have to install graph libraries in your local environment.
It is very easy to use python-igraph and graph-tool.
In order to send CX
requests : to send the CX file to the service from Python (curl can also be used).
json : to convert an object to a CX-formatted string.
End of explanation
"""
data = open('./yeastHQSubnet.cx') # 1.
parameter = {'type': 'leading_eigenvector', 'clusters': 5, 'palette': 'husl'} # 2.
r = requests.post(url=url_community, headers=headers, data=data, params=parameter) # 3.
"""
Explanation: Network used for DEMO
This DEMO uses yeastHQSubnet.cx as the original network.
- 2924 nodes
- 6827 edges
<img src="example1.png" alt="Drawing" style="width: 500px;"/>
1. igraph community detection and color generator service
In order to detect communities, igraph's community detection service can be used.
How to use the service on Jupyter Notebook
open the CX file using open()
set parameters in dictionary format (for the parameters, see the service documentation)
post the CX data to the service URL using requests.post()
End of explanation
"""
import re
with open('output1.cx', 'w') as f:
# single quotation -> double quotation
output = re.sub(string=str(r.json()['data']), pattern="'", repl='"')
f.write(output)
"""
Explanation: What happened?
The output contains
the graph with community membership + a color assignment for each group.
- node1 : group 1, red
- node2 : group 1, red
- node3 : group 2, green
...
You don't have to create your own color palette manually.
To save and inspect the output data, you can use r.json()['data']
Note
- When you use this output as input to the next service, you must use json.dumps(r.json()['data'])
- You must replace single quotation marks with double quotation marks in the output file.
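Since replacing quote characters breaks if any string in the network itself contains an apostrophe, a sketch of a more robust alternative is to serialize the object directly with json.dumps:

```python
import json

# Serialize the Python object directly instead of patching its repr():
# json.dumps handles quoting, escaping, and nested strings correctly.
cx_like = {'nodes': [{'id': "it's a node"}]}   # an apostrophe breaks naive quote replacement
text = json.dumps(cx_like)
print(text)
parsed_back = json.loads(text)                 # round-trips cleanly
```

The same text can then be written to the output file in place of the re.sub result above.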
End of explanation
"""
data2 = json.dumps(r.json()['data']) # 1.
parameter = {'only-layout': False, 'groups': 'community'} # 2.
r2 = requests.post(url=url_layout, headers=headers, data=data2, params=parameter) # 3.
"""
Explanation: 3. graph-tool layout service
To perform a layout algorithm, graph-tool's layout service can be used.
C++ optimized parallel, community-structure-aware layout algorithms
You can use the community structure as a parameter for the layout, and the result reflects that structure.
You can use graph-tool's service in the same way as igraph's service.
Both the input and output of a cxMate service are CX, NOT igraph or graph-tool objects.
So, you don't have to convert igraph objects to graph-tool objects.
<img src="service.png" alt="Drawing" style="width: 750px;"/>
How to use the service on Jupyter Notebook
prepare the CX data using json.dumps(r.json()['data'])
set parameters in dictionary format (for the parameters, see the service documentation)
post the CX data to the service URL using requests.post()
End of explanation
"""
import re
with open('output2.cx', 'w') as f:
# single quotation -> double quotation
output = re.sub(string=str(r2.json()['data']), pattern="'", repl='"')
f.write(output)
"""
Explanation: Save .cx file
To save and inspect the output data, you can use r.json()['data']
End of explanation
"""
%matplotlib inline
import seaborn as sns, numpy as np
from ipywidgets import interact, FloatSlider
"""
Explanation: Color Palette
If you want to change the color of the communities, you can do it easily.
Many color palettes of seaborn can be used. (See http://seaborn.pydata.org/tutorial/color_palettes.html)
End of explanation
"""
def show_husl(n):
sns.palplot(sns.color_palette('husl', n))
print('palette: husl')
interact(show_husl, n=10);
"""
Explanation: Default Palette
If the parameter 'palette' is not set, 'husl' is used as the color palette.
End of explanation
"""
def show_pal0(palette):
sns.palplot(sns.color_palette(palette, 24))
interact(show_pal0, palette='deep muted pastel bright dark colorblind'.split());
sns.choose_colorbrewer_palette('qualitative');
sns.choose_colorbrewer_palette('sequential');
"""
Explanation: Other palettes
End of explanation
"""
|
comp-journalism/Baseline_Problem_for_Algorithm_Audits | Statistics.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 3)
plt.rcParams['font.family'] = 'sans-serif'
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
HC_baseline = pd.read_csv('./BASELINE/HC_baseline_full_ratings.csv')
DT_baseline = pd.read_csv('./BASELINE/DT_baseline_full_ratings.csv')
HC_imagebox = pd.read_csv('./IMAGE_BOX/HC_imagebox_full_ratings.csv')
DT_imagebox = pd.read_csv('./IMAGE_BOX/DT_imagebox_full_ratings.csv')
"""
Explanation: Statistical analysis
End of explanation
"""
print("Baseline skew: ", stats.skew(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))
"""
Explanation: Statistical analysis on Allsides bias rating:
No sources from the image boxes were rated in the Allsides bias rating dataset. Therefore, comparisons between the bias of baseline sources and image box sources could not be performed.
Statistical analysis on Facebook Study bias rating:
Hillary Clinton Image Box images versus Baseline images source bias according to Facebook bias ratings:
End of explanation
"""
print("Baseline skew: ", stats.skewtest(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skewtest(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))
stats.ks_2samp(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3],
HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3])
HC_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='blue')
HC_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')
"""
Explanation: from the stats page "For normally distributed data, the skewness should be about 0. A skewness value > 0 means that there is more weight in the right tail of the distribution. The function skewtest can be used to determine if the skewness value is close enough to 0, statistically speaking."
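A small NumPy-only illustration of the sign convention (positive skew goes with a heavier right tail). This recomputes the third standardized moment, which is what scipy.stats.skew reports by default:

```python
import numpy as np

def skewness(x):
    # Third standardized moment: mean((x - mu)^3) / std^3
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 3) / np.std(x) ** 3

rng = np.random.RandomState(0)
print(skewness(rng.exponential(size=100000)))  # positive: long right tail
print(skewness(rng.normal(size=100000)))       # near zero: symmetric
```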
End of explanation
"""
print("Baseline skew: ", stats.skew(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3]))
stats.ks_2samp(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3],
DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3])
DT_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='red')
DT_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')
print("Number of missing ratings for Hillary Clinton Baseline data: ", len(HC_baseline[HC_baseline.facebookbias_rating == 999]))
print("Number of missing ratings for Hillary Clinton Image Box data: ", len(HC_imagebox[HC_imagebox.facebookbias_rating == 999]))
print("Number of missing ratings for Donald Trump Baseline data: ", len(DT_baseline[DT_baseline.facebookbias_rating == 999]))
print("Number of missing ratings for Donald Trump Image Box data: ", len(DT_imagebox[DT_imagebox.facebookbias_rating == 999]))
"""
Explanation: Donald Trump Image Box images versus Baseline images source bias according to Facebook bias ratings:
End of explanation
"""
def convert_to_ints(col):
if col == 'Left':
return -1
elif col == 'Center':
return 0
elif col == 'Right':
return 1
else:
return np.nan
HC_imagebox['final_rating_ints'] = HC_imagebox.final_rating.apply(convert_to_ints)
DT_imagebox['final_rating_ints'] = DT_imagebox.final_rating.apply(convert_to_ints)
HC_baseline['final_rating_ints'] = HC_baseline.final_rating.apply(convert_to_ints)
DT_baseline['final_rating_ints'] = DT_baseline.final_rating.apply(convert_to_ints)
HC_imagebox.final_rating_ints.value_counts()
DT_imagebox.final_rating_ints.value_counts()
"""
Explanation: The Kolmogorov-Smirnov analysis shows that the distribution of political representation across image sources is different between the baseline images and those found in the image box.
Statistical analysis on Allsides + Facebook + MondoTimes + my bias ratings:
Convert strings to integers:
End of explanation
"""
HC_baseline_counts = HC_baseline.final_rating.value_counts()
HC_imagebox_counts = HC_imagebox.final_rating.value_counts()
DT_baseline_counts = DT_baseline.final_rating.value_counts()
DT_imagebox_counts = DT_imagebox.final_rating.value_counts()
HC_baseline_counts.head()
normalised_bias_ratings = pd.DataFrame({'HC_ImageBox':HC_imagebox_counts,
'HC_Baseline' : HC_baseline_counts,
'DT_ImageBox': DT_imagebox_counts,
'DT_Baseline': DT_baseline_counts} )
normalised_bias_ratings
"""
Explanation: Prepare data for the chi-squared test
End of explanation
"""
normalised_bias_ratings = normalised_bias_ratings[:3]
"""
Explanation: Remove Unknown / unreliable row
End of explanation
"""
normalised_bias_ratings.loc[:,'HC_Baseline_pcnt'] = normalised_bias_ratings.HC_Baseline/normalised_bias_ratings.HC_Baseline.sum()*100
normalised_bias_ratings.loc[:,'HC_ImageBox_pcnt'] = normalised_bias_ratings.HC_ImageBox/normalised_bias_ratings.HC_ImageBox.sum()*100
normalised_bias_ratings.loc[:,'DT_Baseline_pcnt'] = normalised_bias_ratings.DT_Baseline/normalised_bias_ratings.DT_Baseline.sum()*100
normalised_bias_ratings.loc[:,'DT_ImageBox_pcnt'] = normalised_bias_ratings.DT_ImageBox/normalised_bias_ratings.DT_ImageBox.sum()*100
normalised_bias_ratings
normalised_bias_ratings.columns
HC_percentages = normalised_bias_ratings[['HC_Baseline_pcnt', 'HC_ImageBox_pcnt']]
DT_percentages = normalised_bias_ratings[['DT_Baseline_pcnt', 'DT_ImageBox_pcnt']]
"""
Explanation: Calculate percentages for plotting purposes
End of explanation
"""
stats.chisquare(f_exp=normalised_bias_ratings.HC_Baseline,
f_obs=normalised_bias_ratings.HC_ImageBox)
HC_percentages.plot.bar()
"""
Explanation: Test Hillary Clinton Image Box images against Baseline images:
End of explanation
"""
stats.chisquare(f_exp=normalised_bias_ratings.DT_Baseline,
f_obs=normalised_bias_ratings.DT_ImageBox)
DT_percentages.plot.bar()
"""
Explanation: Test Donald Trump Image Box images against Baseline images:
End of explanation
"""
|
ricklupton/ipysankeywidget | examples/More examples.ipynb | mit | from ipysankeywidget import SankeyWidget
from ipywidgets import Layout
"""
Explanation: More examples
<i class="fa fa-2x fa-paper-plane text-info fa-fw"> </i> Simple example
<i class="fa fa-2x fa-space-shuttle text-info fa-fw"> </i> Advanced examples
<i class="fa fa-2x fa-link text-info fa-fw"> </i> Linking and Layout
<i class="fa fa-2x fa-image text-info fa-fw"> </i> Exporting Images
End of explanation
"""
layout = Layout(width="300", height="200")
def sankey(margin_top=10, **value):
"""Show SankeyWidget with default values for size and margins"""
return SankeyWidget(layout=layout,
margins=dict(top=margin_top, bottom=0, left=30, right=60),
**value)
"""
Explanation: <i class="fa fa-gears fa-2x fa-fw text-info"></i> A convenience factory function
End of explanation
"""
links = [
{'source': 'A', 'target': 'B', 'value': 1},
{'source': 'B', 'target': 'C', 'value': 1},
{'source': 'A', 'target': 'D', 'value': 1},
]
sankey(links=links)
rank_sets = [
{ 'type': 'same', 'nodes': ['C', 'D'] }
]
sankey(links=links, rank_sets=rank_sets)
order = [
['A'],
['D', 'B'],
['C'],
]
sankey(links=links, order=order)
order = [
[ [ ], ['A'], [], ],
[ ['B'], [ ], ['D'] ],
[ [ ], ['C'], [] ],
]
sankey(links=links, order=order)
"""
Explanation: Rank assignment
Rank sets
You can adjust the left-right placement of nodes by putting them in rank sets: all nodes in the same set end up with the same rank.
End of explanation
"""
links = [
{'source': 'A', 'target': 'B', 'value': 1},
{'source': 'B', 'target': 'C', 'value': 1},
{'source': 'C', 'target': 'D', 'value': 1},
{'source': 'A', 'target': 'E', 'value': 0.5},
]
nodes = [
{'id': 'C', 'direction': 'l'},
{'id': 'D', 'direction': 'l'},
]
sankey(links=links, nodes=nodes)
"""
Explanation: Reversed nodes
Most nodes are assumed to link from left to right, but sometimes there are return links which should be shown from right to left.
End of explanation
"""
nodes = [
{'id': 'C', 'direction': 'r'},
{'id': 'D', 'direction': 'l'},
]
sankey(links=links, nodes=nodes)
nodes = [
{'id': 'C', 'direction': 'l'},
{'id': 'D', 'direction': 'r'},
]
sankey(links=links, nodes=nodes)
"""
Explanation: Variations:
End of explanation
"""
links = [
{'source': 'A', 'target': 'B', 'value': 3, 'type': 'x'},
{'source': 'B', 'target': 'C', 'value': 2, 'type': 'y'},
{'source': 'B', 'target': 'D', 'value': 1, 'type': 'z'},
]
sankey(links=links)
"""
Explanation: Styling
By default, the links are coloured according to their type:
End of explanation
"""
links = [
{'source': 'A', 'target': 'B', 'value': 3, 'color': 'steelblue'},
{'source': 'B', 'target': 'C', 'value': 2, 'color': '#aaa'},
{'source': 'B', 'target': 'D', 'value': 1, 'color': 'goldenrod'},
]
sankey(links=links)
"""
Explanation: You can also set the colours directly:
End of explanation
"""
nodes = [
{'id': 'B', 'title': 'Middle node', 'style': 'process' },
]
sankey(links=links, nodes=nodes)
"""
Explanation: Process titles default to their ids, but can be overridden. There is also one built-in alternative "style" of node:
- process is drawn with a thicker line
End of explanation
"""
%%html
<style>
.sankey .node {
font-style: italic;
}
</style>
"""
Explanation: Of course, you can also use CSS to adjust the styling:
End of explanation
"""
links = [
{'source': 'A1', 'target': 'B', 'value': 1.5, 'type': 'x'},
{'source': 'A1', 'target': 'B', 'value': 0.5, 'type': 'y'},
{'source': 'A2', 'target': 'B', 'value': 0.5, 'type': 'x'},
{'source': 'A2', 'target': 'B', 'value': 1.5, 'type': 'y'},
{'source': 'B', 'target': 'C', 'value': 2.0, 'type': 'x'},
{'source': 'B', 'target': 'C', 'value': 2.0, 'type': 'y'},
]
sankey(links=links, nodes=[])
sankey(links=links, align_link_types=True)
order = [
['A2', 'A1'],
['B'],
['C'],
]
sankey(links=links, align_link_types=True, order=order)
"""
Explanation: Aligning link types
End of explanation
"""
from ipywidgets import Button, VBox
links = [
{'source': 'A', 'target': 'B', 'value': 1},
{'source': 'B', 'target': 'C', 'value': 1},
{'source': 'A', 'target': 'D', 'value': 1},
]
order = [
['A'],
['D', 'B'],
['C'],
]
s = sankey(links=links, order=order)
def swap(x):
global order
order = [list(reversed(o)) for o in order]
s.order = order
b = Button(description='Swap')
b.on_click(swap)
VBox([b, s])
"""
Explanation: Dynamic updating
End of explanation
"""
links = [
{'source': 'A', 'target': 'B', 'value': 3, 'type': 'x'},
{'source': 'B', 'target': 'C', 'value': 2, 'type': 'y'},
{'source': 'B', 'target': 'D', 'value': 1, 'type': 'z'},
]
groups = [
{'id': 'G', 'title': 'Group', 'nodes': ['C', 'D']}
]
sankey(links=links, nodes=[], groups=groups, margin_top=30)
"""
Explanation: Node groups
End of explanation
"""
sankey(links=links, linkLabelFormat='.1f')
"""
Explanation: Link labels
Link labels show the numeric value of the link on the diagram:
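The value passed to linkLabelFormat is a d3-format specifier. d3-format is modeled on Python's format mini-language, so for the common specifiers you can preview what a label will look like with Python's built-in format():

```python
# Preview what a d3-format specifier like '.1f' will render as a link label.
# For fixed-point and percentage specifiers the output matches Python's format().
print(format(2.87, '.1f'))   # 2.9
print(format(0.25, '.0%'))   # 25%
```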
End of explanation
"""
links[2]['value'] = 0.1
links[1]['value'] = 2.9
sankey(links=links, linkLabelFormat='.1f')
sankey(links=links, linkLabelFormat='.1f', linkLabelMinWidth=4)
"""
Explanation: By default the labels for small links are hidden, but you can customize this using linkLabelMinWidth:
End of explanation
"""
links[0]['marker'] = 2.5
sankey(links=links)
"""
Explanation: Link markers
Warning: this feature is experimental and may be changed.
End of explanation
"""
links = [
{'source': 'A', 'target': 'B', 'value': 3, 'type': 'x', 'info_html': 'Hi!'},
{'source': 'B', 'target': 'C', 'value': 2, 'type': 'y', 'info_html': 'B <b>to</b> C'},
{'source': 'B', 'target': 'D', 'value': 1, 'type': 'z'},
]
sankey(links=links, show_link_info_html=True)
"""
Explanation: Extra link info
Warning: this feature is experimental and may be changed.
End of explanation
"""
links = [
{'source': 'A', 'target': 'B', 'value': 30},
{'source': 'B', 'target': 'C', 'value': 20},
{'source': 'B', 'target': 'D', 'value': 10},
]
nodes = [
{'id': 'A', 'position': [0, 50]},
{'id': 'B', 'position': [100, 50]},
{'id': 'C', 'position': [200, 30]},
{'id': 'D', 'position': [200, 100]},
]
w = sankey(
links=links,
nodes=nodes,
node_position_attr='position'
)
w
"""
Explanation: Custom layout
The d3-sankey-diagram layout algorithm can be bypassed completely by specifying coordinates for the nodes directly. This must be activated by the node_position_attr option.
End of explanation
"""
# Try changing this
w.scale = 2
# Try changing this
w.nodes[0]['position'] = [50, 50]
w.send_state()
# w.node_position_attr = None
"""
Explanation: The positions are in display coordinates, within the margins specified. The scale is set to 1 by default, if not specified. When node positions are specified manually, they are not affected by the scale -- only the width of the lines is scaled.
End of explanation
"""
|
trangel/Data-Science | reinforcement_learning/practice_reinforce.ipynb | gpl-3.0 | # This code creates a virtual display to draw game images on.
# If you are running locally, just ignore it
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make("CartPole-v0")
# gym compatibility: unwrap TimeLimit
if hasattr(env,'env'):
env=env.env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
"""
Explanation: REINFORCE in TensorFlow
Just like we did before for q-learning, this time we'll design a neural network to learn CartPole-v0 via policy gradient (REINFORCE).
End of explanation
"""
import tensorflow as tf
tf.reset_default_graph()
# create input variables. We only need <s,a,R> for REINFORCE
states = tf.placeholder('float32', (None,)+state_dim, name="states")
actions = tf.placeholder('int32', name="action_ids")
cumulative_rewards = tf.placeholder('float32', name="cumulative_returns")
import keras
import keras.layers as L
#sess = tf.InteractiveSession()
#keras.backend.set_session(sess)
#<define network graph using raw tf or any deep learning library>
#network = keras.models.Sequential()
#network.add(L.InputLayer(state_dim))
#network.add(L.Dense(200, activation='relu'))
#network.add(L.Dense(200, activation='relu'))
#network.add(L.Dense(n_actions, activation='linear'))
network = keras.models.Sequential()
network.add(L.Dense(256, activation="relu", input_shape=state_dim, name="layer_1"))
network.add(L.Dense(n_actions, activation="linear", name="layer_2"))
print(network.summary())
# logits: the (symbolic) linear outputs of the last layer of the network
logits = network(states)
policy = tf.nn.softmax(logits)
log_policy = tf.nn.log_softmax(logits)
# utility function to pick action in one given state
def get_action_proba(s):
return policy.eval({states: [s]})[0]
"""
Explanation: Building the policy network
For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
For numerical stability, please do not include the softmax layer into your network architecture.
We'll use softmax or log-softmax where appropriate.
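A quick NumPy illustration of the numerical-stability point (a sketch, independent of the TensorFlow graph below): computing log-softmax directly, with the max subtracted, stays finite even for extreme logits, whereas log(softmax(x)) would underflow to log(0):

```python
import numpy as np

def log_softmax(x):
    # Subtracting the max keeps the exponentials in a safe range.
    x = np.asarray(x, dtype=float)
    shifted = x - x.max()
    return shifted - np.log(np.exp(shifted).sum())

big_logits = np.array([1000.0, 0.0])
print(log_softmax(big_logits))                 # finite values: 0 and -1000
print(np.exp(log_softmax(big_logits)).sum())   # probabilities still sum to 1
```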
End of explanation
"""
# select log-probabilities for chosen actions, log pi(a_i|s_i)
indices = tf.stack([tf.range(tf.shape(log_policy)[0]), actions], axis=-1)
log_policy_for_actions = tf.gather_nd(log_policy, indices)
# REINFORCE objective function
# hint: you need to use log_policy_for_actions to get log probabilities for actions taken
J = tf.reduce_mean((log_policy_for_actions * cumulative_rewards), axis=-1)# <policy objective as in the last formula. Please use mean, not sum.>
# regularize with entropy: H(pi) = -sum_a pi(a|s) * log pi(a|s), averaged over states
entropy = -tf.reduce_mean(tf.reduce_sum(policy * log_policy, axis=-1))
# all network weights
all_weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) #<a list of all trainable weights in your network>
# weight updates. maximizing J is same as minimizing -J. Adding negative entropy.
loss = -J - 0.1*entropy
update = tf.train.AdamOptimizer().minimize(loss, var_list=all_weights)
"""
Explanation: Loss function and updates
We now need to define the objective and the update rule for the policy gradient.
Our objective function is
$$ J \approx { 1 \over N } \sum_{s_i,a_i} \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$
Following the REINFORCE algorithm, we can define our objective as follows:
$$ \hat J \approx { 1 \over N } \sum_{s_i,a_i} \log \pi_\theta (a_i | s_i) \cdot G(s_i,a_i) $$
When you compute the gradient of that function with respect to the network weights $ \theta $, it will become exactly the policy gradient.
End of explanation
"""
def get_cumulative_rewards(rewards, # rewards at each step
gamma=0.99 # discount for reward
):
"""
take a list of immediate rewards r(s,a) for the whole session
compute cumulative rewards R(s,a) (a.k.a. G(s,a) in Sutton '16)
R_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
The simple way to compute cumulative rewards is to iterate from last to first time tick
and compute R_t = r_t + gamma*R_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
#<your code here>
cumulative_rewards = np.zeros((len(rewards)))
cumulative_rewards[-1] = rewards[-1]
for t in range(len(rewards)-2, -1, -1):
cumulative_rewards[t] = rewards[t] + gamma * cumulative_rewards[t + 1]
return cumulative_rewards #< array of cumulative rewards>
assert len(get_cumulative_rewards(range(100))) == 100
assert np.allclose(get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9),
[1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(get_cumulative_rewards([0, 0, 1, -2, 3, -4, 0], gamma=0.5),
[0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(get_cumulative_rewards([0, 0, 1, 2, 3, 4, 0], gamma=0),
[0, 0, 1, 2, 3, 4, 0])
print("looks good!")
def train_step(_states, _actions, _rewards):
"""given full session, trains agent with policy gradient"""
_cumulative_rewards = get_cumulative_rewards(_rewards)
update.run({states: _states, actions: _actions,
cumulative_rewards: _cumulative_rewards})
"""
Explanation: Computing cumulative rewards
End of explanation
"""
def generate_session(t_max=1000):
"""play env with REINFORCE agent and train at the session end"""
# arrays to record session
states, actions, rewards = [], [], []
s = env.reset()
for t in range(t_max):
# action probabilities array aka pi(a|s)
action_probas = get_action_proba(s)
a = np.random.choice(a=len(action_probas), p=action_probas) #<pick random action using action_probas>
new_s, r, done, info = env.step(a)
# record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done:
break
train_step(states, actions, rewards)
# technical: return session rewards to print them later
return sum(rewards)
s = tf.InteractiveSession()
s.run(tf.global_variables_initializer())
for i in range(100):
rewards = [generate_session() for _ in range(100)] # generate new sessions
print("mean reward:%.3f" % (np.mean(rewards)))
if np.mean(rewards) > 300:
print("You Win!") # but you can train even further
break
"""
Explanation: Playing the game
End of explanation
"""
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(gym.make("CartPole-v0"),
directory="videos", force=True)
sessions = [generate_session() for _ in range(100)]
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) # this may or may not be _last_ video. Try other indices
from submit import submit_cartpole
submit_cartpole(generate_session, "tonatiuh_rangel@hotmail.com", "Cecc5rcVxaVUYtsQ")
# That's all, thank you for your attention!
# Not having enough? There's an actor-critic waiting for you in the honor section.
# But make sure you've seen the videos first.
"""
Explanation: Results & video
End of explanation
"""
|
cchwala/pycomlink | notebooks/outdated_notebooks/Use CML data from CSV file.ipynb | bsd-3-clause | df = pd.read_csv('example_data/gap0_gap4_2012.csv', parse_dates=True, index_col=0)
df.head()
"""
Explanation: All the work you do with pycomlink will be based on the Comlink object, which represents one CML between two sites and with an arbitrary number of channels, i.e. the different connections between the two sites, typically one for each direction.
To get a Comlink object from you raw data which is probably in a CSV file, do the following
Read in the CSV file into a DataFrame using the Python package pandas
Reformat the DataFrame according to the conventions of pycomlink
Prepare the necessary metadata for the ComlinkChannels and the Comlink object
Build ComlinkChannel objects for each channel, i.e. each pair of TX and RX time series that belong to one CML
Build a Comlink from the channels
Then you are set to go and use all the pycomlink functionality.
Read in CML data from CSV file
Use the fantastic pandas CSV reader. In this case the time stamps are in the first column, hence we set index_col=0; they can automatically be parsed to datetime objects, hence we set parse_dates=True.
End of explanation
"""
# Rename the columns for the RX level
df.columns = ['rx']
df
# Add a constant TX level
df['tx'] = 20
df
"""
Explanation: pycomlink expects a fixed naming convention for the data in the DataFrames. The columns have to be named rx and tx. Hence, rename rsl here to rx and add a column with the constant tx level, which was 20 dBm in this case. Please note that you always have to provide the tx level even if it is constant all the time. You can specify that TX is constant by passing atpc='off'.
End of explanation
"""
ch_metadata = {
'frequency': 18.7 * 1e9, # Frequency in Hz
'polarization': 'V',
'channel_id': 'channel_xy',
'atpc': 'off'} # This means that TX level is constant
cml_metadata = {
'site_a_latitude': 50.50, # Some fake coordinates
'site_a_longitude': 11.11,
'site_b_latitude': 50.59,
'site_b_longitude': 11.112,
'cml_id': 'XY_1234'}
"""
Explanation: Prepare the necessary metadata
End of explanation
"""
cml_ch = pycml.ComlinkChannel(df, metadata=ch_metadata)
cml_ch
"""
Explanation: Build a ComlinkChannel object
End of explanation
"""
cml = pycml.Comlink(channels=cml_ch, metadata=cml_metadata)
"""
Explanation: Build a Comlink object with the one channel from above
End of explanation
"""
cml
cml.plot_data(['rx']);
cml.plot_map()
"""
Explanation: Look at the contents of the CML
End of explanation
"""
cml_ch_1 = pycml.ComlinkChannel(df, metadata=ch_metadata)
df.rx = df.rx - 1.3
cml_ch_2 = pycml.ComlinkChannel(df, metadata=ch_metadata)
cml = pycml.Comlink(channels=[cml_ch_1, cml_ch_2], metadata=cml_metadata)
cml
cml.plot_data();
"""
Explanation: In case your CML has several channels, you can pass a list of channels
End of explanation
"""
cml.process.wet_dry.std_dev(window_length=100, threshold=0.3)
cml.process.baseline.linear()
cml.process.baseline.calc_A()
cml.process.A_R.calc_R()
cml.plot_data(['rx', 'wet', 'R']);
"""
Explanation: Run typical processing
(see other notebooks for more details on this)
End of explanation
"""
|
jegibbs/phys202-2015-work | assignments/assignment05/InteractEx01.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
from IPython.html import widgets
"""
Explanation: Interact Exercise 01
Import
End of explanation
"""
def print_sum(a, b):
"""Print the sum of the arguments a and b."""
    print(a + b)
"""
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
"""
interact(print_sum, a=(widgets.FloatSlider(min=-10.,max=10.,step=0.1,value=0)), b=(widgets.IntSlider(min=-8,max=8,step=2,value=0)));
assert True # leave this for grading the print_sum exercise
"""
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
End of explanation
"""
def print_string(s, length=False):
"""Print the string s and optionally its length."""
print(s)
if length==True:
print(len(s))
"""
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
"""
interact(print_string, s="Hello World!", length=True);
assert True # leave this for grading the print_string exercise
"""
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
End of explanation
"""
|
mehmetcanbudak/JupyterWorkflow | JupyterWorkflow3.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use("seaborn")
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
data.head()
data.resample("W").sum().plot()
data.groupby(data.index.time).mean().plot()
pivoted = data.pivot_table("Total", index=data.index.time, columns=data.index.date)
pivoted.iloc[:5, :5]
pivoted.plot(legend=False, alpha=0.01)
"""
Explanation: JupyterWorkflow3
From exploratory analysis to reproducible research
Mehmetcan Budak
End of explanation
"""
#get_fremont_data?
"""
Explanation: SECOND PART
Turn the analysis into a Python package so that we and other people can reuse it.
From the project directory:

    mkdir jupyterworkflow              # create a package directory
    touch jupyterworkflow/__init__.py  # initialize it as a Python package

Then create a data.py in this directory:

    import os
    from urllib.request import urlretrieve

    import pandas as pd

    FREMONT_URL = "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD"

    # only download the data when we need to (e.g. on the first run)
    def get_fremont_data(filename="Fremont.csv", url=FREMONT_URL, force_download=False):
        """Download and cache the Fremont data.

        Parameters
        ----------
        filename : string (optional)
            location to save the data
        url : string (optional)
            web location of the data
        force_download : bool (optional)
            if True, force redownload of data

        Returns
        -------
        data : pandas.DataFrame
            The Fremont bridge data
        """
        if force_download or not os.path.exists(filename):
            urlretrieve(url, filename)
        data = pd.read_csv(filename, index_col="Date", parse_dates=True)
        data.columns = ["West", "East"]
        data["Total"] = data["West"] + data["East"]
        return data
End of explanation
"""
|
mbeyeler/opencv-machine-learning | notebooks/05.02-Using-Decision-Trees-to-Diagnose-Breast-Cancer.ipynb | mit | from sklearn import datasets
data = datasets.load_breast_cancer()
"""
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Building Our First Decision Tree | Contents | Using Decision Trees for Regression >
Using Decision Trees to Diagnose Breast Cancer
Now that we have built our first decision trees, it's time to turn our attention to a real dataset: The Breast Cancer Wisconsin dataset https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic).
In order to make the task feasible, the researchers performed feature extraction on the images, like we did in Chapter 4, Representing Data and Engineering Features. They went through a total of 569 images, and extracted 30 different features that describe the characteristics of the cell nuclei present in the images, including:
cell nucleus texture (represented by the standard deviation of the gray-scale values)
cell nucleus size (calculated as the mean of distances from center to points on the perimeter)
tissue smoothness (local variation in radius lengths)
tissue compactness
The goal of the research was then to classify tissue samples into benign and malignant (a binary classification task).
Loading the dataset
The full dataset is part of Scikit-Learn's example datasets:
End of explanation
"""
data.data.shape
"""
Explanation: As in previous examples, all data is contained in a 2-D feature matrix data.data, where the rows represent data samples, and the columns are the feature values:
End of explanation
"""
data.feature_names
"""
Explanation: With a look at the provided feature names, we recognize some that we mentioned above:
End of explanation
"""
data.target_names
"""
Explanation: Since this is a binary classification task, we expect to find exactly two target names:
End of explanation
"""
import sklearn.model_selection as ms
X_train, X_test, y_train, y_test = ms.train_test_split(data.data, data.target, test_size=0.2, random_state=42)
X_train.shape, X_test.shape
"""
Explanation: Let's split the dataset into training and test sets using a healthy 80-20 split:
End of explanation
"""
from sklearn import tree
dtc = tree.DecisionTreeClassifier(random_state=42)
dtc.fit(X_train, y_train)
"""
Explanation: Building the decision tree
End of explanation
"""
dtc.score(X_train, y_train)
"""
Explanation: Since we did not specify any pre-pruning parameters, we would expect this decision tree to grow quite large and result in a perfect score on the training set:
End of explanation
"""
dtc.score(X_test, y_test)
with open("tree.dot", 'w') as f:
f = tree.export_graphviz(dtc, out_file=f,
feature_names=data.feature_names,
class_names=data.target_names)
"""
Explanation: However, to our surprise, the test score is not too shabby either:
End of explanation
"""
import numpy as np
max_depths = np.array([1, 2, 3, 5, 7, 9, 11])
"""
Explanation: Now we want to do some model exploration. For example, we mentioned above that the depth of a tree influences its performance. If we wanted to study this dependency more systematically, we could repeat building the tree for different values of max_depth:
End of explanation
"""
train_score = []
test_score = []
for d in max_depths:
dtc = tree.DecisionTreeClassifier(max_depth=d, random_state=42)
dtc.fit(X_train, y_train)
train_score.append(dtc.score(X_train, y_train))
test_score.append(dtc.score(X_test, y_test))
"""
Explanation: For each of these values, we want to run the full model cascade from start to finish. We also want to record the train and test scores. We do this in a for loop:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.figure(figsize=(10, 6))
plt.plot(max_depths, train_score, 'o-', linewidth=3, label='train')
plt.plot(max_depths, test_score, 's-', linewidth=3, label='test')
plt.xlabel('max_depth')
plt.ylabel('score')
plt.ylim(0.85, 1.1)
plt.legend()
"""
Explanation: We can plot the scores as a function of the tree depth using Matplotlib:
End of explanation
"""
train_score = []
test_score = []
min_samples = np.array([2, 4, 8, 16, 32])
for s in min_samples:
dtc = tree.DecisionTreeClassifier(min_samples_leaf=s, random_state=42)
dtc.fit(X_train, y_train)
train_score.append(dtc.score(X_train, y_train))
test_score.append(dtc.score(X_test, y_test))
"""
Explanation: Let's do one more. What about the minimum numbers of samples required to make a node a leaf node?
We repeat the procedure from above:
End of explanation
"""
plt.figure(figsize=(10, 6))
plt.plot(min_samples, train_score, 'o-', linewidth=3, label='train')
plt.plot(min_samples, test_score, 's-', linewidth=3, label='test')
plt.xlabel('min_samples_leaf')
plt.ylabel('score')
plt.ylim(0.9, 1)
plt.legend()
"""
Explanation: This leads to a plot that looks quite different from the one before:
End of explanation
"""
|
cerinunn/pdart | getting_started.ipynb | lgpl-3.0 |
%pylab inline
from __future__ import (absolute_import, division, print_function,
unicode_literals)
from future.builtins import * # NOQA
from datetime import timedelta
from obspy.core import read
from obspy.core.utcdatetime import UTCDateTime
from obspy.core.inventory import read_inventory
import numpy as np
from obspy.clients.fdsn.client import Client
import pdart.auth as auth
from pdart.util import linear_interpolation, timing_correction
from pdart.extra_plots.plot_timing_divergence import plot_timing
import matplotlib
from matplotlib import pyplot as plt
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = 10, 4
plt.rcParams['lines.linewidth'] = 0.5
plt.rcParams['font.size'] = 12
SECONDS_PER_DAY=3600.*24
def raw_seismogram():
"""View a raw seismogram.
"""
user=auth.user
auth_password=auth.auth_password
if user == '' or auth_password == '':
print('Set user and auth_password in auth.py')
return
starttime= UTCDateTime('1973-03-13T07:30:00.0')
endtime = UTCDateTime('1973-03-13T09:30:00.0')
network='XA'
station='S14'
channel='MH1'
location='*'
client = Client("IRIS",user=user,password=auth_password)
print(client)
stream = client.get_waveforms(network=network, station=station, channel=channel, location=location, starttime=starttime, endtime=endtime)
stream.plot(equal_scale=False,size=(1000,600),method='full')
raw_seismogram()
"""
Explanation: Getting Started with the Apollo Passive Seismic Data Archive
Nunn, C., Nakamura, Y., Kedar, S., Panning, M.P., submitted to Planetary Science Journal. A New Archive of Apollo’s Lunar Seismic Data.
The data are archived with the following DOI: https://doi.org/10.7914/SN/XA_1969.
1) Connecting to IRIS and viewing a raw seismogram
2) Plotting a seismogram with the instrument response removed
3) Viewing the instrument response for a peaked mid-period seismogram.
4) Viewing the timing divergence
5) Correcting for the timing divergence
End of explanation
"""
def view_Apollo(stream=None,starttime= UTCDateTime('1973-03-13T07:30:00.0'),endtime = UTCDateTime('1973-03-13T09:30:00.0'),
network='XA',station='S14',channel='MH1',location='*',plot_seismogram=True,plot_response=False):
"""Snippet to read in raw seismogram and remove the instrument response for Apollo.
"""
user=auth.user
auth_password=auth.auth_password
if user == '' or auth_password == '':
print('Set user and auth_password in auth.py')
return
client = Client("IRIS",user=user,password=auth_password)
# get the response file (wildcards allowed)
inv = client.get_stations(starttime=starttime, endtime=endtime,
network=network, sta=station, loc=location, channel=channel,
level="response")
if stream is None:
stream = client.get_waveforms(network=network, station=station, channel=channel, location=location, starttime=starttime, endtime=endtime)
else:
stream.trim(starttime=starttime,endtime=endtime)
for tr in stream:
# interpolate across the gaps of one sample
linear_interpolation(tr,interpolation_limit=1)
stream.merge()
for tr in stream:
# optionally interpolate across any gap
# for removing the instrument response from a seismogram,
# it is useful to get a mask, then interpolate across the gaps,
# then mask the trace again.
if tr.stats.channel in ['MH1', 'MH2', 'MHZ']:
# add linear interpolation but keep the original mask
original_mask = linear_interpolation(tr,interpolation_limit=None)
# remove the instrument response
pre_filt = [0.1,0.3,0.9,1.1]
tr.remove_response(inventory=inv, pre_filt=pre_filt, output="DISP",
water_level=None, plot=plot_response)
if plot_response:
plt.show()
# apply the mask back to the trace
tr.data = np.ma.masked_array(tr, mask=original_mask)
elif tr.stats.channel in ['SHZ']:
# add linear interpolation but keep the original mask
original_mask = linear_interpolation(tr,interpolation_limit=None)
# remove the instrument response
pre_filt = [1,2,11,13]
tr.remove_response(inventory=inv, pre_filt=pre_filt, output="DISP",
water_level=None, plot=plot_response)
if plot_response:
plt.show()
# apply the mask back to the trace
tr.data = np.ma.masked_array(tr, mask=original_mask)
if plot_seismogram:
stream.plot(equal_scale=False,size=(1000,600),method='full')
view_Apollo()
view_Apollo(plot_seismogram=False,plot_response=True)
def view_timing_divergence(starttime= UTCDateTime('1973-06-30T00:00:00.00000Z'),
endtime = UTCDateTime('1973-07-01T00:00:00.00000Z'),network='XA',
station='*',channel='ATT',location='*'):
user=auth.user
auth_password=auth.auth_password
if user == '' or auth_password == '':
print('Set user and auth_password in auth.py')
return
client = Client("IRIS",user=user,password=auth_password)
stream = client.get_waveforms(network=network, station=station, channel=channel, location=location, starttime=starttime, endtime=endtime)
plot_timing(stream=stream, start_time=starttime,end_time=endtime,save_fig=False)
view_timing_divergence()
"""
Explanation: Notice that the raw seismogram is:
1) In digital units
2) That the gaps in the trace are replaced by -1. This is for performance reasons, especially for the short-period channel SHZ.
Now we plot the seismogram with the -1 values replaced.
If there is only one missing value, we ask for it to be interpolated. But if there is more than one missing value,
we mark it as a gap on the trace.
The missing data must be dealt with before removing the instrument response.
End of explanation
"""
starttime= UTCDateTime('1973-03-13T07:30:00.0')
def get_traces():
"""Get the traces
"""
user=auth.user
auth_password=auth.auth_password
if user == '' or auth_password == '':
print('Set user and auth_password in auth.py')
return
starttime= UTCDateTime('1973-03-13T00:00:00.0')
endtime = UTCDateTime('1973-03-14T00:00:00.0')
network='XA'
station='*'
channel='*'
location='*'
client = Client("IRIS",user=user,password=auth_password)
print(client)
stream = client.get_waveforms(network=network, station=station, channel=channel, location=location, starttime=starttime, endtime=endtime)
return stream
stream_before = get_traces()
print(stream_before)
# plot the timing divergence before correction
plot_timing(stream=stream_before, start_time=UTCDateTime('1973-03-13T00:00:00.0'),end_time=UTCDateTime('1973-03-13T12:00:00.0'),save_fig=False)
stream_after = stream_before.copy()
correction_time=UTCDateTime('1973-03-13T08:02:00.0')
timing_correction(stream_after,correction_time=correction_time)
# timing divergence after correction
plot_timing(stream=stream_after, start_time=UTCDateTime('1973-03-13T00:00:00.0'),end_time=UTCDateTime('1973-03-13T12:00:00.0'),save_fig=False)
"""
Explanation: In the next section, we will make a correction for the timing divergence. Taking the approximate onset time of the event (1973-03-13T07:30:00.0), we will shift the timings slightly. This will be useful when comparing the onset times.
End of explanation
"""
print('End of Notebook')
"""
Explanation: The previous image shows the extent of the timing divergence. In many situations it may be necessary to do more than just correct the start times; instead, the data should be reinterpolated.
The real onset time will also be affected by the divergence between the sampling time and the timestamp.
End of explanation
"""
|
harmsm/pythonic-science | chapters/06_image-analysis/00_intro-to-images.ipynb | unlicense | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
"""
Explanation: Manipulating Images in Python
End of explanation
"""
from PIL import Image
"""
Explanation: The Python Imaging Library (PIL)
End of explanation
"""
img = Image.open("img/colonies.jpg")
plt.imshow(img)
"""
Explanation: You can load images as Image instances
End of explanation
"""
arr = np.array(img)
print("x,y,RGB ->",arr.shape)
"""
Explanation: Image instances can be interconverted with numpy arrays
End of explanation
"""
arr = np.array(img)
plt.imshow(arr)
plt.show()
arr[:,:,1] = 255
plt.imshow(arr)
"""
Explanation: Values are represented for different channels in bits
8-bit channels have $2^8 = 256$ possible values.
16-bit channels have $2^{16} = 65,6536$ possible values
Major formats you'll encounter
8-bit grayscale (1 channel, 0-255).
8-bit color (3 channels, each 0-255)
8-bit RGBA (4 channels, each 0-255)
16-bit grayscale (1 channel, 0-65535)
16-bit color (3 channels, each 0-65535)
16-bit RGBA (4 channels, each 0-65535)
By convention, channels are arranged:
Red, Green, Blue, Transparency
What kind of image did we just read in?
What does the following code do?
End of explanation
"""
x = Image.fromarray(arr)
x.save('junk.png')
"""
Explanation: Can convert arrays back to images and save
End of explanation
"""
arr = np.array(img)
low_red = arr[:,:,0] < 50
arr[:,:,0] = low_red*255
arr[:,:,1] = low_red*255
arr[:,:,2] = low_red*255
plt.show()
plt.imshow(arr)
"""
Explanation: One powerful method is to create a mask
End of explanation
"""
print("True and True =", bool(True*True))
print("False and True =", bool(False*True))
print("Flase and False =",bool(False*False))
print("True or False =",bool(True + False))
print("True or True =",bool(True + True))
print("False or False =",bool(False + False))
"""
Explanation: You can combine masks with "*" (and) and "+" (or)
Predict the output of the next cell.
End of explanation
"""
arr = np.array(img)
red = arr[:,:,0] < 2
green = arr[:,:,1] > 5
blue = arr[:,:,2] < 5
mask = red*green*blue
new_arr = np.zeros(arr.shape,dtype=arr.dtype)
new_arr[:,:,1] = mask*255
plt.imshow(new_arr)  # show the masked result, not the original image
"""
Explanation: Combined mask
What will the following code do?
End of explanation
"""
img = Image.open("img/colonies.jpg")
arr = np.array(img)
mask = arr[:,:,0] > 20
arr[mask,0] = 0
arr[mask,1] = 0
arr[mask,2] = 0
plt.imshow(arr)
img2 = Image.fromarray(arr)
img2.save("junk.png")
"""
Explanation: Find pixels with some green but little red or blue, then set those to green. All other pixels are black.
Write a block of code that reads in the image, sets any pixel with red > 20 to (0,0,0), and then writes out a new .png image.
End of explanation
"""
|
joelgrus/codefellows-data-science-week | ipynb/matplotlib.ipynb | unlicense | import matplotlib.pyplot as plt
"""
Explanation: When working with matplotlib we usually do
End of explanation
"""
%matplotlib inline
"""
Explanation: and then some magic to get plots to show up here
End of explanation
"""
plt.plot([1,3],[2,4])
plt.title("This is just a sample graph")
plt.xlabel("This is just an x-axis")
plt.ylabel("This is just a y-axis")
plt.bar(range(10), np.random.rand(10))
plt.title("A random bar chart")
xs = range(10)
plt.scatter(xs, 5 * np.random.rand(10) + xs, color='r', marker='*', label='series1')
plt.scatter(xs, 5 * np.random.rand(10) + xs, color='g', marker='o', label='series2')
plt.title("A scatterplot with two series")
plt.legend(loc=9)
"""
Explanation: Which we can check:
End of explanation
"""
weather = pd.read_table('daily_weather.tsv', parse_dates=['date'])
stations = pd.read_table('stations.tsv')
usage = pd.read_table('usage_2012.tsv', parse_dates=['time_start', 'time_end'])
weather.index = pd.DatetimeIndex(weather['date'])
weather.season_desc = weather.season_desc.map({'Spring' : 'Winter', 'Winter' : 'Fall', 'Fall' : 'Summer', 'Summer' : 'Spring' })
"""
Explanation: Now let's pull in our bike share data
End of explanation
"""
plt.scatter(weather.index, weather.temp)
"""
Explanation: We can now plot the temperature across the year:
End of explanation
"""
plt.scatter(weather.humidity, weather.temp)
"""
Explanation: Or look at the scatterplot of temperature and humidity:
End of explanation
"""
plt.scatter(weather.temp, weather.total_riders)
"""
Explanation: Or look at the scatter between the number of riders and temperature:
End of explanation
"""
for season, color in zip(['Winter','Spring','Summer','Fall'],['blue','green','orange','brown']):
temps = weather[weather.season_desc == season].temp
riders = weather[weather.season_desc == season].total_riders
plt.scatter(temps, riders, color=color, label=season)
plt.legend(loc=4)
plt.ylim([0, 10000])
plt.xlabel("temperature")
plt.ylabel("# of riders")
"""
Explanation: Let's break that down by season. That gives us a good example of mixing vanilla Python code with matplotlib code:
End of explanation
"""
from pandas.plotting import scatter_matrix
scatter_matrix(weather[['temp', 'humidity', 'windspeed', 'total_riders']])
"""
Explanation: Scatterplot matrix
End of explanation
"""
weather['temp'].hist()
weather['temp'].plot()
avg_daily_trips = usage.groupby('station_start').size() / 365
trips = DataFrame({ 'avg_daily_trips' : avg_daily_trips })
station_geos = stations[['station', 'lat', 'long']]
trips_by_geo = pd.merge(station_geos, trips, left_on='station', right_index=True)
trips_by_geo
plt.scatter(trips_by_geo['long'], trips_by_geo['lat'], s=trips_by_geo['avg_daily_trips'])
"""
Explanation: You can also call plots directly on the dataframes (or series) themselves:
End of explanation
"""
|
jmschrei/pomegranate | benchmarks/pomegranate_vs_hmmlearn.ipynb | mit | %pylab inline
import hmmlearn, pomegranate, time, seaborn
from hmmlearn.hmm import *
from pomegranate import *
seaborn.set_style('whitegrid')
"""
Explanation: pomegranate / hmmlearn comparison
<a href="https://github.com/hmmlearn/hmmlearn">hmmlearn</a> is a Python module for hidden markov models with a scikit-learn like API. It was originally present in scikit-learn until its removal due to structural learning not meshing well with the API of many other classical machine learning algorithms. Here is a table highlighting some of the similarities and differences between the two packages.
<table>
<tr>
<th>Feature</th>
<th>pomegranate</th>
<th>hmmlearn</th>
</tr>
<tr>
<th>Graph Structure</th>
<th></th>
<th></th>
</tr>
<tr>
<td>Silent States</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Optional Explicit End State</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Sparse Implementation</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Arbitrary Emissions Allowed on States</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Discrete/Gaussian/GMM Emissions</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Large Library of Other Emissions</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Build Model from Matrices</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Build Model Node-by-Node</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Serialize to JSON</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Serialize using Pickle/Joblib</td>
<td></td>
<td>✓</td>
</tr>
<tr>
<th>Algorithms</th>
<th></th>
<th></th>
</tr>
<tr>
<td>Priors</td>
<td></td>
<td>✓</td>
</tr>
<tr>
<td>Sampling</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Log Probability Scoring</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Forward-Backward Emissions</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Forward-Backward Transitions</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Viterbi Decoding</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>MAP Decoding</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Baum-Welch Training</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Viterbi Training</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Labeled Training</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Tied Emissions</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Tied Transitions</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Emission Inertia</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Transition Inertia</td>
<td>✓</td>
<td></td>
</tr>
<tr>
<td>Emission Freezing</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Transition Freezing</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td>Multi-threaded Training</td>
<td>✓</td>
<td>Coming Soon</td>
</tr>
</table>
</p>
Just because the two features are implemented doesn't speak to how fast they are. Below we investigate how fast the two packages are in different settings the two have implemented.
Fully Connected Graphs with Multivariate Gaussian Emissions
Lets look at the sample scoring method, viterbi, and Baum-Welch training for fully connected graphs with multivariate Gaussian emisisons. A fully connected graph is one where all states have connections to all other states. This is a case which pomegranate is expected to do poorly due to its sparse implementation, and hmmlearn should shine due to its vectorized implementations.
End of explanation
"""
print "hmmlearn version {}".format(hmmlearn.__version__)
print "pomegranate version {}".format(pomegranate.__version__)
"""
Explanation: Both hmmlearn and pomegranate are under active development. Here are the current versions of the two packages.
End of explanation
"""
def initialize_components(n_components, n_dims, n_seqs):
"""
Initialize a transition matrix for a model with a fixed number of components,
for Gaussian emissions with a certain number of dimensions, and a data set
with a certain number of sequences.
"""
transmat = numpy.abs(numpy.random.randn(n_components, n_components))
transmat = (transmat.T / transmat.sum( axis=1 )).T
start_probs = numpy.abs( numpy.random.randn(n_components) )
start_probs /= start_probs.sum()
means = numpy.random.randn(n_components, n_dims)
covars = numpy.ones((n_components, n_dims))
seqs = numpy.zeros((n_seqs, n_components, n_dims))
for i in range(n_seqs):
seqs[i] = means + numpy.random.randn(n_components, n_dims)
return transmat, start_probs, means, covars, seqs
"""
Explanation: First we need a function that randomly generates a transition matrix and emission parameters for the hidden Markov model, and randomly generates sequences which fit the model.
End of explanation
"""
def hmmlearn_model(transmat, start_probs, means, covars):
"""Return a hmmlearn model."""
model = GaussianHMM(n_components=transmat.shape[0], covariance_type='diag', n_iter=1, tol=1e-8)
model.startprob_ = start_probs
model.transmat_ = transmat
model.means_ = means
model._covars_ = covars
return model
"""
Explanation: Let's create the model in hmmlearn. It's fairly straightforward: only a few attributes need to be overridden with the known structure and emissions.
End of explanation
"""
def pomegranate_model(transmat, start_probs, means, covars):
"""Return a pomegranate model."""
states = [ MultivariateGaussianDistribution( means[i], numpy.eye(means.shape[1]) ) for i in range(transmat.shape[0]) ]
model = HiddenMarkovModel.from_matrix(transmat, states, start_probs, merge='None')
return model
"""
Explanation: Now let's create the model in pomegranate, which is also fairly straightforward. The biggest difference is creating explicit distribution objects rather than passing in vectors, and passing everything into a function instead of overriding attributes. This is done because each state in the graph can be a different distribution, and many distributions are supported.
End of explanation
"""
def evaluate_models(n_dims, n_seqs):
hllp, plp = [], []
hlv, pv = [], []
hlm, pm = [], []
hls, ps = [], []
hlt, pt = [], []
for i in range(10, 112, 10):
transmat, start_probs, means, covars, seqs = initialize_components(i, n_dims, n_seqs)
model = hmmlearn_model(transmat, start_probs, means, covars)
tic = time.time()
for seq in seqs:
model.score(seq)
hllp.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict(seq)
hlv.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict_proba(seq)
hlm.append( time.time() - tic )
tic = time.time()
model.fit(seqs.reshape(n_seqs*i, n_dims), lengths=[i]*n_seqs)
hlt.append( time.time() - tic )
model = pomegranate_model(transmat, start_probs, means, covars)
tic = time.time()
for seq in seqs:
model.log_probability(seq)
plp.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict(seq)
pv.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict_proba(seq)
pm.append( time.time() - tic )
tic = time.time()
model.fit(seqs, max_iterations=1, verbose=False)
pt.append( time.time() - tic )
plt.figure( figsize=(12, 8))
plt.xlabel("# Components", fontsize=12 )
plt.ylabel("pomegranate is x times faster", fontsize=12 )
plt.plot( numpy.array(hllp) / numpy.array(plp), label="Log Probability")
plt.plot( numpy.array(hlv) / numpy.array(pv), label="Viterbi")
plt.plot( numpy.array(hlm) / numpy.array(pm), label="Maximum A Posteriori")
plt.plot( numpy.array(hlt) / numpy.array(pt), label="Training")
plt.xticks(range(11), range(10, 112, 10), fontsize=12)
plt.yticks( fontsize=12 )
plt.legend( fontsize=12 )
evaluate_models(10, 50)
"""
Explanation: Let's now compare some algorithm times.
End of explanation
"""
def initialize_components(n_components, n_dims, n_seqs):
"""
Initialize a transition matrix for a model with a fixed number of components,
for Gaussian emissions with a certain number of dimensions, and a data set
with a certain number of sequences.
"""
transmat = numpy.zeros((n_components, n_components))
transmat[-1, -1] = 1
for i in range(n_components-1):
transmat[i, i] = 1
transmat[i, i+1] = 1
transmat[ transmat < 0 ] = 0
transmat = (transmat.T / transmat.sum( axis=1 )).T
start_probs = numpy.abs( numpy.random.randn(n_components) )
start_probs /= start_probs.sum()
means = numpy.random.randn(n_components, n_dims)
covars = numpy.ones((n_components, n_dims))
seqs = numpy.zeros((n_seqs, n_components, n_dims))
for i in range(n_seqs):
seqs[i] = means + numpy.random.randn(n_components, n_dims)
return transmat, start_probs, means, covars, seqs
evaluate_models(10, 50)
"""
Explanation: It looks like pomegranate and hmmlearn perform approximately the same on large (>30 components) dense graphs for the forward algorithm (log probability), MAP, and training. However, hmmlearn is significantly faster at calculating the Viterbi path, while pomegranate is faster for smaller (<30 components) graphs.
Sparse Graphs with Multivariate Gaussian Emissions
pomegranate is built on a sparse implementation and so excels on sparse graphs. Let's try a model architecture where each hidden state has transitions only to itself and to the next state, running the same algorithms as last time.
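As a quick aside (not part of the original benchmark), here is a minimal sketch of what this "self plus next state" transition matrix looks like for a tiny four-state model, mirroring the generation logic used below:

```python
import numpy

# Left-to-right structure: each state transitions to itself and its successor.
n_components = 4
transmat = numpy.zeros((n_components, n_components))
transmat[-1, -1] = 1
for i in range(n_components - 1):
    transmat[i, i] = 1      # self-transition
    transmat[i, i + 1] = 1  # transition to the next state
transmat = (transmat.T / transmat.sum(axis=1)).T  # row-normalize

print(transmat)
# Only 2*n - 1 of the n*n entries are non-zero, which is exactly the
# kind of sparsity that pomegranate's implementation exploits.
```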
End of explanation
"""
def initialize_components(n_components, n_seqs):
"""
Initialize a transition matrix for a model with a fixed number of components,
for discrete emissions over a four-letter alphabet, and a data set
with a certain number of sequences.
"""
transmat = numpy.zeros((n_components, n_components))
transmat[-1, -1] = 1
for i in range(n_components-1):
transmat[i, i] = 1
transmat[i, i+1] = 1
transmat[ transmat < 0 ] = 0
transmat = (transmat.T / transmat.sum( axis=1 )).T
start_probs = numpy.abs( numpy.random.randn(n_components) )
start_probs /= start_probs.sum()
dists = numpy.abs(numpy.random.randn(n_components, 4))
dists = (dists.T / dists.T.sum(axis=0)).T
seqs = numpy.random.randint(0, 4, (n_seqs, n_components*2, 1))
return transmat, start_probs, dists, seqs
def hmmlearn_model(transmat, start_probs, dists):
"""Return a hmmlearn model."""
model = MultinomialHMM(n_components=transmat.shape[0], n_iter=1, tol=1e-8)
model.startprob_ = start_probs
model.transmat_ = transmat
model.emissionprob_ = dists
return model
def pomegranate_model(transmat, start_probs, dists):
"""Return a pomegranate model."""
states = [ DiscreteDistribution({ 'A': d[0],
'C': d[1],
'G': d[2],
'T': d[3] }) for d in dists ]
model = HiddenMarkovModel.from_matrix(transmat, states, start_probs, merge='None')
return model
def evaluate_models(n_seqs):
hllp, plp = [], []
hlv, pv = [], []
hlm, pm = [], []
hls, ps = [], []
hlt, pt = [], []
dna = 'ACGT'
for i in range(10, 112, 10):
transmat, start_probs, dists, seqs = initialize_components(i, n_seqs)
model = hmmlearn_model(transmat, start_probs, dists)
tic = time.time()
for seq in seqs:
model.score(seq)
hllp.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict(seq)
hlv.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict_proba(seq)
hlm.append( time.time() - tic )
tic = time.time()
model.fit(seqs.reshape(n_seqs*i*2, 1), lengths=[i*2]*n_seqs)
hlt.append( time.time() - tic )
model = pomegranate_model(transmat, start_probs, dists)
seqs = [[dna[i] for i in seq] for seq in seqs]
tic = time.time()
for seq in seqs:
model.log_probability(seq)
plp.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict(seq)
pv.append( time.time() - tic )
tic = time.time()
for seq in seqs:
model.predict_proba(seq)
pm.append( time.time() - tic )
tic = time.time()
model.fit(seqs, max_iterations=1, verbose=False)
pt.append( time.time() - tic )
plt.figure( figsize=(12, 8))
plt.xlabel("# Components", fontsize=12 )
plt.ylabel("pomegranate is x times faster", fontsize=12 )
plt.plot( numpy.array(hllp) / numpy.array(plp), label="Log Probability")
plt.plot( numpy.array(hlv) / numpy.array(pv), label="Viterbi")
plt.plot( numpy.array(hlm) / numpy.array(pm), label="Maximum A Posteriori")
plt.plot( numpy.array(hlt) / numpy.array(pt), label="Training")
plt.xticks(range(11), range(10, 112, 10), fontsize=12)
plt.yticks( fontsize=12 )
plt.legend( fontsize=12 )
evaluate_models(50)
"""
Explanation: Sparse Graph with Discrete Emissions
Let's also compare MultinomialHMM to a pomegranate HMM with discrete emissions for completeness.
End of explanation
"""
|
Kaggle/learntools | notebooks/geospatial/raw/ex4.ipynb | apache-2.0 | import math
import pandas as pd
import geopandas as gpd
#from geopy.geocoders import Nominatim # What you'd normally run
from learntools.geospatial.tools import Nominatim # Just for this exercise
import folium
from folium import Marker
from folium.plugins import MarkerCluster
from learntools.core import binder
binder.bind(globals())
from learntools.geospatial.ex4 import *
"""
Explanation: Introduction
You are a Starbucks big data analyst (that's a real job!) looking to find the next store to convert into a Starbucks Reserve Roastery. These roasteries are much larger than a typical Starbucks store and have several additional features, including various food and wine options, along with upscale lounge areas. You'll investigate the demographics of various counties in the state of California to determine potentially suitable locations.
<center>
<img src="https://i.imgur.com/BIyE6kR.png" width="450"><br/><br/>
</center>
Before you get started, run the code cell below to set everything up.
End of explanation
"""
def embed_map(m, file_name):
from IPython.display import IFrame
m.save(file_name)
return IFrame(file_name, width='100%', height='500px')
"""
Explanation: You'll use the embed_map() function from the previous exercise to visualize your maps.
End of explanation
"""
# Load and preview Starbucks locations in California
starbucks = pd.read_csv("../input/geospatial-learn-course-data/starbucks_locations.csv")
starbucks.head()
"""
Explanation: Exercises
1) Geocode the missing locations.
Run the next code cell to create a DataFrame starbucks containing Starbucks locations in the state of California.
End of explanation
"""
# How many rows in each column have missing values?
print(starbucks.isnull().sum())
# View rows with missing locations
rows_with_missing = starbucks[starbucks["City"]=="Berkeley"]
rows_with_missing
"""
Explanation: Most of the stores have known (latitude, longitude) locations. But all of the locations in the city of Berkeley are missing.
End of explanation
"""
# Create the geocoder
geolocator = Nominatim(user_agent="kaggle_learn")
# Your code here
____
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
def my_geocoder(row):
point = geolocator.geocode(row).point
return pd.Series({'Longitude': point.longitude, 'Latitude': point.latitude})
berkeley_locations = rows_with_missing.apply(lambda x: my_geocoder(x['Address']), axis=1)
starbucks.update(berkeley_locations)
q_1.assert_check_passed()
#%%RM_IF(PROD)%%
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
# Line below will give you solution code
#_COMMENT_IF(PROD)_
q_1.solution()
"""
Explanation: Use the code cell below to fill in these values with the Nominatim geocoder.
Note that in the tutorial, we used Nominatim() (from geopy.geocoders) to geocode values, and this is what you can use in your own projects outside of this course.
In this exercise, you will use a slightly different function Nominatim() (from learntools.geospatial.tools). This function was imported at the top of the notebook and works identically to the function from geopy.
So, in other words, as long as:
- you don't change the import statements at the top of the notebook, and
- you call the geocoding function as geocode() in the code cell below,
your code will work as intended!
End of explanation
"""
# Create a base map
m_2 = folium.Map(location=[37.88,-122.26], zoom_start=13)
# Your code here: Add a marker for each Berkeley location
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_2.a.hint()
# Show the map
embed_map(m_2, 'q_2.html')
#%%RM_IF(PROD)%%
# Create a base map
m_2 = folium.Map(location=[37.88,-122.26], zoom_start=13)
# Add a marker for each Berkeley location
for idx, row in starbucks[starbucks["City"]=='Berkeley'].iterrows():
Marker([row['Latitude'], row['Longitude']]).add_to(m_2)
# Show the map
embed_map(m_2, 'q_2.html')
# Get credit for your work after you have created a map
q_2.a.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
q_2.a.solution()
"""
Explanation: 2) View Berkeley locations.
Let's take a look at the locations you just found. Visualize the (latitude, longitude) locations in Berkeley in the OpenStreetMap style.
End of explanation
"""
# View the solution (Run this code cell to receive credit!)
q_2.b.solution()
"""
Explanation: Considering only the five locations in Berkeley, how many of the (latitude, longitude) locations seem potentially correct (are located in the correct city)?
End of explanation
"""
CA_counties = gpd.read_file("../input/geospatial-learn-course-data/CA_county_boundaries/CA_county_boundaries/CA_county_boundaries.shp")
CA_counties.crs = {'init': 'epsg:4326'}
CA_counties.head()
"""
Explanation: 3) Consolidate your data.
Run the code below to load a GeoDataFrame CA_counties containing the name, area (in square kilometers), and a unique id (in the "GEOID" column) for each county in the state of California. The "geometry" column contains a polygon with county boundaries.
End of explanation
"""
CA_pop = pd.read_csv("../input/geospatial-learn-course-data/CA_county_population.csv", index_col="GEOID")
CA_high_earners = pd.read_csv("../input/geospatial-learn-course-data/CA_county_high_earners.csv", index_col="GEOID")
CA_median_age = pd.read_csv("../input/geospatial-learn-course-data/CA_county_median_age.csv", index_col="GEOID")
"""
Explanation: Next, we create three DataFrames:
- CA_pop contains an estimate of the population of each county.
- CA_high_earners contains the number of households with an income of at least $150,000 per year.
- CA_median_age contains the median age for each county.
End of explanation
"""
# Your code here
CA_stats = ____
# Check your answer
q_3.check()
#%%RM_IF(PROD)%%
CA_stats = CA_counties.set_index("GEOID").join([CA_pop, CA_high_earners, CA_median_age])
q_3.assert_check_failed()
#%%RM_IF(PROD)%%
CA_stats = CA_counties.set_index("GEOID", inplace=False).join([CA_high_earners, CA_median_age, CA_pop]).reset_index()
q_3.assert_check_passed()
#%%RM_IF(PROD)%%
cols_to_add = CA_pop.join([CA_median_age, CA_high_earners]).reset_index()
CA_stats = CA_counties.merge(cols_to_add, on="GEOID")
q_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
"""
Explanation: Use the next code cell to join the CA_counties GeoDataFrame with CA_pop, CA_high_earners, and CA_median_age.
Name the resultant GeoDataFrame CA_stats, and make sure it has 7 columns: "GEOID", "name", "area_sqkm", "geometry", "population", "high_earners", and "median_age".
End of explanation
"""
CA_stats["density"] = CA_stats["population"] / CA_stats["area_sqkm"]
"""
Explanation: Now that we have all of the data in one place, it's much easier to calculate statistics that use a combination of columns. Run the next code cell to create a "density" column with the population density.
End of explanation
"""
# Your code here
sel_counties = ____
# Check your answer
q_4.check()
#%%RM_IF(PROD)%%
sel_counties = CA_stats[((CA_stats.high_earners > 100000) & \
(CA_stats.median_age < 38.5) & \
(CA_stats.density > 285) & \
((CA_stats.median_age < 35.5) | \
(CA_stats.density > 1400) | \
(CA_stats.high_earners > 500000)))]
q_4.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_4.hint()
#_COMMENT_IF(PROD)_
q_4.solution()
"""
Explanation: 4) Which counties look promising?
Collapsing all of the information into a single GeoDataFrame also makes it much easier to select counties that meet specific criteria.
Use the next code cell to create a GeoDataFrame sel_counties that contains a subset of the rows (and all of the columns) from the CA_stats GeoDataFrame. In particular, you should select counties where:
- there are at least 100,000 households making \$150,000 per year,
- the median age is less than 38.5, and
- the density of inhabitants is at least 285 (per square kilometer).
Additionally, selected counties should satisfy at least one of the following criteria:
- there are at least 500,000 households making \$150,000 per year,
- the median age is less than 35.5, or
- the density of inhabitants is at least 1400 (per square kilometer).
End of explanation
"""
starbucks_gdf = gpd.GeoDataFrame(starbucks, geometry=gpd.points_from_xy(starbucks.Longitude, starbucks.Latitude))
starbucks_gdf.crs = {'init': 'epsg:4326'}
"""
Explanation: 5) How many stores did you identify?
When looking for the next Starbucks Reserve Roastery location, you'd like to consider all of the stores within the counties that you selected. So, how many stores are within the selected counties?
To prepare to answer this question, run the next code cell to create a GeoDataFrame starbucks_gdf with all of the starbucks locations.
End of explanation
"""
# Fill in your answer
num_stores = ____
# Check your answer
q_5.check()
#%%RM_IF(PROD)%%
locations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)
num_stores = len(locations_of_interest)
q_5.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_5.hint()
#_COMMENT_IF(PROD)_
q_5.solution()
"""
Explanation: So, how many stores are in the counties you selected?
End of explanation
"""
# Create a base map
m_6 = folium.Map(location=[37,-120], zoom_start=6)
# Your code here: show selected store locations
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_6.hint()
# Show the map
embed_map(m_6, 'q_6.html')
#%%RM_IF(PROD)%%
# Create a base map
m_6 = folium.Map(location=[37,-120], zoom_start=6)
# Your code here: show selected store locations
mc = MarkerCluster()
locations_of_interest = gpd.sjoin(starbucks_gdf, sel_counties)
for idx, row in locations_of_interest.iterrows():
if not math.isnan(row['Longitude']) and not math.isnan(row['Latitude']):
mc.add_child(folium.Marker([row['Latitude'], row['Longitude']]))
m_6.add_child(mc)
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_6.hint()
# Show the map
embed_map(m_6, 'q_6.html')
# Get credit for your work after you have created a map
q_6.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
q_6.solution()
"""
Explanation: 6) Visualize the store locations.
Create a map that shows the locations of the stores that you identified in the previous question.
End of explanation
"""
|
MaxPowerWasTaken/MaxPowerWasTaken.github.io | jupyter_notebooks/Pandas_View_vs_Copy.ipynb | gpl-3.0 | import pandas as pd
df = pd.DataFrame({'Number' : [100,200,300,400,500], 'Letter' : ['a','b','c', 'd', 'e']})
df
"""
Explanation: Pandas Data Munging: Avoiding that 'SettingWithCopyWarning'
If you use Python for data analysis, you probably use Pandas for Data Munging. And if you use Pandas, you've probably come across the warning below:
```
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
```
The Pandas documentation is great in general, but it's easy to read through the link above and still be confused. Or if you're like me, you'll read the documentation page, think "Oh, I get it," and then get the same warning again.
A Simple Reproducible Example of The Warning<sup>(tm)</sup>
Here's where this issue pops up. Say you have some data:
End of explanation
"""
criteria = df['Number']>300
criteria
#Keep only rows which correspond to 'Number'>300 ('True' in the 'criteria' vector above)
df[criteria]
"""
Explanation: ...and you want to filter it on some criteria. Pandas makes that easy with Boolean Indexing
End of explanation
"""
#Create a new DataFrame based on filtering criteria
df_2 = df[criteria]
#Assign a new column and print output
df_2['new column'] = 'new value'
df_2
"""
Explanation: This works great, right? Unfortunately not, because once we:
1. Use that filtering code to create a new Pandas DataFrame, and
2. Assign a new column or change an existing column in that DataFrame
like so...
End of explanation
"""
df.loc[criteria, :]
#Create New DataFrame Based on Filtering Criteria
df_2 = df.loc[criteria, :]
#Add a New Column to the DataFrame
df_2.loc[:, 'new column'] = 'new value'
df_2
"""
Explanation: There's the warning.
So what should we have done differently? The warning suggests using ".loc[row_indexer, col_indexer]". So let's try subsetting the DataFrame the same way as before, but this time using the df.loc[ ] method.
Re-Creating Our New Dataframe Using .loc[]
End of explanation
"""
criteria
"""
Explanation: Two warnings this time!
OK, So What's Going On?
Recall that our "criteria" variable is a Pandas Series of Boolean True/False values, corresponding to whether a row of 'df' meets our Number>300 criteria.
End of explanation
"""
df_2 = df[criteria]
"""
Explanation: The Pandas Docs say a "common operation is the use of boolean vectors to filter the data" as we've done here. But apparently a boolean vector is not the "row_indexer" the warning advises us to use with .loc[] for creating new dataframes. Instead, Pandas wants us to use .loc[] with a vector of row-numbers (technically, "row labels", which here are numbers).
Solution
We can get to that "row_indexer" with one extra line of code. Building on what we had before. Instead of creating our new dataframe by filtering rows with a vector of True/False like below...
End of explanation
"""
criteria_row_indices = df[criteria].index
criteria_row_indices
"""
Explanation: We first grab the indices of that filtered dataframe using .index...
End of explanation
"""
new_df = df.loc[criteria_row_indices, :]
new_df
"""
Explanation: And pass that list of indices to .loc[ ] to create our new dataframe
End of explanation
"""
new_df['New Column'] = 'New Value'
new_df
"""
Explanation: Now we can add a new column without throwing The Warning <sup>(tm)</sup>
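As a closing aside, another common way to avoid the warning is to make the copy explicit with .copy(). This tells Pandas that the new DataFrame is deliberately independent of the original, so later assignments are unambiguous:

```python
import pandas as pd

df = pd.DataFrame({'Number': [100, 200, 300, 400, 500],
                   'Letter': ['a', 'b', 'c', 'd', 'e']})
criteria = df['Number'] > 300

# .copy() severs the link back to df, so this assignment is warning-free.
df_2 = df[criteria].copy()
df_2['new column'] = 'new value'
print(df_2)
```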
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/775a4c9edcb81275d5a07fdad54343dc/channel_epochs_image.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
Two images are produced, one with a good channel and one with a channel
that does not show any evoked field.
It is also demonstrated how to reorder the epochs using a 1D spectral
embedding as described in :footcite:GramfortEtAl2010.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.4
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# Create epochs, here for gradiometers + EOG only for simplicity
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
"""
Explanation: Set parameters
End of explanation
"""
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.manifold import spectral_embedding # noqa
from sklearn.metrics.pairwise import rbf_kernel # noqa
def order_func(times, data):
this_data = data[:, (times > 0.0) & (times < 0.350)]
this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
n_components=1, random_state=0).ravel())
good_pick = 97 # channel with a clear evoked response
bad_pick = 98 # channel with no evoked response
# We'll also plot a sample time onset for each trial
plt_times = np.linspace(0, .2, len(epochs))
plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,
order=order_func, vmin=-250, vmax=250,
overlay_times=plt_times, show=True)
"""
Explanation: Show event-related fields images
End of explanation
"""
|
tensorflow/probability | tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp
plt.style.use('ggplot')
tfd = tfp.distributions
def session_options(enable_gpu_ram_resizing=True):
"""Convenience function which sets common `tf.Session` options."""
config = tf.ConfigProto()
config.log_device_placement = True
if enable_gpu_ram_resizing:
# `allow_growth=True` makes it possible to connect multiple colabs to your
# GPU. Otherwise the colab malloc's all GPU ram.
config.gpu_options.allow_growth = True
return config
def reset_sess(config=None):
"""Convenience function to create the TF graph and session, or reset them."""
if config is None:
config = session_options()
tf.reset_default_graph()
global sess
try:
sess.close()
except:
pass
sess = tf.InteractiveSession(config=config)
# For reproducibility
rng = np.random.RandomState(seed=45)
tf.set_random_seed(76)
# Precision
dtype = np.float64
# Number of training samples
num_samples = 50000
# Ground truth loc values which we will infer later on. The scale is 1.
true_loc = np.array([[-4, -4],
[0, 0],
[4, 4]], dtype)
true_components_num, dims = true_loc.shape
# Generate training samples from ground truth loc
true_hidden_component = rng.randint(0, true_components_num, num_samples)
observations = (true_loc[true_hidden_component]
+ rng.randn(num_samples, dims).astype(dtype))
# Visualize samples
plt.scatter(observations[:, 0], observations[:, 1], 1)
plt.axis([-10, 10, -10, 10])
plt.show()
"""
Explanation: Fitting Dirichlet Process Mixture Model Using Preconditioned Stochastic Gradient Langevin Dynamics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Fitting_DPMM_Using_pSGLD"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Fitting_DPMM_Using_pSGLD.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this notebook, we will demonstrate how to cluster a large number of samples and infer the number of clusters simultaneously by fitting a Dirichlet Process Mixture of Gaussian distribution. We use Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD) for inference.
Table of contents
Samples
Model
Optimization
Visualize the result
4.1. Clustered result
4.2. Visualize uncertainty
4.3. Mean and scale of selected mixture component
4.4. Mixture weight of each mixture component
4.5. Convergence of $\alpha$
4.6. Inferred number of clusters over iterations
4.7. Fitting the model using RMSProp
Conclusion
1. Samples
First, we set up a toy dataset. We generate 50,000 random samples from three bivariate Gaussian distributions.
End of explanation
"""
reset_sess()
# Upperbound on K
max_cluster_num = 30
# Define trainable variables.
mix_probs = tf.nn.softmax(
tf.Variable(
name='mix_probs',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc = tf.Variable(
name='loc',
initial_value=np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision = tf.nn.softplus(tf.Variable(
name='precision',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha = tf.nn.softplus(tf.Variable(
name='alpha',
initial_value=
np.ones([1], dtype=dtype)))
training_vals = [mix_probs, alpha, loc, precision]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype) * alpha / max_cluster_num,
name='rv_sdp')
rv_loc = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc')
rv_precision = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision')
rv_alpha = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha')
# Define mixture model
rv_observations = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc,
scale_diag=precision))
"""
Explanation: 2. Model
Here, we define a Dirichlet Process Mixture of Gaussian distributions with a symmetric Dirichlet prior. Throughout the notebook, vector quantities are written in bold. Over $i\in\{1,\ldots,N\}$ samples, the model with a mixture of $j \in\{1,\ldots,K\}$ Gaussian distributions is formulated as follows:
$$\begin{align}
p(\boldsymbol{x}_1,\cdots, \boldsymbol{x}_N) &=\prod_{i=1}^N \text{GMM}(x_i), \\
&\,\quad \text{with}\;\text{GMM}(x_i)=\sum_{j=1}^K\pi_j\text{Normal}(x_i\,|\,\text{loc}=\boldsymbol{\mu_{j}},\,\text{scale}=\boldsymbol{\sigma_{j}})
\end{align}$$
where:
$$\begin{align}
x_i&\sim \text{Normal}(\text{loc}=\boldsymbol{\mu}_{z_i},\,\text{scale}=\boldsymbol{\sigma}_{z_i}) \\
z_i &\sim \text{Categorical}(\text{prob}=\boldsymbol{\pi}),\\
&\,\quad \text{with}\;\boldsymbol{\pi}=\{\pi_1,\cdots,\pi_K\}\\
\boldsymbol{\pi}&\sim\text{Dirichlet}(\text{concentration}=\{\tfrac{\alpha}{K},\cdots,\tfrac{\alpha}{K}\})\\
\alpha&\sim \text{InverseGamma}(\text{concentration}=1,\,\text{rate}=1)\\
\boldsymbol{\mu_j} &\sim \text{Normal}(\text{loc}=\boldsymbol{0}, \,\text{scale}=\boldsymbol{1})\\
\boldsymbol{\sigma_j} &\sim \text{InverseGamma}(\text{concentration}=\boldsymbol{1},\,\text{rate}=\boldsymbol{1})
\end{align}$$
Our goal is to assign each $x_i$ to the $j$th cluster through $z_i$ which represents the inferred index of a cluster.
For an ideal Dirichlet Mixture Model, $K$ is set to $\infty$. However, it is known that one can approximate a Dirichlet Mixture Model with a sufficiently large $K$. Note that although we arbitrarily set an initial value of $K$, an optimal number of clusters is also inferred through optimization, unlike a simple Gaussian Mixture Model.
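As a small illustrative aside (with arbitrarily chosen numbers, not taken from the model above): drawing mixture weights from the finite symmetric $\text{Dirichlet}(\alpha/K)$ prior with a modest $\alpha$ puts almost all of the mass on a few components, which is why the extra clusters are effectively switched off:

```python
import numpy as np

rng = np.random.RandomState(0)
K, alpha = 30, 1.0

# Mixture weights under the finite symmetric Dirichlet approximation.
pi = rng.dirichlet(np.ones(K) * alpha / K)

print(np.round(np.sort(pi)[::-1][:5], 3))  # a handful of weights dominate
print(pi.sum())                            # weights always sum to 1
```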
In this notebook, we use a bivariate Gaussian distribution as a mixture component and set $K$ to 30.
End of explanation
"""
# Learning rates and decay
starter_learning_rate = 1e-6
end_learning_rate = 1e-10
decay_steps = 1e4
# Number of training steps
training_steps = 10000
# Mini-batch size
batch_size = 20
# Sample size for parameter posteriors
sample_size = 100
"""
Explanation: 3. Optimization
We optimize the model with Preconditioned Stochastic Gradient Langevin Dynamics (pSGLD), which enables us to optimize a model over a large number of samples in a mini-batch gradient descent manner.
To update parameters $\boldsymbol{\theta}\equiv\{\boldsymbol{\pi},\,\alpha,\,\boldsymbol{\mu_j},\,\boldsymbol{\sigma_j}\}$ in the $t$-th iteration with mini-batch size $M$, the update is sampled as:
$$\begin{align}
\Delta \boldsymbol { \theta } _ { t } & \sim \frac { \epsilon _ { t } } { 2 } \bigl[ G \left( \boldsymbol { \theta } _ { t } \right) \bigl( \nabla _ { \boldsymbol { \theta } } \log p \left( \boldsymbol { \theta } _ { t } \right)
+ \frac { N } { M } \sum _ { k = 1 } ^ { M } \nabla _ \boldsymbol { \theta } \log \text{GMM}(x_{t_k})\bigr) + \sum_\boldsymbol{\theta}\nabla_\theta G \left( \boldsymbol { \theta } _ { t } \right) \bigr]\\
&+ G ^ { \frac { 1 } { 2 } } \left( \boldsymbol { \theta } _ { t } \right) \text { Normal } \left( \text{loc}=\boldsymbol{0} ,\, \text{scale}=\epsilon _ { t }\boldsymbol{1} \right)\\
\end{align}$$
In the above equation, $\epsilon _ { t }$ is the learning rate at the $t$-th iteration and $\log p(\theta_t)$ is the sum of the log prior densities of $\theta$. $G ( \boldsymbol { \theta } _ { t })$ is a preconditioner that adjusts the scale of the gradient of each parameter.
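To make the update concrete, here is a minimal NumPy sketch of one pSGLD step for a flat parameter vector. It uses the common simplification of dropping the $\sum\nabla G$ correction term, and all constants are illustrative assumptions rather than the notebook's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def psgld_step(theta, grad_log_post, v, lr, decay=0.99, eps=1e-5):
    """One pSGLD update. grad_log_post is the stochastic gradient of
    log p(theta) + (N/M) * sum_k log GMM(x_k) on the current mini-batch."""
    v = decay * v + (1.0 - decay) * grad_log_post ** 2  # running 2nd moment
    G = 1.0 / (eps + np.sqrt(v))                        # diagonal preconditioner
    noise = rng.normal(0.0, np.sqrt(lr * G))            # Normal(0, lr * G) noise
    return theta + 0.5 * lr * G * grad_log_post + noise, v

theta, v = np.zeros(4), np.zeros(4)
theta, v = psgld_step(theta, grad_log_post=np.ones(4), v=v, lr=1e-3)
print(theta.shape)  # (4,)
```

The running second moment `v` is exactly the RMSProp statistic, which is why pSGLD can be seen as RMSProp plus properly scaled Gaussian noise.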
End of explanation
"""
# Placeholder for mini-batch
observations_tensor = tf.compat.v1.placeholder(dtype, shape=[batch_size, dims])
# Define joint log probabilities
# Notice that each prior probability should be divided by num_samples and
# likelihood is divided by batch_size for pSGLD optimization.
log_prob_parts = [
rv_loc.log_prob(loc) / num_samples,
rv_precision.log_prob(precision) / num_samples,
rv_alpha.log_prob(alpha) / num_samples,
rv_symmetric_dirichlet_process.log_prob(mix_probs)[..., tf.newaxis]
/ num_samples,
rv_observations.log_prob(observations_tensor) / batch_size
]
joint_log_prob = tf.reduce_sum(tf.concat(log_prob_parts, axis=-1), axis=-1)
# Make mini-batch generator
dx = tf.compat.v1.data.Dataset.from_tensor_slices(observations)\
.shuffle(500).repeat().batch(batch_size)
iterator = tf.compat.v1.data.make_one_shot_iterator(dx)
next_batch = iterator.get_next()
# Define learning rate scheduling
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate,
global_step, decay_steps,
end_learning_rate, power=1.)
# Set up the optimizer. Don't forget to set data_size=num_samples.
optimizer_kernel = tfp.optimizer.StochasticGradientLangevinDynamics(
learning_rate=learning_rate,
preconditioner_decay_rate=0.99,
burnin=1500,
data_size=num_samples)
train_op = optimizer_kernel.minimize(-joint_log_prob)
# Arrays to store samples
mean_mix_probs_mtx = np.zeros([training_steps, max_cluster_num])
mean_alpha_mtx = np.zeros([training_steps, 1])
mean_loc_mtx = np.zeros([training_steps, max_cluster_num, dims])
mean_precision_mtx = np.zeros([training_steps, max_cluster_num, dims])
init = tf.global_variables_initializer()
sess.run(init)
start = time.time()
for it in range(training_steps):
[
mean_mix_probs_mtx[it, :],
mean_alpha_mtx[it, 0],
mean_loc_mtx[it, :, :],
mean_precision_mtx[it, :, :],
_
] = sess.run([
*training_vals,
train_op
], feed_dict={
observations_tensor: sess.run(next_batch)})
elapsed_time_psgld = time.time() - start
print("Elapsed time: {} seconds".format(elapsed_time_psgld))
# Take mean over the last sample_size iterations
mean_mix_probs_ = mean_mix_probs_mtx[-sample_size:, :].mean(axis=0)
mean_alpha_ = mean_alpha_mtx[-sample_size:, :].mean(axis=0)
mean_loc_ = mean_loc_mtx[-sample_size:, :].mean(axis=0)
mean_precision_ = mean_precision_mtx[-sample_size:, :].mean(axis=0)
"""
Explanation: We will use the joint log probability of the likelihood $\text{GMM}(x_{t_k})$ and the prior probabilities $p(\theta_t)$ as the loss function for pSGLD.
Note that as specified in the API of pSGLD, we need to divide the sum of the prior probabilities by sample size $N$.
End of explanation
"""
loc_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='loc_for_posterior')
precision_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num, dims], name='precision_for_posterior')
mix_probs_for_posterior = tf.compat.v1.placeholder(
dtype, [None, max_cluster_num], name='mix_probs_for_posterior')
# Posterior of z (unnormalized)
unnormalized_posterior = tfd.MultivariateNormalDiag(
loc=loc_for_posterior, scale_diag=precision_for_posterior)\
.log_prob(tf.expand_dims(tf.expand_dims(observations, axis=1), axis=1))\
+ tf.log(mix_probs_for_posterior[tf.newaxis, ...])
# Posterior of z (normalized over latent states)
posterior = unnormalized_posterior\
- tf.reduce_logsumexp(unnormalized_posterior, axis=-1)[..., tf.newaxis]
cluster_asgmt = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]})
idxs, count = np.unique(cluster_asgmt, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
def convert_int_elements_to_consecutive_numbers_in(array):
unique_int_elements = np.unique(array)
for consecutive_number, unique_int_element in enumerate(unique_int_elements):
array[array == unique_int_element] = consecutive_number
return array
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(cluster_asgmt)))
plt.axis([-10, 10, -10, 10])
plt.show()
"""
Explanation: 4. Visualize the result
4.1. Clustered result
First, we visualize the result of clustering.
For assigning each sample $x_i$ to a cluster $j$, we calculate the posterior of $z_i$ as:
$$\begin{align}
j = \underset{z_i}{\arg\max}\,p(z_i\,|\,x_i,\,\boldsymbol{\theta})
\end{align}$$
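As a toy illustration of this rule (with made-up posterior values, not the notebook's tensors): average the posterior over the $O$ draws for each sample, then take the argmax over clusters.

```python
import numpy as np

# Toy posteriors with shape (samples, draws, clusters); values are made up.
post = np.array([[[0.1, 0.9], [0.2, 0.8]],   # sample 0: two draws over K=2
                 [[0.7, 0.3], [0.6, 0.4]]])  # sample 1
assignments = np.argmax(post.mean(axis=1), axis=1)
print(assignments)  # [1 0]
```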
End of explanation
"""
# Calculate entropy
posterior_in_exponential = tf.exp(posterior)
uncertainty_in_entropy = tf.reduce_mean(-tf.reduce_sum(
posterior_in_exponential
* posterior,
axis=1), axis=1)
uncertainty_in_entropy_ = sess.run(uncertainty_in_entropy, feed_dict={
loc_for_posterior: mean_loc_mtx[-sample_size:, :],
precision_for_posterior: mean_precision_mtx[-sample_size:, :],
mix_probs_for_posterior: mean_mix_probs_mtx[-sample_size:, :]
})
plt.title('Entropy')
sc = plt.scatter(observations[:, 0],
observations[:, 1],
1,
c=uncertainty_in_entropy_,
cmap=plt.cm.viridis_r)
cbar = plt.colorbar(sc,
fraction=0.046,
pad=0.04,
ticks=[uncertainty_in_entropy_.min(),
uncertainty_in_entropy_.max()])
cbar.ax.set_yticklabels(['low', 'high'])
cbar.set_label('Uncertainty', rotation=270)
plt.show()
"""
Explanation: We can see that roughly equal numbers of samples are assigned to the appropriate clusters, and the model has successfully inferred the correct number of clusters as well.
4.2. Visualize uncertainty
Here, we look at the uncertainty of the clustering result by visualizing it for each sample.
We calculate uncertainty by using entropy:
$$\begin{align}
\text{Uncertainty}_\text{entropy} = -\frac{1}{K}\sum_{z_i=1}^{K}\sum_{l=1}^{O}p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)\log p(z_i\,|\,x_i,\,\boldsymbol{\theta}_l)
\end{align}$$
In pSGLD, we treat the value of a training parameter at each iteration as a sample from its posterior distribution. Thus, we calculate entropy over values from $O$ iterations for each parameter. The final entropy value is calculated by averaging entropies of all the cluster assignments.
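The same computation in plain NumPy (with random illustrative log-posteriors rather than the notebook's TensorFlow tensors) looks like this; it follows the $1/K$ normalization of the formula above.

```python
import numpy as np

rng = np.random.default_rng(2)
N, O, K = 5, 100, 3  # samples, posterior draws, clusters (illustrative sizes)

# Normalized log p(z_i | x_i, theta_l) with shape (N, O, K)
logits = rng.normal(size=(N, O, K))
log_post = logits - np.logaddexp.reduce(logits, axis=-1, keepdims=True)

# -(1/K) * sum over clusters and draws of p * log p, per sample
entropy = -np.sum(np.exp(log_post) * log_post, axis=(1, 2)) / K
print(entropy.shape)  # (5,)
```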
End of explanation
"""
for idx, number_of_samples in zip(idxs, count):
print(
'Component id = {}, Number of elements = {}'
.format(idx, number_of_samples))
print(
'Mean loc = {}, Mean scale = {}\n'
.format(mean_loc_[idx, :], mean_precision_[idx, :]))
"""
Explanation: In the above graph, less luminance represents more uncertainty.
We can see that the samples near the boundaries of the clusters have especially high uncertainty. This is intuitive, as those samples are difficult to cluster.
4.3. Mean and scale of selected mixture component
Next, we look at selected clusters' $\mu_j$ and $\sigma_j$.
End of explanation
"""
plt.ylabel('Mean posterior of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mean_mix_probs_)
plt.show()
"""
Explanation: Again, the $\boldsymbol{\mu_j}$ and $\boldsymbol{\sigma_j}$ are close to the ground truth.
4.4 Mixture weight of each mixture component
We also look at inferred mixture weights.
End of explanation
"""
print('Value of inferred alpha = {0:.3f}\n'.format(mean_alpha_[0]))
plt.ylabel('Sample value of alpha')
plt.xlabel('Iteration')
plt.plot(mean_alpha_mtx)
plt.show()
"""
Explanation: We see that only a few (three) mixture components have significant weights, while the rest of the weights are close to zero. This also shows that the model successfully inferred the correct number of mixture components that constitute the distribution of the samples.
4.5. Convergence of $\alpha$
We look at convergence of Dirichlet distribution's concentration parameter $\alpha$.
End of explanation
"""
step = sample_size
num_of_iterations = 50
estimated_num_of_clusters = []
interval = (training_steps - step) // (num_of_iterations - 1)
iterations = np.asarray(range(step, training_steps+1, interval))
for iteration in iterations:
start_position = iteration-step
end_position = iteration
result = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior:
mean_loc_mtx[start_position:end_position, :],
precision_for_posterior:
mean_precision_mtx[start_position:end_position, :],
mix_probs_for_posterior:
mean_mix_probs_mtx[start_position:end_position, :]})
idxs, count = np.unique(result, return_counts=True)
estimated_num_of_clusters.append(len(count))
plt.ylabel('Number of inferred clusters')
plt.xlabel('Iteration')
plt.yticks(np.arange(1, max(estimated_num_of_clusters) + 1, 1))
plt.plot(iterations - 1, estimated_num_of_clusters)
plt.show()
"""
Explanation: Considering that a smaller $\alpha$ results in a smaller expected number of clusters in a Dirichlet mixture model, the model seems to be learning the optimal number of clusters over the iterations.
4.6. Inferred number of clusters over iterations
We visualize how the inferred number of clusters changes over iterations.
To do so, we infer the number of clusters over the iterations.
End of explanation
"""
# Learning rates and decay
starter_learning_rate_rmsprop = 1e-2
end_learning_rate_rmsprop = 1e-4
decay_steps_rmsprop = 1e4
# Number of training steps
training_steps_rmsprop = 50000
# Mini-batch size
batch_size_rmsprop = 20
# Define trainable variables.
mix_probs_rmsprop = tf.nn.softmax(
tf.Variable(
name='mix_probs_rmsprop',
initial_value=np.ones([max_cluster_num], dtype) / max_cluster_num))
loc_rmsprop = tf.Variable(
name='loc_rmsprop',
initial_value=np.zeros([max_cluster_num, dims], dtype)
+ np.random.uniform(
low=-9, #set around minimum value of sample value
high=9, #set around maximum value of sample value
size=[max_cluster_num, dims]))
precision_rmsprop = tf.nn.softplus(tf.Variable(
name='precision_rmsprop',
initial_value=
np.ones([max_cluster_num, dims], dtype=dtype)))
alpha_rmsprop = tf.nn.softplus(tf.Variable(
name='alpha_rmsprop',
initial_value=
np.ones([1], dtype=dtype)))
training_vals_rmsprop =\
[mix_probs_rmsprop, alpha_rmsprop, loc_rmsprop, precision_rmsprop]
# Prior distributions of the training variables
#Use symmetric Dirichlet prior as finite approximation of Dirichlet process.
rv_symmetric_dirichlet_process_rmsprop = tfd.Dirichlet(
concentration=np.ones(max_cluster_num, dtype)
* alpha_rmsprop / max_cluster_num,
name='rv_sdp_rmsprop')
rv_loc_rmsprop = tfd.Independent(
tfd.Normal(
loc=tf.zeros([max_cluster_num, dims], dtype=dtype),
scale=tf.ones([max_cluster_num, dims], dtype=dtype)),
reinterpreted_batch_ndims=1,
name='rv_loc_rmsprop')
rv_precision_rmsprop = tfd.Independent(
tfd.InverseGamma(
concentration=np.ones([max_cluster_num, dims], dtype),
rate=np.ones([max_cluster_num, dims], dtype)),
reinterpreted_batch_ndims=1,
name='rv_precision_rmsprop')
rv_alpha_rmsprop = tfd.InverseGamma(
concentration=np.ones([1], dtype=dtype),
rate=np.ones([1]),
name='rv_alpha_rmsprop')
# Define mixture model
rv_observations_rmsprop = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(probs=mix_probs_rmsprop),
components_distribution=tfd.MultivariateNormalDiag(
loc=loc_rmsprop,
scale_diag=precision_rmsprop))
log_prob_parts_rmsprop = [
rv_loc_rmsprop.log_prob(loc_rmsprop),
rv_precision_rmsprop.log_prob(precision_rmsprop),
rv_alpha_rmsprop.log_prob(alpha_rmsprop),
rv_symmetric_dirichlet_process_rmsprop
.log_prob(mix_probs_rmsprop)[..., tf.newaxis],
rv_observations_rmsprop.log_prob(observations_tensor)
* num_samples / batch_size
]
joint_log_prob_rmsprop = tf.reduce_sum(
tf.concat(log_prob_parts_rmsprop, axis=-1), axis=-1)
# Define learning rate scheduling
global_step_rmsprop = tf.Variable(0, trainable=False)
learning_rate = tf.train.polynomial_decay(
starter_learning_rate_rmsprop,
global_step_rmsprop, decay_steps_rmsprop,
end_learning_rate_rmsprop, power=1.)
# Set up the optimizer. Don't forget to set data_size=num_samples.
optimizer_kernel_rmsprop = tf.train.RMSPropOptimizer(
learning_rate=learning_rate,
decay=0.99)
train_op_rmsprop = optimizer_kernel_rmsprop.minimize(-joint_log_prob_rmsprop)
init_rmsprop = tf.global_variables_initializer()
sess.run(init_rmsprop)
start = time.time()
for it in range(training_steps_rmsprop):
[
_
] = sess.run([
train_op_rmsprop
], feed_dict={
observations_tensor: sess.run(next_batch)})
elapsed_time_rmsprop = time.time() - start
print("RMSProp elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_rmsprop, training_steps_rmsprop))
print("pSGLD elapsed_time: {} seconds ({} iterations)"
.format(elapsed_time_psgld, training_steps))
mix_probs_rmsprop_, alpha_rmsprop_, loc_rmsprop_, precision_rmsprop_ =\
sess.run(training_vals_rmsprop)
"""
Explanation: Over the iterations, the inferred number of clusters approaches three. Together with the convergence of $\alpha$ to a smaller value, this shows the model is successfully learning the parameters needed to infer an optimal number of clusters.
Interestingly, the inference converged to the correct number of clusters in the early iterations, unlike $\alpha$, which converged much later.
4.7. Fitting the model using RMSProp
In this section, to see the effectiveness of Monte Carlo sampling scheme of pSGLD, we use RMSProp to fit the model. We choose RMSProp for comparison because it comes without the sampling scheme and pSGLD is based on RMSProp.
End of explanation
"""
cluster_asgmt_rmsprop = sess.run(tf.argmax(
tf.reduce_mean(posterior, axis=1), axis=1), feed_dict={
loc_for_posterior: loc_rmsprop_[tf.newaxis, :],
precision_for_posterior: precision_rmsprop_[tf.newaxis, :],
mix_probs_for_posterior: mix_probs_rmsprop_[tf.newaxis, :]})
idxs, count = np.unique(cluster_asgmt_rmsprop, return_counts=True)
print('Number of inferred clusters = {}\n'.format(len(count)))
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print('Number of elements in each cluster = {}\n'.format(count))
cmap = plt.get_cmap('tab10')
plt.scatter(
observations[:, 0], observations[:, 1],
1,
c=cmap(convert_int_elements_to_consecutive_numbers_in(
cluster_asgmt_rmsprop)))
plt.axis([-10, 10, -10, 10])
plt.show()
"""
Explanation: Compared to pSGLD, although RMSProp needs more iterations, its optimization is much faster.
Next, we look at the clustering result.
End of explanation
"""
plt.ylabel('MAP inference of mixture weight')
plt.xlabel('Component')
plt.bar(range(0, max_cluster_num), mix_probs_rmsprop_)
plt.show()
"""
Explanation: The number of clusters was not correctly inferred by RMSProp optimization in our experiment. We also look at the mixture weights.
End of explanation
"""
|
dm-wyncode/zipped-code | content/posts/meditations/mongodb-geojson/solving-a-ftlpd-data-delivery-problem-before-i-write-code.ipynb | mit | from IPython.display import IFrame
IFrame(
'https://www.sunfrog.com/Geek-Tech/First-solve-the-problem-Then-write-the-code.html',
width=800,
height=350,
)
"""
Explanation: T-shirt inspiration
❝first solve the problem then write the code❞
End of explanation
"""
from IPython.display import IFrame
IFrame(
'https://docs.mongodb.com/manual/reference/geojson/',
width=800,
height=350,
)
"""
Explanation: Introduction
This Jupyter notebook is a place to keep my thoughts organized on how to best present Fort Lauderdale Police department data obtained at the 2016 Fort Lauderdale Civic Hackathon. I blogged about it here.
Just prior to the Fort Lauderdale Civic Hackathon, I experimented with a MapBox GL JS API. You can see my simple demonstration here where I created a bunch of fake points and cluster mapped them.
That experiment is what inspired me to suggest to my hackathon partner David Karim that we heat map the data. See that map here.
MongoDB
I know little about databases though I respect their power to efficiently handle data. I struggle with cognitive overhead of SQL databases with their normalized data and join commands.
When I heard that MongoDB organized its data without a requirement of normalization, I knew I had to investigate because that CSV file of data was not normalized.
MongoDB's behavior fit my mental model of how I imagined it would be to easily handle data. While I have little experience with which to compare, it appears that MongoDB can efficiently handle over 92,000 citation documents with ease.
A more difficult question: Can I write code to make MongoDB do its thing most efficiently?!
Creating geojson data
The MapBox API works well with geojson data. A quick search on Google reveals that MongoDB has built-in support for geojson!
End of explanation
"""
# a module for importing values so I do not have to expose them in this Jupyter notebook
from meta import dockerized_mongo_path
# the '!' preceding the command allows me to access the shell from the Jupyter notebook
# in which I am writing this blog post
# ./expect-up-daemon calls a /usr/bin/expect script
# in the $dockerized_mongo_path to bring up the dockerized MongoDB
!cd $dockerized_mongo_path && ./expect-up-daemon
"""
Explanation: Tasks
Create a new collection called 'citations_geojson'.
Jump down to collection creation in this blog post.
* Create some new documents in 'citations_geojson' as GeoJSON point objects.
* [definition of a geojson point](http://geojson.org/geojson-spec.html#point).
* What a point looks like in MongoDB.
```json
{ type: "Point", coordinates: [ 40, 5 ] }
```
* [PyMongo driver has some info on geo indexing that might be relevant](https://api.mongodb.com/python/current/examples/geo.html).
* [MongoDB docs on geospatial indices](https://docs.mongodb.com/manual/core/geospatial-indexes/).
* [geojson geometry collections looks relevant](http://geojson.org/geojson-spec.html#geometrycollection)
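A small, hedged sketch of the first tasks: build a citation document that embeds a GeoJSON Point (the field names are my assumptions, not the actual FLPD schema), together with the PyMongo calls that would store and index it, left commented out because they need a running MongoDB.

```python
# Hypothetical citation document; field names are assumptions for illustration.
citation = {
    "citation_number": "HYPOTHETICAL-0001",
    "location": {
        "type": "Point",
        # GeoJSON order is [longitude, latitude]
        "coordinates": [-80.1373, 26.1224],  # roughly Fort Lauderdale
    },
}

# With a live MongoDB instance, storing and geo-indexing would look like:
# from pymongo import MongoClient
# db = MongoClient().app
# db.citations_geojson.insert_one(citation)
# db.citations_geojson.create_index([("location", "2dsphere")])
print(citation["location"]["type"])  # Point
```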
question
Can the MapBox API utilize geojson data served up directly from MongoDB?
While the MongoDB objects look similar to the geojson I was used to seeing when building MapBox maps, they do not appear to be exactly the same.
Theory: a MapBox API may be able to handle the data directly from MongoDB. I have to figure out how to make that connection.
possible sources of answers
geojson vt blog post from MapBox
❝If you’re using Mapbox GL-based tools (either GL JS or Mapbox Mobile), you’re already using GeoJSON-VT under the hood.❞
So it is possible to feed large amounts of data to a map. It does not answer the question of whether I have to munge the data coming from MongoDB first to make it valid MapBox GeoJSON.
This definitely looks promising from the npm domain!
❝GeoJSON normalization for mongoDB. Convert an array of documents with geospatial information (2dsphere only) into a GeoJSON feature collection.❞
Some words I recognize from trying out MapBox: ❝GeoJSON feature collection❞
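If the npm package's approach holds, the normalization itself is simple enough to sketch in plain Python: fold documents that carry a geometry field into a GeoJSON FeatureCollection. The field names below are assumptions, not the real citation schema.

```python
def to_feature_collection(docs, geometry_field="location"):
    """Fold MongoDB-style documents into a GeoJSON FeatureCollection."""
    features = [
        {
            "type": "Feature",
            "geometry": doc[geometry_field],
            "properties": {k: v for k, v in doc.items() if k != geometry_field},
        }
        for doc in docs
    ]
    return {"type": "FeatureCollection", "features": features}

# Illustrative document; real citation fields would come from the CSV data.
docs = [{"location": {"type": "Point", "coordinates": [-80.14, 26.12]},
         "fine": 50}]
fc = to_feature_collection(docs)
print(fc["type"], len(fc["features"]))  # FeatureCollection 1
```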
Create a collection in MongoDB using PyMongo.
%%HTML # create-collection
<span id="create-collection"></span>
Start the MongoDB Docker container
The repository for this data in a Dockerized MongoDB instance is here: dm-wyncode/docker-mongo-flpd-hackathon-data
End of explanation
"""
from pymongo import MongoClient
client = MongoClient()
db = client.app
collection_names = sorted(db.collection_names())
print(collection_names)
collections = accidents, citations = [db.get_collection(collection)
for collection
in collection_names]
info = [{collection_name: format(collection.count(), ',')}
for collection_name, collection
in zip(collection_names, collections)]
print("document counts")
for item in info:
print(item)
"""
Explanation: Verify that the database is running and responding to the pymongo driver.
End of explanation
"""
|
batfish/pybatfish | jupyter_notebooks/Analyzing the Impact of Failures (and letting loose a Chaos Monkey).ipynb | apache-2.0 | # Import packages
%run startup.py
bf = Session(host="localhost")
# Initialize the example network and snapshot
NETWORK_NAME = "example_network"
BASE_SNAPSHOT_NAME = "base"
SNAPSHOT_PATH = "networks/failure-analysis"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=BASE_SNAPSHOT_NAME, overwrite=True)
"""
Explanation: Analyzing the Impact of Failures (and letting loose a Chaos Monkey)
Planned (maintenance) and unplanned failure of nodes and interfaces in the network is a frequent occurrence. While most networks are designed to be tolerant to such failures, gaining confidence that they are in fact tolerant is difficult. Network engineers often reason about network behavior under failures manually, which is a complex and error-prone task. Consequently, the network could be one link failure away from an outage that leads to a massive loss of revenue and reputation.
Fortunately, based just on device configurations, Batfish makes it easy to proactively analyze the network behavior under failures and offer guarantees on its tolerance to a range of failure scenarios.
In this notebook, we will show how to use Batfish to analyze network behavior under failures. Specifically, we will describe how to simulate a specific network failure scenario, how to check forwarding changes for all flows in that scenario, and finally how to identify vulnerabilities using Chaos Monkey style testing.
Check out a video demo of this notebook here.
Initialization
We will use the example network shown below, with three autonomous systems (ASes) that connect via eBGP. Within each AS, iBGP and OSPF are used. The configurations of these devices are available here.
End of explanation
"""
# Fork a new snapshot with London deactivated
FAIL_LONDON_SNAPSHOT_NAME = "fail_london"
bf.fork_snapshot(BASE_SNAPSHOT_NAME, FAIL_LONDON_SNAPSHOT_NAME, deactivate_nodes=["london"], overwrite=True)
"""
Explanation: bf.fork_snapshot: Simulating network failures
To simulate network failures, Batfish offers a simple API bf.fork_snapshot that clones the original snapshot to a new one with the specified failure scenarios.
Suppose we want to analyze the scenario where node London fails. We can use bf.fork_snapshot to simulate this failure as shown below.
End of explanation
"""
# Get the answer of a traceroute question from Paris to the PoP's prefix
pop_prefix = "2.128.0.0/24"
tr_answer = bf.q.traceroute(
startLocation="paris",
headers=HeaderConstraints(dstIps=pop_prefix),
maxTraces=1
).answer(FAIL_LONDON_SNAPSHOT_NAME)
# Display the result in a pretty form
show(tr_answer.frame())
"""
Explanation: In the code, bf.fork_snapshot accepts four parameters: BASE_SNAPSHOT_NAME indicates the original snapshot name, FAIL_LONDON_SNAPSHOT_NAME is the name of the new snapshot, deactivate_nodes is a list of nodes that we wish to fail, and overwrite=True indicates that we want to reinitialize the snapshot if it already exists.
In addition to deactivate_nodes, bf.fork_snapshot can also take deactivate_interfaces as a parameter to simulate interface failures. Combining these functions, Batfish allows us to simulate complicated failure scenarios involving interfaces and nodes, for example: bf.fork_snapshot(BASE_SNAPSHOT_NAME, FAIL_SNAPSHOT_NAME, deactivate_nodes=FAIL_NODES, deactivate_interfaces=FAIL_INTERFACES, overwrite=True)).
To understand network behavior under the simulated failure, we can run any Batfish question on the newly created snapshot. As an example, to ensure that the flows from Paris would still reach PoP even if London failed, we can run the traceroute question on the snapshot in which London has failed, as shown below. (See the Introduction to Forwarding Analysis using Batfish notebook for more forwarding analysis questions).
End of explanation
"""
# Get the answer to the differential reachability question given two snapshots
diff_reachability_answer = bf.q.differentialReachability(
headers=HeaderConstraints(dstIps=pop_prefix), maxTraces=1).answer(
snapshot=FAIL_LONDON_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME)
# Display the results
show(diff_reachability_answer.frame())
"""
Explanation: Great! We have confirmed that Paris can still reach PoP via Asia even when London has failed.
differentialReachability: Checking changes of forwarding behavior for all flows
Above, we saw how Batfish can create new snapshots that simulate failure scenarios and run analysis on them. This capability is useful to test the forwarding behavior under interesting failure scenarios. In some cases, we may also want to verify that certain network failures have no impact to the network, i.e., the forwarding behavior of all flows would not be changed by those failures.
We now show a powerful Batfish question, differentialReachability, which allows us to analyze changes for any flow between two snapshots. This question will report any flow that was successfully delivered in the base snapshot but will not be delivered in the failure snapshot, or the other way around: not delivered in the base snapshot but delivered in the failure snapshot.
Let us revisit the scenario where London fails. To understand if this failure impacts any flow to the PoP in the US, we can run the differential reachability question as below, by scoping the search to flows destined to the PoP(from anywhere) and comparing FAIL_LONDON_SNAPSHOT_NAME with BASE_SNAPSHOT_NAME as the reference. Leaving the headers field unscoped would search across flows to all possible destinations.
End of explanation
"""
# Fix for demonstration purpose
random.seed(0)
max_iterations = 5
# Get all links in the network
links = bf.q.edges().answer(BASE_SNAPSHOT_NAME).frame()
for i in range(max_iterations):
# Get two links at random
failed_link1_index = random.randint(0, len(links) - 1)
failed_link2_index = random.randint(0, len(links) - 1)
# Fork a snapshot with the link failures
FAIL_SNAPSHOT_NAME = "fail_snapshot"
bf.fork_snapshot(
BASE_SNAPSHOT_NAME,
FAIL_SNAPSHOT_NAME,
deactivate_interfaces=[links.loc[failed_link1_index].Interface,
links.loc[failed_link2_index].Interface],
overwrite=True)
# Run a differential reachability question
answer = bf.q.differentialReachability(
headers=HeaderConstraints(dstIps=pop_prefix)
).answer(
snapshot=FAIL_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME
)
# A non-empty returned answer means changed forwarding behavior
# We print the bad failure scenario and exit
if len(answer.frame()) > 0:
show(links.iloc[[failed_link1_index, failed_link2_index]])
break
"""
Explanation: We see from the result that the failures of London would in fact permit a flow that was originally being blocked by the AS1_TO_AS2 ACL on New York. This difference reveals a potential security vulnerability! Luckily, Batfish allows us to catch and fix it before something bad happens in production. Similarly, if there were flows that were carried in BASE_SNAPSHOT_NAME but dropped in FAIL_LONDON_SNAPSHOT_NAME (an availability issue), Batfish would have caught it.
Check out our Introduction to Forwarding Change Validation notebook for more use cases of differential reachability queries.
Chaos Monkey Testing
Chaos Monkey style testing is a common method to to build highly reliable software systems. In it, different components of the system are randomly failed to see what impact it has on the service performance. Such testing is known to be highly effective but is not possible to do in the networking context. Until now.
Batfish can easily enable Chaos Monkey testing for networks. Using the basic functions shown above, we can compose more complicated functions that randomly fail links and identify potential vulnerabilities in the network.
Suppose we wanted our network to be robust to any possible 2-link failures. The example below shows how to perform Chaos Monkey testing to identify 2-link-failures that can cause an outage. Specifically, we will fail a pair of links picked at random and check whether the forwarding behavior would be changed by the failure using the differentialReachability question.
Next, we run Chaos Monkey testing, shown as below.
End of explanation
"""
show(answer.frame())
"""
Explanation: We see that there is a failure scenario to which the network is not robust, that is, the failure will lead to a change in the forwarding behavior of at least some flows. This scenario is the failure of the two links that connect Seattle to Philadelphia and San Francisco. This is unexpected because Seattle has another link that connects it to the rest of the network and should generally be available for traffic.
Let us diagnose this situation to understand the problem. To begin, we first see which flows are impacted.
End of explanation
"""
diff_routes = bf.q.bgpRib(network="2.128.0.0/16").answer(snapshot=FAIL_SNAPSHOT_NAME,
reference_snapshot=BASE_SNAPSHOT_NAME)
diff_routes
"""
Explanation: We see that when the links fail, if we ignore flows that end in Seattle (whose links have failed), a general pattern is that Asia loses connectivity to the US. Given the network topology, this is quite surprising because after those failures we would have expected Asia to be able to reach the US via Europe.
To investigate the root cause further, we ask Batfish to show how the BGP RIBs in the two cases differ. We do so using the bgpRib question, comparing the two snapshots as in the differential reachability question. We focus on the impacted destination prefix 2.128.0.0/16.
End of explanation
"""
# View all defined structres on 'hongkong'
bf.q.definedStructures(nodes="hongkong").answer()
"""
Explanation: We see that routers in Asia (Hongkong, Singapore, and Tokyo) and Seattle do not have any BGP routes to the prefix in the failure snapshot, which they did in the reference snapshot. The missing route in Seattle can be explained via missing routes in Asia since Seattle depended on Asia after losing its two other links.
That Europe still has the routes after the failure alerts us to the possibility of improper filtering of incoming routes in Asia. So, we should check on that. There are many ways to analyze the incoming route filters; we'll use Batfish's definedStructures question to locate the definitions we need to view.
End of explanation
"""
# See the config lines where the route map as1_to_as3 is defined
!cat networks/failure-analysis/configs/hongkong.cfg | head -121 | tail -4
"""
Explanation: We see that the route map as1_to_as3 is defined on lines 119 and 120. Now we can quickly navigate to those lines in the config file, as shown below.
End of explanation
"""
# See the config lines where the access list '102' is defined
!cat networks/failure-analysis/configs/hongkong.cfg | head -118 | tail -5
"""
Explanation: We see that the route map is denying routes that match the access-list '102.' Let's look at the definition of this list, which is on lines 115-117 per the defined structures list above.
End of explanation
"""
|
atlury/deep-opencl | DL0110EN/2.2.2_training_slope_and_bias_v2.ipynb | lgpl-3.0 | # These are the libraries we are going to use in the lab.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
"""
Explanation: <a href="http://cocl.us/pytorch_link_top">
<img src="https://cocl.us/Pytorch_top" width="750" alt="IBM 10TB Storage" />
</a>
<img src="https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Linear regression 1D: Training Two Parameters</h1>
<h2>Table of Contents</h2>
<p>In this lab, you will train a model with PyTorch using the data that we created. The model will have a slope and a bias, and we will review how to make predictions in several different ways using PyTorch.</p>
<ul>
<li><a href="#Makeup_Data">Make Some Data</a></li>
<li><a href="#Model_Cost">Create the Model and Cost Function (Total Loss) </a></li>
<li><a href="#Train">Train the Model </a></li>
</ul>
<p>Estimated Time Needed: <strong>20 min</strong></ul>
<hr>
<h2>Preparation</h2>
We'll need the following libraries:
End of explanation
"""
# The class for plotting the diagrams
class plot_error_surfaces(object):
# Constructor
def __init__(self, w_range, b_range, X, Y, n_samples = 30, go = True):
W = np.linspace(-w_range, w_range, n_samples)
B = np.linspace(-b_range, b_range, n_samples)
w, b = np.meshgrid(W, B)
        Z = np.zeros((n_samples, n_samples))
count1 = 0
self.y = Y.numpy()
self.x = X.numpy()
for w1, b1 in zip(w, b):
count2 = 0
for w2, b2 in zip(w1, b1):
                Z[count1, count2] = np.mean((self.y - (w2 * self.x + b2)) ** 2)
count2 += 1
count1 += 1
self.Z = Z
self.w = w
self.b = b
self.W = []
self.B = []
self.LOSS = []
self.n = 0
        if go:
            plt.figure(figsize = (7.5, 5))
plt.axes(projection='3d').plot_surface(self.w, self.b, self.Z, rstride = 1, cstride = 1,cmap = 'viridis', edgecolor = 'none')
plt.title('Cost/Total Loss Surface')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
plt.figure()
plt.title('Cost/Total Loss Surface Contour')
plt.xlabel('w')
plt.ylabel('b')
plt.contour(self.w, self.b, self.Z)
plt.show()
# Setter
def set_para_loss(self, W, B, loss):
self.n = self.n + 1
self.W.append(W)
self.B.append(B)
self.LOSS.append(loss)
# Plot diagram
def final_plot(self):
ax = plt.axes(projection = '3d')
ax.plot_wireframe(self.w, self.b, self.Z)
ax.scatter(self.W,self.B, self.LOSS, c = 'r', marker = 'x', s = 200, alpha = 1)
plt.figure()
plt.contour(self.w,self.b, self.Z)
plt.scatter(self.W, self.B, c = 'r', marker = 'x')
plt.xlabel('w')
plt.ylabel('b')
plt.show()
# Plot diagram
def plot_ps(self):
plt.subplot(121)
plt.plot(self.x, self.y, 'ro', label="training points")
plt.plot(self.x, self.W[-1] * self.x + self.B[-1], label = "estimated line")
plt.xlabel('x')
plt.ylabel('y')
plt.ylim((-10, 15))
plt.title('Data Space Iteration: ' + str(self.n))
plt.legend()
plt.show()
plt.subplot(122)
plt.contour(self.w, self.b, self.Z)
plt.scatter(self.W, self.B, c = 'r', marker = 'x')
        plt.title('Total Loss Surface Contour Iteration: ' + str(self.n))
plt.xlabel('w')
plt.ylabel('b')
plt.legend()
"""
Explanation: The class <code>plot_error_surfaces</code> is just to help you visualize the data space and the parameter space during training and has nothing to do with PyTorch.
End of explanation
"""
# Import PyTorch library
import torch
"""
Explanation: <!--Empty Space for separating topics-->
<h2 id="Makeup_Data">Make Some Data</h2>
Import PyTorch:
End of explanation
"""
# Create f(X) with a slope of 1 and a bias of -1
X = torch.arange(-3, 3, 0.1).view(-1, 1)
f = 1 * X - 1
"""
Explanation: Start with generating values from -3 to 3 that create a line with a slope of 1 and a bias of -1. This is the line that you need to estimate.
End of explanation
"""
# Add noise
Y = f + 0.1 * torch.randn(X.size())
"""
Explanation: Now, add some noise to the data:
End of explanation
"""
# Plot out the line and the points with noise
plt.plot(X.numpy(), Y.numpy(), 'rx', label = 'y')
plt.plot(X.numpy(), f.numpy(), label = 'f')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
"""
Explanation: Plot the line and <code>Y</code> with noise:
End of explanation
"""
# Define the forward function
def forward(x):
return w * x + b
"""
Explanation: <h2 id="Model_Cost">Create the Model and Cost Function (Total Loss)</h2>
Define the <code>forward</code> function:
End of explanation
"""
# Define the MSE Loss function
def criterion(yhat,y):
return torch.mean((yhat-y)**2)
"""
Explanation: Define the cost or criterion function (MSE):
End of explanation
"""
# Create plot_error_surfaces for viewing the data
get_surface = plot_error_surfaces(15, 15, X, Y, 30)
"""
Explanation: Create a <code>plot_error_surfaces</code> object to visualize the data space and the parameter space during training:
End of explanation
"""
# Define the parameters w, b for y = wx + b
w = torch.tensor(-15.0, requires_grad = True)
b = torch.tensor(-10.0, requires_grad = True)
"""
Explanation: <!--Empty Space for separating topics-->
<h2 id="Train">Train the Model</h2>
Create model parameters <code>w</code>, <code>b</code> by setting the argument <code>requires_grad</code> to True, because we must learn them from the data.
End of explanation
"""
# Define learning rate and create an empty list for containing the loss for each iteration.
lr = 0.1
LOSS = []
"""
Explanation: Set the learning rate to 0.1 and create an empty list <code>LOSS</code> for storing the loss for each iteration.
End of explanation
"""
# The function for training the model
def train_model(iter):
# Loop
for epoch in range(iter):
# make a prediction
Yhat = forward(X)
# calculate the loss
loss = criterion(Yhat, Y)
# Section for plotting
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist())
if epoch % 3 == 0:
get_surface.plot_ps()
        # store the detached loss value in the list LOSS
        LOSS.append(loss.item())
# backward pass: compute gradient of the loss with respect to all the learnable parameters
loss.backward()
# update parameters slope and bias
w.data = w.data - lr * w.grad.data
b.data = b.data - lr * b.grad.data
# zero the gradients before running the backward pass
w.grad.data.zero_()
b.grad.data.zero_()
"""
Explanation: Define the <code>train_model</code> function to train the model.
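For intuition, the same update rule can be written out by hand with analytic gradients in plain NumPy. This is only a sketch of the math that autograd performs for us (the names and the noiseless data are illustrative), not the PyTorch training path used in this lab:

```python
import numpy as np

def numpy_gd(x, y, w=-15.0, b=-10.0, lr=0.1, iters=15):
    """Plain-NumPy gradient descent for MSE on y ~ w*x + b."""
    for _ in range(iters):
        yhat = w * x + b
        # Analytic gradients of mean((yhat - y)^2) with respect to w and b.
        grad_w = 2 * np.mean((yhat - y) * x)
        grad_b = 2 * np.mean(yhat - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

x = np.arange(-3, 3, 0.1)
y = 1 * x - 1  # noiseless target: slope 1, bias -1
w, b = numpy_gd(x, y, iters=200)
print(w, b)  # should approach 1 and -1
```

The two `w.data`/`b.data` updates in `train_model` above are exactly these `grad_w`/`grad_b` steps, with autograd supplying the gradients.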
End of explanation
"""
# Train the model with 15 iterations
train_model(15)
"""
Explanation: Run 15 iterations of gradient descent. (<b>Known bug:</b> the data-space plot is one iteration ahead of the parameter-space plot.)
End of explanation
"""
# Plot out the Loss Result
get_surface.final_plot()
plt.plot(LOSS)
plt.tight_layout()
plt.xlabel("Epoch/Iterations")
plt.ylabel("Cost")
"""
Explanation: Plot total loss/cost surface with loss values for different parameters in red:
End of explanation
"""
# Practice: train and plot the result with lr = 0.2 and the following parameters
w = torch.tensor(-15.0, requires_grad = True)
b = torch.tensor(-10.0, requires_grad = True)
lr = 0.2
LOSS2 = []
"""
Explanation: <!--Empty Space for separating topics-->
<h3>Practice</h3>
Experiment using a learning rate of 0.2 with the following parameters. Run 15 iterations.
End of explanation
"""
# Practice: Plot the LOSS and LOSS2 in order to compare the Total Loss
# Type your code here
"""
Explanation: Double-click <b>here</b> for the solution.
<!--
def my_train_model(iter):
for epoch in range(iter):
Yhat = forward(X)
loss = criterion(Yhat, Y)
get_surface.set_para_loss(w.data.tolist(), b.data.tolist(), loss.tolist())
if epoch % 3 == 0:
get_surface.plot_ps()
        LOSS2.append(loss.item())
loss.backward()
w.data = w.data - lr * w.grad.data
b.data = b.data - lr * b.grad.data
w.grad.data.zero_()
b.grad.data.zero_()
my_train_model(15)
-->
Plot the <code>LOSS</code> and <code>LOSS2</code>
End of explanation
"""
|
passalis/sef | tutorials/Defining new methods.ipynb | mit | def sim_target_supervised(target_data, target_labels, sigma, idx, target_params):
cur_labels = target_labels[idx]
N = cur_labels.shape[0]
N_labels = len(np.unique(cur_labels))
Gt, mask = np.zeros((N, N)), np.zeros((N, N))
for i in range(N):
for j in range(N):
if cur_labels[i] == cur_labels[j]:
Gt[i, j] = 0.8
mask[i, j] = 1
else:
Gt[i, j] = 0.1
mask[i, j] = 0.8 / (N_labels - 1)
return np.float32(Gt), np.float32(mask)
"""
Explanation: Defining new similarity targets
In this tutorial we demonstrate how we can define custom similarity target functions. This allows for easily deriving new dimensionality reduction techniques.
A similarity generator function in PySEF adheres to the following pattern:
python
def custom_similarity_function(target_data, target_labels, sigma, idx,
target_params):
Gt = np.zeros((len(idx), len(idx)))
Gt_mask = np.zeros((len(idx), len(idx)))
# Calculate the similarity target here
return np.float32(Gt), np.float32(Gt_mask)
Any similarity target function receives 5 arguments: the target_data (usually used when we want to mimic another technique), the target_labels (usually used when supervised information is available), the sigma (scaling factor) that must be used in the similarity calculations, the indices of the current batch, and the target_params (used to pass additional arguments to the similarity function). The similarity function must return the target similarity only for the target_data/labels pointed to by the indices (idx). In each iteration a different set of indices is passed to the function (for calculating the target similarity matrix of a different batch). Note that the target_data, target_labels, sigma, and target_params are passed through the fit function. A similarity function might not need all of these data (None can be safely passed for the arguments that are not used).
So, let's define a custom similarity target function that uses supervised information (labels) and sets the target similarity to 0.8 for samples that belong to the same class and to 0.1 otherwise.
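The same pattern also supports unsupervised targets. For instance, here is a minimal Gaussian-kernel target that uses only target_data and sigma (an illustrative sketch, not part of PySEF):

```python
import numpy as np

def sim_target_gaussian(target_data, target_labels, sigma, idx, target_params):
    """Target similarity = Gaussian kernel of pairwise distances within the batch."""
    cur = target_data[idx]                            # work only with the current batch
    sq = np.sum(cur ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * cur @ cur.T  # squared Euclidean distances
    Gt = np.exp(-d2 / (sigma ** 2))
    mask = np.ones_like(Gt)                           # weight all pairs equally
    return np.float32(Gt), np.float32(mask)
```

Note that it follows the required signature exactly and simply ignores the unused arguments.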
End of explanation
"""
import numpy as np
from sef_dr.datasets import dataset_loader
(x_train, y_train, x_test, y_test) = dataset_loader(dataset_path='../data', dataset='mnist')
# Sample three classes
idx = np.logical_or(y_train == 0, y_train == 1)
idx = np.logical_or(idx, y_train == 2)
x_train, y_train = x_train[idx], y_train[idx]
x_train, y_train = x_train[:100, :], y_train[:100]
"""
Explanation: Note that we also appropriately set the weighting mask to account for the imbalance between the intra-class and inter-class samples. It is important to remember to work only with the current batch (using the idx) and not with the whole training set (that is always passed to target_data/target_labels). Also, note that the target_data, sigma and target_params arguments are not used.
Load three classes of the MNIST dataset to evaluate the function that we just defined.
End of explanation
"""
import sef_dr
proj = sef_dr.KernelSEF(x_train, x_train.shape[0], 2, sigma=3)
proj.fit(x_train, target_labels=y_train, target=sim_target_supervised, epochs=500, learning_rate=0.0001, regularizer_weight=0, verbose=0)
train_data = proj.transform(x_train)
"""
Explanation: Now, learn a kernel projection to achieve the target similarity defined by sim_target_supervised:
End of explanation
"""
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
%matplotlib inline
fig = plt.figure(figsize=(5, 5))
plt.scatter(train_data[:, 0], train_data[:, 1], c=y_train,
cmap=matplotlib.colors.ListedColormap(['red', 'green', 'blue', ]))
plt.legend(handles=[mpatches.Patch(color='red', label='Digit 0'), mpatches.Patch(color='green', label='Digit 1'),
mpatches.Patch(color='blue', label='Digit 2')], loc='upper left')
plt.show()
"""
Explanation: Note that we pass the sim_target_supervised function in the target argument and we only define the target_labels argument in the fit() function.
Now we can plot the learned projection:
End of explanation
"""
|
h-mayorquin/time_series_basic | presentations/2015-11-11(Analyzing text with Nexa, Part 1).ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import h5py
import IPython
import sys
sys.path.append('../')
from inputs.sensors import Sensor, PerceptualSpace
from inputs.lag_structure import LagStructure
from nexa.nexa import Nexa

# First we have to load the signal
"""
Explanation: Analyzing text with Nexa
This is an analysis of the text from the Financial Times with the Nexa framework. Here we apply the Nexa machinery to it.
End of explanation
"""
signal_location = '../data/wall_street_data_small.hdf5'
# Access the data and load it into signal
with h5py.File(signal_location, 'r') as f:
dset = f['signal']
signals = np.empty(dset.shape, np.float)
dset.read_direct(signals)
"""
Explanation: Extract the Data
Now we extract the signal
End of explanation
"""
# Reshape the data and limit it
Ndata = 10000
signals = signals.reshape(signals.shape[0], signals.shape[1] * signals.shape[2])
# signals = signals[:Ndata, ...].astype('float')
signals += np.random.uniform(size=signals.shape)
print('zeros', np.sum(signals[0] == 0))
print('signals shape', signals.shape)
"""
Explanation: Reshape the data for our purposes and take a piece of it
End of explanation
"""
dt = 1.0
lag_times = np.arange(0, 10, 1)
window_size = signals.shape[0] - (lag_times[-1] + 1)
weights = None
lag_structure = LagStructure(lag_times=lag_times, weights=weights, window_size=window_size)
sensors = [Sensor(signal, dt, lag_structure) for signal in signals.T]
perceptual_space = PerceptualSpace(sensors, lag_first=True)
"""
Explanation: Perceptual Space
End of explanation
"""
# Get the nexa machinery right
Nspatial_clusters = 3
Ntime_clusters = 4
Nembedding = 2
nexa_object = Nexa(perceptual_space, Nspatial_clusters, Ntime_clusters, Nembedding)
# Now we calculate the distance matrix
nexa_object.calculate_distance_matrix()
nexa_object.calculate_embedding()
# Now we calculate the clustering
nexa_object.calculate_spatial_clustering()
# We calculate the cluster to index
nexa_object.calculate_cluster_to_indexes()
# Data clusters
nexa_object.calculate_time_clusters()
"""
Explanation: Nexa Machinery
End of explanation
"""
|
google/data-pills | pills/Google Ads/[DATA_PILL]_[Google_Ads]_Customer_Market_Intelligence_(CMI).ipynb | apache-2.0 | audience1_name = "" #@param {type:"string"}
audience1_file_location = "" #@param {type:"string"}
audience1_size = 0#@param {type:"integer"}
audience2_name = "" #@param {type:"string"}
audience2_file_location = "" #@param {type:"string"}
audience2_size = 0 #@param {type:"integer"}
audience3_name = "" #@param {type:"string"}
audience3_file_location = "" #@param {type:"string"}
audience3_size = 0#@param {type:"integer"}
isUsingGDrive = False #@param {type:"boolean"}
"""
Explanation: PLEASE MAKE A COPY BEFORE CHANGING
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
<b>Important</b>
This content is intended for educational and informational purposes only.
Instructions
1. Upload your CRM audience lists to Google Ads through Customer Match
2. In Google Ads Audience Manager, click the Audiences and then go into "Audience Insights"
3. Download the reports from Audience Insights and place them in a locally acessible folder
4. Configure the locations below then run this colab.
Configuration
Form fields
audienceX_name: name of the audience segment
audienceX_file_location: location of the CSV file containing Audience Insights from Customer Match
audienceX_size: size of the audience as shown in the List Size fields inside Customer Match
isUsingGDrive: check this box if the location of the CSV files are inside a Google Drive. Make sure to use the "/gdrive/" path for file locations.
End of explanation
"""
import IPython
import plotly
import plotly.offline as py
import plotly.graph_objs as go
import math
import json
import numpy as np
import pandas as pd
import re
from scipy import spatial
from scipy.spatial import distance
from sklearn.cluster import KMeans
from google.colab import drive
from google.colab import auth
from sklearn import preprocessing
from sklearn.preprocessing import scale
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.preprocessing import MinMaxScaler
from apiclient.discovery import build
from oauth2client.service_account import ServiceAccountCredentials
from IPython.display import display
import matplotlib as mpl
py.init_notebook_mode(connected=False)
%matplotlib inline
py.init_notebook_mode(connected=False)
"""
Explanation: Import Libs and configure Plotly
End of explanation
"""
if (isUsingGDrive):
drive.mount('/gdrive')
df_1 = pd.read_csv(audience1_file_location,usecols=['Dimension','Audience','List distribution'])
df_1['List distribution'] = round(df_1['List distribution']*audience1_size)
df_2 = pd.read_csv(audience2_file_location,usecols=['Dimension','Audience','List distribution'])
df_2['List distribution'] = round(df_2['List distribution']*audience2_size)
if ((audience3_name != "") & (audience3_file_location != "") & (audience3_size > 0)):
audience3_enabled = True
df_3 = pd.read_csv(audience3_file_location,usecols=['Dimension','Audience','List distribution'])
df_3['List distribution'] = round(df_3['List distribution']*audience3_size)
else:
audience3_enabled = False
"""
Explanation: Mount Drive and read the Customer Match Insights CSVs
End of explanation
"""
def plot3d(df, item_name_col, value_name_cols):
#add additional column if only 2 audiences presented
if len(value_name_cols) == 2:
df['no_audience'] = 0
value_name_cols.append('no_audience')
py.init_notebook_mode(connected=False)
trace_points = go.Scatter3d(
x=df[value_name_cols[0]],
y=df[value_name_cols[1]],
z=df[value_name_cols[2]],
#z=df[value_name_cols[2]] if len(value_name_cols) > 2 else 0,
text=df[item_name_col],
mode='markers',
marker=dict(
size=12,
line=dict(
color='rgb(0, 0, 0, 1)',
width=0.5
),
color=df.apply(lambda x: "rgba(" + str(int(x[value_name_cols[0]]*255))
+ ',' + str(int(x[value_name_cols[1]]*255))
+ ',' + str(int(x[value_name_cols[2]]*255)) + ',1)', axis=1),
opacity=1
)
)
trace_c1 = go.Scatter3d(
x=[1],
y=[0],
z=[0],
text=value_name_cols[0],
mode='text+markers',
marker=dict(
size=120,
line=dict(
color='rgb(255, 0, 0, 0.5)',
width=3
),
color='rgb(255, 0, 0, 0.5)',#'rgba(217, 217, 217, 0.14)
opacity=.5,
)
)
trace_c2 = go.Scatter3d(
x=[0],
y=[1],
z=[0],
text=value_name_cols[1],
mode='text+markers',
marker=dict(
size=120,
line=dict(
color='rgb(0, 255, 0, 0.5)',
width=3
),
color='rgb(0, 255, 0, 0.5)',#'rgba(217, 217, 217, 0.14)
opacity=.5,
)
)
trace_c3 = go.Scatter3d(
x=[0],
y=[0],
z=[1],
text=value_name_cols[2],
mode='text+markers',
marker=dict(
size=120,
line=dict(
color='rgb(0, 0, 255, 0.5)',
width=3
),
color='rgb(0, 0, 255, 0.5)',#'rgba(217, 217, 217, 0.14)
opacity=.5,
)
)
data = [trace_points, trace_c1,trace_c2,trace_c3]
layout = go.Layout(
margin=dict(
l=0,
r=0,
b=0,
t=0
)
)
fig = go.Figure(data=data, layout=layout)
#py.iplot(fig, filename='simple-3d-scatter')
py.iplot(data)
# Plot and embed in ipython notebook!
#py.iplot(data, filename='basic-scatter')
def configure_plotly_browser_state():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',
},
});
</script>
'''))
"""
Explanation: Define Plot Function
End of explanation
"""
def scalarToSigmod(scalar):#0-1 input
x = (scalar-.5)*8
return 1 / (1 + math.exp(-x))
def scalarToTanh(scalar):
x = (scalar-.5)*6
return (math.tanh(x)+1)/2
def calc_tfidf(df, label_col_name, transformation='tanh'):
transformer = TfidfTransformer(smooth_idf=True, norm='l1', use_idf=False)
X = df.copy()
y = X[label_col_name]
X = X.drop([label_col_name], axis=1)
tfidf = transformer.fit_transform(X)
#create pd with results
results = pd.DataFrame.from_records(tfidf.toarray() , columns=list(X.columns.values))
#transpose
results_transposed = results.T.reset_index()
results_transposed.columns = ["COMPARED_USERLIST_FULL_NAME"] + list(y)
results_transposed
#scale to 0-1
scaler = MinMaxScaler()
results_transposed[list(y)] = scaler.fit_transform(results_transposed[list(y)])
for col in list(y):
if transformation == 'sig':
results_transposed[col] = results_transposed.apply(lambda x: scalarToSigmod(x[col]), axis=1)
elif transformation == 'tanh':
results_transposed[col] = results_transposed.apply(lambda x: scalarToTanh(x[col]), axis=1)
return results_transposed
"""
Explanation: Define TF-IDF Function
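With `use_idf=False` and `norm='l1'`, as used in `calc_tfidf` above, the sklearn transformer reduces to row-wise L1 normalization of the counts. A small NumPy sketch of that reduction (illustrative, independent of the pipeline):

```python
import numpy as np

def l1_term_frequencies(counts):
    """Row-normalize a count matrix so each row sums to 1 (TF with L1 norm, no IDF)."""
    counts = np.asarray(counts, dtype=float)
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0   # avoid division by zero for empty rows
    return counts / row_sums

X = np.array([[2, 1, 1],
              [0, 3, 1]])
print(l1_term_frequencies(X))
# each row now sums to 1
```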
End of explanation
"""
def process_report(report):
data=[]
columnHeader = report.get('columnHeader', {})
dimensionHeaders = columnHeader.get('dimensions', [])
metricHeaders = columnHeader.get('metricHeader', {}).get('metricHeaderEntries', [])
metricHeaders = [header['name'] for header in metricHeaders]
df_headers = dimensionHeaders + metricHeaders
for row in report['data']['rows']:
d = row['dimensions']
m = row['metrics'][0]['values']
data.append(d+m)
df = pd.DataFrame(data, columns=df_headers)
pivot = pd.pivot_table(df,
index=[df.columns[0]],
columns=['ga:segment'],
aggfunc='sum').T
df = pd.DataFrame(pivot.fillna(0).to_records())
return df[df.columns[1:]]
"""
Explanation: Define GA API reporting functions
End of explanation
"""
df_1['Segmento'] = audience1_name
df_2['Segmento'] = audience2_name
if (audience3_enabled):
df_3['Segmento'] = audience3_name
df_list = [df_1,df_2,df_3]
else:
df_list = [df_1,df_2]
df = pd.concat(df_list)
df = df.loc[df['Dimension'] != 'City']
df = df.loc[df['Dimension'] != 'Country']
df['Audience'] = df['Dimension'] + ' | ' + df['Audience']
df.drop(['Dimension'],axis=1,inplace=True)
df_pivot = pd.pivot_table(df, index=['Segmento'], columns=['Audience'],aggfunc='sum').fillna(0)
df_pivot.columns = df_pivot.columns.droplevel(level=0)
df_pivot.reset_index(level=[0],inplace=True)
cmi_df = calc_tfidf(df_pivot,'Segmento')
cmi_df.head()
"""
Explanation: Run TF-IDF
End of explanation
"""
def plot_3d(cmi_df):
configure_plotly_browser_state()
y = list(cmi_df.drop(['COMPARED_USERLIST_FULL_NAME'],axis=1).columns)
plot3d(cmi_df,'COMPARED_USERLIST_FULL_NAME',list(y))
def print_ordered_list(cmi_df):
vecs = [[1,0,0], [0,1,0], [0,0,1]]
segments = list(cmi_df.columns[1:])
cmi_df['vector'] = cmi_df[[*segments]].values.tolist()
for i in range(len(segments)):
data = []
col = 'distance_{}'.format(segments[i])
for row in cmi_df.iterrows():
euc = distance.euclidean(row[1]['vector'], vecs[i])
data.append(euc)
cmi_df[col] = data
for col in cmi_df.columns[-3:]:
display(cmi_df[['COMPARED_USERLIST_FULL_NAME', col]].sort_values(by=col, ascending=True))
plot_3d(cmi_df)
print_ordered_list(cmi_df)
"""
Explanation: Plot the results
End of explanation
"""
|
MTG/essentia | src/examples/python/tutorial_tonal_chords.ipynb | agpl-3.0 | import essentia.streaming as ess
import essentia
audio_file = '../../../test/audio/recorded/mozart_c_major_30sec.wav'
# Initialize algorithms we will use.
loader = ess.MonoLoader(filename=audio_file)
framecutter = ess.FrameCutter(frameSize=4096, hopSize=2048, silentFrames='noise')
windowing = ess.Windowing(type='blackmanharris62')
spectrum = ess.Spectrum()
spectralpeaks = ess.SpectralPeaks(orderBy='magnitude',
magnitudeThreshold=0.00001,
minFrequency=20,
maxFrequency=3500,
maxPeaks=60)
# Use default HPCP parameters for plots.
# However we will need higher resolution and custom parameters for better Key estimation.
hpcp = ess.HPCP()
# Use pool to store data.
pool = essentia.Pool()
# Connect streaming algorithms.
loader.audio >> framecutter.signal
framecutter.frame >> windowing.frame >> spectrum.frame
spectrum.spectrum >> spectralpeaks.spectrum
spectralpeaks.magnitudes >> hpcp.magnitudes
spectralpeaks.frequencies >> hpcp.frequencies
hpcp.hpcp >> (pool, 'tonal.hpcp')
# Run streaming network.
essentia.run(loader)
"""
Explanation: Tonality analysis: chords estimation
Essentia provides two basic algorithms for chord estimation given a harmonic representation of input audio in a form of an HPCPgram (HPCP chromagram; see the HPCP tutorial for how to compute it):
ChordsDetection is a naive algorithm estimating chords in a sliding window over an input HPCP chromagram.
ChordsDetectionBeats is a similar algorithm, but it estimates chords on segments between consecutive beats given their time positions as an additional input.
In addition, ChordsDescriptors describes the estimated chord progression by means of key, scale, histogram, and rate of change.
The analysis pipeline to compute HPCPs in this example is identical to the one used the HPCP tutorial to compute a 12-bin HPCPgram:
End of explanation
"""
# Plots configuration.
import matplotlib.pyplot as plt
from pylab import plot, show, figure, imshow
plt.rcParams['figure.figsize'] = (15, 6)
# Plot HPCP.
imshow(pool['tonal.hpcp'].T, aspect='auto', origin='lower', interpolation='none')
plt.title("HPCPs in frames (the 0-th HPCP coefficient corresponds to A)")
show()
"""
Explanation: Let's plot the resulting HPCP:
End of explanation
"""
from essentia.standard import ChordsDetection
# Using a 2 seconds window over HPCP matrix to estimate chords
chords, strength = ChordsDetection(hopSize=2048, windowSize=2)(pool['tonal.hpcp'])
print(chords)
"""
Explanation: Now we can run a naive estimation of chords with 2-second sliding window over the computed HPCPgram:
End of explanation
"""
|
spulido99/NetworksAnalysis | santiagoangee/Ejercicios 1.2 Weak Ties & Random Networks.ipynb | mit | edges = set([(1,2), (2,3), (2,4), (2,5), (4,5), (4,6), (5,6), (4,7)])
from IPython.core.debugger import Tracer
import collections
import numpy as np
""" Without NetworkX """
edges = set([(1,2), (2,3), (2,4), (2,5), (4,5), (4,6), (5,6), (4,7)])
def edges_to_graph(edges):
edges = list(edges)
graph = {}
for i in range(0,len(edges)):
if graph.get(edges[i][0], None):
graph[edges[i][0]].add(edges[i][1])
else:
if len(edges[i]) == 2:
graph[edges[i][0]] = set([edges[i][1]])
else:
graph[edges[i][0]] = set([])
if len(edges[i]) == 2:
if graph.get(edges[i][1], None):
graph[edges[i][1]].add(edges[i][0])
else:
graph[edges[i][1]] = set([edges[i][0]])
return graph
G = edges_to_graph(edges)
def graph_to_tuples(graph):
output_graph = []
for node, neighbours in graph.items():
output_graph.append((node,list(neighbours)))
return output_graph
def element_neighbours(tuple_graph, element):
for index, item in enumerate(tuple_graph):
if element == item[0]:
return item[1]
    raise ValueError('Error: the requested element was not found')
def clustering_coefficient(graph):
tuple_graph = graph_to_tuples(graph)
L = np.zeros((len(tuple_graph),), dtype=np.int)
for i in range(0, len(tuple_graph)):
element_at_i = tuple_graph[i][0]
for j in range(0, len(tuple_graph[i][1])-1):
current = tuple_graph[i][1][j]
for k in range(j+1, len(tuple_graph[i][1])):
comparison = tuple_graph[i][1][k]
# Search if there is a link
if comparison in element_neighbours(tuple_graph, current):
L[i] += 1
C = {}
for i in range(len(tuple_graph)):
k = len(tuple_graph[i][1])
if k >= 2:
C[tuple_graph[i][0]] = float(2*L[i])/(k*(k-1))
else:
C[tuple_graph[i][0]] = 0.0
return C
def average_clustering(graph):
C = clustering_coefficient(graph)
return float(sum(C.values()))/len(C)
print(clustering_coefficient(G))
print(average_clustering(G))
import networkx as nx
G = nx.Graph()
G.add_edges_from(edges)
print(nx.clustering(G))
print(nx.average_clustering(G))
"""
Explanation: Weak Ties & Random Networks Exercises
Basic network exercises
Clustering Coefficient Exercise
Calculate the clustering coefficient for each node and for the whole (undirected) network
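As a worked check of the formula C_i = 2 L_i / (k_i (k_i - 1)): node 4 has neighbors {2, 5, 6, 7}, so k = 4; among those neighbors only the pairs (2, 5) and (5, 6) are linked, so L = 2 and C_4 = 2*2 / (4*3) = 1/3 ≈ 0.33.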
End of explanation
"""
# To create a weighted, undirected graph, the edges must be provided in the form: (node1, node2, weight)
edges = [('a', 'b', 0.3), ('a', 'c', 1.0), ('a', 'd', 0.9), ('a', 'e', 1.0), ('a', 'f', 0.4),
('c', 'f', 0.2), ('b', 'h', 0.2), ('f', 'j', 0.8), ('f', 'g', 0.9), ('j', 'g', 0.6),
('g', 'k', 0.4), ('g', 'h', 0.2), ('k', 'h', 1.0)]
def edges_to_weighted_graph(edges):
edges = list(edges)
graph = {}
for i in range(0,len(edges)):
if graph.get(edges[i][0], None):
graph[edges[i][0]].add((edges[i][1], edges[i][2]))
else:
if len(edges[i]) == 3:
graph[edges[i][0]] = set([(edges[i][1],edges[i][2])])
else:
graph[edges[i][0]] = set([])
if len(edges[i]) == 3:
if graph.get(edges[i][1], None):
graph[edges[i][1]].add((edges[i][0],edges[i][2]))
else:
graph[edges[i][1]] = set([(edges[i][0],edges[i][2])])
return graph
graph = edges_to_weighted_graph(edges)
print (graph)
""" With NetworkX """
FG = nx.Graph()
FG.add_weighted_edges_from(edges)
print (str(FG))
"""
Explanation: Weighted Networks Exercise
Create an undirected network with the following weights.
(a, b) = 0.3
(a, c) = 1.0
(a, d) = 0.9
(a, e) = 1.0
(a, f) = 0.4
(c, f) = 0.2
(b, h) = 0.2
(f, j) = 0.8
(f, g) = 0.9
(j, g) = 0.6
(g, k) = 0.4
(g, h) = 0.2
(k, h) = 1.0
End of explanation
"""
def adjacency_matrix(graph):
keys = list(graph.keys())
keys.sort()
adj_matrix = np.zeros((len(keys),len(keys)))
for node, edges in graph.items():
for edge in edges:
adj_matrix[keys.index(node)][keys.index(edge[0])] = edge[1]
return (adj_matrix, keys)
print (adjacency_matrix(graph))
""" With NetworkX """
A = nx.adjacency_matrix(FG)
print (A)
"""
Explanation: Print the adjacency matrix
End of explanation
"""
def weighted_element_neighbours(tuple_graph, element):
for index, item in enumerate(tuple_graph):
if element[0] == item[0]:
neighbours = [i[0] for i in item[1]]
return neighbours
    raise ValueError('Error: the requested element was not found')
def weighted_graph_to_tuples(graph):
output_graph = []
for node, neighbours in graph.items():
output_graph.append((node,list(neighbours)))
return output_graph
def triadic_closure(graph):
tuple_graph = weighted_graph_to_tuples(graph)
L = np.zeros((len(tuple_graph),), dtype=np.int)
for i in range(0, len(tuple_graph)):
element_at_i = tuple_graph[i][0]
for j in range(0, len(tuple_graph[i][1])-1):
current = tuple_graph[i][1][j]
weight_current = current[1]
if weight_current >= 0.5:
for k in range(j+1, len(tuple_graph[i][1])):
comparison = tuple_graph[i][1][k]
weight_comparison = comparison[1]
if weight_comparison >= 0.5:
# Search if there is a link
if not comparison[0] in weighted_element_neighbours(tuple_graph, current):
return False
return True
print(triadic_closure(graph))
edges2 = [('a','b',0.1),('a','c',0.5),('a','d',0.9),('a','e',0.6),('c','d',0.1),('c','e',0.4),('d','e',0.9)]
graph2 = edges_to_weighted_graph(edges2)
print(triadic_closure(graph2))
""" With NetworkX """
"""
Explanation: Weak & Strong Ties Exercise
Using the same network as before, assume that a weak link is one with weight below 0.5, and write code that checks whether the "strong triadic closure" property holds.
End of explanation
"""
import copy
""" The following code is thought for unweighted graphs """
edges3 = [(1,2),(1,3),(1,5),(5,6),(2,6),(2,1),(2,4)]
edges4 = [('a','b'),('a','c'),('a','d'),('a','e'),('a','f'),
('b','h'),('c','d'),('c','e'),('c','f'),('d','e'),
('f','j'),('f','g'),('j','g'),('g','k'),('g','h'),
('k','h')]
""" This function was taken from Python Software Foundation.
Python Patterns - Implementing Graphs. https://www.python.org/doc/essays/graphs/
(Visited in march 2017) """
def find_shortest_path(graph, start, end, path=[]):
path = path + [start]
if start == end:
return path
if not start in graph:
return None
shortest = None
for next in graph[start]:
if next not in path:
newpath = find_shortest_path(graph, next, end, path)
if newpath:
if not shortest or len(newpath) < len(shortest):
shortest = newpath
return shortest
# Input: an undirected graph in the form of a dict, plus start and end nodes.
# Returns a tuple containing two values:
# (True, span) if there is a local bridge (span > 2) between the two nodes
# (True, None) if there is a (global) bridge between the two nodes
# (False, None) otherwise
def bridge(graph, start, end):
if not end in graph[start]:
return (False, None)
new_graph = copy.deepcopy(graph)
new_graph[start] = graph[start] - {end}
new_graph[end] = graph[end] - {start}
span_path = find_shortest_path(new_graph, start, end)
if not span_path:
# Global bridge
return (True, None)
path_length = len(span_path) - 1
if path_length > 2:
return (True, path_length)
elif path_length == 2:
return (False, path_length)
elif path_length == 1:
raise MultiGraphNotAllowedError('Error: Multigraphs are not allowed')
else:
raise ReflexiveRelationsNotAllowedError('Error: Reflexive relations are not allowed')
graph3 = edges_to_graph(edges3)
# Return the local bridges of the graph and the span of each local bridge,
# as a list of tuples in the form (start, end, {'span': span})
def local_bridges(graph):
nodes = list(graph.keys())
result = []
for i in range(0, len(nodes)-1):
node1 = nodes[i]
for j in range(i+1, len(nodes)):
node2 = nodes[j]
brd = bridge(graph, nodes[i], nodes[j])
            if brd[0] and brd[1] is not None:
result.append((nodes[i],nodes[j],{'span':brd[1]}))
return result
brds = local_bridges(graph3)
print(brds)
graph4 = edges_to_graph(edges4)
print(local_bridges(graph4))
def distance_matrix(graph):
keys = list(graph.keys())
keys.sort()
d_matrix = np.zeros((len(keys),len(keys)))
for i in range(0, len(keys)):
for j in range(0, len(keys)):
start = keys[i]
end = keys[j]
path = find_shortest_path(graph, start, end)
            # Disconnected pairs have no path; mark them with an infinite distance
            d_matrix[i][j] = len(path) - 1 if path else np.inf
return (d_matrix, keys)
""" With NetworkX """
"""
Explanation: Change the weight of one of the previous links so that the property no longer holds, verify it, and explain why.
Write code that detects local bridges and computes the span of each local bridge
End of explanation
"""
import random
import seaborn as sns
%matplotlib inline
N = 12
p = float(1)/6
def random_network_links(N, p):
edges = []
for i in range(0, N-1):
for j in range(i+1, N):
rand = random.random()
if rand <= p:
edges.append((i+1,j+1))
return edges
def random_network_links2(N, p):
edges = []
adj_matrix = np.zeros((N,N), dtype=int)
for i in range(0, N-1):
for j in range(i+1, N):
rand = random.random()
if rand <= p:
edges.append((i+1,j+1))
adj_matrix[i][j] = 1
adj_matrix[j][i] = 1
for i in range(0, N):
if sum(adj_matrix[i]) == 0:
edges.append((i+1,))
return edges
# Returns a number of random networks in the form of a list of edges
def random_networks(number_of_networks, N, p):
networks = []
for i in range(0, number_of_networks):
networks.append(random_network_links2(N,p))
return networks
def len_edges(edges_graph):
result = 0
for edge in edges_graph:
if len(edge) == 2:
result += 1
return result
networks1 = random_networks(1000,N,p)
len_edges1 = [len_edges(i) for i in networks1]
ax = sns.distplot(len_edges1)
""" With NetworkX """
def random_networks_nx(number_of_networks, N, p):
networks = []
for i in range(0, number_of_networks):
G_ran = nx.gnp_random_graph(N,p)
networks.append(G_ran)
return networks
networks2 = random_networks_nx(1000,N,p)
len_edges2 = [len(G.edges()) for G in networks2]
sns.distplot(len_edges2)
"""
Explanation: Exercise: Random Networks
Generate 1000 random networks with N = 12, p = 1/6 and plot the distribution of the number of links
End of explanation
"""
%matplotlib inline
# Transform the list of lists of edges to a list of dicts, this is done to
# calculate the average degree distribution in the next methods
networks1_graph = [edges_to_graph(edges) for edges in networks1]
def degrees(graph):
degrees = {}
for node, links in graph.items():
degrees[node] = len(links)
return degrees
def avg_degree(graph):
dgrs = degrees(graph)
return float(sum(dgrs.values()))/len(dgrs)
avg_degrees1 = [avg_degree(network) for network in networks1_graph]
ax = sns.distplot(avg_degrees1)
""" With NetworkX """
def avg_degree_nx(graph):
    # dict() handles both the old dict API and the NetworkX >= 2 DegreeView
    graph_degrees = dict(graph.degree())
    return float(sum(graph_degrees.values()))/len(graph_degrees)
avg_degrees2 = [avg_degree_nx(network) for network in networks2]
sns.distplot(avg_degrees2)
"""
Explanation: Plot the distribution of the average degree of each of the networks generated in the previous exercise
End of explanation
"""
%matplotlib inline
networks100_1 = random_networks(1000, 100, p)
networks100_2 = random_networks_nx(1000,100,p)
len_edges100_1 = [len_edges(i) for i in networks100_1]
ax = sns.distplot(len_edges100_1)
len_edges100_2 = [len(G.edges()) for G in networks100_2]
sns.distplot(len_edges100_2)
networks100_1_graph = [edges_to_graph(edges) for edges in networks100_1]
avg_degrees100_1 = [avg_degree(network) for network in networks100_1_graph]
avg_degrees100_2 = [avg_degree_nx(network) for network in networks100_2]
ax = sns.distplot(avg_degrees100_1)
sns.distplot(avg_degrees100_2)
"""
Explanation: Do the same for networks with 100 nodes
End of explanation
"""
""" The following code snippet was taken from Mann, Edd. Depth-First Search and Breadth-First Search in Python.
http://eddmann.com/posts/depth-first-search-and-breadth-first-search-in-python/ """
graph5 = copy.deepcopy(graph4)
graph5['m'] = {'n'}
graph5['n'] = {'m'}
import collections

def bfs(graph, start):
visited, queue = set(), collections.deque([start])
while queue:
vertex = queue.popleft()
if vertex not in visited:
visited.add(vertex)
queue.extend(graph[vertex] - visited)
return visited
# Return a list of node sets of 'graph', each one being the nodes that
# define a specific connected component of 'graph'
def connected_components(graph):
components = []
nodes = set(graph.keys())
while len(nodes):
root = next(iter(nodes))
visited = bfs(graph, root)
components.append(visited)
nodes = nodes - visited
return components
# Returns a set containing the nodes of a graph's biggest component
def biggest_component_nodes(graph):
components = connected_components(graph)
lengths = [len(component) for component in components]
max_component = 0
max_index = -1
for i in range(0, len(lengths)):
if lengths[i] > max_component:
max_component = lengths[i]
max_index = i
return components[max_index]
# Returns a subgraph containing the biggest connected component of 'graph'
def biggest_component(graph):
nodes = biggest_component_nodes(graph)
nodes = list(nodes)
subgraph = {k:graph[k] for k in nodes if k in graph}
return subgraph
# Plot results
import matplotlib.pyplot as plt
import plotly.plotly as py
from plotly.graph_objs import Scatter, Figure, Layout
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
def plot_giant_component_growth(N):
p_vector = []
avg_degree_vector = []
p = 0.0
while p <= 1:
p_vector.append(p)
network = random_network_links2(N,p)
network = edges_to_graph(network)
component = biggest_component(network)
avg_degree_vector.append(avg_degree(component))
p += 0.05
plt.plot(p_vector, avg_degree_vector, "o")
plot_giant_component_growth(100)
"""
Explanation: Exercise: Random Networks - Giant Component
Plot how the size of the largest component of a random network with N=100 nodes grows for different values of p
(plot with the average degree between 0 and 4, in steps of 0.05)
End of explanation
"""
def plot_giant_component_growth_nodes(N):
p_vector = []
node_percentages = []
p = 0.0
while p <= 1:
p_vector.append(p)
network = random_network_links2(N,p)
network = edges_to_graph(network)
component = biggest_component(network)
component_percentage = float(len(component))/len(network)
node_percentages.append(component_percentage)
p += 0.001
plt.plot(p_vector, node_percentages, "o")
plot_giant_component_growth_nodes(100)
"""
Explanation: Plot the percentage of nodes in the largest component for different values of p
End of explanation
"""
def identify_p_value_for_total_connection(N):
p = 0.0
while p <= 1:
network = random_network_links2(N,p)
network = edges_to_graph(network)
component = biggest_component(network)
component_percentage = float(len(component))/len(network)
if component_percentage == 1:
return p
p += 0.001
return 1 # Default value for a totally connected component
identify_p_value_for_total_connection(100)
"""
Explanation: Identify for which values of p the largest component is fully interconnected
End of explanation
"""
|
keras-team/autokeras | docs/ipynb/text_classification.ipynb | apache-2.0 |
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True,
)
# set path to dataset
IMDB_DATADIR = os.path.join(os.path.dirname(dataset), "aclImdb")
classes = ["pos", "neg"]
train_data = load_files(
os.path.join(IMDB_DATADIR, "train"), shuffle=True, categories=classes
)
test_data = load_files(
os.path.join(IMDB_DATADIR, "test"), shuffle=False, categories=classes
)
x_train = np.array(train_data.data)
y_train = np.array(train_data.target)
x_test = np.array(test_data.data)
y_test = np.array(test_data.target)
print(x_train.shape) # (25000,)
print(y_train.shape)  # (25000,)
print(x_train[0][:50]) # this film was just brilliant casting
"""
Explanation: A Simple Example
The first step is to prepare your data. Here we use the IMDB
dataset
as an example.
End of explanation
"""
# Initialize the text classifier.
clf = ak.TextClassifier(
overwrite=True, max_trials=1
) # It only tries 1 model as a quick demo.
# Feed the text classifier with training data.
clf.fit(x_train, y_train, epochs=2)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best model with testing data.
print(clf.evaluate(x_test, y_test))
"""
Explanation: The second step is to run the TextClassifier. As a quick
demo, we set epochs to 2. You can also leave the epochs unspecified for an
adaptive number of epochs.
End of explanation
"""
clf.fit(
x_train,
y_train,
# Split the training data and use the last 15% as validation data.
validation_split=0.15,
)
"""
Explanation: Validation Data
By default, AutoKeras uses the last 20% of training data as validation data. As
shown in the example below, you can use validation_split to specify the
percentage.
End of explanation
"""
split = 5000
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(
x_train,
y_train,
epochs=2,
# Use your own validation set.
validation_data=(x_val, y_val),
)
"""
Explanation: You can also use your own validation set instead of splitting it from the
training data with validation_data.
End of explanation
"""
input_node = ak.TextInput()
output_node = ak.TextBlock(block_type="ngram")(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node, outputs=output_node, overwrite=True, max_trials=1
)
clf.fit(x_train, y_train, epochs=2)
"""
Explanation: Customized Search Space
For advanced users, you may customize your search space by using
AutoModel instead of
TextClassifier. You can configure the
TextBlock for some high-level configurations, e.g.,
vectorizer for the type of text vectorization method to use. You can use
'sequence', which uses TextToIntSequence to
convert the words to integers and uses Embedding for
embedding the integer sequences, or you can use 'ngram', which uses
TextToNgramVector to vectorize the
sentences. You can also leave these arguments unspecified, which leaves the
different choices to be tuned automatically. See the following example for
detail.
End of explanation
"""
input_node = ak.TextInput()
output_node = ak.TextToIntSequence()(input_node)
output_node = ak.Embedding()(output_node)
# Use separable Conv layers in Keras.
output_node = ak.ConvBlock(separable=True)(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
inputs=input_node, outputs=output_node, overwrite=True, max_trials=1
)
clf.fit(x_train, y_train, epochs=2)
"""
Explanation: The usage of AutoModel is similar to the
functional API of Keras.
Basically, you are building a graph, whose edges are blocks and the nodes are
intermediate outputs of blocks. To add an edge from input_node to
output_node with output_node = ak.[some_block]([block_args])(input_node).
You can also use more fine-grained blocks to customize the search space
even further. See the following example.
End of explanation
"""
train_set = tf.data.Dataset.from_tensor_slices(((x_train,), (y_train,))).batch(32)
test_set = tf.data.Dataset.from_tensor_slices(((x_test,), (y_test,))).batch(32)
clf = ak.TextClassifier(overwrite=True, max_trials=2)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_set, epochs=2)
# Predict with the best model.
predicted_y = clf.predict(test_set)
# Evaluate the best model with testing data.
print(clf.evaluate(test_set))
"""
Explanation: Data Format
The AutoKeras TextClassifier is quite flexible for the data format.
For the text, the input data should be one-dimensional. For the classification
labels, AutoKeras accepts both plain labels, i.e. strings or integers, and
one-hot encoded labels, i.e. vectors of 0s and 1s.
We also support using tf.data.Dataset
format for the training data.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/88563c785f9a977b7ce2000e660aeacf/30_annotate_raw.ipynb | bsd-3-clause | import os
from datetime import timedelta
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
"""
Explanation: Annotating continuous data
This tutorial describes adding annotations to a ~mne.io.Raw object,
and how annotations are used in later stages of data processing.
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and (since we won't actually analyze the
raw data in this tutorial) cropping the ~mne.io.Raw object to just 60
seconds before loading it into RAM to save memory:
End of explanation
"""
my_annot = mne.Annotations(onset=[3, 5, 7], # in seconds
duration=[1, 0.5, 0.25], # in seconds, too
description=['AAA', 'BBB', 'CCC'])
print(my_annot)
"""
Explanation: ~mne.Annotations in MNE-Python are a way of storing short strings of
information about temporal spans of a ~mne.io.Raw object. Below the
surface, ~mne.Annotations are list-like <list> objects,
where each element comprises three pieces of information: an onset time
(in seconds), a duration (also in seconds), and a description (a text
string). Additionally, the ~mne.Annotations object itself also keeps
track of orig_time, which is a POSIX timestamp_ denoting a real-world
time relative to which the annotation onsets should be interpreted.
Creating annotations programmatically
If you know in advance what spans of the ~mne.io.Raw object you want
to annotate, ~mne.Annotations can be created programmatically, and
you can even pass lists or arrays to the ~mne.Annotations
constructor to annotate multiple spans at once:
End of explanation
"""
raw.set_annotations(my_annot)
print(raw.annotations)
# compare the recording's measurement date with the annotations' orig_time:
meas_date = raw.info['meas_date']
orig_time = raw.annotations.orig_time
print(meas_date == orig_time)
"""
Explanation: Notice that orig_time is None, because we haven't specified it. In
those cases, when you add the annotations to a ~mne.io.Raw object,
it is assumed that the orig_time matches the time of the first sample of
the recording, so orig_time will be set to match the recording
measurement date (raw.info['meas_date']).
End of explanation
"""
time_of_first_sample = raw.first_samp / raw.info['sfreq']
print(my_annot.onset + time_of_first_sample)
print(raw.annotations.onset)
"""
Explanation: Since the example data comes from a Neuromag system that starts counting
sample numbers before the recording begins, adding my_annot to the
~mne.io.Raw object also involved another automatic change: an offset
equalling the time of the first recorded sample (raw.first_samp /
raw.info['sfreq']) was added to the onset values of each annotation
(see time-as-index for more info on raw.first_samp):
End of explanation
"""
time_format = '%Y-%m-%d %H:%M:%S.%f'
new_orig_time = (meas_date + timedelta(seconds=50)).strftime(time_format)
print(new_orig_time)
later_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['DDD', 'EEE', 'FFF'],
orig_time=new_orig_time)
raw2 = raw.copy().set_annotations(later_annot)
print(later_annot.onset)
print(raw2.annotations.onset)
"""
Explanation: If you know that your annotation onsets are relative to some other time, you
can set orig_time before you call :meth:~mne.io.Raw.set_annotations,
and the onset times will get adjusted based on the time difference between
your specified orig_time and raw.info['meas_date'], but without the
additional adjustment for raw.first_samp. orig_time can be specified
in various ways (see the documentation of ~mne.Annotations for the
options); here we'll use an ISO 8601_ formatted string, and set it to be 50
seconds later than raw.info['meas_date'].
End of explanation
"""
fig = raw.plot(start=2, duration=6)
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>If your annotations fall outside the range of data times in the
`~mne.io.Raw` object, the annotations outside the data range will
not be added to ``raw.annotations``, and a warning will be issued.</p></div>
Now that your annotations have been added to a ~mne.io.Raw object,
you can see them when you visualize the ~mne.io.Raw object:
End of explanation
"""
fig.canvas.key_press_event('a')
"""
Explanation: The three annotations appear as differently colored rectangles because they
have different description values (which are printed along the top
edge of the plot area). Notice also that colored spans appear in the small
scroll bar at the bottom of the plot window, making it easy to quickly view
where in a ~mne.io.Raw object the annotations are so you can easily
browse through the data to find and examine them.
Annotating Raw objects interactively
Annotations can also be added to a ~mne.io.Raw object interactively
by clicking-and-dragging the mouse in the plot window. To do this, you must
first enter "annotation mode" by pressing :kbd:a while the plot window is
focused; this will bring up the annotation controls window:
End of explanation
"""
new_annot = mne.Annotations(onset=3.75, duration=0.75, description='AAA')
raw.set_annotations(my_annot + new_annot)
raw.plot(start=2, duration=6)
"""
Explanation: The colored rings are clickable, and determine which existing label will be
created by the next click-and-drag operation in the main plot window. New
annotation descriptions can be added by typing the new description and
clicking the :guilabel:Add label button; the new description will be added
to the list of descriptions and automatically selected.
During interactive annotation it is also possible to adjust the start and end
times of existing annotations, by clicking-and-dragging on the left or right
edges of the highlighting rectangle corresponding to that annotation.
<div class="alert alert-danger"><h4>Warning</h4><p>Calling :meth:`~mne.io.Raw.set_annotations` **replaces** any annotations
currently stored in the `~mne.io.Raw` object, so be careful when
working with annotations that were created interactively (you could lose
a lot of work if you accidentally overwrite your interactive
annotations). A good safeguard is to run
``interactive_annot = raw.annotations`` after you finish an interactive
annotation session, so that the annotations are stored in a separate
variable outside the `~mne.io.Raw` object.</p></div>
How annotations affect preprocessing and analysis
You may have noticed that the description for new labels in the annotation
controls window defaults to BAD_. The reason for this is that annotation
is often used to mark bad temporal spans of data (such as movement artifacts
or environmental interference that cannot be removed in other ways such as
projection <tut-projectors-background> or filtering). Several
MNE-Python operations
are "annotation aware" and will avoid using data that is annotated with a
description that begins with "bad" or "BAD"; such operations typically have a
boolean reject_by_annotation parameter. Examples of such operations are
independent components analysis (mne.preprocessing.ICA), functions
for finding heartbeat and blink artifacts
(:func:~mne.preprocessing.find_ecg_events,
:func:~mne.preprocessing.find_eog_events), and creation of epoched data
from continuous data (mne.Epochs). See tut-reject-data-spans
for details.
Operations on Annotations objects
~mne.Annotations objects can be combined by simply adding them with
the + operator, as long as they share the same orig_time:
End of explanation
"""
print(raw.annotations[0]) # just the first annotation
print(raw.annotations[:2]) # the first two annotations
print(raw.annotations[(3, 2)]) # the fourth and third annotations
"""
Explanation: Notice that it is possible to create overlapping annotations, even when they
share the same description. This is not possible when annotating
interactively; click-and-dragging to create a new annotation that overlaps
with an existing annotation with the same description will cause the old and
new annotations to be merged.
Individual annotations can be accessed by indexing an
~mne.Annotations object, and subsets of the annotations can be
achieved by either slicing or indexing with a list, tuple, or array of
indices:
End of explanation
"""
for ann in raw.annotations:
descr = ann['description']
start = ann['onset']
end = ann['onset'] + ann['duration']
print("'{}' goes from {} to {}".format(descr, start, end))
"""
Explanation: You can also iterate over the annotations within an ~mne.Annotations
object:
End of explanation
"""
# later_annot WILL be changed, because we're modifying the first element of
# later_annot.onset directly:
later_annot.onset[0] = 99
# later_annot WILL NOT be changed, because later_annot[0] returns a copy
# before the 'onset' field is changed:
later_annot[0]['onset'] = 77
print(later_annot[0]['onset'])
"""
Explanation: Note that iterating, indexing and slicing ~mne.Annotations all
return a copy, so changes to an indexed, sliced, or iterated element will not
modify the original ~mne.Annotations object.
End of explanation
"""
raw.annotations.save('saved-annotations.csv', overwrite=True)
annot_from_file = mne.read_annotations('saved-annotations.csv')
print(annot_from_file)
"""
Explanation: Reading and writing Annotations to/from a file
~mne.Annotations objects have a :meth:~mne.Annotations.save method
which can write :file:.fif, :file:.csv, and :file:.txt formats (the
format to write is inferred from the file extension in the filename you
provide). There is a corresponding :func:~mne.read_annotations function to
load them from disk:
End of explanation
"""
|
josephmfaulkner/stoqs | stoqs/contrib/notebooks/geospatial_selection_rovctd.ipynb | gpl-3.0 | db = 'stoqs_rovctd_mb'
from django.contrib.gis.geos import fromstr
from django.contrib.gis.measure import D
mars = fromstr('POINT(-122.18681000 36.71137000)')
near_mars = Measurement.objects.using(db).filter(geom__distance_lt=(mars, D(km=.1)))
"""
Explanation: Geospatial Selections and ROVCTD data
Connect to a remote database, select data using GeoDjango's spatial lookup, and make some simple plots
Executing this Notebook requires a personal STOQS server. Follow the steps to build your own development system — this will take a few hours and depends on a good connection to the Internet. Once your server is up log into it (after a cd ~/Vagrants/stoqsvm) and activate your virtual environment with the usual commands:
vagrant ssh -- -X
cd ~/dev/stoqsgit
source venv-stoqs/bin/activate
Connect to your Institution's STOQS database server using read-only credentials. (Note: firewalls typically limit unprivileged access to such resources.)
cd stoqs
ln -s mbari_campaigns.py campaigns.py
export DATABASE_URL=postgis://everyone:guest@kraken.shore.mbari.org:5433/stoqs
Launch Jupyter Notebook on your system with:
cd contrib/notebooks
../../manage.py shell_plus --notebook
navigate to this file and open it. You will then be able to execute the cells and experiment with this notebook.
For reference please see for GeoDjango Spatial Lookups and the STOQS schema diagram.
Define our database and a GeoDjango query set for data within 0.1 km of the MARS site.
End of explanation
"""
mars_dives = Activity.objects.using(db).filter(instantpoint__measurement=near_mars
).distinct()
print mars_dives.count()
"""
Explanation: Count all of the the ROV dives whose Measurements are near MARS
End of explanation
"""
deep_mars_dives = Activity.objects.using(db
).filter(instantpoint__measurement=near_mars,
instantpoint__measurement__depth__gt=800
).distinct()
print deep_mars_dives.count()
"""
Explanation: Near-surface ROV location data is notoriously noisy (because of fundamental inaccuracies of USBL navigation systems). Let's remove near-surface Measurement values from our selection. Count all of the dives near MARS whose Measurements are deeper than 800 m.
End of explanation
"""
%%time
%matplotlib inline
import pylab as plt
from mpl_toolkits.basemap import Basemap
m = Basemap(projection='cyl', resolution='l',
llcrnrlon=-122.7, llcrnrlat=36.5,
urcrnrlon=-121.7, urcrnrlat=37.0)
m.arcgisimage(server='http://services.arcgisonline.com/ArcGIS', service='Ocean_Basemap')
for dive in deep_mars_dives:
points = Measurement.objects.using(db).filter(instantpoint__activity=dive,
instantpoint__measurement__depth__gt=800
).values_list('geom', flat=True)
m.scatter(
[geom.x for geom in points],
[geom.y for geom in points])
"""
Explanation: Let's plot the measurement points of dives on a map of Monterey Bay to confirm that the selection is in the right spot.
End of explanation
"""
%%time
# A Python dictionary comprehension for all the Parameters and axis labels we want to plot
parms = {p.name: '{} ({})'.format(p.long_name, p.units) for
p in Parameter.objects.using(db).filter(name__in=
('t', 's', 'o2', 'sigmat', 'spice', 'light'))}
plt.rcParams['figure.figsize'] = (18.0, 8.0)
fig, ax = plt.subplots(1, len(parms), sharey=True)
ax[0].invert_yaxis()
ax[0].set_ylabel('Depth (m)')
dive_names = []
for dive in deep_mars_dives.order_by('startdate'):
dive_names.append(dive.name)
# Use select_related() to improve query performance for the depth lookup
# Need to also order by time
mps = MeasuredParameter.objects.using(db
).filter(measurement__instantpoint__activity=dive
).select_related('measurement'
).order_by('measurement__instantpoint__timevalue')
depth = [mp.measurement.depth for mp in mps.filter(parameter__name='t')]
for i, (p, label) in enumerate(parms.iteritems()):
ax[i].set_xlabel(label)
try:
ax[i].plot(mps.filter(parameter__name=p).values_list(
'datavalue', flat=True), depth)
except ValueError:
pass
from IPython.display import display, HTML
display(HTML('<p>All dives at MARS site: ' + ' '.join(dive_names) + '<p>'))
"""
Explanation: (The major cluster is around the MARS site, but there are a few spurious navigation points even for the deep dive data.)
Let's plot CTD profiles for these dives.
End of explanation
"""
|
stephank16/enes_graph_use_case | neo4j_esgf_b2find/ENES-B2Find-explore.ipynb | gpl-3.0 | import ckanclient
from pprint import pprint
ckan = ckanclient.CkanClient('http://b2find.eudat.eu/api/3/')
"""
Explanation: Simple notebook to explore EUDAT B2Find harvested metadata for ENES
Background:
* EUDAT B2Find harvested ENES metadata consists of metadata for coarse grained data collections
* These coarse grained collections are assigned DOIs
* Metadata for ENES data harvested into the graph database from the ESGF federation is at file level and these files are then related to the collection levels they belong to
* To relate ENES EUDAT B2Find metadata to ENES ESGF metadata in the graph database some implicit domain knowledge is necessary
* This notebook illustrates this relation between ENES B2Find and ENES ESGF metadata for their integration in the neo4j database
Integration aspects:
* ENES ESGF metadata sometimes refers to newer versions of data entities
* ENES B2Find metadata refers to data collections which are assigned DOIs whereas ESGF metadata refers to data entities (individual files) which are assigned to unique IDs (and soon PIDs)
Dependencies
pip install ckanclient py2neo
ipython helpers:
pip install ipython-cypher
(pip install icypher)
idisplay
ipy_table
pygraphviz
ipython-db
ipython-sql
jgraph
Set up ckan client connection to EUDAT b2find service
End of explanation
"""
# restrict to a few (6) results for the purpose of this notebook
q = 'tags:IPCC'
d = ckan.action('package_search', q=q, rows=6)
# 'title' provides the aggregation info for the data collection
# 'url' provides the doi of the data collection
# 'notes' contains information on how to interpret the aggregation info string in 'title'
for result in d['results']:
print result['title']
print result['title'].split()
print result['url']
print result['notes']
print "----------------------------------------------------------------"
#for part in result:
# print part,":-->", result[part]
"""
Explanation: Select ENES data subset in b2find harvested records
End of explanation
"""
# collection pattern (neo4j nodes for pattern parts)
# <activity>/<product>/<institute>/<model>/<experiment>/<frequency>/
# <modeling realm>/<mip table>/<ensemble member>/
# <version number>/<variable name>/<CMORfilename.nc>
# example title: cmip5 output1 LASG-CESS FGOALS-g2 historicalNat
# collection info: activity product institute model experiment
pattern = ['activity','product','institute','model','experiment']
def parse_collection_info(info_string,pattern):
info_parts = info_string.split()
result = dict(zip(pattern,info_parts))
return result
parsed_results = []
for result in d['results']:
parsed_result = parse_collection_info(result['title'],pattern)
parsed_results.append(parsed_result)
print parsed_results
"""
Explanation: Hierarchy information for B2Find ENES data
In the harvested B2Find metadata an indication is given how to derive the hierarchy information:
"Entry name/title of data are specified according to the Data Reference Syntax
(http://cmip-pcmdi.llnl.gov/cmip5/docs/cmip5_data_reference_syntax.pdf)
as activity/product/institute/model/experiment/frequency/modeling realm/MIP table/ensemble
member/version number/variable name/CMOR filename.nc"
End of explanation
"""
from py2neo import authenticate, Node, Relationship, Graph
authenticate("localhost:7474", 'neo4j', 'prolog16')
graph = Graph("http://localhost:7474/db/data/")
cypher = graph.cypher
from neo4jrestclient.client import GraphDatabase
from neo4jrestclient.query import Q
gdb = GraphDatabase("http://localhost:7474/db/data/",username="neo4j",password="prolog16")
%load_ext cypher
%%cypher http://neo4j:prolog16@localhost:7474/db/data
MATCH (a)-[]-(b) RETURN a, b
%load_ext icypher
%install_ext https://bitbucket.org/vladf/ipython-diags/raw/default/diagmagic.py
%install_ext https://raw.github.com/cjdrake/ipython-magic/master/gvmagic.py
%load_ext gvmagic
%dot digraph G { a -> b; a -> c }
pattern = ['activity','product','institute','model','experiment']
nodes = [Node("Collection",name="ENES-data",level=0)]
rels = []
def add_collection(collection_info):
    parent = nodes[0]
    for index, facet in enumerate(pattern):
        # Name each node after the parsed facet value (not the facet name),
        # and chain it to the previously created node
        new_node = Node("Collection", name=collection_info[facet], level=index + 1)
        nodes.append(new_node)
        rels.append(Relationship(new_node, "belongs-to", parent))
        parent = new_node
%install_ext https://raw.githubusercontent.com/dongweiming/idb/master/idb.py
%load_ext idb ## database interaction
import jgraph
jgraph.draw([(1, 2), (2, 3), (3, 4), (4, 1), (4, 5), (5, 2)])
for result in parsed_results:
add_collection(result)
print nodes
print rels
"""
Explanation: Relation to Neo4j ESGF graph nodes
The ESGF metadata harvesting and Neo4j graph generation is done in the script ENES-Neo4J-fill1.py
Each component of the collection hierarchy is assigned to a node connected with the "belongs_to" relationship, and each component has a property "name" corresponding to the values extracted from the B2Find result records (see above). Additionally each collection has a level attribute
experiment(6) -- belongs_to --> model(7) -- belongs_to --> institute(8) -- belongs_to --> product(9) -- belongs_to --> activity(10)
The B2Find metadata aggregates all collection levels below 6, thus the level 6 node has to be identified in the Neo4j ESGF graph and related to the corresponding B2Find information
End of explanation
"""
MATCH (n1:Collection {name:%experiment})-[:belongs_to]->(n2:Collection {name:%model})-[:belongs_to]
->(n3:Collection {name:%institute})-[:belongs_to]->(n4:Collection {name:%product})-[:belongs_to]
->(n5:Collection {name:%activity})
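The `%experiment`-style placeholders above are not valid Cypher on their own; with a driver such as py2neo one would normally pass the facet values as query parameters instead. A hedged sketch (the facet values below are purely illustrative, and the execution call is commented out because it needs a running Neo4j instance):

```python
# Build the chain query once, with Cypher parameters instead of %placeholders.
query = (
    "MATCH (n1:Collection {name: $experiment})-[:belongs_to]->"
    "(n2:Collection {name: $model})-[:belongs_to]->"
    "(n3:Collection {name: $institute})-[:belongs_to]->"
    "(n4:Collection {name: $product})-[:belongs_to]->"
    "(n5:Collection {name: $activity}) RETURN n1"
)
# Illustrative facet values only:
params = {"experiment": "historical", "model": "MPI-ESM-LR",
          "institute": "MPI-M", "product": "output1", "activity": "CMIP5"}
# graph.cypher.execute(query, params)  # requires a running Neo4j instance
```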
"""
Explanation: cypher queries to identify corresponding level 6 nodes in ESGF graph structure:
End of explanation
"""
|
ML4DS/ML4all | R_lab2_GP/Pract_regression_student.ipynb | mit | # Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
from scipy import spatial
import pylab
pylab.rcParams['figure.figsize'] = 8, 5
"""
Explanation: Gaussian Process regression
Authors: Miguel Lázaro Gredilla
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Jesús Cid Sueiro
Notebook version: 1.0 (Nov, 07, 2017)
Changes: v.1.0 - First version. Python version
v.1.1 - Extraction from a longer version including Bayesian regression.
Python 3 compatibility
Pending changes:
End of explanation
"""
np.random.seed(3)
"""
Explanation: 1. Introduction
In this exercise the student will review several key concepts of Bayesian regression and Gaussian processes.
For the purpose of this exercise, the regression model is
$${s}({\bf x}) = f({\bf x}) + \varepsilon$$
where ${s}({\bf x})$ is the output corresponding to input ${\bf x}$, $f({\bf x})$ is the unobservable latent function, and $\varepsilon$ is white zero-mean Gaussian noise, i.e., $\varepsilon \sim {\cal N}(0,\sigma_\varepsilon^2)$.
Practical considerations
Though sometimes unavoidable, explicit matrix inversion should be avoided whenever possible. For instance, if an operation like ${\mathbf A}^{-1} {\mathbf b}$ must be performed, it is preferable to code it using the python $\mbox{numpy.linalg.lstsq}$ function (see http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html), which provides the LS solution to the overdetermined system ${\mathbf A} {\mathbf w} = {\mathbf b}$.
Sometimes, the computation of $\log|{\mathbf A}|$ (where ${\mathbf A}$ is a positive definite matrix) can overflow available precision, producing incorrect results. A numerically more stable alternative, providing the same result, is $2\sum_i \log([{\mathbf L}]_{ii})$, where $\mathbf L$ is the Cholesky decomposition of $\mathbf A$ (i.e., ${\mathbf A} = {\mathbf L}^\top {\mathbf L}$), and $[{\mathbf L}]_{ii}$ is the $i$th element of the diagonal of ${\mathbf L}$.
Non-degenerate covariance matrices, such as the ones in this exercise, are always positive definite. It may happen, as a consequence of chained rounding errors, that a matrix which was mathematically expected to be positive definite, turns out not to be so. This implies its Cholesky decomposition will not be available. A quick way to palliate this problem is by adding a small number (such as $10^{-6}$) to the diagonal of such matrix.
Reproducibility of computations
To guarantee the exact reproducibility of the experiments, it may be useful to start your code initializing the seed of the random numbers generator, so that you can compare your results with the ones given in this notebook.
End of explanation
"""
# Load data from matlab file DatosLabReg.mat
# matvar = <FILL IN>
# Take main variables, Xtrain, Xtest, Ytrain, Ytest from the corresponding dictionary entries in matvar:
# <SOL>
# </SOL>
# Data normalization
# <SOL>
# </SOL>
"""
Explanation: 2. The stocks dataset.
Load and properly normalize data corresponding to the evolution of the stocks of 10 airline companies. This data set is an adaptation of the Stock dataset from http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html, which in turn was taken from the StatLib Repository, http://lib.stat.cmu.edu/
End of explanation
"""
sigma_0 = np.std(Ytrain)
sigma_eps = sigma_0 / np.sqrt(10)
l = 8
print('sigma_0 = {0}'.format(sigma_0))
print('sigma_eps = {0}'.format(sigma_eps))
"""
Explanation: After running this code, you will have inside matrix Xtrain the evolution of (normalized) price for 9 airlines, whereas vector Ytrain will contain a single column with the price evolution of the tenth airline. The objective of the regression task is to estimate the price of the tenth airline from the prices of the other nine.
3. Non-linear regression with Gaussian Processes
3.1. Multidimensional regression
Rather than using a parametric form for $f({\mathbf x})$, in this section we will use directly the values of the latent function that we will model with a Gaussian process
$$f({\mathbf x}) \sim {\cal GP}\left(0,k_f({\mathbf x}_i,{\mathbf x}_j)\right),$$
where we are assuming a zero mean, and where we will use the Ornstein-Uhlenbeck covariance function, which is defined as:
$$k_f({\mathbf x}_i,{\mathbf x}_j) = \sigma_0^2 \exp \left( -\frac{1}{l}\|{\mathbf x}_i-{\mathbf x}_j\|\right)$$
First, we will use the following gross estimation for the hyperparameters:
End of explanation
"""
# Compute Kernel matrices.
# You may find spatial.distance.cdist() useful to compute the euclidean distances required by Gaussian kernels.
# <SOL>
# </SOL>
# Compute predictive mean
# m_y = <FILL IN>
# Compute predictive variance
# v_y = <FILL IN>
# Compute MSE
# MSE = <FILL IN>
# Compute NLPD
# NLPD = <FILL IN>
print(m_y.T)
"""
Explanation: As we studied in a previous session, the joint distribution of the target values in the training set, ${\mathbf s}$, and the latent values corresponding to the test points, ${\mathbf f}^\ast$, is given by
$$\left[\begin{array}{c}{\bf s}\\{\bf f}^\ast\end{array}\right]~\sim~{\cal N}\left({\bf 0},\left[\begin{array}{cc}{\bf K} + \sigma_\varepsilon^2 {\bf I} & {\bf K}_\ast^\top \\ {\bf K}_\ast & {\bf K}_{\ast\ast} \end{array}\right]\right)$$
Using this model, obtain the posterior of ${\mathbf s}^\ast$ given ${\mathbf s}$. In particular, calculate the <i>a posteriori</i> predictive mean and standard deviations, ${\mathbb E}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}$ and $\sqrt{{\mathbb V}\left\{s({\bf x}^\ast)\mid{\bf s}\right\}}$ for each test sample ${\bf x}^\ast$.
Obtain the MSE and NLPD.
End of explanation
"""
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
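The two quantities printed above can be sketched as a small helper (hedged: the names are illustrative, and NLPD is taken in its standard sense as the mean negative log of the Gaussian predictive density at the test targets):

```python
import numpy as np

def mse_nlpd(y, m_y, v_y):
    # y: test targets; m_y, v_y: predictive mean and variance per test point.
    mse = np.mean((y - m_y) ** 2)
    nlpd = np.mean(0.5 * np.log(2 * np.pi * v_y) + (y - m_y) ** 2 / (2 * v_y))
    return mse, nlpd
```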
"""
Explanation: You should obtain the following results:
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: 3.2. Unidimensional regression
Use now only the first company to compute the non-linear regression. Obtain the posterior
distribution of $f({\mathbf x}^\ast)$ evaluated at the test values ${\mathbf x}^\ast$, i.e, $p(f({\mathbf x}^\ast)\mid {\mathbf s})$.
This distribution is Gaussian, with mean ${\mathbb E}\left\{f({\bf x}^\ast)\mid{\bf s}\right\}$ and a covariance matrix $\text{Cov}\left[f({\bf x}^\ast)\mid{\bf s}\right]$. Sample 50 random vectors from the distribution and plot them vs. the values $x^\ast$, together with the test samples.
The Bayesian model does not provide a single function, but a pdf over functions, from which we extracted 50 possible functions.
End of explanation
"""
# <SOL>
# </SOL>
"""
Explanation: Plot again the previous figure, this time including in your plot the confidence interval delimited by two standard deviations of the prediction. You can observe how $95.45\%$ of observed data fall within the designated area.
End of explanation
"""
# <SOL>
# </SOL>
print('MSE = {0}'.format(MSE))
print('NLPD = {0}'.format(NLPD))
"""
Explanation: Compute now the MSE and NLPD of the model. The correct results are given below:
End of explanation
"""
|
Vvkmnn/books | AutomateTheBoringStuffWithPython/lesson30.ipynb | gpl-3.0 | '\\'.join(['folder1','folder2','folder3','file.png']) # join all elements using the escaped (literal) '\' string
"""
Explanation: Lesson 30:
Filenames and Absolute/Relative File Paths
To make sure a program's output persists, scripts have to save it to files.
Filenames and File Paths
Files are held in Folders.
A Folder is just a directory on the disk.
A File Path is the path to that file through those folders.
All file paths start at the Root folder (/ on UNIX systems and C:\ on Windows)
Unix systems use /, Windows systems use \.
Files also have File Extensions which is the .type suffix; it tells the OS what application handles the file.
One way to construct file paths is the .join string method:
End of explanation
"""
import os # contains many file path related functions
print(os.path.join('folder1','folder2','folder3','file.png')) # takes string arguments and returns OS-appropriate path
print(os.sep) # show the separator currently being used.
"""
Explanation: But this string only works on Windows; to create an OS-independent path, use the os module:
End of explanation
"""
# Start at current directory
defaultpath = os.path.expanduser('~/Dropbox/learn/books/Python/AutomateTheBoringStuffWithPython/')
os.chdir(defaultpath)
print(os.getcwd())
# Change path to /files folder
os.chdir('files') # changes current working directory, if not currently in it (to /files)
print(os.getcwd()) # prints the current working directory (should be /files)
# Reset back to notebook directory
os.chdir(defaultpath)
"""
Explanation: If no explicit path is specified, Python will look for files in the current working directory.
You can find out what the current working directory is with os.getcwd(), and change it with os.chdir().
End of explanation
"""
print(os.path.abspath('files')) # print absolute path of the files subdirectory
print(os.path.isabs(os.path.abspath('files'))) # Is the absolute path to files an absolute path (True)
print(os.path.relpath('../..', 'files')) # print the relative file path of a folder two folders up relative to subfolder (3 folders; ../../..)
"""
Explanation: There are two kinds of file paths, relative and absolute.
Absolute file paths are the full address, while relative paths are relative to the current working directory; they start at the cwd, not the root folder.
There are also . and .. operators:
* .\ refers to the cwd, and .\path will look at any folders below this folder.
* ..\ refers to the folder above the cwd, and will look at any folders below the parent folder of the cwd.
os.path.abspath() returns the absolute path of whatever relative path is passed to it.
os.path.relpath(path, start) returns the relative path to path, computed starting from the start directory.
os.path.isabs() returns True if the given path is absolute.
End of explanation
"""
print(os.path.dirname(os.path.abspath('files'))) # outputs absolute path above 'files'
print(os.path.basename('files/26645.pdf')) # outputs just '26645.pdf'
"""
Explanation: os.path.dirname() pulls out just the directory path component above a filepath.
os.path.basename() pulls out what's past the last slash.
End of explanation
"""
# Reset back to notebook directory
os.chdir(defaultpath)
print(os.path.exists(os.path.abspath('files'))) # checks if 'files' exists (True)
print(os.path.isfile('files')) # checks if 'files' is a file (False)
print(os.path.isdir('files')) # checks if 'files' is a folder (True)
"""
Explanation: os.path.exists() can check if a path exists.
os.path.isfile() checks if the path ends at a file
os.path.isdir() checks if the path ends at a folder.
End of explanation
"""
"""
A simple program to loop through a folder and find the size of all files in bytes, and the total size of the folder.
"""
import os
# starting size
totalSize = 0
# for the fileName in the 'files' directory
for fileName in os.listdir('files'):
# generate filePaths
filePath = os.path.join('files',fileName)
# check if filePath is a file
if os.path.isfile(filePath) == True:
# if True, increase totalSize by the size of fileName
totalSize += os.path.getsize(filePath)
# also print what the file was and the size
print('%s is %d bytes.'%(filePath, os.path.getsize(filePath)))
# otherwise keep looping
else:
continue
# print the size of the folder at the end
print('\n\nThe \'%s\' folder contains %s bytes in total.'%('files',str(totalSize)))
"""
Explanation: os.path.getsize() returns the size of a file (bytes)
os.path.listdir() returns a list of all the files at the path.
File Size Finder
End of explanation
"""
# clear the folders if the exist already
if os.path.exists(os.path.abspath('files/newfolder/anotherone')) == True:
os.removedirs(os.path.abspath('files/newfolder/anotherone')) # clear folders if they exist
# create new folders at an absolute path
os.makedirs(os.path.abspath('files/newfolder/anotherone')) # create new folders
# check if they exist
if os.path.exists(os.path.abspath('files/newfolder/anotherone')) == True:
print('\'files/newfolder/anotherone\' exists.')
"""
Explanation: os.makedirs() creates new directories at the location.
os.removedirs() removes folders in an absolute location.
End of explanation
"""
|
kialio/gsfcpyboot | Day_00/03_Functions/FunctionsSolutions.ipynb | mit | 1+2
print(1+2)
"""
Explanation: Fun with Functions!
Reference:
Code academy's Functions unit
Our objective is to learn how to write and use functions.
Functions allow us to abstract a task, write code to perform it, and then use it in various situations.
Example:
A calculator takes two numbers and an operator as input and then performs the operator on the two numbers. <br>
Ex: 1 + 2 = 3 . <br>
Inputs: '1', '2', '+'<br>
Output: '3'<br>
Check out what happens when you run these in the ipython notebook:
End of explanation
"""
def spam() :
"""print eggs!"""
print "Eggs!"
return 0
"""
Explanation: ipython has some built in functions, like the addition operator, symbolized as the plus sign, and 'print'.
Much of the rest of these lectures will be focused on learning to use the many libraries of code developed by others to do some pretty spectacular things.
Before then, let's explore how we actually write functions!
Here's an example of a function:
End of explanation
"""
spam()
"""
Explanation: It's comprised of 3 parts:
- the header, which defines the function
- the comment, which lets users know what the function does
- and the body, which actually does the task required.
What does spam do when we call it?
End of explanation
"""
def add1():
return
def add1(a):
n=a+1
return n
"""
Explanation: Notice the different parts of the header in spam. <br>What are the minimum parts necessary to define the function?
Clearly we typically want to do more than just print out something predefined though!
Let's return to our intial addition example.<br>
Write a function add1() that takes one input parameter, adds 1 to it, and returns the result.
End of explanation
"""
add1(2)
"""
Explanation: Now try executing it:
End of explanation
"""
def add(a, b):
"""add two numbers together"""
n = a+b
return n
"""
Explanation: What output did you get?
Compare to your neighbor's add1 function.
What does the second part of your function (i.e., the comment) say?
Why do we care about commenting our code?
Now make a function called 'add' that takes two numbers as input parameters, adds them, and returns that number.
End of explanation
"""
add(1,2)
"""
Explanation: And call it with arguments 1 and 2:
End of explanation
"""
def line(x):
    m = 2
    b = 0
    y = m*x + b
    return y
def line(x):
"""Return the y value of a line with slope 2 and intercept 0 given the x position."""
m = 2
b = 0
y = add(m*x,0)
return y
line(1)
"""
Explanation: A function can take any number of parameters as input. <br>
The number of arguments passed to the function through the parameters generally matches the number of parameters.
Try passing fewer or more than 2 arguments to add(). <br> What happens?
Functions can also call other functions!<br>
Recall the equation for a line: y=mx+b <br>
Write a function called line() that calls add() and returns y, given x.
End of explanation
"""
a
"""
Explanation: What if we want to be able to change the slope (m) and intercept (b) on the fly?
Scope:<br> What variables (aka parameters) exist outside the scope of the functions we have written? <br> Hint: try asking for their values (by naming them and then executing that cell)!
End of explanation
"""
def line(x,m=2,b=0):
"""Return the y value of a line given x, the slope m, and the intercept b"""
y = add(m*x,b)
return y
line(1, 2, 0)
"""
Explanation: Redefine line() with user-defined slope and intercept.
End of explanation
"""
def line(x,m=2,b=0):
"""Return the y value of a line given x and optionally the slope m and intercept b"""
y = add(m*x,b)
return y
"""
Explanation: What if we the slope and intercept to be default values of 2 and 0, respectively, but changable sometimes?
Hint: try setting the parameters to equal the default values!
End of explanation
"""
line()
line(1)
line(1, 2)
line(1, 2, 0)
line(1, 2, 0, 4)
"""
Explanation: What happens when you call line with only one, two, or three parameters? <br>
What's the minimum number of parameters required?
End of explanation
"""
def evenSlopedLine(x,m=2,b=0):
"""Return the y value of a line for slopes which are even.
Let the user know why it failed and how to make it work if the slope is odd."""
if m%2==0 :
return line(x,m,b)
else :
return "evenSlopedLine only works for even slopes. Please put in a slope that's divisible by 2!"
"""
Explanation: What if you only want to allow the function to work on slopes that are even? <br>
Write evenSlopedLine() such that a naive user can call it and learn something useful about how to use the function if they give it an odd slope value.
End of explanation
"""
def test(x, vals):
"""Return the values of the function evenSlopedLine given x and an array of slopes."""
result=[]
for n in vals :
result.append(evenSlopedLine(x, n))
return result
x=1
m=[2,3]
test(x, m)
"""
Explanation: Test it with m = 2 and m = 3:
End of explanation
"""
def evenSlopesYs(vals,x=1):
"""Return the values of the function evenSlopedLine given x and an array of slopes."""
result=[]
for n in vals :
result.append(evenSlopedLine(x, n))
return result
#x=1
m = [1,2,3,4,5,6,7,8,9,10]
evenSlopesYs(m)
"""
Explanation: Suppose you want evenSlopedLine() to just work. Use add1() to make it so.
Why might this be good? <br>
What are some potential drawbacks?
Rewrite test() as evenSlopesYs, taking an optional x value and returning the array of y values for slopes in the array: m = [1,2,3,4,5,6,7,8,9,10] .<br>
Hint: try different orders for the optional and required parameters!
End of explanation
"""
rr1=r1**2
rr2=r2**2
phi = (math.acos((rr1 + (d ** 2) - rr2) / (2 * r1 * d))) * 2
theta = (math.acos((rr2 + (d ** 2) - rr1) / (2 * r2 * d))) * 2
area1 = 0.5 * theta * rr2 - 0.5 * rr2 * math.sin(theta)
area2 = 0.5 * phi * rr1 - 0.5 * rr1 * math.sin(phi)
area= area1 + area2
"""
Explanation: Compare your working result with your neighbor's. <br> How did you each treat the odd slopes? <br> What might be advantages or disadvantages to your various solutions?
Breakout session
You want to calculate the overlap of two circles given the position of their centers (x1,y1) and (x2,y2) and their radii r1 and r2.
You know that the overlap is 0 if the distance between the two circles is greater than the sum of their radii. <br>
You also know the overlap is the area of the smaller circle if one circle is contained within the other.
Your colleague wrote some code to calculate the area of the intersection, if the two circles overlap but don't fall in one of the special cases just mentioned (no overlap or fully contained). This is great! Unfortunately your colleague wasn't a fan of functions, so you know only that the area of intersection is:
End of explanation
"""
#################################
# Find the intersection of 2 circles w centers at (x1, y1) and (x2, y2) and radii r1 and r2.
# Arguments:
#    x1, y1    the centre point of the first circle
#    r1        radius of the first circle
#    x2, y2    the centre point of the second circle
#    r2        radius of the second circle
# Returns:
#    zero if the circles do not intersect or touch in just one point
#    the area of the smaller circle if one circle is fully contained in the other
#    the area of the intersection otherwise
def interarea(x1,y1,r1,x2,y2,r2):
d = math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
if (d >= r1 + r2):
# Circles don't intersect or intersect in just one point
return 0
elif ( d <= abs(r1 - r2) and r2>r1):
# Circle 1 is fully contained, return its area
return math.pi*r1**2
    elif ( d <= abs(r1 - r2) and r2<r1):
        # Circle 2 is fully contained, return its area
        return math.pi*r2**2
else:
# Circle 1 and 2 are intersecting
rr1=r1**2
rr2=r2**2
phi = (math.acos((rr1 + (d ** 2) - rr2) / (2 * r1 * d))) * 2
theta = (math.acos((rr2 + (d ** 2) - rr1) / (2 * r2 * d))) * 2
area1 = 0.5 * theta * rr2 - 0.5 * rr2 * math.sin(theta)
area2 = 0.5 * phi * rr1 - 0.5 * rr1 * math.sin(phi)
# Return area of intersection
return area1 + area2
"""
Explanation: Part 1: Interarea()
Define a function called interarea() which takes as input the positions and radii of the two circles and returns the area of the intersection.
Feel free to rename your colleague's variables if a different convention makes more sense to you.
Write at least 3 test cases to be sure your functions work for all scenarios!
(There are hints at the end of the exercise's parts.)
End of explanation
"""
d = math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
"""
Explanation: Part 2: Overlap fraction: location
You want to know the area of the intersection because you're interested in associating a point source, defined as a position (x1,y1) with an error in the measurement of (x1Err, y1Err), with a known source of position (x2,y2) with a given error on the position measurement of r2Err.
To do that, you've decided to define the "location overlap" fraction as the intersection area divided by the maximum possible area of intersection allowed by the errors in the position measurements.
Write a function overlapFracLoc() which returns the location overlap fraction. <br> Hint! The overlap fraction should range between 0 and 1.
You can either create your own data set (e.g., with random.random()) or use the data provided for candidates (source 1) and flares (source 2) at the bottom of this notebook and equivalently in candidates.txt and flare.txt.
Part 3: Extension!
A: Extension fraction:
The first point source could also actually be an extended source. How exciting! <br>
So in addition to having a radius and thus circle defined by the position error measurement (x1Err, y1Err), it can also have an actual extension measured (r1Ext).
Write another function which calculates the fractional extension overlap, overlapFracExt(), defined to be the ratio of [the intersection of the circle defined by the actual extension (r1Ext) and the position error circle (x2,y2,r2Err)] to the maximum area of either the first or second source.
B: Point source "extension" fraction
Of course, not all the sources are extended, but we would still like to define this parameter. It's sensible if one considers the minimum resolvable radius, i.e. the boundary between when we can detect that a source is actually extended.
If the source is a point source, use an optionally settable parameter for the extension (==minimum resolvable radius), and return the ratio relative to the second source's area (defined by r2Err).
Hint! Part 1:
Assuming cartesian coordinates, the distance between two points is:
End of explanation
"""
import math as math
"""
Explanation: Note the error message when you first try to evaluate d: Python doesn't know what math is. <br>
math turns out to be one of those useful libraries of functions (and variables, like pi). You can import it, bound to the name math, with:
End of explanation
"""
from math import sqrt
sqrt(25)
"""
Explanation: Typically we import modules like math globally. <br>
It's also possible to import from the math module just the particular function desired:
End of explanation
"""
from math import sqrt as squareroot
squareroot(25)
"""
Explanation: We can also rename it as:
End of explanation
"""
candidatesData = {'SNR357.0-1.0': {'raErr': 0.271981431953, 'decErr': 0.234572055248, 'radius': 0.0, 'ra': 265.612194957, 'radiusErr': 0.0, 'dec': -32.045488282}, 'SNR305.2+0.4': {'raErr': 0.230708064968, 'decErr': 0.224605379155, 'radius': 0.0, 'ra': 197.711507061, 'radiusErr': 0.0, 'dec': -62.4200203869}, 'SNR34.5-1.8': {'raErr': 0.243113651368, 'decErr': 0.235113062829, 'radius': 0.0, 'ra': 285.178490429, 'radiusErr': 0.0, 'dec': 0.597646829113}, 'SNR179.0+3.4': {'raErr': 0.231809181189, 'decErr': 0.231489134935, 'radius': 0.0, 'ra': 89.1713292269, 'radiusErr': 0.0, 'dec': 31.4951149482}, 'SNR0.9-3.2': {'raErr': 0.287716937271, 'decErr': 0.287716937271, 'radius': 2.68576864989, 'ra': 270.078473699, 'radiusErr': 0.0745858407624, 'dec': -29.7964411324}, 'SNR6.7-0.8': {'raErr': 0.24706826093, 'decErr': 0.24706826093, 'radius': 1.61811862853, 'ra': 270.946260674, 'radiusErr': 0.0362000163985, 'dec': -23.5765462593}, 'SNR37.4-2.6': {'raErr': 0.223437466251, 'decErr': 0.218477218501, 'radius': 0.0, 'ra': 287.193642574, 'radiusErr': 0.0, 'dec': 2.79930826528}, 'SNR206.0-0.4': {'raErr': 0.311777416739, 'decErr': 0.311777416739, 'radius': 3.16518333722, 'ra': 99.3480499716, 'radiusErr': 0.0829090932598, 'dec': 6.04291711301}, 'SNR260.0-1.2': {'raErr': 0.219684652942, 'decErr': 0.219684652942, 'radius': 1.05326352149, 'ra': 127.619604964, 'radiusErr': 0.0241890082999, 'dec': -41.3686479476}, 'SNR4.5+3.1': {'raErr': 0.36342658877, 'decErr': 0.36342658877, 'radius': 3.63289679912, 'ra': 266.081085134, 'radiusErr': 0.0403179887527, 'dec': -23.5162043303}, 'SNR350.8+0.6': {'raErr': 0.226943837147, 'decErr': 0.218266375394, 'radius': 0.0, 'ra': 259.821259092, 'radiusErr': 0.0, 'dec': -36.3657425647}, 'SNR36.6-2.7': {'raErr': 0.213787209434, 'decErr': 0.218556786321, 'radius': 0.698134038916, 'ra': 286.888157707, 'radiusErr': 0.0193633172924, 'dec': 2.03796233627}, 'SNR133.1+2.0': {'raErr': 0.256251963544, 'decErr': 0.256251963544, 'radius': 1.4571202293, 'ra': 35.8303573643, 
'radiusErr': 0.0589524317201, 'dec': 63.016863932}, 'SNR0.5-1.0': {'raErr': 0.213520647551, 'decErr': 0.491818560314, 'radius': 3.64836077677, 'ra': 267.714692466, 'radiusErr': 0.0246423475888, 'dec': -29.056555018}, 'SNR21.7-4.6': {'raErr': 0.261238806812, 'decErr': 0.228564707775, 'radius': 0.0, 'ra': 281.834000499, 'radiusErr': 0.0, 'dec': -12.0421945577}, 'SNR187.5+4.3': {'raErr': 0.207130074927, 'decErr': 0.205781790933, 'radius': 1.28816870319, 'ra': 94.7693692753, 'radiusErr': 0.00875163251335, 'dec': 24.5142843902}, 'SNR307.9+1.1': {'raErr': 0.268809096198, 'decErr': 0.268809096198, 'radius': 0.948980341874, 'ra': 203.294861341, 'radiusErr': 0.0688023427086, 'dec': -61.3380483636}, 'SNR26.7-2.9': {'raErr': 0.238906235566, 'decErr': 0.238906235566, 'radius': 0.666441980216, 'ra': 282.554062879, 'radiusErr': 0.0378130702637, 'dec': -6.8920362723}, 'SNR266.6+1.1': {'raErr': 0.253406814737, 'decErr': 0.253406814737, 'radius': 1.71293639428, 'ra': 135.870977376, 'radiusErr': 0.0445868105039, 'dec': -45.1219937402}, 'SNR192.6+1.5': {'raErr': 0.29267141516, 'decErr': 0.257917683615, 'radius': 0.0, 'ra': 94.6639732117, 'radiusErr': 0.0, 'dec': 18.6874706276}, 'SNR292.7+0.6': {'raErr': 0.211669864532, 'decErr': 0.211529641793, 'radius': 0.0, 'ra': 171.769078323, 'radiusErr': 0.0, 'dec': -60.5705794929}, 'SNR348.9-0.4': {'raErr': 0.241375544076, 'decErr': 0.241375544076, 'radius': 1.37703943137, 'ra': 259.425427937, 'radiusErr': 0.0346313902607, 'dec': -38.4677640832}, 'SNR1.3-2.9': {'raErr': 0.276048520226, 'decErr': 0.276048520226, 'radius': 2.89045818583, 'ra': 270.040273019, 'radiusErr': 0.0580412932745, 'dec': -29.2575899789}, 'SNR19.6-0.2': {'raErr': 0.218599592668, 'decErr': 0.215109650286, 'radius': 0.0, 'ra': 276.873652703, 'radiusErr': 0.0, 'dec': -11.8917300079}, 'SNR288.0+0.8': {'raErr': 0.233711496338, 'decErr': 0.233711496338, 'radius': 1.86996638915, 'ra': 163.294239207, 'radiusErr': 0.0508525104263, 'dec': -58.6315983807}, 'SNR313.2+0.8': {'raErr': 
0.394070303943, 'decErr': 0.394070303943, 'radius': 1.70579742563, 'ra': 213.932796558, 'radiusErr': 0.0528298804548, 'dec': -60.4095049085}, 'SNR34.9-3.6': {'raErr': 0.298655659691, 'decErr': 0.268422313615, 'radius': 0.0, 'ra': 286.982102109, 'radiusErr': 0.0, 'dec': 0.0681712044775}, 'SNR77.8-10.4': {'raErr': 0.255675770164, 'decErr': 0.255675770164, 'radius': 2.54424199628, 'ra': 317.286698163, 'radiusErr': 0.0432313048963, 'dec': 32.3791351336}, 'SNR356.9-2.2': {'raErr': 0.262974308106, 'decErr': 0.257591367918, 'radius': 0.0, 'ra': 266.748578364, 'radiusErr': 0.0, 'dec': -32.7404998269}, 'SNR33.6-2.0': {'raErr': 0.379497447662, 'decErr': 0.379497447662, 'radius': 2.69385048285, 'ra': 284.957646547, 'radiusErr': 0.102425262102, 'dec': -0.316655458163}, 'SNR353.7+0.4': {'raErr': 0.246005751271, 'decErr': 0.246005751271, 'radius': 1.23307152431, 'ra': 261.999939154, 'radiusErr': 0.0819007090426, 'dec': -33.9988614089}, 'SNR338.4+0.7': {'raErr': 0.251629117163, 'decErr': 0.251629117163, 'radius': 2.8293996792, 'ra': 249.479447695, 'radiusErr': 0.100211831445, 'dec': -45.9863390198}, 'SNR338.5-1.2': {'raErr': 0.238002174149, 'decErr': 0.238002174149, 'radius': 0.935759126861, 'ra': 251.704491372, 'radiusErr': 0.0241741004183, 'dec': -47.181164597}, 'SNR264.4-0.8': {'raErr': 0.22305862071, 'decErr': 0.219152934832, 'radius': 0.0, 'ra': 131.758850198, 'radiusErr': 0.0, 'dec': -44.6702066761}, 'SNR321.8-1.9': {'raErr': 0.218281076948, 'decErr': 0.216582158807, 'radius': 0.0, 'ra': 231.615371743, 'radiusErr': 0.0, 'dec': -58.9345087487}, 'SNR292.3-0.0': {'raErr': 0.225969979162, 'decErr': 0.225969979162, 'radius': 0.927298739579, 'ra': 170.511752225, 'radiusErr': 0.0395919631572, 'dec': -61.0622576573}, 'SNR349.9+0.9': {'raErr': 0.216179967762, 'decErr': 0.213377009531, 'radius': 0.0, 'ra': 258.886658595, 'radiusErr': 0.0, 'dec': -36.8597486164}, 'SNR19.6-3.3': {'raErr': 0.344205084869, 'decErr': 0.344205084869, 'radius': 3.04704002084, 'ra': 279.659282116, 
'radiusErr': 0.0545954052338, 'dec': -13.3957517685}, 'SNR3.2-2.2': {'raErr': 0.227442192502, 'decErr': 0.221265865521, 'radius': 0.0, 'ra': 270.352408502, 'radiusErr': 0.0, 'dec': -27.2731176644}, 'SNR326.9-2.1': {'raErr': 0.217548504499, 'decErr': 0.217548504499, 'radius': 0.938827953143, 'ra': 239.409968227, 'radiusErr': 0.0254444041526, 'dec': -56.0366371638}, 'SNR318.7-0.9': {'raErr': 0.235692743113, 'decErr': 0.224930318533, 'radius': 0.0, 'ra': 225.390216638, 'radiusErr': 0.0, 'dec': -59.7206378884}, 'SNR298.8+1.8': {'raErr': 0.225842286172, 'decErr': 0.221696652515, 'radius': 0.0, 'ra': 184.342648857, 'radiusErr': 0.0, 'dec': -60.7649405423}, 'SNR28.3-3.0': {'raErr': 0.228599044991, 'decErr': 0.228599044991, 'radius': 0.840031316494, 'ra': 283.438429381, 'radiusErr': 0.0356132924753, 'dec': -5.50680845713}, 'SNR349.9-0.6': {'raErr': 0.216028596904, 'decErr': 0.213251133676, 'radius': 0.0, 'ra': 260.422638137, 'radiusErr': 0.0, 'dec': -37.7348753207}, 'SNR110.2-0.5': {'raErr': 0.236849515328, 'decErr': 0.230260437312, 'radius': 0.0, 'ra': 346.95051627, 'radiusErr': 0.0, 'dec': 59.81091111}, 'SNR349.0-2.8': {'raErr': 0.29074521768, 'decErr': 0.384293598602, 'radius': 2.83871530095, 'ra': 262.10978964, 'radiusErr': 0.0871497833017, 'dec': -39.7314877386}, 'SNR38.0-1.2': {'raErr': 0.283037547031, 'decErr': 0.283037547031, 'radius': 1.34579950505, 'ra': 286.233309323, 'radiusErr': 0.0532988018448, 'dec': 3.94808854222}, 'SNR19.6-1.5': {'raErr': 0.257374355229, 'decErr': 0.257374355229, 'radius': 1.74060124067, 'ra': 278.047525227, 'radiusErr': 0.0919806094943, 'dec': -12.5523966455}, 'SNR91.5+4.2': {'raErr': 0.263638985711, 'decErr': 0.263638985711, 'radius': 1.47211097459, 'ra': 314.685294819, 'radiusErr': 0.059201947235, 'dec': 52.313575239}, 'SNR21.3-2.3': {'raErr': 0.294407339889, 'decErr': 0.246261917561, 'radius': 0.0, 'ra': 279.548261093, 'radiusErr': 0.0, 'dec': -11.4031668834}, 'SNR75.2+1.0': {'raErr': 0.215353155464, 'decErr': 0.213779870664, 'radius': 
0.0, 'ra': 304.349927117, 'radiusErr': 0.0, 'dec': 37.3362577335}, 'SNR185.6-3.4': {'raErr': 0.202794333677, 'decErr': 0.20271688318, 'radius': 0.0, 'ra': 86.4210131902, 'radiusErr': 0.0, 'dec': 22.4377998483}, 'SNR31.0+0.3': {'raErr': 0.248025266891, 'decErr': 0.248025266891, 'radius': 0.607140537348, 'ra': 281.743487302, 'radiusErr': 0.0669140104212, 'dec': -1.55237255283}, 'SNR338.9-0.7': {'raErr': 0.225481752104, 'decErr': 0.225481752104, 'radius': 1.05623055203, 'ra': 251.521101135, 'radiusErr': 0.0446777776714, 'dec': -46.5753551249}, 'SNR330.6-0.0': {'raErr': 0.234954432574, 'decErr': 0.234954432574, 'radius': 0.966485195736, 'ra': 241.813325782, 'radiusErr': 0.0348944403116, 'dec': -52.0696464985}, 'SNR179.7-1.6': {'raErr': 0.295735580209, 'decErr': 0.295735580209, 'radius': 2.2746520516, 'ra': 84.6514062551, 'radiusErr': 0.0627806184125, 'dec': 28.3537157867}, 'SNR314.3+1.0': {'raErr': 0.274438303389, 'decErr': 0.274438303389, 'radius': 1.29691980645, 'ra': 215.909934841, 'radiusErr': 0.0371154850299, 'dec': -59.8191299263}, 'SNR298.4+11.8': {'raErr': 0.313375760068, 'decErr': 0.313375760068, 'radius': 1.76035220138, 'ra': 185.757884938, 'radiusErr': 0.0768070296844, 'dec': -50.8660030744}, 'SNR27.9-0.7': {'raErr': 0.258584479144, 'decErr': 0.258584479144, 'radius': 1.67659641636, 'ra': 281.196489622, 'radiusErr': 0.0440245156953, 'dec': -4.7862936741}, 'SNR24.4-0.1': {'raErr': 0.234521546046, 'decErr': 0.234521546046, 'radius': 1.12957934733, 'ra': 279.072202377, 'radiusErr': 0.0370680188077, 'dec': -7.60274129272}, 'SNR15.4-3.2': {'raErr': 0.236252794145, 'decErr': 0.236252794145, 'radius': 0.904611875572, 'ra': 277.632251388, 'radiusErr': 0.0661079886355, 'dec': -17.1098893637}, 'SNR1.5-1.8': {'raErr': 0.301136570075, 'decErr': 0.350579204193, 'radius': 3.97194529144, 'ra': 269.083870298, 'radiusErr': 0.0106048222889, 'dec': -28.5346640059}, 'SNR27.7-2.1': {'raErr': 0.251936191748, 'decErr': 0.251936191748, 'radius': 1.36738670143, 'ra': 282.350910245, 
'radiusErr': 0.0378777867137, 'dec': -5.58498279785}, 'SNR47.3+0.1': {'raErr': 0.232245266658, 'decErr': 0.22595209949, 'radius': 0.0, 'ra': 289.433674631, 'radiusErr': 0.0, 'dec': 12.8215567861}, 'SNR340.0+1.2': {'raErr': 0.222737932148, 'decErr': 0.222737932148, 'radius': 0.511918833626, 'ra': 250.42984654, 'radiusErr': 0.0273043571285, 'dec': -44.4949793168}, 'SNR30.0-1.9': {'raErr': 0.232168094995, 'decErr': 0.232168094995, 'radius': 1.69218827621, 'ra': 283.220120639, 'radiusErr': 0.0330618850719, 'dec': -3.48120201817}, 'SNR7.0-2.0': {'raErr': 0.312222919137, 'decErr': 0.267941316416, 'radius': 0.0, 'ra': 272.286431813, 'radiusErr': 0.0, 'dec': -23.9070487126}, 'SNR44.2-1.5': {'raErr': 0.208068049359, 'decErr': 0.20768641136, 'radius': 0.0, 'ra': 289.366755954, 'radiusErr': 0.0, 'dec': 9.27952062116}, 'SNR340.5-1.4': {'raErr': 0.217865253515, 'decErr': 0.217865253515, 'radius': 1.1099562952, 'ra': 253.750178228, 'radiusErr': 0.026096770521, 'dec': -45.728520612}, 'SNR212.0-0.4': {'raErr': 0.236335938673, 'decErr': 0.233634411825, 'radius': 0.0, 'ra': 102.064573789, 'radiusErr': 0.0, 'dec': 0.585185159102}, 'SNR33.8-0.6': {'raErr': 0.282578061319, 'decErr': 0.282578061319, 'radius': 1.71854086244, 'ra': 283.818949902, 'radiusErr': 0.093282108915, 'dec': 0.518769676872}, 'SNR292.0+3.3': {'raErr': 0.216935056034, 'decErr': 0.215769395535, 'radius': 0.0, 'ra': 172.093936919, 'radiusErr': 0.0, 'dec': -57.8300773879}, 'SNR80.6+0.6': {'raErr': 0.229616117211, 'decErr': 0.229616117211, 'radius': 1.67791494779, 'ra': 308.825443364, 'radiusErr': 0.0342199404282, 'dec': 41.4608292626}, 'SNR113.6-1.9': {'raErr': 0.210205443684, 'decErr': 0.209880072488, 'radius': 0.0, 'ra': 354.133277338, 'radiusErr': 0.0, 'dec': 59.6193946617}, 'SNR31.7-1.4': {'raErr': 0.380821634236, 'decErr': 0.380821634236, 'radius': 3.28938499231, 'ra': 283.57307866, 'radiusErr': 0.0966106582556, 'dec': -1.72169220629}, 'SNR76.5+0.5': {'raErr': 0.239194370141, 'decErr': 0.237042031629, 'radius': 
0.0, 'ra': 305.79604969, 'radiusErr': 0.0, 'dec': 38.0899242377}, 'SNR359.9-3.9': {'raErr': 0.370403396666, 'decErr': 0.370403396666, 'radius': 3.40579876002, 'ra': 270.248088067, 'radiusErr': 0.0322331856041, 'dec': -31.0404354684}, 'SNR13.9-2.5': {'raErr': 0.246910688874, 'decErr': 0.230218282388, 'radius': 0.0, 'ra': 276.166824682, 'radiusErr': 0.0, 'dec': -18.0610935207}, 'SNR1.8-4.3': {'raErr': 0.289021434669, 'decErr': 0.289021434669, 'radius': 3.81421375835, 'ra': 271.728824097, 'radiusErr': 0.0379135371627, 'dec': -29.5968366794}, 'SNR356.2+0.3': {'raErr': 0.235796937237, 'decErr': 0.231081834107, 'radius': 0.0, 'ra': 263.780658892, 'radiusErr': 0.0, 'dec': -31.9487883966}, 'SNR284.4-1.1': {'raErr': 0.208806630737, 'decErr': 0.208371410621, 'radius': 0.0, 'ra': 155.495025217, 'radiusErr': 0.0, 'dec': -58.5085968979}, 'SNR10.6+0.2': {'raErr': 0.225499913609, 'decErr': 0.225499913609, 'radius': 1.10449743091, 'ra': 272.032883863, 'radiusErr': 0.0341393645176, 'dec': -19.7032873662}, 'SNR7.2-1.0': {'raErr': 0.212435277411, 'decErr': 0.212435277411, 'radius': 1.15182527401, 'ra': 271.458009475, 'radiusErr': 0.0181587333311, 'dec': -23.2450887576}, 'SNR8.4-3.3': {'raErr': 0.24715969135, 'decErr': 0.244745829424, 'radius': 0.0, 'ra': 274.287183754, 'radiusErr': 0.0, 'dec': -23.2733938363}, 'SNR10.6-0.0': {'raErr': 0.216131293557, 'decErr': 0.216131293557, 'radius': 1.1415765588, 'ra': 272.248039653, 'radiusErr': 0.02081870657, 'dec': -19.7838255334}, 'SNR19.5-0.5': {'raErr': 0.329306643087, 'decErr': 0.329306643087, 'radius': 3.33374746458, 'ra': 277.035347808, 'radiusErr': 0.0638087205202, 'dec': -12.1823823697}, 'SNR29.9-0.1': {'raErr': 0.226943543794, 'decErr': 0.220524036188, 'radius': 0.0, 'ra': 281.586115676, 'radiusErr': 0.0, 'dec': -2.74345446505}, 'SNR333.5+1.0': {'raErr': 0.225792647797, 'decErr': 0.22178170716, 'radius': 0.0, 'ra': 244.032656947, 'radiusErr': 0.0, 'dec': -49.305185678}, 'SNR19.0-3.8': {'raErr': 0.240504129745, 'decErr': 0.240504129745, 
'radius': 2.97308953392, 'ra': 279.888971769, 'radiusErr': 0.0297750778352, 'dec': -14.1878114482}, 'SNR19.1-4.5': {'raErr': 0.258761006736, 'decErr': 0.258761006736, 'radius': 1.43509359229, 'ra': 280.599105519, 'radiusErr': 0.0528473369628, 'dec': -14.3515550345}, 'SNR25.1-1.8': {'raErr': 0.229996217496, 'decErr': 0.229996217496, 'radius': 1.03107244152, 'ra': 280.858348847, 'radiusErr': 0.0380107405473, 'dec': -7.82248442903}, 'SNR13.1-1.2': {'raErr': 0.234350036095, 'decErr': 0.226731586895, 'radius': 0.0, 'ra': 274.602218968, 'radiusErr': 0.0, 'dec': -18.1865742766}, 'SNR338.9+0.1': {'raErr': 0.222766347396, 'decErr': 0.218998825863, 'radius': 0.0, 'ra': 250.52920877, 'radiusErr': 0.0, 'dec': -46.0119304027}, 'SNR32.2-0.1': {'raErr': 0.224030013612, 'decErr': 0.224030013612, 'radius': 1.28305718136, 'ra': 282.658087263, 'radiusErr': 0.0542669367717, 'dec': -0.688323988425}, 'SNR33.6+0.4': {'raErr': 0.274322951956, 'decErr': 0.245689538065, 'radius': 0.0, 'ra': 282.788714843, 'radiusErr': 0.0, 'dec': 0.823801501246}, 'SNR0.2-1.1': {'raErr': 0.231727942038, 'decErr': 0.229055997984, 'radius': 0.0, 'ra': 267.577987688, 'radiusErr': 0.0, 'dec': -29.3043856157}, 'SNR50.6-0.8': {'raErr': 0.209175932656, 'decErr': 0.209175932656, 'radius': 1.05319549445, 'ra': 291.900664176, 'radiusErr': 0.0159060774408, 'dec': 15.2813222061}, 'SNR21.1-2.8': {'raErr': 0.243908532658, 'decErr': 0.243908532658, 'radius': 2.39797220885, 'ra': 279.965228189, 'radiusErr': 0.0543193832304, 'dec': -11.7788597994}, 'SNR324.4-0.0': {'raErr': 0.219817336197, 'decErr': 0.218253702216, 'radius': 0.0, 'ra': 233.713747774, 'radiusErr': 0.0, 'dec': -55.9346857956}, 'SNR334.2-1.9': {'raErr': 0.233771109082, 'decErr': 0.233771109082, 'radius': 1.38014246042, 'ra': 248.115723197, 'radiusErr': 0.0490037345498, 'dec': -50.8741175307}, 'SNR338.2-0.9': {'raErr': 0.254174197169, 'decErr': 0.254174197169, 'radius': 1.26483497373, 'ra': 251.025366585, 'radiusErr': 0.0286707253048, 'dec': -47.2488966627}}
flaresData = {'Flare54': {'dec': -6.21, 'radius': 1.8, 'ra': 174.2}, 'Flare55': {'dec': 13.27, 'radius': 1.01, 'ra': 238.56}, 'Flare56': {'dec': 52.7, 'radius': 1.39, 'ra': 121.5}, 'Flare57': {'dec': 78.56, 'radius': 0.68, 'ra': 271.96}, 'Flare50': {'dec': 50.57, 'radius': 0.63, 'ra': 132.32}, 'Flare51': {'dec': 32.07, 'radius': 1.01, 'ra': 350.45}, 'Flare52': {'dec': 4.84, 'radius': 1.39, 'ra': 153.94}, 'Flare53': {'dec': 11.11, 'radius': 1.01, 'ra': 72.07}, 'Flare58': {'dec': -46.67, 'radius': 1.01, 'ra': 41.7}, 'Flare59': {'dec': -1.96, 'radius': 1.39, 'ra': 323.52}, 'Flare138': {'dec': 0.91, 'radius': 1.8, 'ra': 165.18}, 'Flare139': {'dec': -27.99, 'radius': 1.39, 'ra': 34.97}, 'Flare134': {'dec': 35.73, 'radius': 1.39, 'ra': 34.79}, 'Flare135': {'dec': -48.83, 'radius': 1.39, 'ra': 269.92}, 'Flare136': {'dec': 38.28, 'radius': 0.63, 'ra': 249.16}, 'Flare137': {'dec': -29.32, 'radius': 1.39, 'ra': 161.57}, 'Flare130': {'dec': 42.92, 'radius': 0.84, 'ra': 36.29}, 'Flare131': {'dec': -50.17, 'radius': 1.39, 'ra': 137.24}, 'Flare132': {'dec': -21.06, 'radius': 1.8, 'ra': 57.75}, 'Flare133': {'dec': -2.43, 'radius': 1.8, 'ra': 55.26}, 'Flare166': {'dec': -31.93, 'radius': 1.39, 'ra': 155.78}, 'Flare61': {'dec': 45.71, 'radius': 1.01, 'ra': 315.87}, 'Flare60': {'dec': -26.66, 'radius': 1.39, 'ra': 333.6}, 'Flare63': {'dec': -33.7, 'radius': 1.01, 'ra': 258.88}, 'Flare62': {'dec': -12.98, 'radius': 0.84, 'ra': 263.2}, 'Flare65': {'dec': 60.67, 'radius': 0.95, 'ra': 39.28}, 'Flare64': {'dec': -4.97, 'radius': 0.59, 'ra': 203.18}, 'Flare67': {'dec': -25.88, 'radius': 0.63, 'ra': 191.56}, 'Flare66': {'dec': 18.02, 'radius': 1.01, 'ra': 259.88}, 'Flare69': {'dec': -1.49, 'radius': 1.39, 'ra': 65.66}, 'Flare68': {'dec': 45.49, 'radius': 1.39, 'ra': 79.32}, 'Flare161': {'dec': 28.58, 'radius': 0.59, 'ra': 39.5}, 'Flare160': {'dec': -11.54, 'radius': 0.68, 'ra': 18.94}, 'Flare129': {'dec': -53.72, 'radius': 1.39, 'ra': 332.38}, 'Flare128': {'dec': 32.35, 'radius': 0.63, 
'ra': 282.58}, 'Flare127': {'dec': 15.5, 'radius': 1.39, 'ra': 30.95}, 'Flare126': {'dec': 2.16, 'radius': 0.63, 'ra': 187.43}, 'Flare125': {'dec': -20.05, 'radius': 1.3, 'ra': 287.56}, 'Flare124': {'dec': 52.1, 'radius': 1.01, 'ra': 265.8}, 'Flare123': {'dec': 4.69, 'radius': 0.74, 'ra': 76.12}, 'Flare122': {'dec': 61.53, 'radius': 1.39, 'ra': 121.02}, 'Flare121': {'dec': -1.77, 'radius': 1.39, 'ra': 75.92}, 'Flare120': {'dec': -83.96, 'radius': 1.39, 'ra': 328.85}, 'Flare165': {'dec': 19.5, 'radius': 1.01, 'ra': 108.05}, 'Flare173': {'dec': -29.58, 'radius': 1.39, 'ra': 207.21}, 'Flare164': {'dec': 67.09, 'radius': 0.49, 'ra': 283.29}, 'Flare78': {'dec': 5.78, 'radius': 1.39, 'ra': 100.11}, 'Flare79': {'dec': -39.31, 'radius': 0.63, 'ra': 270.92}, 'Flare76': {'dec': 16.01, 'radius': 1.8, 'ra': 81.3}, 'Flare77': {'dec': 47.51, 'radius': 1.8, 'ra': 24.81}, 'Flare74': {'dec': -30.49, 'radius': 0.86, 'ra': 328.63}, 'Flare75': {'dec': 37.19, 'radius': 1.39, 'ra': 303.22}, 'Flare72': {'dec': -15.89, 'radius': 0.41, 'ra': 356.35}, 'Flare73': {'dec': -51.19, 'radius': 0.68, 'ra': 32.42}, 'Flare70': {'dec': 48.85, 'radius': 0.59, 'ra': 198.29}, 'Flare71': {'dec': -56.45, 'radius': 1.39, 'ra': 36.99}, 'Flare112': {'dec': 33.76, 'radius': 1.39, 'ra': 305.99}, 'Flare113': {'dec': 49.73, 'radius': 0.68, 'ra': 178.24}, 'Flare110': {'dec': -35.87, 'radius': 1.39, 'ra': 136.3}, 'Flare111': {'dec': 4.69, 'radius': 1.39, 'ra': 162.82}, 'Flare116': {'dec': 31.62, 'radius': 0.59, 'ra': 230.75}, 'Flare117': {'dec': 11.8, 'radius': 0.84, 'ra': 338.35}, 'Flare114': {'dec': -13.9, 'radius': 1.39, 'ra': 149.43}, 'Flare115': {'dec': 50.11, 'radius': 1.39, 'ra': 265.32}, 'Flare118': {'dec': -66.29, 'radius': 1.39, 'ra': 353.25}, 'Flare119': {'dec': -49.79, 'radius': 0.84, 'ra': 352.17}, 'Flare198': {'dec': 34.83, 'radius': 1.8, 'ra': 167.58}, 'Flare199': {'dec': -22.74, 'radius': 1.39, 'ra': 195.04}, 'Flare192': {'dec': 54.96, 'radius': 0.68, 'ra': 115.18}, 'Flare193': {'dec': -47.35, 
'radius': 0.74, 'ra': 314.16}, 'Flare190': {'dec': -54.96, 'radius': 1.39, 'ra': 84.37}, 'Flare191': {'dec': -60.37, 'radius': 1.39, 'ra': 66.94}, 'Flare196': {'dec': -39.36, 'radius': 1.39, 'ra': 339.75}, 'Flare197': {'dec': -38.05, 'radius': 0.49, 'ra': 67.02}, 'Flare194': {'dec': 24.37, 'radius': 1.39, 'ra': 149.38}, 'Flare195': {'dec': 50.77, 'radius': 0.74, 'ra': 330.62}, 'Flare167': {'dec': -21.19, 'radius': 0.84, 'ra': 291.03}, 'Flare170': {'dec': -40.44, 'radius': 1.01, 'ra': 353.29}, 'Flare171': {'dec': -31.63, 'radius': 1.8, 'ra': 229.26}, 'Flare172': {'dec': -24.46, 'radius': 0.84, 'ra': 45.65}, 'Flare105': {'dec': -12.27, 'radius': 0.84, 'ra': 132.41}, 'Flare104': {'dec': -36.49, 'radius': 1.39, 'ra': 37.14}, 'Flare107': {'dec': -13.39, 'radius': 0.53, 'ra': 233.37}, 'Flare106': {'dec': 1.77, 'radius': 1.39, 'ra': 33.84}, 'Flare101': {'dec': -55.85, 'radius': 1.8, 'ra': 202.86}, 'Flare100': {'dec': -64.92, 'radius': 0.86, 'ra': 195.99}, 'Flare103': {'dec': 4.69, 'radius': 1.39, 'ra': 203.41}, 'Flare102': {'dec': 1.35, 'radius': 1.8, 'ra': 48.14}, 'Flare174': {'dec': 44.19, 'radius': 1.39, 'ra': 300.26}, 'Flare109': {'dec': -3.85, 'radius': 1.39, 'ra': 350.73}, 'Flare108': {'dec': 30.21, 'radius': 1.39, 'ra': 208.34}, 'Flare175': {'dec': -56.45, 'radius': 1.39, 'ra': 119.83}, 'Flare189': {'dec': -5.5, 'radius': 0.84, 'ra': 170.21}, 'Flare176': {'dec': 45.07, 'radius': 0.84, 'ra': 103.04}, 'Flare200': {'dec': 24.32, 'radius': 1.39, 'ra': 153.47}, 'Flare185': {'dec': 7.23, 'radius': 0.84, 'ra': 83.26}, 'Flare184': {'dec': 16.52, 'radius': 0.44, 'ra': 39.72}, 'Flare187': {'dec': 32.49, 'radius': 0.84, 'ra': 198.25}, 'Flare177': {'dec': 65.88, 'radius': 1.39, 'ra': 148.76}, 'Flare181': {'dec': 1.59, 'radius': 1.39, 'ra': 137.67}, 'Flare180': {'dec': -36.43, 'radius': 1.39, 'ra': 80.08}, 'Flare183': {'dec': -36.21, 'radius': 0.59, 'ra': 60.65}, 'Flare182': {'dec': -0.74, 'radius': 1.01, 'ra': 70.45}, 'Flare18': {'dec': 54.9, 'radius': 1.39, 'ra': 181.36}, 
'Flare19': {'dec': 28.17, 'radius': 1.39, 'ra': 141.14}, 'Flare201': {'dec': 1.13, 'radius': 1.08, 'ra': 147.17}, 'Flare10': {'dec': -52.35, 'radius': 1.01, 'ra': 259.7}, 'Flare11': {'dec': -6.03, 'radius': 1.8, 'ra': 341.82}, 'Flare12': {'dec': -30.2, 'radius': 1.8, 'ra': 101.75}, 'Flare13': {'dec': 42.23, 'radius': 0.56, 'ra': 330.56}, 'Flare14': {'dec': 41.0, 'radius': 1.39, 'ra': 250.49}, 'Flare15': {'dec': 14.14, 'radius': 0.56, 'ra': 111.15}, 'Flare16': {'dec': -62.38, 'radius': 1.8, 'ra': 256.22}, 'Flare17': {'dec': 33.08, 'radius': 0.84, 'ra': 110.14}, 'Flare83': {'dec': 10.1, 'radius': 0.68, 'ra': 33.18}, 'Flare82': {'dec': -9.12, 'radius': 0.5, 'ra': 228.34}, 'Flare81': {'dec': 17.72, 'radius': 1.8, 'ra': 326.07}, 'Flare80': {'dec': -70.28, 'radius': 1.39, 'ra': 91.08}, 'Flare87': {'dec': -35.96, 'radius': 1.39, 'ra': 288.58}, 'Flare86': {'dec': -35.57, 'radius': 0.63, 'ra': 224.76}, 'Flare85': {'dec': 24.17, 'radius': 1.39, 'ra': 116.92}, 'Flare84': {'dec': 32.55, 'radius': 1.01, 'ra': 268.36}, 'Flare178': {'dec': 70.23, 'radius': 0.84, 'ra': 266.67}, 'Flare179': {'dec': 36.98, 'radius': 1.39, 'ra': 112.62}, 'Flare89': {'dec': 33.63, 'radius': 0.46, 'ra': 95.65}, 'Flare88': {'dec': 40.93, 'radius': 1.08, 'ra': 308.59}, 'Flare163': {'dec': 1.56, 'radius': 1.8, 'ra': 116.55}, 'Flare2': {'dec': 9.38, 'radius': 1.39, 'ra': 267.78}, 'Flare3': {'dec': -44.26, 'radius': 0.51, 'ra': 84.07}, 'Flare0': {'dec': 39.12, 'radius': 0.74, 'ra': 263.99}, 'Flare1': {'dec': 61.27, 'radius': 0.84, 'ra': 18.12}, 'Flare6': {'dec': 44.87, 'radius': 0.56, 'ra': 206.64}, 'Flare7': {'dec': -7.55, 'radius': 0.56, 'ra': 306.65}, 'Flare4': {'dec': -38.36, 'radius': 1.39, 'ra': 299.07}, 'Flare5': {'dec': -27.82, 'radius': 0.84, 'ra': 343.02}, 'Flare8': {'dec': 48.59, 'radius': 1.39, 'ra': 283.36}, 'Flare9': {'dec': -55.05, 'radius': 1.8, 'ra': 7.28}, 'Flare188': {'dec': 70.77, 'radius': 1.39, 'ra': 131.43}, 'Flare90': {'dec': -46.03, 'radius': 1.8, 'ra': 320.5}, 'Flare91': {'dec': 
-11.71, 'radius': 0.95, 'ra': 112.24}, 'Flare92': {'dec': 2.49, 'radius': 1.39, 'ra': 122.8}, 'Flare93': {'dec': -40.37, 'radius': 1.39, 'ra': 53.43}, 'Flare94': {'dec': -42.84, 'radius': 1.39, 'ra': 299.19}, 'Flare95': {'dec': -53.17, 'radius': 0.84, 'ra': 159.57}, 'Flare96': {'dec': 29.94, 'radius': 1.39, 'ra': 184.36}, 'Flare97': {'dec': 47.56, 'radius': 1.39, 'ra': 250.29}, 'Flare98': {'dec': -32.97, 'radius': 1.39, 'ra': 268.0}, 'Flare99': {'dec': 10.49, 'radius': 0.35, 'ra': 226.28}, 'Flare169': {'dec': -34.21, 'radius': 1.8, 'ra': 84.89}, 'Flare168': {'dec': 31.46, 'radius': 1.39, 'ra': 329.9}, 'Flare186': {'dec': 41.63, 'radius': 0.84, 'ra': 50.15}, 'Flare156': {'dec': -64.43, 'radius': 1.8, 'ra': 171.21}, 'Flare157': {'dec': 68.58, 'radius': 0.84, 'ra': 255.11}, 'Flare154': {'dec': 4.41, 'radius': 1.01, 'ra': 190.12}, 'Flare155': {'dec': 79.94, 'radius': 1.39, 'ra': 57.74}, 'Flare152': {'dec': -21.06, 'radius': 0.51, 'ra': 278.63}, 'Flare153': {'dec': 60.94, 'radius': 0.63, 'ra': 158.35}, 'Flare150': {'dec': -70.17, 'radius': 1.01, 'ra': 202.32}, 'Flare151': {'dec': 6.21, 'radius': 1.39, 'ra': 160.17}, 'Flare206': {'dec': 40.39, 'radius': 1.39, 'ra': 351.55}, 'Flare207': {'dec': -22.6, 'radius': 0.74, 'ra': 42.88}, 'Flare204': {'dec': 30.37, 'radius': 0.68, 'ra': 30.87}, 'Flare205': {'dec': 11.16, 'radius': 1.39, 'ra': 309.08}, 'Flare202': {'dec': -48.41, 'radius': 0.59, 'ra': 83.24}, 'Flare203': {'dec': -75.45, 'radius': 0.68, 'ra': 327.06}, 'Flare158': {'dec': 43.38, 'radius': 0.51, 'ra': 257.7}, 'Flare159': {'dec': 22.39, 'radius': 1.8, 'ra': 52.48}, 'Flare25': {'dec': -2.74, 'radius': 1.8, 'ra': 136.96}, 'Flare24': {'dec': -2.3, 'radius': 1.39, 'ra': 7.85}, 'Flare27': {'dec': 0.84, 'radius': 1.39, 'ra': 129.98}, 'Flare26': {'dec': 13.97, 'radius': 1.01, 'ra': 84.5}, 'Flare21': {'dec': -8.21, 'radius': 0.63, 'ra': 122.14}, 'Flare20': {'dec': 71.14, 'radius': 0.56, 'ra': 109.46}, 'Flare23': {'dec': 58.34, 'radius': 0.68, 'ra': 16.04}, 'Flare22': {'dec': 
39.92, 'radius': 0.84, 'ra': 176.37}, 'Flare29': {'dec': 44.56, 'radius': 0.68, 'ra': 140.16}, 'Flare28': {'dec': -5.35, 'radius': 1.8, 'ra': 4.77}, 'Flare162': {'dec': 9.36, 'radius': 1.39, 'ra': 352.57}, 'Flare47': {'dec': 23.04, 'radius': 1.39, 'ra': 18.03}, 'Flare46': {'dec': -7.87, 'radius': 1.3, 'ra': 337.38}, 'Flare45': {'dec': -33.63, 'radius': 1.39, 'ra': 199.72}, 'Flare44': {'dec': -11.17, 'radius': 0.74, 'ra': 207.78}, 'Flare43': {'dec': -21.46, 'radius': 1.39, 'ra': 352.31}, 'Flare42': {'dec': -20.18, 'radius': 0.84, 'ra': 97.19}, 'Flare41': {'dec': 32.52, 'radius': 1.8, 'ra': 53.44}, 'Flare40': {'dec': 16.21, 'radius': 0.53, 'ra': 343.57}, 'Flare49': {'dec': -19.23, 'radius': 0.68, 'ra': 172.33}, 'Flare48': {'dec': -23.56, 'radius': 0.49, 'ra': 74.06}, 'Flare149': {'dec': 21.37, 'radius': 0.51, 'ra': 186.22}, 'Flare148': {'dec': 27.29, 'radius': 1.39, 'ra': 265.0}, 'Flare141': {'dec': 81.38, 'radius': 1.39, 'ra': 161.49}, 'Flare140': {'dec': 32.2, 'radius': 1.01, 'ra': 18.41}, 'Flare143': {'dec': -5.82, 'radius': 0.51, 'ra': 194.27}, 'Flare142': {'dec': -46.74, 'radius': 1.39, 'ra': 105.88}, 'Flare145': {'dec': 56.84, 'radius': 1.01, 'ra': 276.32}, 'Flare144': {'dec': 37.89, 'radius': 1.39, 'ra': 165.75}, 'Flare147': {'dec': -61.6, 'radius': 0.84, 'ra': 38.73}, 'Flare146': {'dec': 20.45, 'radius': 0.74, 'ra': 133.77}, 'Flare211': {'dec': -17.3, 'radius': 1.8, 'ra': 30.96}, 'Flare210': {'dec': 29.51, 'radius': 0.63, 'ra': 180.23}, 'Flare213': {'dec': -25.73, 'radius': 0.74, 'ra': 246.58}, 'Flare212': {'dec': 4.21, 'radius': 1.39, 'ra': 127.78}, 'Flare208': {'dec': 48.34, 'radius': 1.39, 'ra': 254.18}, 'Flare209': {'dec': 34.34, 'radius': 0.68, 'ra': 347.93}, 'Flare32': {'dec': 10.4, 'radius': 0.74, 'ra': 47.18}, 'Flare33': {'dec': -52.33, 'radius': 1.39, 'ra': 276.98}, 'Flare30': {'dec': -25.2, 'radius': 1.39, 'ra': 55.7}, 'Flare31': {'dec': 77.17, 'radius': 1.8, 'ra': 256.91}, 'Flare36': {'dec': 55.3, 'radius': 1.39, 'ra': 198.03}, 'Flare37': {'dec': 
35.99, 'radius': 0.74, 'ra': 214.86}, 'Flare34': {'dec': -80.23, 'radius': 1.39, 'ra': 287.49}, 'Flare35': {'dec': 21.6, 'radius': 1.39, 'ra': 83.23}, 'Flare38': {'dec': -14.4, 'radius': 0.68, 'ra': 339.35}, 'Flare39': {'dec': -41.83, 'radius': 0.63, 'ra': 217.09}}
"""
Explanation: Note which version of import your colleague used in their code.
Hint! Part 2:
A. By the definition in Part 1, the maximum area of overlap occurs when one circle is fully contained within the other. Thus the fractional overlap is defined as the intersection area divided by the area of the smaller circle (the one with the minimum radius).
B. There are several ways to convert the given error measurement (x1Err, y1Err) into a radius. Consider the relative sizes and their impact on the final fraction calculated. Optionally, allow the user to select which option to use!
Uncompleted parts make good homework!
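Hint A can be sketched in code. Below is a minimal sketch, assuming flat Cartesian geometry as the exercise suggests: `circle_overlap_area` uses the standard circle-circle "lens" intersection formula, and `fractional_overlap` divides the intersection area by the area of the smaller circle:

```python
import math

def circle_overlap_area(d, r1, r2):
    """Intersection area of two circles with radii r1, r2 whose centers are d apart."""
    if d >= r1 + r2:                 # circles are disjoint
        return 0.0
    if d <= abs(r1 - r2):            # one circle lies fully inside the other
        return math.pi * min(r1, r2) ** 2
    # Standard circle-circle "lens" formula for partial overlap
    a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                         * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

def fractional_overlap(d, r1, r2):
    """Intersection area divided by the area of the smaller circle (hint A)."""
    return circle_overlap_area(d, r1, r2) / (math.pi * min(r1, r2) ** 2)
```

With this definition the fraction is 1.0 whenever the smaller circle is fully contained and 0.0 when the circles are disjoint; hint B (turning the error measurements into a radius) is left to the reader.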
More useful exercises:
Try writing your functions in a file and running the code from the terminal.<br>
Hint: Calling "python filename.py" will execute the code in filename.py.
Write functions to read in the data from files.
Write a wrapper function that takes the data read in from the files and calls the fractional overlap functions. <br>
candidates.txt contains the first sources' information, including position, position error, extension, and extension error (all in degrees, and incidentally in the RA & Dec coordinate system; just assume Cartesian geometry!).<br>
flare.txt contains the second sources' information, including position and position error radius (also in degrees and RA,Dec).
Write the results to the terminal and then to a file.
How else might you display the results?
Use matplotlib to display the interesting results!
Use the following data* within the notebook to test your functions with:
*(extracted from candidates.txt and flare.txt)
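The file-reading exercise might start from a sketch like the one below. The column order (name, ra, raErr, dec, decErr, radius, radiusErr) is an assumption for illustration, not the actual layout of candidates.txt or flare.txt; adjust it to match the real files:

```python
def read_sources(lines):
    """Parse whitespace-separated rows into a dict keyed by source name.

    Assumed column order: name ra raErr dec decErr [radius radiusErr].
    """
    keys = ['ra', 'raErr', 'dec', 'decErr', 'radius', 'radiusErr']
    sources = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):   # skip blank lines and comments
            continue
        name, *values = line.split()
        # zip stops at the shorter sequence, so rows with fewer columns
        # (e.g. flares without extension data) simply omit those keys
        sources[name] = dict(zip(keys, map(float, values)))
    return sources
```

Since a file object is iterable line by line, `with open('candidates.txt') as f: candidates = read_sources(f)` would then build the same kind of dictionary used below.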
End of explanation
"""
candidatesData['SNR357.0-1.0']['raErr']
from IPython.display import Image
overlapImage1 = Image(filename = 'SNR347.3-00.5_GeVext_overlap.png')
overlapImage1
overlapImage2 = Image(filename = 'SNR111.7-02.1_GeVext_overlap.png')
overlapImage2
"""
Explanation: NB: candidatesData and flaresData are dictionaries. The data in dicts is accessible via the keys, e.g. candidatesData[candidateName][variableName]:
End of explanation
"""
|
vterron/Taller-Optimizacion-Python-Pyomo | 02_PyomoOverview.ipynb | mit | !cat abstract1.py
"""
Explanation: <img src="static/pybofractal.png" alt="Pybonacci" style="width: 200px;"/>
<img src="static/cacheme_logo.png" alt="CAChemE" style="width: 300px;"/>
1. Pyomo Overview
Note: Adapted from https://github.com/Pyomo/PyomoGettingStarted, by William and Becca Hart
1.1 Mathematical Modeling
This chapter provides an introduction to Pyomo: Python Optimization Modeling Objects. A more complete description is contained in Pyomo - Optimization Modeling in Python. Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This capability is commonly associated with algebraic modeling languages (AMLs) such as AMPL, AIMMS, and GAMS. Pyomo’s modeling objects are embedded within Python, a full-featured high-level programming language that contains a rich set of supporting libraries.
Modeling is a fundamental process in many aspects of scientific research, engineering and business. Modeling involves the formulation of a simplified representation of a system or real-world object. Thus, modeling tools like Pyomo can be used in a variety of ways:
Explain phenomena that arise in a system,
Make predictions about future states of a system,
Assess key factors that influence phenomena in a system,
Identify extreme states in a system, that might represent worst-case scenarios or minimal cost plans, and
Analyze trade-offs to support human decision makers.
Mathematical models represent system knowledge with a formalized mathematical language. The following mathematical concepts are central to modern modeling activities:
Variables
Variables represent unknown or changing parts of a model
(e.g. whether or not to make a decision, or the characteristic of
a system outcome). The values taken by the variables are often
referred to as a <span style="color:darkblue">solution</span> and are usually an output of the
optimization process.
Parameters
Parameters represent the data that must be supplied to perform
the optimization. In fact, in some settings the word <span style="color:darkblue">data</span> is used in
place of the word <span style="color:darkblue">parameters</span>.
Relations
These are equations, inequalities or other mathematical relationships
that define how different parts of a model are connected to each
other.
Goals
These are functions that reflect goals and objectives for the system
being modeled.
The widespread availability of computing resources has made the numerical analysis of mathematical models a commonplace activity. Without a modeling language, the process of setting up input files, executing a solver and extracting the final results from the solver output is tedious and error prone. This difficulty is compounded in complex, large-scale real-world applications which are difficult to debug when errors occur. Additionally, there are many different formats used by optimization software packages, and few formats are recognized by many optimizers. Thus the application of multiple optimization solvers to analyze a model introduces additional complexities.
Pyomo is an AML that extends Python to include objects for mathematical modeling. Hart et al. PyomoBook, PyomoJournal compare Pyomo with other AMLs. Although many good AMLs have been developed for optimization models, the following are motivating factors for the development of Pyomo:
Open Source
Pyomo is developed within Pyomo’s open source project to promote
transparency of the modeling framework and encourage community
development of Pyomo capabilities.
Customizable Capability
Pyomo supports a customizable capability through the extensive use
of plug-ins to modularize software components.
Solver Integration
Pyomo models can be optimized with solvers that are written either in
Python or in compiled, low-level languages.
Programming Language
Pyomo leverages a high-level programming language, which has several
advantages over custom AMLs: a very robust language, extensive
documentation, a rich set of standard libraries, support for modern
programming features like classes and functions, and portability to
many platforms.
1.2 Overview of Modeling Components and Processes
Pyomo supports an object-oriented design for the definition of optimization models. The basic steps of a simple modeling process are:
Create model and declare components
Instantiate the model
Apply solver
Interrogate solver results
In practice, these steps may be applied repeatedly with different data or with different constraints applied to the model. However, we focus on this simple modeling process to illustrate different strategies for modeling with Pyomo.
A Pyomo <span style="color:darkblue">model</span> consists of a collection of modeling <span style="color:darkblue">components</span> that define different aspects of the model. Pyomo includes the modeling components that are commonly supported by modern AMLs: index sets, symbolic parameters, decision variables, objectives, and constraints. These modeling components are defined in Pyomo through the following Python classes:
Set
set data that is used to define a model instance
Param
parameter data that is used to define a model instance
Var
decision variables in a model
Objective
expressions that are minimized or maximized in a model
Constraint
constraint expressions that impose restrictions on variable
values in a model
1.3 Abstract Versus Concrete Models
A mathematical model can be defined using symbols that represent data values. For example, the following equations represent a linear program (LP) to find optimal values for the vector $x$ with parameters $n$ and $b$, and parameter vectors $a$ and $c$:
$$
\begin{array}{lll}
\min & \sum_{j=1}^n c_j x_j & \\
s.t. & \sum_{j=1}^n a_{ij} x_j \geq b_i & \forall i = 1 \ldots m \\
& x_j \geq 0 & \forall j = 1 \ldots n
\end{array}
$$
Note:
As a convenience, we use the symbol $\forall$
to mean “for all” or “for each.”
We call this an <span style="color:darkblue">abstract</span> or <span style="color:darkblue">symbolic</span> mathematical model since it relies on unspecified parameter values. Data values can be used to specify a <span style="color:darkblue">model instance</span>. The <span style="color:darkblue; font-family:Courier">AbstractModel</span> class provides a context for defining and initializing abstract optimization models in Pyomo when the data values will be supplied at the time a solution is to be obtained.
In some contexts a mathematical model can be directly defined with the data values supplied at the time of the model definition and built into the model. We call these <span style="color:darkblue">concrete</span> mathematical models. For example, the following LP model is a concrete instance of the previous abstract model:
$$
\begin{array}{ll}
\min & 2x_1 + 3x_2 \\
s.t. & 3x_1 + 4x_2 \geq 1 \\
& x_1,x_2 \geq 0
\end{array}
$$
The <span style="color:darkblue; font-family:Courier">ConcreteModel</span> class is used to define concrete optimization models in Pyomo.
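As a quick sanity check on this concrete instance (done by hand, not with Pyomo): an LP optimum lies at a vertex of the feasible region, and here the binding constraint $3x_1 + 4x_2 = 1$ meets the axes at $(1/3, 0)$ and $(0, 1/4)$, so comparing the objective at those two points suffices:

```python
# Vertices where the binding constraint 3*x1 + 4*x2 = 1 meets the axes
vertices = [(1/3, 0.0), (0.0, 1/4)]

def objective(x1, x2):
    return 2 * x1 + 3 * x2

best = min(vertices, key=lambda v: objective(*v))
# best is (1/3, 0.0), with objective value 2/3
```

This is the value a solver should report once the Pyomo model below is built and solved.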
1.4 A Simple Abstract Pyomo Model
We repeat the abstract model already given:
$$
\begin{array}{lll}
\min & \sum_{j=1}^n c_j x_j & \\
s.t. & \sum_{j=1}^n a_{ij} x_j \geq b_i & \forall i = 1 \ldots m \\
& x_j \geq 0 & \forall j = 1 \ldots n
\end{array}
$$
One way to implement this in Pyomo is as follows:
End of explanation
"""
from pyomo.environ import *
"""
Explanation: Note:
Python is interpreted one line at a time. A line continuation character, backslash, is used for Python statements that need to span multiple lines. In Python, indentation has meaning and must be consistent. For example, lines inside a function definition must be indented and the end of the indentation is used by Python to signal the end of the definition.
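For readers new to Python, both points in one small example:

```python
# A trailing backslash joins a statement across physical lines
total = 1 + 2 + \
        3

# Indentation delimits blocks: the function body is the indented lines,
# and the definition ends when the indentation returns to the left margin
def double(n):
    return 2 * n

print(total, double(total))   # prints: 6 12
```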
This first import line is required in every Pyomo model. Its purpose is to make the symbols used by Pyomo known to Python.
End of explanation
"""
model = AbstractModel()
"""
Explanation: The declaration of a model is also required. The use of the name <span style="color:darkblue; font-family:Courier">model</span> is not required. Almost any name could be used, but we will use the name <span style="color:darkblue; font-family:Courier">model</span> most of the time in this book. In this example, we are declaring that it will be an abstract model.
End of explanation
"""
model.m = Param(within=NonNegativeIntegers)
model.n = Param(within=NonNegativeIntegers)
"""
Explanation: We declare the parameters $m$ and $n$ using the Pyomo <span style="color:darkblue; font-family:Courier">Param</span> function. This function can take a variety of arguments; this example illustrates use of the <span style="color:darkblue; font-family:Courier">within</span> option that is used by Pyomo to validate the data value that is assigned to the parameter. If this option were not given, then Pyomo would not object to any type of data being assigned to these parameters. As it is, assignment of a value that is not a non-negative integer will result in an error.
End of explanation
"""
model.I = RangeSet(1, model.m)
model.J = RangeSet(1, model.n)
"""
Explanation: Although not required, it is convenient to define index sets. In this example we use the <span style="color:darkblue; font-family:Courier">RangeSet</span> function to declare that the sets will be a sequence of integers starting at 1 and ending at a value specified by the parameters <span style="color:darkblue; font-family:Courier">model.m</span> and <span style="color:darkblue; font-family:Courier">model.n</span>.
End of explanation
"""
model.a = Param(model.I, model.J)
model.b = Param(model.I)
model.c = Param(model.J)
"""
Explanation: The coefficient and right-hand-side data are defined as indexed parameters. When sets are given as arguments to the <span style="color:darkblue; font-family:Courier">Param</span> function, they indicate that the set will index the parameter.
End of explanation
"""
# the next line declares a variable indexed by the set J
model.x = Var(model.J, domain=NonNegativeReals)
"""
Explanation: Note:
In Python, and therefore in Pyomo, any text after a pound sign is considered to be a comment.
The next line interpreted by Python as part of the model declares the variable $x$. The first argument to the <span style="color:darkblue; font-family:Courier">Var</span> function is a set, so it is defined as an index set for the variable. In this case the variable has only one index set, but multiple sets could be used as was the case for the declaration of the parameter <span style="color:darkblue; font-family:Courier">model.a</span>. The second argument specifies a domain for the variable. This information is part of the model and will be passed to the solver when data is provided and the model is solved. Specification of the <span style="color:darkblue; font-family:Courier">NonNegativeReals</span> domain implements the requirement that the variables be greater than or equal to zero.
End of explanation
"""
def obj_expression(model):
return summation(model.c, model.x)
"""
Explanation: In abstract models, Pyomo expressions are usually provided to objective function and constraint declarations via a function defined with a Python <span style="color:darkblue; font-family:Courier">def</span> statement. The <span style="color:darkblue; font-family:Courier">def</span> statement establishes a name for a function along with its arguments. When Pyomo uses a function to get objective function or constraint expressions, it always passes in the model (i.e., itself) as the first argument, so the model is always the first formal argument when declaring such functions in Pyomo. Additional arguments, if needed, follow. Since summation is an extremely common part of optimization models, Pyomo provides a flexible function to accommodate it. When given two arguments, the <span style="color:darkblue; font-family:Courier">summation</span> function returns an expression for the sum of the product of the two arguments over their indexes. This only works, of course, if the two arguments have the same indexes. If it is given only one argument it returns an expression for the sum over all indexes of that argument. So in this example, when <span style="color:darkblue; font-family:Courier">summation</span> is passed the arguments <span style="color:darkblue; font-family:Courier">model.c</span>, <span style="color:darkblue; font-family:Courier">model.x</span> it returns an internal representation of the expression $\sum_{j=1}^n c_j x_j$.
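For concrete data, the expression built by summation(model.c, model.x) evaluates to an ordinary sum of products over the shared index set. A plain-Python analogue (the values here are illustrative, not part of the model):

```python
# Plain-Python analogue of summation(c, x) over a shared index set J
c = {1: 2.0, 2: 3.0}    # coefficients c_j
x = {1: 0.5, 2: 1.0}    # variable values x_j
J = [1, 2]

dot = sum(c[j] * x[j] for j in J)   # 2.0*0.5 + 3.0*1.0 = 4.0
```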
End of explanation
"""
model.OBJ = Objective(rule=obj_expression)
"""
Explanation: To declare an objective function, the Pyomo function called <span style="color:darkblue; font-family:Courier">Objective</span> is used. The <span style="color:darkblue; font-family:Courier">rule</span> argument gives the name of a function that returns the expression to be used. The default <span style="color:darkblue">sense</span> is minimization. For maximization, the <span style="color:darkblue; font-family:Courier">sense=maximize</span> argument must be used. The name that is declared, which is <span style="color:darkblue; font-family:Courier">OBJ</span> in this case, appears in some reports and can be almost any name.
End of explanation
"""
def ax_constraint_rule(model, i):
"""return the expression for the constraint for i"""
return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]
"""
Explanation: Declaration of constraints is similar. A function is declared to deliver the constraint expression. In this case, there can be multiple constraints of the same form because we index the constraints by $i$ in the expression $\sum_{j=1}^n a_{ij} x_j \geq b_i \quad \forall\, i = 1, \ldots, m$, which states that we need a constraint for each value of $i$ from one to $m$. In order to parametrize the expression by $i$ we include it as a formal parameter to the function that declares the constraint expression. Technically, we could have used anything for this argument, but that might be confusing. Using an <span style="color:darkblue; font-family:Courier">i</span> for an $i$ seems sensible in this situation.
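The following is a hypothetical plain-Python sketch (made-up data, not Pyomo's API) of what one such indexed constraint evaluates to for a given $i$:

```python
# For each i in I the rule yields the inequality sum_j a[i,j]*x[j] >= b[i].
# Here we simply evaluate it numerically for illustrative, made-up data.
a = {(1, 1): 3, (1, 2): 4}   # constraint coefficients
b = {1: 1}                   # right-hand sides
x = {1: 1.0, 2: 2.0}         # candidate variable values
I, J = [1], [1, 2]

def constraint_holds(i):
    return sum(a[i, j] * x[j] for j in J) >= b[i]

print(all(constraint_holds(i) for i in I))  # 3*1 + 4*2 = 11 >= 1, so True
```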
End of explanation
"""
# the next line creates one constraint for each member of the set model.I
model.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)
"""
Explanation: Note:
In Python, indexes are in square brackets and function arguments are in parentheses.
In order to declare constraints that use this expression, we use the Pyomo <span style="color:darkblue; font-family:Courier">Constraint</span> function that takes a variety of arguments. In this case, our model specifies that we can have more than one constraint of the same form and we have created a set, <span style="color:darkblue; font-family:Courier">model.I</span>, over which these constraints can be indexed so that is the first argument to the constraint declaration function. The next argument gives the rule that will be used to generate expressions for the constraints. Taken as a whole, this constraint declaration says that a list of constraints indexed by the set <span style="color:darkblue; font-family:Courier">model.I</span> will be created and for each member of model.I, the function <span style="color:darkblue; font-family:Courier">ax_constraint_rule</span> will be called and it will be passed the model object as well as the member of <span style="color:darkblue; font-family:Courier">model.I</span>.
End of explanation
"""
!cat abstract1.dat
"""
Explanation: In the object-oriented view of all of this, we would say that the model object is an instance of the <span style="color:darkblue; font-family:Courier">AbstractModel</span> class, and <span style="color:darkblue; font-family:Courier">model.J</span> is a <span style="color:darkblue; font-family:Courier">Set</span> object that is contained by this model. Many modeling components in Pyomo can optionally be specified as <span style="color:darkblue">indexed components</span>: collections of components that are referenced using one or more values. In this example, the parameter <span style="color:darkblue; font-family:Courier">model.c</span> is indexed with set <span style="color:darkblue; font-family:Courier">model.J</span>.
In order to use this model, data must be given for the values of the parameters. Here is one file that provides data.
End of explanation
"""
!sed -n '4,6p' abstract1.dat
"""
Explanation: There are multiple formats that can be used to provide data to a Pyomo model, but the AMPL format works well for our purposes because it contains the names of the data elements together with the data. In AMPL data files, text after a pound sign is treated as a comment. Line breaks generally do not matter, but statements must be terminated with a semi-colon.
For this particular data file, there is one constraint, so the value of <span style="color:darkblue; font-family:Courier">model.m</span> will be one, and there are two variables (i.e., the vector <span style="color:darkblue; font-family:Courier">model.x</span> is two elements long), so the value of <span style="color:darkblue; font-family:Courier">model.n</span> will be two. Both values are set with ordinary assignment statements. Notice that in AMPL-format input, the name of the model is omitted.
End of explanation
"""
!sed -n '7,18p' abstract1.dat
"""
Explanation: There is only one constraint, so only two values are needed for <span style="color:darkblue; font-family:Courier">model.a</span>. When assigning values to arrays and vectors in AMPL format, one way to do it is to give the index(es) and then the value. The line <span style="color:darkblue; font-family:Courier">1 2 4</span> causes <span style="color:darkblue; font-family:Courier">model.a[1,2]</span> to get the value 4. Since <span style="color:darkblue; font-family:Courier">model.c</span> has only one index, only one index value is needed, so, for example, the line <span style="color:darkblue; font-family:Courier">1 2</span> causes <span style="color:darkblue; font-family:Courier">model.c[1]</span> to get the value 2. Line breaks generally do not matter in AMPL-format data files, so the assignment of the value for the single index of <span style="color:darkblue; font-family:Courier">model.b</span> is given on one line since that is easy to read.
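A minimal sketch of reading such index/value lines in plain Python follows; it is only an illustration of the format, not Pyomo's actual AMPL data reader:

```python
# Each data line lists the index(es) followed by the value,
# e.g. "1 2 4" means a[1,2] = 4.
lines = ["1 2 4", "1 1 3"]

a = {}
for line in lines:
    *indexes, value = line.split()
    a[tuple(int(i) for i in indexes)] = float(value)

print(a)  # {(1, 2): 4.0, (1, 1): 3.0}
```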
End of explanation
"""
!cat abstract2.py
"""
Explanation: When working with Pyomo (or any other AML), it is often convenient to write models more abstractly by using index sets that contain strings rather than index sets implied by $1, \ldots, m$ or the summation from 1 to $n$. When this is done, the size of the set is implied by the input rather than specified directly. Furthermore, the index entries may have no natural order, and a mixture of integers and strings is often needed as indexes in the same model. To start with an illustration of general indexes, consider a slightly different Pyomo implementation of the model we just presented.
End of explanation
"""
! cat abstract2.dat
"""
Explanation: However, this model can also be fed different data for problems of the same general form using meaningful indexes.
End of explanation
"""
from pyomo.environ import *
model = ConcreteModel()
model.x = Var([1,2], domain=NonNegativeReals)
model.OBJ = Objective(expr = 2*model.x[1] + 3*model.x[2])
model.Constraint1 = Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)
"""
Explanation: 1.5 A Simple Concrete Pyomo Model
It is possible to get nearly the same flexible behavior from models declared to be abstract and models declared to be concrete in Pyomo; however, we will focus on a straightforward concrete example here where the data is hard-wired into the model file. Python programmers will quickly realize that the data could have come from other sources.
We repeat the concrete model already given:
$$\min \quad 2x_1 + 3x_2$$
$$s.t. \quad 3x_1 + 4x_2 \geq 1$$
$$x_1,x_2 \geq 0$$
This is implemented as a concrete model as follows:
End of explanation
"""
!pyomo solve abstract1.py abstract1.dat --solver=glpk
"""
Explanation: Although rule functions can also be used to specify constraints and objectives, in this example we use the <span style="color:darkblue; font-family:Courier">expr</span> option that is available only in concrete models. This option gives a direct specification of the expression.
1.6 Solving the Simple Examples
Pyomo supports modeling and scripting but does not install a solver automatically. In order to solve a model, there must be a solver installed on the computer to be used. If there is a solver, then the <span style="color:darkblue; font-family:Courier">pyomo</span> command can be used to solve a problem instance.
Suppose that the solver named glpk (also known as glpsol) is installed on the computer. Suppose further that an abstract model is in the file named <span style="color:darkblue; font-family:Courier">abstract1.py</span> and a data file for it is in the file named <span style="color:darkblue; font-family:Courier">abstract1.dat</span>. From the command prompt, with both files in the current directory, a solution can be obtained with the command:
End of explanation
"""
!pyomo solve abstract1.py abstract1.dat --solver=cplex
"""
Explanation: Since glpk is the default solver, there really is no need to specify it, so the <span style="color:darkblue; font-family:Courier">--solver</span> option can be dropped.
Note:
There are two dashes before the command line option names such as solver.
To continue the example, if CPLEX is installed then it can be listed as the solver. The command to solve with CPLEX is
End of explanation
"""
!pyomo solve abstract1.py abstract1.dat --solver=cplex --summary
"""
Explanation: This yields the following output on the screen:
The numbers is square brackets indicate how much time was required for each step. Results are written to the file named <span style="color:darkblue; font-family:Courier">results.json</span>, which has a special structure that makes it useful for post-processing. To see a summary of results written to the screen, use the <span style="color:darkblue; font-family:Courier">--summary</span> option:
End of explanation
"""
!pyomo solve --help
"""
Explanation: To see a list of Pyomo command line options, use:
End of explanation
"""
|
transcranial/keras-js | notebooks/layers/pooling/MaxPooling3D.ipynb | mit |
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: MaxPooling3D
[pooling.MaxPooling3D.0] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_last'
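As a sanity check on the shapes in these test cases, the per-dimension output length of a pooling layer can be computed directly. The helper below is an illustrative sketch of the standard formulas (with strides defaulting to the pool size, as in Keras), not part of the Keras API:

```python
import math

def pooled_length(n, pool, stride=None, padding="valid"):
    """Output length along one spatial dimension of a pooling layer."""
    stride = stride or pool  # strides=None defaults to the pool size
    if padding == "valid":
        return (n - pool) // stride + 1
    return math.ceil(n / stride)  # "same" padding

# 4x4x4 input, pool 2, default stride -> 2x2x2 output, as in this case.
print([pooled_length(4, 2) for _ in range(3)])  # [2, 2, 2]
```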
End of explanation
"""
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.1] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='valid', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 5, 2, 3)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.2] input 4x5x2x3, pool_size=(2, 2, 2), strides=(2, 1, 1), padding='valid', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(283)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.3] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='valid', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(284)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.4] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='valid', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(285)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.5] input 4x4x4x2, pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(286)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.6] input 4x4x4x2, pool_size=(2, 2, 2), strides=(1, 1, 1), padding='same', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 5, 4, 2)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(287)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.7] input 4x5x4x2, pool_size=(2, 2, 2), strides=(1, 2, 1), padding='same', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(288)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.8] input 4x4x4x2, pool_size=(3, 3, 3), strides=None, padding='same', data_format='channels_last'
End of explanation
"""
data_in_shape = (4, 4, 4, 2)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(289)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.9] input 4x4x4x2, pool_size=(3, 3, 3), strides=(3, 3, 3), padding='same', data_format='channels_last'
End of explanation
"""
data_in_shape = (2, 3, 3, 4)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(290)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.10] input 2x3x3x4, pool_size=(3, 3, 3), strides=(2, 2, 2), padding='valid', data_format='channels_first'
End of explanation
"""
data_in_shape = (2, 3, 3, 4)
L = MaxPooling3D(pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(291)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.11] input 2x3x3x4, pool_size=(3, 3, 3), strides=(1, 1, 1), padding='same', data_format='channels_first'
End of explanation
"""
data_in_shape = (3, 4, 4, 3)
L = MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# generate random input data (use seed for reproducibility)
np.random.seed(292)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling3D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [pooling.MaxPooling3D.12] input 3x4x4x3, pool_size=(2, 2, 2), strides=None, padding='valid', data_format='channels_first'
End of explanation
"""
import os
import json
filename = '../../../test/data/layers/pooling/MaxPooling3D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
"""
Explanation: export for Keras.js tests
End of explanation
"""
|
ankitpandey2708/ml | recommender-system/ml-latest-small/model.ipynb | mit |
import pandas as pd
import numpy as np
movies_df = pd.read_csv('movies.csv')
movies_df['movie_id'] = movies_df['movie_id'].apply(pd.to_numeric)
movies_df.head(3)
ratings_df=pd.read_csv('ratings.csv')
ratings_df.head(3)
"""
Explanation: Matrix Factorization via Singular Value Decomposition
Matrix factorization is the breaking down of one matrix in a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations.
So what is singular value decomposition (SVD)? At a high level, SVD is a factorization from which the best lower rank (i.e. smaller/simpler) approximation of the original matrix $R$ can be built. Mathematically, it decomposes $R$ into two unitary matrices and a diagonal matrix:
$$\begin{equation}
R = U\Sigma V^{T}
\end{equation}$$
where $R$ is the users' ratings matrix, $U$ is the user "features" matrix, $\Sigma$ is the diagonal matrix of singular values (essentially weights), and $V^{T}$ is the movie "features" matrix. $U$ and $V^{T}$ are orthogonal, and represent different things. $U$ represents how much users "like" each feature and $V^{T}$ represents how relevant each feature is to each movie.
To get the lower rank approximation, we take these matrices and keep only the top $k$ features, which we think of as the underlying tastes and preferences vectors.
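To make the rank-$k$ idea concrete, here is a tiny NumPy illustration on a made-up $3 \times 2$ matrix with $k=1$ (both choices are arbitrary, for demonstration only):

```python
import numpy as np

R = np.array([[5.0, 3.0],
              [4.0, 2.0],
              [1.0, 1.0]])
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Keeping all singular values reconstructs R exactly; truncating to the
# top k gives the best rank-k approximation (Eckart-Young theorem).
k = 1
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(np.round(R_k, 2))
```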
End of explanation
"""
R_df = ratings_df.pivot(index = 'user_id', columns ='movie_id', values = 'rating').fillna(0)
R_df.head()
"""
Explanation: These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll pivot ratings_df to get that and call the new variable R_df.
End of explanation
"""
R = R_df.to_numpy()  # .as_matrix() was removed in newer versions of pandas
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
"""
Explanation: The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
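In plain Python (with made-up numbers), the de-meaning step amounts to:

```python
# Subtract each user's (row's) mean rating from that row.
R = [[4.0, 0.0, 2.0],
     [5.0, 5.0, 5.0]]

R_demeaned = []
for row in R:
    mean = sum(row) / len(row)
    R_demeaned.append([v - mean for v in row])

print(R_demeaned)  # [[2.0, -2.0, 0.0], [0.0, 0.0, 0.0]]
```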
End of explanation
"""
from scipy.sparse.linalg import svds
U, sigma, Vt = svds(R_demeaned, k = 50)
"""
Explanation: Singular Value Decomposition
Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function svds because it lets me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it afterwards).
End of explanation
"""
sigma = np.diag(sigma)
"""
Explanation: Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form.
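For reference, converting the vector of singular values into a diagonal matrix looks like this in plain Python (made-up values; np.diag does the same thing):

```python
sigma_values = [3.0, 2.0, 1.0]
n = len(sigma_values)

# Place the values on the main diagonal of an otherwise-zero square matrix.
sigma_matrix = [[sigma_values[i] if i == j else 0.0 for j in range(n)]
                for i in range(n)]
print(sigma_matrix)  # [[3.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 1.0]]
```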
End of explanation
"""
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns)
preds_df.head()
def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations):
# Get and sort the user's predictions
    user_row_number = userID - 1  # UserID starts at 1, not 0
    sorted_user_predictions = predictions_df.iloc[user_row_number].sort_values(ascending=False)
# Get the user's data and merge in the movie information.
user_data = original_ratings_df[original_ratings_df.user_id == (userID)]
user_full = (user_data.merge(movies_df, how = 'left', left_on = 'movie_id', right_on = 'movie_id').
sort_values(['rating'], ascending=False)
)
print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0]))
print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations))
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies_df[~movies_df['movie_id'].isin(user_full['movie_id'])].
merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
left_on = 'movie_id',
right_on = 'movie_id').
rename(columns = {user_row_number: 'Predictions'}).
sort_values('Predictions', ascending = False).
iloc[:num_recommendations, :-1]
)
return user_full, recommendations
already_rated, predictions = recommend_movies(preds_df,11, movies_df, ratings_df, 10)
predictions
already_rated.head(10)
"""
Explanation: Making Predictions from the Decomposed Matrices
I now have everything I need to make movie rating predictions for every user. I can do it all at once by following the math: matrix-multiply $U$, $\Sigma$, and $V^{T}$ back together to get the rank $k=50$ approximation of $R$.
I also need to add the user means back to get the actual star ratings prediction.
End of explanation
"""
|
stephensekula/smu-honors-physics | mathematics_of_life/mathematics_of_life.ipynb | mit |
# The Python Imaging Library (PIL)
from PIL import Image, ImageDraw
# Basic math and color tools
import math, colorsys, numpy
# Mathematical plotting
import matplotlib as mpl
from matplotlib import colors as mplcolors
import matplotlib.pyplot as plt
# Displaying real graphical images (pictures)
from IPython.display import Image as ipythonImage
import pickle,glob
# Graphical representation library PANDAS
import pandas as pd
# Imports a lot of matplotlib code to run inline
#%pylab inline
%matplotlib notebook
import ipywidgets as widgets
"""
Explanation: Table of Contents
Honors Physics PHYS1010 - The Mathematics of Life
Introduction
Benoit Mandelbrot
Fractals in Python
Setup the Environment
Generating the Mandelbrot Set
Generating a Fractal Image
The Julia Set
Modeling Biological Organisms
Conclusions
Honors Physics PHYS1010 - The Mathematics of Life
Introduction
The laws of nature are remarkably simple. In their most compact form, they can be written down on the front of a tee shirt, or on the side of a coffee cup. Yet the universe is complex. Consider the images from our own planet below.
The top image of rivers and coastlines is not associated with organisms; such patterns can occur whether living creatures are around or not, simply due to water, time, and erosion. The bottom image of romanesco broccoli is a picture of a living organism with biochemistry. Yet they share something in common: a mathematical property known as "self-similarity". This means that "the parts are similar to the whole": if you zoom in closely on a part, it will geometrically resemble the whole of which it is a part.
Benoit Mandelbrot
It was the mathematician Benoit Mandelbrot who first shed serious and rigorous light on this observed feature of the natural world. Mandelbrot was interested not in the classical geometric shapes - circles, squares, triangles - but in the real shapes found in nature, which are never idealistic but exhibit beauty nonetheless.
He coined the term “fractal” to describe geometric shapes that appear the same no matter how closely you zoom into them. He arrived at this after working with sets of complex numbers: first the “Julia Set,” and later a set of his own (“The Mandelbrot Set”). We will play with both below.
Fractals describe “self-similarity” in nature. Self-similarity is everywhere. Consider the image of a tree and its branches below:
Does this resemble anything else you might have seen in nature? How about the rivers and forks above? How about the human lung's arterial system below?
Self-similarity is everywhere, whether geological forces or biochemical forces are at work. Self-similarity is found on Mars and Europa. It may be a uniting organizational feature of natural systems, as it seems to be a simple consequence of a singular mathematical concept: a system with inputs whose output is fed back into the system.
Fractals in Python
Here we will demonstrate how to generate fractal images using the Python programming language.
1. Setup the Environment
Here we import the packages we need from the existing Python libraries. Python has extensive libraries of functions that save us from having to write them ourselves.
End of explanation
"""
# Perform the Mandelbrot Set test.
divergence_test_value = 2.0
# The mandelbrot_test function takes a test number, "c", and checks if it is in the Mandelbrot set.
# If the Mandelbrot series diverges for "c", it's not in the Mandelbrot set. To help us draw the
# image, return the value of the interation (n) at which the divergence test fails. If it doesn't
# diverge, return -1 to indicate that. Try 100 iterations to see if it diverges; after 100, assume
# it converges.
mandelbrot_max_iterations = 100
def mandelbrot_test(c):
z_n = complex(0,0)
for n in range(0,mandelbrot_max_iterations):
z_n = z_n*z_n + c
if abs(z_n) > divergence_test_value:
return n
return -1
"""
Explanation: This sets up the colors we want in our fractal image.
How this works:
We are building an array of values that correspond to our colors. Colors are defined in Python as a list of three values corresponding to the percentage of Red, Green, and Blue in that color.
Black is (0.0, 0.0, 0.0) and White is (1.0, 1.0, 1.0)
Feel free to change the colors as you wish. Below, we use color maps defined in MatPlotLib to make a color scale for our images. You can choose any color map you like by string name. The string names are given in:
https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html
Below, for the Mandelbrot set we use the diverging color map 'PiYG'. For the Julia Set, we use the color map 'summer'.
2. Generating the Mandelbrot Set
As we covered in the slides, The Mandelbrot Set is the set of complex numbers, C, such that the following equation does not diverge when iterated from $z = 0$:
\begin{split}
z_{n+1}= z_{n}^{2} + c
\end{split}
To determine if the equation is diverging, we need to set up a test.
To do so, we will use a loop and check if the absolute value of $z_{n}$ is larger than a cutoff.
We define a function to do this that accepts an input value for $c$ and returns $-1$ if $c$ is in the Mandelbrot Set, or the iteration at which the divergence test failed if it is not.
End of explanation
"""
test_c = 1
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} diverges at iteration {diverges_at}.")
test_c = 0
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} never diverges (returned {diverges_at}): it is in the Mandelbrot Set.")
test_c = 0.5
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} diverges at iteration {diverges_at}.")
test_c = -1
diverges_at = mandelbrot_test(test_c)
print(f"The test number {test_c} never diverges (returned {diverges_at}): it is in the Mandelbrot Set.")
"""
Explanation: Let's use our Mandelbrot test function to check some example values of "c":
End of explanation
"""
# Define the physical maximal width and height of the image we will make
x_max = 800 #pixels
y_max = 800 #pixels
# Image step size and grid of x,y coordinates
dy=1
dx=1
y, x = numpy.mgrid[slice(0, y_max + dy, dy),
slice(0, x_max + dx, dx)]
# Recenter the cool part of the image
offset=(1.8,1.0) # making these numbers bigger slides the image down and to the right
x_scale = 2.0 # the smaller this is, the more you zoom in on the x-axis
y_scale = 2.0 # the smaller this is, the more you zoom in on the y-axis
# Calculate the numbers we will test using the Mandelbrot Set tester function
z = (x*x_scale/x_max-offset[0]) + 1j*(y*y_scale/y_max-offset[1])
# Create an array of the function in memory to deploy across all z's and compute all results
print(f"Please be patient while we test all the pixels in the {x_max}x{y_max} grid to see if they are in the Mandelbrot Set...")
n = numpy.vectorize(mandelbrot_test)(z)
# Convert the test results to a color value between 0 and 1 (1 means "in the
# Mandelbrot Set"; a value in [0, 1) is the fraction of total iterations
# completed before the divergence test failed)
def ColorValueFromTest(n, max_iterations):
    # n == -1 means the point is in the Mandelbrot Set
    if n == -1:
        v = 1
    # If not, compute the fraction of total iterations completed before divergence
    else:
        v = float(n) / float(max_iterations)
return v
# compute the color values from the Mandelbrot tests
v = numpy.vectorize(ColorValueFromTest)(n, mandelbrot_max_iterations)
v = v[:-1, :-1]
# Time to draw!
# Create a colormesh plot to make the final figure
figure, axes = plt.subplots(figsize=(8,8))
color_map = plt.get_cmap('PiYG')
color_levels = mpl.ticker.MaxNLocator(nbins=15).tick_values(v.min(), v.max())
color_scale = mpl.colors.BoundaryNorm(color_levels, ncolors=color_map.N, clip=True)
plot_mandelbrot = plt.pcolormesh(x, y, v, cmap=color_map, norm=color_scale)
plt.tight_layout()
print("Here is the final image object after testing each pixel:")
figure.canvas.draw()
plt.show()
print(f"Save the fractal image to disk...")
fractal_filename = "fractal.png"
figure.savefig(fractal_filename)
"""
Explanation: 3. Generate a Fractal Image
Now that we can determine if a value is in the Mandelbrot Set, let's build the structure of our image. We are going to loop over all the pixels in our image and check if that pixel is in the Mandelbrot Set. We are using the $x$ and $y$ coordinates to represent the Real and Imaginary parts of the Complex number $z$.
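Concretely, the pixel-to-complex-number mapping can be sketched for a single pixel (an illustrative helper reusing the same offset and scale parameters; the actual cell does this for every pixel at once):

```python
# Map one pixel (px, py) of an x_max-by-y_max image onto the complex plane.
# `offset` recenters the view and the scale factors control the zoom level,
# mirroring the parameters used in the full vectorized computation.
def pixel_to_complex(px, py, x_max=800, y_max=800,
                     offset=(1.8, 1.0), x_scale=2.0, y_scale=2.0):
    real = px * x_scale / x_max - offset[0]   # Real part from the x coordinate
    imag = py * y_scale / y_max - offset[1]   # Imaginary part from the y coordinate
    return complex(real, imag)

# With these settings the center pixel of an 800x800 image maps to about -0.8 + 0j.
center = pixel_to_complex(400, 400)
```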
End of explanation
"""
def zn_operator(c, z_n):
# Uncomment one of these to alter the function used in the set iteration - play around!
# zn_result = z_n*z_n +c # this is what we used in the Mandelbrot Set
zn_result = numpy.power(z_n,2) + c # vary the power to change the function, e.g. change 2 to 3, etc.
#zn_result = numpy.sin(z_n) + c
#zn_result = numpy.cos(z_n) + c
#zn_result = numpy.tan(z_n) + c
    #zn_result = numpy.arcsin(z_n) + c
    #zn_result = numpy.arccos(z_n) + c
    #zn_result = numpy.arctan(z_n) + c
#zn_result = numpy.sinh(z_n) + c
#zn_result = numpy.cosh(z_n) + c
#zn_result = numpy.tanh(z_n) + c
    #zn_result = numpy.arcsinh(z_n) + c
    #zn_result = numpy.arccosh(z_n) + c
    #zn_result = numpy.arctanh(z_n) + c
#zn_result = numpy.exp(z_n) + c
#zn_result = numpy.log(z_n) + c # natural logarithm
    #zn_result = numpy.log10(z_n) + c # logarithm base 10; use numpy.log(z_n)/numpy.log(b) for another base b
return zn_result
# Define the Julia Set test function, which employs the zn_operator() defined above
divergence_test_value = 2.0
julia_max_iterations = 100
def julia_test(c, z_n = -2.0**.5):
for n in range(0,julia_max_iterations):
z_n = zn_operator(c, z_n)
if abs(z_n) > divergence_test_value:
return n
return -1
"""
Explanation: Compare to an animated fractal GIF from the web
These fractal images resemble lakes with bays and rivers and forks and streams. If you could zoom in on one of the bays, you would see that it, close up, looks just like the above image. And so on. And so on. Self-similarity. See the animation below.
This idea is used to generate realistic looking mountain ranges, coastlines, and bodies of water in computer animated graphics in video games and movies. It is considered so effective at mimicking the real world because fractal geometry seems to be the mathematics that explains "natural shapes" found in the real world.
4. The Julia Set
It turns out that there are more ways to make a fractal. The Julia Set is another such means of defining sequences of numbers. I trust if you are interested in the mathematical details that lead to the code below, you can click on the link above and learn on your own.
We are going to vary some of the parameters and see what happens.
First we open up our value of $z_n$ and redefine our iteration function.
We have also pulled out the functional form that defines our set; this makes it easier to modify without breaking anything in our iteration function.
Let's define a "$z_n$ operator" that acts on the number, $z_n$, in each iteration to compute the next iteration in the set computation.
Useful numpy Functions: Call by using numpy.function
Try some of these in the definition of our set and see what happens.
|Trig Functions|Hyperbolic Functions|Exponentials and Logs|
|:---:|:---:|:---:|
|sin(x)|sinh(x)|exp(x)|
|cos(x)|cosh(x)|log(x)|
|tan(x)|tanh(x)|log10(x)|
|arcsin(x)|arcsinh(x)|power(x,y)|
|arccos(x)|arccosh(x)|sqrt(x)|
|arctan(x)|arctanh(x)||
We've implemented a bunch of these as possible options for you to play with below... see the comments in the code!
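Rather than commenting lines in and out, the same idea can be sketched by passing the update rule in as an argument (an illustrative alternative, not the code this notebook uses):

```python
import numpy

def escape_iteration(c, z0, rule, max_iterations=100, cutoff=2.0):
    """Iterate z -> rule(c, z) from z0; return the iteration at which |z|
    exceeds the cutoff, or -1 if it stays bounded."""
    z = z0
    for n in range(max_iterations):
        z = rule(c, z)
        if abs(z) > cutoff:
            return n
    return -1

square_rule = lambda c, z: numpy.power(z, 2) + c   # the default z**2 + c
sine_rule = lambda c, z: numpy.sin(z) + c          # one of the variants above

# c = -1 cycles between 0 and -1 (it is in the Mandelbrot Set),
# while c = 1 escapes after only a few iterations.
inside = escape_iteration(-1 + 0j, 0j, square_rule)
outside = escape_iteration(1 + 0j, 0j, square_rule)
# Under the sine rule with c = 0.5, |z| can never exceed 1.5, so it never escapes.
sine_bounded = escape_iteration(0.5, 0.0, sine_rule)
```

Swapping `rule` is exactly what toggling the commented lines in `zn_operator` does.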
End of explanation
"""
# Define the physical maximal width and height of the image we will make
x_max = 800 #pixels
y_max = 800 #pixels
# Image step size and grid of x,y coordinates
dy=1
dx=1
y, x = numpy.mgrid[slice(0, y_max + dy, dy),
slice(0, x_max + dx, dx)]
# Recenter the cool part of the image
offset=(1.5,1.5) # making these numbers bigger slides the image down and to the right
x_scale = 3.0 # the smaller this is, the more you zoom in on the x-axis
y_scale = 3.0 # the smaller this is, the more you zoom in on the y-axis
# Calculate the starting z values we will test using the Julia Set tester function
z = (x*x_scale/x_max-offset[0]) + 1j*(y*y_scale/y_max-offset[1])
# Create an array of the function in memory to deploy across all z's and compute all results
print(f"Please be patient while we test all the pixels in the {x_max}x{y_max} grid to see if they are in the Julia Set...")
# Set the number to test in the Julia Set - you can play with this to change the image!
c_julia = complex(-0.4, 0.6)
n = numpy.vectorize(julia_test)(c_julia, z)
# compute the color values from the Julia tests
v = numpy.vectorize(ColorValueFromTest)(n, julia_max_iterations)
v = v[:-1, :-1]
# Time to draw!
# Create a colormesh plot to make the final figure
figure, axes = plt.subplots(figsize=(8,8))
color_map = plt.get_cmap('summer')
color_levels = mpl.ticker.MaxNLocator(nbins=15).tick_values(v.min(), v.max())
color_scale = mpl.colors.BoundaryNorm(color_levels, ncolors=color_map.N, clip=True)
plot_julia = plt.pcolormesh(x, y, v, cmap=color_map, norm=color_scale)
plt.tight_layout()
print("Here is the final image object after testing each pixel:")
figure.canvas.draw()
plt.show()
print(f"Save the fractal image to disk...")
fractal_filename = "julia.png"
figure.savefig(fractal_filename)
"""
Explanation: Now we open up the value of $c$ to be chosen by us, and let the pixel location define the starting value of $z_{n}$
End of explanation
"""
# Define a function for drawing Barnsley's Fern
def BarnsleysFern(f,itt):
colname = ["percent","a","b","c","d","e","f"]
fern_structure_frame = pd.DataFrame(data=numpy.array(f), columns = colname)
print(fern_structure_frame)
if itt > 5000:
itt = 5000
    x, y = 0.5, 0.0
xypts=[]
print(f"Sum of percentages in your settings: {fern_structure_frame['percent'].sum()}")
    if abs(1.0 - fern_structure_frame['percent'].sum()) > 1e-10:
print("Probabilities must sum to 1")
return
    for i in range(itt):
        rand = numpy.random.random()
        cond = 0.0
        for j in range(len(f)):
            if cond <= rand <= (cond + f[j][0]):
                # update x and y together so the new y uses the old x
                x, y = (f[j][1]*x + f[j][2]*y + f[j][5],
                        f[j][3]*x + f[j][4]*y + f[j][6])
                xypts.append((x, y))
                break
            cond = cond + f[j][0]
xmax,ymax = max(abs(numpy.transpose(xypts)[0])),max(abs(numpy.transpose(xypts)[1]))
figure, axes = plt.subplots(figsize=(6,6))
color = numpy.transpose([[abs(r)/xmax for r in numpy.transpose(xypts)[0]],[abs(g)/ymax for g in numpy.transpose(xypts)[1]],[b/itt for b in range(itt)]])
plt.scatter(numpy.transpose(xypts)[0],numpy.transpose(xypts)[1],alpha=0.5, facecolors=color, edgecolors='none', s=1)
plt.tight_layout()
figure.canvas.draw()
plt.show()
"""
Explanation: The default Julia Set output looks a little like Romanesco Broccoli, eh?
Modeling Biological Organisms
Below, we will create what is known as the "Barnsley Fern". It is so called because this bit of mathematics can create forms similar to this:
End of explanation
"""
# Define the fern structure and draw one
fern_structure = \
((0.01,0.0,0.0,0.0,0.16,0.0,0.0),
(0.85,0.85,0.04,-0.04,0.85,0.0,1.60),
(0.07,0.20,-0.26,0.23,0.22,0.0,1.60),
(0.07,-0.15,0.28,0.26,0.24,0.0,0.44))
BarnsleysFern(fern_structure,5000)
"""
Explanation: For Barnsley's Fern:
Use the following values
|Percent|A|B|C|D|E|F|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|0.01|0.0|0.0|0.0|0.16|0.0|0.0|
|0.85|0.85|0.04|-0.04|0.85|0.0|1.60|
|0.07|0.20|-0.26|0.23|0.22|0.0|1.60|
|0.07|-0.15|0.28|0.26|0.24|0.0|0.44|
Of course, this is only one solution, so try changing the values. Some values modify the curl, some change the thickness, and others completely rearrange the structure. Because this is a chaotic system, tiny changes to any of these can have radical outcomes on the organic structure you create. Try it below!
End of explanation
"""
|
adityaka/misc_scripts | python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_08/Begin/Remote Data.ipynb | bsd-3-clause | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
from pandas_datareader import data, wb
"""
Explanation: Remote Data
Yahoo
St. Louis Fed (FRED)
Google
documentation: http://pandas.pydata.org/pandas-docs/stable/remote_data.html
Installation Requirement
pandas-datareader is required; not included with default Anaconda installation (Summer 2016)
from command prompt: conda install -c anaconda pandas-datareader
documentation: https://anaconda.org/anaconda/pandas-datareader
End of explanation
"""
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2016, 7, 15)
yahoo_df = data.DataReader("F", 'yahoo', start, end)
yahoo_df.plot()
"""
Explanation: Yahoo finance
End of explanation
"""
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2016, 7, 15)
unrate_df = data.DataReader('UNRATE', 'fred', start, end)
unrate_df.plot()
"""
Explanation: FRED
source: http://quant-econ.net/py/pandas.html
End of explanation
"""
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2016, 7, 15)
google_df = data.DataReader("F", 'google', start, end)
google_df.plot()
"""
Explanation: Google
End of explanation
"""
|
adamwang0705/cross_media_affect_analysis | develop/20171024-daheng-prepare_ibm_tweets_news_data.ipynb | mit | """
Initialization
"""
'''
Standard modules
'''
import os
import pickle
import csv
import time
from pprint import pprint
import json
import pymongo
import multiprocessing
import logging
import collections
'''
Analysis modules
'''
%matplotlib inline
%config InlineBackend.figure_format = 'retina' # render double resolution plot output for Retina screens
import matplotlib.pyplot as plt
import pandas as pd
'''
Custom modules
'''
import config
import utilities
import mongodb
import multiprocessing_workers
'''
R magic and packages
'''
# hide all RRuntimeWarnings
import warnings
warnings.filterwarnings('ignore')
# add home for R in anaconda on PATH sys env
os.environ['PATH'] += ':/opt/anaconda3/bin'
# load R magic
%load_ext rpy2.ipython
# load R packages
%R require(ggplot2)
'''
Misc
'''
nb_name = '20171024-daheng-prepare_ibm_tweets_news_data'
# all tweets with keywork 'ibm' in tweet_text field from ND IBM dataset
ibm_tweets_file = os.path.join(config.IBM_TWEETS_NEWS_DIR, 'ibm_tweets.json')
# based on ibm_tweets_file. Duplicate tweets with the same or similar tweet_text are removed
ibm_unique_tweets_file = os.path.join(config.IBM_TWEETS_NEWS_DIR, 'ibm_unique_tweets.json')
# manually selected news sources list, chosen by examining the most common news sources of valid urls embedded in ibm unique tweets
# selected_news_sources_lst = ['www.forbes.com', 'finance.yahoo.com', 'venturebeat.com',
# 'medium.com', 'www.engadget.com', 'alltheinternetofthings.com',
# 'www.zdnet.com', 'www.wsj.com', 'www.cnbc.com']
selected_news_sources_lst = ['venturebeat', 'engadget', 'wsj', 'cnbc']
# manually collected ibm news data
ibm_news_file = os.path.join(config.HR_DIR, 'selected_ibm_news.csv')
# all tweets related to the 'social_capital_ceo_palihapitiya_watson_joke' news by cnbc
palihapitiya_watson_joke_tweets_file = os.path.join(config.HR_DIR, 'palihapitiya_watson_joke_tweets.csv')
# manually tag information of all tweets related to the 'social_capital_ceo_palihapitiya_watson_joke' news by cnbc
palihapitiya_watson_joke_tweets_tag_file = os.path.join(config.HR_DIR, 'palihapitiya_watson_joke_tweets_tag.csv')
"""
Explanation: Prepare tweets and news data for IBM topic
Last modified: 2017-10-24
Roadmap
Prepare multiprocessing and MongoDB scripts available in ibm_tweets_analysis project
Filter out tweets with keyword 'ibm' in tweet_text field from MongoDB db
Check basic statistics of embedded URL link in tweet_text to external news article
Manually collect external news articles
Check ibm_news basic statistics
Updated Objective: focus on "social_capital_ceo_palihapitiya_watson_joke" news and tweets
Steps
End of explanation
"""
%%time
"""
Register
IBM_TWEETS_NEWS_DIR = os.path.join(DATA_DIR, 'ibm_tweets_news')
in config
"""
DB_NAME = 'tweets_ek-2'
COL_NAME = 'tw_nt'
if 0 == 1:
multiprocessing.log_to_stderr(logging.DEBUG)
'''
Use multiprocessing to parse tweet_text field for "ibm" keyword
'''
procedure_name = 'tag_native_tweets_text_ibm'
# set processes number to CPU numbers minus 1
process_num = multiprocessing.cpu_count() - 1
process_file_names_lst = ['{}-{}.json'.format(process_ind, procedure_name)
for process_ind in range(process_num)]
process_files_lst = [os.path.join(config.IBM_TWEETS_NEWS_DIR, process_file_name)
for process_file_name in process_file_names_lst]
jobs = []
for process_ind in range(process_num):
p = multiprocessing.Process(target=multiprocessing_workers.find_keywords_in_tweet_text,
args=(DB_NAME, COL_NAME, process_ind, process_num, process_files_lst[process_ind], ['ibm']),
name='Process-{}/{}'.format(process_ind, process_num))
jobs.append(p)
for job in jobs:
job.start()
for job in jobs:
job.join()
"""
Explanation: Prepare multiprocessing and MongoDB scripts available in ibm_tweets_analysis project
Copy mongodb.py and multiprocessing_workers.py files to the project root dir.
- mongodb.py can be used to get connection to local MongoDB database.
- multiprocessing_workers.py can be used to query the MongoDB database from multiple processes to save time (needs modifications)
Native tweets are stored in tweets_ek-2 db and tw_nt table.
Filter out tweets with keyword 'ibm' in tweet_text field from MongoDB db
Query tweets from MongoDB db
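The workers split the collection so each process handles a disjoint slice. The exact scheme lives in multiprocessing_workers.py (not shown here); as an assumption for illustration, one common pattern is to deal documents out round-robin by index:

```python
def partition_indices(total, process_ind, process_num):
    """Indices handled by one process when work is dealt out round-robin.
    Illustrative sketch only; multiprocessing_workers.py may partition differently."""
    return list(range(process_ind, total, process_num))

# 10 documents across 3 processes: every document lands in exactly one partition.
partitions = [partition_indices(10, i, 3) for i in range(3)]
```

Any scheme works as long as the partitions are disjoint and together cover the whole collection.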
End of explanation
"""
%%time
"""
Merge all process files into a single file
Register
ibm_tweets_file = os.path.join(config.IBM_TWEETS_NEWS_DIR, 'ibm_tweets.json')
in Initialization section.
"""
if 0 == 1:
'''
Re-generate process file names
'''
procedure_name = 'tag_native_tweets_text_ibm'
process_num = multiprocessing.cpu_count() - 1
process_file_names_lst = ['{}-{}.json'.format(process_ind, procedure_name)
for process_ind in range(process_num)]
process_files_lst = [os.path.join(config.IBM_TWEETS_NEWS_DIR, process_file_name)
for process_file_name in process_file_names_lst]
with open(ibm_tweets_file, 'w') as output_f:
for process_file in process_files_lst:
with open(process_file, 'r') as input_f:
for line in input_f:
output_f.write(line)
"""
Explanation: Merge process files
End of explanation
"""
%%time
"""
Remove tweets with the same or similar tweet_text field
Register
ibm_unique_tweets_file = os.path.join(config.IBM_TWEETS_NEWS_DIR, 'ibm_unique_tweets.json')
in Initialization section.
"""
if 0 == 1:
with open(ibm_unique_tweets_file, 'w') as output_f:
with open(ibm_tweets_file, 'r') as input_f:
uniqe_tweet_text_field = set()
for line in input_f:
tweet_json = json.loads(line)
tweet_text = tweet_json['text']
cleaned_tweet_text = utilities.clean_tweet_text(tweet_text)
if cleaned_tweet_text not in uniqe_tweet_text_field:
uniqe_tweet_text_field.add(cleaned_tweet_text)
output_f.write(line)
"""
Explanation: Remove duplicate tweets
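The dedup loop hinges on utilities.clean_tweet_text, a project helper whose source isn't shown here. A minimal sketch of the kind of normalization such a helper might perform (the real rules in utilities.py may differ) is:

```python
import re

def clean_tweet_text(text):
    """Normalize a tweet for near-duplicate detection. Illustrative sketch only;
    the real utilities.clean_tweet_text may use different rules."""
    text = text.lower()
    text = re.sub(r'https?://\S+', '', text)   # drop embedded URLs
    text = re.sub(r'@\w+', '', text)           # drop @mentions
    text = re.sub(r'[^a-z0-9 ]', ' ', text)    # keep only alphanumerics and spaces
    return ' '.join(text.split())              # collapse whitespace

# Two retweet variants of the same text normalize to the same key,
# so the set-based dedup keeps only one of them.
a = clean_tweet_text("RT @user: IBM Watson is amazing! https://t.co/abc")
b = clean_tweet_text("rt  IBM Watson is amazing!")
```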
End of explanation
"""
"""
Check number of ibm tweets and number of ibm unique tweets
"""
if 1 == 1:
with open(ibm_tweets_file, 'r') as f:
ibm_tweets_num = sum([1 for line in f])
print('Number of ibm tweets: {}'.format(ibm_tweets_num))
with open(ibm_unique_tweets_file, 'r') as f:
ibm_unique_tweets_num = sum([1 for line in f])
print('Number of unique ibm tweets: {}'.format(ibm_unique_tweets_num))
"""
Check number of ibm unique tweets with URL
"""
if 1 == 1:
with open(ibm_unique_tweets_file, 'r') as f:
# if entities.urls field is not empty
ibm_unique_tweets_url_num = sum([1 for line in f
if json.loads(line)['entities']['urls']])
print('Number of unique ibm tweets with URL: {}'.format(ibm_unique_tweets_url_num))
%%time
"""
Check most popular domain names in URLs embedded in ibm unique tweets
"""
if 1 == 1:
url_domain_names_counter = collections.Counter()
with open(ibm_unique_tweets_file, 'r') as f:
for line in f:
tweet_json = json.loads(line)
# if tweet contains at least one url, entities.urls is not empty
entities_urls = tweet_json['entities']['urls']
if entities_urls:
for entities_url in entities_urls:
# expanded_url field may contain full unshortened url
expanded_url = entities_url['expanded_url']
url_domain_name = expanded_url.split('/')[2]
url_domain_names_counter.update([url_domain_name])
pprint(url_domain_names_counter.most_common(50))
%%time
"""
Re-compute most popular domain names in URLs embedded in ibm unique tweets
- ignore misc irrelevant website domain names
- ignore all shortened urls
Register
selected_news_sources_lst
in Initialization section.
"""
misc_irrelevant_websites_lst = ['twitter', 'youtube', 'youtu.be', 'amazon', 'paper.li', 'linkedin', 'lnkd.in', 'instagram']
shortened_url_identifiers_lst = ['bit.ly', 'ift.tt', 'dlvr.it', 'ow.ly', 'buff.ly', 'oal.lu', 'goo.gl', 'ln.is', 'gag.gl', 'fb.me', 'trap.it', 'ibm.co',
'ibm.biz', 'shar.es', 'crwd.fr', 'klou.tt', 'tek.io', 'owler.us', 'upflow.co', 'hubs.ly', 'zd.net', 'spr.ly', 'flip.it']
if 0 == 1:
valid_url_domain_names_counter = collections.Counter()
ignore_lst = misc_irrelevant_websites_lst + shortened_url_identifiers_lst
with open(ibm_unique_tweets_file, 'r') as f:
for line in f:
tweet_json = json.loads(line)
# if tweet contains at least one url, entities.urls is not empty
entities_urls = tweet_json['entities']['urls']
if entities_urls:
for entities_url in entities_urls:
# expanded_url field may contain full unshortened url
expanded_url = entities_url['expanded_url']
# ignore all urls with manually selected tokens
if not any(token in expanded_url for token in ignore_lst):
# ignore all shortened urls by HEURISTIC
if len(expanded_url.split('/')) > 4:
valid_url_domain_name = expanded_url.split('/')[2]
valid_url_domain_names_counter.update([valid_url_domain_name])
pprint(valid_url_domain_names_counter.most_common(50))
%%time
"""
Check most common valid links
"""
misc_irrelevant_websites_lst = ['twitter', 'youtube', 'youtu.be', 'amazon', 'paper.li', 'linkedin', 'lnkd.in', 'instagram']
shortened_url_identifiers_lst = ['bit.ly', 'ift.tt', 'dlvr.it', 'ow.ly', 'buff.ly', 'oal.lu', 'goo.gl', 'ln.is', 'gag.gl', 'fb.me', 'trap.it', 'ibm.co',
'ibm.biz', 'shar.es', 'crwd.fr', 'klou.tt', 'tek.io', 'owler.us', 'upflow.co', 'hubs.ly', 'zd.net', 'spr.ly', 'flip.it']
if 0 == 1:
urls_counter = collections.Counter()
ignore_lst = misc_irrelevant_websites_lst + shortened_url_identifiers_lst
with open(ibm_unique_tweets_file, 'r') as f:
for line in f:
tweet_json = json.loads(line)
# if tweet contains at least one url, entities.urls is not empty
entities_urls = tweet_json['entities']['urls']
if entities_urls:
for entities_url in entities_urls:
# expanded_url field may contain full unshortened url
expanded_url = entities_url['expanded_url']
# ignore all urls with manually selected tokens
if not any(token in expanded_url for token in ignore_lst):
# ignore all shortened urls by HEURISTIC
if len(expanded_url.split('/')) > 4:
urls_counter.update([expanded_url])
pprint(urls_counter.most_common(50))
%%time
"""
Check most common links to selected news sources
"""
if 0 == 1:
selected_news_sources_urls_counter = collections.Counter()
with open(ibm_tweets_file, 'r') as f:
for line in f:
tweet_json = json.loads(line)
# if tweet contains at least one url, entities.urls is not empty
entities_urls = tweet_json['entities']['urls']
if entities_urls:
for entities_url in entities_urls:
# expanded_url field may contain full unshortened url
expanded_url = entities_url['expanded_url']
# filter out only url links to selected news sources
if any(selected_news_source in expanded_url for selected_news_source in selected_news_sources_lst):
selected_news_sources_urls_counter.update([expanded_url])
pprint(selected_news_sources_urls_counter.most_common(50))
"""
Explanation: Check basic statistics of embedded URL link in tweet_text to external news article
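The cells here pull the domain out with expanded_url.split('/')[2], which assumes a well-formed absolute URL. A more robust alternative from the standard library (a sketch; the notebook keeps the simple split) is urllib.parse:

```python
from urllib.parse import urlparse

def domain_of(url):
    """Return the network location (domain) of a URL, or '' if it has none."""
    return urlparse(url).netloc

# Paths and query strings are handled for us:
d1 = domain_of("https://www.forbes.com/sites/example/article?utm_source=x")
d2 = domain_of("http://bit.ly/abc123")
```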
End of explanation
"""
"""
Register
ibm_news_file
in Initialization section.
"""
"""
Explanation: Manually collect external news articles
After examining
- most common valid links
- most common links to selected news sources
manually collect external news articles.
Note:
- single news article may have multiple links (shortened by different services; picture/video materials; trivial parameters)
End of explanation
"""
"""
Load in csv file
"""
if 1 == 1:
ibm_news_df = pd.read_csv(filepath_or_buffer=ibm_news_file, sep='\t')
with pd.option_context('display.max_colwidth', 100, 'expand_frame_repr', False):
display(ibm_news_df[['NEWS_DATE', 'NEWS_NAME', 'NEWS_DOC']])
"""
Print any news_doc by paragraphs
"""
test_lst = ibm_news_df.iloc[10]['NEWS_DOC'].split('::::::::')
for ind, item in enumerate(test_lst):
print('({})'.format(ind+1))
print(item)
"""
Explanation: Check ibm_news basic statistics
End of explanation
"""
%%time
"""
Find out all tweets related to the 'social_capital_ceo_palihapitiya_watson_joke' news
News URL 1: https://www.cnbc.com/2017/05/08/ibms-watson-is-a-joke-says-social-capital-ceo-palihapitiya.html
News URL 2: https://www.cnbc.com/2017/05/09/no-joke-id-like-to-see-my-firm-go-head-to-head-with-ibm-on-a-i-palihapitiya.html
Register
palihapitiya_watson_joke_tweets_file
in Initialization section
"""
if 0 == 1:
target_news_keywords_lst = ['social capital', 'chamath', 'palihapitiya']
target_tweets_dict_lst = []
with open(ibm_unique_tweets_file, 'r') as f:
for line in f:
tweet_json = json.loads(line)
tweet_text = tweet_json['text'].replace('\n', ' ').replace('\r', ' ')
tweet_user_screen_name = tweet_json['user']['screen_name']
tweet_created_at = utilities.parse_tweet_post_time(tweet_json['created_at'])
if any(kw.lower() in tweet_text.lower() for kw in target_news_keywords_lst):
target_tweet_dict = {'tweet_created_at': tweet_created_at,
'tweet_user_screen_name': tweet_user_screen_name,
'tweet_text': tweet_text}
target_tweets_dict_lst.append(target_tweet_dict)
target_tweets_df = pd.DataFrame(target_tweets_dict_lst)
target_tweets_df.to_csv(path_or_buf=palihapitiya_watson_joke_tweets_file, sep='\t', index=True, quoting=csv.QUOTE_MINIMAL)
"""
Read in data
"""
if 1 == 1:
target_tweets_df = pd.read_csv(filepath_or_buffer=palihapitiya_watson_joke_tweets_file,
sep='\t',
index_col=0,
parse_dates=['tweet_created_at'],
quoting=csv.QUOTE_MINIMAL)
with pd.option_context('display.max_rows', 260, 'display.max_colwidth', 150, 'expand_frame_repr', False):
display(target_tweets_df)
"""
Explanation: Updated Objective: focus on "social_capital_ceo_palihapitiya_watson_joke" news and tweets
New Objective:
- Only focus on the "social_capital_ceo_palihapitiya_watson_joke" news and tweets.
- generate an illustrative figure, which should be placed in the Introduction section of the paper, to demonstrate the interaction/cycle between news and tweets.
Find out all related tweets
End of explanation
"""
"""
Register
palihapitiya_watson_joke_tweets_tag_file
in Initialization section
"""
"""
Explanation: Manually tag each tweet
Manually tag each tweet related to "social_capital_ceo_palihapitiya_watson_joke" news for:
- tweet_sentiment: neutral (1), mild (2), or harsh/sarcastic (3)
- tweet_news: correspond to first news (1), or second news (2)
End of explanation
"""
"""
Load data
"""
if 1 == 1:
'''
Read in all tweets related to the 'social_capital_ceo_palihapitiya_watson_joke' news
'''
target_tweets_df = pd.read_csv(filepath_or_buffer=palihapitiya_watson_joke_tweets_file,
sep='\t',
index_col=0,
parse_dates=['tweet_created_at'],
quoting=csv.QUOTE_MINIMAL)
'''
Read in manually tagged information for all tweets just loaded
'''
target_tweets_tag_df = pd.read_csv(filepath_or_buffer=palihapitiya_watson_joke_tweets_tag_file,
sep='\t',
index_col=0)
'''
Combine dfs and set index
'''
test_tweets_df = target_tweets_df.join(target_tweets_tag_df)
test_tweets_df['tweet_index'] = test_tweets_df.index
test_tweets_df = test_tweets_df.set_index('tweet_created_at')
"""
Check tweets related to second news
"""
if 1 == 1:
test_df = test_tweets_df[test_tweets_df['tweet_news'] == 2]
display(test_df)
"""
For tweets related to the first news
Build tmp dfs for tweets in mild sentiment and harsh sentiment separately
"""
if 1 == 1:
mild_cond = (test_tweets_df['tweet_news'] == 1) & (test_tweets_df['tweet_sentiment'] == 2)
harsh_cond = (test_tweets_df['tweet_news'] == 1) & (test_tweets_df['tweet_sentiment'] == 3)
mild_tweets_df = test_tweets_df[mild_cond]
harsh_tweets_df = test_tweets_df[harsh_cond]
"""
Check tweets in mild sentiment
"""
print(mild_tweets_df['tweet_index'].count())
with pd.option_context('display.max_rows', 100, 'display.max_colwidth', 150, 'expand_frame_repr', False):
display(mild_tweets_df)
"""
Check tweets in harsh sentiment
"""
print(harsh_tweets_df['tweet_index'].count())
with pd.option_context('display.max_rows', 100, 'display.max_colwidth', 150, 'expand_frame_repr', False):
display(harsh_tweets_df)
"""
Bin mild/harsh tweets by 4H period and count numbers
"""
if 1 == 1:
mild_tweets_bin_count = mild_tweets_df['tweet_index'].resample('4H', convention='start').count().rename('mild_tweets_count')
harsh_tweets_bin_count = harsh_tweets_df['tweet_index'].resample('4H', convention='start').count().rename('harsh_tweets_count')
tweets_count = pd.concat([mild_tweets_bin_count, harsh_tweets_bin_count], axis=1)[:24]
with pd.option_context('display.max_rows', 100, 'display.max_colwidth', 150, 'expand_frame_repr', False):
display(tweets_count)
if 1 == 1:
tweets_count.plot(kind="bar", figsize=(12,6), title='# of mild/harsh tweets', stacked=True)
"""
Explanation: Check data and quick plot
End of explanation
"""
"""
Prepare df data
"""
if 1 == 1:
'''
Read in all tweets related to the 'social_capital_ceo_palihapitiya_watson_joke' news
'''
target_tweets_df = pd.read_csv(filepath_or_buffer=palihapitiya_watson_joke_tweets_file,
sep='\t',
index_col=0,
parse_dates=['tweet_created_at'],
quoting=csv.QUOTE_MINIMAL)
'''
Read in manually tagged information for all tweets just loaded
'''
target_tweets_tag_df = pd.read_csv(filepath_or_buffer=palihapitiya_watson_joke_tweets_tag_file,
sep='\t',
index_col=0)
'''
Join dfs and set index
'''
test_tweets_df = target_tweets_df.join(target_tweets_tag_df)
test_tweets_df['tweet_index'] = test_tweets_df.index
test_tweets_df = test_tweets_df.set_index('tweet_created_at')
'''
Bin mild/harsh tweets by 4H period and count numbers
'''
mild_tweets_df = test_tweets_df[(test_tweets_df['tweet_news'] == 1) & (test_tweets_df['tweet_sentiment'] == 2)]
harsh_tweets_df = test_tweets_df[(test_tweets_df['tweet_news'] == 1) & (test_tweets_df['tweet_sentiment'] == 3)]
second_news_mild_tweets_df = test_tweets_df[(test_tweets_df['tweet_news'] == 2) & (test_tweets_df['tweet_sentiment'] == 2)]
mild_tweets_bin_count = mild_tweets_df['tweet_index'].resample('4H', label='start', loffset='2H 1S').count().rename('mild_tweets_count')
harsh_tweets_bin_count = harsh_tweets_df['tweet_index'].resample('4H', label='start', loffset='2H 1S').count().rename('harsh_tweets_count')
second_news_mild_tweets_bin_count = second_news_mild_tweets_df['tweet_index'].resample('4H', label='start', loffset='2H 1S').count().rename('second_news_mild_tweets_count')
tweets_count = pd.concat([mild_tweets_bin_count, harsh_tweets_bin_count, second_news_mild_tweets_bin_count], axis=1)
'''
Misc operations
'''
tweets_count = tweets_count.fillna(0)
tweets_count['mild_tweets_count'] = tweets_count['mild_tweets_count'].astype(int)
tweets_count['harsh_mild_diff'] = tweets_count['harsh_tweets_count'] - tweets_count['mild_tweets_count']
tweets_count['mild_tweets_count_neg'] = - tweets_count['mild_tweets_count']
tweets_count['second_news_mild_tweets_count'] = tweets_count['second_news_mild_tweets_count'].astype(int)
tweets_count['second_news_mild_tweets_count_neg'] = - tweets_count['second_news_mild_tweets_count']
tweets_count.reset_index(drop=False, inplace=True)
tweets_r_df = tweets_count
tweets_r_df
%%R -i tweets_r_df
#
# Prepare data
#
# cast data types
tweets_r_df$tweet_created_at <- as.POSIXct(strptime(tweets_r_df$tweet_created_at, format="%Y-%m-%d %H:%M:%S"))
#
# Plot and tweak histogram
#
# initialize new plot
# cols <- c('Harsh'='red', 'Mild'='blue', 'diff_line'='black')
plt <- ggplot(data=tweets_r_df, aes(x=tweet_created_at)) +
# layers of ref lines for publishing times of first and second news
geom_vline(xintercept=as.POSIXct(strptime('2017-05-08 16:45:00', format="%Y-%m-%d %H:%M:%S")), linetype='dashed', color='grey80') +
geom_vline(xintercept=as.POSIXct(strptime('2017-05-09 09:55:00', format="%Y-%m-%d %H:%M:%S")), linetype='dashed', color='grey80') +
# layer of geom_bar for harsh tweets
geom_bar(aes(y=harsh_tweets_count, fill='Harsh'), stat='identity', alpha=0.65) +
# layer of geom_rect for highlighting largest bar
geom_rect(aes(xmin=as.POSIXct(strptime('2017-05-09 12:15:00', format="%Y-%m-%d %H:%M:%S")),
xmax=as.POSIXct(strptime('2017-05-09 15:45:00', format="%Y-%m-%d %H:%M:%S")),
ymin=0, ymax=27), fill=NA, color="red", size=0.7, alpha=1) +
# layer of geom_bar for mild tweets
geom_bar(aes(y=mild_tweets_count_neg, fill='Mild'), stat='identity', alpha=0.65) +
# layer of geom_line for diff between harsh tweets and mild tweets
geom_line(aes(x=(tweet_created_at), y=harsh_mild_diff), stat='identity', linetype='solid') +
# layer of geom_bar for a few tweets related to second news in mild sentiment
geom_bar(aes(y=second_news_mild_tweets_count_neg), stat='identity', alpha=0.65, fill='green') +
# x-axis and y-axis
scale_x_datetime(name = 'Time',
date_labels = "%b %d %I%p",
date_breaks = "4 hour",
expand = c(0, 0),
limits = c(as.POSIXct(strptime('2017-05-08 12:00:00', format="%Y-%m-%d %H:%M:%S")),
as.POSIXct(strptime('2017-05-10 19:00:00', format="%Y-%m-%d %H:%M:%S")))) +
scale_y_continuous(name = 'Number of users',
breaks = c(-10, -5, 0, 5, 10, 15, 20, 25),
labels = c('10', '5', '0', '5', '10', '15', '20', '25'),
limits = c(-15, 30)) +
# legend
scale_fill_manual(name = "Sentiment Intensity",
values = c('Harsh'='red', 'Mild'='blue')) +
# theme
theme(panel.background = element_blank(),
axis.line = element_line(color='black'),
panel.grid.major.y = element_line(color='grey80'),
panel.grid.major.x = element_blank(),
panel.grid.minor = element_blank(),
axis.text.x = element_text(angle=90),
legend.position = 'top')
#
# Output figure
#
ggsave('./fig/ibm_joke_or_not.png', plt, height=5, width=5, dpi=200)
"""
Explanation: Plot using R ggplot2
End of explanation
"""
|
pushpajnc/models | student_intervention/student_intervention-V1.ipynb | mit | # Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
from IPython.display import display
import visuals as vs
import sklearn.learning_curve as curves
import matplotlib.pyplot as pl
rstate = 10
%matplotlib inline
# Read student data
student_data = pd.read_csv("student-data.csv")
display(student_data.head())
print "Student data read successfully!"
"""
Explanation: Building a Student Intervention System
This is a classification problem where we will split the group of students into students who need early intervention and those who don't need it.
Exploring the Data
End of explanation
"""
from collections import Counter
# Calculate number of students
n_students = student_data.shape[0]
# Calculate number of features
n_features = student_data.shape[1]-1
# Calculate passing students
passed_col = student_data["passed"].tolist()
passed_map = Counter(passed_col)
print "number passed "+ str(passed_map["yes"])+"\n"
print "number not passed "+ str(passed_map["no"])+"\n"
pd.crosstab(index=student_data["passed"], columns="count")
n_passed = passed_map["yes"]
# Calculate failing students
n_failed = passed_map["no"]
# TODO: Calculate graduation rate
grad_rate = (float(n_passed)*100)/n_students
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
print "The 67% graduation rate indicates that the data is imbalanced: there are more positive labels than negative labels."
"""
Explanation: Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, we will compute the following:
- The total number of students, n_students.
- The total number of features for each student, n_features.
- The number of those students who passed, n_passed.
- The number of those students who failed, n_failed.
- The graduation rate of the class, grad_rate, in percent (%).
End of explanation
"""
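As a minimal plain-Python sketch of the same counting logic (the toy labels below are illustrative, not the real dataset):

```python
from collections import Counter

def summarize_labels(passed_labels):
    """Count passing/failing students and compute the graduation rate."""
    counts = Counter(passed_labels)
    n_students = len(passed_labels)
    n_passed = counts["yes"]
    n_failed = counts["no"]
    grad_rate = 100.0 * n_passed / n_students
    return n_students, n_passed, n_failed, grad_rate

# Toy labels standing in for the 'passed' column: 4 of 6 students passed.
summary = summarize_labels(["yes", "no", "yes", "yes", "no", "yes"])
print(summary)
```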
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
# Show the feature information by printing the first five rows
print "\nFeature values:"
print X_all.head()
"""
Explanation: Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
The code cell below separates the student data into feature and target columns, and lets us see whether any features are non-numeric.
End of explanation
"""
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
"""
Explanation: Preprocess Feature Columns
There are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. I will create as many columns as there are possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
End of explanation
"""
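The dummy-variable expansion can be sketched without pandas; the column name and values below are illustrative only:

```python
def one_hot(column_name, values):
    """Expand one categorical column into binary dummy columns,
    e.g. 'Fjob' -> 'Fjob_other', 'Fjob_services', 'Fjob_teacher'."""
    categories = sorted(set(values))
    return {
        "%s_%s" % (column_name, cat): [1 if v == cat else 0 for v in values]
        for cat in categories
    }

dummies = one_hot("Fjob", ["teacher", "other", "teacher", "services"])
print(dummies["Fjob_teacher"])  # [1, 0, 1, 0]
```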
# Import any additional functionality you may need here
from sklearn.cross_validation import ShuffleSplit
from sklearn import cross_validation
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import StratifiedShuffleSplit
# Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# Shuffle and split the dataset into the number of training and testing points above
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=95, random_state=rstate)
#cv = StratifiedShuffleSplit(y_all, n_iter=10, test_size=95, random_state=10)
#print len(cv)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
"""
Explanation: Implementation: Training and Testing Data Split
So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, I will:
- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a random_state for the function(s) you use, if provided.
- Store the results in X_train, X_test, y_train, and y_test.
End of explanation
"""
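The shuffle-and-split step can be sketched with the standard library alone; a fixed seed plays the role of `random_state`, and the toy features below just stand in for the real ones:

```python
import random

def shuffle_split(X, y, num_train, seed=10):
    """Shuffle paired features/labels, then split off the first
    num_train points for training and keep the rest for testing."""
    rng = random.Random(seed)
    indices = list(range(len(X)))
    rng.shuffle(indices)
    train_idx, test_idx = indices[:num_train], indices[num_train:]
    X_train = [X[i] for i in train_idx]
    X_test = [X[i] for i in test_idx]
    y_train = [y[i] for i in train_idx]
    y_test = [y[i] for i in test_idx]
    return X_train, X_test, y_train, y_test

X = [[i] for i in range(395)]          # 395 students, as in the dataset
y = ["yes"] * 265 + ["no"] * 130       # 265 passed, 130 failed
X_tr, X_te, y_tr, y_te = shuffle_split(X, y, num_train=300)
print(len(X_tr), len(X_te))  # 300 95
```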
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
    ''' Train and predict using a classifier based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "F1 score for test set: {:.4f}.".format(predict_labels(clf, X_test, y_test))
print " "
"""
Explanation: Training and Evaluating Models
In this section, I will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. I will first discuss the reasoning behind choosing these three models by considering my knowledge about the data and each model's strengths and weaknesses. I will then fit each model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. I will produce three tables (one for each model) that show the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
Model Application
We will use the following three methods:
1) Decision Trees
2) Logistic regression
3) SVM
The general application of these models is to find a classification boundary that separates the data into various classes.
Logistic Regression:
Logistic regression is easy to understand conceptually; it is very similar to linear regression. However, logistic regression gives only a linear boundary, and sometimes that may not be very useful because the data may not be linearly separable. Training and prediction times for logistic regression can be very small; in particular, training time can be much smaller than for decision trees or SVM. It is also very easy to control the complexity of the logistic regression boundary.
Decision Trees:
Decision trees are a little more complex than logistic regression. A decision tree classifier can give a highly nonlinear boundary, and it can easily overfit the training data if one does not carefully optimize parameters such as the maximum depth of the tree. Training time depends on the depth of the tree. We have better control over the complexity of a decision tree classifier than over the complexity of an SVC.
Support Vector Classifier:
Training time for a simple support vector classifier can be much larger than the training time for logistic regression or a decision tree classifier. However, SVC can give very good accuracy compared to logistic regression. Using kernel methods, SVC can give a nonlinear boundary. We can easily control the complexity of an SVC. SVC also ignores outliers and maximizes the distances of observations from the classification boundary.
Since there are many features in the data, all three models, logistic regression, decision trees, and SVC are known to do well in such situations.
As the target label is binary and also most of the input features are binary, logistic regression is a natural choice.
DTs can also be used for binary classification with such categorical features as input. DTs use entropy rather than raw feature values for determining the decision boundary. Additionally, as they are able to learn a nonlinear decision boundary, they provide an advantage over logistic regression.
SVCs transform the features into a latent space to determine the decision boundary, so they are also suitable for training on these kinds of features.
Setup
In the code cell below, I will initialize three helper functions for training and testing the three supervised learning models chosen above. The functions are as follows:
- train_classifier - takes as input a classifier and training data and fits the classifier to the data.
- predict_labels - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score.
- train_predict - takes as input a classifier, and the training and testing data, and performs train_classifier and predict_labels.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
End of explanation
"""
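Since every evaluation above uses the F<sub>1</sub> score with `pos_label='yes'`, here is a plain-Python sketch of what that metric computes (toy labels, not the real predictions):

```python
def f1_yes(y_true, y_pred, pos_label="yes"):
    """F1 score for the positive class: the harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p == pos_label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != pos_label and p == pos_label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p != pos_label)
    if tp == 0:
        return 0.0
    precision = tp / float(tp + fp)
    recall = tp / float(tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = ["yes", "yes", "no", "no", "yes"]
y_pred = ["yes", "no", "no", "yes", "yes"]
score = f1_yes(y_true, y_pred)
print(round(score, 4))  # 0.6667
```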
# Import the three supervised learning models from sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
# Initialize the three models
clf_A = LogisticRegression(random_state=rstate)
clf_B = DecisionTreeClassifier(random_state=rstate)
clf_C = SVC(random_state=rstate)
# Set up the training set sizes
X_train_100 = X_train[:100]
y_train_100 = y_train[:100]
X_train_200 = X_train[:200]
y_train_200 = y_train[:200]
X_train_300 = X_train[:300]
y_train_300 = y_train[:300]
# Execute the 'train_predict' function for each classifier and each training set size
for clf in [clf_A, clf_B, clf_C]:
for size in [100, 200, 300]:
train_predict(clf, X_train[:size], y_train[:size], X_test, y_test)
print '\n'
"""
Explanation: Implementation: Model Performance Metrics
End of explanation
"""
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.metrics import f1_score
from sklearn import cross_validation
from sklearn.learning_curve import validation_curve
def my_scorer(y_true, y_predict):
return f1_score(y_true, y_predict, pos_label='yes')
def ModelComplexity(X, y):
""" Calculates the performance of the model as model complexity increases.
The learning and testing errors rates are then plotted. """
# Create 10 cross-validation sets for training and testing
cv = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.2, random_state = rstate)
# Vary the max_depth parameter from 1 to 10
max_depth = np.arange(1,11)
# Calculate the training and testing scores
    # Calculate the training and testing scores (validation_curve expects a
    # scorer object, so wrap my_scorer with make_scorer rather than passing a string)
    train_scores, test_scores = curves.validation_curve(DecisionTreeClassifier(), X, y, \
        param_name = "max_depth", param_range = max_depth, cv = cv, scoring = make_scorer(my_scorer))
# Find the mean and standard deviation for smoothing
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Plot the validation curve
pl.figure(figsize=(7, 5))
pl.title('Decision Tree Classifier Complexity Performance')
pl.plot(max_depth, train_mean, 'o-', color = 'r', label = 'Training Score')
pl.plot(max_depth, test_mean, 'o-', color = 'g', label = 'Validation Score')
pl.fill_between(max_depth, train_mean - train_std, \
train_mean + train_std, alpha = 0.15, color = 'r')
pl.fill_between(max_depth, test_mean - test_std, \
test_mean + test_std, alpha = 0.15, color = 'g')
# Visual aesthetics
pl.legend(loc = 'lower right')
pl.xlabel('Maximum Depth')
pl.ylabel('Score')
pl.ylim([-0.05,1.05])
pl.show()
# DECISION TREES: Import 'GridSearchCV' and 'make_scorer'
# Create the parameters list you wish to tune
parameters = {"max_depth":[1,2,3,4,5,6,7,8,9,10]}
# Initialize the classifier
clf = DecisionTreeClassifier()
# Make an f1 scoring function using 'make_scorer'
rs_scorer = make_scorer(my_scorer, greater_is_better= True)
# There is a class imbalance problem (130 failed, 265 passed), so a larger test_size
# is needed to keep the class proportions in each split approximately representative
cv_set = cross_validation.ShuffleSplit(X_all.shape[0], n_iter=10, test_size=0.25, random_state=rstate)
sss = StratifiedShuffleSplit(y_train, n_iter=10, test_size=0.24, random_state=rstate)
print cv_set
print len(X_train)
print cv_set
# Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(estimator=clf, param_grid=parameters, scoring=rs_scorer, cv=cv_set)
grid_obj2 = GridSearchCV(clf, param_grid=parameters, scoring=rs_scorer, cv=sss)
# Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_all, y_all)
grid_obj2 = grid_obj2.fit(X_train, y_train)
#grid_obj2 = grid_obj2.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
clf2 = grid_obj2.best_estimator_
print clf
print clf2
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf2, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf2, X_test, y_test))
# LOGISTIC REGRESSION: Import 'GridSearchCV' and 'make_scorer'
# Create the parameters list you wish to tune
parameters = {"C":[10, 1,0.1, 0.001, 0.0001, 0.00001, 0.000001]}
# Initialize the classifier
clf = LogisticRegression()
# Make an f1 scoring function using 'make_scorer'
f1_scorer = make_scorer(my_scorer, greater_is_better= True)
cv_set = cross_validation.ShuffleSplit(X_all.shape[0], n_iter=10, test_size=0.5, random_state=rstate)
# Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(estimator=clf, param_grid=parameters, scoring=rs_scorer, cv=cv_set)
# Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_all, y_all)
# Get the estimator
clf = grid_obj.best_estimator_
print clf
# Report the final F1 score for training and testing after parameter tuning
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
# ModelComplexity(X_all, y_all)
"""
Explanation: Tabular Results
Record of results from above in the tables:
Classifier 1 - Logistic Regression
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0021 | 0.0004 | 0.8571 | 0.7612 |
| 200 | 0.0025 | 0.0002 | 0.8380 | 0.7794 |
| 300 | 0.0034 | 0.0003 | 0.8381 | 0.7910 |
Classifier 2 - Decision Tree Classifier
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0009 | 0.0002 | 1.0 | 0.6667 |
| 200 | 0.0017 | 0.0005 | 1.0 | 0.6929 |
| 300 | 0.0039 | 0.0005 | 1.0 | 0.7119 |
Classifier 3 - Support Vector Classifier
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0105 | 0.0021 | 0.8591 | 0.7838 |
| 200 | 0.0064 | 0.004 | 0.8693 | 0.7755 |
| 300 | 0.012 | 0.0064 | 0.8692 | 0.7586 |
Choosing the Best Model
In this section, I will choose from the three supervised learning models the best model to use on the student data. I will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
Choosing the Best Model
I first chose logistic regression as the final model; however, as shown below, the optimized LR gives a lower F1 score (0.77) than the untuned model (0.79). I then optimized the decision tree model, and it gives the highest F1 score on the test data. Based on the available data, the decision tree model seems to be the best model considering limited resources. The decision tree model automatically selects features based on the information gain calculated from entropy. SVC can give a good classification boundary and high accuracy with nonlinear kernels; however, with the available data, running SVC with a nonlinear kernel can take a long time.
Model in Layman's Terms
I chose the final model to be a decision tree (DT). A DT takes one feature (the root node) and splits the data into two child nodes by asking a question whose answer is of a yes/no (or 0/1) type, so the data is split into two categories. Based on this split, we calculate the information gain, which is the entropy before the split minus the (size-weighted) sum of the entropies of the two categories. If we gain some information by splitting on that node, i.e. if, after classifying the data based on that single question, the data in each category is more "similar" to each other, then that split is good. We then further refine the data in each category by asking another question based on another feature in our data. Again, if the data in each category becomes more similar, it means that we have gained some information by splitting the data further. If we find that we have not gained any information by splitting the data, we go back, take another feature, and split the data based on that feature. This is like a tree, and at each node we are creating two branches. We repeat this process until a certain depth of the tree is reached, and that depth is decided by a human based on the bias-variance trade-off of the model.
Implementation: Model Tuning
I will fine-tune the chosen model using grid search (GridSearchCV), with at least one important parameter tuned over at least 3 different values. I will use the entire training set for this.
End of explanation
"""
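The entropy and information-gain idea described above can be sketched in plain Python, using the textbook definition (parent entropy minus the size-weighted child entropies); the toy split below is illustrative:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Parent entropy minus the size-weighted entropies of the two children."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A perfectly separating question: all 'yes' go left, all 'no' go right.
parent = ["yes"] * 4 + ["no"] * 4
gain = information_gain(parent, ["yes"] * 4, ["no"] * 4)
print(gain)  # 1.0 -- the split removes all uncertainty
```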
|
bwbadger/mifid2-rts | rts/RTS2_Worked_Examples.ipynb | bsd-3-clause | # Import the RTS 2 module and the Python date & time tools module
import rts2_annex3
import datetime
# Create a simple Python object to represent a trade.
class SampleTrade(object):
pass
sample_trade = SampleTrade()
sample_trade.asset_class_name = 'Foreign Exchange Derivatives'
sample_trade.sub_asset_class_name= 'Deliverable FX options (DO)'
sample_trade.underlying_currency_pair = ('GBP~USD')
sample_trade.from_date = datetime.date(2017, 9, 13)
sample_trade.to_date = datetime.date(2017, 10, 12)
# Now classify the trade
sample_classification = rts2_annex3.class_root.classification_for(sample_trade)
# Lastly, display the classification
sample_classification.classification_dict()
"""
Explanation: Read me
The objective of this project is to show that it is better to express regulatory requirements using executable expressions which all interested parties can share and run and test, rather than to express regulatory requirements as text documents which everyone has to interpret for themselves.
This page is a Jupyter notebook. It is a combination of a document and some live software which you can execute if you are running your own jupyter-notebook server. If you are not running a Jupyter server you can still read the document and see the code examples - you just can't run them.
If you see something wrong on this page or in the code, please create an issue in the GitHub project.
MiFID II classification of trades using the RTS 2 Annex 3 taxonomy.
Governments would prefer to avoid another financial crisis like the one in 2008 and believe that making the big players operate in a more open and transparent way will help avoid another crash.
Markets in Financial Instruments Directive II (MiFID II) is an EU law which has market transparency as its key objective. The predecessor law, MiFID I, only looked at a part of what banking firms do. MiFID II aims to cover most mainstream activity.
Governments rely on regulators to make sure that their laws are being followed. For MiFID II the primary regulator is ESMA. ESMA have produced a number of Regulatory Technical Standard (RTS) documents which aim to explain what banking firms must do to comply with the MiFID II law.
One of the RTS documents, RTS 2, explains how different kinds of trading activity can be identified. Having a clear way to say what has been traded is an important part of making the markets more transparent.
Some kinds of trading activity are already pretty transparent, for example buying and selling shares in a public company. Trades of this kind are mostly done using a public exchange, such as the New York or London stock exchanges. The 'price' for a given stock is simply the amount of money paid in the most recent trade and this price is made public by the exchange so everyone can see what the latest price is. It is pretty easy to identify what has been traded because each stock has an identifier, e.g. 'AAPL' identifies Apple Inc. shares.
Not all trades happen on public exchanges. Many trades happen directly between two parties and these are known as over the counter (OTC) trades. Each OTC trade can be fine-tuned, for example setting payment dates and interest rates. The fine tuning of OTC trades makes it hard to give an identity to what has been traded, but this is where RTS 2 comes in.
The easiest way to understand what RTS 2 is all about is to use it to classify some trades, and you can do just that below.
A Python Implementation of RTS 2 Annex 3
It would be nice if ESMA published a working software implementation of the RTS rules along with some test data so people can see exactly how the rules are supposed to work, and how reports are supposed to look. But ESMA don't do that. Each participant must somehow get an implementation of the RTS rules, either by writing it themselves or buying an implementation.
One market participant implemented the RTS rules themselves and has now released part of that implementation under an open source license, the BSD license, so anyone can see the implementation and use it. This document forms a part of that release.
Hopefully this software will encourage ESMA to produce reference implementations of their rules in the future. They could even take this software as a starting point.
The software here is written in the Python programming language. Python was chosen because the language is ubiquitous, that is it can be used easily and immediately on most modern computers; everything from a Raspberry Pi to the largest of big data clusters.
Running a really simple initial classification
The box below contains python code which runs the classification software. If you are just viewing this page then you won't be able to run the code, but if you start the page using your own local Jupyter notebook server then the code will really run if you select the box below and press control+enter. If you can run the code you might like to try changing the values of the attributes below (e.g. to_date) to see what happens.
End of explanation
"""
print(sample_classification.as_json(indent=4))
"""
Explanation: Understanding a classification
The classification is shown here as a Python dictionary, but one could imagine presenting this classification in many ways ... and this is a problem. What is the official accepted literal form of an RTS 2 classification? Nobody seems to know. So let's go with this dictionary for now.
Another point about the above representation is that it is very big, and the example is not one of the biggest! The reason for the size is that the text which appears is exactly as it appears in RTS 2. There is no obvious way to shorten the classification without inventing something, and that would open the door to arguments about what is right. This way, the classification links back to the RTS document in an extremely obvious, if verbose, way. No arguments.
To a large degree, the classification is simply repeating the information we gave our sample_trade object in the code above, but information has been checked and other information added.
This classification first confirms the identity of the RTS document the classification is based upon. The RTS rules may change over time, so it is important to know which version of the RTS a particular classification is based upon.
Next we see the Asset class and Sub-asset class, which is repeating just what we said above. When classifying a trade there are some things you just have to know. There will be some help on how to choose Asset classes and Sub-asset classes below.
Then we see something we didn't include in our trade object. The RTS 2 Annex 3 document defines a number of criteria for each kind of Sub-asset class. The Sub-asset class in this case has two criteria, and the classification included the description, the exact text, from the RTS document to explain what the criteria mean.
The values for the criteria do come from the values on our object, but some involve calculation. The currency pair criterion, criterion 1, is simply the name of underlying_currency_pair value we provided. Criterion 2 gets its value from date calculations which use the from and to dates we gave; the resulting value is a date bucket, bucket 2 in this case.
Would Json be a better format for classifications?
Because the classification is just a Python object we can change its implementation to render the classification in any way we please, or we can take the dictionary it currently produces and convert it to something else. Here, the classification above is shown as json:
End of explanation
"""
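Criterion 2's date-bucket calculation can be sketched with the standard `datetime` module. The bucket boundaries below are illustrative assumptions only; the real thresholds are defined in RTS 2 and implemented in the `rts2_annex3` module:

```python
import datetime

# Hypothetical bucket upper bounds, in days, for illustration only
# (these are NOT the RTS 2 values).
BUCKET_UPPER_BOUNDS = [7, 31, 92, 186, 365]

def maturity_bucket(from_date, to_date):
    """Return a 1-based bucket number for the tenor between two dates."""
    days = (to_date - from_date).days
    for bucket, upper in enumerate(BUCKET_UPPER_BOUNDS, start=1):
        if days <= upper:
            return bucket
    return len(BUCKET_UPPER_BOUNDS) + 1

bucket = maturity_bucket(datetime.date(2017, 9, 13), datetime.date(2017, 10, 12))
print(bucket)  # 29 days -> bucket 2 under these assumed boundaries
```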
# The root of the taxonomy is rts2_annex3.class_root. Here we ask the
# root for the asset classes, and then ask each asset class for its name.
# The names are exactly the names of the Asset classes you'll see in the RTS document.
[asset_class.name for asset_class in rts2_annex3.class_root.asset_classes]
# Each asset class is broken down into Sub-asset classes.
# So now we take the FX Derivatives asset class and display the names of
# its children, the sub-asset classes.
fx_asset_class = rts2_annex3.class_root.asset_class_by_name('Foreign Exchange Derivatives')
[sub_asset_class.name for sub_asset_class in fx_asset_class.children]
# Each sub-asset class has a number of criteria.
# Here we ask the Deliverable FX Options sub-asset class to list its
# criteria:
fx_do_sub_asset_class = fx_asset_class.sub_asset_class_by_name('Deliverable FX options (DO)')
[criterion.description for criterion in fx_do_sub_asset_class.criteria]
"""
Explanation: RTS 2 Annex 3 defines a taxonomy. A Tree.
To understand how the classification process works we need to look at what the RTS says.
The RTS 2 Annex 3 taxonomy is made up of Asset classes which get broken down into Sub-asset classes which are further broken down by combinations of criteria values.
Here is some code to list the names of elements of the taxonomy:
End of explanation
"""
import rts2_annex3
import collections
import ipywidgets as widgets
from IPython.display import display
asset_classes = rts2_annex3.class_root.asset_classes
asset_class_dict = collections.OrderedDict([
(an_asset_class.name, an_asset_class)
for an_asset_class
in asset_classes])
asset_class_widget = widgets.Dropdown(
options=asset_class_dict,
description='Asset Classes:',
disabled=False,
)
def sub_asset_class_dict(asset_class):
return collections.OrderedDict([
(sub_asset_class.name, sub_asset_class)
for sub_asset_class
in asset_class.sub_asset_classes])
sub_asset_class_widget = widgets.Dropdown(
options=sub_asset_class_dict(asset_class_widget.value),
description='Sub-asset Classes:',
disabled=False,
)
criteria_vbox = widgets.VBox([])
def criteria_widgets(sub_asset_class):
# OK, in here I need to look up the criteria for the
# sub-asset class and build the widgets in rows of HBox es
return [widgets.Label(criterion.display(prefix=""))
for criterion
in sub_asset_class.criteria]
def asset_class_changed(change):
if change['type'] == 'change' and change['name'] == 'value':
selected_asset_class = change['new']
sub_asset_class_widget.options = sub_asset_class_dict(selected_asset_class)
def sub_asset_class_changed(change):
if change['type'] == 'change' and change['name'] == 'value':
selected_sub_asset_class = change['new']
criteria_vbox.children = criteria_widgets(selected_sub_asset_class)
asset_class_widget.observe(asset_class_changed)
sub_asset_class_widget.observe(sub_asset_class_changed)
display(asset_class_widget)
display(sub_asset_class_widget)
criteria_vbox.children = criteria_widgets(sub_asset_class_widget.value)
display(criteria_vbox)
"""
Explanation: Viewing the RTS 2 taxonomy using ipywidgets
If you are running this notebook on a live Jupyter server then you can run the code below to display widgets which let you navigate the RTS 2 taxonomy.
You can select an asset class in a drop-down widget. This then populates the sub-asset classes drop-down widget for the selected asset class. Selecting a sub-asset class causes the criteria for that sub-asset class to be displayed.
Here is a screen shot of how the widgets look in action. In this example I have selected Energy Commodity Swaps which has seven criteria:
End of explanation
"""
import rts2_annex3
from IPython.display import display
max_name_length = 50
target_string = ''
root = rts2_annex3.class_root
target_string += 'Root\n'
for asset_class in root.asset_classes:
target_string += ' Asset class: "' + asset_class.name + '"\n'
for sub_asset_class in asset_class.children:
target_string += ' Sub-asset class: "' \
+ sub_asset_class.name[:max_name_length] \
+ '"\n'
for criterion in sub_asset_class.criteria:
            target_string += ' Criterion: ' \
+ str(criterion.criterion_number) \
+ ' "' \
+ criterion.description[:max_name_length] \
+ '"\n'
print("\nDon't forget, all strings have been trimmed to {limit} characters! ... \n".format(
limit=max_name_length))
print(target_string)
"""
Explanation: Viewing the RTS 2 Annex 3 taxonomy as a tree
The following code walks the RTS 2 Annex 3 taxonomy building up a string which presents the taxonomy as a tree, in the same kind of way that nested file folders on a computer could be shown as a tree.
The names are all trimmed to 50 characters just to force each item onto a single line.
End of explanation
"""
|
fggp/ctcsound | cookbook/11-GUI-with-PySimpleGUI.ipynb | lgpl-2.1 | import PySimpleGUI as sg
"""
Explanation: Building Graphical Interfaces for ctcsound with PySimpleGUI
There are many GUI toolkits for Python; an overview can be found here. We will choose PySimpleGUI. For running ctcsound, we will use iCsound.
Basic PySimpleGUI Concepts
Once PySimpleGUI is installed via pip or conda, it can be imported, by convention as sg.
End of explanation
"""
layout = [[sg.Input(key='INPUT')],
[sg.Button('Read'), sg.Exit()]]
window = sg.Window('Please type', layout)
while True:
event, values = window.read()
print(event, values)
if event is None or event == 'Exit':
break
window.close()
"""
Explanation: The code for a GUI has two main parts:
- The layout containing the widgets (mostly called elements) in the window.
- The main event loop in which the widget's information are being read.
This is a simple example:
End of explanation
"""
%load_ext csoundmagics
cs = ICsound()
orc = """
instr 1
kFreq chnget "freq"
aOut poscil .2, kFreq
out aOut, aOut
endin
schedule(1,0,-1)"""
cs.sendCode(orc)
"""
Explanation: The layout consists of lists which each contains the widgets for one line.
The sg.Input() element is the one which receives the text input, so we give it a key as an identifier. Usually the key is a string.
In the event loop, we read the values once an event (here: a press on the Read button) has been received.
As can be seen in the printout, values is a Python dictionary. If we want to retrieve the data, we write values['INPUT'].
The if condition gives two options for closing the window: either by clicking the Exit button, or by closing with the X.
GUI -> Csound
This knowledge is actually enough to start using a PySimpleGUI in iCsound. We will first look at examples where Csound parameters are controlled by widgets.
Slider
First, we start iCsound and send some code to the running instance.
End of explanation
"""
layout = [[sg.Slider(range=(500,1000),orientation='h',key='FREQ',enable_events=True)]]
window = sg.Window('Simple Slider',layout)
while True:
event, values = window.read()
if event is None:
break
cs.setControlChannel('freq',values['FREQ'])
window.close()
"""
Explanation: Now we create a GUI in which we send the slider values to the channel "freq".
End of explanation
"""
cs.sendScore('i -1 0 1')
cs.stopEngine(reset=False)
del cs
"""
Explanation: And we have to turnoff our instrument and delete the iCsound instance.
End of explanation
"""
cs = ICsound()
orc = """
instr Playback
Sfile chnget "file"
aFile[] diskin Sfile,1,0,1
kFadeOut linenr 1, 0, 1, .01
out aFile*kFadeOut
endin
"""
cs.sendCode(orc)
"""
Explanation: Button
A button can have different functions. Here we use one button to browse for a sound file, and two more to start and stop the playback.
First we create the iCsound instance and send some new code to it.
End of explanation
"""
layout = [[sg.Text('Select Audio File, then Start/Stop Playback')],
[sg.FileBrowse(key='FILE',enable_events=True), sg.Button('Start'), sg.Button('Stop')]]
window = sg.Window('',layout)
while True:
    event, values = window.read()
    if event is None:
        break
    cs.setStringChannel('file', values['FILE'])
    if event == 'Start':
        cs.sendScore('i "Playback" 0 -1')
    if event == 'Stop':
        cs.sendCode('schedule(-nstrnum("Playback"),0,0)')
cs.stopEngine(reset=False)
del cs
window.close()
"""
Explanation: Then we build a GUI with three buttons:
- A file browser. We add a string as key, and we set enable_events=True to send the browsed file as event.
- A Start button. This button will send the score i "Playback" 0 -1 to activate instrument Playback. This instrument will play the sound file in a loop.
- A Stop button. This button will stop the instrument. We can stop an instrument with negative p3 by calling it with negative p1. As we are using instrument names here, we must first transform the name "Playback" to the number Csound has given it: -nstrnum("Playback") does the trick.
Once the window is being closed, we delete the Csound instance as usual with del cs.
End of explanation
"""
# csound start
cs = ICsound()
orc = """
instr 1
kLine randomi -1,1,1,3
chnset kLine, "line"
endin
"""
cs.sendCode(orc)
cs.sendScore('i 1 0 -1')
# GUI
layout = [[sg.Slider(range=(-1,1),
orientation='h',
key='LINE',
disable_number_display=True,
resolution=.01)],
[sg.Text(size=(6,1),
key='LINET',
text_color='black',
background_color='white',
justification = 'right',
font=('Courier',16,'bold'))]
]
# window and event loop
window = sg.Window('Simple Csound -> GUI Example',layout)
while True:
event, values = window.read(timeout=100)
if event is None:
cs.sendScore('i -1 0 1')
cs.stopEngine(reset=False)
del cs
break
window['LINE'].update(cs.channel('line')[0])
window['LINET'].update('%+.3f' % cs.channel('line')[0])
window.close()
"""
Explanation: Spinbox, Menu, Checkbox, Radiobutton
to be added ...
Csound -> GUI
We will now look at the opposite direction: Sending signals from Csound to the GUI.
Basic Example
To see how things are working here, we only create a line in Csound which randomly moves between -1 and 1. We send this k-signal in a channel called "line".
On the PySimpleGUI side there is one important change to the approach so far. As we have no events which are triggered by any user action, we must include a timeout statement in the window.read() call. This will update the widgets every timeout milliseconds. So timeout=100 will refresh the GUI ten times per second.
Reading the Csound channel in the Python environment is done via iCsound's channel method. The return value of cs.channel('line'), however, is not the channel value itself, but a tuple of the value and the data type. So to retrieve the value we have to extract the first element by cs.channel('line')[0].
End of explanation
"""
def plotArray(graph, y=[], linewidth=2, color='black'):
"""Plots the array y in the graph element.
Assumes that x of graph is 0 to tablelen."""
import numpy as np
for i in np.arange(len(y)-1):
graph.draw_line((i,y[i]), (i+1,y[i+1]), color=color, width=linewidth)
"""
Explanation: Display Csound Tables
The Graph element in PySimpleGUI can be used to display Csound function tables. Currently (June 2020) the Graph element is missing a method to plot an array directly, so we first define a function for it.
End of explanation
"""
cs = ICsound()
orc = """
i0 ftgen 1, 0, 0, 1, "examples/fox.wav", 0, 0, 0
"""
cs.sendCode(orc)
tablen = cs.tableLength(1)
layout = [[sg.Graph(canvas_size=(400, 200),
graph_bottom_left=(0, -1),
graph_top_right=(tablen, 1),
background_color='white',
key='graph')]]
window = sg.Window('csound table', layout, finalize=True)
table_disp = window['graph']
y = cs.table(1)
plotArray(table_disp, y)
while True:
event, values = window.read()
if event is None:
cs.stopEngine(reset=False)
del cs
break
window.close()
"""
Explanation: For static display of a table we will put the function outside the event loop. We get the table data with iCsound's table method.
End of explanation
"""
cs = ICsound()
orc = """
seed 0
i0 ftgen 1, 0, 1024, 10, 1
instr Blur
indx = 0
while indx<1024 do
tablew limit:i(table:i(indx,1)+rnd31:i(1/p4,0),-1,1), indx, 1
indx += 1
od
schedule("Blur",.1,0,limit:i(p4-p4/100,1,p4))
endin
instr Play
iFreq random 500, 510
a1 poscil transeg:a(1/10,p3,-3,0), iFreq, 1
a2 poscil transeg:a(1/20,p3,-5,0), iFreq*2.32, 1
a3 poscil transeg:a(1/30,p3,-6,0), iFreq*4.25, 1
a4 poscil transeg:a(1/40,p3,-7,0), iFreq*6.63, 1
a5 poscil transeg:a(1/50,p3,-8,0), iFreq*9.38, 1
aSnd sum a1, a2, a3, a4, a5
aSnd butlp aSnd, 5000
aL, aR pan2 aSnd, random:i(0,1)
out aL, aR
schedule("Play", random:i(.3,1.5), random:i(2,4))
endin
schedule("Play",0,3)
schedule("Blur",3,0,100)
"""
cs.sendCode(orc)
tablen = cs.tableLength(1)
layout = [[sg.Graph(canvas_size=(400, 200),
graph_bottom_left=(0, -1),
graph_top_right=(tablen, 1),
background_color='white',
key='graph')]]
window = sg.Window('csound table real time modifications', layout, finalize=True)
table_disp = window['graph']
while True:
event, values = window.read(timeout=100)
if event is None:
cs.stopEngine(reset=False)
del cs
break
y = cs.table(1)
table_disp.erase()
plotArray(table_disp, y)
window.close()
"""
Explanation: If we want to show dynamically changing tables, we will put the plot function in the event loop. Table data and display are updated then every timeout milliseconds. In the next example an additive synthesis instrument plays some sine partials. After three seconds the table is being blurred slowly.
End of explanation
"""
|
maurov/xraysloth | notebooks/larch.ipynb | bsd-3-clause | import numpy as np
from larch.io import read_ascii
feo = read_ascii('./larch_data/feo_xafs.dat', labels = 'energy ctime i0 i1 nothing')
feo.mu = - np.log(feo.i1/feo.i0)
"""
Explanation: Examples of XAFS data analysis with Larch
First read in some data
End of explanation
"""
from larch.xafs import autobk
autobk(feo, kweight=2, rbkg=0.8, e0=7119.0)
"""
Explanation: Normalization and backgroud removal (= EXAFS extraction)
End of explanation
"""
from larch.xafs import xftf
xftf(feo, kweight=2, kmin=2, kmax=13.0, dk=5, kwindow='Kaiser-Bessel')
"""
Explanation: Fourier transform
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(feo.energy, feo.mu)
from larch.wxlib import plotlabels as plab
plt.plot(feo.k, feo.chi*feo.k**2)
plt.xlabel(plab.k)
plt.ylabel(plab.chikw.format(2))
plt.plot(feo.k, feo.chi*feo.k**2, label='chi(k)')
plt.plot(feo.k, feo.kwin, label='window')
plt.xlabel(plab.k)
plt.ylabel(plab.chikw.format(2))
plt.legend()
"""
Explanation: Basic plots can be done directly with matplotlib. The command %matplotlib inline permits in-line plots, that is, images are saved in the notebook. This means that the figures are visible when the notebook is open, even without execution.
End of explanation
"""
from sloth.utils.xafsplotter import XAFSPlotter
p = XAFSPlotter(ncols=2, nrows=2, dpi=150, figsize=(6, 4))
p.plot(feo.energy, feo.mu, label='raw', win=0)
p.plot(feo.energy, feo.i0, label='i0', win=0, side='right')
p.plot(feo.energy, feo.norm, label='norm', win=1)
p.plot(feo.k, feo.chi*feo.k**2, label='chi2', win=2)
p.plot(feo.k, feo.chi*feo.k**2, label='chi(k)', win=3)
p.plot(feo.k, feo.kwin, label='window', win=3)
p.subplots_adjust(top=0.9)
dir(feo)
"""
Explanation: A work-in-progress utility is available in sloth.utils.xafsplotter. It is simply a wrapper on top of the wonderful plt.subplots(). The goal of this utility is to produce in-line nice figures with standard layouts ready for reporting your analysis to colleagues. With little effort/customization, those plots could be converted to publication quality figures...
Currently (September 2019), not much is available. To show the idea behind it, the previous plots are condensed into a single figure.
End of explanation
"""
from wxmplot.interactive import plot
plot(feo.energy, feo.mu, label='mu', xlabel='Energy', ylabel='mu', show_legend=True)
"""
Explanation: Test interactive plot with wxmplot.interactive
With the following commands it is possible to open an external plotting window (based on wxPython) allowing interactive tasks.
End of explanation
"""
|
iagapov/ocelot | demos/ipython_tutorials/5_CSR.ipynb | gpl-3.0 | # the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy) and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# load beam distribution
# this function convert CSRtrack beam distribution to Ocelot format - ParticleArray. ParticleArray is designed for tracking.
# in order to work with converters we have to import specific module from ocelot.adaptors
from ocelot.adaptors.csrtrack2ocelot import *
"""
Explanation: This notebook was created by Sergey Tomin. Source and license info is on GitHub. May 2017.
Tutorial N5. Coherent Synchrotron Radiation.
Second order tracking, with the CSR effect, of 200k particles.
As an example, we will use bunch compressor BC2 of the European XFEL Injector.
The CSR module uses a fast ‘projected’ 1-D method from CSRtrack code and follows the approach presented in {Saldin et al 1998, Dohlus 2003, Dohlus 2004}. The particle tracking uses matrices up to the second order. CSR wake is calculated continuously through beam lines of arbitrary flat geometry. The transverse self-forces are neglected completely. The method calculates the longitudinal self-field of a one-dimensional beam that is obtained by a projection of the ‘real’ three-dimensional beam onto a reference trajectory. A smooth one-dimensional charge density is calculated by binning and filtering, which is crucial for the stability and accuracy of the simulation, since the instability is sensitive to high frequency components in the charge density.
This example will cover the following topics:
Initialization of the CSR object and the places where it is applied
Second order tracking with the CSR effect
Requirements
in.fmt1 - input file, initial beam distribution in CSRtrack format (was obtained from s2e simulation performed with ASTRA/CSRtrack).
out.fmt1 - output file, beam distribution after BC2 bunch compressor (was obtained with CSRtrack)
End of explanation
"""
# load and convert CSRtrack file to OCELOT beam distribution
# p_array_i = csrtrackBeam2particleArray("in.fmt1", orient="H")
# save ParticleArray to compressed numpy array
# save_particle_array("test.npz", p_array_i)
p_array_i = load_particle_array("csr_beam.npz")
# show the longitudinal phase space
plt.plot(-p_array_i.tau()*1000, p_array_i.p(), "r.")
plt.xlabel("S, mm")
plt.ylabel("dE/pc")
"""
Explanation: Load beam distribution from CSRtrack format
End of explanation
"""
b1 = Bend(l = 0.500094098121, angle=-0.03360102249639, e1=0.0, e2=-0.03360102249639, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.393.B2')
b2 = Bend(l = 0.500094098121, angle=0.03360102249639, e1=0.03360102249639, e2=0.0, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.402.B2')
b3 = Bend(l = 0.500094098121, angle=0.03360102249639, e1=0.0, e2=0.03360102249639, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.404.B2')
b4 = Bend(l = 0.500094098121, angle=-0.03360102249639, e1=-0.03360102249639, e2=0.0, gap=0, tilt=0, fint=0.0, fintx=0.0, eid='BB.413.B2')
d_slope = Drift(l=8.5/np.cos(b2.angle))
start_csr = Marker()
stop_csr = Marker()
# define cell frome the bends and drifts
cell = [start_csr, Drift(l=0.1), b1 , d_slope , b2, Drift(l=1.5) , b3, d_slope, b4, Drift(l= 1.), stop_csr]
"""
Explanation: Create the BC2 lattice
End of explanation
"""
# initialization of tracking method
method = MethodTM()
# for second order tracking we have to choose SecondTM
method.global_method = SecondTM
# for first order tracking uncomment next line
# method.global_method = TransferMap
lat = MagneticLattice(cell, method=method)
"""
Explanation: Initialization of the tracking method and MagneticLattice object
End of explanation
"""
csr = CSR()
csr.n_bin = 300
csr.m_bin = 5
csr.sigma_min = 0.2e-6
"""
Explanation: Create CSR object
End of explanation
"""
navi = Navigator(lat)
# track without CSR effect
p_array_no = deepcopy(p_array_i)
print("\n tracking without CSR effect .... ")
start = time()
tws_no, p_array_no = track(lat, p_array_no, navi)
print("\n time exec:", time() - start, "sec")
# again create Navigator with needed step in [m]
navi = Navigator(lat)
navi.unit_step = 0.1 # m
# add csr process to navigator with start and stop elements
navi.add_physics_proc(csr, start_csr, stop_csr)
# tracking
start = time()
p_array_csr = deepcopy(p_array_i)
print("\n tracking with CSR effect .... ")
tws_csr, p_array_csr = track(lat, p_array_csr, navi)
print("\n time exec:", time() - start, "sec")
# recalculate reference particle
from ocelot.cpbd.beam import *
recalculate_ref_particle(p_array_csr)
recalculate_ref_particle(p_array_no)
# load and convert CSRtrack file to OCELOT beam distribution
# distribution after BC2
# p_array_out = csrtrackBeam2particleArray("out.fmt1", orient="H")
# save ParticleArray to compressed numpy array
# save_particle_array("scr_track.npz", p_array_out)
p_array_out = load_particle_array("scr_track.npz")
# standard matplotlib functions
plt.figure(2, figsize=(10, 6))
plt.subplot(121)
plt.plot(p_array_no.tau()*1000, p_array_no.p(), 'g.', label="OCELOT no CSR")
plt.plot(p_array_csr.tau()*1000, p_array_csr.p(), 'r.', label="OCELOT CSR")
plt.plot(p_array_out.tau()*1000, p_array_out.p(), 'b.', label="CSRtrack")
plt.legend(loc=3)
plt.xlabel("s, mm")
plt.ylabel("dE/pc")
plt.grid(True)
plt.subplot(122)
plt.plot(p_array_no.tau()*1000, p_array_no.p(), 'g.', label="Ocelot no CSR")
plt.plot(p_array_out.tau()*1000, p_array_out.p(), 'b.', label="CSRtrack")
plt.plot(p_array_csr.tau()*1000, p_array_csr.p(), 'r.', label="OCELOT CSR")
plt.legend(loc=3)
plt.xlabel("s, mm")
plt.ylabel("dE/pc")
plt.grid(True)
"""
Explanation: Track particles with and without CSR effect
End of explanation
"""
|
sisnkemp/deep-learning | language-translation/dlnd_language_translation.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
"""
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
"""
view_sentence_range = (10, 20)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
"""
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
"""
source_id_text = []
target_id_text = []
for sentence in source_text.split('\n'):
ids = []
for w in sentence.split():
ids.append(source_vocab_to_int[w])
source_id_text.append(ids)
for sentence in target_text.split('\n'):
ids = []
for w in sentence.split():
ids.append(target_vocab_to_int[w])
ids.append(target_vocab_to_int['<EOS>'])
target_id_text.append(ids)
return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
"""
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
"""
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
inputs = tf.placeholder(tf.int32, shape = [None, None], name = "input")
targets = tf.placeholder(tf.int32, shape = [None, None])
learning_rate = tf.placeholder(tf.float32)
keep_prob = tf.placeholder(tf.float32, name = "keep_prob")
target_seq_len = tf.placeholder(tf.int32, shape = [None], name = "target_sequence_length")
max_target_len = tf.reduce_max(target_seq_len, name = "max_target_len")
src_seq_len = tf.placeholder(tf.int32, shape = [None], name = "source_sequence_length")
return inputs, targets, learning_rate, keep_prob, target_seq_len, max_target_len, src_seq_len
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
"""
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
rev = tf.reverse(target_data, axis = [1]) # reverse each row to put last element first
sliced = tf.slice(rev, [0, 1], [-1, -1]) # slice to strip the last element of each row (now in first position) off
unrev = tf.reverse(sliced, axis = [1]) # reverse rows to restore original order
go = tf.constant(target_vocab_to_int['<GO>'], dtype = tf.int32, shape = [batch_size, 1])
concat = tf.concat([go, unrev], axis = 1)
return concat
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
"""
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
End of explanation
"""
from imp import reload
reload(tests)
# Helper returning a single LSTM cell, used below to build stacked cells.
def lstm_cell(rnn_size):
return tf.contrib.rnn.BasicLSTMCell(rnn_size)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
embedded_inputs = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
cells = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(cells, output_keep_prob = keep_prob)
outputs, state = tf.nn.dynamic_rnn(rnn, embedded_inputs, dtype = tf.float32)
return outputs, state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
"""
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
"""
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
trainhelper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, trainhelper, encoder_state, output_layer = output_layer)
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished = True, maximum_iterations = max_summary_length)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
"""
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size])
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished = True, maximum_iterations = max_target_sequence_length)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
"""
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
"""
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
cells = tf.contrib.rnn.MultiRNNCell([lstm_cell(rnn_size) for _ in range(num_layers)])
rnn = tf.contrib.rnn.DropoutWrapper(cells, output_keep_prob = keep_prob)
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope("decode"):
training_decoder_output = decoding_layer_train(encoder_state, rnn, dec_embed_input, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope("decode", reuse = True):
inference_decoder_output = decoding_layer_infer(encoder_state, rnn, dec_embeddings,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
max_target_sequence_length, target_vocab_size,
output_layer, batch_size, keep_prob)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
"""
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
"""
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param max_target_sentence_length: Maximum length of target sentences
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
enc_outputs, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
train_dec_output, inference_dec_output = decoding_layer(dec_input, enc_state, target_sequence_length,
max_target_sentence_length, rnn_size, num_layers,
target_vocab_to_int, target_vocab_size, batch_size,
keep_prob, dec_embedding_size)
return train_dec_output, inference_dec_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
"""
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256 # 32
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 1
# Embedding Size
encoding_embedding_size = 256 # 16
decoding_embedding_size = 256 # 16
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.5
display_step = 100
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability.
Set display_step to the number of batches between each debug output statement.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
"""
Explanation: Batch and pad the source and target sequences
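As a quick sanity check, the padding step behaves like this on a toy batch (pure Python; the pad id 0 and the token ids are arbitrary):

```python
# Pad each sentence with pad_int so every sentence in the batch has the
# same length as the longest one.
def pad_sentence_batch(sentence_batch, pad_int):
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

print(pad_sentence_batch([[5, 6, 7], [8], [9, 10]], 0))
# -> [[5, 6, 7], [8, 0, 0], [9, 10, 0]]
```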
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
"""
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
sentence = sentence.lower()
unknown_id = vocab_to_int['<UNK>']
ids = [vocab_to_int.get(w, unknown_id) for w in sentence.split()]
return ids
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
"""
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
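With a toy vocabulary, the desired behavior looks like this (the vocabulary and ids below are made up for illustration):

```python
# Lowercase the sentence, split on whitespace, and map each word to its id,
# falling back to the <UNK> id for out-of-vocabulary words.
def sentence_to_seq(sentence, vocab_to_int):
    unknown_id = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unknown_id) for word in sentence.lower().split()]

vocab = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13}
print(sentence_to_seq('He saw a YELLOW truck', vocab))
# -> [10, 11, 12, 2, 13]
```

Note that 'yellow' is not in the toy vocabulary, so it maps to the <UNK> id.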
End of explanation
"""
translate_sentence = 'he saw a yellow old truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
"""
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
"""
|
KOLANICH/RichConsole | Tutorial.ipynb | unlicense | n=RichStr("I am ", "normal")
"""
Explanation: A RichStr consists of pieces of strings and other RichStrs
End of explanation
"""
n
"""
Explanation: __repr__esentation of a rich string shows a "flat" representation of a RichStr - a sequence of styles and strings where style applies to everything after it. This is how terminal works. This representation is useful for debugging.
End of explanation
"""
r=RichStr("RED", sheet=groups["Fore"]["red"])
"""
Explanation: To apply a style or a stylesheet you use sheet named argument of RichStr
End of explanation
"""
r=RichStr("RED", sheet=groups.Fore.red)
str(r)
"""
Explanation: You can also use dot notation
End of explanation
"""
print(r)
"""
Explanation: __str__ is overloaded, so you can print. Note that the red is not pure red: the color here is indexed, and indexed colors depend on the terminal palette
End of explanation
"""
print(r.toHTML())
IPython.display.display_html(r.toHTML(), raw=True)
pureRed=RGBColor("PureRed", 0xFF, bg=True)
prs=RichStr("Pure red", sheet=pureRed)
print(repr(str(prs)))
print(prs)
"""
Explanation: There is a quick-and-dirty conversion to HTML, but don't rely on it; it is still unfinished. There are some methods to get CSS rules for some styles.
End of explanation
"""
lightGoldenrod1=RGBColor("lightGoldenrod1", 0xff, 0xff, 0x5f, True)
blackOnGold=Sheet({
"Back":lightGoldenrod1, # requires 3rd-party libraries
"Fore":groups.Fore.black
})
g=RichStr(r, " on GOLD", sheet=blackOnGold)
str(g)
print(g)
print(g.toHTML())
IPython.display.display_html(g.toHTML(), raw=True)
g.sheetRepr()
"""
Explanation: you can create stylesheets from styles
End of explanation
"""
g.optimizedCodeRepr()
"""
Explanation: with RichStr.optimizedCodeRepr you can get the optimized code sequence in machine-readable form
End of explanation
"""
|
IS-ENES-Data/submission_forms | test/forms/Create_Submission_Form.ipynb | apache-2.0 | from dkrz_forms import form_widgets
form_widgets.show_status('form-generation')
"""
Explanation: Create your DKRZ data ingest request form
To generate a data submission form for you, please edit the cell below to include your name and email, as well as the project your data belongs to
Then please press "Shift" + Enter to evaluate the cell
a link to the newly generated data submission form will be provided
please follow this link to edit your personal form
Attention: remember the form password provided to you
the password is needed to retrieve your form at a later point e.g. for completion
Currently the following ingest requests are supported:
"CORDEX": CORDEX data ingest requests - CORDEX data to be published in ESGF, the form is aligned to the original CORDEX data ingest exel sheet used for ingest requests at DKRZ
"CMIP6": CMIP6 data ingest request form for data providers - CMIP6 data to be ingested and published in ESGF and which will be long term archived as part of the WDCC
"ESGF_replication" : CMIP6 data request form for data users - request for CMIP6 data to be replicated and made available as part of the DKRZ national archive
"DKRZ_CDP": data ingest request for (CMIP6 related) data collections to be included in the DKRZ CMIP data pool (CDP) e.g. for model evaluation purposes
"test": for demo and testing puposes
End of explanation
"""
from dkrz_forms import form_widgets
form_widgets.create_form()
"""
Explanation: Create a data form
Evaluate the cell below ("Shift-Enter") in case no input fields are visible
The form will be created as soon as you press "Enter" in the last input field below
<br />To fill the form, follow the URL shown as a result of the form generation.
<br />In case you want to retrieve and complete the form later on, please follow the steps outlined below in "Retrieve your DKRZ data form"
End of explanation
"""
from dkrz_forms import form_widgets
form_widgets.show_status('form-retrieval')
"""
Explanation: Retrieve your DKRZ data form
Via this form you can retrieve previously generated data forms and make them accessible via the Web again for completion.
<br/> Additionally, you can get information on the data ingest process status related to your form-based request.
End of explanation
"""
MY_LAST_NAME = "testsuite" # e.gl MY_LAST_NAME = "schulz"
#-------------------------------------------------
from dkrz_forms import form_handler, form_widgets
form_info = form_widgets.check_and_retrieve(MY_LAST_NAME)
"""
Explanation: Please provide your last name
please set your last name in the cell below and evaluate the cell (press "Shift-Return")
- when you enter the key identifying your form, the form is retrieved and made accessible via the URL presented. (The identifying key was provided to you as part of the form generation step.)
End of explanation
"""
|
RyanSkraba/beam | examples/notebooks/documentation/transforms/python/elementwise/filter-py.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master//Users/dcavazos/src/beam/examples/notebooks/documentation/transforms/python/elementwise/filter-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/filter"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
"""
!pip install --quiet -U apache-beam
"""
Explanation: Filter
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Filter"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Given a predicate, filter out all elements that don't satisfy that predicate.
It can also be used to filter based on an inequality with a given value, according to the comparison ordering of the elements.
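Conceptually, beam.Filter with a predicate behaves like Python's built-in filter, as in this plain-Python analogy (no Beam pipeline involved):

```python
# Keep only the elements for which the predicate returns True.
durations = ['perennial', 'biennial', 'perennial', 'annual']
perennials = list(filter(lambda d: d == 'perennial', durations))
print(perennials)  # -> ['perennial', 'perennial']
```

The Beam examples below do the same thing, but over a distributed PCollection instead of an in-memory list.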
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
"""
import apache_beam as beam
def is_perennial(plant):
return plant['duration'] == 'perennial'
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(is_perennial)
| beam.Map(print)
)
"""
Explanation: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Filter in multiple ways to filter out produce by their duration value.
Filter accepts a function that keeps elements that return True, and filters out the remaining elements.
Example 1: Filtering with a function
We define a function is_perennial which returns True if the element's duration equals 'perennial', and False otherwise.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(
lambda plant: plant['duration'] == 'perennial')
| beam.Map(print)
)
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 2: Filtering with a lambda function
We can also use lambda functions to simplify Example 1.
End of explanation
"""
import apache_beam as beam
def has_duration(plant, duration):
return plant['duration'] == duration
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(has_duration, 'perennial')
| beam.Map(print)
)
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: Filtering with multiple arguments
You can pass functions with multiple arguments to Filter.
They are passed as additional positional arguments or keyword arguments to the function.
In this example, has_duration takes plant and duration as arguments.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennial = pipeline | 'Perennial' >> beam.Create(['perennial'])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter perennials' >> beam.Filter(
lambda plant, duration: plant['duration'] == duration,
duration=beam.pvalue.AsSingleton(perennial),
)
| beam.Map(print)
)
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 4: Filtering with side inputs as singletons
If the PCollection has a single value, such as the average from another computation,
passing the PCollection as a singleton accesses that value.
In this example, we pass a PCollection the value 'perennial' as a singleton.
We then use that value to filter out perennials.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
valid_durations = pipeline | 'Valid durations' >> beam.Create([
'annual',
'biennial',
'perennial',
])
valid_plants = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'PERENNIAL'},
])
| 'Filter valid plants' >> beam.Filter(
lambda plant, valid_durations: plant['duration'] in valid_durations,
valid_durations=beam.pvalue.AsIter(valid_durations),
)
| beam.Map(print)
)
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 5: Filtering with side inputs as iterators
If the PCollection has multiple values, pass the PCollection as an iterator.
This accesses elements lazily as they are needed,
so it is possible to iterate over large PCollections that won't fit into memory.
End of explanation
"""
import apache_beam as beam
with beam.Pipeline() as pipeline:
keep_duration = pipeline | 'Duration filters' >> beam.Create([
('annual', False),
('biennial', False),
('perennial', True),
])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Filter plants by duration' >> beam.Filter(
lambda plant, keep_duration: keep_duration[plant['duration']],
keep_duration=beam.pvalue.AsDict(keep_duration),
)
| beam.Map(print)
)
"""
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/filter.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection),
but this requires that all the elements fit into memory.
Example 6: Filtering with side inputs as dictionaries
If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary.
Each element must be a (key, value) pair.
Note that all the elements of the PCollection must fit into memory for this.
If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.
End of explanation
"""
|
austinjalexander/sandbox | python/py/odsc/theano/Theano Tutorial.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import theano
# By convention, the tensor submodule is loaded as T
import theano.tensor as T
"""
Explanation: Theano Tutorial
Theano is a software package which allows you to write symbolic code and compile it onto different architectures (in particular, CPU and GPU). It was developed by machine learning researchers at the University of Montreal. Its use is not limited to machine learning applications, but it was designed with machine learning in mind. It's especially good for machine learning techniques which are CPU-intensive and benefit from parallelization (e.g. large neural networks).
This tutorial will cover the basic principles of Theano, including some common mental blocks which come up. It will also cover a simple multi-layer perceptron example. A more thorough Theano tutorial can be found here: http://deeplearning.net/software/theano/tutorial/
Any comments or suggestions should be directed to me or feel free to submit a pull request.
End of explanation
"""
# The theano.tensor submodule has various primitive symbolic variable types.
# Here, we're defining a scalar (0-d) variable.
# The argument gives the variable its name.
foo = T.scalar('foo')
# Now, we can define another variable bar which is just foo squared.
bar = foo**2
# It will also be a theano variable.
print type(bar)
print bar.type
# Using theano's pp (pretty print) function, we see that
# bar is defined symbolically as the square of foo
print theano.pp(bar)
"""
Explanation: Basics
Symbolic variables
In Theano, all algorithms are defined symbolically. It's more like writing out math than writing code. The following Theano variables are symbolic; they don't have an explicit value.
End of explanation
"""
# We can't compute anything with foo and bar yet.
# We need to define a theano function first.
# The first argument of theano.function defines the inputs to the function.
# Note that bar relies on foo, so foo is an input to this function.
# theano.function will compile code for computing values of bar given values of foo
f = theano.function([foo], bar)
print f(3)
# Alternatively, in some cases you can use a symbolic variable's eval method.
# This can be more convenient than defining a function.
# The eval method takes a dictionary where the keys are theano variables and the values are values for those variables.
print bar.eval({foo: 3})
# We can also use Python functions to construct Theano variables.
# It seems pedantic here, but can make syntax cleaner for more complicated examples.
def square(x):
return x**2
bar = square(foo)
print bar.eval({foo: 3})
"""
Explanation: Functions
To actually compute things with Theano, you define symbolic functions, which can then be called with actual values to retrieve an actual value.
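For contrast, the eager plain-Python equivalent of f = theano.function([foo], bar) is just an ordinary function; there is no separate compilation step:

```python
# Ordinary Python: evaluation happens immediately at call time,
# unlike Theano, which first compiles a symbolic graph.
def f(foo):
    return foo ** 2

print(f(3))  # -> 9
```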
End of explanation
"""
A = T.matrix('A')
x = T.vector('x')
b = T.vector('b')
y = T.dot(A, x) + b
# Note that squaring a matrix is element-wise
z = T.sum(A**2)
# theano.function can compute multiple things at a time
# You can also set default parameter values
# We'll cover theano.config.floatX later
b_default = np.array([0, 0], dtype=theano.config.floatX)
linear_mix = theano.function([A, x, theano.Param(b, default=b_default)], [y, z])
# Supplying values for A, x, and b
print linear_mix(np.array([[1, 2, 3],
[4, 5, 6]], dtype=theano.config.floatX), #A
np.array([1, 2, 3], dtype=theano.config.floatX), #x
np.array([4, 5], dtype=theano.config.floatX)) #b
# Using the default value for b
print linear_mix(np.array([[1, 2, 3],
[4, 5, 6]]), #A
np.array([1, 2, 3])) #x
"""
Explanation: theano.tensor
Theano also has variable types for vectors, matrices, and tensors. The theano.tensor submodule has various functions for performing operations on these variables.
End of explanation
"""
shared_var = theano.shared(np.array([[1, 2], [3, 4]], dtype=theano.config.floatX))
# The type of the shared variable is deduced from its initialization
print shared_var.type()
# We can set the value of a shared variable using set_value
shared_var.set_value(np.array([[3, 4], [2, 1]], dtype=theano.config.floatX))
# ..and get it using get_value
print shared_var.get_value()
shared_squared = shared_var**2
# The first argument of theano.function (inputs) tells Theano what the arguments to the compiled function should be.
# Note that because shared_var is shared, it already has a value, so it doesn't need to be an input to the function.
# Theano implicitly treats shared_var as an input to any function that uses shared_squared,
# so we don't need to include it in the inputs argument of theano.function.
function_1 = theano.function([], shared_squared)
print function_1()
"""
Explanation: Shared variables
Shared variables are a little different - they actually do have an explicit value, which can be get/set and is shared across functions which use the variable. They're also useful because they have state across function calls.
End of explanation
"""
# We can also update the state of a shared var in a function
subtract = T.matrix('subtract')
# updates takes a dict where keys are shared variables and values are the new value the shared variable should take
# Here, updates will set shared_var = shared_var - subtract
function_2 = theano.function([subtract], shared_var, updates={shared_var: shared_var - subtract})
print "shared_var before subtracting [[1, 1], [1, 1]] using function_2:"
print shared_var.get_value()
# Subtract [[1, 1], [1, 1]] from shared_var
function_2(np.array([[1, 1], [1, 1]]))
print "shared_var after calling function_2:"
print shared_var.get_value()
# Note that this also changes the output of function_1, because shared_var is shared!
print "New output of function_1() (shared_var**2):"
print function_1()
"""
Explanation: updates
The value of a shared variable can be updated in a function by using the updates argument of theano.function.
End of explanation
"""
# Recall that bar = foo**2
# We can compute the gradient of bar with respect to foo like so:
bar_grad = T.grad(bar, foo)
# We expect that bar_grad = 2*foo
bar_grad.eval({foo: 10})
# Recall that y = Ax + b
# We can also compute a Jacobian like so:
y_J = theano.gradient.jacobian(y, x)
linear_mix_J = theano.function([A, x, b], y_J)
# Because it's a linear mix, we expect the output to always be A
print linear_mix_J(np.array([[9, 8, 7], [4, 5, 6]]), #A
np.array([1, 2, 3]), #x
np.array([4, 5])) #b
# We can also compute the Hessian with theano.gradient.hessian (skipping that here)
"""
Explanation: Gradients
A pretty huge benefit of using Theano is its ability to compute gradients. This allows you to symbolically define a function and quickly compute its (numerical) derivative without actually deriving the derivative.
End of explanation
"""
# Let's create another matrix, "B"
B = T.matrix('B')
# And, a symbolic variable which is just A (from above) dotted against B
# At this point, Theano doesn't know the shape of A or B, so there's no way for it to know whether A dot B is valid.
C = T.dot(A, B)
# Now, let's try to use it
C.eval({A: np.zeros((3, 4)), B: np.zeros((5, 6))})
"""
Explanation: Debugging
Debugging in Theano can be a little tough because the code which is actually being run is pretty far removed from the code you wrote. One simple way to sanity check your Theano expressions before actually compiling any functions is to use test values.
End of explanation
"""
# This tells Theano we're going to use test values, and to warn when there's an error with them.
# The setting 'warn' means "warn me when I haven't supplied a test value"
theano.config.compute_test_value = 'warn'
# Setting the tag.test_value attribute gives the variable its test value
A.tag.test_value = np.random.random((3, 4))
B.tag.test_value = np.random.random((5, 6))
# Now, we get an error when we compute C which points us to the correct line!
C = T.dot(A, B)
# We won't be using test values for the rest of the tutorial.
theano.config.compute_test_value = 'off'
"""
Explanation: The above error message is a little opaque (and it would be even worse had we not given the Theano variables A and B names). Errors like this can be particularly confusing when the Theano expression being computed is very complex. They also won't ever tell you the line number in your Python code where A dot B was computed, because the actual code being run is not your Python code; it's the compiled Theano code! Fortunately, "test values" let us get around this issue. N.B.: not all Theano methods (for example, and significantly, scan) allow for test values.
End of explanation
"""
# A simple division function
num = T.scalar('num')
den = T.scalar('den')
divide = theano.function([num, den], num/den)
print divide(10, 2)
# This will cause a NaN
print divide(0, 0)
# To compile a function in debug mode, just set mode='DebugMode'
divide = theano.function([num, den], num/den, mode='DebugMode')
# NaNs now cause errors
print divide(0, 0)
"""
Explanation: Another place where debugging is useful is when an invalid calculation is done, e.g. one which results in nan. By default, Theano will silently allow these nan values to be computed and used, but this silence can be catastrophic to the rest of your Theano computation. At the cost of speed, we can instead have Theano compile functions in DebugMode, where an invalid computation causes an error
End of explanation
"""
# You can get the values being used to configure Theano like so:
print theano.config.device
print theano.config.floatX
# You can also get/set them at runtime:
old_floatX = theano.config.floatX
theano.config.floatX = 'float32'
# Be careful that you're actually using floatX!
# For example, the following will cause var to be a float64 regardless of floatX due to numpy defaults:
var = theano.shared(np.array([1.3, 2.4]))
print var.type() #!!!
# So, whenever you use a numpy array, make sure to set its dtype to theano.config.floatX
var = theano.shared(np.array([1.3, 2.4], dtype=theano.config.floatX))
print var.type()
# Revert to old value
theano.config.floatX = old_floatX
"""
Explanation: Using the CPU vs GPU
Theano can transparently compile onto different hardware. What device it uses by default depends on your .theanorc file and any environment variables defined, as described in detail here: http://deeplearning.net/software/theano/library/config.html
Currently, you should use float32 when using most GPUs, but most people prefer to use float64 on a CPU. For convenience, Theano provides the floatX configuration variable which designates what float accuracy to use. For example, you can run a Python script with certain environment variables set to use the CPU:
THEANO_FLAGS=device=cpu,floatX=float64 python your_script.py
or GPU:
THEANO_FLAGS=device=gpu,floatX=float32 python your_script.py
End of explanation
"""
class Layer(object):
def __init__(self, W_init, b_init, activation):
'''
A layer of a neural network, computes s(Wx + b) where s is a nonlinearity and x is the input vector.
:parameters:
- W_init : np.ndarray, shape=(n_output, n_input)
Values to initialize the weight matrix to.
- b_init : np.ndarray, shape=(n_output,)
Values to initialize the bias vector
- activation : theano.tensor.elemwise.Elemwise
Activation function for layer output
'''
# Retrieve the input and output dimensionality based on W's initialization
n_output, n_input = W_init.shape
# Make sure b is n_output in size
assert b_init.shape == (n_output,)
# All parameters should be shared variables.
# They're used in this class to compute the layer output,
# but are updated elsewhere when optimizing the network parameters.
# Note that we are explicitly requiring that W_init has the theano.config.floatX dtype
self.W = theano.shared(value=W_init.astype(theano.config.floatX),
# The name parameter is solely for printing purposes
name='W',
# Setting borrow=True allows Theano to use user memory for this object.
# It can make code slightly faster by avoiding a deep copy on construction.
# For more details, see
# http://deeplearning.net/software/theano/tutorial/aliasing.html
borrow=True)
# We can force our bias vector b to be a column vector using numpy's reshape method.
# When b is a column vector, we can pass a matrix-shaped input to the layer
# and get a matrix-shaped output, thanks to broadcasting (described below)
self.b = theano.shared(value=b_init.reshape(n_output, 1).astype(theano.config.floatX),
name='b',
borrow=True,
# Theano allows for broadcasting, similar to numpy.
# However, you need to explicitly denote which axes can be broadcasted.
# By setting broadcastable=(False, True), we are denoting that b
# can be broadcast (copied) along its second dimension in order to be
# added to another variable. For more information, see
# http://deeplearning.net/software/theano/library/tensor/basic.html
broadcastable=(False, True))
self.activation = activation
# We'll compute the gradient of the cost of the network with respect to the parameters in this list.
self.params = [self.W, self.b]
def output(self, x):
'''
Compute this layer's output given an input
:parameters:
- x : theano.tensor.var.TensorVariable
Theano symbolic variable for layer input
:returns:
- output : theano.tensor.var.TensorVariable
Mixed, biased, and activated x
'''
# Compute linear mix
lin_output = T.dot(self.W, x) + self.b
# Output is just linear mix if no activation function
# Otherwise, apply the activation function
return (lin_output if self.activation is None else self.activation(lin_output))
"""
Explanation: Example: MLP
Defining a multilayer perceptron is out of the scope of this tutorial; please see here for background information:
http://en.wikipedia.org/wiki/Multilayer_perceptron. We will be using the convention that datapoints are column vectors.
Layer class
We'll be defining our multilayer perceptron as a series of "layers", each applied successively to the input to produce the network output. Each layer is defined as a class, which stores a weight matrix and a bias vector and includes a function for computing the layer's output.
Note that if we weren't using Theano, we might expect the output method to take in a vector and return the layer's activation in response to this input. However, with Theano, the output function is instead meant to be used to create (using theano.function) a function which can take in a vector and return the layer's activation. So, if you were to pass, say, a np.ndarray to the Layer class's output function, you'd get an error. Instead, we'll construct a function for actually computing the Layer's activation outside of the class itself.
End of explanation
"""
class MLP(object):
def __init__(self, W_init, b_init, activations):
'''
Multi-layer perceptron class, computes the composition of a sequence of Layers
:parameters:
- W_init : list of np.ndarray, len=N
Values to initialize the weight matrix in each layer to.
The layer sizes will be inferred from the shape of each matrix in W_init
- b_init : list of np.ndarray, len=N
Values to initialize the bias vector in each layer to
- activations : list of theano.tensor.elemwise.Elemwise, len=N
Activation function for layer output for each layer
'''
# Make sure the input lists are all of the same length
assert len(W_init) == len(b_init) == len(activations)
# Initialize lists of layers
self.layers = []
# Construct the layers
for W, b, activation in zip(W_init, b_init, activations):
self.layers.append(Layer(W, b, activation))
# Combine parameters from all layers
self.params = []
for layer in self.layers:
self.params += layer.params
def output(self, x):
'''
Compute the MLP's output given an input
:parameters:
- x : theano.tensor.var.TensorVariable
Theano symbolic variable for network input
:returns:
- output : theano.tensor.var.TensorVariable
x passed through the MLP
'''
# Feed the input forward through each layer in turn
for layer in self.layers:
x = layer.output(x)
return x
def squared_error(self, x, y):
'''
Compute the squared euclidean error of the network output against the "true" output y
:parameters:
- x : theano.tensor.var.TensorVariable
Theano symbolic variable for network input
- y : theano.tensor.var.TensorVariable
Theano symbolic variable for desired network output
:returns:
- error : theano.tensor.var.TensorVariable
The squared Euclidean distance between the network output and y
'''
return T.sum((self.output(x) - y)**2)
"""
Explanation: MLP class
Most of the functionality of our MLP is contained in the Layer class; the MLP class is essentially just a container for a list of Layers and their parameters. The output function simply recursively computes the output for each layer. Finally, the squared_error returns the squared Euclidean distance between the output of the network given an input and the desired (ground truth) output. This function is meant to be used as a cost in the setting of minimizing cost over some training data. As above, the output and squared error functions are not to be used for actually computing values; instead, they're to be used to create functions which are used to compute values.
End of explanation
"""
def gradient_updates_momentum(cost, params, learning_rate, momentum):
'''
Compute updates for gradient descent with momentum
:parameters:
- cost : theano.tensor.var.TensorVariable
Theano cost function to minimize
- params : list of theano.tensor.var.TensorVariable
Parameters to compute gradient against
- learning_rate : float
Gradient descent learning rate
- momentum : float
Momentum parameter, should be at least 0 (standard gradient descent) and less than 1
:returns:
updates : list
List of updates, one for each parameter
'''
# Make sure momentum is a sane value
assert momentum < 1 and momentum >= 0
# List of update steps for each parameter
updates = []
# Just gradient descent on cost
for param in params:
# For each parameter, we'll create a param_update shared variable.
# This variable will keep track of the parameter's update step across iterations.
# We initialize it to 0
param_update = theano.shared(param.get_value()*0., broadcastable=param.broadcastable)
# Each parameter is updated by taking a step in the direction of the gradient.
# However, we also "mix in" the previous step according to the given momentum value.
# Note that when updating param_update, we are using its old value and also the new gradient step.
updates.append((param, param - learning_rate*param_update))
# Note that we don't need to derive backpropagation to compute updates - just use T.grad!
updates.append((param_update, momentum*param_update + (1. - momentum)*T.grad(cost, param)))
return updates
"""
Explanation: Gradient descent
To train the network, we will minimize the cost (squared Euclidean distance of network output vs. ground-truth) over a training set using gradient descent. When doing gradient descent on neural nets, it's very common to use momentum, which is simply a leaky integrator on the parameter update. That is, when updating parameters, a linear mix of the current gradient update and the previous gradient update is computed. This tends to make the network converge more quickly on a good solution and can help avoid local minima in the cost function. With traditional gradient descent, we are guaranteed to decrease the cost at each iteration. When we use momentum, we lose this guarantee, but this is generally seen as a small price to pay for the improvement momentum usually gives.
In Theano, we store the previous parameter update as a shared variable so that its value is preserved across iterations. Then, during the gradient update, we not only update the parameters, but we also update the previous parameter update shared variable.
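The same rule can be sketched in plain NumPy (an illustrative re-implementation; the helper name and the toy objective below are made up for the example):

```python
import numpy as np

def minimize_with_momentum(grad, x0, learning_rate=0.1, momentum=0.9, n_steps=200):
    """Gradient descent with momentum: keep a running update vector that mixes
    the previous step with the new gradient, mirroring the two Theano updates
    in gradient_updates_momentum."""
    x = np.asarray(x0, dtype=float)
    step = np.zeros_like(x)  # analogue of the param_update shared variable
    for _ in range(n_steps):
        x = x - learning_rate * step                          # param update uses the old step
        step = momentum * step + (1.0 - momentum) * grad(x)   # then refresh the step
    return x

# Minimize f(x) = x^2 (gradient 2x); the iterate should end up near 0
x_min = minimize_with_momentum(lambda x: 2.0 * x, x0=[5.0])
```

Note that, as in the Theano version, the parameter update deliberately uses the previous step while the step itself is refreshed afterwards.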
End of explanation
"""
# Training data - two randomly-generated Gaussian-distributed clouds of points in 2d space
np.random.seed(0)
# Number of points
N = 1000
# Random cluster label (0 or 1) for each point
y = np.random.randint(0, 2, N)
# Mean of each cluster
means = np.array([[-1, 1], [-1, 1]])
# Covariance (in X and Y direction) of each cluster
covariances = np.random.random_sample((2, 2)) + 1
# Generate the points: each column of X is a 2-d datapoint drawn from its cluster
X = np.vstack([np.random.randn(N)*covariances[0, y] + means[0, y],
               np.random.randn(N)*covariances[1, y] + means[1, y]])
# Plot the data
plt.figure(figsize=(8, 8))
plt.scatter(X[0, :], X[1, :], c=y, lw=.3, s=3, cmap=plt.cm.cool)
plt.axis([-6, 6, -6, 6])
plt.show()
# First, set the size of each layer (and the number of layers)
# Input layer size is training data dimensionality (2)
# Output size is just 1-d: class label - 0 or 1
# Finally, let the hidden layers be twice the size of the input.
# If we wanted more layers, we could just add another layer size to this list.
layer_sizes = [X.shape[0], X.shape[0]*2, 1]
# Set initial parameter values
W_init = []
b_init = []
activations = []
for n_input, n_output in zip(layer_sizes[:-1], layer_sizes[1:]):
# Getting the correct initialization matters a lot for non-toy problems.
# However, here we can just use the following initialization with success:
# Normally distribute initial weights
W_init.append(np.random.randn(n_output, n_input))
# Set initial biases to 1
b_init.append(np.ones(n_output))
# We'll use sigmoid activation for all layers
# Note that this doesn't make a ton of sense when using squared distance
# because the sigmoid function is bounded on [0, 1].
activations.append(T.nnet.sigmoid)
# Create an instance of the MLP class
mlp = MLP(W_init, b_init, activations)
# Create Theano variables for the MLP input
mlp_input = T.matrix('mlp_input')
# ... and the desired output
mlp_target = T.vector('mlp_target')
# Learning rate and momentum hyperparameter values
# Again, for non-toy problems these values can make a big difference
# as to whether the network (quickly) converges on a good local minimum.
learning_rate = 0.01
momentum = 0.9
# Create a function for computing the cost of the network given an input
cost = mlp.squared_error(mlp_input, mlp_target)
# Create a theano function for training the network
train = theano.function([mlp_input, mlp_target], cost,
updates=gradient_updates_momentum(cost, mlp.params, learning_rate, momentum))
# Create a theano function for computing the MLP's output given some input
mlp_output = theano.function([mlp_input], mlp.output(mlp_input))
# Keep track of the number of training iterations performed
iteration = 0
# We'll only train the network with 20 iterations.
# A more common technique is to use a hold-out validation set.
# When the validation error starts to increase, the network is overfitting,
# so we stop training the net. This is called "early stopping", which we won't do here.
max_iteration = 20
while iteration < max_iteration:
# Train the network using the entire training set.
# With large datasets, it's much more common to use stochastic or mini-batch gradient descent
# where only a subset (or a single point) of the training set is used at each iteration.
# This can also help the network to avoid local minima.
current_cost = train(X, y)
# Get the current network output for all points in the training set
current_output = mlp_output(X)
# We can compute the accuracy by thresholding the output
# and computing the proportion of points whose class match the ground truth class.
accuracy = np.mean((current_output > .5) == y)
# Plot network output after this iteration
plt.figure(figsize=(8, 8))
plt.scatter(X[0, :], X[1, :], c=current_output,
lw=.3, s=3, cmap=plt.cm.cool, vmin=0, vmax=1)
plt.axis([-6, 6, -6, 6])
plt.title('Cost: {:.3f}, Accuracy: {:.3f}'.format(float(current_cost), accuracy))
plt.show()
iteration += 1
"""
Explanation: Toy example
We'll train our neural network to classify two Gaussian-distributed clusters in 2d space.
End of explanation
"""
# Source notebook: eecs445-f16/umich-eecs445-f16, lecture16_pgms_latent_vars_cond_independence (MIT license)
from __future__ import division
# scientific
%matplotlib inline
from matplotlib import pyplot as plt;
import numpy as np;
import sklearn as skl;
import sklearn.datasets;
import sklearn.cluster;
# ipython
import IPython;
# python
import os;
#####################################################
# image processing
import PIL.Image;
import PIL.ImageChops;
# trim and scale images
def trim(im, percent=100):
print("trim:", percent);
bg = PIL.Image.new(im.mode, im.size, im.getpixel((0,0)))
diff = PIL.ImageChops.difference(im, bg)
diff = PIL.ImageChops.add(diff, diff, 2.0, -100)
bbox = diff.getbbox()
if bbox:
x = im.crop(bbox)
return x.resize(((x.size[0]*percent)//100, (x.size[1]*percent)//100), PIL.Image.ANTIALIAS);
#####################################################
# daft (rendering PGMs)
import daft;
# set to FALSE to load PGMs from static images
RENDER_PGMS = False;
# decorator for pgm rendering
def pgm_render(pgm_func):
def render_func(path, percent=100, render=None, *args, **kwargs):
print("render_func:", percent);
# render
render = render if (render is not None) else RENDER_PGMS;
if render:
print("rendering");
# render
pgm = pgm_func(*args, **kwargs);
pgm.render();
pgm.figure.savefig(path, dpi=300);
# trim
img = trim(PIL.Image.open(path), percent);
img.save(path, 'PNG');
else:
print("not rendering");
# error
if not os.path.isfile(path):
raise IOError("Graphical model image %s not found. You may need to set RENDER_PGMS=True." % path);
# display
return IPython.display.Image(filename=path);#trim(PIL.Image.open(path), percent);
return render_func;
######################################################
"""
Explanation: $$ \LaTeX \text{ command declarations here.}
\newcommand{\R}{\mathbb{R}}
\renewcommand{\vec}[1]{\mathbf{#1}}
\newcommand{\X}{\mathcal{X}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\G}{\mathcal{G}}
\newcommand{\Parents}{\mathrm{Parents}}
\newcommand{\NonDesc}{\mathrm{NonDesc}}
\newcommand{\I}{\mathcal{I}}
\newcommand{\dsep}{\text{d-sep}}
$$
End of explanation
"""
X, y = skl.datasets.make_blobs(1000, cluster_std=[1.0, 2.5, 0.5], random_state=170)
plt.scatter(X[:,0], X[:,1])
"""
Explanation: EECS 445: Machine Learning
Lecture 16: Latent Variables, d-Separation, Gaussian Mixture Models
Instructor: Jacob Abernethy
Date: November 9, 2016
References
[MLAPP] Murphy, Kevin. Machine Learning: A Probabilistic Perspective. 2012.
[PRML] Bishop, Christopher. Pattern Recognition and Machine Learning. 2006.
[Koller & Friedman 2009] Koller, Daphne and Nir Friedman. Probabilistic Graphical Models. 2009.
Book chapter: The Language of Directed Acyclic Graphical Models
These notes are really nice
Outline
Review of Exponential Families
MLE
Brief discussion of conjugate priors and MAP estimation
Probabilistic Graphical Models
Review of Conditional Indep. Assumptions
Intro to Hidden Markov Models
Latent Variable Models in general
d-separation in Bayesian Networks
Mixture Models
Gaussian Mixture Model
Relationship to Clustering
Exponential Family Distributions
DEF: $p(x | \theta)$ has exponential family form if:
$$
\begin{align}
p(x | \theta)
&= \frac{1}{Z(\theta)} h(x) \exp\left[ \eta(\theta)^T \phi(x) \right] \
&= h(x) \exp\left[ \eta(\theta)^T \phi(x) - A(\theta) \right]
\end{align}
$$
$Z(\theta)$ is the partition function for normalization
$A(\theta) = \log Z(\theta)$ is the log partition function
$\phi(x) \in \R^d$ is a vector of sufficient statistics
$\eta(\theta)$ maps $\theta$ to a set of natural parameters
$h(x)$ is a scaling constant, usually $h(x)=1$
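For example, the Bernoulli distribution fits this template with $\phi(x)=x$, $\eta(\theta)=\log\frac{\theta}{1-\theta}$, $A(\eta)=\log(1+e^\eta)$, and $h(x)=1$; a small NumPy check (illustrative, not from the original slides):

```python
import numpy as np

def bernoulli_pmf(x, theta):
    """Standard parameterization: p(x | theta) = theta^x (1 - theta)^(1 - x)."""
    return theta ** x * (1 - theta) ** (1 - x)

def bernoulli_expfam(x, theta):
    """Exponential-family form: h(x) exp[eta * phi(x) - A(eta)] with
    phi(x) = x, eta = log(theta / (1 - theta)), A(eta) = log(1 + e^eta)."""
    eta = np.log(theta / (1 - theta))
    A = np.log(1 + np.exp(eta))
    return np.exp(eta * x - A)

theta = 0.3
probs_std = [bernoulli_pmf(x, theta) for x in (0, 1)]
probs_exp = [bernoulli_expfam(x, theta) for x in (0, 1)]
```

The two parameterizations agree on both outcomes, and the probabilities sum to one because $A(\eta)$ is exactly the normalizer.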
Exponential Family: MLE
To find the maximum, recall $\nabla_\theta A(\theta) = E_\theta[\phi(x)]$, so
\begin{align}
\nabla_\theta \log p(\D | \theta) & =
\nabla_\theta(\theta^T \phi(\D) - N A(\theta)) \
& = \phi(\D) - N E_\theta[\phi(X)] = 0
\end{align}
Which gives
$$E_\theta[\phi(X)] = \frac{\phi(\D)}{N} = \frac{1}{N} \sum_{k=1}^N \phi(x_k)$$
Obtaining the maximum likelihood is simply solving the calculus problem $\nabla_\theta A(\theta) = \frac{1}{N} \sum_{k=1}^N \phi(x_k)$ which is often easy
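As a concrete sketch (illustrative numbers): for a Gaussian with known unit variance, $\phi(x)=x$ and $\nabla_\mu A(\mu)=\mu$, so the moment-matching condition reduces to setting $\mu$ to the sample mean:

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.randn(10000) + 2.5   # samples from N(2.5, 1)

# For N(mu, 1): phi(x) = x and grad_mu A(mu) = E[phi(X)] = mu, so the MLE
# condition grad A(mu) = (1/N) sum_k phi(x_k) gives mu_hat = sample mean.
mu_hat = np.mean(data)
```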
Bayes for Exponential Family
Exact Bayesian analysis is considerably simplified if the prior is conjugate to the likelihood.
- Simply, this means that prior $p(\theta)$ has the same form as the posterior $p(\theta|\mathcal{D})$.
This requires likelihood to have finite sufficient statistics
* Exponential family to the rescue!
Note: We will release some notes on conjugate priors + exponential families. It's hard to learn from slides and needs a bit more description.
Likelihood for exponential family
Likelihood (writing $g(\theta) = 1/Z(\theta)$, so that the normalizer appears once per datapoint):
$$ p(\mathcal{D}|\theta) \propto g(\theta)^N \exp[\eta(\theta)^T s_N], \qquad
s_N = \sum_{i=1}^{N}\phi(x_i)$$
In terms of canonical parameters:
$$ p(\mathcal{D}|\eta) \propto \exp[N\eta^T \bar{s} -N A(\eta)] \
\bar s = \frac{1}{N}s_N $$
Conjugate prior for exponential family
The prior and posterior for an exponential family involve two parameters, $\tau$ and $\nu$, initially set to $\tau_0, \nu_0$
$$ p(\theta| \nu_0, \tau_0) \propto g(\theta)^{\nu_0} \exp[\eta(\theta)^T \tau_0] $$
Denote $\tau_0 = \nu_0 \bar{\tau}_0$ to separate out the size of the prior pseudo-data, $\nu_0$, from the mean of the sufficient statistics on this pseudo-data, $\bar{\tau}_0$. Hence,
$$ p(\theta| \nu_0, \bar \tau_0) \propto \exp[\nu_0\eta(\theta)^T \bar \tau_0 - \nu_0 A(\eta)] $$
Think of $\bar{\tau}_0$ as a "guess" of the average of the future sufficient statistics, and $\nu_0$ as the strength of this guess
Prior: Example (Bernoulli)
For a Bernoulli likelihood, $g(\theta) = 1-\theta$ and $\eta(\theta) = \log\frac{\theta}{1-\theta}$, so the conjugate prior is
$$
\begin{align}
p(\theta| \nu_0, \tau_0)
&\propto (1-\theta)^{\nu_0} \exp[\tau_0\log(\frac{\theta}{1-\theta})] \
&= \theta^{\tau_0}(1-\theta)^{\nu_0 - \tau_0}
\end{align}
$$
Define $\alpha = \tau_0 +1 $ and $\beta = \nu_0 - \tau_0 +1$ to see that this is a beta distribution.
Posterior
Posterior:
$$ p(\theta|\mathcal{D}) = p(\theta|\nu_N, \tau_N) = p(\theta| \nu_0 +N, \tau_0 +s_N) $$
Note that we obtain hyper-parameters by adding. Hence,
$$ \begin{align}
p(\eta|\mathcal{D})
&\propto \exp[\eta^T (\nu_0 \bar\tau_0 + N \bar s) - (\nu_0 + N) A(\eta) ] \
&= p(\eta|\nu_0 + N, \frac{\nu_0 \bar\tau_0 + N \bar s}{\nu_0 + N})
\end{align}$$
where $\bar s = \frac 1 N \sum_{i=1}^{N}\phi(x_i)$.
posterior hyper-parameters are a convex combination of the prior mean hyper-parameters and the average of the sufficient statistics.
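A concrete instance is the Beta-Bernoulli pair (a small sketch with made-up data): the conjugate update just adds counts to the hyper-parameters, and the posterior mean is exactly a convex combination of the prior mean and the sample mean:

```python
import numpy as np

# Beta(alpha0, beta0) prior on a Bernoulli parameter theta
alpha0, beta0 = 2.0, 2.0                          # prior pseudo-counts
data = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0])  # observed coin flips
N = len(data)

# Conjugate update: add the observed sufficient statistics (counts) to the prior
alpha_n = alpha0 + data.sum()
beta_n = beta0 + N - data.sum()
posterior_mean = alpha_n / (alpha_n + beta_n)

# The same number as a convex combination of prior mean and sample mean
n0 = alpha0 + beta0                               # strength of the prior
convex_combo = (n0 / (n0 + N)) * (alpha0 / n0) + (N / (n0 + N)) * data.mean()
```

With 7 heads in 10 flips and a Beta(2, 2) prior, both expressions give $9/14$.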
Back to Graphical Models!
Recall: Bayesian Networks: Definition
A Bayesian Network $\mathcal{G}$ is a directed acyclic graph whose nodes represent random variables $X_1, \dots, X_n$.
- Let $\Parents_\G(X_k)$ denote the parents of $X_k$ in $\G$
- Let $\NonDesc_\G(X_k)$ denote the variables in $\G$ who are not descendants of $X_k$.
Examples will come shortly...
Bayesian Networks: Local Independencies
Every Bayesian Network $\G$ encodes a set $\I_\ell(\G)$ of local independence assumptions:
For each variable $X_k$, we have $(X_k \perp \NonDesc_\G(X_k) \mid \Parents_\G(X_k))$
Every node $X_k$ is conditionally independent of its nondescendants given its parents.
Example: Naive Bayes
The graphical model for Naive Bayes is shown below:
- $\Parents_\G(X_k) = { C }$, $\NonDesc_\G(X_k) = { X_j }_{j\neq k} \cup { C }$
- Therefore $X_j \perp X_k \mid C$ for any $j \neq k$
<img src="../lecture15_exp_families_bayesian_networks/images/naive-bayes.png">
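This conditional independence can be checked by simulation (an illustrative sketch; the generative model below is made up for the example):

```python
import numpy as np

rng = np.random.RandomState(0)
N = 200000

# Generative model: C ~ Bernoulli(0.5); X1, X2 | C are *independent* Gaussians
# whose means are shifted by C -- exactly the Naive Bayes assumption.
C = rng.rand(N) < 0.5
X1 = rng.randn(N) + 3.0 * C
X2 = rng.randn(N) + 3.0 * C

marginal_corr = np.corrcoef(X1, X2)[0, 1]    # large: C induces correlation
cond_corr = np.corrcoef(X1[C], X2[C])[0, 1]  # ~0: X1 and X2 independent given C
```

Marginally the features are strongly correlated (the hidden class shifts both), but conditioning on the class makes the correlation vanish.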
Factorization Theorem: Statement
Theorem: (Koller & Friedman 3.1) If $\G$ is an I-map for $P$, then $P$ factorizes as follows:
$$
P(X_1, \dots, X_N) = \prod_{k=1}^N P(X_k \mid \Parents_\G(X_k))
$$
Example: Fully Connected Graph
A fully connected graph makes no independence assumptions.
$$
P(A,B,C) = P(A) P(B|A) P(C|A,B)
$$
<img src="../lecture15_exp_families_bayesian_networks/images/fully-connected-b.png">
Important PGM Example: Markov Chain
State at time $t$ depends only on state at time $t-1$.
$$
P(X_0, X_1, \dots, X_N) = P(X_0) \prod_{t=1}^N P(X_t \mid X_{t-1})
$$
<img src="../lecture15_exp_families_bayesian_networks/images/markov-chain.png">
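A minimal sketch of this factorization (the initial distribution and transition probabilities are illustrative):

```python
import numpy as np

# Two-state Markov chain: initial distribution and transition matrix
p0 = np.array([0.6, 0.4])
T = np.array([[0.7, 0.3],
              [0.2, 0.8]])  # T[i, j] = P(X_t = j | X_{t-1} = i)

def chain_prob(states, p0, T):
    """Joint probability P(X_0, ..., X_N) via the Markov factorization."""
    prob = p0[states[0]]
    for prev, curr in zip(states[:-1], states[1:]):
        prob *= T[prev, curr]
    return prob

p = chain_prob([0, 0, 1, 1], p0, T)  # P(X0=0) * P(0->0) * P(0->1) * P(1->1)
```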
Compact Representation of PGM: Plate Notation
We can represent (conditionally) iid variables using plate notation.
A box around a variable (or set of variables), with a $K$, means that we have access to $K$ iid samples of this variable.
<img src="../lecture15_exp_families_bayesian_networks/images/plate-example.png">
Unobserved Variables: Hidden Markov Model
Noisy observations $X_k$ generated from hidden Markov chain $Y_k$. (More on this soon)
<img src="../lecture15_exp_families_bayesian_networks/images/hmm.png">
This Brings us to: Latent Variable Models
Uses material from [MLAPP] §10.1-10.4, §11.1-11.2
Latent Variable Models
In general, the goal of probabilistic modeling is to
Use what we know to make inferences about what we don't know.
Graphical models provide a natural framework for this problem.
- Assume unobserved variables are correlated due to the influence of unobserved latent variables.
- Latent variables encode beliefs about the generative process.
In a graphical model, we will often shade in the observed variables to distinguish them from hidden variables.
Example: Gaussian Mixture Models
This dataset is hard to explain with a single distribution.
- Underlying density is complicated overall...
- But it's clearly three Gaussians!
End of explanation
"""
@pgm_render
def pgm_gmm():
pgm = daft.PGM([4,4], origin=[-2,-1], node_unit=0.8, grid_unit=2.0);
# nodes
pgm.add_node(daft.Node("pi", r"$\pi$", 0, 1));
pgm.add_node(daft.Node("z", r"$Z_j$", 0.7, 1));
pgm.add_node(daft.Node("x", r"$X_j$", 1.3, 1, observed=True));
pgm.add_node(daft.Node("mu", r"$\mu$", 0.7, 0.3));
pgm.add_node(daft.Node("sigma", r"$\Sigma$", 1.3, 0.3));
# edges
pgm.add_edge("pi", "z", head_length=0.08);
pgm.add_edge("z", "x", head_length=0.08);
pgm.add_edge("mu", "x", head_length=0.08);
pgm.add_edge("sigma", "x", head_length=0.08);
pgm.add_plate(daft.Plate([0.4,0.8,1.3,0.5], label=r"$\qquad\qquad\qquad\;\; N$",
shift=-0.1))
return pgm;
%%capture
pgm_gmm("images/pgm/pgm-gmm.png")
"""
Explanation: Example: Mixture Models
Instead, introduce a latent cluster label $z_j \in [K]$ for each datapoint $x_j$,
$$
\begin{align}
z_j &\sim \mathrm{Cat}(\pi_1, \dots, \pi_K)
& \forall\, j=1,\dots,N \
x_j \mid z_j &\sim \mathcal{N}(\mu_{z_j}, \Sigma_{z_j})
& \forall\, j=1,\dots,N \
\end{align}
$$
This allows us to explain a complicated density as a mixture of simpler densities:
$$
P(x | \mu, \Sigma) = \sum_{k=1}^K \pi_k \mathcal{N}(x | \mu_k, \Sigma_k)
$$
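A small 1-d sketch of this density (component parameters are illustrative; the notebook's blob example is 2-d, but the formula is the same):

```python
import numpy as np

def gmm_density(x, pis, mus, sigmas):
    """Mixture of 1-d Gaussians: sum_k pi_k * N(x | mu_k, sigma_k^2)."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for pi_k, mu_k, s_k in zip(pis, mus, sigmas):
        total += pi_k * np.exp(-0.5 * ((x - mu_k) / s_k) ** 2) / (s_k * np.sqrt(2 * np.pi))
    return total

# Three components with different spreads, echoing the blob dataset above
pis = [0.3, 0.4, 0.3]
mus = [-4.0, 0.0, 5.0]
sigmas = [1.0, 2.5, 0.5]
xs = np.linspace(-15.0, 15.0, 20001)
density = gmm_density(xs, pis, mus, sigmas)
# Because the mixture weights sum to 1, the density integrates to ~1
area = density.sum() * (xs[1] - xs[0])
```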
Example: Mixture Models
End of explanation
"""
@pgm_render
def pgm_hmm():
pgm = daft.PGM([7, 7], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("Y1", r"$Y_1$", 1, 3.5))
pgm.add_node(daft.Node("Y2", r"$Y_2$", 2, 3.5))
pgm.add_node(daft.Node("Y3", r"$\dots$", 3, 3.5, plot_params={'ec':'none'}))
pgm.add_node(daft.Node("Y4", r"$Y_N$", 4, 3.5))
pgm.add_node(daft.Node("x1", r"$X_1$", 1, 2.5, observed=True))
pgm.add_node(daft.Node("x2", r"$X_2$", 2, 2.5, observed=True))
pgm.add_node(daft.Node("x3", r"$\dots$", 3, 2.5, plot_params={'ec':'none'}))
pgm.add_node(daft.Node("x4", r"$X_N$", 4, 2.5, observed=True))
# Add in the edges.
pgm.add_edge("Y1", "Y2", head_length=0.08)
pgm.add_edge("Y2", "Y3", head_length=0.08)
pgm.add_edge("Y3", "Y4", head_length=0.08)
pgm.add_edge("Y1", "x1", head_length=0.08)
pgm.add_edge("Y2", "x2", head_length=0.08)
pgm.add_edge("Y4", "x4", head_length=0.08)
return pgm;
%%capture
pgm_hmm("images/pgm/hmm.png");
"""
Explanation: Example: Hidden Markov Models
Noisy observations $X_k$ generated from hidden Markov chain $Y_k$.
$$
P(\vec{X}, \vec{Y}) = P(Y_1) P(X_1 \mid Y_1) \prod_{k=2}^N \left(P(Y_k \mid Y_{k-1}) P(X_k \mid Y_k)\right)
$$
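A minimal numeric sketch of this factorization, with made-up transition and emission tables (two hidden states, binary observations):

```python
import numpy as np
from itertools import product

# Hypothetical HMM parameters (illustrative values)
p_y1 = np.array([0.6, 0.4])                 # P(Y_1)
A = np.array([[0.7, 0.3], [0.2, 0.8]])      # A[i, j] = P(Y_k = j | Y_{k-1} = i)
B = np.array([[0.9, 0.1], [0.3, 0.7]])      # B[i, x] = P(X_k = x | Y_k = i)

def joint(xs, ys):
    """P(X = xs, Y = ys) via the chain factorization above."""
    p = p_y1[ys[0]] * B[ys[0], xs[0]]
    for k in range(1, len(xs)):
        p *= A[ys[k - 1], ys[k]] * B[ys[k], xs[k]]
    return p

xs = (0, 1, 1)
# Marginal P(X) by summing the factorized joint over all hidden paths
p_x = sum(joint(xs, ys) for ys in product(range(2), repeat=len(xs)))
print(p_x)
```

Summing over all $2^N$ hidden paths is exponential; the forward algorithm exploits the same factorization to do it in linear time.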
End of explanation
"""
@pgm_render
def pgm_unsupervised():
pgm = daft.PGM([6, 6], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("d1", r"$Z_1$", 2, 3.5))
pgm.add_node(daft.Node("di", r"$Z_2$", 3, 3.5))
pgm.add_node(daft.Node("dn", r"$Z_3$", 4, 3.5))
pgm.add_node(daft.Node("f1", r"$X_1$", 1, 2.50, observed=True))
pgm.add_node(daft.Node("fi-1", r"$X_2$", 2, 2.5, observed=True))
pgm.add_node(daft.Node("fi", r"$X_3$", 3, 2.5, observed=True))
pgm.add_node(daft.Node("fi+1", r"$X_4$", 4, 2.5, observed=True))
pgm.add_node(daft.Node("fm", r"$X_N$", 5, 2.5, observed=True))
# Add in the edges.
pgm.add_edge("d1", "f1", head_length=0.08)
pgm.add_edge("d1", "fi-1", head_length=0.08)
pgm.add_edge("d1", "fi", head_length=0.08)
pgm.add_edge("d1", "fi+1", head_length=0.08)
pgm.add_edge("d1", "fm", head_length=0.08)
pgm.add_edge("di", "f1", head_length=0.08)
pgm.add_edge("di", "fi-1", head_length=0.08)
pgm.add_edge("di", "fi", head_length=0.08)
pgm.add_edge("di", "fi+1", head_length=0.08)
pgm.add_edge("di", "fm", head_length=0.08)
pgm.add_edge("dn", "f1", head_length=0.08)
pgm.add_edge("dn", "fi-1", head_length=0.08)
pgm.add_edge("dn", "fi", head_length=0.08)
pgm.add_edge("dn", "fi+1", head_length=0.08)
pgm.add_edge("dn", "fm", head_length=0.08)
return pgm
%%capture
pgm_unsupervised("images/pgm/unsupervised.png");
"""
Explanation: Example: Unsupervised Learning
Latent variables are fundamental to unsupervised and deep learning.
- Serve as a bottleneck
- Compute a compressed representation of data
End of explanation
"""
@pgm_render
def pgm_question1():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("c", "a", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm;
%%capture
pgm_question1("images/pgm/question1.png")
"""
Explanation: Other Latent Variable Models
Many other models in machine learning involve latent variables:
Neural Networks / Multilayer Perceptrons
Restricted Boltzmann Machines
Deep Belief Networks
Probabilistic PCA
Latent Variable Models: Complexity
Latent variable models exhibit emergent complexity.
- Although each conditional distribution is simple,
- The joint distribution is capable of modeling complex interactions.
However, latent variables make learning difficult.
- Inference is challenging in models with latent variables.
- They can introduce new dependencies between observed variables.
Break time
<img src="images/boxing_cat.gif"/>
Bayesian Networks
Part II: Inference, Learning, and d-Separation
Uses material from [Koller & Friedman 2009] Chapter 3, [MLAPP] Chapter 10, and [PRML] §8.2.1
Bayesian Networks: Terminology
Typically, our models will have
- Observed variables $X$
- Hidden variables $Z$
- Parameters $\theta$
Occasionally, we will distinguish between inference and learning.
Bayesian Networks: Inference
Inference: Estimate hidden variables $Z$ from observed variables $X$.
$$
P(Z | X,\theta) = \frac{P(X,Z | \theta)}{P(X|\theta)}
$$
Denominator $P(X|\theta)$ is sometimes called the probability of the evidence.
Occasionally we care only about a subset of the hidden variables, and marginalize out the rest.
Bayesian Networks: Learning
Learning: Estimate parameters $\theta$ from observed data $X$.
$$
P(\theta \mid X) = \sum_{z \in Z} P(\theta, z \mid X) = \sum_{z \in Z} P(\theta \mid z, X) P(z \mid X)
$$
To Bayesians, parameters are hidden variables, so inference and learning are equivalent.
Bayesian Networks: Probability Queries
In general, it is useful to compute $P(A|B)$ for arbitrary collections $A$ and $B$ of variables.
- Both inference and learning take this form.
To accomplish this, we must understand the independence structure of any given graphical model.
Review: Local Independencies
Every Bayesian Network $\G$ encodes a set $\I_\ell(\G)$ of local independence assumptions:
For each variable $X_k$, we have $(X_k \perp \NonDesc_\G(X_k) \mid \Parents_\G(X_k))$
Every node $X_k$ is conditionally independent of its nondescendants given its parents.
For arbitrary sets of variables, when does $(A \perp B \mid C)$ hold?
Review: I-Maps
If $P$ satisfies the independence assertions made by $\G$, we say that
- $\G$ is an I-Map for $P$
- or that $P$ satisfies $\G$.
Any distribution satisfying $\G$ shares common structure.
- We will exploit this structure in our algorithms
- This is what makes graphical models so powerful!
Review: Factorization Theorem
Last time, we proved that for any $P$ satisfying $\G$,
$$
P(X_1, \dots, X_N) = \prod_{k=1}^N P(X_k \mid \Parents_\G(X_k))
$$
If we understand independence structure, we can factorize arbitrary conditional distributions:
$$
P(A_1, \dots, A_n \mid B_1, \dots, B_m) = \;?
$$
Question 1: Is $(A \perp B)$?
End of explanation
"""
@pgm_render
def pgm_question2():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5,
observed=True))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("c", "a", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm
%%capture
pgm_question2("images/pgm/question2.png")
"""
Explanation: Answer 1: No!
No! $A$ and $B$ are not marginally independent.
- Note $C$ is not shaded, so we don't observe it.
In general,
$$
P(A,B) = \sum_{c \in C} P(A,B,c) = \sum_{c \in C} P(A|c)P(B|c)P(c) \neq P(A)P(B)
$$
Question 2: Is $(A \perp B \mid C)$?
End of explanation
"""
@pgm_render
def pgm_question3():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm
%%capture
pgm_question3("images/pgm/question3.png")
"""
Explanation: Answer 2: Yes!
Yes! $(A \perp B | C)$ follows from the local independence properties of Bayesian networks.
Every variable is conditionally independent of its nondescendants given its parents.
Observing $C$ blocks the path of influence from $A$ to $B$. Or, using factorization theorem:
$$
\begin{align}
P(A,B|C) & = \frac{P(A,B,C)}{P(C)} \\
& = \frac{P(C)P(A|C)P(B|C)}{P(C)} \\
& = P(A|C)P(B|C)
\end{align}
$$
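Both answers can be checked numerically for the fork structure $A \leftarrow C \rightarrow B$, using arbitrary (here random, purely hypothetical) conditional probability tables:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical CPTs for the fork A <- C -> B (all variables binary)
pC = np.array([0.3, 0.7])
pA_given_C = rng.random((2, 2))
pA_given_C /= pA_given_C.sum(axis=1, keepdims=True)   # rows sum to 1
pB_given_C = rng.random((2, 2))
pB_given_C /= pB_given_C.sum(axis=1, keepdims=True)

# Joint P(C,A,B) = P(C) P(A|C) P(B|C), indexed [c, a, b]
joint = pC[:, None, None] * pA_given_C[:, :, None] * pB_given_C[:, None, :]

# Given C = c, the conditional joint factorizes: P(A,B|c) = P(A|c) P(B|c)
for c in range(2):
    pAB_c = joint[c] / joint[c].sum()
    assert np.allclose(pAB_c, np.outer(pAB_c.sum(axis=1), pAB_c.sum(axis=0)))

# Marginalizing out C couples A and B: P(A,B) != P(A) P(B) in general
pAB = joint.sum(axis=0)
print(np.allclose(pAB, np.outer(pAB.sum(axis=1), pAB.sum(axis=0))))
```

The conditional factorization holds for every choice of tables, while the marginal factorization fails for generic ones.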
Question 3: Is $(A \perp B)$?
End of explanation
"""
@pgm_render
def pgm_question4():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5, observed=True))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("c", "b", head_length=0.08)
return pgm
%%capture
pgm_question4("images/pgm/question4.png")
"""
Explanation: Answer 3: No!
Again, $C$ is not given, so $A$ and $B$ are dependent.
Question 4: Is $(A \perp B \mid C)$?
End of explanation
"""
@pgm_render
def pgm_question5():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("b", "c", head_length=0.08)
return pgm
%%capture
pgm_question5("images/pgm/question5.png")
"""
Explanation: Answer 4: Yes!
Again, observing $C$ blocks influence from $A$ to $B$.
Every variable is conditionally independent of its nondescendants given its parents.
Question 5: Is $(A \perp B)$?
End of explanation
"""
@pgm_render
def pgm_question6():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("c", r"$C$", 2, 3.5, observed=True))
pgm.add_node(daft.Node("a", r"$A$", 1.3, 2.5))
pgm.add_node(daft.Node("b", r"$B$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("a", "c", head_length=0.08)
pgm.add_edge("b", "c", head_length=0.08)
return pgm
%%capture
pgm_question6("images/pgm/question6.png")
"""
Explanation: Answer 5: Yes!
Using the factorization rule,
$$
P(A,B,C) = P(A)P(B)P(C\mid A,B)
$$
Therefore, marginalizing out $C$,
$$
\begin{align}
P(A,B) & = \sum_{c \in C} P(A,B,c) \\
& = \sum_{c \in C} P(A)P(B) P(c \mid A,B) \\
& = P(A)P(B) \sum_{c \in C} P(c \mid A,B) = P(A)P(B)
\end{align}
$$
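The same marginalization can be verified numerically for the collider $A \rightarrow C \leftarrow B$, again with hypothetical random tables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CPTs for the collider A -> C <- B (all variables binary)
pA = np.array([0.4, 0.6])
pB = np.array([0.7, 0.3])
pC_given_AB = rng.random((2, 2, 2))
pC_given_AB /= pC_given_AB.sum(axis=-1, keepdims=True)  # normalize over C

# Joint P(A,B,C) = P(A) P(B) P(C|A,B), indexed [a, b, c]
joint = pA[:, None, None] * pB[None, :, None] * pC_given_AB

# Marginalizing out the collider recovers P(A,B) = P(A) P(B)
pAB = joint.sum(axis=2)
assert np.allclose(pAB, np.outer(pA, pB))

# But conditioning on C couples A and B: P(A,B|C=1) != P(A|C=1) P(B|C=1)
pAB_c1 = joint[:, :, 1] / joint[:, :, 1].sum()
print(np.allclose(pAB_c1, np.outer(pAB_c1.sum(axis=1), pAB_c1.sum(axis=0))))
```

This is the mirror image of the fork: independent marginally, dependent once the common child is observed.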
Question 6: Is $(A \perp B \mid C)$?
End of explanation
"""
@pgm_render
def pgm_bfg_1():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("G", r"$G$", 2, 3.5))
pgm.add_node(daft.Node("B", r"$B$", 1.3, 2.5))
pgm.add_node(daft.Node("F", r"$F$", 2.7, 2.5))
# Add in the edges.
pgm.add_edge("B", "G", head_length=0.08)
pgm.add_edge("F", "G", head_length=0.08)
return pgm;
%%capture
pgm_bfg_1("images/pgm/bfg-1.png")
"""
Explanation: Answer 6: No!
$A$ can influence $B$ via $C$.
$$
P(A,B | C) = \frac{P(A,B,C)}{P(C)} = \frac{P(A)P(B)P(C|A,B)}{P(C)}
$$
This does not factorize in general to $P(A|C)P(B|C)$.
Example: Battery, Fuel, and Gauge
Consider three binary random variables
- Battery $B$ is either charged $(B=1)$ or dead, $(B=0)$
- Fuel tank $F$ is either full $(F=1)$ or empty, $(F=0)$
- Fuel gauge $G$ either indicates full $(G=1)$ or empty, $(G=0)$
Assume $(B \perp F)$ with priors
- $P(B = 1) = 0.9$
- $P(F = 1) = 0.9$
Example: Battery, Fuel, and Gauge
Given the state of the fuel tank and the battery, the fuel gauge reads full with probabilities:
- $p(G = 1 \mid B = 1, F = 1) = 0.8$
- $p(G = 1 \mid B = 1, F = 0) = 0.2$
- $p(G = 1 \mid B = 0, F = 1) = 0.2$
- $p(G = 1 \mid B = 0, F = 0) = 0.1$
Example: Battery, Fuel, and Gauge
Without any observations, the probability of an empty fuel tank is
$$
P(F=0) = 1 - P(F = 1) = 0.1
$$
End of explanation
"""
@pgm_render
def pgm_bfg_2():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("G", r"$G$", 2, 3.5, offset=(0, 20), observed=True))
pgm.add_node(daft.Node("B", r"$B$", 1.3, 2.5, offset=(0, 20)))
pgm.add_node(daft.Node("F", r"$F$", 2.7, 2.5, offset=(0, 20)))
# Add in the edges.
pgm.add_edge("B", "G", head_length=0.08)
pgm.add_edge("F", "G", head_length=0.08)
return pgm;
%%capture
pgm_bfg_2("images/pgm/bfg-2.png");
"""
Explanation: Example: Empty Gauge
Now, suppose the gauge reads $G=0$. We have
$$
P(G=0) = \sum \limits_{B \in \{0, 1\}} \sum \limits_{F \in \{0, 1\}}
P(G = 0 \mid B, F) P(B) P(F) = 0.315
$$
Verify this!
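A quick numeric check of this sum (and of the posteriors derived below), using only the tables above:

```python
import numpy as np

# Priors (index 0 = dead/empty, 1 = charged/full)
pB = np.array([0.1, 0.9])
pF = np.array([0.1, 0.9])

# P(G=1 | B, F) from the table above; first index is B, second is F
pG1 = np.array([[0.1, 0.2],
                [0.2, 0.8]])

# P(G=0) = sum_{B,F} P(G=0 | B,F) P(B) P(F)
pG0 = ((1 - pG1) * np.outer(pB, pF)).sum()
print(pG0)            # ≈ 0.315

# P(G=0 | F=0) = sum_B P(G=0 | B, F=0) P(B)
pG0_F0 = ((1 - pG1)[:, 0] * pB).sum()
print(pG0_F0)         # ≈ 0.81

# Bayes' rule: P(F=0 | G=0)
print(pG0_F0 * pF[0] / pG0)   # ≈ 0.257

# Explaining away: additionally observing a dead battery, P(F=0 | G=0, B=0)
pG0_B0 = (1 - pG1)[0, :] * pF          # P(G=0 | B=0, F) P(F)
print(pG0_B0[0] / pG0_B0.sum())        # ≈ 0.111
```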
Example: Empty Gauge
End of explanation
"""
@pgm_render
def pgm_bfg_3():
pgm = daft.PGM([4, 4], origin=[0, 0])
# Nodes
pgm.add_node(daft.Node("G", r"$G$", 2, 3.5, offset=(0, 20), observed=True))
pgm.add_node(daft.Node("B", r"$B$", 1.3, 2.5, offset=(0, 20), observed=True))
pgm.add_node(daft.Node("F", r"$F$", 2.7, 2.5, offset=(0, 20)))
# Add in the edges.
pgm.add_edge("B", "G", head_length=0.08)
pgm.add_edge("F", "G", head_length=0.08)
return pgm;
%%capture
pgm_bfg_3("images/pgm/bfg-3.png")
"""
Explanation: Example: Empty Gauge
Now, we also have
$$
p(G = 0 \mid F = 0)
= \sum \limits_{B \in \{0, 1\}} p(G = 0 \mid B, F = 0) p(B) = 0.81
$$
Applying Bayes' Rule,
$$
\begin{align}
p(F = 0 \mid G = 0)
&= \frac{p(G = 0 \mid F = 0) p(F = 0)}{p(G = 0)} \\
&\approx 0.257 > p(F = 0) = 0.10
\end{align}
$$
Observing an empty gauge makes it more likely that the tank is empty!
Example: Empty Gauge, Dead Battery
Now, suppose we also observe a dead battery $B =0$. Then,
$$
\begin{align}
p(F = 0 \mid G = 0, B = 0)
&= \frac{p(G = 0 \mid B = 0, F = 0) p(F = 0)}{\sum_{F \in \{0, 1\}} p(G = 0 \mid B = 0, F) p(F)} \\
&\approx 0.111
\end{align}
$$
Example: Empty Gauge, Dead Battery
End of explanation
"""
|
unoebauer/public-astro-tools | jupyter/pcygni_tutorial.ipynb | mit | import matplotlib.pyplot as plt
"""
Explanation: Pcygni-Profile Calculator Tool Tutorial
This brief tutorial should give you a basic overview of the main features and capabilities of the Python P-Cygni line profile calculator, which is based on the Elementary Supernova (ES) model of Jeffery and Branch 1990.
Installation
Obtaining the tool
Head over to github and either clone my repository (https://github.com/unoebauer/public-astro-tools) or download the Python file directly.
Requisites
Python 2.7 and the following packages
numpy
scipy
astropy
matplotlib
(recommended) ipython
These are all standard Python packages and you should be able to install these with the package manager of your favourite distribution. Alternatively, you can use Anaconda/Miniconda. For this, you can use the requirements file shipped with the github repository:
conda-env create -f pcygni_env.yml
(Optional) Running this tutorial as a jupyter notebook
If you want to interactively use this jupyter notebook, you have to install jupyter as well (it is part of the requirements file and will be installed automatically when setting up the anaconda environment). Then you can start this notebook with:
jupyter notebook pcygni_tutorial.ipynb
Basic usage
The following Python code snippets demonstrate the basic use of the tool. Just execute the following lines in a python (preferably ipython) shell in the directory in which the Python tool is located:
End of explanation
"""
import pcygni_profile as pcp
"""
Explanation: Import the line profile calculator module
End of explanation
"""
profcalc = pcp.PcygniCalculator()
"""
Explanation: Create an instance of the Line Calculator, for now with the default parameters (check the source code for the default parameters)
End of explanation
"""
fig = profcalc.show_line_profile()
"""
Explanation: Calculate and illustrate line profile
End of explanation
"""
import astropy.units as units
import astropy.constants as csts
"""
Explanation: Advanced Uses
In the remaining part of this tutorial, we have a close look at the line calculator and investigate a few phenomena in more detail. As a preparation, we import astropy
End of explanation
"""
tmp = pcp.PcygniCalculator(t=3000 * units.s, vmax=0.01 * csts.c, vphot=0.001 * csts.c, tauref=1,
vref=5e7 * units.cm/units.s, ve=5e7 * units.cm/units.s,
lam0=1215.7 * units.AA, vdet_min=None, vdet_max=None)
"""
Explanation: Now, we have a look at the parameters of the line profile calculator. The following code line just shows all keyword arguments of the calculator and their default values:
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
for t in [1, 10, 100, 1000, 10000]:
tmp = pcp.PcygniCalculator(tauref=t)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"$\tau_{{\mathrm{{ref}}}} = {:f}$".format(t))
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda}/F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
"""
Explanation: t: time since explosion (default 3000 secs)
vmax: velocity at the outer ejecta edge (with t, can be turned into r) (1% c)
vphot: velocity, i.e. location, of the photosphere (0.1% c)
vref: reference velocity, used in the density law (500 km/s)
ve: another parameter for the density law (500 km/s)
tauref: reference optical depth of the line transition (at vref) (1)
lam0: rest frame natural wavelength of the line transition (1215.7 Angstrom)
vdet_min: inner location of the detached line-formation shell; if None, will be set to vphot (None)
vdet_max: outer location of the detached line-formation shell; if None, will be set to vmax (None)
Note that you have to supply astropy quantities (i.e. numbers with units) for all these parameters (except for the reference optical depth).
Varying the Line Strength - Line Saturation
To start, we investigate the effect of increasing the line strength.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
vmax = 0.01 * csts.c
for v in [0.5 * vmax, vmax, 2 * vmax]:
tmp = pcp.PcygniCalculator(tauref=1000, vmax=v)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"$v_{{\mathrm{{max}}}} = {:f}\,c$".format((v / csts.c).to("")))
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda} / F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
"""
Explanation: The stronger the line, the deeper the absorption trough and the stronger the emission peak become. At a certain point, the line "saturates", i.e. the profile does not become more prominent with increasing line strength, since all photons are already scattered.
Increasing the ejecta size
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
vphot = 0.001 * csts.c
for v in [0.5 * vphot, vphot, 2 * vphot]:
tmp = pcp.PcygniCalculator(tauref=1000, vphot=v)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"$v_{{\mathrm{{phot}}}} = {:f}\,c$".format((v / csts.c).to("")))
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda} / F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
"""
Explanation: Changing the size of the photosphere
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
tmp = pcp.PcygniCalculator(tauref=1000)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"no detached line-formation shell")
tmp = pcp.PcygniCalculator(tauref=1000, vdet_min=0.0025 * csts.c, vdet_max=0.0075 * csts.c)
x, y = tmp.calc_profile_Flam(npoints=500)
ax.plot(x.to("AA"), y, label=r"detached line-formation shell")
ax.set_xlabel(r"$\lambda$ [$\mathrm{\AA}$]")
ax.set_ylabel(r"$F_{\lambda} / F^{\mathrm{phot}}_{\lambda}$")
ax.legend(frameon=False)
"""
Explanation: Detaching the line forming region from the photosphere
Finally, we investigate what happens when the line does not form throughout the entire envelope but only in a detached shell within the ejecta.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/uhh/cmip6/models/sandbox-3/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-3', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields AOD Plus CCN
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
ethen8181/machine-learning | deep_learning/multi_label/nsw.ipynb | mit | # code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import time
import fasttext
import numpy as np
import pandas as pd
# prevent scientific notations
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,fasttext,scipy
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Approximate-Nearest-Neighborhood-Search-with-Navigable-Small-World" data-toc-modified-id="Approximate-Nearest-Neighborhood-Search-with-Navigable-Small-World-1"><span class="toc-item-num">1 </span>Approximate Nearest Neighborhood Search with Navigable Small World</a></span><ul class="toc-item"><li><span><a href="#Data-Preparation-and-Model" data-toc-modified-id="Data-Preparation-and-Model-1.1"><span class="toc-item-num">1.1 </span>Data Preparation and Model</a></span></li><li><span><a href="#Navigable-Small-World" data-toc-modified-id="Navigable-Small-World-1.2"><span class="toc-item-num">1.2 </span>Navigable Small World</a></span></li><li><span><a href="#Hnswlib" data-toc-modified-id="Hnswlib-1.3"><span class="toc-item-num">1.3 </span>Hnswlib</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
# download the data and un-tar it under the 'data' folder
# -P or --directory-prefix specifies which directory to download the data to
!wget https://dl.fbaipublicfiles.com/fasttext/data/cooking.stackexchange.tar.gz -P data
# -C specifies the target directory to extract an archive to
!tar xvzf data/cooking.stackexchange.tar.gz -C data
!head -n 3 data/cooking.stackexchange.txt
# train/test split
import os
from fasttext_module.split import train_test_split_file
from fasttext_module.utils import prepend_file_name
data_dir = 'data'
test_size = 0.2
input_path = os.path.join(data_dir, 'cooking.stackexchange.txt')
input_path_train = prepend_file_name(input_path, 'train')
input_path_test = prepend_file_name(input_path, 'test')
random_state = 1234
encoding = 'utf-8'
train_test_split_file(input_path, input_path_train, input_path_test,
test_size, random_state, encoding)
print('train path: ', input_path_train)
print('test path: ', input_path_test)
# train the fasttext model
fasttext_params = {
'input': input_path_train,
'lr': 0.1,
'lrUpdateRate': 1000,
'thread': 8,
'epoch': 15,
'wordNgrams': 1,
'dim': 80,
'loss': 'ova'
}
model = fasttext.train_supervised(**fasttext_params)
print('vocab size: ', len(model.words))
print('label size: ', len(model.labels))
print('example vocab: ', model.words[:5])
print('example label: ', model.labels[:5])
# model.get_input_matrix().shape
print('output matrix shape: ', model.get_output_matrix().shape)
model.get_output_matrix()
"""
Explanation: Approximate Nearest Neighborhood Search with Navigable Small World
Performing nearest neighbor search on embeddings has become a crucial process in many applications, such as similar image/text search. The ann benchmark contains benchmarks of various approximate nearest neighbor search algorithms/libraries, and in this document we'll take a look at one of them, the Navigable Small World graph.
Data Preparation and Model
For the embedding, we'll be training a fasttext multi-label text classification model ourselves, and using the output embedding for this example. The fasttext library has already been introduced in another post, hence we won't be going over it in detail. The readers can also swap out the data preparation and model section with the embedding of their liking.
End of explanation
"""
# we'll get one of the labels to find its nearest neighbors
label_id = 0
print(model.labels[label_id])
index_factors = model.get_output_matrix()
query_factors = model.get_output_matrix()[label_id]
query_factors.shape
"""
Explanation: Given the output matrix, we would like to compute each of its nearest neighbors using the compressed vectors.
For those that are more interested in using some other embeddings, replace the index_factors with the embedding, and query_factors with a random element from that set of embeddings, and the rest of the document should still function properly.
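Before introducing any graph structure, it can be useful to have an exact brute-force search as a ground truth to compare the approximate results against. Here's a minimal sketch (the helper name is our own):

```python
import numpy as np
from scipy.spatial import distance

def brute_force_knn(index_factors, query_factors, k=5):
    """Exact k nearest neighbors by scoring every cosine distance."""
    dist = distance.cdist(index_factors, query_factors.reshape(1, -1),
                          metric='cosine').ravel()
    # argsort is ascending, so the smallest distances come first
    indices = np.argsort(dist)[:k]
    return [(dist[i], i) for i in indices]
```

Calling brute_force_knn(index_factors, query_factors) returns (distance, index) pairs sorted from most to least similar, the same format the graph-based search we build later returns.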
End of explanation
"""
class Node:
"""
Node for a navigable small world graph.
Parameters
----------
idx : int
For uniquely identifying a node.
value : 1d np.ndarray
To access the embedding associated with this node.
neighborhood : set
For storing adjacent nodes.
References
----------
https://book.pythontips.com/en/latest/__slots__magic.html
https://hynek.me/articles/hashes-and-equality/
"""
__slots__ = ['idx', 'value', 'neighborhood']
def __init__(self, idx, value):
self.idx = idx
self.value = value
self.neighborhood = set()
def __hash__(self):
return hash(self.idx)
def __eq__(self, other):
return (
self.__class__ == other.__class__ and
self.idx == other.idx
)
from scipy.spatial import distance
def build_nsw_graph(index_factors, k):
n_nodes = index_factors.shape[0]
graph = []
for i, value in enumerate(index_factors):
node = Node(i, value)
graph.append(node)
for node in graph:
query_factor = node.value.reshape(1, -1)
# note that the following implementation is not the actual procedure that's
# used to find the k closest neighbors, we're just implementing a quick version,
# will come back to this later
# https://codereview.stackexchange.com/questions/55717/efficient-numpy-cosine-distance-calculation
# the smaller the cosine distance the more similar, thus the most
# similar item will be the first element after performing argsort
# since argsort by default sorts in ascending order
dist = distance.cdist(index_factors, query_factor, metric='cosine').ravel()
neighbors_indices = np.argsort(dist)[:k].tolist()
# insert bi-directional connection
node.neighborhood.update(neighbors_indices)
for i in neighbors_indices:
graph[i].neighborhood.add(node.idx)
return graph
k = 10
graph = build_nsw_graph(index_factors, k)
graph[0].neighborhood
"""
Explanation: Navigable Small World
We'll start off by formally defining the problem. k-nearest neighbor search is a problem where given a query object $q$ we need to find the $k$ closest objects from a fixed set of objects $O \in D$, where $D$ is the set of all possible objects at hand.
The idea behind navigable small world is to use a graph data structure $G(V, E)$ to represent these objects $O$, where every object $o_i$ is represented by a vertex/node $v_i$. The navigable small world graph structure is constructed by sequential addition of all elements. For every new element, we find the set of its closest neighbors using a variant of the greedy search algorithm; upon doing so, we then introduce a bidirectional connection between that set of neighbors and the incoming element.
Upon building the graph, searching for the closest objects to $q$ is very similar to adding objects to the graph, i.e. it involves traversing the graph to find the closest vertices/nodes using the same variant of the greedy search algorithm that's used when constructing the graph.
Another thing worth noting is that determining the closest neighbors depends on a distance function. As the algorithm doesn't make any strong assumption about the data, it can be used with any distance function of our liking. Here we'll be using the cosine distance as an illustration.
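As a quick sanity check on that choice, `scipy`'s cosine distance is simply one minus the cosine similarity, matching the textbook formula:

```python
import numpy as np
from scipy.spatial import distance

a = np.array([1.0, 2.0, 3.0])
b = np.array([3.0, 2.0, 1.0])

# cosine distance = 1 - (a . b) / (||a|| * ||b||)
manual = 1 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.isclose(manual, distance.cosine(a, b)))  # True
```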
End of explanation
"""
import heapq
import random
from typing import List, Tuple
def nsw_knn_search(
graph: List[Node],
query: np.ndarray,
k: int=5,
m: int=50) -> Tuple[List[Tuple[float, int]], float]:
"""
Performs knn search using the navigable small world graph.
Parameters
----------
graph :
Navigable small world graph from build_nsw_graph.
query : 1d np.ndarray
Query embedding that we wish to find the nearest neighbors.
k : int
Number of nearest neighbors returned.
m : int
The recall set will be chosen from m different entry points.
Returns
-------
The list of nearest neighbors (distance, index) tuple.
and the average number of hops that was made during the search.
"""
result_queue = []
visited_set = set()
hops = 0
for _ in range(m):
# random entry point from all possible candidates
entry_node = random.randint(0, len(graph) - 1)
entry_dist = distance.cosine(query, graph[entry_node].value)
candidate_queue = []
heapq.heappush(candidate_queue, (entry_dist, entry_node))
temp_result_queue = []
while candidate_queue:
candidate_dist, candidate_idx = heapq.heappop(candidate_queue)
if len(result_queue) >= k:
# if candidate is further than the k-th element from the result,
# then we would break the repeat loop
current_k_dist, current_k_idx = heapq.nsmallest(k, result_queue)[-1]
if candidate_dist > current_k_dist:
break
for friend_node in graph[candidate_idx].neighborhood:
if friend_node not in visited_set:
visited_set.add(friend_node)
friend_dist = distance.cosine(query, graph[friend_node].value)
heapq.heappush(candidate_queue, (friend_dist, friend_node))
heapq.heappush(temp_result_queue, (friend_dist, friend_node))
hops += 1
result_queue = list(heapq.merge(result_queue, temp_result_queue))
return heapq.nsmallest(k, result_queue), hops / m
results = nsw_knn_search(graph, query_factors, k=5)
results
"""
Explanation: In the original paper, the author used the term "friends" of vertices that share an edge, and "friend list" of vertex $v_i$ for the list of vertices that share a common with the vertex $v_i$.
We'll now introduce the variant of greedy search that the algorithm uses. The pseudocode looks like the following:
```
greedy_search(q: object, v_entry_point: object):
v_curr = v_entry_point
    d_min = dist_func(q, v_curr)
v_next = None
for v_friend in v_curr.get_friends():
d_friend = dist_func(q, v_friend)
if d_friend < d_min:
d_min = d_friend
v_next = v_friend
if v_next is None:
return v_curr
else:
return greedy_search(q, v_next)
```
Starting from some entry point (chosen at random at the beginning), the greedy search algorithm computes the distance from the input query to each of the current vertex's friend vertices. If the distance between the query and a friend vertex is smaller than the current one, the algorithm moves to that vertex and repeats the process until it can't find a friend vertex that is closer to the query than the current vertex.
This approach can of course lead to a local minimum, i.e. the closest vertex/object determined by this greedy search algorithm is not the actual true closest element to the incoming query. Hence, the idea to extend this is to pick a series of entry points, denoted by m in the pseudocode below, and return the best results from all those greedy searches. With each additional search, the chances of not finding the true nearest neighbors should decrease exponentially.
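For concreteness, the greedy routine above can also be sketched directly in Python. This helper is purely illustrative and is not used by the actual implementation; to keep it self-contained, the graph is represented as a plain adjacency dict plus a matrix of vectors, and an iterative loop replaces the recursion:

```python
import numpy as np
from scipy.spatial import distance

def greedy_search(vectors, adjacency, query, entry_point):
    """Move to the closest friend until no friend improves
    on the current vertex, then return (vertex, distance)."""
    v_curr = entry_point
    d_min = distance.cosine(query, vectors[v_curr])
    while True:
        v_next = None
        for v_friend in adjacency[v_curr]:
            d_friend = distance.cosine(query, vectors[v_friend])
            if d_friend < d_min:
                d_min = d_friend
                v_next = v_friend
        if v_next is None:
            return v_curr, d_min
        v_curr = v_next
```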
The key idea behind the knn search is that, given a random entry point, it iterates on the vertices closest to the query that we've never previously visited. The algorithm keeps greedily exploring the neighborhood until the $k$ nearest elements can't be improved upon, and then this process repeats for the next random entry point.
```
knn_search(q: object, m: int, k: int):
queue[object] candidates, temp_result, result
set[object] visited_set
for i in range(m):
put random entry point in candidates
temp_result = None
repeat:
        get element c, closest to q, from candidates
remove c from candidates
if c is further than the k-th element from result:
break repeat
for every element e from friends of c:
if e is not visited_set:
add e to visited_set, candidates, temp_result
add objects from temp_result to result
return best k elements from result
```
We'll be using the heapq module as our priority queue.
End of explanation
"""
def build_nsw_graph(index_factors: np.ndarray, k: int) -> List[Node]:
n_nodes = index_factors.shape[0]
graph = []
for i, value in enumerate(index_factors):
node = Node(i, value)
if i > k:
neighbors, hops = nsw_knn_search(graph, node.value, k)
neighbors_indices = [node_idx for _, node_idx in neighbors]
else:
neighbors_indices = list(range(i))
# insert bi-directional connection
node.neighborhood.update(neighbors_indices)
for i in neighbors_indices:
graph[i].neighborhood.add(node.idx)
graph.append(node)
return graph
k = 10
index_factors = model.get_output_matrix()
graph = build_nsw_graph(index_factors, k)
graph[0].neighborhood
results = nsw_knn_search(graph, query_factors, k=5)
results
"""
Explanation: Now that we've implemented the knn search algorithm, we can go back and modify the graph building function and use it to implement the actual way of building the navigable small world graph.
End of explanation
"""
import hnswlib
def build_hnsw(factors, space, ef_construction, M):
# Declaring index
max_elements, dim = factors.shape
hnsw = hnswlib.Index(space, dim) # possible options for space are l2, cosine or ip
# Initing index - the maximum number of elements should be known beforehand
hnsw.init_index(max_elements, M, ef_construction)
# Element insertion (can be called several times)
hnsw.add_items(factors)
return hnsw
space = 'cosine'
ef_construction = 200
M = 24
start = time.time()
hnsw = build_hnsw(index_factors, space, ef_construction, M)
build_time = time.time() - start
build_time
k = 5
# Controlling the recall by setting ef, should always be > k
hnsw.set_ef(70)
# retrieve the top-n search neighbors
labels, distances = hnsw.knn_query(query_factors, k=k)
print(labels)
# find the nearest neighbors and "translate" it to the original labels
[model.labels[label] for label in labels[0]]
"""
Explanation: Hnswlib
We can check the results with a more robust variant of the algorithm, Hierarchical Navigable Small World (HNSW), provided by hnswlib. The idea is very similar to the skip list data structure, except we now replace the linked list with navigable small world graphs. Although we never formally introduced the hierarchical variant, hopefully the major parameters of the algorithm should look familiar.
ef: The algorithm searches for the ef closest neighbors to the inserted element $q$; this was set to $k$ in the original navigable small world paper. The ef closest neighbors then become the candidate/recall set for inserting the bidirectional edges during the insertion/construction phase (where the parameter is termed ef_construction) or, after construction is done, the candidate/recall set for finding the actual top k closest elements to the input query object.
M: After choosing the ef_construction candidate objects, edges are created only between the entry point and the M closest ones, i.e. it controls the number of bi-directional links.
The actual process of constructing HNSW and doing knn search is a bit more involved compared to vanilla navigable small world. We won't be getting into all the gory details in this post.
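A common way to sanity check settings such as ef and M is to measure recall against exact brute-force results. A minimal helper for that (our own sketch; it only assumes two collections of neighbor ids, e.g. hnswlib's labels versus an exact top-k):

```python
import numpy as np

def recall_at_k(approx_labels, exact_labels):
    """Fraction of the exact top-k neighbors that the
    approximate search also returned."""
    approx = set(np.asarray(approx_labels).ravel().tolist())
    exact = set(np.asarray(exact_labels).ravel().tolist())
    return len(approx & exact) / len(exact)
```

For example, recall_at_k(labels, exact_labels), where labels comes from hnsw.knn_query; values close to 1.0 indicate the approximate index rarely misses true neighbors.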
End of explanation
"""
mauriciogtec/PropedeuticoDataScience2017 | Proyectos/Proyecto1.ipynb | mit | import numpy as np
"""
Explanation: Creating a Linear Algebra System
In this assignment you will be guided step by step through building an array system in Python for performing linear algebra operations.
But first... (FAQ)
How is this done in practice? In practice, well-tested functional packages are used, in particular numpy, which contains all the tools needed for numerical computing in Python.
Why do this assignment, then? Python is a language designed for object-oriented programming. By doing this assignment you will gain experience with this kind of programming, which will allow you to create your own objects in the future when you need them, and to better understand how numpy and, in general, all of Python's tools work. In addition, in this assignment you will also learn how to use numpy along the way.
How do I get started with numpy? In this assignment we will need to import the numpy library, which contains functions and classes that are not part of base Python. Remember that Python is not a scientific computing language but a general-purpose programming language. It is not designed for doing linear algebra; however, it has extensive, well-tested libraries that make it possible. Anaconda is a Python distribution that, besides installing Python, includes several scientific computing libraries such as numpy. If you installed Python separately you will also need to install numpy manually.
Before starting the assignment you should be able to run:
End of explanation
"""
x = [1,2,3]
y = [4,5,6]
x + y
"""
Explanation: What the code above does is bind the name np to all the tools of the numpy library. We can now call numpy functions as np.<numpy_fun>. The name np is optional; you can change it, but you will then need that name to access numpy functions as <new_name>.<numpy_fun>. Another option is to just write import numpy, in which case the functions are called as numpy.<numpy_fun>. To learn more about the module system you can check https://docs.python.org/2/tutorial/modules.html
I. Creating an Array class
Python natively includes lists (e.g. x = [1,2,3]). The problem is that lists are not numerical computing tools; Python does not even understand a sum of them as such. In fact, it interprets the sum as concatenation:
End of explanation
"""
B = np.array([[1,2,3], [4,5,6]]) # after having run import numpy as np
"""
Explanation: We are going to build an Array class that covers both matrices and vectors. From a computational point of view, a vector is a one-column matrix. In class we saw that it is convenient to think of matrices as transformations of vectors; computationally, however, since the addition and multiplication rules are similar, it is convenient to think of both as arrays, which is the traditional name in programming.
Computationally, what is an array? Technically, it is a list of lists, all of the same size, each one representing a row (rows or columns is a matter of choice; we will use rows because that is what numpy does, although I prefer columns). For example, the list of lists
[[1,2,3],[4,5,6]]
corresponds to the matrix
$$
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
$$
The numpy way
End of explanation
"""
B + 2*B # Python knows how to add arrays and multiply them by scalars, as in linear algebra
"""
Explanation: It is possible to add matrices and multiply them by scalars
End of explanation
"""
np.matmul(B.transpose(), B) # B^t*B
"""
Explanation: numpy matrices can be multiplied with numpy's matmul function
End of explanation
"""
B[1,1]
"""
Explanation: numpy arrays can be accessed with indices and slices
A specific entry:
End of explanation
"""
B[1,:]
"""
Explanation: An entire row:
End of explanation
"""
B[:,2]
"""
Explanation: An entire column:
End of explanation
"""
B[0:2,0:2]
"""
Explanation: A sub-block (note that a slice n:m means n,n+1,...,m-1):
End of explanation
"""
B.shape
"""
Explanation: In numpy we can get the dimensions of an array with the shape attribute
End of explanation
"""
vec = np.array([1,2,3])
print(vec)
"""
Explanation: Numpy is smart about handling plain lists as vectors
End of explanation
"""
class Array:
    "A minimal class for linear algebra"
    def __init__(self, list_of_rows):
        "Constructor"
        self.data = list_of_rows
        self.shape = (len(list_of_rows), len(list_of_rows[0]))

A = Array([[1,2,3], [4,5,6]])
A.__dict__ # the hidden __dict__ field gives access to an object's class attributes
A.data
A.shape
"""
Explanation: Starting from scratch...
End of explanation
"""
Array([[1,2,3], [4,5,6]])
print(Array([[1,2,3], [4,5,6]]))
np.array([[1,2,3], [4,5,6]])
print(np.array([[1,2,3], [4,5,6]]))
"""
Explanation: The data field of an Array stores the array's list of lists. We need to implement a few methods for it to work as a linear algebra class:
A method for printing a matrix in a nicer way
Validator. A method to check that the list of lists is well formed (rows of the same size and numeric inner lists)
Indexing Making sense of expressions like A[i,j]
Initializing an empty matrix of zeros This method is very useful for preallocating space to store new matrices
Transposition B.transpose()
Addition A + B
Scalar and matrix multiplication 2 * A and A*B
Vectors (Optional)
With this it would be possible to do linear algebra
Special class methods...
To do this we can use the special class methods __getitem__, __setitem__, __add__, __mul__, __str__. In theory everything could be done without these special methods, but, for example, it is much nicer to write A[i,j] than A.get(i,j), and A[i,j] = newval than A.setitem(i,j,newval).
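As a quick illustration (on a toy class invented for this note, not the Array of this assignment), here is how a couple of these special methods hook Python's syntax to ordinary method calls:

```python
# Toy class (hypothetical, not the Array of this assignment) showing how
# special methods back ordinary Python syntax.
class Pair:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __getitem__(self, idx):   # enables p[i]
        return (self.a, self.b)[idx]
    def __add__(self, other):     # enables p + q
        return Pair(self.a + other.a, self.b + other.b)
    def __repr__(self):           # enables a readable display
        return "Pair(%r, %r)" % (self.a, self.b)

p = Pair(1, 2) + Pair(10, 20)
print(p)     # Pair(11, 22)
print(p[0])  # 11
```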
1. A method for nicer printing...
We need to add a printing method. Note that a numpy array prints nicely compared to ours:
End of explanation
"""
class TestClass:
def __init__(self):
pass # this means do nothing in Python
def say_hi(self):
print("Hey, I am just a normal method saying hi!")
def __repr__(self):
return "I am the special class method REPRESENTING a TestClass without printing"
def __str__(self):
return "I am the special class method for explicitly PRINTING a TestClass object"
x = TestClass()
x.say_hi()
x
print(x)
"""
Explanation: Why these differences? Python secretly looks for a method called __repr__ when an object is displayed without being explicitly printed, and __str__ when it is explicitly printed with print. For example:
End of explanation
"""
some_list = [1,2,3, 4, 5, 6]
[i**2 for i in some_list] # Elevar al cuadrado con listas de comprehension
"""
Explanation: <span style = "color: red"> EXERCISE 1 </span>
Write the __repr__ and __str__ methods for the Array class so that it prints legibly, like numpy arrays do.
2. A validator
Ok, this is the boring part... I will help you a bit. But not just anything should be an array: all rows should be the same size, right? Also, the rows must contain only numeric entries... Finally, when instead of a list of lists there is just a flat list, it is convenient to treat it as a vector (a column... that is why I prefer filling by columns, but rows are the Python way...).
All of this can be achieved by improving the validator a bit.
A side note: to build the validator I am going to use one of Python's most powerful tools, list comprehensions. They are an easy and fast way to iterate in a single command. An example:
End of explanation
"""
class Array:
    "A minimal class for linear algebra"
    def __init__(self, list_of_rows):
        "Constructor and validator"
        # get dimensions
        self.data = list_of_rows
        nrow = len(list_of_rows)
        # ___vector case: reshape correctly
        if not isinstance(list_of_rows[0], list):
            nrow = 1
            self.data = [[x] for x in list_of_rows]
        # now the column count is right even for a vector
        ncol = len(self.data[0])
        self.shape = (nrow, ncol)
        # validate that all rows have the correct size
        if any([len(r) != ncol for r in self.data]):
            raise Exception("Rows must all be the same size")
    def __repr__(self):
        "Exercise"
        pass
    def __str__(self):
        "Exercise"
        pass
Array([[1,2,3], [4,5]])
vec = Array([1,2,3])
vec.data
"""
Explanation: In the validator I use a list comprehension to verify that all rows have the same size. I also raise an error with the raise statement if the condition is not met. This is advanced usage, so if you do not have much Python experience, do not worry too much about the details.
Checking that all entries are numeric is left as an optional exercise... it is not worth dwelling on it too much...
End of explanation
"""
A = Array([[1,2], [3,4]])
A[0,0]
A[0,0] = 8
"""
Explanation: 3. Indexing and item assignment
It would take several lines of code to implement indexing/slicing as complex as numpy's, but we can provide a simple version...
We want the following expressions to make sense:
```python
A = Array([[1,2],[3,4]])
A[0,0]
A[0,0] = 8
```
For the moment we get errors
End of explanation
"""
class Array:
    "A minimal class for linear algebra"
    def __init__(self, list_of_rows):
        "Constructor and validator"
        # get dimensions
        self.data = list_of_rows
        nrow = len(list_of_rows)
        # ___vector case: reshape correctly
        if not isinstance(list_of_rows[0], list):
            nrow = 1
            self.data = [[x] for x in list_of_rows]
        # now the column count is right even for a vector
        ncol = len(self.data[0])
        self.shape = (nrow, ncol)
        # validate that all rows have the correct size
        if any([len(r) != ncol for r in self.data]):
            raise Exception("Rows must all be the same size")
    def __getitem__(self, idx):
        return self.data[idx[0]][idx[1]]
A = Array([[1,2],[3,4]])
A[0,1]
"""
Explanation: To be able to access entries by index, we add a __getitem__ method.
End of explanation
"""
np.zeros((3,6))
"""
Explanation: <span style = "color: red"> EXERCISE 2 </span>
Write the __setitem__ method so that the code A[i,j] = new_value changes the value of entry (i,j) of the array. The skeleton of the function is
```python
class Array:
    #
    #
    def __setitem__(self, idx, new_value):
        "Exercise"
        #
```
4. Initializing a matrix of zeros
This is not a method of the class; rather, it is better as a function that creates an all-zeros object of the class. Come on! It is the easiest exercise, I leave it to you. We want the result to be similar to numpy's zeros function:
End of explanation
"""
np.array([[1,2], [3,4]]).transpose()
"""
Explanation: <span style = "color: red"> EXERCISE 3 </span>
Implement a zeros function to create "empty" arrays:
```python
def zeros(shape):
    "Implement me please"
```
Hint: you will find list comprehensions useful; for example, the code [0. for x in range(5)] creates a list of 5 zeros.
Implement a function eye(n) that creates the $n\times n$ identity matrix (that is, the matrix with zeros everywhere and ones on the diagonal). The name eye is traditional in linear algebra software, although it is not very intuitive.
5. Transposition
Another very easy exercise! It should work the same as numpy!
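One possible sketch of both helpers (one of many valid implementations; these build plain lists of lists, which you would then wrap in Array in your own solution):

```python
# Possible sketches: build the list of lists directly with comprehensions.
# These return plain lists so the sketch stays self-contained; wrap the
# result in Array(...) in your own solution.
def zeros_data(shape):
    nrow, ncol = shape
    return [[0. for _ in range(ncol)] for _ in range(nrow)]

def eye_data(n):
    return [[1. if i == j else 0. for j in range(n)] for i in range(n)]

print(zeros_data((2, 3)))  # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(eye_data(2))         # [[1.0, 0.0], [0.0, 1.0]]
```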
End of explanation
"""
"hola " + "tu"
[1,2,3] + [2,3,4]
np.array([1,2,3]) + np.array([2,3,4])
np.array([1,2,3]) + 10 # Broadcasted sum, es muy util
class Array:
    "A minimal class for linear algebra"
    def __init__(self, list_of_rows):
        "Constructor and validator"
        # get dimensions
        self.data = list_of_rows
        nrow = len(list_of_rows)
        # ___vector case: reshape correctly
        if not isinstance(list_of_rows[0], list):
            nrow = 1
            self.data = [[x] for x in list_of_rows]
        # now the column count is right even for a vector
        ncol = len(self.data[0])
        self.shape = (nrow, ncol)
        # validate that all rows have the correct size
        if any([len(r) != ncol for r in self.data]):
            raise Exception("Rows must all be the same size")
    def __add__(self, other):
        "Time to add"
        if isinstance(other, Array):
            if self.shape != other.shape:
                raise Exception("Dimensions differ!")
            rows, cols = self.shape
            newArray = Array([[0. for c in range(cols)] for r in range(rows)])
            for r in range(rows):
                for c in range(cols):
                    newArray.data[r][c] = self.data[r][c] + other.data[r][c]
            return newArray
        elif isinstance(other, (int, float, complex)): # in case the right-hand side is just a number
            rows, cols = self.shape
            newArray = Array([[0. for c in range(cols)] for r in range(rows)])
            for r in range(rows):
                for c in range(cols):
                    newArray.data[r][c] = self.data[r][c] + other
            return newArray
        else:
            return NotImplemented # a special return value used in these methods
A = Array([[1,2], [3,4]])
B = Array([[5,6], [7,8]])
C = A + B
C.data
D = A + 10
D.data
"""
Explanation: <span style = "color: red"> EXERCISE 4 </span>
Implement the transposition function:
```python
class Array:
    ###
    ###
    ###
    def transpose(self):
        "Implement me :)"
        ###
```
6. Addition
Ok, here we reach the first hard part! Pay close attention, because you are going to implement multiplication!!!!
Many classes have a notion of addition; as we saw, addition of Python lists and strings is concatenation, but addition of Arrays must be entry-by-entry, as in numpy arrays. How do we achieve that? By defining an __add__ method that receives self and other as arguments:
End of explanation
"""
Array([[1,2], [3,4]]) + Array([[5,6, 5], [7,8,3]])
"""
Explanation: HONESTLY, IT COULD NOT BE COOLER!
Now let's see the error when the dimensions do not match.
End of explanation
"""
class Vector(Array): # declares that Vector is a kind of Array
    def __init__(self, list_of_numbers):
        self.vdata = list_of_numbers
        list_of_rows = [[x] for x in list_of_numbers]
        return Array.__init__(self, list_of_rows)
    def __repr__(self):
        return "Vector(" + str(self.vdata) + ")"
    def __str__(self):
        return str(self.vdata)
    def __add__(self, other):
        new_arr = Array.__add__(self, other)
        return Vector([x[0] for x in new_arr.data])
Vector([1,2,3]).__dict__
Vector([1,2,3])
Vector([1,2,3]) + Vector([5,-2,0])
Vector([1,2,3]) + 10
"""
Explanation: <span style = "color: red"> EXERCISE 5 </span>
(hard) In our Array class the expression A + 1 makes sense for an Array A; however, the reversed expression fails. For example
```python
1 + Array([[1,2], [3,4]])
```
raises an error. Investigate how to implement the special class method __radd__ to solve this problem.
Our addition method does not know how to subtract; implement the class method __sub__, similar to addition, to be able to compute expressions like A - B for A and B arrays or numbers.
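A common pattern for `__radd__` (shown here on a minimal stand-alone class invented for illustration, not a full solution for Array) is to delegate to `__add__`, since this addition is commutative:

```python
# Minimal stand-alone illustration (not the full Array class): when
# addition is commutative, __radd__ can simply delegate to __add__.
class Box:
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        other_value = other.value if isinstance(other, Box) else other
        return Box(self.value + other_value)
    def __radd__(self, other):  # called for expressions like 1 + Box(...)
        return self.__add__(other)

print((1 + Box(41)).value)  # 42
```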
7. Matrix multiplication
Now for your hardest and most important exercise. We want A * B to be matrix multiplication and a*A to be scalar multiplication when a is a number.
IMPORTANT NOTE!!!! This is the first thing that is not implemented exactly as in numpy, but doing it this way is much nicer for linear algebra. Numpy, like R matrices, assumes that multiplication should be entry-by-entry, like addition. Other languages designed for scientific computing, such as Julia and Matlab, assume that matrix multiplication is the default.
<span style = "color: red"> EXERCISE 6 </span>
Implement the __mul__ and __rmul__ functions to do matrix (and scalar) multiplication. Hint: understand and modify the addition code from the previous section.
8. Vectors (inheritance!!!)
The Array class already does everything needed for linear algebra, but it is a bit awkward for working with vectors. We would like a Vector class that inherits the behavior of Arrays but adds conveniences for working with vectors. This is the concept of "inheritance" in object-oriented programming. The power of inheritance is that it lets us work with more specialized objects, which can have fields or methods beyond those of their parent class while inheriting its behavior. A child class can also override methods of its parents.
End of explanation
"""
|
sujitpal/polydlot | src/tf-serving/02b-model-serializer.ipynb | apache-2.0 | from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import utils, tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils import build_signature_def, predict_signature_def
from tensorflow.contrib.session_bundle import exporter
import keras.backend as K
from keras.models import load_model
import os
import shutil
K.set_learning_phase(0)
"""
Explanation: Serialize native Keras model for Tensorflow Serving
Code adapted from discussion about this in Tensorflow Serving Issue 310, specifically the recipe suggested by @tspthomas.
End of explanation
"""
DATA_DIR = "../../data"
EXPORT_DIR = os.path.join(DATA_DIR, "tf-export")
MODEL_NAME = "keras-mnist-fcn"
MODEL_VERSION = 1
MODEL_BIN = os.path.join(DATA_DIR, "{:s}-best.h5".format(MODEL_NAME))
EXPORT_PATH = os.path.join(EXPORT_DIR, MODEL_NAME)
model = load_model(MODEL_BIN)
"""
Explanation: Load model
The model we will use is the best checkpoint produced by the corresponding Keras MNIST FCN training notebook.
End of explanation
"""
shutil.rmtree(EXPORT_PATH, True)
full_export_path = os.path.join(EXPORT_PATH, str(MODEL_VERSION))
builder = saved_model_builder.SavedModelBuilder(full_export_path)
signature = predict_signature_def(inputs={"images": model.input},
outputs={"scores": model.output})
with K.get_session() as sess:
builder.add_meta_graph_and_variables(sess=sess,
tags=[tag_constants.SERVING],
signature_def_map={"predict": signature})
builder.save()
"""
Explanation: Export model
The resulting exported model should have the following layout under the export directory given by EXPORT_DIR.
.
└── keras-mnist-fcn
└── 1
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
End of explanation
"""
|
jlawman/jlawman.github.io | content/sklearn/Walkthrough - Implementing the Random Forest Classifier for the First Time.ipynb | mit | #Import dataset
from sklearn.datasets import load_iris
iris = load_iris()
"""
Explanation: Implementing the Random Forest Classifier from sci-kit learn
1. Import dataset
This tutorial uses the iris dataset (https://en.wikipedia.org/wiki/Iris_flower_data_set) which comes preloaded with sklearn.
End of explanation
"""
#Import train_test_split
from sklearn.model_selection import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(iris.data,iris.target,test_size=0.2,random_state=1)
"""
Explanation: 2. Prepare training and testing data
Each flower in this dataset is described by the following features and labels:
* features - measurements of the flower petals and sepals
* labels - the flower species (setosa, versicolor, or virginica) represented as a 0, 1, or 2.
Our train_test_split function will separate the data as follows:
* (features_train, labels_train) - 80% of the data prepared for training
* (features_test, labels_test) - 20% of the data prepared for making our predictions and evaluating our model
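The 80/20 split idea can be sketched with the standard library alone (a toy illustration, not sklearn's actual implementation, which additionally supports stratification and works on arrays):

```python
import random

# Toy sketch of a train/test split: shuffle indices so features and
# labels stay aligned, then cut off the first test_size fraction.
def toy_train_test_split(features, labels, test_size=0.2, seed=1):
    rng = random.Random(seed)
    indices = list(range(len(features)))
    rng.shuffle(indices)
    n_test = int(len(features) * test_size)
    test_idx, train_idx = indices[:n_test], indices[n_test:]
    return ([features[i] for i in train_idx],
            [features[i] for i in test_idx],
            [labels[i] for i in train_idx],
            [labels[i] for i in test_idx])

X = [[float(i)] for i in range(10)]
y = [i % 2 for i in range(10)]
X_tr, X_te, y_tr, y_te = toy_train_test_split(X, y)
print(len(X_tr), len(X_te))  # 8 2
```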
End of explanation
"""
#Import classifier
from sklearn.ensemble import RandomForestClassifier
#Create an instance of the RandomForestClassifier
rfc = RandomForestClassifier()
#Fit our model to the training features and labels
rfc.fit(features_train,labels_train)
"""
Explanation: 3. Create and fit the Random Forest Classifier
This tutorial uses the RandomForestClassifier model for our predictions, but you can experiment with other classifiers. To do so, import another classifier and replace the relevant code in this section.
End of explanation
"""
rfc_predictions = rfc.predict(features_test)
"""
Explanation: 4. Make Predictions using Random Forest Classifier
End of explanation
"""
print(rfc_predictions)
"""
Explanation: Understanding our predictions
Our predictions will be an array of 0's, 1's, and 2's, depending on which flower our algorithm believes each set of measurements to represent.
End of explanation
"""
print(features_test[0])
"""
Explanation: To interpret this, consider the first set of measurements in features_test:
End of explanation
"""
print(rfc_predictions[0])
"""
Explanation: Our model believes that these measurements correspond to a setosa iris (label 0).
End of explanation
"""
print(labels_test[0])
"""
Explanation: In this case, our model is correct, since the true label indicates that this was a setosa iris (label 0).
End of explanation
"""
#Import pandas to create the confusion matrix dataframe
import pandas as pd
#Import classification_report and confusion_matrix to evaluate our model
from sklearn.metrics import classification_report, confusion_matrix
"""
Explanation: 5. Evaluate our model
For this section we will import two metrics from sklearn: confusion_matrix and classification_report. They will help us understand how well our model did.
End of explanation
"""
#Create a dataframe with the confusion matrix
confusion_df = pd.DataFrame(confusion_matrix(labels_test, rfc_predictions),
columns=["Predicted " + name for name in iris.target_names],
index = iris.target_names)
confusion_df
"""
Explanation: As seen in the confusion matrix below, most predictions are accurate, but our model misclassified one specimen of versicolor (our model thought that it was virginica).
End of explanation
"""
print(classification_report(labels_test,rfc_predictions))
"""
Explanation: As seen in the classification report below, our model has 97% precision, recall, and accuracy.
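To make these metrics concrete, here is how precision and recall fall out of raw counts for a single class (a small illustrative computation with hypothetical counts, not the exact numbers of this run):

```python
# Precision and recall from raw counts for one class (binary view).
# tp/fp/fn are hypothetical counts chosen for illustration.
tp, fp, fn = 9, 1, 0

precision = tp / (tp + fp)  # of all predicted positives, fraction correct
recall = tp / (tp + fn)     # of all actual positives, fraction found

print(precision, recall)  # 0.9 1.0
```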
End of explanation
"""
|
Naereen/notebooks | Efficient_sampling_from_a_Binomial_distribution.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%load_ext cython
%load_ext watermark
%watermark -a "Lilian Besson (Naereen)" -i -v -p numpy,matplotlib,cython
"""
Explanation: Table of Contents
- Bernoulli and binomial distribution
  - Requirements
  - A naive generator
  - The generator included in numpy.random
  - An efficient generator using the inverse transform method
    - Explicit computation of the probabilities
    - First function using the inversion method
    - Simplified code of the inversion method
    - In Cython
  - Numerical experiments to check time cost of the different versions
  - Checking that sampling from $Bin(n,p)$ requires a time $\Omega(n)$
  - Conclusion
# Bernoulli and binomial distribution
- References: [Bernoulli distribution on Wikipedia](https://en.wikipedia.org/wiki/Bernoulli_distribution) and [Binomial distribution on Wikipedia](https://en.wikipedia.org/wiki/Binomial_distribution#Generating_binomial_random_variates).
The Bernoulli distribution of mean $p\in[0,1]$ is defined as the distribution on $\{0,1\}$ such that $\mathbb{P}(X=1) = p$ and $\mathbb{P}(X=0) = 1-p$.
If $X$ follows a Binomial distribution of mean $p\in[0,1]$ and $n$ samples, $X$ is defined as the sum of $n$ independent and identically distributed (iid) samples from a Bernoulli distribution of mean $p$, that is $X\in\{0,\dots,n\}$ ($X\in\mathbb{N}$) and $\forall k\in\{0,\dots,n\}, \mathbb{P}(X=k) = {n \choose k} p^k (1-p)^{n-k}$.
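As a quick sanity check on this definition, the probabilities $\mathbb{P}(X=k)$ must sum to $1$ over $k\in\{0,\dots,n\}$; a minimal check using only the standard library (math.comb needs Python >= 3.8):

```python
from math import comb

# Sanity check: the Binomial(n, p) pmf sums to 1 over k = 0..n.
n, p = 10, 0.12345
total = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
print(abs(total - 1.0) < 1e-12)  # True
```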
## Requirements
Let's import the modules required for this notebook.
End of explanation
"""
import random
def uniform_01() -> float:
return random.random()
[ uniform_01() for _ in range(5) ]
"""
Explanation: A naive generator
Using the pseudo-random generator of (float) random numbers in $[0,1]$ from the random or numpy.random module, we can easily generate a sample from a Bernoulli distribution.
End of explanation
"""
def bernoulli(p: float) -> int:
return 1 if uniform_01() <= p else 0
[ bernoulli(0) for _ in range(5) ]
[ bernoulli(0.12345) for _ in range(5) ]
[ bernoulli(1) for _ in range(5) ]
"""
Explanation: It's very quick now:
End of explanation
"""
def naive_binomial(n: int, p: float) -> int:
result = 0
for k in range(n): # sum of n iid samples from Bernoulli(p)
result += bernoulli(p)
return result
"""
Explanation: So we can naively generate samples from a Binomial distribution by summing iid samples generated using this bernoulli function.
End of explanation
"""
[ naive_binomial(10, 0.1) for _ in range(5) ]
[ naive_binomial(10, 0.5) for _ in range(5) ]
[ naive_binomial(10, 0.9) for _ in range(5) ]
"""
Explanation: For example :
End of explanation
"""
m = 1000
n = 10
p = 0.12345
X = [ naive_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.5
X = [ naive_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.98765
X = [ naive_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
"""
Explanation: We can quickly illustrate the generated distribution, to check it has the correct "shape":
End of explanation
"""
def numpy_binomial(n: int, p: float) -> int:
return np.random.binomial(n, p)
"""
Explanation: The generator included in numpy.random
End of explanation
"""
[ numpy_binomial(10, 0.1) for _ in range(5) ]
[ numpy_binomial(10, 0.5) for _ in range(5) ]
[ numpy_binomial(10, 0.9) for _ in range(5) ]
"""
Explanation: Let's try this out:
End of explanation
"""
m = 1000
n = 10
p = 0.12345
X = [ numpy_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.5
X = [ naive_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.98765
X = [ naive_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
"""
Explanation: Let's plot this out also.
End of explanation
"""
def binomial_coefficient(n: int, k: int) -> int:
"""From https://en.wikipedia.org/wiki/Binomial_coefficient#Binomial_coefficient_in_programming_languages"""
if k < 0 or k > n:
return 0
if k == 0 or k == n:
return 1
k = min(k, n - k) # take advantage of symmetry
c = 1
for i in range(k):
        c = (c * (n - i)) // (i + 1)  # exact integer division at every step
return c
def proba_binomial(n: int, p: float, k: int) -> float:
"""Compute {n \choose k} p^k (1-p)^(n-k)"""
q = 1.0 - p
return binomial_coefficient(n, k) * p**k * q**(n-k)
"""
Explanation: An efficient generator using the inverse transform method
We start by computing the binomial coefficients and then the probabilities $\mathbb{P}(X=k)$ for $k\in\{0,\dots,n\}$, if $X\sim Bin(n, p)$, and
then use this to write a generator of Binomial-distributed random values.
This function is then simplified to inline all computations.
We propose a fast and simple Cython implementation, to be as efficient as possible, and hopefully comparably efficient when compared against the implementation in Numpy.
Explicit computation of the probabilities
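As a quick sanity check, these probabilities must sum to 1 over $k = 0, \dots, n$. A minimal verification, using the standard library's math.comb for the coefficient:

```python
import math

def proba(n: int, p: float, k: int) -> float:
    # P(X = k) for X ~ Bin(n, p), with an exact integer coefficient.
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.12345
total = sum(proba(n, p, k) for k in range(n + 1))
print(abs(total - 1.0) < 1e-12)   # True
```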
End of explanation
"""
# a generic function
from typing import Callable
def inversion_method(compute_proba: Callable[[int], float], xmax: int, xmin: int = 0) -> int:
probas = [ compute_proba(x) for x in range(xmin, xmax + 1) ]
result = xmin
current_proba = 0
one_uniform_sample = uniform_01()
while current_proba <= one_uniform_sample:
        current_proba += probas[result - xmin]  # index probas relative to xmin
result += 1
return result - 1
def first_inversion_binomial(n: int, p: float) -> int:
def compute_proba(x):
return proba_binomial(n, p, x)
xmax = n
xmin = 0
return inversion_method(compute_proba, xmax, xmin=xmin)
"""
Explanation: First function using the inversion method
This first function is a generic implementation of the discrete inverse transform method.
For more details, see the Wikipedia page.
Inverse transformation sampling takes uniform samples of a number $u$ between $0$ and $1$, interpreted as a probability, and then returns the largest number $x$ from the domain of the distribution $\mathbb{P}(X)$ such that $\mathbb{P}(-\infty <X<x)\leq u$.
End of explanation
"""
[ first_inversion_binomial(10, 0.1) for _ in range(5) ]
[ first_inversion_binomial(10, 0.5) for _ in range(5) ]
[ first_inversion_binomial(10, 0.9) for _ in range(5) ]
"""
Explanation: Let's try it out.
End of explanation
"""
def inversion_binomial(n: int, p: float) -> int:
if p <= 1e-10:
return 0
if p >= 1 - 1e-10:
return n
    if p > 0.5: # speed up by computing for q and then subtracting
return n - inversion_binomial(n, 1.0 - p)
result = 0
q = 1.0 - p
current_proba = q**n
cum_proba = current_proba
one_uniform_sample = uniform_01()
while cum_proba <= one_uniform_sample:
current_proba *= (p * (n - result)) / (q * (result + 1))
cum_proba += current_proba
result += 1
return result
"""
Explanation: It seems to work as expected!
Simplified code of the inversion method
The previous function has a few weaknesses: it stores all $n+1$ values of $\mathbb{P}(X=k)$ beforehand, and it computes all of them even though the loop of the inversion method usually stops before the end (on average, it takes $np$ steps, which can be much smaller than $n$ for small $p$).
Furthermore, both the binomial coefficients and the values $p^k (1-p)^{n-k}$ are computed with powers rather than iterative multiplications, leading to more rounding errors.
We can solve all these issues by inlining all the computations.
End of explanation
"""
[ inversion_binomial(10, 0.1) for _ in range(5) ]
[ inversion_binomial(10, 0.5) for _ in range(5) ]
[ inversion_binomial(10, 0.9) for _ in range(5) ]
"""
Explanation: Let's try it out.
End of explanation
"""
m = 1000
n = 10
p = 0.12345
X = [ inversion_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.5
X = [ inversion_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.98765
X = [ inversion_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
"""
Explanation: It seems to work as expected!
And now the storage is indeed $O(1)$, and the computation time is $O(x)$ if the return value is $x$, so the mean computation time is $O(np)$.
Note that if $p=1/2$, then $O(np) = O(n/2) = O(n)$, and thus this improved method using the inversion method is (asymptotically) as costly as the naive method (the first method which consists of summing $n$ iid samples from a Bernoulli of mean $p$).
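The $O(np)$ average cost is easy to check empirically: the while loop body runs exactly $x$ times when the sampler returns $x$, so the average draw equals the average work per call. A quick sketch (the sampler is repeated here so the cell is self-contained):

```python
import random

def inversion_binomial_demo(n: int, p: float) -> int:
    # Same sampler as above; the loop body runs exactly k times for a draw of k.
    q = 1.0 - p
    current_proba = q**n
    cum_proba = current_proba
    u = random.random()
    k = 0
    while cum_proba <= u:
        current_proba *= (p * (n - k)) / (q * (k + 1))
        cum_proba += current_proba
        k += 1
    return k

random.seed(1)
n, p = 1000, 0.03
mean_draw = sum(inversion_binomial_demo(n, p) for _ in range(3000)) / 3000
print(mean_draw)   # close to n * p = 30 loop iterations per call, on average
```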
Let's plot this out also.
End of explanation
"""
%load_ext cython
%%cython --annotate
import random
def cython_inversion_binomial(int n, double p) -> int:
if p <= 1e-9:
return 0
if p >= 1 - 1e-9:
return n
    if p > 0.5: # speed up by computing for q and then subtracting
return n - cython_inversion_binomial(n, 1.0 - p)
cdef int result = 0
cdef double q = 1.0 - p
cdef double current_proba = q**n
cdef double cum_proba = current_proba
cdef double one_uniform_sample = random.random()
while cum_proba < one_uniform_sample:
current_proba *= (p * (n - result)) / (q * (result + 1))
cum_proba += current_proba
result += 1
return result
"""
Explanation: In Cython
End of explanation
"""
[ cython_inversion_binomial(10, 0.1) for _ in range(5) ]
[ cython_inversion_binomial(10, 0.5) for _ in range(5) ]
[ cython_inversion_binomial(10, 0.9) for _ in range(5) ]
"""
Explanation: Let's try it out.
End of explanation
"""
m = 1000
n = 10
p = 0.12345
X = [ cython_inversion_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.5
X = [ cython_inversion_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
m = 1000
n = 10
p = 0.98765
X = [ cython_inversion_binomial(n, p) for _ in range(m) ]
plt.figure()
plt.hist(X)
plt.title(f"{m} samples from a Binomial distribution with n = {n} and p = {p}.")
plt.show()
"""
Explanation: It seems to work as expected!
Let's plot this out also.
End of explanation
"""
n = 100
naive_binomial
first_inversion_binomial
inversion_binomial
cython_inversion_binomial
numpy_binomial
"""
Explanation: Numerical experiments to check time cost of the different versions
End of explanation
"""
%timeit naive_binomial(n, 0.123456)
%timeit first_inversion_binomial(n, 0.123456)
%timeit inversion_binomial(n, 0.123456)
%timeit cython_inversion_binomial(n, 0.123456)
%timeit numpy_binomial(n, 0.123456)
"""
Explanation: We can use the %timeit magic to check the (mean) computation time of all the previously mentioned functions:
End of explanation
"""
%timeit naive_binomial(n, 0.5)
%timeit first_inversion_binomial(n, 0.5)
%timeit inversion_binomial(n, 0.5)
%timeit cython_inversion_binomial(n, 0.5)
%timeit numpy_binomial(n, 0.5)
%timeit naive_binomial(n, 0.987654)
%timeit first_inversion_binomial(n, 0.987654)
%timeit inversion_binomial(n, 0.987654)
%timeit cython_inversion_binomial(n, 0.987654)
%timeit numpy_binomial(n, 0.987654)
"""
Explanation: Apparently, our Cython method is faster than the function from numpy!
We also confirm that our first naive implementation of the inversion method was suboptimal, as announced, because of its precomputation of all the values of $\mathbb{P}(X=k)$.
However, we see that the naive method, using the sum of $n$ Bernoulli samples, is comparably efficient to the pure-Python inversion-based method (for this small $n=100$).
End of explanation
"""
n = 100
%timeit naive_binomial(n, random.random())
%timeit inversion_binomial(n, random.random())
%timeit cython_inversion_binomial(n, random.random())
%timeit numpy_binomial(n, random.random())
n = 1000
%timeit naive_binomial(n, random.random())
%timeit inversion_binomial(n, random.random())
%timeit cython_inversion_binomial(n, random.random())
%timeit numpy_binomial(n, random.random())
n = 10000
%timeit naive_binomial(n, random.random())
%timeit inversion_binomial(n, random.random())
%timeit cython_inversion_binomial(n, random.random())
%timeit numpy_binomial(n, random.random())
"""
Explanation: It's quite awesome to see that our inversion-based method is more efficient than the numpy function, both in the pure-Python and the Cython versions!
But it's weird, as the numpy function is... based on the inversion method, and itself written in C!
See the source code, numpy/distributions.c line 426 (on the 28th February 2019, commit 7c41164).
But the trick is that the implementation in numpy uses the inversion method (running in $\Omega(np)$) if $pn < 30$, and a method denoted "BTPE" otherwise.
I need to work on this method! The BTPE algorithm is much more complicated, and it is described in the following paper:
Kachitvichyanukul, V.; Schmeiser, B. W. (1988). "Binomial random variate generation". Communications of the ACM. 31 (2): 216–222. doi:10.1145/42372.42381.
See the source code, numpy/distributions.c line 263 (on the 28th February 2019, commit 7c41164).
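Pending an implementation of BTPE, one could mimic numpy's dispatch by routing small-$np$ cases to our inversion sampler and deferring to np.random.binomial otherwise. A sketch (the threshold of 30 follows the numpy source cited above; hybrid_binomial and inversion_small are illustrative names):

```python
import random
import numpy as np

def inversion_small(n: int, p: float) -> int:
    # Pure-Python inversion sampler, as developed above.
    if p > 0.5:
        return n - inversion_small(n, 1.0 - p)
    q = 1.0 - p
    current, cum = q**n, q**n
    u = random.random()
    k = 0
    while cum <= u:
        current *= (p * (n - k)) / (q * (k + 1))
        cum += current
        k += 1
    return k

def hybrid_binomial(n: int, p: float) -> int:
    # Inversion is cheap when n * min(p, q) is small; otherwise defer to
    # the library sampler (which switches to BTPE internally).
    if n * min(p, 1.0 - p) < 30:
        return inversion_small(n, p)
    return int(np.random.binomial(n, p))
```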
Checking that sampling from $Bin(n,p)$ requires a time $\Omega(n)$.
End of explanation
"""
|
liganega/Gongsu-DataSci | notebooks/GongSu18-Pandas-tutorial-04.ipynb | gpl-3.0 | # 라이브러리 임포트하기
import pandas as pd
# Create a dataset
d = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
# Create a DataFrame
df = pd.DataFrame(d)
df
# Rename the column
df.columns = ['Rev']
df
# Add a new column
df['NewCol'] = 5
df
# Modify the newly added column
df['NewCol'] = df['NewCol'] + 1
df
# Delete a column
del df['NewCol']
df
# Add two columns
df['test'] = 3
df['col'] = df['Rev']
df
# Rename the index
i = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
df.index = i
df
"""
Explanation: This notebook covers "Lessons for new pandas users 04" from the Pandas tutorial.
This chapter covers several ways to add and delete columns and to slice data.
End of explanation
"""
df.loc['a']
# df.loc[inclusive:inclusive]
df.loc['a':'d']
# df.iloc[inclusive:exclusive]
df.iloc[0:3]
"""
Explanation: You can select a subset of the DataFrame by using loc.
End of explanation
"""
df['Rev']
df[['Rev', 'test']]
# df.ix[row, columns]
# Replacing the deprecated ix accessor
# df.ix[0:3, 'Rev']
df.loc[df.index[0:3], 'Rev']
# Replacing the deprecated ix accessor
# df.ix[5:, 'col']
df.loc[df.index[5:], 'col']
# Replacing the deprecated ix accessor
# df.ix[:3, ['col', 'test']]
df.loc[df.index[:3], ['col', 'test']]
"""
Explanation: Unlike loc, iloc accepts only integer (positional) indices.
You can also select columns by using their names.
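A toy frame makes the inclusive/exclusive difference concrete (this small df is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'Rev': [0, 1, 2, 3, 4], 'test': [3] * 5},
                  index=list('abcde'))

# loc slices by label and includes BOTH endpoints.
print(df.loc['a':'c'].index.tolist())    # ['a', 'b', 'c']

# iloc slices by position and EXCLUDES the stop position.
print(df.iloc[0:2].index.tolist())       # ['a', 'b']

# Both accept a column selector as a second argument.
print(df.loc['a':'b', 'Rev'].tolist())   # [0, 1]
print(df.iloc[0:2, 0].tolist())          # [0, 1]
```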
End of explanation
"""
df.head()
df.tail()
"""
Explanation: To check the observations at the beginning or the end of the DataFrame, you can do the following.
End of explanation
"""
|
kaushik94/sympy | examples/notebooks/Bezout_Dixon_resultant.ipynb | bsd-3-clause | b_3, b_2, b_1, b_0 = sym.symbols("b_3, b_2, b_1, b_0")
x = sym.symbols('x')
b = sym.IndexedBase("b")
p = b_2 * x ** 2 + b_1 * x + b_0
q = sym.diff(p, x)
subresultants_qq_zz.bezout(p, q, x)
"""
Explanation: The Bezout matrix is a special square matrix associated with two polynomials, introduced by Sylvester (1853) and Cayley (1857) and named after Étienne Bézout. Bézoutian may also refer to the determinant of this matrix, which is equal to the resultant of the two polynomials.
The entries of the Bezout matrix are bilinear functions of the coefficients of the given polynomials. The Bezout formulation has gone through several generalizations, the most common being Cayley's. Cayley's matrix is given by,
$$ \left|
\begin{array}{cc}
p(x) & q(x)\
p(a)& q(a)
\end{array}
\right| = \Delta(x, a)$$
where $\Delta(x, a)$ is the determinant.
We have the polynomial:
$$ \delta(x, a) = \frac{\Delta(x,a)}{x-a}$$
The matrix is then constructed from the coefficients of the polynomial $\delta$ with respect to $\alpha$. Each coefficient is viewed as a polynomial in $x_1,\dots, x_n$.
The Bezout matrix is highly related to the Sylvester matrix and the greatest common divisor of polynomials. Unlike in Sylvester's formulation, where the resultant of $p$ and $q$ is the determinant of an $(m + n) \times (m + n)$ matrix, in the Cayley formulation, the resultant is obtained
as the determinant of an $n \times n$ matrix.
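The construction itself can be reproduced directly with sympy — a minimal sketch, using the same degree-2 pair as the common-root example below (all names are illustrative):

```python
import sympy as sym

x, a = sym.symbols('x a')
p = x**2 - 5*x + 6   # roots 2 and 3
q = x**2 - 3*x + 2   # roots 1 and 2  ->  common root x = 2

# Cayley's 2x2 determinant; (x - a) always divides it.
Delta = sym.Matrix([[p, q],
                    [p.subs(x, a), q.subs(x, a)]]).det()
delta = sym.cancel(Delta / (x - a))
print(sym.factor(delta))   # 2*(a - 2)*(x - 2)

# The Bezout matrix collects the coefficients of x**i * a**j in delta;
# its determinant vanishes exactly because p and q share the root 2.
d = sym.expand(delta)
B = sym.Matrix([[d.coeff(x, i).coeff(a, j) for j in range(2)]
                for i in range(2)])
print(B.det())   # 0
```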
Example: Generic example
End of explanation
"""
# example one
p = x ** 3 +1
q = x + 1
subresultants_qq_zz.bezout(p, q, x)
subresultants_qq_zz.bezout(p, q, x).det()
# example two
p = x ** 2 - 5 * x + 6
q = x ** 2 - 3 * x + 2
subresultants_qq_zz.bezout(p, q, x)
subresultants_qq_zz.bezout(p, q, x).det()
"""
Explanation: Example: Existence of common roots
Note that if the system has a common root, we expect the resultant/determinant to equal zero.
A common root exists.
End of explanation
"""
z = x ** 2 - 7 * x + 12
h = x ** 2 - x
subresultants_qq_zz.bezout(z, h, x).det()
"""
Explanation: A common root does not exist.
End of explanation
"""
from sympy.polys.multivariate_resultants import DixonResultant
"""
Explanation: Dixon's Resultant
Dixon (1908) showed how to extend this formulation to $m = 3$ polynomials in $n = 2$ variables.
The construction is similar, but this time,
$$ \left|
\begin{array}{cc}
p(x, y) & q(x, y) & h(x, y) \cr
p(a, y) & q(a, y) & h(a, y) \cr
p(a, b) & q(a, b) & h(a, b) \cr
\end{array}
\right| = \Delta(x, y, \alpha, \beta)$$
where $\Delta(x, y, \alpha, \beta)$ is the determinant.
Thus, we have the polynomial:
$$ \delta(x,y, \alpha, \beta) = \frac{\Delta(x, y, \alpha, \beta)}{(x-\alpha)(y - \beta)}$$
End of explanation
"""
a_1, a_2, b_1, b_2, u_1, u_2, u_3 = sym.symbols('a_1, a_2, b_1, b_2, u_1, u_2, u_3')
y = sym.symbols('y')
p = a_1 * x ** 2 * y ** 2 + a_2 * x ** 2
q = b_1 * x ** 2 * y ** 2 + b_2 * y ** 2
h = u_1 * x + u_2 * y + u_3
dixon = DixonResultant(variables=[x, y], polynomials=[p, q, h])
poly = dixon.get_dixon_polynomial()
poly
matrix = dixon.get_dixon_matrix(poly)
matrix
matrix.det().factor()
"""
Explanation: Example: Generic example of Dixon $(n=2, m=3)$
End of explanation
"""
p = x + y
q = x ** 2 + y ** 3
h = x ** 2 + y
dixon = DixonResultant([p, q, h], (x, y))
poly = dixon.get_dixon_polynomial()
poly.simplify()
matrix = dixon.get_dixon_matrix(polynomial=poly)
matrix
matrix.det()
"""
Explanation: Dixon's General Case
Yang et al. generalized the Dixon resultant method from three polynomials in two variables to a system of $n+1$ polynomials in $n$ variables.
Example: Numerical example
End of explanation
"""
a, b, c = sym.symbols('a, b, c')
p_1 = a * x ** 2 + b * x * y + (b + c - a) * x + a * y + 3 * (c - 1)
p_2 = 2 * a ** 2 * x ** 2 + 2 * a * b * x * y + a * b * y + b ** 3
p_3 = 4 * (a - b) * x + c * (a + b) * y + 4 * a * b
polynomials = [p_1, p_2, p_3]
dixon = DixonResultant(polynomials, [x, y])
poly = dixon.get_dixon_polynomial()
size = len(poly.monoms())
size
matrix = dixon.get_dixon_matrix(poly)
matrix
"""
Explanation: Example: Generic example
End of explanation
"""
z = sym.symbols('z')
f = x ** 2 + y ** 2 - 1 + z * 0
g = x ** 2 + z ** 2 - 1 + y * 0
h = y ** 2 + z ** 2 - 1
dixon = DixonResultant([f, g, h], [y, z])
poly = dixon.get_dixon_polynomial()
matrix = dixon.get_dixon_matrix(poly)
matrix
matrix.det()
"""
Explanation: Example:
From Dixon resultant’s solution of systems of geodetic polynomial equations
End of explanation
"""
|
klin90/titanic | Engineer.ipynb | mit | import matplotlib.pyplot as plt
import scipy.stats as st
import seaborn as sns
import pandas as pd
import numpy as np
%matplotlib notebook
train = pd.read_csv('train.csv', index_col='PassengerId')
test = pd.read_csv('test.csv', index_col='PassengerId')
tr_len = len(train)
df = train.drop('Survived', axis=1).append(test)
"""
Explanation: Titanic Feature Engineering
Table of Contents
Overview
Feature Engineering and Imputation
Title
Family Size
Fares
Ages
Initial Modeling
End of explanation
"""
df['Title'] = df['Name'].str.extract('\,\s(.*?)[.]', expand=False)
df['Title'].replace('Mme', 'Mrs', inplace=True)
df['Title'].replace('Mlle', 'Miss', inplace=True)
df['Title'].replace('Ms', 'Miss', inplace=True)
df['Title'].replace('Lady', 'fNoble', inplace=True)
df['Title'].replace('the Countess', 'fNoble', inplace=True)
df['Title'].replace('Dona', 'fNoble', inplace=True)
df['Title'].replace('Don', 'mNoble', inplace=True)
df['Title'].replace('Sir', 'mNoble', inplace=True)
df['Title'].replace('Jonkheer', 'mNoble', inplace=True)
df['Title'].replace('Col', 'mil', inplace=True)
df['Title'].replace('Capt', 'mil', inplace=True)
df['Title'].replace('Major', 'mil', inplace=True)
"""
Explanation: Title
We'll extract title information from the Name feature, and then merge some of the titles together.
Merge 'Mme' into 'Mrs'
Merge 'Mlle' and 'Ms' into 'Miss'
Merge 'Lady', 'the Countess', and 'Dona' into 'fNoble'
Merge 'Don', 'Sir', and 'Jonkheer' into 'mNoble'
Merge 'Col', 'Capt', and 'Major' into 'mil'
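The chain of replace calls can equivalently be condensed into a single mapping passed to Series.replace — a sketch on a toy Series (the mapping is the merge table just listed):

```python
import pandas as pd

TITLE_MAP = {
    'Mme': 'Mrs', 'Mlle': 'Miss', 'Ms': 'Miss',
    'Lady': 'fNoble', 'the Countess': 'fNoble', 'Dona': 'fNoble',
    'Don': 'mNoble', 'Sir': 'mNoble', 'Jonkheer': 'mNoble',
    'Col': 'mil', 'Capt': 'mil', 'Major': 'mil',
}

titles = pd.Series(['Mr', 'Mme', 'Capt', 'the Countess'])
print(titles.replace(TITLE_MAP).tolist())   # ['Mr', 'Mrs', 'mil', 'fNoble']
```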
End of explanation
"""
df['FamSize'] = df['SibSp'] + df['Parch'] + 1
"""
Explanation: Family Size
We'll create a FamSize feature indicating family size. We'll impute the median fare for lone travelers, for the lone missing value.
End of explanation
"""
df['TicketSize'] = df['Ticket'].value_counts()[df['Ticket']].values
df['AdjFare'] = df['Fare'].div(df['TicketSize'])
df['AdjFare'] = df.groupby('Pclass')['AdjFare'].apply(lambda x: x.fillna(x.median()))
"""
Explanation: Fares
We'll create a TicketSize feature, and divide Fare by it to adjust our Fare values. We then impute the lone missing value with its median by Pclass.
End of explanation
"""
df['FilledAge'] = df.groupby(['Sex', 'Title'])['Age'].apply(lambda x: x.fillna(x.median()))
"""
Explanation: Ages
We'll impute missing values with medians by Title and Sex.
End of explanation
"""
df['Embarked'].fillna('S', inplace=True)
"""
Explanation: Embarked
From our strategy using ticket numbers, we will fill both missing values with 'S' - Southampton.
End of explanation
"""
df['CabinKnown'] = df['Cabin'].notnull().astype(int)
"""
Explanation: Cabins
For now, we create an indicator variable for whether the cabin is known.
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
pdf = df.copy()
le = LabelEncoder()
pdf['Sex'] = le.fit_transform(pdf['Sex'])
pdf['Embarked'] = le.fit_transform(pdf['Embarked'])
pdf['Title'] = le.fit_transform(pdf['Title'])
pdf.drop(['CabinKnown', 'Embarked'], axis=1, inplace=True)
p_test = pdf[tr_len:]
p_train = pdf[:tr_len].join(train[['Survived']]).drop(['Name', 'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin'], axis=1)
X_train, X_test, y_train, y_test = train_test_split(p_train.drop('Survived', axis=1), p_train['Survived'], random_state=236)
clf = RandomForestClassifier(n_estimators=1000, max_depth=7, max_features=4)
clf.fit(X_train, y_train)
print('CV Score: {}'.format(clf.score(X_test, y_test)))
pd.Series(clf.feature_importances_, index=X_train.columns)
df.info()
"""
Explanation: Modeling
Let's recombine, drop the unnecessary variables, and try a Random Forest model to gauge feature importance.
End of explanation
"""
|
tritemio/multispot_paper | out_notebooks/usALEX-5samples-PR-leakage-dir-ex-all-ph-out-12d.ipynb | mit | ph_sel_name = "None"
data_id = "12d"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:38:45 2017
Duration: 7 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
"""
Explanation: Load the leakage coefficient from disk:
End of explanation
"""
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
"""
Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk:
End of explanation
"""
d.leakage = leakage
d.dir_ex = dir_ex_aa
"""
Explanation: Update d with the correction coefficients:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the alternation period, and the donor and acceptor excitation windows:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurements duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
"""
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
"""
Explanation: Burst search and selection
End of explanation
"""
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])
plt.xlim(-0.3, 0.5)
print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100))
"""
Explanation: Donor Leakage fit
End of explanation
"""
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
"""
Explanation: Burst sizes
End of explanation
"""
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
"""
Explanation: FRET fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
ds_fret.fit_E_m(weights='size')
"""
Explanation: Weighted mean of $E$ of each burst:
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
"""
Explanation: Gaussian fit (no weights):
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err
"""
Explanation: Gaussian fit (using burst size as weights):
End of explanation
"""
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_kde, S_gauss, S_gauss_sig, S_gauss_err
"""
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
"""
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
"""
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
"""
Explanation: Computing the weighted mean and weighted standard deviation we get:
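The same weighted statistics can be obtained with numpy's built-in np.average — a quick sketch on synthetic data (the Gaussian S values and uniform weights here are illustrative, not the measurement data):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(0.5, 0.1, size=1000)      # stand-in for per-burst S values
w = rng.uniform(1.0, 10.0, size=1000)    # stand-in for burst-size weights

S_wmean = np.average(S, weights=w)
S_wstd = np.sqrt(np.average((S - S_wmean)**2, weights=w))
print(S_wmean, S_wstd)   # close to 0.5 and 0.1
```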
End of explanation
"""
sample = data_id
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err S_kde S_gauss S_gauss_sig S_gauss_err '
'E_pr_do_kde nt_mean\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
|
IBMDecisionOptimization/docplex-examples | examples/cp/jupyter/house_building.ipynb | apache-2.0 | import sys
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
"""
Explanation: House Building with worker skills
This tutorial includes everything you need to set up decision optimization engines, build constraint programming models.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
It requires either a local installation of CPLEX Optimizers, or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pack for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO add-on in Watson Studio Premium for the full edition
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Download the library
Step 2: Set up the engines
Step 3: Model the Data
Step 4: Set up the prescriptive model
Define the decision variables
Express the business constraints
Express the objective
Solve with Decision Optimization solve service
Step 5: Investigate the solution and run an example analysis
Summary
Describe the business problem
This is a problem of building five houses in different locations; the masonry, roofing, painting, etc. must be scheduled. Some tasks must necessarily take place before others and these requirements are expressed through precedence constraints.
There are three workers, and each worker has a given skill level for each task. Each task requires one worker; the worker assigned must have a non-null skill level for the task. A worker can be assigned to only one task at a time.
Each house has a deadline.
The objective is to maximize the skill levels of the workers assigned to the tasks.
How decision optimization can help
Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
For example:
Automate complex decisions and trade-offs to better manage limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
<h2>About Detailed Scheduling concepts</h2>
<p>
<ul>
<li> Scheduling consists of assigning starting and completion times to a set of activities while satisfying different types of constraints (resource availability, precedence relationships, … ) and optimizing some criteria (minimizing tardiness, …)
<!-- <img src = "./house_building_utils/activity.png" > -->
<img src = "https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/cp/jupyter/house_building_utils/activity.PNG?raw=true " >
<li> Time is considered as a continuous dimension: domain of possible start/completion times for an activity is potentially very large
<li>Besides the start and completion times of activities, other types of decision variables are often involved in real industrial scheduling problems (resource allocation, optional activities, …)
</ul>
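In code, each activity reduces to a start time plus a duration, and a precedence constraint compares one activity's end to another's start. A toy pure-Python validator of these concepts (names are illustrative; the docplex.cp model built in this notebook expresses the same ideas with interval variables):

```python
from collections import namedtuple

Interval = namedtuple('Interval', ['start', 'size'])

def end(iv: Interval) -> int:
    return iv.start + iv.size

def respects_precedences(schedule, precedences):
    # precedences: iterable of (before, after) task-name pairs
    return all(end(schedule[before]) <= schedule[after].start
               for before, after in precedences)

sched = {'masonry': Interval(0, 35), 'carpentry': Interval(35, 15)}
print(respects_precedences(sched, [('masonry', 'carpentry')]))   # True
```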
## Use decision optimization
### Step 1: Download the library
Run the following code to install Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
End of explanation
"""
try:
    import matplotlib
    # Note: comparing version strings lexicographically is fragile
    # (e.g. "10.0" < "9.0" is True); it is kept here as a quick check only.
    if matplotlib.__version__ < "1.4.3":
        !pip install --upgrade matplotlib
except ImportError:
    !pip install --user matplotlib
"""
Explanation: Note that the broader package <i>docplex</i> also contains the subpackage <i>docplex.mp</i>, which is dedicated to Mathematical Programming, another branch of optimization.
Step 2: Set up the prescriptive engine
For display of the solution, ensure last version of matplotlib is available:
End of explanation
"""
from docplex.cp.model import CpoModel
from sys import stdout
from collections import namedtuple
"""
Explanation: Now, we need to import all required modeling functions that are provided by the <i>docplex.cp</i> package:
End of explanation
"""
NB_HOUSES = 5
MAX_AMOUNT_OF_PERIODS = 318
HOUSES = range(1, NB_HOUSES + 1)
"""
Explanation: Step 3: Model the data
The planning data contains the number of houses and the maximum number of periods (<i>days</i>) for our schedule
End of explanation
"""
period_domain = (0, MAX_AMOUNT_OF_PERIODS)
"""
Explanation: All tasks must start and end between 0 and the max amount of periods
End of explanation
"""
Task = (namedtuple("Task", ["name", "duration"]))
TASKS = {Task("masonry", 35),
Task("carpentry", 15),
Task("plumbing", 40),
Task("ceiling", 15),
Task("roofing", 5),
Task("painting", 10),
Task("windows", 5),
Task("facade", 10),
Task("garden", 5),
Task("moving", 5),
}
"""
Explanation: For each task type in the house building project, the following table shows the duration of the task in days along with the tasks that must be finished before the task can start. A worker can only work on one task at a time; each task, once started, may not be interrupted.
<p>
| *Task* | *Duration* | *Preceding tasks* |
|---|---|---|
| masonry | 35 | |
| carpentry | 15 | masonry |
| plumbing | 40 | masonry |
| ceiling | 15 | masonry |
| roofing | 5 | carpentry |
| painting | 10 | ceiling |
| windows | 5 | roofing |
| facade | 10 | roofing, plumbing |
| garden | 5 | roofing, plumbing |
| moving | 5 | windows, facade, garden, painting |
##### Tasks' durations
End of explanation
"""
TaskPrecedence = (namedtuple("TaskPrecedence", ["beforeTask", "afterTask"]))
TASK_PRECEDENCES = {TaskPrecedence("masonry", "carpentry"),
TaskPrecedence("masonry", "plumbing"),
TaskPrecedence("masonry", "ceiling"),
TaskPrecedence("carpentry", "roofing"),
TaskPrecedence("ceiling", "painting"),
TaskPrecedence("roofing", "windows"),
TaskPrecedence("roofing", "facade"),
TaskPrecedence("plumbing", "facade"),
TaskPrecedence("roofing", "garden"),
TaskPrecedence("plumbing", "garden"),
TaskPrecedence("windows", "moving"),
TaskPrecedence("facade", "moving"),
TaskPrecedence("garden", "moving"),
TaskPrecedence("painting", "moving"),
}
"""
Explanation: The tasks precedences
End of explanation
"""
WORKERS = {"Joe", "Jack", "Jim"}
"""
Explanation: There are three workers with varying skill levels in regard to the ten tasks. If a worker has a skill level of zero for a task, he may not be assigned to the task.
<p>
| *Task* | *Joe* | *Jack* | *Jim* |
|---|---|---|---|
|masonry |9 | 5 | 0|
|carpentry |7 | 0 | 5|
|plumbing |0 | 7 | 0|
|ceiling |5 | 8 | 0|
|roofing |6 | 7 | 0|
|painting |0 | 9 | 6|
|windows |8 | 0 | 5|
|façade |5 | 5 | 0|
|garden |5 | 5 | 9|
|moving |6 | 0 | 8|
##### Workers Names
End of explanation
"""
Skill = (namedtuple("Skill", ["worker", "task", "level"]))
SKILLS = {Skill("Joe", "masonry", 9),
Skill("Joe", "carpentry", 7),
Skill("Joe", "ceiling", 5),
Skill("Joe", "roofing", 6),
Skill("Joe", "windows", 8),
Skill("Joe", "facade", 5),
Skill("Joe", "garden", 5),
Skill("Joe", "moving", 6),
Skill("Jack", "masonry", 5),
Skill("Jack", "plumbing", 7),
Skill("Jack", "ceiling", 8),
Skill("Jack", "roofing", 7),
Skill("Jack", "painting", 9),
Skill("Jack", "facade", 5),
Skill("Jack", "garden", 5),
Skill("Jim", "carpentry", 5),
Skill("Jim", "painting", 6),
Skill("Jim", "windows", 5),
Skill("Jim", "garden", 9),
Skill("Jim", "moving", 8)
}
"""
Explanation: Worker names and the level of each of their skills
End of explanation
"""
def find_tasks(name):
return next(t for t in TASKS if t.name == name)
"""
Explanation: Utility functions
find_tasks: returns the task it refers to in the TASKS vector
End of explanation
"""
def find_skills(worker, task):
return next(s for s in SKILLS if (s.worker == worker) and (s.task == task))
"""
Explanation: find_skills: returns the skill it refers to in the SKILLS vector
End of explanation
"""
def find_max_level_skill(task):
st = [s for s in SKILLS if s.task == task]
return next(sk for sk in st if sk.level == max([s.level for s in st]))
"""
Explanation: find_max_level_skill: returns the "skill" tuple where the level is the maximum for a given task
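A quick sanity check of this helper, reproduced here on a two-entry subset of SKILLS so the snippet is self-contained:

```python
from collections import namedtuple

Skill = namedtuple("Skill", ["worker", "task", "level"])
SKILLS = {Skill("Joe", "garden", 5), Skill("Jim", "garden", 9)}

def find_max_level_skill(task):
    # Pick the skill tuple whose level is the maximum among workers of this task.
    st = [s for s in SKILLS if s.task == task]
    return next(sk for sk in st if sk.level == max(s.level for s in st))

best = find_max_level_skill("garden")
print(best.worker, best.level)  # Jim 9
```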
End of explanation
"""
mdl = CpoModel(name="HouseBuilding")
"""
Explanation: Step 4: Set up the prescriptive model
<h3>Create the model container</h3>
<p>
The model is represented by a Python object that is filled with the different model elements (variables, constraints, objective function, etc). The first thing to do is then to create such an object:
End of explanation
"""
tasks = {} # dict of interval variable for each house and task
for house in HOUSES:
for task in TASKS:
tasks[(house, task)] = mdl.interval_var(start=period_domain,
end=period_domain,
size=task.duration,
name="house {} task {}".format(house, task))
"""
Explanation: Define the decision variables
<h5><i><font color=blue>Concept: interval variable</font></i></h5>
<p>
<ul>
<li> What for?<br>
<blockquote> Modeling an interval of time during which a particular property holds <br>
(an activity executes, a resource is idle, a tank must be non-empty, …)</blockquote>
<li> Example:<br>
<blockquote><code><font color=green>interval_var(start=(0,1000), end=(0,1000), size=(10,20))</font></code>
</blockquote>
<!-- <img src = "./house_building_utils/intervalVar.png" > -->
<img src = "https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/cp/jupyter/house_building_utils/intervalVar.PNG?raw=true" >
<li>Properties:
<ul>
<li>The **value** of an interval variable is an integer interval [start,end)
<li>**Domain** of possible values: [0,10), [1,11), [2,12),...[990,1000), [0,11),[1,12),...
<li>Domain of interval variables is represented **compactly** in CP Optimizer (a few bounds: smin, smax, emin, emax, szmin, szmax)
</ul>
</ul>
For each house, an interval variable is created for each task.<br>
This interval must start and end inside period_domain, and its duration is set to the value stated in the TASKS definition.
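The compactness claim above can be made concrete with a rough count (plain Python, just to illustrate scale): an interval variable with start in (0,1000), end in (0,1000) and size in (10,20) stands for thousands of extensional (start, size) values, yet CP Optimizer stores only six bounds.

```python
# Count the (start, size) pairs consistent with start in [0, 1000),
# start + size <= 1000, and size in [10, 20].
count = sum(1 for size in range(10, 21)
            for start in range(0, 1000)
            if start + size <= 1000)
print(count)  # 10846 extensional values, versus 6 stored bounds
```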
End of explanation
"""
wtasks = {} # dict of interval variable for each house and skill
for house in HOUSES:
for skill in SKILLS:
iv = mdl.interval_var(name='H' + str(house) + '-' + skill.task + '(' + skill.worker + ')')
iv.set_optional()
wtasks[(house, skill)] = iv
"""
Explanation: <h5><i><font color=blue>Concept: optional interval variable</font></i></h5>
<p>
<ul>
<li>Interval variables can be defined as being **optional** that is, it is part of the decisions of the problem to decide whether the interval will be **present** or **absent** in the solution<br><br>
<li> What for?<br>
<blockquote> Modeling optional activities, alternative execution modes for activities, and … most of the discrete decisions in a schedule</blockquote>
<li> Example:<br>
<blockquote><code><font color=green>interval_var(</font><font color=red>optional=True</font><font color=green>, start=(0,1000), end=(0,1000), size=(10,20))</font></code>
</blockquote>
<li>Properties:
<ul>
<li>An optional interval variable has an additional possible value in its domain (absence value)
<li>**Optionality** is a powerful property that you must learn to leverage in your models
</ul>
</ul>
For each house, an __optional__ interval variable is created for each skill.<br>
Since a skill is a tuple (worker, task, level), this means that for each house, an __optional__ interval variable is created for each worker-task pair whose skill level is > 0.<p>
The "**set_optional()**" specifier makes the presence of each interval a decision in itself, allowing a choice between the different house-skill pairs.
In other words, the engine decides whether each interval will be present or absent in the solution.
End of explanation
"""
for h in HOUSES:
for p in TASK_PRECEDENCES:
mdl.add(mdl.end_before_start(tasks[(h, find_tasks(p.beforeTask))], tasks[(h, find_tasks(p.afterTask))]))
"""
Explanation: Express the business constraints
<h5>Temporal constraints</h5>
<h5><i><font color=blue>Concept: precedence constraint</font></i></h5>
<p>
<ul>
<li> What for?<br>
<ul>
<li>Modeling temporal constraints between interval variables
<li>Modeling constant or variable minimal delays
</ul>
<li>Properties
<blockquote>Semantic of the constraints handles optionality (as for all constraints in CP Optimizer).<br>
Example of endBeforeStart:<br>
<code><font color=green>end_before_start(a,b,z)</font></code><br>
present(a) <font color=red>AND</font> present(b) ⇒ end(a)+z ⩽ start(b)
</blockquote>
</ul>
The precedence constraints between tasks are added to the model.
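The semantics quoted above can be paraphrased in plain Python (illustrative only — this is not how CP Optimizer evaluates constraints, and the dict encoding of an interval is a hypothetical stand-in):

```python
def end_before_start_holds(a, b, z=0):
    """end_before_start(a, b, z): only bites when both intervals are present."""
    if not (a["present"] and b["present"]):
        return True  # an absent interval satisfies the constraint vacuously
    return a["end"] + z <= b["start"]

masonry = {"present": True, "start": 0, "end": 35}
carpentry = {"present": True, "start": 35, "end": 50}
print(end_before_start_holds(masonry, carpentry))  # True
print(end_before_start_holds(carpentry, masonry))  # False
```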
End of explanation
"""
for h in HOUSES:
for t in TASKS:
mdl.add(mdl.alternative(tasks[(h, t)], [wtasks[(h, s)] for s in SKILLS if (s.task == t.name)], 1))
"""
Explanation: <h5>Alternative workers</h5>
<h5><i><font color=blue>Concept: alternative constraint</font></i></h5>
<p>
<ul>
<li> What for?<br>
<ul>
<li>Modeling alternative resource/modes/recipes
<li>In general modeling a discrete selection in the schedule
</ul>
<li> Example:<br>
<blockquote><code><font color=green>alternative(a,[b1,...,bn])</font></code>
</blockquote>
<!-- <img src = "./house_building_utils/alternative.png" > -->
<img src = "https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/cp/jupyter/house_building_utils/alternative.PNG?raw=true" >
<li>Remark: Master interval variable **a** can of course be optional
</ul>
To constrain the solution so that exactly one of the interval variables wtasks associated with a given task of a given house is to be present in the solution, an "**alternative**" constraint is used.
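The alternative constraint can likewise be paraphrased in plain Python (illustrative semantics only): when the master interval is present, exactly one alternative is present; when it is absent, none is.

```python
def alternative_holds(master_present, alt_presents):
    """alternative(a, [b1..bn], 1): exactly one bi is present iff a is present."""
    return sum(alt_presents) == (1 if master_present else 0)

print(alternative_holds(True, [False, True, False]))    # True: one worker chosen
print(alternative_holds(True, [True, True, False]))     # False: two workers chosen
print(alternative_holds(False, [False, False, False]))  # True: nothing selected
```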
End of explanation
"""
for w in WORKERS:
mdl.add(mdl.no_overlap([wtasks[(h, s)] for h in HOUSES for s in SKILLS if s.worker == w]))
"""
Explanation: <h5>No overlap constraint</h5>
<h5><i><font color=blue>Concept: No-overlap constraint</font></i></h5>
<p>
<ul>
<li> Constraint noOverlap schedules a group of interval variables in such a way that they do not overlap in time.
<li> Absent interval variables are ignored.
<li>It is possible to constrain minimum delays between intervals using a transition matrix.
<li>It is possible to constrain which interval comes first or last in the sequence, and which interval directly precedes or follows another
</ul>
<!-- <img src = "./house_building_utils/noOverlap.png" > -->
<img src = "https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/cp/jupyter/house_building_utils/noOverlap.PNG?raw=true" >
To add the constraints that a given worker can be assigned only one task at a given moment in time, a **noOverlap** constraint is used.
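In the same spirit, the no-overlap condition on the present intervals of one worker reduces to pairwise disjointness (a plain-Python paraphrase, not the engine's propagation):

```python
def no_overlap_holds(intervals):
    """intervals: (start, end) pairs of present intervals; pairwise disjoint?"""
    ordered = sorted(intervals)
    return all(ordered[i][1] <= ordered[i + 1][0]
               for i in range(len(ordered) - 1))

print(no_overlap_holds([(0, 35), (35, 50)]))  # True: back-to-back is allowed
print(no_overlap_holds([(0, 35), (20, 50)]))  # False: the tasks overlap
```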
End of explanation
"""
obj = mdl.sum([s.level * mdl.presence_of(wtasks[(h, s)]) for s in SKILLS for h in HOUSES])
mdl.add(mdl.maximize(obj))
"""
Explanation: Express the objective
The presence of an interval variable in wtasks in the solution must be accounted for in the objective. Thus for each of these possible tasks, the cost is incremented by the product of the skill level and the expression representing the presence of the interval variable in the solution.<p>
The objective of this problem is to maximize the skill level used for all the tasks.
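Concretely, the objective value is just the sum of skill levels over the worker-task intervals that end up present, as in this tiny numeric sketch (the presence flags are hypothetical):

```python
# (worker, task, level, present-in-solution) -- hypothetical assignments.
assignments = [("Jim", "garden", 9, True),
               ("Joe", "garden", 5, False),
               ("Jack", "painting", 9, True)]
objective = sum(level for _w, _t, level, present in assignments if present)
print(objective)  # 18
```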
End of explanation
"""
# Solve the model
print("\nSolving model....")
msol = mdl.solve(TimeLimit=10)
"""
Explanation: Solve the model
The model is now completely defined. It is time to solve it!
End of explanation
"""
print("Solve status: " + msol.get_solve_status())
if msol.is_solution():
stdout.write("Solve time: " + str(msol.get_solve_time()) + "\n")
# Sort tasks in increasing begin order
ltasks = []
for hs in HOUSES:
for tsk in TASKS:
(beg, end, dur) = msol[tasks[(hs, tsk)]]
ltasks.append((hs, tsk, beg, end, dur))
ltasks = sorted(ltasks, key = lambda x : x[2])
# Print solution
print("\nList of tasks in increasing start order:")
for tsk in ltasks:
print("From " + str(tsk[2]) + " to " + str(tsk[3]) + ", " + tsk[1].name + " in house " + str(tsk[0]))
else:
stdout.write("No solution found\n")
"""
Explanation: Step 5: Investigate the solution and then run an example analysis
End of explanation
"""
POP_UP_GRAPHIC=False
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
if not POP_UP_GRAPHIC:
%matplotlib inline
#Change the plot size
from pylab import rcParams
rcParams['figure.figsize'] = 15, 3
"""
Explanation: Import graphical tools
You can set POP_UP_GRAPHIC=True if you prefer a pop up graphic window instead of an inline one.
End of explanation
"""
def compact_name(name,n): return name[:n]
if msol and visu.is_visu_enabled():
workers_colors = {}
workers_colors["Joe"] = 'lightblue'
workers_colors["Jack"] = 'violet'
workers_colors["Jim"] = 'lightgreen'
visu.timeline('Solution per houses', 0, MAX_AMOUNT_OF_PERIODS)
for h in HOUSES:
visu.sequence(name="house " + str(h))
for s in SKILLS:
wt = msol.get_var_solution(wtasks[(h,s)])
if wt.is_present():
color = workers_colors[s.worker]
wtname = compact_name(s.task,2)
visu.interval(wt, color, wtname)
visu.show()
"""
Explanation: Draw solution
Useful functions
With the aim to facilitate the display of tasks names, we keep only the n first characters.
End of explanation
"""
def compact_house_task(name):
loc, task = name[1:].split('-', 1)
return task[0].upper() + loc
"""
Explanation: The purpose of this function is to compact the house-task names so that the graphical display stays readable.
For example, "H3-garden" becomes "G3"
End of explanation
"""
if msol and visu.is_visu_enabled():
visu.timeline('Solution per workers', 0, MAX_AMOUNT_OF_PERIODS)
for w in WORKERS:
visu.sequence(name=w)
for h in HOUSES:
for s in SKILLS:
if s.worker == w:
wt = msol.get_var_solution(wtasks[(h,s)])
if wt.is_present():
ml = find_max_level_skill(s.task).level
if s.level == ml:
color = 'lightgreen'
else:
color = 'salmon'
wtname = compact_house_task(wt.get_name())
visu.interval(wt, color, wtname)
visu.show()
"""
Explanation: A green-like color is used when a task is assigned to its most skilled worker,
and a red-like color when it is not
End of explanation
"""
!pip install omdb
"""
Explanation: Consuming API
Instead of web scraping, the preferred method is to use an API (application programming interface).
Usually you need to register in order to use an API, but here we will use the freely available Open Movie Database API.
1. Install Python library
There already exists a Python library for the Open Movie Database API, called omdb.
Install it with pip.
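Under the hood, omdb is a thin wrapper around plain HTTP requests that return JSON. The sketch below uses a hard-coded stand-in for a search response (the capitalised field names match OMDb's documented JSON; no live call is made):

```python
# Hard-coded stand-in for the JSON OMDb returns for a search query.
raw = {
    "Search": [{"Title": "Westworld", "Year": "2016",
                "Type": "series", "imdbID": "tt0475784"}],
    "totalResults": "1",
    "Response": "True",
}

def normalize(entry):
    # Lower-case the keys, roughly the way the omdb wrapper does.
    return {"title": entry["Title"], "year": entry["Year"],
            "type": entry["Type"], "imdb_id": entry["imdbID"]}

movies = [normalize(e) for e in raw["Search"]] if raw["Response"] == "True" else []
print(movies[0]["title"], movies[0]["imdb_id"])  # Westworld tt0475784
```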
End of explanation
"""
# Import the library.
import omdb
# Search for movies.
movies = omdb.search("Westworld")
movies
"""
Explanation: 2. Access the API
End of explanation
"""
# Since "movies" is a list, we can loop through it.
for movie in movies:
print("Title: " + movie["title"])
print("Type: " + movie["type"])
print("Year: " + movie["year"])
print("Link: http://imdb.com/title/" + movie["imdb_id"])
print()
# Lets pick the first movie (remember, lists always start at 0).
movie = movies[0]
print(movie["title"])
print(movie["year"])
"""
Explanation: 3. Present results
End of explanation
"""
# Instead of searching (and get a list of movies), we can specify the movie we want (and get a single movie).
movie = omdb.title("Westworld", year="2016")
movie
# Present some info about this movie.
print(movie["title"])
print(movie["year"])
print(movie["country"])
print("Rating: " + movie["imdb_rating"])
print(movie["plot"])
"""
Explanation: 4. Get more info
End of explanation
"""
# Modify this.
movie = omdb.title("Westworld", year="2016")
print(movie["title"])
"""
Explanation: Exercise
Pick your favorite movie or tv-series, and search for it.
We used movie["title"] to show the title of the movie. Show something else! The documentation says you can get this info:
title
year
type
actors
awards
country
director
genre
episode
season
series_id
language
metascore
plot
poster
rated
released
runtime
writer
imdb_id
imdb_rating
imdb_votes
box_office
dvd
production
website
tomato_consensus
tomato_fresh
tomato_image
tomato_meter
tomato_rating
tomato_reviews
tomato_rotten
tomato_user_meter
tomato_user_rating
tomato_user_reviews
End of explanation
"""